linux-kernel.vger.kernel.org archive mirror
* [GIT PULL] scheduler fix
@ 2021-06-24  7:06 Ingo Molnar
  2021-06-24 16:34 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2021-06-24  7:06 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Rik van Riel, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched/urgent git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2021-06-24

   # HEAD: fdaba61ef8a268d4136d0a113d153f7a89eb9984 sched/fair: Ensure that the CFS parent is added after unthrottling

A last-minute cgroup CFS bandwidth scheduling fix for a recently
introduced logic error that triggered a kernel warning in LTP's
cfs_bandwidth01 test.

 Thanks,

	Ingo

------------------>
Rik van Riel (1):
      sched/fair: Ensure that the CFS parent is added after unthrottling


 kernel/sched/fair.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfaa6e1f6067..23663318fb81 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3298,6 +3298,31 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
 
 #ifdef CONFIG_SMP
 #ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * Because list_add_leaf_cfs_rq always places a child cfs_rq on the list
+ * immediately before a parent cfs_rq, and cfs_rqs are removed from the list
+ * bottom-up, we only have to test whether the cfs_rq before us on the list
+ * is our child.
+ * If cfs_rq is not on the list, test whether a child needs to be added to
+ * connect a branch to the tree (see list_add_leaf_cfs_rq() for details).
+ */
+static inline bool child_cfs_rq_on_list(struct cfs_rq *cfs_rq)
+{
+	struct cfs_rq *prev_cfs_rq;
+	struct list_head *prev;
+
+	if (cfs_rq->on_list) {
+		prev = cfs_rq->leaf_cfs_rq_list.prev;
+	} else {
+		struct rq *rq = rq_of(cfs_rq);
+
+		prev = rq->tmp_alone_branch;
+	}
+
+	prev_cfs_rq = container_of(prev, struct cfs_rq, leaf_cfs_rq_list);
+
+	return (prev_cfs_rq->tg->parent == cfs_rq->tg);
+}
 
 static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 {
@@ -3313,6 +3338,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 	if (cfs_rq->avg.runnable_sum)
 		return false;
 
+	if (child_cfs_rq_on_list(cfs_rq))
+		return false;
+
 	return true;
 }
 

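The check above leans on container_of(): given a pointer to the embedded
leaf_cfs_rq_list node, it recovers the enclosing cfs_rq by subtracting the
member's offset within the structure. A minimal userspace sketch of that
pattern, with stand-in types rather than the kernel's:

#include <stddef.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel helpers used in the patch. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct list_head { struct list_head *prev, *next; };

struct cfs_rq_demo {			/* stand-in for struct cfs_rq */
	int id;
	struct list_head leaf_cfs_rq_list;
};

int main(void)
{
	struct cfs_rq_demo child = { .id = 1 }, parent = { .id = 2 };

	/* list_add_leaf_cfs_rq() places a child immediately before its
	 * parent on the leaf list; model just that one link here. */
	parent.leaf_cfs_rq_list.prev = &child.leaf_cfs_rq_list;

	/* What child_cfs_rq_on_list() does: follow ->prev, recover the
	 * enclosing structure, then compare parentage. */
	struct cfs_rq_demo *prev_rq =
		container_of(parent.leaf_cfs_rq_list.prev,
			     struct cfs_rq_demo, leaf_cfs_rq_list);

	printf("entry before parent has id %d\n", prev_rq->id); /* 1 */
	return 0;
}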

* Re: [GIT PULL] scheduler fix
  2021-06-24  7:06 [GIT PULL] scheduler fix Ingo Molnar
@ 2021-06-24 16:34 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2021-06-24 16:34 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Rik van Riel, Thomas Gleixner, Andrew Morton

The pull request you sent on Thu, 24 Jun 2021 09:06:47 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2021-06-24

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/666751701b6e4b6b6ebc82186434806fa8a09cf3

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


* Re: [GIT PULL] scheduler fix
  2023-10-01  8:43 Ingo Molnar
@ 2023-10-01 17:08 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2023-10-01 17:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
	Thomas Gleixner, Andrew Morton

The pull request you sent on Sun, 1 Oct 2023 10:43:05 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2023-10-01

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/c5ecffe6d3e438dd7094ac37461e77960269aff0

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


* [GIT PULL] scheduler fix
@ 2023-10-01  8:43 Ingo Molnar
  2023-10-01 17:08 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2023-10-01  8:43 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Thomas Gleixner,
	Andrew Morton


Linus,

Please pull the latest sched/urgent git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2023-10-01

   # HEAD: fc09027786c900368de98d03d40af058bcb01ad9 sched/rt: Fix live lock between select_fallback_rq() and RT push

Fix an RT-task related lockup/live-lock during CPU offlining.

 Thanks,

	Ingo

------------------>
Joel Fernandes (Google) (1):
      sched/rt: Fix live lock between select_fallback_rq() and RT push


 kernel/sched/cpupri.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index a286e726eb4b..42c40cfdf836 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -101,6 +101,7 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
 
 	if (lowest_mask) {
 		cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+		cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
 
 		/*
 		 * We have to ensure that we have at least one bit

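The live-lock happened because __cpupri_find() could keep nominating a CPU
that was on its way offline, so select_fallback_rq() and the RT push logic
bounced the task back and forth indefinitely. The fix is one extra mask
intersection with cpu_active_mask; its effect, sketched with plain bitmasks
instead of the kernel's cpumask API:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t cpus_mask  = 0x0f; /* task may run on CPUs 0-3      */
	uint64_t vec_mask   = 0x0c; /* CPUs 2-3 run lower-prio work  */
	uint64_t cpu_active = 0x07; /* CPU 3 is being taken offline  */

	uint64_t lowest = cpus_mask & vec_mask; /* before fix: {2,3} */
	lowest &= cpu_active;                   /* after fix:  {2}   */

	printf("candidates: 0x%llx\n", (unsigned long long)lowest);
	return 0;
}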

* Re: [GIT PULL] scheduler fix
  2023-09-22 10:26 Ingo Molnar
@ 2023-09-22 20:19 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2023-09-22 20:19 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider

The pull request you sent on Fri, 22 Sep 2023 12:26:19 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2023-09-22

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/5b47b5766b21a59bb247488b374e62c3b72639fb

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


* [GIT PULL] scheduler fix
@ 2023-09-22 10:26 Ingo Molnar
  2023-09-22 20:19 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2023-09-22 10:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
	Valentin Schneider

Linus,

Please pull the latest sched/urgent git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2023-09-22

   # HEAD: cff9b2332ab762b7e0586c793c431a8f2ea4db04 kernel/sched: Modify initial boot task idle setup

Fix a PF_IDLE initialization bug that generated warnings with Tiny RCU.

 Thanks,

	Ingo

------------------>
Liam R. Howlett (1):
      kernel/sched: Modify initial boot task idle setup


 kernel/sched/core.c | 2 +-
 kernel/sched/idle.c | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2299a5cfbfb9..802551e0009b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9269,7 +9269,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
 	 * PF_KTHREAD should already be set at this point; regardless, make it
 	 * look like a proper per-CPU kthread.
 	 */
-	idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
+	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
 	kthread_set_per_cpu(idle, cpu);
 
 #ifdef CONFIG_SMP
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 342f58a329f5..5007b25c5bc6 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -373,6 +373,7 @@ EXPORT_SYMBOL_GPL(play_idle_precise);
 
 void cpu_startup_entry(enum cpuhp_state state)
 {
+	current->flags |= PF_IDLE;
 	arch_cpu_idle_prepare();
 	cpuhp_online_idle(state);
 	while (1)

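The warnings came from code that identifies the idle task purely by the
PF_IDLE flag, so setting it in init_idle() made boot-time tasks look idle
before any idle loop ran. A rough userspace model of why the timing of the
flag matters (PF_IDLE's value taken from <linux/sched.h>; everything else
here is illustrative):

#include <stdbool.h>
#include <stdio.h>

#define PF_IDLE 0x00000002		/* as in include/linux/sched.h */

struct task_demo { unsigned int flags; };

/* Mirrors the kernel's is_idle_task(): a pure flag test, so *when*
 * PF_IDLE gets set decides when RCU starts treating a task as idle. */
static bool is_idle_task(const struct task_demo *p)
{
	return !!(p->flags & PF_IDLE);
}

int main(void)
{
	struct task_demo boot = { .flags = 0 };

	printf("%d\n", is_idle_task(&boot)); /* 0: still booting     */
	boot.flags |= PF_IDLE;               /* cpu_startup_entry()  */
	printf("%d\n", is_idle_task(&boot)); /* 1: really idle now   */
	return 0;
}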

* Re: [GIT PULL] scheduler fix
  2020-12-27  9:16 Ingo Molnar
@ 2020-12-27 17:27 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2020-12-27 17:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Andrew Morton,
	Thomas Gleixner

The pull request you sent on Sun, 27 Dec 2020 10:16:01 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2020-12-27

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/3b80dee70eaa5f9a120db058c30cc8e63c443571

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


* [GIT PULL] scheduler fix
@ 2020-12-27  9:16 Ingo Molnar
  2020-12-27 17:27 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2020-12-27  9:16 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Andrew Morton, Thomas Gleixner

Linus,

Please pull the latest sched/urgent git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2020-12-27

   # HEAD: ae7927023243dcc7389b2d59b16c09cbbeaecc36 sched: Optimize finish_lock_switch()

Fix a context switch performance regression.

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      sched: Optimize finish_lock_switch()


 kernel/sched/core.c  | 40 +++++++++++++++-------------------------
 kernel/sched/sched.h | 13 +++++--------
 2 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7af80c3fce12..0ca7d2dc16d5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3985,15 +3985,20 @@ static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
 	}
 }
 
+static void balance_push(struct rq *rq);
+
+struct callback_head balance_push_callback = {
+	.next = NULL,
+	.func = (void (*)(struct callback_head *))balance_push,
+};
+
 static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
 {
 	struct callback_head *head = rq->balance_callback;
 
 	lockdep_assert_held(&rq->lock);
-	if (head) {
+	if (head)
 		rq->balance_callback = NULL;
-		rq->balance_flags &= ~BALANCE_WORK;
-	}
 
 	return head;
 }
@@ -4014,21 +4019,6 @@ static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
 	}
 }
 
-static void balance_push(struct rq *rq);
-
-static inline void balance_switch(struct rq *rq)
-{
-	if (likely(!rq->balance_flags))
-		return;
-
-	if (rq->balance_flags & BALANCE_PUSH) {
-		balance_push(rq);
-		return;
-	}
-
-	__balance_callbacks(rq);
-}
-
 #else
 
 static inline void __balance_callbacks(struct rq *rq)
@@ -4044,10 +4034,6 @@ static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
 {
 }
 
-static inline void balance_switch(struct rq *rq)
-{
-}
-
 #endif
 
 static inline void
@@ -4075,7 +4061,7 @@ static inline void finish_lock_switch(struct rq *rq)
 	 * prev into current:
 	 */
 	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
-	balance_switch(rq);
+	__balance_callbacks(rq);
 	raw_spin_unlock_irq(&rq->lock);
 }
 
@@ -7256,6 +7242,10 @@ static void balance_push(struct rq *rq)
 
 	lockdep_assert_held(&rq->lock);
 	SCHED_WARN_ON(rq->cpu != smp_processor_id());
+	/*
+	 * Ensure the thing is persistent until balance_push_set(.on = false);
+	 */
+	rq->balance_callback = &balance_push_callback;
 
 	/*
 	 * Both the cpu-hotplug and stop task are in this case and are
@@ -7305,9 +7295,9 @@ static void balance_push_set(int cpu, bool on)
 
 	rq_lock_irqsave(rq, &rf);
 	if (on)
-		rq->balance_flags |= BALANCE_PUSH;
+		rq->balance_callback = &balance_push_callback;
 	else
-		rq->balance_flags &= ~BALANCE_PUSH;
+		rq->balance_callback = NULL;
 	rq_unlock_irqrestore(rq, &rf);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f5acb6c5ce49..12ada79d40f3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -975,7 +975,6 @@ struct rq {
 	unsigned long		cpu_capacity_orig;
 
 	struct callback_head	*balance_callback;
-	unsigned char		balance_flags;
 
 	unsigned char		nohz_idle_balance;
 	unsigned char		idle_balance;
@@ -1226,6 +1225,8 @@ struct rq_flags {
 #endif
 };
 
+extern struct callback_head balance_push_callback;
+
 /*
  * Lockdep annotation that avoids accidental unlocks; it's like a
  * sticky/continuous lockdep_assert_held().
@@ -1243,9 +1244,9 @@ static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
 #ifdef CONFIG_SCHED_DEBUG
 	rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
 	rf->clock_update_flags = 0;
-#endif
 #ifdef CONFIG_SMP
-	SCHED_WARN_ON(rq->balance_callback);
+	SCHED_WARN_ON(rq->balance_callback && rq->balance_callback != &balance_push_callback);
+#endif
 #endif
 }
 
@@ -1408,9 +1409,6 @@ init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
 
 #ifdef CONFIG_SMP
 
-#define BALANCE_WORK	0x01
-#define BALANCE_PUSH	0x02
-
 static inline void
 queue_balance_callback(struct rq *rq,
 		       struct callback_head *head,
@@ -1418,13 +1416,12 @@ queue_balance_callback(struct rq *rq,
 {
 	lockdep_assert_held(&rq->lock);
 
-	if (unlikely(head->next || (rq->balance_flags & BALANCE_PUSH)))
+	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
 		return;
 
 	head->func = (void (*)(struct callback_head *))func;
 	head->next = rq->balance_callback;
 	rq->balance_callback = head;
-	rq->balance_flags |= BALANCE_WORK;
 }
 
 #define rcu_dereference_check_sched_domain(p) \

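The restructuring replaces the rq->balance_flags byte with a sentinel: the
address of balance_push_callback itself encodes the BALANCE_PUSH state, so
installing, testing, and clearing it are single pointer operations. A
compact, single-threaded userspace sketch of the sentinel-callback idea:

#include <stdio.h>

struct callback_head {
	struct callback_head *next;
	void (*func)(struct callback_head *);
};

static void balance_push(struct callback_head *h) { (void)h; }

/* The sentinel: its address *is* the old BALANCE_PUSH flag. */
static struct callback_head balance_push_callback = {
	.next = NULL,
	.func = balance_push,
};

static struct callback_head *balance_callback; /* per-rq in the kernel */

static void queue_balance_callback(struct callback_head *head,
				   void (*func)(struct callback_head *))
{
	/* Refuse ordinary work while the push sentinel is installed. */
	if (head->next || balance_callback == &balance_push_callback)
		return;
	head->func = func;
	head->next = balance_callback;
	balance_callback = head;
}

int main(void)
{
	struct callback_head work = { 0 };

	balance_callback = &balance_push_callback; /* balance_push_set(on) */
	queue_balance_callback(&work, balance_push);
	printf("queued: %s\n",
	       balance_callback == &work ? "yes" : "no"); /* no */
	return 0;
}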

* Re: [GIT PULL] scheduler fix
  2020-03-02  7:51 Ingo Molnar
@ 2020-03-03 23:35 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2020-03-03 23:35 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Mon, 2 Mar 2020 08:51:36 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/c105df5d865afbc10e9730b7b13abc831d5e9ac7

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2020-03-02  7:51 Ingo Molnar
  2020-03-03 23:35 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2020-03-02  7:51 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 289de35984815576793f579ec27248609e75976e sched/fair: Fix statistics for find_idlest_group()

Fix a scheduler statistics bug.

 Thanks,

	Ingo

------------------>
Vincent Guittot (1):
      sched/fair: Fix statistics for find_idlest_group()


 kernel/sched/fair.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3c8a379c357e..c1217bfe5e81 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8337,6 +8337,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 
 	sgs->group_capacity = group->sgc->capacity;
 
+	sgs->group_weight = group->group_weight;
+
 	sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
 
 	/*


* Re: [GIT PULL] scheduler fix
  2019-12-17 11:54 Ingo Molnar
@ 2019-12-17 19:20 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2019-12-17 19:20 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Tue, 17 Dec 2019 12:54:35 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/4340ebd19ff031fd97a69ea7e7249c898f2d3e06

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2019-12-17 11:54 Ingo Molnar
  2019-12-17 19:20 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2019-12-17 11:54 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 346da4d2c7ea39de65487b249aaa4733317a40ec sched/cputime, proc/stat: Fix incorrect guest nice cpustat value

Fix the guest-nice cpustat values in /proc.

 Thanks,

	Ingo

------------------>
Flavio Leitner (1):
      sched/cputime, proc/stat: Fix incorrect guest nice cpustat value


 fs/proc/stat.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/proc/stat.c b/fs/proc/stat.c
index 37bdbec5b402..fd931d3e77be 100644
--- a/fs/proc/stat.c
+++ b/fs/proc/stat.c
@@ -134,7 +134,7 @@ static int show_stat(struct seq_file *p, void *v)
 		softirq		+= cpustat[CPUTIME_SOFTIRQ];
 		steal		+= cpustat[CPUTIME_STEAL];
 		guest		+= cpustat[CPUTIME_GUEST];
-		guest_nice	+= cpustat[CPUTIME_USER];
+		guest_nice	+= cpustat[CPUTIME_GUEST_NICE];
 		sum		+= kstat_cpu_irqs_sum(i);
 		sum		+= arch_irq_stat_cpu(i);
 
@@ -175,7 +175,7 @@ static int show_stat(struct seq_file *p, void *v)
 		softirq		= cpustat[CPUTIME_SOFTIRQ];
 		steal		= cpustat[CPUTIME_STEAL];
 		guest		= cpustat[CPUTIME_GUEST];
-		guest_nice	= cpustat[CPUTIME_USER];
+		guest_nice	= cpustat[CPUTIME_GUEST_NICE];
 		seq_printf(p, "cpu%d", i);
 		seq_put_decimal_ull(p, " ", nsec_to_clock_t(user));
 		seq_put_decimal_ull(p, " ", nsec_to_clock_t(nice));

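Before the fix, the guest_nice column in /proc/stat silently duplicated the
user column. A quick userspace sketch for reading the corrected fields
(error handling kept minimal):

#include <stdio.h>

int main(void)
{
	/* Fields after "cpu": user nice system idle iowait irq
	 *                     softirq steal guest guest_nice     */
	unsigned long long v[10];
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return 1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &v[0], &v[1], &v[2], &v[3], &v[4],
		   &v[5], &v[6], &v[7], &v[8], &v[9]) == 10)
		printf("user=%llu guest=%llu guest_nice=%llu\n",
		       v[0], v[8], v[9]);
	fclose(f);
	return 0;
}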

* Re: [GIT PULL] scheduler fix
  2019-07-14 10:19 Ingo Molnar
@ 2019-07-14 18:45 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2019-07-14 18:45 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Sun, 14 Jul 2019 12:19:10 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/50ec18819cade37cccc914ffc71a8b0a2783c345

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2019-07-14 10:19 Ingo Molnar
  2019-07-14 18:45 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2019-07-14 10:19 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: e3d85487fba42206024bc3ed32e4b581c7cb46db sched/core: Fix preempt warning in ttwu

Fix a scheduler statistics bug that would trigger a kernel warning on
certain configs.

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      sched/core: Fix preempt warning in ttwu


 kernel/sched/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fa43ce3962e7..2b037f195473 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2399,6 +2399,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	unsigned long flags;
 	int cpu, success = 0;
 
+	preempt_disable();
 	if (p == current) {
 		/*
 		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
@@ -2412,7 +2413,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		 *    it disabling IRQs (this allows not taking ->pi_lock).
 		 */
 		if (!(p->state & state))
-			return false;
+			goto out;
 
 		success = 1;
 		cpu = task_cpu(p);
@@ -2526,6 +2527,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 out:
 	if (success)
 		ttwu_stat(p, cpu, wake_flags);
+	preempt_enable();
 
 	return success;
 }

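The subtlety is the early 'return false' in the p == current path: once the
function takes preempt_disable(), every exit has to pass through the
matching preempt_enable(), which is why the bare return becomes a goto to
the common exit. The single-exit shape, in miniature (illustrative):

#include <stdbool.h>
#include <stdio.h>

static int preempt_count; /* stand-in for the per-CPU preempt counter */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

static bool wake_up_demo(bool state_matches)
{
	bool success = false;

	preempt_disable();
	if (!state_matches)
		goto out; /* a bare 'return false' would leak the count */
	success = true;
out:
	preempt_enable();
	return success;
}

int main(void)
{
	wake_up_demo(false);
	printf("preempt_count: %d (must be 0)\n", preempt_count);
	return 0;
}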

* Re: [GIT PULL] scheduler fix
  2019-05-05 11:02 Ingo Molnar
@ 2019-05-05 22:10 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2019-05-05 22:10 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Sun, 5 May 2019 13:02:37 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/70c9fb570b7c1c3edb03cbe745cf81ceeef5d484

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2019-05-05 11:02 Ingo Molnar
  2019-05-05 22:10 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2019-05-05 11:02 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 9a4f26cc98d81b67ecc23b890c28e2df324e29f3 sched/cpufreq: Fix kobject memleak

Fix a kobject memory leak in the cpufreq code.

 Thanks,

	Ingo

------------------>
Tobin C. Harding (1):
      sched/cpufreq: Fix kobject memleak


 kernel/sched/cpufreq_schedutil.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 5c41ea367422..3638d2377e3c 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -771,6 +771,7 @@ static int sugov_init(struct cpufreq_policy *policy)
 	return 0;
 
 fail:
+	kobject_put(&tunables->attr_set.kobj);
 	policy->governor_data = NULL;
 	sugov_tunables_free(tunables);
 


* Re: [GIT PULL] scheduler fix
  2019-04-27 14:39 Ingo Molnar
@ 2019-04-27 18:45 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2019-04-27 18:45 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Sat, 27 Apr 2019 16:39:26 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/15d4e26b816a39f2d1ba40bacb8e8ecf8884477c

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2019-04-27 14:39 Ingo Molnar
  2019-04-27 18:45 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2019-04-27 14:39 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: a860fa7b96e1a1c974556327aa1aee852d434c21 sched/numa: Fix a possible divide-by-zero

Fix a division by zero bug that can trigger in the NUMA placement code.

 Thanks,

	Ingo

------------------>
Xie XiuQi (1):
      sched/numa: Fix a possible divide-by-zero


 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4d9e14bf138..35f3ea375084 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2007,6 +2007,10 @@ static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
 	if (p->last_task_numa_placement) {
 		delta = runtime - p->last_sum_exec_runtime;
 		*period = now - p->last_task_numa_placement;
+
+		/* Avoid time going backwards, prevent potential divide error: */
+		if (unlikely((s64)*period < 0))
+			*period = 0;
 	} else {
 		delta = p->se.avg.load_sum;
 		*period = LOAD_AVG_MAX;

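The root cause is unsigned wraparound: now and p->last_task_numa_placement
are u64 timestamps, and if the clock appears to go backwards their
difference wraps to an enormous value instead of a negative one, which is
what the signed cast and clamp in the patch guard against. A two-line
demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t now = 100, last = 150; /* time "went backwards" */
	uint64_t period = now - last;   /* wraps to ~1.8e19      */

	printf("raw:     %llu\n", (unsigned long long)period);

	/* The fix: reinterpret as signed, clamp negative spans to 0. */
	if ((int64_t)period < 0)
		period = 0;
	printf("clamped: %llu\n", (unsigned long long)period);
	return 0;
}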

* Re: [GIT PULL] scheduler fix
  2019-04-12 13:08 Ingo Molnar
@ 2019-04-13  4:05 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2019-04-13  4:05 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Fri, 12 Apr 2019 15:08:11 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/5e6f1fee60a3d80582146835ac01d9808748434f

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2019-04-12 13:08 Ingo Molnar
  2019-04-13  4:05 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2019-04-12 13:08 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 0e9f02450da07fc7b1346c8c32c771555173e397 sched/fair: Do not re-read ->h_load_next during hierarchical load calculation

Fix a NULL pointer dereference crash in certain environments.

 Thanks,

	Ingo

------------------>
Mel Gorman (1):
      sched/fair: Do not re-read ->h_load_next during hierarchical load calculation


 kernel/sched/fair.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fdab7eb6f351..40bd1e27b1b7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7784,10 +7784,10 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 	if (cfs_rq->last_h_load_update == now)
 		return;
 
-	cfs_rq->h_load_next = NULL;
+	WRITE_ONCE(cfs_rq->h_load_next, NULL);
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
-		cfs_rq->h_load_next = se;
+		WRITE_ONCE(cfs_rq->h_load_next, se);
 		if (cfs_rq->last_h_load_update == now)
 			break;
 	}
@@ -7797,7 +7797,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 		cfs_rq->last_h_load_update = now;
 	}
 
-	while ((se = cfs_rq->h_load_next) != NULL) {
+	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
 		load = cfs_rq->h_load;
 		load = div64_ul(load * se->avg.load_avg,
 			cfs_rq_load_avg(cfs_rq) + 1);

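READ_ONCE()/WRITE_ONCE() pin each access to exactly one load or store, so
the compiler can no longer re-read ->h_load_next between the loop condition
and the loop body and observe two different values while another CPU
rebuilds the chain. Stripped of the kernel's size handling and checks, the
helpers reduce roughly to volatile accesses (a simplified sketch, relying
on the GNU C typeof extension):

/* A volatile access cannot be duplicated, fused, or re-read at the
 * compiler's discretion, so the while loop above loads the pointer
 * once per iteration and works on that snapshot. */
#define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

Note that this adds no ordering or atomicity beyond a single access per
evaluation; forbidding the repeated read is by itself enough to make the
traversal safe against a concurrent writer re-linking the chain.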

* Re: [GIT PULL] scheduler fix
  2018-12-31 14:58 Ingo Molnar
@ 2018-12-31 18:05 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2018-12-31 18:05 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Vincent Guittot,
	Thomas Gleixner, Andrew Morton, Tejun Heo

The pull request you sent on Mon, 31 Dec 2018 15:58:27 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/e3ed513bcf0097c0b8a1f1b4d791a8d0d8933b3b

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2018-12-31 14:58 Ingo Molnar
  2018-12-31 18:05 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2018-12-31 14:58 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Vincent Guittot, Thomas Gleixner,
	Andrew Morton, Tejun Heo

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: c40f7d74c741a907cfaeb73a7697081881c497d0 sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c

This is a revert addressing a lockup in cgroup-intensive workloads; the
real fixes will come later.

Happy new year,

	Ingo

------------------>
Linus Torvalds (1):
      sched/fair: Fix infinite loop in update_blocked_averages() by reverting a9e7f6544b9c


 kernel/sched/fair.c | 43 +++++++++----------------------------------
 1 file changed, 9 insertions(+), 34 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d1907506318a..6483834f1278 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -352,10 +352,9 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 }
 
-/* Iterate thr' all leaf cfs_rq's on a runqueue */
-#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)			\
-	list_for_each_entry_safe(cfs_rq, pos, &rq->leaf_cfs_rq_list,	\
-				 leaf_cfs_rq_list)
+/* Iterate through all leaf cfs_rq's on a runqueue: */
+#define for_each_leaf_cfs_rq(rq, cfs_rq) \
+	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
 
 /* Do the two (enqueued) entities belong to the same group ? */
 static inline struct cfs_rq *
@@ -447,8 +446,8 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 }
 
-#define for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos)	\
-		for (cfs_rq = &rq->cfs, pos = NULL; cfs_rq; cfs_rq = pos)
+#define for_each_leaf_cfs_rq(rq, cfs_rq)	\
+		for (cfs_rq = &rq->cfs; cfs_rq; cfs_rq = NULL)
 
 static inline struct sched_entity *parent_entity(struct sched_entity *se)
 {
@@ -7647,27 +7646,10 @@ static inline bool others_have_blocked(struct rq *rq)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 
-static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
-{
-	if (cfs_rq->load.weight)
-		return false;
-
-	if (cfs_rq->avg.load_sum)
-		return false;
-
-	if (cfs_rq->avg.util_sum)
-		return false;
-
-	if (cfs_rq->avg.runnable_load_sum)
-		return false;
-
-	return true;
-}
-
 static void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	struct cfs_rq *cfs_rq, *pos;
+	struct cfs_rq *cfs_rq;
 	const struct sched_class *curr_class;
 	struct rq_flags rf;
 	bool done = true;
@@ -7679,7 +7661,7 @@ static void update_blocked_averages(int cpu)
 	 * Iterates the task_group tree in a bottom up fashion, see
 	 * list_add_leaf_cfs_rq() for details.
 	 */
-	for_each_leaf_cfs_rq_safe(rq, cfs_rq, pos) {
+	for_each_leaf_cfs_rq(rq, cfs_rq) {
 		struct sched_entity *se;
 
 		/* throttled entities do not contribute to load */
@@ -7694,13 +7676,6 @@ static void update_blocked_averages(int cpu)
 		if (se && !skip_blocked_update(se))
 			update_load_avg(cfs_rq_of(se), se, 0);
 
-		/*
-		 * There can be a lot of idle CPU cgroups.  Don't let fully
-		 * decayed cfs_rqs linger on the list.
-		 */
-		if (cfs_rq_is_decayed(cfs_rq))
-			list_del_leaf_cfs_rq(cfs_rq);
-
 		/* Don't need periodic decay once load/util_avg are null */
 		if (cfs_rq_has_blocked(cfs_rq))
 			done = false;
@@ -10570,10 +10545,10 @@ const struct sched_class fair_sched_class = {
 #ifdef CONFIG_SCHED_DEBUG
 void print_cfs_stats(struct seq_file *m, int cpu)
 {
-	struct cfs_rq *cfs_rq, *pos;
+	struct cfs_rq *cfs_rq;
 
 	rcu_read_lock();
-	for_each_leaf_cfs_rq_safe(cpu_rq(cpu), cfs_rq, pos)
+	for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
 		print_cfs_rq(m, cpu, cfs_rq);
 	rcu_read_unlock();
 }


* Re: [GIT PULL] scheduler fix
  2018-11-17 10:57 Ingo Molnar
@ 2018-11-18 20:05 ` pr-tracker-bot
  0 siblings, 0 replies; 69+ messages in thread
From: pr-tracker-bot @ 2018-11-18 20:05 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

The pull request you sent on Sat, 17 Nov 2018 11:57:57 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/03582f338e39ed8f8e8451ef1ef04f060d785a87

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* [GIT PULL] scheduler fix
@ 2018-11-17 10:57 Ingo Molnar
  2018-11-18 20:05 ` pr-tracker-bot
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2018-11-17 10:57 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: c469933e772132aad040bd6a2adc8edf9ad6f825 sched/fair: Fix cpu_util_wake() for 'execl' type workloads

Fix an exec()-related scalability/performance regression caused by
incorrectly calculating load and migrating tasks on exec() when they
shouldn't be.

 Thanks,

	Ingo

------------------>
Patrick Bellasi (1):
      sched/fair: Fix cpu_util_wake() for 'execl' type workloads


 kernel/sched/fair.c | 62 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 48 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3648d0300fdf..ac855b2f4774 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5674,11 +5674,11 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	return target;
 }
 
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p);
+static unsigned long cpu_util_without(int cpu, struct task_struct *p);
 
-static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
+static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
 {
-	return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
+	return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
 }
 
 /*
@@ -5738,7 +5738,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
 
-			spare_cap = capacity_spare_wake(i, p);
+			spare_cap = capacity_spare_without(i, p);
 
 			if (spare_cap > max_spare_cap)
 				max_spare_cap = spare_cap;
@@ -5889,8 +5889,8 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 		return prev_cpu;
 
 	/*
-	 * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
-	 * last_update_time.
+	 * We need task's util for capacity_spare_without, sync it up to
+	 * prev_cpu's last_update_time.
 	 */
 	if (!(sd_flag & SD_BALANCE_FORK))
 		sync_entity_load_avg(&p->se);
@@ -6216,10 +6216,19 @@ static inline unsigned long cpu_util(int cpu)
 }
 
 /*
- * cpu_util_wake: Compute CPU utilization with any contributions from
- * the waking task p removed.
+ * cpu_util_without: compute cpu utilization without any contributions from *p
+ * @cpu: the CPU which utilization is requested
+ * @p: the task which utilization should be discounted
+ *
+ * The utilization of a CPU is defined by the utilization of tasks currently
+ * enqueued on that CPU as well as tasks which are currently sleeping after an
+ * execution on that CPU.
+ *
+ * This method returns the utilization of the specified CPU by discounting the
+ * utilization of the specified task, whenever the task is currently
+ * contributing to the CPU utilization.
  */
-static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
+static unsigned long cpu_util_without(int cpu, struct task_struct *p)
 {
 	struct cfs_rq *cfs_rq;
 	unsigned int util;
@@ -6231,7 +6240,7 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 	cfs_rq = &cpu_rq(cpu)->cfs;
 	util = READ_ONCE(cfs_rq->avg.util_avg);
 
-	/* Discount task's blocked util from CPU's util */
+	/* Discount task's util from CPU's util */
 	util -= min_t(unsigned int, util, task_util(p));
 
 	/*
@@ -6240,14 +6249,14 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 	 * a) if *p is the only task sleeping on this CPU, then:
 	 *      cpu_util (== task_util) > util_est (== 0)
 	 *    and thus we return:
-	 *      cpu_util_wake = (cpu_util - task_util) = 0
+	 *      cpu_util_without = (cpu_util - task_util) = 0
 	 *
 	 * b) if other tasks are SLEEPING on this CPU, which is now exiting
 	 *    IDLE, then:
 	 *      cpu_util >= task_util
 	 *      cpu_util > util_est (== 0)
 	 *    and thus we discount *p's blocked utilization to return:
-	 *      cpu_util_wake = (cpu_util - task_util) >= 0
+	 *      cpu_util_without = (cpu_util - task_util) >= 0
 	 *
 	 * c) if other tasks are RUNNABLE on that CPU and
 	 *      util_est > cpu_util
@@ -6260,8 +6269,33 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 	 * covered by the following code when estimated utilization is
 	 * enabled.
 	 */
-	if (sched_feat(UTIL_EST))
-		util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
+	if (sched_feat(UTIL_EST)) {
+		unsigned int estimated =
+			READ_ONCE(cfs_rq->avg.util_est.enqueued);
+
+		/*
+		 * Despite the following checks we still have a small window
+		 * for a possible race, when an execl's select_task_rq_fair()
+		 * races with LB's detach_task():
+		 *
+		 *   detach_task()
+		 *     p->on_rq = TASK_ON_RQ_MIGRATING;
+		 *     ---------------------------------- A
+		 *     deactivate_task()                   \
+		 *       dequeue_task()                     + RaceTime
+		 *         util_est_dequeue()              /
+		 *     ---------------------------------- B
+		 *
+		 * The additional check on "current == p" it's required to
+		 * properly fix the execl regression and it helps in further
+		 * reducing the chances for the above race.
+		 */
+		if (unlikely(task_on_rq_queued(p) || current == p)) {
+			estimated -= min_t(unsigned int, estimated,
+					   (_task_util_est(p) | UTIL_AVG_UNCHANGED));
+		}
+		util = max(util, estimated);
+	}
 
 	/*
 	 * Utilization (estimated) can exceed the CPU capacity, thus let's

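Both discounts above rely on the same underflow-safe idiom: the utilization
sums are unsigned, and the task's contribution may already be partly or
fully gone, so the code never subtracts more than is present. In miniature
(the helper name here is illustrative):

#include <stdio.h>

/* util -= min(util, task_util): clamps at zero instead of wrapping. */
static unsigned int discount(unsigned int util, unsigned int task_util)
{
	return util - (task_util < util ? task_util : util);
}

int main(void)
{
	printf("%u\n", discount(100, 30));  /* 70                  */
	printf("%u\n", discount(100, 130)); /* 0, not a huge wrap  */
	return 0;
}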

* Re: [GIT PULL] scheduler fix
  2018-10-11  9:12 Ingo Molnar
@ 2018-10-11 12:32 ` Greg Kroah-Hartman
  0 siblings, 0 replies; 69+ messages in thread
From: Greg Kroah-Hartman @ 2018-10-11 12:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Linus Torvalds, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

On Thu, Oct 11, 2018 at 11:12:37AM +0200, Ingo Molnar wrote:
> Greg,
> 
> Please pull the latest sched-urgent-for-linus git tree from:
> 
>    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

Now merged, thanks.

greg k-h



* [GIT PULL] scheduler fix
@ 2018-10-11  9:12 Ingo Molnar
  2018-10-11 12:32 ` Greg Kroah-Hartman
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2018-10-11  9:12 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, Linus Torvalds, Peter Zijlstra, Thomas Gleixner,
	Andrew Morton

Greg,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: e054637597ba36d3729ba6a3a3dd7aad8e2a3003 mm, sched/numa: Remove remaining traces of NUMA rate-limiting

Cleanup of dead code left over from the recent sched/numa fixes.

 Thanks,

	Ingo

------------------>
Srikar Dronamraju (1):
      mm, sched/numa: Remove remaining traces of NUMA rate-limiting


 include/linux/mmzone.h |  4 ----
 mm/page_alloc.c        | 10 ----------
 2 files changed, 14 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3f4c0b167333..d4b0c79d2924 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -667,10 +667,6 @@ typedef struct pglist_data {
 	enum zone_type kcompactd_classzone_idx;
 	wait_queue_head_t kcompactd_wait;
 	struct task_struct *kcompactd;
-#endif
-#ifdef CONFIG_NUMA_BALANCING
-	/* Lock serializing the migrate rate limiting window */
-	spinlock_t numabalancing_migrate_lock;
 #endif
 	/*
 	 * This is a per-node reserve of pages that are not available
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 706a738c0aee..e2ef1c17942f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6193,15 +6193,6 @@ static unsigned long __init calc_memmap_size(unsigned long spanned_pages,
 	return PAGE_ALIGN(pages * sizeof(struct page)) >> PAGE_SHIFT;
 }
 
-#ifdef CONFIG_NUMA_BALANCING
-static void pgdat_init_numabalancing(struct pglist_data *pgdat)
-{
-	spin_lock_init(&pgdat->numabalancing_migrate_lock);
-}
-#else
-static void pgdat_init_numabalancing(struct pglist_data *pgdat) {}
-#endif
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void pgdat_init_split_queue(struct pglist_data *pgdat)
 {
@@ -6226,7 +6217,6 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
 {
 	pgdat_resize_init(pgdat);
 
-	pgdat_init_numabalancing(pgdat);
 	pgdat_init_split_queue(pgdat);
 	pgdat_init_kcompactd(pgdat);
 



* [GIT PULL] scheduler fix
@ 2018-10-11  9:02 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2018-10-11  9:02 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Mel Gorman,
	Linus Torvalds, Andrew Morton

Greg,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: e054637597ba36d3729ba6a3a3dd7aad8e2a3003 mm, sched/numa: Remove remaining traces of NUMA rate-limiting

Cleanup of dead code left over from the recent sched/numa fixes.

 Thanks,

	Ingo

------------------>
Srikar Dronamraju (1):
      mm, sched/numa: Remove remaining traces of NUMA rate-limiting


 include/linux/mmzone.h |  4 ----
 mm/page_alloc.c        | 10 ----------
 2 files changed, 14 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3f4c0b167333..d4b0c79d2924 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -667,10 +667,6 @@ typedef struct pglist_data {
 	enum zone_type kcompactd_classzone_idx;
 	wait_queue_head_t kcompactd_wait;
 	struct task_struct *kcompactd;
-#endif
-#ifdef CONFIG_NUMA_BALANCING
-	/* Lock serializing the migrate rate limiting window */
-	spinlock_t numabalancing_migrate_lock;
 #endif
 	/*
 	 * This is a per-node reserve of pages that are not available
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 706a738c0aee..e2ef1c17942f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6193,15 +6193,6 @@ static unsigned long __init calc_memmap_size(unsigned long spanned_pages,
 	return PAGE_ALIGN(pages * sizeof(struct page)) >> PAGE_SHIFT;
 }
 
-#ifdef CONFIG_NUMA_BALANCING
-static void pgdat_init_numabalancing(struct pglist_data *pgdat)
-{
-	spin_lock_init(&pgdat->numabalancing_migrate_lock);
-}
-#else
-static void pgdat_init_numabalancing(struct pglist_data *pgdat) {}
-#endif
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void pgdat_init_split_queue(struct pglist_data *pgdat)
 {
@@ -6226,7 +6217,6 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
 {
 	pgdat_resize_init(pgdat);
 
-	pgdat_init_numabalancing(pgdat);
 	pgdat_init_split_queue(pgdat);
 	pgdat_init_kcompactd(pgdat);
 


* [GIT PULL] scheduler fix
@ 2018-01-17 15:34 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2018-01-17 15:34 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: c96f5471ce7d2aefd0dda560cc23f08ab00bc65d delayacct: Account blkio completion on the correct task

A delayacct statistics correctness fix.

 Thanks,

	Ingo

------------------>
Josh Snyder (1):
      delayacct: Account blkio completion on the correct task


 include/linux/delayacct.h |  8 ++++----
 kernel/delayacct.c        | 42 ++++++++++++++++++++++++++----------------
 kernel/sched/core.c       |  6 +++---
 3 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/include/linux/delayacct.h b/include/linux/delayacct.h
index 4178d2493547..5e335b6203f4 100644
--- a/include/linux/delayacct.h
+++ b/include/linux/delayacct.h
@@ -71,7 +71,7 @@ extern void delayacct_init(void);
 extern void __delayacct_tsk_init(struct task_struct *);
 extern void __delayacct_tsk_exit(struct task_struct *);
 extern void __delayacct_blkio_start(void);
-extern void __delayacct_blkio_end(void);
+extern void __delayacct_blkio_end(struct task_struct *);
 extern int __delayacct_add_tsk(struct taskstats *, struct task_struct *);
 extern __u64 __delayacct_blkio_ticks(struct task_struct *);
 extern void __delayacct_freepages_start(void);
@@ -122,10 +122,10 @@ static inline void delayacct_blkio_start(void)
 		__delayacct_blkio_start();
 }
 
-static inline void delayacct_blkio_end(void)
+static inline void delayacct_blkio_end(struct task_struct *p)
 {
 	if (current->delays)
-		__delayacct_blkio_end();
+		__delayacct_blkio_end(p);
 	delayacct_clear_flag(DELAYACCT_PF_BLKIO);
 }
 
@@ -169,7 +169,7 @@ static inline void delayacct_tsk_free(struct task_struct *tsk)
 {}
 static inline void delayacct_blkio_start(void)
 {}
-static inline void delayacct_blkio_end(void)
+static inline void delayacct_blkio_end(struct task_struct *p)
 {}
 static inline int delayacct_add_tsk(struct taskstats *d,
 					struct task_struct *tsk)
diff --git a/kernel/delayacct.c b/kernel/delayacct.c
index 4a1c33416b6a..e2764d767f18 100644
--- a/kernel/delayacct.c
+++ b/kernel/delayacct.c
@@ -51,16 +51,16 @@ void __delayacct_tsk_init(struct task_struct *tsk)
  * Finish delay accounting for a statistic using its timestamps (@start),
  * accumulator (@total) and @count
  */
-static void delayacct_end(u64 *start, u64 *total, u32 *count)
+static void delayacct_end(spinlock_t *lock, u64 *start, u64 *total, u32 *count)
 {
 	s64 ns = ktime_get_ns() - *start;
 	unsigned long flags;
 
 	if (ns > 0) {
-		spin_lock_irqsave(&current->delays->lock, flags);
+		spin_lock_irqsave(lock, flags);
 		*total += ns;
 		(*count)++;
-		spin_unlock_irqrestore(&current->delays->lock, flags);
+		spin_unlock_irqrestore(lock, flags);
 	}
 }
 
@@ -69,17 +69,25 @@ void __delayacct_blkio_start(void)
 	current->delays->blkio_start = ktime_get_ns();
 }
 
-void __delayacct_blkio_end(void)
+/*
+ * We cannot rely on the `current` macro, as we haven't yet switched back to
+ * the process being woken.
+ */
+void __delayacct_blkio_end(struct task_struct *p)
 {
-	if (current->delays->flags & DELAYACCT_PF_SWAPIN)
-		/* Swapin block I/O */
-		delayacct_end(&current->delays->blkio_start,
-			&current->delays->swapin_delay,
-			&current->delays->swapin_count);
-	else	/* Other block I/O */
-		delayacct_end(&current->delays->blkio_start,
-			&current->delays->blkio_delay,
-			&current->delays->blkio_count);
+	struct task_delay_info *delays = p->delays;
+	u64 *total;
+	u32 *count;
+
+	if (p->delays->flags & DELAYACCT_PF_SWAPIN) {
+		total = &delays->swapin_delay;
+		count = &delays->swapin_count;
+	} else {
+		total = &delays->blkio_delay;
+		count = &delays->blkio_count;
+	}
+
+	delayacct_end(&delays->lock, &delays->blkio_start, total, count);
 }
 
 int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
@@ -153,8 +161,10 @@ void __delayacct_freepages_start(void)
 
 void __delayacct_freepages_end(void)
 {
-	delayacct_end(&current->delays->freepages_start,
-			&current->delays->freepages_delay,
-			&current->delays->freepages_count);
+	delayacct_end(
+		&current->delays->lock,
+		&current->delays->freepages_start,
+		&current->delays->freepages_delay,
+		&current->delays->freepages_count);
 }
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 644fa2e3d993..a7bf32aabfda 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2056,7 +2056,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	p->state = TASK_WAKING;
 
 	if (p->in_iowait) {
-		delayacct_blkio_end();
+		delayacct_blkio_end(p);
 		atomic_dec(&task_rq(p)->nr_iowait);
 	}
 
@@ -2069,7 +2069,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 #else /* CONFIG_SMP */
 
 	if (p->in_iowait) {
-		delayacct_blkio_end();
+		delayacct_blkio_end(p);
 		atomic_dec(&task_rq(p)->nr_iowait);
 	}
 
@@ -2122,7 +2122,7 @@ static void try_to_wake_up_local(struct task_struct *p, struct rq_flags *rf)
 
 	if (!task_on_rq_queued(p)) {
 		if (p->in_iowait) {
-			delayacct_blkio_end();
+			delayacct_blkio_end(p);
 			atomic_dec(&rq->nr_iowait);
 		}
 		ttwu_activate(rq, p, ENQUEUE_WAKEUP | ENQUEUE_NOCLOCK);

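The bug itself is a classic misuse of current: __delayacct_blkio_end() runs
in the waker's context, but the delay belongs to the task being woken, so
accounting against current charged the wrong task and took the wrong lock.
The shape of the fix, reduced to a userspace sketch:

#include <stdio.h>

struct task_demo { const char *name; long long blkio_delay; };

static struct task_demo waker = { "waker", 0 };
static struct task_demo *current_task = &waker; /* 'current' stand-in */

/* Buggy: charges whichever task happens to be running. */
static void blkio_end_buggy(void)
{
	current_task->blkio_delay += 10;
}

/* Fixed: the target task is passed in explicitly. */
static void blkio_end_fixed(struct task_demo *p)
{
	p->blkio_delay += 10;
}

int main(void)
{
	struct task_demo sleeper = { "sleeper", 0 };

	blkio_end_buggy();         /* waker pays - wrong   */
	blkio_end_fixed(&sleeper); /* sleeper pays - right */
	printf("waker=%lld sleeper=%lld\n",
	       waker.blkio_delay, sleeper.blkio_delay);
	return 0;
}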

* [GIT PULL] scheduler fix
@ 2017-10-27 19:16 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2017-10-27 19:16 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 88796e7e5c457cae72833196cb98e6895dd107e2 sched/swait: Document it clearly that the swait facilities are special and shouldn't be used

Update the <linux/swait.h> documentation to discourage use of the swait facilities.

 Thanks,

	Ingo

------------------>
Davidlohr Bueso (1):
      sched/swait: Document it clearly that the swait facilities are special and shouldn't be used


 include/linux/swait.h | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index 73e97a08d3d0..cf30f5022472 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -9,13 +9,16 @@
 /*
  * Simple wait queues
  *
- * While these are very similar to the other/complex wait queues (wait.h) the
- * most important difference is that the simple waitqueue allows for
- * deterministic behaviour -- IOW it has strictly bounded IRQ and lock hold
- * times.
+ * While these are very similar to regular wait queues (wait.h) the most
+ * important difference is that the simple waitqueue allows for deterministic
+ * behaviour -- IOW it has strictly bounded IRQ and lock hold times.
  *
- * In order to make this so, we had to drop a fair number of features of the
- * other waitqueue code; notably:
+ * Mainly, this is accomplished by two things. Firstly not allowing swake_up_all
+ * from IRQ disabled, and dropping the lock upon every wakeup, giving a higher
+ * priority task a chance to run.
+ *
+ * Secondly, we had to drop a fair number of features of the other waitqueue
+ * code; notably:
  *
  *  - mixing INTERRUPTIBLE and UNINTERRUPTIBLE sleeps on the same waitqueue;
  *    all wakeups are TASK_NORMAL in order to avoid O(n) lookups for the right
@@ -24,12 +27,14 @@
  *  - the exclusive mode; because this requires preserving the list order
  *    and this is hard.
  *
- *  - custom wake functions; because you cannot give any guarantees about
- *    random code.
- *
- * As a side effect of this; the data structures are slimmer.
+ *  - custom wake callback functions; because you cannot give any guarantees
+ *    about random code. This also allows swait to be used in RT, such that
+ *    raw spinlock can be used for the swait queue head.
  *
- * One would recommend using this wait queue where possible.
+ * As a side effect of these; the data structures are slimmer albeit more ad-hoc.
+ * For all the above, note that simple wait queues should _only_ be used under
+ * very specific realtime constraints -- it is best to stick with the regular
+ * wait queues in most cases.
  */
 
 struct task_struct;


* [GIT PULL] scheduler fix
@ 2016-12-07 18:48 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-12-07 18:48 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Thomas Gleixner,
	Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 83929cce95251cc77e5659bf493bd424ae0e7a67 sched/autogroup: Fix 64-bit kernel nice level adjustment

An autogroup nice level adjustment bug fix.

 Thanks,

	Ingo

------------------>
Mike Galbraith (1):
      sched/autogroup: Fix 64-bit kernel nice level adjustment


 kernel/sched/auto_group.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
index f1c8fd566246..da39489d2d80 100644
--- a/kernel/sched/auto_group.c
+++ b/kernel/sched/auto_group.c
@@ -212,6 +212,7 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
 {
 	static unsigned long next = INITIAL_JIFFIES;
 	struct autogroup *ag;
+	unsigned long shares;
 	int err;
 
 	if (nice < MIN_NICE || nice > MAX_NICE)
@@ -230,9 +231,10 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
 
 	next = HZ / 10 + jiffies;
 	ag = autogroup_task_get(p);
+	shares = scale_load(sched_prio_to_weight[nice + 20]);
 
 	down_write(&ag->lock);
-	err = sched_group_set_shares(ag->tg, sched_prio_to_weight[nice + 20]);
+	err = sched_group_set_shares(ag->tg, shares);
 	if (!err)
 		ag->nice = nice;
 	up_write(&ag->lock);

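On 64-bit kernels, load weights carry extra fixed-point resolution:
scale_load() shifts them left by SCHED_FIXEDPOINT_SHIFT (10) bits, and
sched_group_set_shares() expects an already-scaled value. Passing the raw
table weight therefore made every autogroup roughly 1024 times lighter than
intended. The arithmetic, sketched with the 64-bit constants:

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT 10
#define scale_load(w) ((unsigned long)(w) << SCHED_FIXEDPOINT_SHIFT)

int main(void)
{
	/* sched_prio_to_weight[20], i.e. the weight of nice 0 */
	unsigned long nice0 = 1024;

	printf("raw weight:    %lu\n", nice0);             /* 1024    */
	printf("scaled shares: %lu\n", scale_load(nice0)); /* 1048576 */
	return 0;
}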

* [GIT PULL] scheduler fix
@ 2016-10-28  8:35 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-10-28  8:35 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: f5d6d2da0d9098a4aa0ebcc187aa0fc167045d6b sched/fair: Remove unused but set variable 'rq'

An unused variable warning fix.

 Thanks,

	Ingo

------------------>
Tobias Klauser (1):
      sched/fair: Remove unused but set variable 'rq'


 kernel/sched/fair.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d941c97dfbc3..c242944f5cbd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8839,7 +8839,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 {
 	struct sched_entity *se;
 	struct cfs_rq *cfs_rq;
-	struct rq *rq;
 	int i;
 
 	tg->cfs_rq = kzalloc(sizeof(cfs_rq) * nr_cpu_ids, GFP_KERNEL);
@@ -8854,8 +8853,6 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
 	for_each_possible_cpu(i) {
-		rq = cpu_rq(i);
-
 		cfs_rq = kzalloc_node(sizeof(struct cfs_rq),
 				      GFP_KERNEL, cpu_to_node(i));
 		if (!cfs_rq)


* [GIT PULL] scheduler fix
@ 2016-10-19 15:52 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-10-19 15:52 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: b5a9b340789b2b24c6896bcf7a065c31a4db671c sched/fair: Fix incorrect task group ->load_avg

This fixes a group scheduling related performance/interactivity regression 
introduced in v4.8, which affects certain hardware environments where 
cpu_possible_mask != cpu_present_mask.

 Thanks,

	Ingo

------------------>
Vincent Guittot (1):
      sched/fair: Fix incorrect task group ->load_avg


 kernel/sched/fair.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76ee7de1859d..d941c97dfbc3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -690,7 +690,14 @@ void init_entity_runnable_average(struct sched_entity *se)
 	 * will definitely be updated (after enqueue).
 	 */
 	sa->period_contrib = 1023;
-	sa->load_avg = scale_load_down(se->load.weight);
+	/*
+	 * Tasks are initialized with full load to be seen as heavy tasks until
+	 * they get a chance to stabilize to their real load level.
+	 * Group entities are initialized with zero load to reflect the fact that
+	 * nothing has been attached to the task group yet.
+	 */
+	if (entity_is_task(se))
+		sa->load_avg = scale_load_down(se->load.weight);
 	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
 	/*
 	 * At this point, util_avg won't be used in select_task_rq_fair anyway
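
An aside on how the regression showed up (mechanism as per the fix's
rationale; the CPU counts below are illustrative):

	/*
	 * tg->load_avg sums the load of the group's per-CPU entities.
	 * A group entity used to start at scale_load_down(weight), i.e.
	 * ~1024, and entities on CPUs that are possible but never come
	 * online are never updated, so that initial contribution never
	 * decays.  With, say, 24 possible-but-absent CPUs a task group
	 * dragged around ~24 * 1024 of phantom load, deflating the share
	 * seen by the entities that actually run.  Starting group
	 * entities at zero load removes the phantom contribution.
	 */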

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2016-10-18 11:17 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-10-18 11:17 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 9cfb38a7ba5a9c27c1af8093fb1af4b699c0a441 sched/fair: Fix sched domains NULL dereference in select_idle_sibling()

Fix a crash that can trigger when racing with CPU hotplug: we didn't use 
sched-domains data structures carefully enough in select_idle_cpu().

 Thanks,

	Ingo

------------------>
Wanpeng Li (1):
      sched/fair: Fix sched domains NULL dereference in select_idle_sibling()


 kernel/sched/fair.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 502e95a6e927..8b03fb5d1b9e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5471,13 +5471,18 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
  */
 static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
-	struct sched_domain *this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
-	u64 avg_idle = this_rq()->avg_idle;
-	u64 avg_cost = this_sd->avg_scan_cost;
+	struct sched_domain *this_sd;
+	u64 avg_cost, avg_idle = this_rq()->avg_idle;
 	u64 time, cost;
 	s64 delta;
 	int cpu, wrap;
 
+	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
+	if (!this_sd)
+		return -1;
+
+	avg_cost = this_sd->avg_scan_cost;
+
 	/*
 	 * Due to large variance we need a large fuzz factor; hackbench in
 	 * particularly is sensitive here.
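
The shape of the bug, as a hedged sketch (same kernel identifiers as the
hunk above): the fatal dereference hid in a declaration initializer, so
it ran before any NULL check could.

	/* buggy: the dereference happens at declaration time */
	struct sched_domain *this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
	u64 avg_cost = this_sd->avg_scan_cost;	/* oops if this_sd is NULL */

	/* fixed: dereference once, test, then use */
	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
	if (!this_sd)
		return -1;
	avg_cost = this_sd->avg_scan_cost;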

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2016-09-13 18:17 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-09-13 18:17 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 135e8c9250dd5c8c9aae5984fde6f230d0cbfeaf sched/core: Fix a race between try_to_wake_up() and a woken up task

A fix for a try_to_wake_up() memory-ordering race that caused a busy-loop in ttwu().

 Thanks,

	Ingo

------------------>
Balbir Singh (1):
      sched/core: Fix a race between try_to_wake_up() and a woken up task


 kernel/sched/core.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2a906f20fba7..44817c640e99 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2016,6 +2016,28 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	success = 1; /* we're going to change ->state */
 	cpu = task_cpu(p);
 
+	/*
+	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
+	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
+	 * in smp_cond_load_acquire() below.
+	 *
+	 * sched_ttwu_pending()                 try_to_wake_up()
+	 *   [S] p->on_rq = 1;                  [L] P->state
+	 *       UNLOCK rq->lock  -----.
+	 *                              \
+	 *				 +---   RMB
+	 * schedule()                   /
+	 *       LOCK rq->lock    -----'
+	 *       UNLOCK rq->lock
+	 *
+	 * [task p]
+	 *   [S] p->state = UNINTERRUPTIBLE     [L] p->on_rq
+	 *
+	 * Pairs with the UNLOCK+LOCK on rq->lock from the
+	 * last wakeup of our task and the schedule that got our task
+	 * current.
+	 */
+	smp_rmb();
 	if (p->on_rq && ttwu_remote(p, wake_flags))
 		goto stat;
 

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2016-07-14 18:56 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-07-14 18:56 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: d60585c5766e9620d5d83e2b25dc042c7bdada2c sched/core: Correct off by one bug in load migration calculation

Fix a CPU hotplug related corruption of the load average that got introduced in 
this merge window.

 Thanks,

	Ingo

------------------>
Thomas Gleixner (1):
      sched/core: Correct off by one bug in load migration calculation


 kernel/sched/core.c    | 6 ++++--
 kernel/sched/loadavg.c | 8 ++++----
 kernel/sched/sched.h   | 2 +-
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51d7105f529a..97ee9ac7e97c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5394,13 +5394,15 @@ void idle_task_exit(void)
 /*
  * Since this CPU is going 'away' for a while, fold any nr_active delta
  * we might have. Assumes we're called after migrate_tasks() so that the
- * nr_active count is stable.
+ * nr_active count is stable. We need to take the teardown thread which
+ * is calling this into account, so we hand in adjust = 1 to the load
+ * calculation.
  *
  * Also see the comment "Global load-average calculations".
  */
 static void calc_load_migrate(struct rq *rq)
 {
-	long delta = calc_load_fold_active(rq);
+	long delta = calc_load_fold_active(rq, 1);
 	if (delta)
 		atomic_long_add(delta, &calc_load_tasks);
 }
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index b0b93fd33af9..a2d6eb71f06b 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -78,11 +78,11 @@ void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
 	loads[2] = (avenrun[2] + offset) << shift;
 }
 
-long calc_load_fold_active(struct rq *this_rq)
+long calc_load_fold_active(struct rq *this_rq, long adjust)
 {
 	long nr_active, delta = 0;
 
-	nr_active = this_rq->nr_running;
+	nr_active = this_rq->nr_running - adjust;
 	nr_active += (long)this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
@@ -188,7 +188,7 @@ void calc_load_enter_idle(void)
 	 * We're going into NOHZ mode, if there's any pending delta, fold it
 	 * into the pending idle delta.
 	 */
-	delta = calc_load_fold_active(this_rq);
+	delta = calc_load_fold_active(this_rq, 0);
 	if (delta) {
 		int idx = calc_load_write_idx();
 
@@ -389,7 +389,7 @@ void calc_global_load_tick(struct rq *this_rq)
 	if (time_before(jiffies, this_rq->calc_load_update))
 		return;
 
-	delta  = calc_load_fold_active(this_rq);
+	delta  = calc_load_fold_active(this_rq, 0);
 	if (delta)
 		atomic_long_add(delta, &calc_load_tasks);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7cbeb92a1cb9..898c0d2f18fe 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -28,7 +28,7 @@ extern unsigned long calc_load_update;
 extern atomic_long_t calc_load_tasks;
 
 extern void calc_global_load_tick(struct rq *this_rq);
-extern long calc_load_fold_active(struct rq *this_rq);
+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
 
 #ifdef CONFIG_SMP
 extern void cpu_load_update_active(struct rq *this_rq);
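
A worked illustration (a simplified stand-alone model, not kernel code:
the real fold also diffs against rq->calc_load_active, elided here):

	#include <stdio.h>

	/* simplified model of calc_load_fold_active() */
	static long fold_active(long nr_running, long nr_uninterruptible,
				long adjust)
	{
		return nr_running - adjust + nr_uninterruptible;
	}

	int main(void)
	{
		/* a CPU going offline, its rq holding only the teardown thread */
		printf("old: %ld\n", fold_active(1, 0, 0));	/* folds a phantom task */
		printf("new: %ld\n", fold_active(1, 0, 1));	/* nothing left to fold */
		return 0;
	}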

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2016-05-13 18:54 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-05-13 18:54 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 53d3bc773eaa7ab1cf63585e76af7ee869d5e709 Revert "sched/fair: Fix fairness issue on migration"

This is a revert to fix an interactivity problem. The proper fixes for the 
problems that the reverted commit exposed are now in sched/core (consisting of 3 
patches), but were too risky for v4.6 and will arrive in the v4.7 merge window.

 Thanks,

	Ingo

------------------>
Ingo Molnar (1):
      Revert "sched/fair: Fix fairness issue on migration"


 kernel/sched/fair.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 40748dc8ea3e..e7dd0ec169be 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3188,25 +3188,17 @@ static inline void check_schedstat_required(void)
 static void
 enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
-	bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING);
-	bool curr = cfs_rq->curr == se;
-
 	/*
-	 * If we're the current task, we must renormalise before calling
-	 * update_curr().
+	 * Update the normalized vruntime before updating min_vruntime
+	 * through calling update_curr().
 	 */
-	if (renorm && curr)
+	if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
 		se->vruntime += cfs_rq->min_vruntime;
 
-	update_curr(cfs_rq);
-
 	/*
-	 * Otherwise, renormalise after, such that we're placed at the current
-	 * moment in time, instead of some random moment in the past.
+	 * Update run-time statistics of the 'current'.
 	 */
-	if (renorm && !curr)
-		se->vruntime += cfs_rq->min_vruntime;
-
+	update_curr(cfs_rq);
 	enqueue_entity_load_avg(cfs_rq, se);
 	account_entity_enqueue(cfs_rq, se);
 	update_cfs_shares(cfs_rq);
@@ -3222,7 +3214,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		update_stats_enqueue(cfs_rq, se);
 		check_spread(cfs_rq, se);
 	}
-	if (!curr)
+	if (se != cfs_rq->curr)
 		__enqueue_entity(cfs_rq, se);
 	se->on_rq = 1;
 

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2016-05-06 11:31 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2016-05-06 11:31 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 2548d546d40c0014efdde88a53bf7896e917dcce nohz/full, sched/rt: Fix missed tick-reenabling bug in sched_can_stop_tick()

This tree contains a single fix for a nohz tick-stopping bug that triggers 
when mixed-policy SCHED_FIFO and SCHED_RR tasks are present on a runqueue.

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      nohz/full, sched/rt: Fix missed tick-reenabling bug in sched_can_stop_tick()


 kernel/sched/core.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8b489fcac37b..d1f7149f8704 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -596,17 +596,8 @@ bool sched_can_stop_tick(struct rq *rq)
 		return false;
 
 	/*
-	 * FIFO realtime policy runs the highest priority task (after DEADLINE).
-	 * Other runnable tasks are of a lower priority. The scheduler tick
-	 * isn't needed.
-	 */
-	fifo_nr_running = rq->rt.rt_nr_running - rq->rt.rr_nr_running;
-	if (fifo_nr_running)
-		return true;
-
-	/*
-	 * Round-robin realtime tasks time slice with other tasks at the same
-	 * realtime priority.
+	 * If there is more than one RR task, we need the tick to effect the
+	 * actual RR behaviour.
 	 */
 	if (rq->rt.rr_nr_running) {
 		if (rq->rt.rr_nr_running == 1)
@@ -615,8 +606,20 @@ bool sched_can_stop_tick(struct rq *rq)
 			return false;
 	}
 
-	/* Normal multitasking need periodic preemption checks */
-	if (rq->cfs.nr_running > 1)
+	/*
+	 * If there's no RR tasks, but FIFO tasks, we can skip the tick, no
+	 * forced preemption between FIFO tasks.
+	 */
+	fifo_nr_running = rq->rt.rt_nr_running - rq->rt.rr_nr_running;
+	if (fifo_nr_running)
+		return true;
+
+	/*
+	 * If there are no DL,RR/FIFO tasks, there must only be CFS tasks left;
+	 * if there's more than one we need the tick for involuntary
+	 * preemption.
+	 */
+	if (rq->nr_running > 1)
 		return false;
 
 	return true;
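
A worked example of the logic fail (a condensed stand-alone model of the
two orderings; the DL and CFS tails are ignored for brevity):

	#include <stdbool.h>
	#include <stdio.h>

	/* old ordering: the FIFO shortcut ran before the RR check */
	static bool old_can_stop_tick(int rt_nr, int rr_nr)
	{
		if (rt_nr - rr_nr)	/* any FIFO task present? */
			return true;
		return rr_nr <= 1;
	}

	/* new ordering: RR is tested first, since RR slicing needs the tick */
	static bool new_can_stop_tick(int rt_nr, int rr_nr)
	{
		if (rr_nr > 1)
			return false;
		return true;
	}

	int main(void)
	{
		/* one FIFO plus two RR tasks: rt_nr_running=3, rr_nr_running=2 */
		printf("old: %d (wrongly stops the tick)\n", old_can_stop_tick(3, 2));
		printf("new: %d (tick kept for RR slicing)\n", new_can_stop_tick(3, 2));
		return 0;
	}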

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2015-07-18  2:56 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2015-07-18  2:56 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: d49db342f0e276b354383b3281c5668b6b80f5c2 sched/fair: Test list head instead of list entry in throttle_cfs_rq()

An rq throttling fix.

 Thanks,

	Ingo

------------------>
Cong Wang (1):
      sched/fair: Test list head instead of list entry in throttle_cfs_rq()


 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 65c8f3ebdc3c..d113c3ba8bc4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3683,7 +3683,7 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	cfs_rq->throttled = 1;
 	cfs_rq->throttled_clock = rq_clock(rq);
 	raw_spin_lock(&cfs_b->lock);
-	empty = list_empty(&cfs_rq->throttled_list);
+	empty = list_empty(&cfs_b->throttled_cfs_rq);
 
 	/*
 	 * Add to the _head_ of the list, so that an already-started
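
An aside on why the head matters (a self-contained stand-alone model;
the two list_head names mirror the fields in the hunk above):

	#include <stdbool.h>
	#include <stdio.h>

	struct list_head { struct list_head *next, *prev; };

	static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

	static void list_add(struct list_head *new, struct list_head *head)
	{
		new->next = head->next;
		new->prev = head;
		head->next->prev = new;
		head->next = new;
	}

	static bool list_empty(const struct list_head *h) { return h->next == h; }

	int main(void)
	{
		struct list_head throttled_cfs_rq;	/* global list head (cfs_b) */
		struct list_head throttled_list;	/* one cfs_rq's link node   */

		INIT_LIST_HEAD(&throttled_cfs_rq);
		INIT_LIST_HEAD(&throttled_list);

		/*
		 * A not-yet-added link node is always self-linked, so testing
		 * it always reports "empty" -- even when other cfs_rqs already
		 * sit on the global list.  Only the head answers the question
		 * the code needs: "is the global list empty?".
		 */
		printf("node: %d\n", list_empty(&throttled_list));	/* 1, always */
		printf("head: %d\n", list_empty(&throttled_cfs_rq));	/* 1, for now */

		list_add(&throttled_list, &throttled_cfs_rq);
		printf("head after add: %d\n", list_empty(&throttled_cfs_rq)); /* 0 */
		return 0;
	}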

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2015-03-28 13:45 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2015-03-28 13:45 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 746db9443ea57fd9c059f62c4bfbf41cf224fe13 sched: Fix RLIMIT_RTTIME when PI-boosting to RT

A single sched/rt corner-case fix for RLIMIT_RTTIME correctness.

 Thanks,

	Ingo

------------------>
Brian Silverman (1):
      sched: Fix RLIMIT_RTTIME when PI-boosting to RT


 kernel/sched/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f0f831e8a345..62671f53202a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3034,6 +3034,8 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
 	} else {
 		if (dl_prio(oldprio))
 			p->dl.dl_boosted = 0;
+		if (rt_prio(oldprio))
+			p->rt.timeout = 0;
 		p->sched_class = &fair_sched_class;
 	}
 

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2014-01-15 18:19 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2014-01-15 18:19 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, H. Peter Anvin,
	Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   # HEAD: 9722c2dac708e9468cc0dc30218ef76946ffbc9d sched: Calculate effective load even if local weight is 0

Contains a fix for a bug that manifested itself as a 3D performance 
regression.

 Thanks,

	Ingo

------------------>
Rik van Riel (1):
      sched: Calculate effective load even if local weight is 0


 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c7395d9..e64b079 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3923,7 +3923,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 {
 	struct sched_entity *se = tg->se[cpu];
 
-	if (!tg->parent || !wl)	/* the trivial, non-cgroup case */
+	if (!tg->parent)	/* the trivial, non-cgroup case */
 		return wl;
 
 	for_each_sched_entity(se) {

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2013-09-28 18:08 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2013-09-28 18:08 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Frédéric Weisbecker, Peter Zijlstra,
	Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   HEAD: 62d08aec6a9f4b45cc9cba1e3b2855995df133e6 Merge branch 'context_tracking/fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks into sched/urgent

An ARM context-tracking build and functional fix.

 Thanks,

	Ingo

------------------>
Frederic Weisbecker (1):
      arm: Fix build error with context tracking calls


 arch/arm/kernel/entry-header.S |  8 ++++----
 kernel/context_tracking.c      | 12 ++++++++++++
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index de23a9b..39f89fb 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -329,10 +329,10 @@
 #ifdef CONFIG_CONTEXT_TRACKING
 	.if	\save
 	stmdb   sp!, {r0-r3, ip, lr}
-	bl	user_exit
+	bl	context_tracking_user_exit
 	ldmia	sp!, {r0-r3, ip, lr}
 	.else
-	bl	user_exit
+	bl	context_tracking_user_exit
 	.endif
 #endif
 	.endm
@@ -341,10 +341,10 @@
 #ifdef CONFIG_CONTEXT_TRACKING
 	.if	\save
 	stmdb   sp!, {r0-r3, ip, lr}
-	bl	user_enter
+	bl	context_tracking_user_enter
 	ldmia	sp!, {r0-r3, ip, lr}
 	.else
-	bl	user_enter
+	bl	context_tracking_user_enter
 	.endif
 #endif
 	.endm
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 247091b..859c8df 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -51,6 +51,15 @@ void context_tracking_user_enter(void)
 	unsigned long flags;
 
 	/*
+	 * Repeat the user_enter() check here because some archs may be calling
+	 * this from asm and if no CPU needs context tracking, they shouldn't
+	 * go further. Repeat the check here until they support the static key
+	 * check.
+	 */
+	if (!static_key_false(&context_tracking_enabled))
+		return;
+
+	/*
 	 * Some contexts may involve an exception occurring in an irq,
 	 * leading to that nesting:
 	 * rcu_irq_enter() rcu_user_exit() rcu_user_exit() rcu_irq_exit()
@@ -151,6 +160,9 @@ void context_tracking_user_exit(void)
 {
 	unsigned long flags;
 
+	if (!static_key_false(&context_tracking_enabled))
+		return;
+
 	if (in_interrupt())
 		return;
 

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2013-09-12 12:58 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2013-09-12 12:58 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   HEAD: b0cff9d88ce2f3030f73138078c5b1019f17e1cc sched: Fix load balancing performance regression in should_we_balance()

Performance regression fix.

 Thanks,

	Ingo

------------------>
Joonsoo Kim (1):
      sched: Fix load balancing performance regression in should_we_balance()


 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7f0a5e6..9b3fe1c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5151,7 +5151,7 @@ static int should_we_balance(struct lb_env *env)
 	 * First idle cpu or the first cpu(busiest) in this sched group
 	 * is eligible for doing load balancing at this and above domains.
 	 */
-	return balance_cpu != env->dst_cpu;
+	return balance_cpu == env->dst_cpu;
 }
 
 /*

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2012-05-17  8:46 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2012-05-17  8:46 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   HEAD: 30b4e9eb783d94e9f5d503b15eb31720679ae1c7 sched: Fix KVM and ia64 boot crash due to sched_groups circular linked list assumption

 Thanks,

	Ingo

------------------>
Igor Mammedov (1):
      sched: Fix KVM and ia64 boot crash due to sched_groups circular linked list assumption


 kernel/sched/core.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0533a68..e5212ae 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6382,6 +6382,8 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 			if (!sg)
 				return -ENOMEM;
 
+			sg->next = sg;
+
 			*per_cpu_ptr(sdd->sg, j) = sg;
 
 			sgp = kzalloc_node(sizeof(struct sched_group_power),
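
As an aside, the assumption the one-liner restores, as a stand-alone
sketch (the struct is reduced to the one field that matters):

	#include <stdio.h>
	#include <stdlib.h>

	struct sched_group { int id; struct sched_group *next; };

	/* walkers assume the group list is circular */
	static void visit_all(struct sched_group *start)
	{
		struct sched_group *sg = start;

		do {
			printf("group %d\n", sg->id);
			sg = sg->next;	/* a NULL ->next here would crash */
		} while (sg != start);
	}

	int main(void)
	{
		struct sched_group *sg = calloc(1, sizeof(*sg));

		if (!sg)
			return 1;
		sg->next = sg;	/* the fix: self-link right after allocation */
		visit_all(sg);
		free(sg);
		return 0;
	}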

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2012-03-02 10:57 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2012-03-02 10:57 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   HEAD: 8f2f748b0656257153bcf0941df8d6060acc5ca6 CPU hotplug, cpusets, suspend: Don't touch cpusets during suspend/resume

 Thanks,

	Ingo

------------------>
Srivatsa S. Bhat (1):
      CPU hotplug, cpusets, suspend: Don't touch cpusets during suspend/resume


 kernel/sched/core.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b342f57..33a0676 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6728,7 +6728,7 @@ int __init sched_create_sysfs_power_savings_entries(struct device *dev)
 static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 			     void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
 		cpuset_update_active_cpus();
@@ -6741,7 +6741,7 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
 			       void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		cpuset_update_active_cpus();
 		return NOTIFY_OK;
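
An aside on what the dropped mask was doing (macro relation as defined
in the CPU-hotplug headers of that era):

	/*
	 *   CPU_ONLINE_FROZEN == (CPU_ONLINE | CPU_TASKS_FROZEN)
	 *
	 * With "action & ~CPU_TASKS_FROZEN", a resume-time
	 * CPU_ONLINE_FROZEN matched the CPU_ONLINE case, so cpusets were
	 * rebuilt on every suspend/resume cycle.  Switching on the raw
	 * action lets the _FROZEN variants fall through untouched.
	 */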

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2012-02-27 10:29 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2012-02-27 10:29 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-urgent-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-for-linus

   HEAD: 8c79a045fd590a26e81e75f5d8d4ec5c7d23e565 sched/events: Revert trace_sched_stat_sleeptime()

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      sched/events: Revert trace_sched_stat_sleeptime()


 include/trace/events/sched.h |   50 ------------------------------------------
 kernel/sched/core.c          |    1 -
 kernel/sched/fair.c          |    2 +
 3 files changed, 2 insertions(+), 51 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 6ba596b..e33ed1b 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -370,56 +370,6 @@ TRACE_EVENT(sched_stat_runtime,
 			(unsigned long long)__entry->vruntime)
 );
 
-#ifdef CREATE_TRACE_POINTS
-static inline u64 trace_get_sleeptime(struct task_struct *tsk)
-{
-#ifdef CONFIG_SCHEDSTATS
-	u64 block, sleep;
-
-	block = tsk->se.statistics.block_start;
-	sleep = tsk->se.statistics.sleep_start;
-	tsk->se.statistics.block_start = 0;
-	tsk->se.statistics.sleep_start = 0;
-
-	return block ? block : sleep ? sleep : 0;
-#else
-	return 0;
-#endif
-}
-#endif
-
-/*
- * Tracepoint for accounting sleeptime (time the task is sleeping
- * or waiting for I/O).
- */
-TRACE_EVENT(sched_stat_sleeptime,
-
-	TP_PROTO(struct task_struct *tsk, u64 now),
-
-	TP_ARGS(tsk, now),
-
-	TP_STRUCT__entry(
-		__array( char,	comm,	TASK_COMM_LEN	)
-		__field( pid_t,	pid			)
-		__field( u64,	sleeptime		)
-	),
-
-	TP_fast_assign(
-		memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
-		__entry->pid		= tsk->pid;
-		__entry->sleeptime = trace_get_sleeptime(tsk);
-		__entry->sleeptime = __entry->sleeptime ?
-				now - __entry->sleeptime : 0;
-	)
-	TP_perf_assign(
-		__perf_count(__entry->sleeptime);
-	),
-
-	TP_printk("comm=%s pid=%d sleeptime=%Lu [ns]",
-			__entry->comm, __entry->pid,
-			(unsigned long long)__entry->sleeptime)
-);
-
 /*
  * Tracepoint for showing priority inheritance modifying a tasks
  * priority.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5255c9d..b342f57 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1932,7 +1932,6 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
 	local_irq_enable();
 #endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 	finish_lock_switch(rq, prev);
-	trace_sched_stat_sleeptime(current, rq->clock);
 
 	fire_sched_in_preempt_notifiers(current);
 	if (mm)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7c6414f..aca16b8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1003,6 +1003,7 @@ static void enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		if (unlikely(delta > se->statistics.sleep_max))
 			se->statistics.sleep_max = delta;
 
+		se->statistics.sleep_start = 0;
 		se->statistics.sum_sleep_runtime += delta;
 
 		if (tsk) {
@@ -1019,6 +1020,7 @@ static void enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		if (unlikely(delta > se->statistics.block_max))
 			se->statistics.block_max = delta;
 
+		se->statistics.block_start = 0;
 		se->statistics.sum_sleep_runtime += delta;
 
 		if (tsk) {

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2011-04-07 17:38 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2011-04-07 17:38 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      sched: Clean up rebalance_domains() load-balance interval calculation


 kernel/sched.c      |    3 +++
 kernel/sched_fair.c |   16 ++++++++++++----
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index a884551..17b4d22 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6331,6 +6331,9 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		break;
 #endif
 	}
+
+	update_max_interval();
+
 	return NOTIFY_OK;
 }
 
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index c7ec5c8..80ecd09 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3820,6 +3820,17 @@ void select_nohz_load_balancer(int stop_tick)
 
 static DEFINE_SPINLOCK(balancing);
 
+static unsigned long __read_mostly max_load_balance_interval = HZ/10;
+
+/*
+ * Scale the max load_balance interval with the number of CPUs in the system.
+ * This trades load-balance latency on larger machines for less cross talk.
+ */
+static void update_max_interval(void)
+{
+	max_load_balance_interval = HZ*num_online_cpus()/10;
+}
+
 /*
  * It checks each scheduling domain to see if it is due to be balanced,
  * and initiates a balancing operation if so.
@@ -3849,10 +3860,7 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
 
 		/* scale ms to jiffies */
 		interval = msecs_to_jiffies(interval);
-		if (unlikely(!interval))
-			interval = 1;
-		if (interval > HZ*num_online_cpus()/10)
-			interval = HZ*num_online_cpus()/10;
+		interval = clamp(interval, 1UL, max_load_balance_interval);
 
 		need_serialize = sd->flags & SD_SERIALIZE;
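
A small aside on the clamp() rewrite (an equivalence note only; the
behaviour is unchanged):

	/*
	 * clamp(val, lo, hi) == max(lo, min(val, hi)), so the single call
	 * replaces both of the removed open-coded bounds checks:
	 *
	 *   if (unlikely(!interval))  interval = 1;    (lower bound)
	 *   if (interval > max)       interval = max;  (upper bound)
	 */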
 

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2011-03-18 13:52 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2011-03-18 13:52 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Thomas Gleixner, Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Randy Dunlap (1):
      sched, kernel-doc: Fix runqueue_is_locked() description


 kernel/sched.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index c8e40b7..58d66ea 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -661,10 +661,9 @@ static void update_rq_clock(struct rq *rq)
 #endif
 
 /**
- * runqueue_is_locked
+ * runqueue_is_locked - Returns true if the current cpu runqueue is locked
  * @cpu: the processor in question.
  *
- * Returns true if the current cpu runqueue is locked.
  * This interface allows printk to be called with the runqueue lock
  * held and know whether or not it is OK to wake up the klogd.
  */

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2011-03-10  8:01 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2011-03-10  8:01 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Thomas Gleixner,
	Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Balbir Singh (1):
      sched: Fix sched rt group scheduling when hierachy is enabled


 kernel/sched_rt.c |   14 +++++++++-----
 1 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index ad62677..01f75a5 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -210,11 +210,12 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se);
 
 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 {
-	int this_cpu = smp_processor_id();
 	struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
 	struct sched_rt_entity *rt_se;
 
-	rt_se = rt_rq->tg->rt_se[this_cpu];
+	int cpu = cpu_of(rq_of_rt_rq(rt_rq));
+
+	rt_se = rt_rq->tg->rt_se[cpu];
 
 	if (rt_rq->rt_nr_running) {
 		if (rt_se && !on_rt_rq(rt_se))
@@ -226,10 +227,10 @@ static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 
 static void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
 {
-	int this_cpu = smp_processor_id();
 	struct sched_rt_entity *rt_se;
+	int cpu = cpu_of(rq_of_rt_rq(rt_rq));
 
-	rt_se = rt_rq->tg->rt_se[this_cpu];
+	rt_se = rt_rq->tg->rt_se[cpu];
 
 	if (rt_se && on_rt_rq(rt_se))
 		dequeue_rt_entity(rt_se);
@@ -565,8 +566,11 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
 			if (rt_rq->rt_time || rt_rq->rt_nr_running)
 				idle = 0;
 			raw_spin_unlock(&rt_rq->rt_runtime_lock);
-		} else if (rt_rq->rt_nr_running)
+		} else if (rt_rq->rt_nr_running) {
 			idle = 0;
+			if (!rt_rq_throttled(rt_rq))
+				enqueue = 1;
+		}
 
 		if (enqueue)
 			sched_rt_rq_enqueue(rt_rq);

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2011-01-24 13:07 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2011-01-24 13:07 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Thomas Gleixner,
	Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Yong Zhang (1):
      sched: Fix poor interactivity on UP systems due to group scheduler nice tune bug


 kernel/sched_fair.c |   78 ++++++++++++++++++++++++++++++++++----------------
 1 files changed, 53 insertions(+), 25 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 77e9166..3547699 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -699,7 +699,8 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	cfs_rq->nr_running--;
 }
 
-#if defined CONFIG_SMP && defined CONFIG_FAIR_GROUP_SCHED
+#ifdef CONFIG_FAIR_GROUP_SCHED
+# ifdef CONFIG_SMP
 static void update_cfs_rq_load_contribution(struct cfs_rq *cfs_rq,
 					    int global_update)
 {
@@ -762,6 +763,51 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
 		list_del_leaf_cfs_rq(cfs_rq);
 }
 
+static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg,
+				long weight_delta)
+{
+	long load_weight, load, shares;
+
+	load = cfs_rq->load.weight + weight_delta;
+
+	load_weight = atomic_read(&tg->load_weight);
+	load_weight -= cfs_rq->load_contribution;
+	load_weight += load;
+
+	shares = (tg->shares * load);
+	if (load_weight)
+		shares /= load_weight;
+
+	if (shares < MIN_SHARES)
+		shares = MIN_SHARES;
+	if (shares > tg->shares)
+		shares = tg->shares;
+
+	return shares;
+}
+
+static void update_entity_shares_tick(struct cfs_rq *cfs_rq)
+{
+	if (cfs_rq->load_unacc_exec_time > sysctl_sched_shares_window) {
+		update_cfs_load(cfs_rq, 0);
+		update_cfs_shares(cfs_rq, 0);
+	}
+}
+# else /* CONFIG_SMP */
+static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
+{
+}
+
+static inline long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg,
+				long weight_delta)
+{
+	return tg->shares;
+}
+
+static inline void update_entity_shares_tick(struct cfs_rq *cfs_rq)
+{
+}
+# endif /* CONFIG_SMP */
 static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
 			    unsigned long weight)
 {
@@ -782,7 +828,7 @@ static void update_cfs_shares(struct cfs_rq *cfs_rq, long weight_delta)
 {
 	struct task_group *tg;
 	struct sched_entity *se;
-	long load_weight, load, shares;
+	long shares;
 
 	if (!cfs_rq)
 		return;
@@ -791,32 +837,14 @@ static void update_cfs_shares(struct cfs_rq *cfs_rq, long weight_delta)
 	se = tg->se[cpu_of(rq_of(cfs_rq))];
 	if (!se)
 		return;
-
-	load = cfs_rq->load.weight + weight_delta;
-
-	load_weight = atomic_read(&tg->load_weight);
-	load_weight -= cfs_rq->load_contribution;
-	load_weight += load;
-
-	shares = (tg->shares * load);
-	if (load_weight)
-		shares /= load_weight;
-
-	if (shares < MIN_SHARES)
-		shares = MIN_SHARES;
-	if (shares > tg->shares)
-		shares = tg->shares;
+#ifndef CONFIG_SMP
+	if (likely(se->load.weight == tg->shares))
+		return;
+#endif
+	shares = calc_cfs_shares(cfs_rq, tg, weight_delta);
 
 	reweight_entity(cfs_rq_of(se), se, shares);
 }
-
-static void update_entity_shares_tick(struct cfs_rq *cfs_rq)
-{
-	if (cfs_rq->load_unacc_exec_time > sysctl_sched_shares_window) {
-		update_cfs_load(cfs_rq, 0);
-		update_cfs_shares(cfs_rq, 0);
-	}
-}
 #else /* CONFIG_FAIR_GROUP_SCHED */
 static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
 {

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [GIT PULL] scheduler fix
  2010-04-08 18:36       ` Linus Torvalds
@ 2010-04-08 18:52         ` Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2010-04-08 18:52 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andreas Schwab, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Thomas Gleixner, Andrew Morton, Anton Blanchard


* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> 
> 
> On Thu, 8 Apr 2010, Ingo Molnar wrote:
> > 
> > So i'd suggest changing nr_cpu_ids to unsigned int [unless there's some strong 
> > reason to have it signed] plus doing something like:
> > 
> > 	if (len < (nr_cpu_ids >> BITS_PER_BYTE_BITS))
> 
> No workee.
> 
> It really should round up.

Indeed.

> If you worry about code generation, I'd suggest looking at whether 
> nr_cpu_ids could just be made unsigned.
> 
> Anyway, this all was _not_ the point of my original email. I really don't 
> care about this particular instance. I care more about the whole "in general 
> people should think _way_ more about validating user-supplied arguments than 
> clearly happened this time".

Yeah, no argument about that, point taken and accepted.

	Ingo

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [GIT PULL] scheduler fix
  2010-04-08 18:26     ` Ingo Molnar
@ 2010-04-08 18:36       ` Linus Torvalds
  2010-04-08 18:52         ` Ingo Molnar
  0 siblings, 1 reply; 69+ messages in thread
From: Linus Torvalds @ 2010-04-08 18:36 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Andreas Schwab, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Thomas Gleixner, Andrew Morton, Anton Blanchard



On Thu, 8 Apr 2010, Ingo Molnar wrote:
> 
> So i'd suggest changing nr_cpu_ids to unsigned int [unless there's some strong 
> reason to have it signed] plus doing something like:
> 
> 	if (len < (nr_cpu_ids >> BITS_PER_BYTE_BITS))

No workee.

It really should round up.

If you worry about code generation, I'd suggest looking at whether 
nr_cpu_ids could just be made unsigned.

Anyway, this all was _not_ the point of my original email. I really don't 
care about this particular instance. I care more about the whole "in 
general people should think _way_ more about validating user-supplied 
arguments than clearly happened this time".

			Linus

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [GIT PULL] scheduler fix
  2010-04-08 16:03   ` Andreas Schwab
@ 2010-04-08 18:26     ` Ingo Molnar
  2010-04-08 18:36       ` Linus Torvalds
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2010-04-08 18:26 UTC (permalink / raw)
  To: Andreas Schwab
  Cc: Linus Torvalds, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Thomas Gleixner, Andrew Morton, Anton Blanchard


* Andreas Schwab <schwab@redhat.com> wrote:

> Linus Torvalds <torvalds@linux-foundation.org> writes:
> 
> > On Thu, 8 Apr 2010, Ingo Molnar wrote:
> >>  
> >> -	if (len < nr_cpu_ids)
> >> +	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
> >>  		return -EINVAL;
> >
> > Not that it really matters, but this will now fail for no good reason if 
> > you pass it a half-gigabyte area due to overflow.
> 
> Which can easily be avoided.
> 
> 	if (len < DIV_ROUND_UP(nr_cpu_ids, BITS_PER_BYTE))

nr_cpu_ids is a signed integer, which turns the DIV_ROUND_UP into a somewhat 
suboptimal instruction sequence. (I haven't checked it, though.)

So i'd suggest changing nr_cpu_ids to unsigned int [unless there's some strong 
reason to have it signed] plus doing something like:

	if (len < (nr_cpu_ids >> BITS_PER_BYTE_BITS))

ought to both result in better code and be more readable. We'd have to 
add:

  #define BITS_PER_BYTE_BITS 3

to linux/bitops.h.

	Ingo
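
A worked example of why the shift variant fails on small boxes, and why
it "really should round up" (stand-alone, with an assumed nr_cpu_ids of 4):

	#include <stdio.h>

	#define BITS_PER_BYTE		8
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		unsigned int nr_cpu_ids = 4;	/* e.g. a small quad-core box */
		unsigned int len = 0;		/* a zero-byte user buffer    */

		/* rounding up correctly demands at least one byte: */
		printf("round-up rejects len=0: %d\n",
		       len < DIV_ROUND_UP(nr_cpu_ids, BITS_PER_BYTE));	/* 1 */

		/* shifting down truncates 4/8 to 0, so len=0 slips through: */
		printf("shift rejects len=0:    %d\n",
		       len < (nr_cpu_ids >> 3));			/* 0 */
		return 0;
	}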

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [GIT PULL] scheduler fix
  2010-04-08 15:42 ` Linus Torvalds
@ 2010-04-08 16:03   ` Andreas Schwab
  2010-04-08 18:26     ` Ingo Molnar
  0 siblings, 1 reply; 69+ messages in thread
From: Andreas Schwab @ 2010-04-08 16:03 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Thomas Gleixner, Andrew Morton

Linus Torvalds <torvalds@linux-foundation.org> writes:

> On Thu, 8 Apr 2010, Ingo Molnar wrote:
>>  
>> -	if (len < nr_cpu_ids)
>> +	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
>>  		return -EINVAL;
>
> Not that it really matters, but this will now fail for no good reason if 
> you pass it a half-gigabyte area due to overflow.

Which can easily be avoided.

	if (len < DIV_ROUND_UP(nr_cpu_ids, BITS_PER_BYTE))

Andreas.

-- 
Andreas Schwab, schwab@redhat.com
GPG Key fingerprint = D4E8 DBE3 3813 BB5D FA84  5EC7 45C6 250E 6F00 984E
"And now for something completely different."

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [GIT PULL] scheduler fix
  2010-04-08 15:38 Ingo Molnar
@ 2010-04-08 15:42 ` Linus Torvalds
  2010-04-08 16:03   ` Andreas Schwab
  0 siblings, 1 reply; 69+ messages in thread
From: Linus Torvalds @ 2010-04-08 15:42 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Thomas Gleixner,
	Andrew Morton



On Thu, 8 Apr 2010, Ingo Molnar wrote:
>  
> -	if (len < nr_cpu_ids)
> +	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
>  		return -EINVAL;

Not that it really matters, but this will now fail for no good reason if 
you pass it a half-gigabyte area due to overflow.

Of course, if you pass it a half gig memory array, you're a f*cking moron 
to begin with, so I don't think anybody really _cares_. But in general, 
when checking system call arguments, I'd like people to think about 
overflow issues more.

In this case it doesn't matter, and overflow just makes the test more 
conservative than it needs to be, but when it _does_ matter it often ends 
up being a security issue.

		Linus
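
A worked example of the overflow (stand-alone; assumes the usual 32-bit
unsigned int):

	#include <stdio.h>

	#define BITS_PER_BYTE	8

	int main(void)
	{
		unsigned int len = 0x20000000;	/* a half-gigabyte buffer */
		unsigned int nr_cpu_ids = 8;	/* an assumed CPU count   */

		/*
		 * 0x20000000 * 8 == 2^32, which wraps to 0 in 32 bits, so
		 * the oversized-but-valid buffer is spuriously rejected:
		 */
		if ((len * BITS_PER_BYTE) < nr_cpu_ids)
			printf("spurious -EINVAL\n");
		return 0;
	}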

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2010-04-08 15:38 Ingo Molnar
  2010-04-08 15:42 ` Linus Torvalds
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2010-04-08 15:38 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Thomas Gleixner,
	Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Anton Blanchard (1):
      sched: Fix sched_getaffinity()


 kernel/sched.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 528a105..eaf5c73 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4902,7 +4902,7 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
 	int ret;
 	cpumask_var_t mask;
 
-	if (len < nr_cpu_ids)
+	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
 		return -EINVAL;
 	if (len & (sizeof(unsigned long)-1))
 		return -EINVAL;

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2009-12-23 16:03 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2009-12-23 16:03 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Peter Zijlstra, Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      sched: Revert 738d2be, simplify set_task_cpu()


 kernel/sched.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 87f1f47..c535cc4 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2045,11 +2045,10 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 
 	trace_sched_migrate_task(p, new_cpu);
 
-	if (task_cpu(p) == new_cpu)
-		return;
-
-	p->se.nr_migrations++;
-	perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 1, NULL, 0);
+	if (task_cpu(p) != new_cpu) {
+		p->se.nr_migrations++;
+		perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 1, NULL, 0);
+	}
 
 	__set_task_cpu(p, new_cpu);
 }

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2009-10-08 19:01 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2009-10-08 19:01 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Peter Zijlstra

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Peter Williams (1):
      sched: Set correct normal_prio and prio values in sched_fork()


 kernel/sched.c |   20 +++++++++-----------
 1 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 1535f38..76c0e96 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2515,22 +2515,17 @@ void sched_fork(struct task_struct *p, int clone_flags)
 	__sched_fork(p);
 
 	/*
-	 * Make sure we do not leak PI boosting priority to the child.
-	 */
-	p->prio = current->normal_prio;
-
-	/*
 	 * Revert to default priority/policy on fork if requested.
 	 */
 	if (unlikely(p->sched_reset_on_fork)) {
-		if (p->policy == SCHED_FIFO || p->policy == SCHED_RR)
+		if (p->policy == SCHED_FIFO || p->policy == SCHED_RR) {
 			p->policy = SCHED_NORMAL;
-
-		if (p->normal_prio < DEFAULT_PRIO)
-			p->prio = DEFAULT_PRIO;
+			p->normal_prio = p->static_prio;
+		}
 
 		if (PRIO_TO_NICE(p->static_prio) < 0) {
 			p->static_prio = NICE_TO_PRIO(0);
+			p->normal_prio = p->static_prio;
 			set_load_weight(p);
 		}
 
@@ -2541,6 +2536,11 @@ void sched_fork(struct task_struct *p, int clone_flags)
 		p->sched_reset_on_fork = 0;
 	}
 
+	/*
+	 * Make sure we do not leak PI boosting priority to the child.
+	 */
+	p->prio = current->normal_prio;
+
 	if (!rt_prio(p->prio))
 		p->sched_class = &fair_sched_class;
 
@@ -2581,8 +2581,6 @@ void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
 	BUG_ON(p->state != TASK_RUNNING);
 	update_rq_clock(rq);
 
-	p->prio = effective_prio(p);
-
 	if (!p->sched_class->task_new || !current->se.on_rq) {
 		activate_task(rq, p, 0);
 	} else {

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [GIT PULL] scheduler fix
@ 2009-05-05  9:35 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2009-05-05  9:35 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Peter Zijlstra, Andrew Morton

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Eric Dumazet (1):
      sched: account system time properly


 kernel/sched.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index b902e58..26efa47 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4732,7 +4732,7 @@ void account_process_tick(struct task_struct *p, int user_tick)
 
 	if (user_tick)
 		account_user_time(p, one_jiffy, one_jiffy_scaled);
-	else if (p != rq->idle)
+	else if ((p != rq->idle) || (irq_count() != HARDIRQ_OFFSET))
 		account_system_time(p, HARDIRQ_OFFSET, one_jiffy,
 				    one_jiffy_scaled);
 	else
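
An aside on the new condition (reasoning reconstructed from the fix;
tick-interrupt context assumed):

	/*
	 * account_process_tick() runs from the timer interrupt, so
	 * irq_count() == HARDIRQ_OFFSET means only the tick itself is on
	 * the stack.  irq_count() != HARDIRQ_OFFSET means the tick
	 * interrupted another irq or softirq: even when p is the idle
	 * task, that nested interrupt time is real system time and must
	 * not be accounted as idle.
	 */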

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
@ 2009-02-17 16:40 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2009-02-17 16:40 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton, Peter Zijlstra

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Ingo Molnar (1):
      sched: cpu hotplug fix


 kernel/sched.c |   15 ++++++++++++---
 1 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index c1d0ed3..410eec4 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6944,20 +6944,26 @@ static void free_rootdomain(struct root_domain *rd)
 
 static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 {
+	struct root_domain *old_rd = NULL;
 	unsigned long flags;
 
 	spin_lock_irqsave(&rq->lock, flags);
 
 	if (rq->rd) {
-		struct root_domain *old_rd = rq->rd;
+		old_rd = rq->rd;
 
 		if (cpumask_test_cpu(rq->cpu, old_rd->online))
 			set_rq_offline(rq);
 
 		cpumask_clear_cpu(rq->cpu, old_rd->span);
 
-		if (atomic_dec_and_test(&old_rd->refcount))
-			free_rootdomain(old_rd);
+		/*
+		 * If we don't want to free the old_rd yet then
+		 * set old_rd to NULL to skip the freeing later
+		 * in this function:
+		 */
+		if (!atomic_dec_and_test(&old_rd->refcount))
+			old_rd = NULL;
 	}
 
 	atomic_inc(&rd->refcount);
@@ -6968,6 +6974,9 @@ static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 		set_rq_online(rq);
 
 	spin_unlock_irqrestore(&rq->lock, flags);
+
+	if (old_rd)
+		free_rootdomain(old_rd);
 }
 
 static int __init_refok init_rootdomain(struct root_domain *rd, bool bootmem)
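
The shape of the fix as a stand-alone sketch (a userspace analogue, with
a pthread mutex standing in for rq->lock):

	#include <pthread.h>
	#include <stdlib.h>

	struct root_domain { int refcount; /* ... */ };

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	static void detach_root_domain(struct root_domain **rdp)
	{
		struct root_domain *old_rd;

		pthread_mutex_lock(&lock);
		old_rd = *rdp;
		*rdp = NULL;
		if (old_rd && --old_rd->refcount != 0)
			old_rd = NULL;	/* still referenced: keep it */
		pthread_mutex_unlock(&lock);

		/* safe: the lock is dropped, and free(NULL) is a no-op */
		free(old_rd);
	}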

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
@ 2009-02-04 19:18 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2009-02-04 19:18 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton, Peter Zijlstra

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Randy Dunlap (1):
      sched: add missing kernel-doc in sched.h


 include/linux/sched.h |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5a7c763..2127e95 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -443,6 +443,7 @@ struct pacct_struct {
  * @utime:		time spent in user mode, in &cputime_t units
  * @stime:		time spent in kernel mode, in &cputime_t units
  * @sum_exec_runtime:	total time spent on the CPU, in nanoseconds
+ * @lock:		lock for fields in this struct
  *
  * This structure groups together three kinds of CPU time that are
  * tracked for threads and thread groups.  Most things considering

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [git pull] scheduler fix
  2009-01-07 23:47 ` Linus Torvalds
@ 2009-01-08  7:50   ` Peter Zijlstra
  0 siblings, 0 replies; 69+ messages in thread
From: Peter Zijlstra @ 2009-01-08  7:50 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Ingo Molnar, linux-kernel, Andrew Morton

On Wed, 2009-01-07 at 15:47 -0800, Linus Torvalds wrote:
> 
> On Wed, 7 Jan 2009, Ingo Molnar wrote:
> > +		/*
> > +		 * Should not call ttwu while holding a rq->lock
> > +		 */
> > +		spin_unlock(&this_rq->lock);
> >  		if (active_balance)
> >  			wake_up_process(busiest->migration_thread);
> > +		spin_lock(&this_rq->lock);
> 
> Btw, this isn't the first time we've wanted to do a wakeup while 
> potentially locked.
> 
> Is there any way to perhaps do a "wake_up_gentle()" that doesn't need the 
> lock, and just basically does a potentially delayed wakeup by just 
> scheduling it asynchronously.
> 
> That would have solved all those nasty printk issues too. These kinds of 
> things don't need the strict "wake up NOW" behaviour - they are more of a 
> "kick the dang thing and make sure it wakes up in some timely manner".

Right -- so the printk thing was solved by polling some state from the
timer tick; we could make that into a list, but then you'd have to worry
about memory allocation for the list elements failing, etc.

Same story as the generic SMP function call: we could do this using a self
(or remote) IPI, but you'd still have the memory allocation issue -- and
we'd need to make self-IPI work on !SMP.

I'll ponder the issue a bit more, but I'm not directly seeing anything
(of course, if it were easy, we'd have done it ages ago ;-)
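
For concreteness, one shape the "kick" could take -- everything below is
hypothetical (the names, the this_cpu_* helpers, the single slot), a
sketch of the idea rather than a proposal; note that the single slot can
drop a second request, which is exactly where the list/allocation worry
above comes from:

	static DEFINE_PER_CPU(struct task_struct *, gentle_wakee);

	/* no rq->lock needed at the call site: just publish the request */
	void wake_up_gentle(struct task_struct *p)
	{
		this_cpu_write(gentle_wakee, p);
	}

	/* run from the timer tick, where no rq->lock is held */
	void gentle_wakeup_tick(void)
	{
		struct task_struct *p = this_cpu_read(gentle_wakee);

		if (p) {
			this_cpu_write(gentle_wakee, NULL);
			wake_up_process(p);
		}
	}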


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [git pull] scheduler fix
  2009-01-07 22:26 Ingo Molnar
@ 2009-01-07 23:47 ` Linus Torvalds
  2009-01-08  7:50   ` Peter Zijlstra
  0 siblings, 1 reply; 69+ messages in thread
From: Linus Torvalds @ 2009-01-07 23:47 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel, Andrew Morton, Peter Zijlstra



On Wed, 7 Jan 2009, Ingo Molnar wrote:
> +		/*
> +		 * Should not call ttwu while holding a rq->lock
> +		 */
> +		spin_unlock(&this_rq->lock);
>  		if (active_balance)
>  			wake_up_process(busiest->migration_thread);
> +		spin_lock(&this_rq->lock);

Btw, this isn't the first time we've wanted to do a wakeup while 
potentially locked.

Is there any way to perhaps do a "wake_up_gentle()" that doesn't need the 
lock, and just basically does a potentially delayed wakeup by just 
scheduling it asynchronously.

That would have solved all those nasty printk issues too. These kinds of 
things don't need the strict "wake up NOW" behaviour - they are more of a 
"kick the dang thing and make sure it wakes up in some timely manner".

		Linus

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
@ 2009-01-07 22:26 Ingo Molnar
  2009-01-07 23:47 ` Linus Torvalds
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2009-01-07 22:26 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton, Peter Zijlstra

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      sched: fix possible recursive rq->lock


 kernel/sched.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 2e3545f..deb5ac8 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3728,8 +3728,13 @@ redo:
 		}
 
 		double_unlock_balance(this_rq, busiest);
+		/*
+		 * Should not call ttwu while holding a rq->lock
+		 */
+		spin_unlock(&this_rq->lock);
 		if (active_balance)
 			wake_up_process(busiest->migration_thread);
+		spin_lock(&this_rq->lock);
 
 	} else
 		sd->nr_balance_failed = 0;

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
@ 2008-12-04 19:41 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2008-12-04 19:41 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton, Peter Zijlstra

Linus,

Please pull the latest sched-fixes-for-linus git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-fixes-for-linus

 Thanks,

	Ingo

------------------>
Mahesh Salgaonkar (1):
      sched: don't export sched_mc_power_savings in laptops


 arch/x86/include/asm/topology.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 4850e4b..ff386ff 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -239,7 +239,7 @@ struct pci_bus;
 void set_pci_bus_resources_arch_default(struct pci_bus *b);
 
 #ifdef CONFIG_SMP
-#define mc_capable()			(boot_cpu_data.x86_max_cores > 1)
+#define mc_capable()	(cpus_weight(per_cpu(cpu_core_map, 0)) != nr_cpu_ids)
 #define smt_capable()			(smp_num_siblings > 1)
 #endif
 

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
@ 2008-04-14 15:07 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2008-04-14 15:07 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Peter Zijlstra


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git for-linus

it contains a single revert of an interactivity commit that has been 
tracked down as the cause of audio skipping for some folks. We'll delay 
that change to v2.6.26 instead. Thanks,

	Ingo

------------------>
Ingo Molnar (1):
      revert "sched: fix fair sleepers"

 kernel/sched_fair.c |    6 ++----
 1 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 86a9337..0080968 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -510,10 +510,8 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 
 	if (!initial) {
 		/* sleeps upto a single latency don't count. */
-		if (sched_feat(NEW_FAIR_SLEEPERS)) {
-			vruntime -= calc_delta_fair(sysctl_sched_latency,
-						    &cfs_rq->load);
-		}
+		if (sched_feat(NEW_FAIR_SLEEPERS))
+			vruntime -= sysctl_sched_latency;
 
 		/* ensure we never gain time by being placed backwards. */
 		vruntime = max_vruntime(se->vruntime, vruntime);

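The restored placement logic, as a standalone sketch (simplified from
place_entity(); the vruntime values in main() are hypothetical):

#include <stdio.h>

typedef unsigned long long u64;

static u64 max_vruntime(u64 a, u64 b)
{
	return a > b ? a : b;
}

/*
 * Sleepers get up to one full latency period of vruntime credit,
 * subtracted un-scaled; the reverted commit scaled this credit by
 * the runqueue load via calc_delta_fair().
 */
static u64 place_sleeper(u64 min_vruntime, u64 se_vruntime,
			 u64 sched_latency)
{
	u64 vruntime = min_vruntime - sched_latency;

	/* ensure we never gain time by being placed backwards */
	return max_vruntime(se_vruntime, vruntime);
}

int main(void)
{
	printf("%llu\n", place_sleeper(100000, 50000, 20000)); /* 80000 */
	return 0;
}
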
^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
@ 2008-01-22 10:33 Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2008-01-22 10:33 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler-fixes git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

it's a fix for a late-breaking bug: if the root user / admin sets the
new /sys/uids/*/cpu_share tunable too far below the default of 1024,
the kernel can crash or hang. [ sched-devel.git has had
MIN_GROUP_SHARES enforcing this limit for a long time - but it was
not backported. ]

	Ingo

------------------>
Ingo Molnar (1):
      sched: group scheduler, set uid share fix

 sched.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched.c b/kernel/sched.c
index 37cf07a..e76b11c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7153,6 +7153,14 @@ int sched_group_set_shares(struct task_group *tg, unsigned long shares)
 {
 	int i;
 
+	/*
+	 * A weight of 0 or 1 can cause arithmetics problems.
+	 * (The default weight is 1024 - so there's no practical
+	 *  limitation from this.)
+	 */
+	if (shares < 2)
+		shares = 2;
+
 	spin_lock(&tg->lock);
 	if (tg->shares == shares)
 		goto done;

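A sketch of the arithmetic hazard the clamp guards against (the
NICE_0_LOAD scaling below is a simplification of the fair-scheduler
weight math; the numbers are illustrative):

#include <stdio.h>

#define NICE_0_LOAD	1024ULL
#define MIN_SHARES	2ULL

/* the fix: never let a group's weight drop below 2 */
static unsigned long long set_shares(unsigned long long shares)
{
	if (shares < MIN_SHARES)
		shares = MIN_SHARES;
	return shares;
}

/* runtime deltas are scaled by NICE_0_LOAD / weight: a weight of 0
 * divides by zero, and a weight of 1 stretches deltas far enough to
 * risk overflow in the kernel's fixed-point variant of this math */
static unsigned long long scale_delta(unsigned long long delta,
				      unsigned long long weight)
{
	return delta * NICE_0_LOAD / weight;
}

int main(void)
{
	unsigned long long w = set_shares(0);	/* clamped to 2 */

	printf("weight=%llu scaled=%llu\n", w, scale_delta(1000000, w));
	return 0;
}
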
^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [git pull] scheduler fix
  2007-10-30 10:15   ` Guillaume Chazarain
@ 2007-11-01  8:39     ` Ingo Molnar
  0 siblings, 0 replies; 69+ messages in thread
From: Ingo Molnar @ 2007-11-01  8:39 UTC (permalink / raw)
  To: Guillaume Chazarain; +Cc: Linus Torvalds, linux-kernel


* Guillaume Chazarain <guichaz@yahoo.fr> wrote:

> 2007/10/30, Ingo Molnar <mingo@elte.hu>:
> >  fs/proc/array.c       |    3 ++-
> >  include/linux/sched.h |    2 +-
> >  kernel/fork.c         |    1 +
> >  3 files changed, 4 insertions(+), 2 deletions(-)
> 
> Hello Ingo,
> 
> do you think it would be possible to include the patch in your git 
> pull request emails? Especially when the patch is small like this one.

yeah, will do that in the future.

	Ingo

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [git pull] scheduler fix
  2007-10-29 23:34 ` [git pull] scheduler fix Ingo Molnar
@ 2007-10-30 10:15   ` Guillaume Chazarain
  2007-11-01  8:39     ` Ingo Molnar
  0 siblings, 1 reply; 69+ messages in thread
From: Guillaume Chazarain @ 2007-10-30 10:15 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Linus Torvalds, linux-kernel

2007/10/30, Ingo Molnar <mingo@elte.hu>:
>  fs/proc/array.c       |    3 ++-
>  include/linux/sched.h |    2 +-
>  kernel/fork.c         |    1 +
>  3 files changed, 4 insertions(+), 2 deletions(-)

Hello Ingo,

do you think it would be possible to include the patch in your git
pull request emails? Especially when the patch is small like this one.

Jeff showed a suitable script in
http://www.uwsg.iu.edu/hypermail/linux/kernel/0710.1/2218.html

Thanks.

-- 
Guillaume

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [git pull] scheduler fix
  2007-10-29 20:39 [git pull] scheduler fixes Ingo Molnar
@ 2007-10-29 23:34 ` Ingo Molnar
  2007-10-30 10:15   ` Guillaume Chazarain
  0 siblings, 1 reply; 69+ messages in thread
From: Ingo Molnar @ 2007-10-29 23:34 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel


Linus, this is a followup git pull request for a single fix:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

Frans Pop has tested a fix from Balbir Singh that (finally) resolves the 
procps CPU accounting bug. The fix you pulled earlier today was correct,
but it solved only half of the problem.

	Ingo

------------------>
Balbir Singh (1):
      sched: fix /proc/<PID>/stat stime/utime monotonicity, part 2

 fs/proc/array.c       |    3 ++-
 include/linux/sched.h |    2 +-
 kernel/fork.c         |    1 +
 3 files changed, 4 insertions(+), 2 deletions(-)

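The patch body is not quoted here, but the diffstat hints at the shape
of the fix: new per-task state that remembers the last reported values.
One way to enforce such monotonicity, sketched with hypothetical names:

#include <stdio.h>

typedef unsigned long long cputime_t;

struct task_sketch {
	cputime_t prev_utime;	/* last value reported via /proc */
};

/*
 * Report the maximum of the fresh sample and the last reported
 * value, so utime/stime as seen by procps can never run backwards.
 */
static cputime_t task_utime(struct task_sketch *t, cputime_t sample)
{
	if (sample < t->prev_utime)
		sample = t->prev_utime;
	t->prev_utime = sample;
	return sample;
}

int main(void)
{
	struct task_sketch t = { 0 };

	printf("%llu\n", task_utime(&t, 100));	/* 100 */
	printf("%llu\n", task_utime(&t, 90));	/* still 100 */
	printf("%llu\n", task_utime(&t, 150));	/* 150 */
	return 0;
}
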
^ permalink raw reply	[flat|nested] 69+ messages in thread

end of thread, other threads:[~2023-10-01 17:08 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-24  7:06 [GIT PULL] scheduler fix Ingo Molnar
2021-06-24 16:34 ` pr-tracker-bot
  -- strict thread matches above, loose matches on Subject: below --
2023-10-01  8:43 Ingo Molnar
2023-10-01 17:08 ` pr-tracker-bot
2023-09-22 10:26 Ingo Molnar
2023-09-22 20:19 ` pr-tracker-bot
2020-12-27  9:16 Ingo Molnar
2020-12-27 17:27 ` pr-tracker-bot
2020-03-02  7:51 Ingo Molnar
2020-03-03 23:35 ` pr-tracker-bot
2019-12-17 11:54 Ingo Molnar
2019-12-17 19:20 ` pr-tracker-bot
2019-07-14 10:19 Ingo Molnar
2019-07-14 18:45 ` pr-tracker-bot
2019-05-05 11:02 Ingo Molnar
2019-05-05 22:10 ` pr-tracker-bot
2019-04-27 14:39 Ingo Molnar
2019-04-27 18:45 ` pr-tracker-bot
2019-04-12 13:08 Ingo Molnar
2019-04-13  4:05 ` pr-tracker-bot
2018-12-31 14:58 Ingo Molnar
2018-12-31 18:05 ` pr-tracker-bot
2018-11-17 10:57 Ingo Molnar
2018-11-18 20:05 ` pr-tracker-bot
2018-10-11  9:12 Ingo Molnar
2018-10-11 12:32 ` Greg Kroah-Hartman
2018-10-11  9:02 Ingo Molnar
2018-01-17 15:34 Ingo Molnar
2017-10-27 19:16 Ingo Molnar
2016-12-07 18:48 Ingo Molnar
2016-10-28  8:35 Ingo Molnar
2016-10-19 15:52 Ingo Molnar
2016-10-18 11:17 Ingo Molnar
2016-09-13 18:17 Ingo Molnar
2016-07-14 18:56 Ingo Molnar
2016-05-13 18:54 Ingo Molnar
2016-05-06 11:31 Ingo Molnar
2015-07-18  2:56 Ingo Molnar
2015-03-28 13:45 Ingo Molnar
2014-01-15 18:19 Ingo Molnar
2013-09-28 18:08 Ingo Molnar
2013-09-12 12:58 Ingo Molnar
2012-05-17  8:46 Ingo Molnar
2012-03-02 10:57 Ingo Molnar
2012-02-27 10:29 Ingo Molnar
2011-04-07 17:38 Ingo Molnar
2011-03-18 13:52 Ingo Molnar
2011-03-10  8:01 Ingo Molnar
2011-01-24 13:07 Ingo Molnar
2010-04-08 15:38 Ingo Molnar
2010-04-08 15:42 ` Linus Torvalds
2010-04-08 16:03   ` Andreas Schwab
2010-04-08 18:26     ` Ingo Molnar
2010-04-08 18:36       ` Linus Torvalds
2010-04-08 18:52         ` Ingo Molnar
2009-12-23 16:03 Ingo Molnar
2009-10-08 19:01 Ingo Molnar
2009-05-05  9:35 Ingo Molnar
2009-02-17 16:40 [git pull] " Ingo Molnar
2009-02-04 19:18 Ingo Molnar
2009-01-07 22:26 Ingo Molnar
2009-01-07 23:47 ` Linus Torvalds
2009-01-08  7:50   ` Peter Zijlstra
2008-12-04 19:41 Ingo Molnar
2008-04-14 15:07 Ingo Molnar
2008-01-22 10:33 Ingo Molnar
2007-10-29 20:39 [git pull] scheduler fixes Ingo Molnar
2007-10-29 23:34 ` [git pull] scheduler fix Ingo Molnar
2007-10-30 10:15   ` Guillaume Chazarain
2007-11-01  8:39     ` Ingo Molnar
