* [PATCH RT 0/8] Linux 4.19.106-rt45-rc1
@ 2020-03-06 18:40 Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 1/8] userfaultfd: Use a seqlock instead of seqcount Steven Rostedt
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat


Dear RT Folks,

This is the RT stable review cycle of patch 4.19.106-rt45-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this release candidate will be converted to the next
main release on 3/16/2020.

Enjoy,

-- Steve


To build 4.19.106-rt45-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.19.tar.xz

  http://www.kernel.org/pub/linux/kernel/v4.x/patch-4.19.106.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/4.19/patch-4.19.106-rt45-rc1.patch.xz

You can also build from 4.19.106-rt44 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/4.19/incr/patch-4.19.106-rt44-rt45-rc1.patch.xz


Changes from 4.19.106-rt44:

---


Matt Fleming (1):
      mm/memcontrol: Move misplaced local_unlock_irqrestore()

Scott Wood (2):
      sched: migrate_enable: Use per-cpu cpu_stop_work
      sched: migrate_enable: Remove __schedule() call

Sebastian Andrzej Siewior (4):
      userfaultfd: Use a seqlock instead of seqcount
      locallock: Include header for the `current' macro
      drm/vmwgfx: Drop preempt_disable() in vmw_fifo_ping_host()
      tracing: make preempt_lazy and migrate_disable counter smaller

Steven Rostedt (VMware) (1):
      Linux 4.19.106-rt45-rc1

----
 drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c |  2 --
 fs/userfaultfd.c                     | 12 ++++++------
 include/linux/locallock.h            |  1 +
 include/linux/trace_events.h         |  3 +--
 kernel/sched/core.c                  | 23 ++++++++++++++---------
 kernel/trace/trace_events.c          |  4 ++--
 localversion-rt                      |  2 +-
 mm/memcontrol.c                      |  2 +-
 8 files changed, 26 insertions(+), 23 deletions(-)


* [PATCH RT 1/8] userfaultfd: Use a seqlock instead of seqcount
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 2/8] sched: migrate_enable: Use per-cpu cpu_stop_work Steven Rostedt
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat, stable-rt

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit dc952a564d02997330654be9628bbe97ba2a05d3 ]

On RT, write_seqcount_begin() disables preemption, which leads to a
warning in add_wait_queue() when the spinlock_t is acquired.
The waitqueue can't be converted to a swait_queue because
userfaultfd_wake_function() is used as a custom wake function.

Use a seqlock instead of a seqcount to avoid the preempt_disable()
section during add_wait_queue().
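
For illustration, a minimal before/after sketch of the write side (an
assumed, simplified fragment using the names from the diff below, not
the verbatim fs/userfaultfd.c code):

	/* Before: on RT, write_seqcount_begin() enters a
	 * preempt-disabled section, and the spinlock_t taken inside
	 * add_wait_queue() is a sleeping lock there, triggering a
	 * "sleeping function called from invalid context" warning.
	 */
	write_seqcount_begin(&ctx->refile_seq);
	add_wait_queue(&ctx->fault_wqh, &uwq->wq);
	write_seqcount_end(&ctx->refile_seq);

	/* After: the seqlock writer serializes on the seqlock's own
	 * spinlock_t, which stays preemptible on RT, so calling
	 * add_wait_queue() inside the write section is legal.
	 */
	write_seqlock(&ctx->refile_seq);
	add_wait_queue(&ctx->fault_wqh, &uwq->wq);
	write_sequnlock(&ctx->refile_seq);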

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 fs/userfaultfd.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index d269d1139f7f..ff6be687f68e 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -61,7 +61,7 @@ struct userfaultfd_ctx {
 	/* waitqueue head for events */
 	wait_queue_head_t event_wqh;
 	/* a refile sequence protected by fault_pending_wqh lock */
-	struct seqcount refile_seq;
+	seqlock_t refile_seq;
 	/* pseudo fd refcounting */
 	atomic_t refcount;
 	/* userfaultfd syscall flags */
@@ -1064,7 +1064,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 			 * waitqueue could become empty if this is the
 			 * only userfault.
 			 */
-			write_seqcount_begin(&ctx->refile_seq);
+			write_seqlock(&ctx->refile_seq);
 
 			/*
 			 * The fault_pending_wqh.lock prevents the uwq
@@ -1090,7 +1090,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 			list_del(&uwq->wq.entry);
 			add_wait_queue(&ctx->fault_wqh, &uwq->wq);
 
-			write_seqcount_end(&ctx->refile_seq);
+			write_sequnlock(&ctx->refile_seq);
 
 			/* careful to always initialize msg if ret == 0 */
 			*msg = uwq->msg;
@@ -1263,11 +1263,11 @@ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx,
 	 * sure we've userfaults to wake.
 	 */
 	do {
-		seq = read_seqcount_begin(&ctx->refile_seq);
+		seq = read_seqbegin(&ctx->refile_seq);
 		need_wakeup = waitqueue_active(&ctx->fault_pending_wqh) ||
 			waitqueue_active(&ctx->fault_wqh);
 		cond_resched();
-	} while (read_seqcount_retry(&ctx->refile_seq, seq));
+	} while (read_seqretry(&ctx->refile_seq, seq));
 	if (need_wakeup)
 		__wake_userfault(ctx, range);
 }
@@ -1938,7 +1938,7 @@ static void init_once_userfaultfd_ctx(void *mem)
 	init_waitqueue_head(&ctx->fault_wqh);
 	init_waitqueue_head(&ctx->event_wqh);
 	init_waitqueue_head(&ctx->fd_wqh);
-	seqcount_init(&ctx->refile_seq);
+	seqlock_init(&ctx->refile_seq);
 }
 
 SYSCALL_DEFINE1(userfaultfd, int, flags)
-- 
2.25.0




* [PATCH RT 2/8] sched: migrate_enable: Use per-cpu cpu_stop_work
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 1/8] userfaultfd: Use a seqlock instead of seqcount Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 3/8] sched: migrate_enable: Remove __schedule() call Steven Rostedt
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat, Scott Wood

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Scott Wood <swood@redhat.com>

[ Upstream commit 2dcd94b443c5dcbc20281666321b7f025f9cc85c ]

Commit e6c287b1512d ("sched: migrate_enable: Use stop_one_cpu_nowait()")
adds a busy wait to deal with an edge case where the migrated thread
can resume running on another CPU before the stopper has consumed
cpu_stop_work.  However, this is done with preemption disabled and can
potentially lead to deadlock.

While it is not guaranteed that the cpu_stop_work will be consumed before
the migrating thread resumes and exits the stack frame, it is guaranteed
that nothing other than the stopper can run on the old cpu between the
migrating thread scheduling out and the cpu_stop_work being consumed.
Thus, we can store cpu_stop_work in per-cpu data without it being
reused too early.
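
In code terms, the fix replaces the on-stack objects with static
per-CPU slots; a sketch of the pattern (the complete change is in the
diff below):

	/* One slot per CPU is sufficient: only the stopper can run on
	 * the old CPU until the work is consumed, so the slot cannot
	 * be reused even after this stack frame is gone.
	 */
	static DEFINE_PER_CPU(struct cpu_stop_work, migrate_work);
	static DEFINE_PER_CPU(struct migration_arg, migrate_arg);

	/* in migrate_enable(), instead of on-stack variables: */
	struct cpu_stop_work *work = this_cpu_ptr(&migrate_work);
	struct migration_arg *arg = this_cpu_ptr(&migrate_arg);

	arg->task = p;
	arg->done = false;
	stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop, arg, work);
	/* no busy wait on arg->done needed any more: the per-CPU
	 * storage outlives this stack frame. */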

Fixes: e6c287b1512d ("sched: migrate_enable: Use stop_one_cpu_nowait()")
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Scott Wood <swood@redhat.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/sched/core.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4616c086dd26..c4290fa5c0b6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7291,6 +7291,9 @@ static void migrate_disabled_sched(struct task_struct *p)
 	p->migrate_disable_scheduled = 1;
 }
 
+static DEFINE_PER_CPU(struct cpu_stop_work, migrate_work);
+static DEFINE_PER_CPU(struct migration_arg, migrate_arg);
+
 void migrate_enable(void)
 {
 	struct task_struct *p = current;
@@ -7329,23 +7332,26 @@ void migrate_enable(void)
 
 	WARN_ON(smp_processor_id() != cpu);
 	if (!is_cpu_allowed(p, cpu)) {
-		struct migration_arg arg = { .task = p };
-		struct cpu_stop_work work;
+		struct migration_arg __percpu *arg;
+		struct cpu_stop_work __percpu *work;
 		struct rq_flags rf;
 
+		work = this_cpu_ptr(&migrate_work);
+		arg = this_cpu_ptr(&migrate_arg);
+		WARN_ON_ONCE(!arg->done && !work->disabled && work->arg);
+
+		arg->task = p;
+		arg->done = false;
+
 		rq = task_rq_lock(p, &rf);
 		update_rq_clock(rq);
-		arg.dest_cpu = select_fallback_rq(cpu, p);
+		arg->dest_cpu = select_fallback_rq(cpu, p);
 		task_rq_unlock(rq, p, &rf);
 
 		stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
-				    &arg, &work);
+				    arg, work);
 		tlb_migrate_finish(p->mm);
 		__schedule(true);
-		if (!work.disabled) {
-			while (!arg.done)
-				cpu_relax();
-		}
 	}
 
 out:
-- 
2.25.0




* [PATCH RT 3/8] sched: migrate_enable: Remove __schedule() call
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 1/8] userfaultfd: Use a seqlock instead of seqcount Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 2/8] sched: migrate_enable: Use per-cpu cpu_stop_work Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 4/8] mm/memcontrol: Move misplaced local_unlock_irqrestore() Steven Rostedt
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat, Scott Wood

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Scott Wood <swood@redhat.com>

[ Upstream commit b8162e61e9a33bd1de6452eb838fbf50a93ddd9a ]

We can rely on preempt_enable() to schedule.  Besides simplifying the
code, this potentially permits sequences such as the following:

migrate_disable();
preempt_disable();
migrate_enable();
preempt_enable();

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Scott Wood <swood@redhat.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/sched/core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c4290fa5c0b6..02e51c74e0bf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7351,7 +7351,6 @@ void migrate_enable(void)
 		stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
 				    arg, work);
 		tlb_migrate_finish(p->mm);
-		__schedule(true);
 	}
 
 out:
-- 
2.25.0




* [PATCH RT 4/8] mm/memcontrol: Move misplaced local_unlock_irqrestore()
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2020-03-06 18:40 ` [PATCH RT 3/8] sched: migrate_enable: Remove __schedule() call Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 5/8] locallock: Include header for the `current' macro Steven Rostedt
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat, Matt Fleming

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Matt Fleming <matt@codeblueprint.co.uk>

[ Upstream commit 071a1d6a6e14d0dec240a8c67b425140d7f92f6a ]

The comment about local_lock_irqsave() mentions only the counters, and
css_put_many()'s callback just invokes a worker, so it is safe to move
the unlock after memcg_check_events(); css_put_many() can then be
invoked without the lock held.

Cc: Daniel Wagner <wagi@monom.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
[bigeasy: rewrote the patch description]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/memcontrol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 421ac74450f6..519528959eef 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6540,10 +6540,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
 				     -nr_entries);
 	memcg_check_events(memcg, page);
+	local_unlock_irqrestore(event_lock, flags);
 
 	if (!mem_cgroup_is_root(memcg))
 		css_put_many(&memcg->css, nr_entries);
-	local_unlock_irqrestore(event_lock, flags);
 }
 
 /**
-- 
2.25.0




* [PATCH RT 5/8] locallock: Include header for the `current' macro
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2020-03-06 18:40 ` [PATCH RT 4/8] mm/memcontrol: Move misplaced local_unlock_irqrestore() Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 6/8] drm/vmwgfx: Drop preempt_disable() in vmw_fifo_ping_host() Steven Rostedt
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit e693075a5fd852043fa8d2b0467e078d9e5cb782 ]

Include the header for the `current' macro so that
CONFIG_KERNEL_HEADER_TEST=y passes.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/locallock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 921eab83cd34..81c89d87723b 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -3,6 +3,7 @@
 
 #include <linux/percpu.h>
 #include <linux/spinlock.h>
+#include <asm/current.h>
 
 #ifdef CONFIG_PREEMPT_RT_BASE
 
-- 
2.25.0




* [PATCH RT 6/8] drm/vmwgfx: Drop preempt_disable() in vmw_fifo_ping_host()
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2020-03-06 18:40 ` [PATCH RT 5/8] locallock: Include header for the `current' macro Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 7/8] tracing: make preempt_lazy and migrate_disable counter smaller Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 8/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit b901491e7b9b7a676818d84e482b69be72fc142f ]

vmw_fifo_ping_host() disables preemption around a test and a register
write via vmw_write(). The write function acquires a spinlock_t-typed
lock, which is not allowed in a preempt_disable()ed section on
PREEMPT_RT. This has been reported in the kernel bugzilla (see the
link below).

It has been explained by Thomas Hellstrom that this preempt_disable()ed
section is not required for correctness.

Remove the preempt_disable() section.
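
The problematic pattern, reduced to a minimal assumed example:

	preempt_disable();
	spin_lock(&lock);	/* spinlock_t is a sleeping lock on
				 * PREEMPT_RT: "sleeping function called
				 * from invalid context" splat */
	spin_unlock(&lock);
	preempt_enable();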

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206591
Link: https://lkml.kernel.org/r/0b5e1c65d89951de993deab06d1d197b40fd67aa.camel@vmware.com
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
index d0fd147ef75f..fb5a3461bb8c 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
@@ -167,10 +167,8 @@ void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason)
 {
 	u32 *fifo_mem = dev_priv->mmio_virt;
 
-	preempt_disable();
 	if (cmpxchg(fifo_mem + SVGA_FIFO_BUSY, 0, 1) == 0)
 		vmw_write(dev_priv, SVGA_REG_SYNC, reason);
-	preempt_enable();
 }
 
 void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo)
-- 
2.25.0




* [PATCH RT 7/8] tracing: make preempt_lazy and migrate_disable counter smaller
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2020-03-06 18:40 ` [PATCH RT 6/8] drm/vmwgfx: Drop preempt_disable() in vmw_fifo_ping_host() Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  2020-03-06 18:40 ` [PATCH RT 8/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit dd430bf5ecb40f9a89679c85868826475d71de54 ]

The migrate_disable counter should not exceed 255, so it is enough to
store it in an 8-bit field.
With this change we can move the `preempt_lazy_count' member into the
resulting gap, so the whole struct shrinks by 4 bytes to 12 bytes in
total.
Remove the `padding' field; it is no longer needed.
Update the tracing fields in trace_define_common_fields() (it was
missing the preempt_lazy_count field).
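
For reference, the resulting layout, sketched with assumed offsets
from the usual C alignment rules (the leading `type' member is part of
struct trace_entry but outside the hunk below):

	/* before (16 bytes): */
	unsigned short	type;			/* offset  0 */
	unsigned char	flags;			/* offset  2 */
	unsigned char	preempt_count;		/* offset  3 */
	int		pid;			/* offset  4 */
	unsigned short	migrate_disable;	/* offset  8 */
	unsigned short	padding;		/* offset 10 */
	unsigned char	preempt_lazy_count;	/* offset 12, 3 bytes tail padding */

	/* after (12 bytes): */
	unsigned short	type;			/* offset  0 */
	unsigned char	flags;			/* offset  2 */
	unsigned char	preempt_count;		/* offset  3 */
	int		pid;			/* offset  4 */
	unsigned char	migrate_disable;	/* offset  8 */
	unsigned char	preempt_lazy_count;	/* offset  9, 2 bytes tail padding */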

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/trace_events.h | 3 +--
 kernel/trace/trace_events.c  | 4 ++--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 72864a11cec0..e26a85c1b7ba 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -62,8 +62,7 @@ struct trace_entry {
 	unsigned char		flags;
 	unsigned char		preempt_count;
 	int			pid;
-	unsigned short		migrate_disable;
-	unsigned short		padding;
+	unsigned char		migrate_disable;
 	unsigned char		preempt_lazy_count;
 };
 
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 1febb0ca4c81..07b8f5bfd263 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -188,8 +188,8 @@ static int trace_define_common_fields(void)
 	__common_field(unsigned char, flags);
 	__common_field(unsigned char, preempt_count);
 	__common_field(int, pid);
-	__common_field(unsigned short, migrate_disable);
-	__common_field(unsigned short, padding);
+	__common_field(unsigned char, migrate_disable);
+	__common_field(unsigned char, preempt_lazy_count);
 
 	return ret;
 }
-- 
2.25.0




* [PATCH RT 8/8] Linux 4.19.106-rt45-rc1
  2020-03-06 18:40 [PATCH RT 0/8] Linux 4.19.106-rt45-rc1 Steven Rostedt
                   ` (6 preceding siblings ...)
  2020-03-06 18:40 ` [PATCH RT 7/8] tracing: make preempt_lazy and migrate_disable counter smaller Steven Rostedt
@ 2020-03-06 18:40 ` Steven Rostedt
  7 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2020-03-06 18:40 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Julia Cartwright, Daniel Wagner, Tom Zanussi,
	Srivatsa S. Bhat

4.19.106-rt45-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index ac4d836a809d..e6421b58f4c8 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt44
+-rt45-rc1
-- 
2.25.0




* [PATCH RT 2/8] sched: migrate_enable: Use per-cpu cpu_stop_work
  2020-03-09 19:47 [PATCH RT 0/8] Linux v4.14.172-rt78-rc1 zanussi
@ 2020-03-09 19:47 ` zanussi
  0 siblings, 0 replies; 10+ messages in thread
From: zanussi @ 2020-03-09 19:47 UTC (permalink / raw)
  To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner,
	Carsten Emde, John Kacur, Sebastian Andrzej Siewior,
	Daniel Wagner, Tom Zanussi
  Cc: Scott Wood

From: Scott Wood <swood@redhat.com>

v4.14.172-rt78-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


[ Upstream commit 2dcd94b443c5dcbc20281666321b7f025f9cc85c ]

Commit e6c287b1512d ("sched: migrate_enable: Use stop_one_cpu_nowait()")
adds a busy wait to deal with an edge case where the migrated thread
can resume running on another CPU before the stopper has consumed
cpu_stop_work.  However, this is done with preemption disabled and can
potentially lead to deadlock.

While it is not guaranteed that the cpu_stop_work will be consumed before
the migrating thread resumes and exits the stack frame, it is guaranteed
that nothing other than the stopper can run on the old cpu between the
migrating thread scheduling out and the cpu_stop_work being consumed.
Thus, we can store cpu_stop_work in per-cpu data without it being
reused too early.

Fixes: e6c287b1512d ("sched: migrate_enable: Use stop_one_cpu_nowait()")
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Scott Wood <swood@redhat.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>

 Conflicts:
	kernel/sched/core.c
---
 kernel/sched/core.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f30bb249123b5..960daa6bc7f04 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6964,6 +6964,9 @@ static void migrate_disabled_sched(struct task_struct *p)
 	p->migrate_disable_scheduled = 1;
 }
 
+static DEFINE_PER_CPU(struct cpu_stop_work, migrate_work);
+static DEFINE_PER_CPU(struct migration_arg, migrate_arg);
+
 void migrate_enable(void)
 {
 	struct task_struct *p = current;
@@ -7002,23 +7005,26 @@ void migrate_enable(void)
 
 	WARN_ON(smp_processor_id() != cpu);
 	if (!is_cpu_allowed(p, cpu)) {
-		struct migration_arg arg = { .task = p };
-		struct cpu_stop_work work;
+		struct migration_arg __percpu *arg;
+		struct cpu_stop_work __percpu *work;
 		struct rq_flags rf;
 
+		work = this_cpu_ptr(&migrate_work);
+		arg = this_cpu_ptr(&migrate_arg);
+		WARN_ON_ONCE(!arg->done && !work->disabled && work->arg);
+
+		arg->task = p;
+		arg->done = false;
+
 		rq = task_rq_lock(p, &rf);
 		update_rq_clock(rq);
-		arg.dest_cpu = select_fallback_rq(cpu, p);
+		arg->dest_cpu = select_fallback_rq(cpu, p);
 		task_rq_unlock(rq, p, &rf);
 
 		stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
-				    &arg, &work);
+				    arg, work);
 		tlb_migrate_finish(p->mm);
 		__schedule(true);
-		if (!work.disabled) {
-			while (!arg.done)
-				cpu_relax();
-		}
 	}
 
 out:
-- 
2.14.1


