linux-kernel.vger.kernel.org archive mirror
* [PATCH RT 00/13] Linux 4.14.87-rt50-rc1
@ 2019-01-07 19:52 Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 01/13] work-simple: drop a shift statement in SWORK_EVENT_PENDING Steven Rostedt
                   ` (10 more replies)
  0 siblings, 11 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi


Dear RT Folks,

This is the RT stable review cycle of patch 4.14.87-rt50-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository, only the
final release is.

If all goes well, this patch will be converted to the next main release
on 01/09/2019.

Enjoy,

-- Steve


To build 4.14.87-rt50-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.14.tar.xz

  http://www.kernel.org/pub/linux/kernel/v4.x/patch-4.14.87.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/4.14/patch-4.14.87-rt50-rc1.patch.xz

You can also build from 4.14.87-rt49 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/4.14/incr/patch-4.14.87-rt49-rt50-rc1.patch.xz


Changes from 4.14.87-rt49:

---


Clark Williams (1):
      mm/kasan: make quarantine_lock a raw_spinlock_t

He Zhe (1):
      kmemleak: Turn kmemleak_lock to raw spinlock on RT

Julia Cartwright (1):
      kthread: convert worker lock to raw spinlock

Kurt Kanzenbach (1):
      tty: serial: pl011: explicitly initialize the flags variable

Lukas Wunner (1):
      pinctrl: bcm2835: Use raw spinlock for RT compatibility

Sebastian Andrzej Siewior (7):
      work-simple: drop a shift statement in SWORK_EVENT_PENDING
      sched/migrate_disable: Add export_symbol_gpl for __migrate_disabled
      rcu: make RCU_BOOST default on RT without EXPERT
      x86/fpu: Disable preemption around local_bh_disable()
      hrtimer: move state change before hrtimer_cancel in do_nanosleep()
      drm/i915: disable tracing on -RT
      x86/mm/pat: disable preemption __split_large_page() after spin_lock()

Steven Rostedt (VMware) (1):
      Linux 4.14.87-rt50-rc1

----
 arch/x86/kernel/fpu/signal.c          |  2 ++
 arch/x86/mm/pageattr.c                |  8 +++++++
 drivers/gpu/drm/i915/i915_trace.h     |  4 ++++
 drivers/pinctrl/bcm/pinctrl-bcm2835.c | 16 ++++++-------
 drivers/tty/serial/amba-pl011.c       |  2 +-
 include/linux/kthread.h               |  2 +-
 kernel/kthread.c                      | 42 +++++++++++++++++------------------
 kernel/rcu/Kconfig                    |  4 ++--
 kernel/sched/core.c                   |  1 +
 kernel/sched/swork.c                  |  2 +-
 kernel/time/hrtimer.c                 |  2 +-
 localversion-rt                       |  2 +-
 mm/kasan/quarantine.c                 | 18 +++++++--------
 mm/kmemleak.c                         | 20 ++++++++---------
 14 files changed, 70 insertions(+), 55 deletions(-)


* [PATCH RT 01/13] work-simple: drop a shift statement in SWORK_EVENT_PENDING
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
@ 2019-01-07 19:52 ` Steven Rostedt
  2019-01-08  3:06   ` Sergey Senozhatsky
  2019-01-07 19:52 ` [PATCH RT 02/13] kthread: convert worker lock to raw spinlock Steven Rostedt
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 22f41ebe5579cc847a7bb6c71916be92c8926216 ]

Dan Carpenter reported
| smatch warnings:
|kernel/sched/swork.c:63 swork_kthread() warn: test_bit() takes a bit number

This is not a bug because we shift by zero (and use the same value in
both places).
Nevertheless I'm dropping that shift by zero to keep smatch quiet.

Cc: Daniel Wagner <daniel.wagner@siemens.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/sched/swork.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/swork.c b/kernel/sched/swork.c
index 1950f40ca725..5559c22f664c 100644
--- a/kernel/sched/swork.c
+++ b/kernel/sched/swork.c
@@ -12,7 +12,7 @@
 #include <linux/spinlock.h>
 #include <linux/export.h>
 
-#define SWORK_EVENT_PENDING     (1 << 0)
+#define SWORK_EVENT_PENDING     1
 
 static DEFINE_MUTEX(worker_mutex);
 static struct sworker *glob_worker;
-- 
2.19.2




* [PATCH RT 02/13] kthread: convert worker lock to raw spinlock
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 01/13] work-simple: drop a shift statement in SWORK_EVENT_PENDING Steven Rostedt
@ 2019-01-07 19:52 ` Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 03/13] mm/kasan: make quarantine_lock a raw_spinlock_t Steven Rostedt
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Guenter Roeck, Tim Sander

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Julia Cartwright <julia@ni.com>

[ Upstream commit 5c8919eed1cfcad5da452047bd4ab088837afc41 ]

In order to enable the queuing of kthread work items from hardirq
context even when PREEMPT_RT_FULL is enabled, convert the worker
spin_lock to a raw_spin_lock.

This is only acceptable to do because the work performed under the lock
is well-bounded and minimal.

Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Guenter Roeck <linux@roeck-us.net>
Reported-and-tested-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Reported-by: Tim Sander <tim@krieglstein.org>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/kthread.h |  2 +-
 kernel/kthread.c        | 42 ++++++++++++++++++++---------------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 4e26609c77d4..4e0449df82c3 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -84,7 +84,7 @@ enum {
 
 struct kthread_worker {
 	unsigned int		flags;
-	spinlock_t		lock;
+	raw_spinlock_t		lock;
 	struct list_head	work_list;
 	struct list_head	delayed_work_list;
 	struct task_struct	*task;
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 4e6d85b63201..430fd79cd3fe 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -579,7 +579,7 @@ void __kthread_init_worker(struct kthread_worker *worker,
 				struct lock_class_key *key)
 {
 	memset(worker, 0, sizeof(struct kthread_worker));
-	spin_lock_init(&worker->lock);
+	raw_spin_lock_init(&worker->lock);
 	lockdep_set_class_and_name(&worker->lock, key, name);
 	INIT_LIST_HEAD(&worker->work_list);
 	INIT_LIST_HEAD(&worker->delayed_work_list);
@@ -621,21 +621,21 @@ int kthread_worker_fn(void *worker_ptr)
 
 	if (kthread_should_stop()) {
 		__set_current_state(TASK_RUNNING);
-		spin_lock_irq(&worker->lock);
+		raw_spin_lock_irq(&worker->lock);
 		worker->task = NULL;
-		spin_unlock_irq(&worker->lock);
+		raw_spin_unlock_irq(&worker->lock);
 		return 0;
 	}
 
 	work = NULL;
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	if (!list_empty(&worker->work_list)) {
 		work = list_first_entry(&worker->work_list,
 					struct kthread_work, node);
 		list_del_init(&work->node);
 	}
 	worker->current_work = work;
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (work) {
 		__set_current_state(TASK_RUNNING);
@@ -792,12 +792,12 @@ bool kthread_queue_work(struct kthread_worker *worker,
 	bool ret = false;
 	unsigned long flags;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	if (!queuing_blocked(worker, work)) {
 		kthread_insert_work(worker, work, &worker->work_list);
 		ret = true;
 	}
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_work);
@@ -824,7 +824,7 @@ void kthread_delayed_work_timer_fn(unsigned long __data)
 	if (WARN_ON_ONCE(!worker))
 		return;
 
-	spin_lock(&worker->lock);
+	raw_spin_lock(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -833,7 +833,7 @@ void kthread_delayed_work_timer_fn(unsigned long __data)
 	list_del_init(&work->node);
 	kthread_insert_work(worker, work, &worker->work_list);
 
-	spin_unlock(&worker->lock);
+	raw_spin_unlock(&worker->lock);
 }
 EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
 
@@ -890,14 +890,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker,
 	unsigned long flags;
 	bool ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	if (!queuing_blocked(worker, work)) {
 		__kthread_queue_delayed_work(worker, dwork, delay);
 		ret = true;
 	}
 
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@@ -933,7 +933,7 @@ void kthread_flush_work(struct kthread_work *work)
 	if (!worker)
 		return;
 
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -945,7 +945,7 @@ void kthread_flush_work(struct kthread_work *work)
 	else
 		noop = true;
 
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (!noop)
 		wait_for_completion(&fwork.done);
@@ -978,9 +978,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork,
 		 * any queuing is blocked by setting the canceling counter.
 		 */
 		work->canceling++;
-		spin_unlock_irqrestore(&worker->lock, *flags);
+		raw_spin_unlock_irqrestore(&worker->lock, *flags);
 		del_timer_sync(&dwork->timer);
-		spin_lock_irqsave(&worker->lock, *flags);
+		raw_spin_lock_irqsave(&worker->lock, *flags);
 		work->canceling--;
 	}
 
@@ -1027,7 +1027,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 	unsigned long flags;
 	int ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	/* Do not bother with canceling when never queued. */
 	if (!work->worker)
@@ -1044,7 +1044,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
 fast_queue:
 	__kthread_queue_delayed_work(worker, dwork, delay);
 out:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@@ -1058,7 +1058,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 	if (!worker)
 		goto out;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@@ -1072,13 +1072,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork)
 	 * In the meantime, block any queuing by setting the canceling counter.
 	 */
 	work->canceling++;
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	kthread_flush_work(work);
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	work->canceling--;
 
 out_fast:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 out:
 	return ret;
 }
-- 
2.19.2




* [PATCH RT 03/13] mm/kasan: make quarantine_lock a raw_spinlock_t
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 01/13] work-simple: drop a shift statement in SWORK_EVENT_PENDING Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 02/13] kthread: convert worker lock to raw spinlock Steven Rostedt
@ 2019-01-07 19:52 ` Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 05/13] sched/migrate_disable: Add export_symbol_gpl for __migrate_disabled Steven Rostedt
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Clark Williams

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Clark Williams <williams@redhat.com>

[ Upstream commit 089cb35faad5da57fa90399b230b3aee4920bb02 ]

The static lock quarantine_lock is used in quarantine.c to protect the
quarantine queue data structures. It is taken inside the quarantine queue
manipulation routines (quarantine_put(), quarantine_reduce() and
quarantine_remove_cache()), with IRQs disabled.
This is not a problem on a stock kernel but is problematic on an RT
kernel, where spinlocks are sleeping locks that cannot be acquired with
interrupts disabled.

Convert the quarantine_lock to a raw_spinlock_t. The usage of
quarantine_lock is confined to quarantine.c and the work performed while
the lock is held is limited.

Signed-off-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 mm/kasan/quarantine.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 3a8ddf8baf7d..b209dbaefde8 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -103,7 +103,7 @@ static int quarantine_head;
 static int quarantine_tail;
 /* Total size of all objects in global_quarantine across all batches. */
 static unsigned long quarantine_size;
-static DEFINE_SPINLOCK(quarantine_lock);
+static DEFINE_RAW_SPINLOCK(quarantine_lock);
 DEFINE_STATIC_SRCU(remove_cache_srcu);
 
 /* Maximum size of the global queue. */
@@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 	if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) {
 		qlist_move_all(q, &temp);
 
-		spin_lock(&quarantine_lock);
+		raw_spin_lock(&quarantine_lock);
 		WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes);
 		qlist_move_all(&temp, &global_quarantine[quarantine_tail]);
 		if (global_quarantine[quarantine_tail].bytes >=
@@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache)
 			if (new_tail != quarantine_head)
 				quarantine_tail = new_tail;
 		}
-		spin_unlock(&quarantine_lock);
+		raw_spin_unlock(&quarantine_lock);
 	}
 
 	local_irq_restore(flags);
@@ -230,7 +230,7 @@ void quarantine_reduce(void)
 	 * expected case).
 	 */
 	srcu_idx = srcu_read_lock(&remove_cache_srcu);
-	spin_lock_irqsave(&quarantine_lock, flags);
+	raw_spin_lock_irqsave(&quarantine_lock, flags);
 
 	/*
 	 * Update quarantine size in case of hotplug. Allocate a fraction of
@@ -254,7 +254,7 @@ void quarantine_reduce(void)
 			quarantine_head = 0;
 	}
 
-	spin_unlock_irqrestore(&quarantine_lock, flags);
+	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 
 	qlist_free_all(&to_free, NULL);
 	srcu_read_unlock(&remove_cache_srcu, srcu_idx);
@@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem_cache *cache)
 	 */
 	on_each_cpu(per_cpu_remove_cache, cache, 1);
 
-	spin_lock_irqsave(&quarantine_lock, flags);
+	raw_spin_lock_irqsave(&quarantine_lock, flags);
 	for (i = 0; i < QUARANTINE_BATCHES; i++) {
 		if (qlist_empty(&global_quarantine[i]))
 			continue;
 		qlist_move_cache(&global_quarantine[i], &to_free, cache);
 		/* Scanning whole quarantine can take a while. */
-		spin_unlock_irqrestore(&quarantine_lock, flags);
+		raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 		cond_resched();
-		spin_lock_irqsave(&quarantine_lock, flags);
+		raw_spin_lock_irqsave(&quarantine_lock, flags);
 	}
-	spin_unlock_irqrestore(&quarantine_lock, flags);
+	raw_spin_unlock_irqrestore(&quarantine_lock, flags);
 
 	qlist_free_all(&to_free, cache);
 
-- 
2.19.2




* [PATCH RT 05/13] sched/migrate_disable: Add export_symbol_gpl for __migrate_disabled
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2019-01-07 19:52 ` [PATCH RT 03/13] mm/kasan: make quarantine_lock a raw_spinlock_t Steven Rostedt
@ 2019-01-07 19:52 ` Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 06/13] pinctrl: bcm2835: Use raw spinlock for RT compatibility Steven Rostedt
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Jonathan Rajotte

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit c0f0dd3ced7abe307d8e89477dae2929e488ba6c ]

Jonathan reported that lttng/modules can't use __migrate_disabled().
This function is only used by sched/core itself and by the tracing
infrastructure to report the migrate counter (lttng probably does the
same). Since the migrate_disable() rework it moved from sched.h to
preempt.h and became an exported function instead of a "static
inline", due to the header recursion of preempt vs sched.

Since the compiler inlines the function for sched/core usage, add an
EXPORT_SYMBOL_GPL to allow module/LTTNG usage.

Reported-by: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/sched/core.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6ce950f24a7f..7c960cf07e7b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1112,6 +1112,7 @@ int __migrate_disabled(struct task_struct *p)
 {
 	return p->migrate_disable;
 }
+EXPORT_SYMBOL_GPL(__migrate_disabled);
 #endif
 
 static void __do_set_cpus_allowed_tail(struct task_struct *p,
-- 
2.19.2




* [PATCH RT 06/13] pinctrl: bcm2835: Use raw spinlock for RT compatibility
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2019-01-07 19:52 ` [PATCH RT 05/13] sched/migrate_disable: Add export_symbol_gpl for __migrate_disabled Steven Rostedt
@ 2019-01-07 19:52 ` Steven Rostedt
  2019-01-07 19:52 ` [PATCH RT 07/13] rcu: make RCU_BOOST default on RT without EXPERT Steven Rostedt
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Mathias Duckeck, Lukas Wunner, Linus Walleij

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Lukas Wunner <lukas@wunner.de>

[ Upstream commit 71dfaa749f2f7c1722ebf6716d3f797a04528cba ]

The BCM2835 pinctrl driver acquires a spinlock in its ->irq_enable,
->irq_disable and ->irq_set_type callbacks.  Spinlocks become sleeping
locks with CONFIG_PREEMPT_RT_FULL=y, therefore invocation of one of the
callbacks in atomic context may cause a hard lockup if at least two GPIO
pins in the same bank are used as interrupts.  The issue doesn't occur
with just a single interrupt pin per bank because the lock is never
contended.  I'm experiencing such lockups with GPIO 8 and 28 used as
level-triggered interrupts, i.e. with ->irq_disable being invoked on
reception of every IRQ.

The critical section protected by the spinlock is very small (one bitop
and one RMW of an MMIO register), hence converting to a raw spinlock
seems a better trade-off than converting the driver to threaded IRQ
handling (which would increase the latency of handling an interrupt).

Cc: Mathias Duckeck <m.duckeck@kunbus.de>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Acked-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/pinctrl/bcm/pinctrl-bcm2835.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
index ff782445dfb7..e72bf2502eca 100644
--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
@@ -92,7 +92,7 @@ struct bcm2835_pinctrl {
 	struct gpio_chip gpio_chip;
 	struct pinctrl_gpio_range gpio_range;
 
-	spinlock_t irq_lock[BCM2835_NUM_BANKS];
+	raw_spinlock_t irq_lock[BCM2835_NUM_BANKS];
 };
 
 /* pins are just named GPIO0..GPIO53 */
@@ -471,10 +471,10 @@ static void bcm2835_gpio_irq_enable(struct irq_data *data)
 	unsigned bank = GPIO_REG_OFFSET(gpio);
 	unsigned long flags;
 
-	spin_lock_irqsave(&pc->irq_lock[bank], flags);
+	raw_spin_lock_irqsave(&pc->irq_lock[bank], flags);
 	set_bit(offset, &pc->enabled_irq_map[bank]);
 	bcm2835_gpio_irq_config(pc, gpio, true);
-	spin_unlock_irqrestore(&pc->irq_lock[bank], flags);
+	raw_spin_unlock_irqrestore(&pc->irq_lock[bank], flags);
 }
 
 static void bcm2835_gpio_irq_disable(struct irq_data *data)
@@ -486,12 +486,12 @@ static void bcm2835_gpio_irq_disable(struct irq_data *data)
 	unsigned bank = GPIO_REG_OFFSET(gpio);
 	unsigned long flags;
 
-	spin_lock_irqsave(&pc->irq_lock[bank], flags);
+	raw_spin_lock_irqsave(&pc->irq_lock[bank], flags);
 	bcm2835_gpio_irq_config(pc, gpio, false);
 	/* Clear events that were latched prior to clearing event sources */
 	bcm2835_gpio_set_bit(pc, GPEDS0, gpio);
 	clear_bit(offset, &pc->enabled_irq_map[bank]);
-	spin_unlock_irqrestore(&pc->irq_lock[bank], flags);
+	raw_spin_unlock_irqrestore(&pc->irq_lock[bank], flags);
 }
 
 static int __bcm2835_gpio_irq_set_type_disabled(struct bcm2835_pinctrl *pc,
@@ -594,7 +594,7 @@ static int bcm2835_gpio_irq_set_type(struct irq_data *data, unsigned int type)
 	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(&pc->irq_lock[bank], flags);
+	raw_spin_lock_irqsave(&pc->irq_lock[bank], flags);
 
 	if (test_bit(offset, &pc->enabled_irq_map[bank]))
 		ret = __bcm2835_gpio_irq_set_type_enabled(pc, gpio, type);
@@ -606,7 +606,7 @@ static int bcm2835_gpio_irq_set_type(struct irq_data *data, unsigned int type)
 	else
 		irq_set_handler_locked(data, handle_level_irq);
 
-	spin_unlock_irqrestore(&pc->irq_lock[bank], flags);
+	raw_spin_unlock_irqrestore(&pc->irq_lock[bank], flags);
 
 	return ret;
 }
@@ -1021,7 +1021,7 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev)
 		for_each_set_bit(offset, &events, 32)
 			bcm2835_gpio_wr(pc, GPEDS0 + i * 4, BIT(offset));
 
-		spin_lock_init(&pc->irq_lock[i]);
+		raw_spin_lock_init(&pc->irq_lock[i]);
 	}
 
 	err = gpiochip_add_data(&pc->gpio_chip, pc);
-- 
2.19.2




* [PATCH RT 07/13] rcu: make RCU_BOOST default on RT without EXPERT
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2019-01-07 19:52 ` [PATCH RT 06/13] pinctrl: bcm2835: Use raw spinlock for RT compatibility Steven Rostedt
@ 2019-01-07 19:52 ` Steven Rostedt
  2019-01-07 19:53 ` [PATCH RT 08/13] x86/fpu: Disable preemption around local_bh_disable() Steven Rostedt
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:52 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 78cab7cb632b6a4c84e78e4f12bb9e83c09b8885 ]

Paul E. McKenney suggested allowing RCU_BOOST to be enabled on RT
without having to go through the EXPERT option first.

Suggested-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/rcu/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index 0be2c96fb640..a243a78ff38c 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -36,7 +36,7 @@ config TINY_RCU
 
 config RCU_EXPERT
 	bool "Make expert-level adjustments to RCU configuration"
-	default y if PREEMPT_RT_FULL
+	default n
 	help
 	  This option needs to be enabled if you wish to make
 	  expert-level adjustments to RCU configuration.  By default,
@@ -190,7 +190,7 @@ config RCU_FAST_NO_HZ
 
 config RCU_BOOST
 	bool "Enable RCU priority boosting"
-	depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
+	depends on (RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT) || PREEMPT_RT_FULL
 	default y if PREEMPT_RT_FULL
 	help
 	  This option boosts the priority of preempted RCU readers that
-- 
2.19.2




* [PATCH RT 08/13] x86/fpu: Disable preemption around local_bh_disable()
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2019-01-07 19:52 ` [PATCH RT 07/13] rcu: make RCU_BOOST default on RT without EXPERT Steven Rostedt
@ 2019-01-07 19:53 ` Steven Rostedt
  2019-01-07 19:53 ` [PATCH RT 09/13] hrtimer: move state change before hrtimer_cancel in do_nanosleep() Steven Rostedt
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:53 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit f70ac4a5ca5df1d84dae809453464eca16b54f51 ]

__fpu__restore_sig() restores the content of the FPU state in the CPU
and, in order to avoid concurrency, it disables BH. On !RT this also
disables preemption, but on RT we can get preempted in BH.

Add preempt_disable() while the FPU state is restored.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/x86/kernel/fpu/signal.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index d99a8ee9e185..5e0274a94133 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -344,10 +344,12 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
 		}
 
+		preempt_disable();
 		local_bh_disable();
 		fpu->initialized = 1;
 		fpu__restore(fpu);
 		local_bh_enable();
+		preempt_enable();
 
 		return err;
 	} else {
-- 
2.19.2




* [PATCH RT 09/13] hrtimer: move state change before hrtimer_cancel in do_nanosleep()
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (6 preceding siblings ...)
  2019-01-07 19:53 ` [PATCH RT 08/13] x86/fpu: Disable preemption around local_bh_disable() Steven Rostedt
@ 2019-01-07 19:53 ` Steven Rostedt
  2019-01-07 19:53 ` [PATCH RT 10/13] drm/i915: disable tracing on -RT Steven Rostedt
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:53 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, Daniel Bristot de Oliveira

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 8115ac730fd5aa27134f002cf710204b5dd7cd5e ]

There is a small window between setting t->task to NULL and waking the
task up (which would set TASK_RUNNING). So the timer could fire, run and
set ->task to NULL while the other side/do_nanosleep() wouldn't enter
freezable_schedule(). After all we are preemptible here (in
do_nanosleep() and on the timer wake-up path) and on KVM/virt the
virt-CPU might get preempted.
So do_nanosleep() wouldn't enter freezable_schedule() but would cancel
the timer, which is still running, and wait for it via
hrtimer_wait_for_timer(). Then wait_event()/might_sleep() would complain
that it is invoked with state != TASK_RUNNING.
This isn't a problem since the state would be reset to TASK_RUNNING later
anyway and we don't rely on the previous state.

Move the state update to TASK_RUNNING before hrtimer_cancel() so there
are no complaints from might_sleep() about the wrong state.

Cc: stable-rt@vger.kernel.org
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/time/hrtimer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index b59e009087a9..c8d806126381 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1753,12 +1753,12 @@ static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mod
 		if (likely(t->task))
 			freezable_schedule();
 
+		__set_current_state(TASK_RUNNING);
 		hrtimer_cancel(&t->timer);
 		mode = HRTIMER_MODE_ABS;
 
 	} while (t->task && !signal_pending(current));
 
-	__set_current_state(TASK_RUNNING);
 
 	if (!t->task)
 		return 0;
-- 
2.19.2




* [PATCH RT 10/13] drm/i915: disable tracing on -RT
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (7 preceding siblings ...)
  2019-01-07 19:53 ` [PATCH RT 09/13] hrtimer: move state change before hrtimer_cancel in do_nanosleep() Steven Rostedt
@ 2019-01-07 19:53 ` Steven Rostedt
  2019-01-07 20:10   ` Sebastian Andrzej Siewior
  2019-01-07 19:53 ` [PATCH RT 11/13] x86/mm/pat: disable preemption __split_large_page() after spin_lock() Steven Rostedt
  2019-01-07 19:53 ` [PATCH RT 13/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
  10 siblings, 1 reply; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:53 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, Luca Abeni

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 05cebb309b156646e61b898e92acc8e46c47ba75 ]

Luca Abeni reported this:
| BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003
| CPU: 1 PID: 15203 Comm: kworker/u8:2 Not tainted 4.19.1-rt3 #10
| Call Trace:
|  rt_spin_lock+0x3f/0x50
|  gen6_read32+0x45/0x1d0 [i915]
|  g4x_get_vblank_counter+0x36/0x40 [i915]
|  trace_event_raw_event_i915_pipe_update_start+0x7d/0xf0 [i915]

The tracing events, trace_i915_pipe_update_start() among others, use
functions that acquire spin locks. A few trace points use
intel_get_crtc_scanline(), others use ->get_vblank_counter() which also
might acquire a sleeping lock.

Based on this I don't see any other way than to disable the trace points
on RT.

Cc: stable-rt@vger.kernel.org
Reported-by: Luca Abeni <lucabe72@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/gpu/drm/i915/i915_trace.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index ef72da74b87f..adf0974415bc 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -2,6 +2,10 @@
 #if !defined(_I915_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
 #define _I915_TRACE_H_
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+#define NOTRACE
+#endif
+
 #include <linux/stringify.h>
 #include <linux/types.h>
 #include <linux/tracepoint.h>
-- 
2.19.2




* [PATCH RT 11/13] x86/mm/pat: disable preemption __split_large_page() after spin_lock()
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (8 preceding siblings ...)
  2019-01-07 19:53 ` [PATCH RT 10/13] drm/i915: disable tracing on -RT Steven Rostedt
@ 2019-01-07 19:53 ` Steven Rostedt
  2019-01-07 19:53 ` [PATCH RT 13/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:53 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 45c6ff4811878e5c1c2ae31303cd95cdc6ae2ab4 ]

Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
warning if __flush_tlb_all() is invoked in preemptible context. On !RT
the warning does not trigger because a spin lock is acquired which
disables preemption. On RT the spin lock does not disable preemption and
so the warning is seen.

Disable preemption to avoid the warning in __flush_tlb_all().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/x86/mm/pageattr.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 835620ab435f..57a04ef6fe47 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -661,12 +661,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	pgprot_t ref_prot;
 
 	spin_lock(&pgd_lock);
+	/*
+	 * Keep preemption disabled after __flush_tlb_all() which expects not
+	 * to be preempted during the flush of the local TLB.
+	 */
+	preempt_disable();
 	/*
 	 * Check for races, another CPU might have split this page
 	 * up for us already:
 	 */
 	tmp = _lookup_address_cpa(cpa, address, &level);
 	if (tmp != kpte) {
+		preempt_enable();
 		spin_unlock(&pgd_lock);
 		return 1;
 	}
@@ -696,6 +702,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 		break;
 
 	default:
+		preempt_enable();
 		spin_unlock(&pgd_lock);
 		return 1;
 	}
@@ -743,6 +750,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	 * going on.
 	 */
 	__flush_tlb_all();
+	preempt_enable();
 	spin_unlock(&pgd_lock);
 
 	return 0;
-- 
2.19.2




* [PATCH RT 13/13] Linux 4.14.87-rt50-rc1
  2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
                   ` (9 preceding siblings ...)
  2019-01-07 19:53 ` [PATCH RT 11/13] x86/mm/pat: disable preemption __split_large_page() after spin_lock() Steven Rostedt
@ 2019-01-07 19:53 ` Steven Rostedt
  10 siblings, 0 replies; 17+ messages in thread
From: Steven Rostedt @ 2019-01-07 19:53 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.87-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 4b7dca68a5b4..e8a9a36bb066 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt49
+-rt50-rc1
-- 
2.19.2




* Re: [PATCH RT 10/13] drm/i915: disable tracing on -RT
  2019-01-07 19:53 ` [PATCH RT 10/13] drm/i915: disable tracing on -RT Steven Rostedt
@ 2019-01-07 20:10   ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-01-07 20:10 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, Luca Abeni

On 2019-01-07 14:53:02 [-0500], Steven Rostedt wrote:
> 4.14.87-rt50-rc1 stable review patch.
> If anyone has any objections, please let me know.
there is
  https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/tree/patches/drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch?h=linux-4.19.y-rt-patches

to address a build failure with CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS.

I know you are aware of that and that there is/was some discussion about
this gem. I would be happier if we could avoid complex code within a
tracepoint.

Sebastian


* Re: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING
  2019-01-07 19:52 ` [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING Steven Rostedt
@ 2019-01-08  3:06   ` Sergey Senozhatsky
  2019-01-08  3:26     ` Steven Rostedt
  0 siblings, 1 reply; 17+ messages in thread
From: Sergey Senozhatsky @ 2019-01-08  3:06 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker,
	Julia Cartwright, Daniel Wagner, tom.zanussi

On (01/07/19 14:52), Steven Rostedt wrote:
> Subject: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING

						shift?

	-ss


* Re: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING
  2019-01-08  3:06   ` Sergey Senozhatsky
@ 2019-01-08  3:26     ` Steven Rostedt
  2019-01-08  4:47       ` Sergey Senozhatsky
  0 siblings, 1 reply; 17+ messages in thread
From: Steven Rostedt @ 2019-01-08  3:26 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker,
	Julia Cartwright, Daniel Wagner, tom.zanussi

On Tue, 8 Jan 2019 12:06:23 +0900
Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> wrote:

> On (01/07/19 14:52), Steven Rostedt wrote:
> > Subject: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING  
> 
> 						shift?

Yes. And I noticed this typo after I sent out the list. It appears that
Sebastian's keyboard has a faulty 'f' key, as he has made this typo
more than once.

-- Steve


* Re: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING
  2019-01-08  3:26     ` Steven Rostedt
@ 2019-01-08  4:47       ` Sergey Senozhatsky
  2019-01-08  9:00         ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 17+ messages in thread
From: Sergey Senozhatsky @ 2019-01-08  4:47 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Sergey Senozhatsky, linux-kernel, linux-rt-users,
	Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On (01/07/19 22:26), Steven Rostedt wrote:
> > On (01/07/19 14:52), Steven Rostedt wrote:
> > > Subject: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING  
> > 
> > 						shift?
> 
> Yes. And I noticed this typo after I sent out the list. It appears that
> Sebastian's keyboard has a faulty 'f' key, as he has made this typo
> more than once.

I think Sebastian can configure his emacs to interpret a long
spacebar hold as 'f' key [0].

[0] https://imgs.xkcd.com/comics/workflow.png

	-ss


* Re: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING
  2019-01-08  4:47       ` Sergey Senozhatsky
@ 2019-01-08  9:00         ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 17+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-01-08  9:00 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Steven Rostedt, linux-kernel, linux-rt-users, Thomas Gleixner,
	Carsten Emde, John Kacur, Paul Gortmaker, Julia Cartwright,
	Daniel Wagner, tom.zanussi

On 2019-01-08 13:47:19 [+0900], Sergey Senozhatsky wrote:
> On (01/07/19 22:26), Steven Rostedt wrote:
> > > On (01/07/19 14:52), Steven Rostedt wrote:
> > > > Subject: [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING  
> > > 
> > > 						shift?
> > 
> > Yes. And I noticed this typo after I sent out the list. It appears that
> > Sebastian's keyboard has a faulty 'f' key, as he has made this typo
> > more than once.
> 
> I think Sebastian can configure his emacs to interpret a long
> spacebar hold as 'f' key [0].

Thanks for the hint but Sebastian is a vim user. Might be a reason to
migrate.

> [0] https://imgs.xkcd.com/comics/workflow.png
> 
> 	-ss

Sebastian


end of thread, other threads:[~2019-01-08  9:00 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-07 19:52 [PATCH RT 00/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
2019-01-07 19:52 ` [PATCH RT 01/13] work-simple: drop a shit statement in SWORK_EVENT_PENDING Steven Rostedt
2019-01-08  3:06   ` Sergey Senozhatsky
2019-01-08  3:26     ` Steven Rostedt
2019-01-08  4:47       ` Sergey Senozhatsky
2019-01-08  9:00         ` Sebastian Andrzej Siewior
2019-01-07 19:52 ` [PATCH RT 02/13] kthread: convert worker lock to raw spinlock Steven Rostedt
2019-01-07 19:52 ` [PATCH RT 03/13] mm/kasan: make quarantine_lock a raw_spinlock_t Steven Rostedt
2019-01-07 19:52 ` [PATCH RT 05/13] sched/migrate_disable: Add export_symbol_gpl for __migrate_disabled Steven Rostedt
2019-01-07 19:52 ` [PATCH RT 06/13] pinctrl: bcm2835: Use raw spinlock for RT compatibility Steven Rostedt
2019-01-07 19:52 ` [PATCH RT 07/13] rcu: make RCU_BOOST default on RT without EXPERT Steven Rostedt
2019-01-07 19:53 ` [PATCH RT 08/13] x86/fpu: Disable preemption around local_bh_disable() Steven Rostedt
2019-01-07 19:53 ` [PATCH RT 09/13] hrtimer: move state change before hrtimer_cancel in do_nanosleep() Steven Rostedt
2019-01-07 19:53 ` [PATCH RT 10/13] drm/i915: disable tracing on -RT Steven Rostedt
2019-01-07 20:10   ` Sebastian Andrzej Siewior
2019-01-07 19:53 ` [PATCH RT 11/13] x86/mm/pat: disable preemption __split_large_page() after spin_lock() Steven Rostedt
2019-01-07 19:53 ` [PATCH RT 13/13] Linux 4.14.87-rt50-rc1 Steven Rostedt
