linux-kernel.vger.kernel.org archive mirror
* [PATCH RT 00/22] Linux 4.14.63-rt41-rc1
@ 2018-09-05 12:27 Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 01/22] sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report Steven Rostedt
                   ` (22 more replies)
  0 siblings, 23 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:27 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi


Dear RT Folks,

This is the RT stable review cycle of patch 4.14.63-rt41-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this release candidate will be converted to the next
main release on 9/7/2018.

Enjoy,

-- Steve


To build 4.14.63-rt41-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.14.tar.xz

  http://www.kernel.org/pub/linux/kernel/v4.x/patch-4.14.63.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/4.14/patch-4.14.63-rt41-rc1.patch.xz

You can also build from 4.14.63-rt40 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/4.14/incr/patch-4.14.63-rt40-rt41-rc1.patch.xz


Changes from 4.14.63-rt40:

---


Anna-Maria Gleixner (1):
      Revert "timer: delay waking softirqs from the jiffy tick"

Daniel Bristot de Oliveira (1):
      sched/core: Avoid __schedule() being called twice in a row

Julia Cartwright (3):
      locallock: provide {get,put}_locked_ptr() variants
      squashfs: make use of local lock in multi_cpu decompressor
      seqlock: provide the same ordering semantics as mainline

Mike Galbraith (3):
      sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report
      crypto: scompress - serialize RT percpu scratch buffer access with a local lock
      sched: Allow pinned user tasks to be awakened to the CPU they pinned

Sebastian Andrzej Siewior (12):
      PM / suspend: Prevent might sleep splats (updated)
      PM / wakeup: Make events_lock a RAW_SPINLOCK
      PM / s2idle: Make s2idle_wait_head swait based
      Revert "x86: UV: raw_spinlock conversion"
      irqchip/gic-v3-its: Make its_lock a raw_spin_lock_t
      irqchip/gic-v3-its: Move ITS' ->pend_page allocation into an early CPU up hook
      sched/migrate_disable: fallback to preempt_disable() instead barrier()
      efi: Allow efi=runtime
      efi: Disable runtime services on RT
      crypto: cryptd - add a lock instead preempt_disable/local_bh_disable
      Revert "arm64/xen: Make XEN depend on !RT"
      Drivers: hv: vmbus: include header for get_irq_regs()

Steven Rostedt (VMware) (1):
      Linux 4.14.63-rt41-rc1

Thomas Gleixner (1):
      x86/ioapic: Don't let setaffinity unmask threaded EOI interrupt too early

----
 arch/arm64/Kconfig                      |  2 +-
 arch/x86/include/asm/uv/uv_bau.h        | 14 +++----
 arch/x86/kernel/apic/io_apic.c          | 26 ++++++------
 arch/x86/platform/uv/tlb_uv.c           | 26 ++++++------
 arch/x86/platform/uv/uv_time.c          | 20 ++++------
 crypto/cryptd.c                         | 19 +++++----
 crypto/scompress.c                      |  6 ++-
 drivers/base/power/wakeup.c             | 18 ++++-----
 drivers/firmware/efi/efi.c              |  5 ++-
 drivers/hv/hyperv_vmbus.h               |  1 +
 drivers/irqchip/irq-gic-v3-its.c        | 70 ++++++++++++++++++++++-----------
 fs/squashfs/decompressor_multi_percpu.c | 16 ++++++--
 include/linux/locallock.h               | 10 +++++
 include/linux/preempt.h                 |  6 +--
 include/linux/sched.h                   |  4 +-
 include/linux/seqlock.h                 |  1 +
 kernel/power/suspend.c                  |  9 +++--
 kernel/sched/core.c                     | 34 +++++++++-------
 kernel/sched/debug.c                    |  2 +-
 kernel/sched/fair.c                     |  4 +-
 kernel/time/tick-common.c               |  2 +
 kernel/time/timer.c                     |  2 +-
 localversion-rt                         |  2 +-
 23 files changed, 175 insertions(+), 124 deletions(-)

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH RT 01/22] sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
@ 2018-09-05 12:27 ` Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 02/22] locallock: provide {get,put}_locked_ptr() variants Steven Rostedt
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:27 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, Mike Galbraith

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <efault@gmx.de>

[ Upstream commit df7e8acc0c9a84979a448d215b8ef889efe4ac5a ]

CFS bandwidth control yields the lock inversion report below; moving
the bandwidth timers to expire in hard interrupt context quells it.

|========================================================
|WARNING: possible irq lock inversion dependency detected
|4.16.7-rt1-rt #2 Tainted: G            E
|--------------------------------------------------------
|sirq-hrtimer/0/15 just changed the state of lock:
| (&cfs_b->lock){+...}, at: [<000000009adb5cf7>] sched_cfs_period_timer+0x28/0x140
|but this lock was taken by another, HARDIRQ-safe lock in the past: (&rq->lock){-...}
|and interrupts could create inverse lock ordering between them.
|other info that might help us debug this:
| Possible interrupt unsafe locking scenario:
|       CPU0                    CPU1
|       ----                    ----
|  lock(&cfs_b->lock);
|                               local_irq_disable();
|                               lock(&rq->lock);
|                               lock(&cfs_b->lock);
|  <Interrupt>
|    lock(&rq->lock);

Cc: stable-rt@vger.kernel.org
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 36ef77839be4..51ecea4f5d16 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4684,9 +4684,9 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	cfs_b->period = ns_to_ktime(default_cfs_period());
 
 	INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq);
-	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_init(&cfs_b->period_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
 	cfs_b->period_timer.function = sched_cfs_period_timer;
-	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
 }
 
-- 
2.18.0




* [PATCH RT 02/22] locallock: provide {get,put}_locked_ptr() variants
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 01/22] sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report Steven Rostedt
@ 2018-09-05 12:27 ` Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 03/22] squashfs: make use of local lock in multi_cpu decompressor Steven Rostedt
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:27 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Julia Cartwright <julia@ni.com>

[ Upstream commit 3d45cf23db4f76cd356ebb0aa4cdaa7d92d1a64e ]

Provide a set of locallocked accessors for pointers to per-CPU data;
this is useful for dynamically-allocated per-CPU regions, for example.

These are symmetric with the {get,put}_cpu_ptr() per-CPU accessor
variants.

Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/locallock.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index d658c2552601..921eab83cd34 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -222,6 +222,14 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
 
 #define put_locked_var(lvar, var)	local_unlock(lvar);
 
+#define get_locked_ptr(lvar, var)					\
+	({								\
+		local_lock(lvar);					\
+		this_cpu_ptr(var);					\
+	})
+
+#define put_locked_ptr(lvar, var)	local_unlock(lvar);
+
 #define local_lock_cpu(lvar)						\
 	({								\
 		local_lock(lvar);					\
@@ -262,6 +270,8 @@ static inline void local_irq_lock_init(int lvar) { }
 
 #define get_locked_var(lvar, var)		get_cpu_var(var)
 #define put_locked_var(lvar, var)		put_cpu_var(var)
+#define get_locked_ptr(lvar, var)		get_cpu_ptr(var)
+#define put_locked_ptr(lvar, var)		put_cpu_ptr(var)
 
 #define local_lock_cpu(lvar)			get_cpu()
 #define local_unlock_cpu(lvar)			put_cpu()
-- 
2.18.0
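As an illustration of the accessor pattern above (not the kernel API itself), here is a minimal user-space sketch in which the per-CPU machinery and the local lock are faked with a single slot and a pthread mutex; all names are hypothetical:

```c
#include <pthread.h>

struct stream { long decompressed; };

/* Stand-ins for the local lock and the per-CPU variable. */
static pthread_mutex_t stream_lock = PTHREAD_MUTEX_INITIALIZER;
static struct stream cpu_stream;

/* Analogue of get_locked_ptr(): take the lock, hand out the pointer. */
static struct stream *get_locked_stream(void)
{
	pthread_mutex_lock(&stream_lock);
	return &cpu_stream;
}

/* Analogue of put_locked_ptr(): just drop the lock. */
static void put_locked_stream(void)
{
	pthread_mutex_unlock(&stream_lock);
}

/* Caller pattern, mirroring how a decompressor would use the pair. */
static long decompress(long len)
{
	struct stream *s = get_locked_stream();
	long res = (s->decompressed += len);	/* stand-in for real work */

	put_locked_stream();
	return res;
}
```

The point of the pair being symmetric is that callers bracket every pointer use, so the data is never touched without the serializing lock held.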




* [PATCH RT 03/22] squashfs: make use of local lock in multi_cpu decompressor
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 01/22] sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 02/22] locallock: provide {get,put}_locked_ptr() variants Steven Rostedt
@ 2018-09-05 12:27 ` Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 04/22] PM / suspend: Prevent might sleep splats (updated) Steven Rostedt
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:27 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, Alexander Stein

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Julia Cartwright <julia@ni.com>

[ Upstream commit c160736542d7b3d67da32848d2f028b8e35730e5 ]

Currently, the squashfs multi_cpu decompressor makes use of
get_cpu_ptr()/put_cpu_ptr(), which unconditionally disable preemption
during decompression.

Because the workload is distributed across CPUs, all CPUs can observe a
very high wakeup latency, which has been seen to be as much as 8000us.

Convert this decompressor to make use of a local lock, which allows
execution of the decompressor with preemption enabled, while also
ensuring that concurrent accesses to the per-CPU decompressor data on
the local CPU are serialized.

Cc: stable-rt@vger.kernel.org
Reported-by: Alexander Stein <alexander.stein@systec-electronic.com>
Tested-by: Alexander Stein <alexander.stein@systec-electronic.com>
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 fs/squashfs/decompressor_multi_percpu.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/fs/squashfs/decompressor_multi_percpu.c b/fs/squashfs/decompressor_multi_percpu.c
index 23a9c28ad8ea..6a73c4fa88e7 100644
--- a/fs/squashfs/decompressor_multi_percpu.c
+++ b/fs/squashfs/decompressor_multi_percpu.c
@@ -10,6 +10,7 @@
 #include <linux/slab.h>
 #include <linux/percpu.h>
 #include <linux/buffer_head.h>
+#include <linux/locallock.h>
 
 #include "squashfs_fs.h"
 #include "squashfs_fs_sb.h"
@@ -25,6 +26,8 @@ struct squashfs_stream {
 	void		*stream;
 };
 
+static DEFINE_LOCAL_IRQ_LOCK(stream_lock);
+
 void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
 						void *comp_opts)
 {
@@ -79,10 +82,15 @@ int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh,
 {
 	struct squashfs_stream __percpu *percpu =
 			(struct squashfs_stream __percpu *) msblk->stream;
-	struct squashfs_stream *stream = get_cpu_ptr(percpu);
-	int res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
-		offset, length, output);
-	put_cpu_ptr(stream);
+	struct squashfs_stream *stream;
+	int res;
+
+	stream = get_locked_ptr(stream_lock, percpu);
+
+	res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+			offset, length, output);
+
+	put_locked_ptr(stream_lock, stream);
 
 	if (res < 0)
 		ERROR("%s decompression failed, data probably corrupt\n",
-- 
2.18.0




* [PATCH RT 04/22] PM / suspend: Prevent might sleep splats (updated)
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2018-09-05 12:27 ` [PATCH RT 03/22] squashfs: make use of local lock in multi_cpu decompressor Steven Rostedt
@ 2018-09-05 12:27 ` Steven Rostedt
  2018-09-05 12:27 ` [PATCH RT 05/22] PM / wakeup: Make events_lock a RAW_SPINLOCK Steven Rostedt
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:27 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit ec7ff06b919647a2fd7d2761a26f5a1d465e819c ]

This is an updated version of this patch which was merged upstream as
commit c1a957d17086d20d52d7f9c8dffaeac2ee09d6f9

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/time/tick-common.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 7f5a26c3a8ee..7a87a4488a5e 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -492,6 +492,7 @@ void tick_freeze(void)
 	if (tick_freeze_depth == num_online_cpus()) {
 		trace_suspend_resume(TPS("timekeeping_freeze"),
 				     smp_processor_id(), true);
+		system_state = SYSTEM_SUSPEND;
 		timekeeping_suspend();
 	} else {
 		tick_suspend_local();
@@ -515,6 +516,7 @@ void tick_unfreeze(void)
 
 	if (tick_freeze_depth == num_online_cpus()) {
 		timekeeping_resume();
+		system_state = SYSTEM_RUNNING;
 		trace_suspend_resume(TPS("timekeeping_freeze"),
 				     smp_processor_id(), false);
 	} else {
-- 
2.18.0




* [PATCH RT 05/22] PM / wakeup: Make events_lock a RAW_SPINLOCK
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2018-09-05 12:27 ` [PATCH RT 04/22] PM / suspend: Prevent might sleep splats (updated) Steven Rostedt
@ 2018-09-05 12:27 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 06/22] PM / s2idle: Make s2idle_wait_head swait based Steven Rostedt
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:27 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 1debb85a1d7d5c7655b4574f5b0ddf5f7c84873e ]

The `events_lock' is acquired during suspend while interrupts are
disabled even on RT. The lock is taken only for a very brief moment.
Make it a RAW lock which avoids "sleeping while atomic" warnings on RT.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/base/power/wakeup.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index cdd6f256da59..2269d379c92f 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -52,7 +52,7 @@ static void split_counters(unsigned int *cnt, unsigned int *inpr)
 /* A preserved old value of the events counter. */
 static unsigned int saved_count;
 
-static DEFINE_SPINLOCK(events_lock);
+static DEFINE_RAW_SPINLOCK(events_lock);
 
 static void pm_wakeup_timer_fn(unsigned long data);
 
@@ -180,9 +180,9 @@ void wakeup_source_add(struct wakeup_source *ws)
 	ws->active = false;
 	ws->last_time = ktime_get();
 
-	spin_lock_irqsave(&events_lock, flags);
+	raw_spin_lock_irqsave(&events_lock, flags);
 	list_add_rcu(&ws->entry, &wakeup_sources);
-	spin_unlock_irqrestore(&events_lock, flags);
+	raw_spin_unlock_irqrestore(&events_lock, flags);
 }
 EXPORT_SYMBOL_GPL(wakeup_source_add);
 
@@ -197,9 +197,9 @@ void wakeup_source_remove(struct wakeup_source *ws)
 	if (WARN_ON(!ws))
 		return;
 
-	spin_lock_irqsave(&events_lock, flags);
+	raw_spin_lock_irqsave(&events_lock, flags);
 	list_del_rcu(&ws->entry);
-	spin_unlock_irqrestore(&events_lock, flags);
+	raw_spin_unlock_irqrestore(&events_lock, flags);
 	synchronize_srcu(&wakeup_srcu);
 }
 EXPORT_SYMBOL_GPL(wakeup_source_remove);
@@ -844,7 +844,7 @@ bool pm_wakeup_pending(void)
 	unsigned long flags;
 	bool ret = false;
 
-	spin_lock_irqsave(&events_lock, flags);
+	raw_spin_lock_irqsave(&events_lock, flags);
 	if (events_check_enabled) {
 		unsigned int cnt, inpr;
 
@@ -852,7 +852,7 @@ bool pm_wakeup_pending(void)
 		ret = (cnt != saved_count || inpr > 0);
 		events_check_enabled = !ret;
 	}
-	spin_unlock_irqrestore(&events_lock, flags);
+	raw_spin_unlock_irqrestore(&events_lock, flags);
 
 	if (ret) {
 		pr_info("PM: Wakeup pending, aborting suspend\n");
@@ -941,13 +941,13 @@ bool pm_save_wakeup_count(unsigned int count)
 	unsigned long flags;
 
 	events_check_enabled = false;
-	spin_lock_irqsave(&events_lock, flags);
+	raw_spin_lock_irqsave(&events_lock, flags);
 	split_counters(&cnt, &inpr);
 	if (cnt == count && inpr == 0) {
 		saved_count = count;
 		events_check_enabled = true;
 	}
-	spin_unlock_irqrestore(&events_lock, flags);
+	raw_spin_unlock_irqrestore(&events_lock, flags);
 	return events_check_enabled;
 }
 
-- 
2.18.0




* [PATCH RT 06/22] PM / s2idle: Make s2idle_wait_head swait based
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2018-09-05 12:27 ` [PATCH RT 05/22] PM / wakeup: Make events_lock a RAW_SPINLOCK Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 07/22] seqlock: provide the same ordering semantics as mainline Steven Rostedt
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 93f141324d4860a1294e6899923c01ec5411d70b ]

s2idle_wait_head is used during s2idle with interrupts disabled, even on
RT. There is no custom wake-up function, so swait can be used instead;
it is also lighter weight than a wait_queue.
Make s2idle_wait_head a swait_queue_head.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/power/suspend.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index 999236413460..b89605fe0e88 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -27,6 +27,7 @@
 #include <linux/export.h>
 #include <linux/suspend.h>
 #include <linux/syscore_ops.h>
+#include <linux/swait.h>
 #include <linux/ftrace.h>
 #include <trace/events/power.h>
 #include <linux/compiler.h>
@@ -57,7 +58,7 @@ EXPORT_SYMBOL_GPL(pm_suspend_global_flags);
 
 static const struct platform_suspend_ops *suspend_ops;
 static const struct platform_s2idle_ops *s2idle_ops;
-static DECLARE_WAIT_QUEUE_HEAD(s2idle_wait_head);
+static DECLARE_SWAIT_QUEUE_HEAD(s2idle_wait_head);
 
 enum s2idle_states __read_mostly s2idle_state;
 static DEFINE_RAW_SPINLOCK(s2idle_lock);
@@ -91,8 +92,8 @@ static void s2idle_enter(void)
 	/* Push all the CPUs into the idle loop. */
 	wake_up_all_idle_cpus();
 	/* Make the current CPU wait so it can enter the idle loop too. */
-	wait_event(s2idle_wait_head,
-		   s2idle_state == S2IDLE_STATE_WAKE);
+	swait_event(s2idle_wait_head,
+		    s2idle_state == S2IDLE_STATE_WAKE);
 
 	cpuidle_pause();
 	put_online_cpus();
@@ -159,7 +160,7 @@ void s2idle_wake(void)
 	raw_spin_lock_irqsave(&s2idle_lock, flags);
 	if (s2idle_state > S2IDLE_STATE_NONE) {
 		s2idle_state = S2IDLE_STATE_WAKE;
-		wake_up(&s2idle_wait_head);
+		swake_up(&s2idle_wait_head);
 	}
 	raw_spin_unlock_irqrestore(&s2idle_lock, flags);
 }
-- 
2.18.0




* [PATCH RT 07/22] seqlock: provide the same ordering semantics as mainline
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 06/22] PM / s2idle: Make s2idle_wait_head swait based Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion" Steven Rostedt
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Julia Cartwright <julia@ni.com>

[ Upstream commit afa4c06b89a3c0fb7784ff900ccd707bef519cb7 ]

The mainline implementation of read_seqbegin() orders prior loads w.r.t.
the read-side critical section.  Fix up the RT writer-boosting
implementation to provide the same guarantee.

Also, while we're here, update the usage of ACCESS_ONCE() to use
READ_ONCE().

Fixes: e69f15cf77c23 ("seqlock: Prevent rt starvation")
Cc: stable-rt@vger.kernel.org
Signed-off-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/seqlock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index a59751276b94..107079a2d7ed 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -462,6 +462,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
 		spin_unlock_wait(&sl->lock);
 		goto repeat;
 	}
+	smp_rmb();
 	return ret;
 }
 #endif
-- 
2.18.0
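A user-space model of the read-side retry loop may make the ordering concern concrete. This is a sketch using C11 atomics, not the kernel implementation: the acquire fence after loading the sequence count plays the role of smp_rmb(), keeping data reads inside the critical section from being ordered ahead of the sequence check. All names are illustrative:

```c
#include <stdatomic.h>

static atomic_uint seq;		/* even: stable, odd: writer active */
static int protected_data;

static unsigned read_begin(void)
{
	unsigned s;

	do {
		s = atomic_load_explicit(&seq, memory_order_relaxed);
	} while (s & 1);		/* spin while a write is in flight */
	atomic_thread_fence(memory_order_acquire);	/* smp_rmb() analogue */
	return s;
}

static int read_retry(unsigned start)
{
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&seq, memory_order_relaxed) != start;
}

static void write_update(int val)
{
	atomic_fetch_add_explicit(&seq, 1, memory_order_relaxed);  /* -> odd */
	atomic_thread_fence(memory_order_release);
	protected_data = val;
	atomic_thread_fence(memory_order_release);
	atomic_fetch_add_explicit(&seq, 1, memory_order_relaxed);  /* -> even */
}

/* Read under the retry loop; returns a consistent snapshot. */
static int read_data(void)
{
	unsigned s;
	int v;

	do {
		s = read_begin();
		v = protected_data;
	} while (read_retry(s));
	return v;
}

/* A write between begin and retry must force the reader to loop. */
static int retry_seen_after_write(void)
{
	unsigned s = read_begin();

	write_update(protected_data + 1);
	return read_retry(s);
}
```

Without the fence in read_begin(), the load of protected_data could in principle be satisfied before the sequence check, yielding a torn snapshot that the retry test never catches.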




* [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion"
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (6 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 07/22] seqlock: provide the same ordering semantics as mainline Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-06  7:35   ` Sebastian Andrzej Siewior
  2018-09-05 12:28 ` [PATCH RT 09/22] Revert "timer: delay waking softirqs from the jiffy tick" Steven Rostedt
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 2a9c45d8f89112458364285cbe2b0729561953f1 ]

Drop the Ultraviolet patch. UV looks broken upstream for PREEMPT, too.
Mike is the only person I know that has such a thing and he isn't going
to fix this upstream (from 1526977462.6491.1.camel@gmx.de):

|From: Mike Galbraith <gleep@gmx.de>
|On Tue, 2018-05-22 at 08:50 +0200, Sebastian Andrzej Siewior wrote:
|>
|> Regarding the preempt_disable() in the original patch in uv_read_rtc():
|> This looks essential for PREEMPT configs. Is it possible to get this
|> tested by someone or else get rid of the UV code? It looks broken for
|> "uv_get_min_hub_revision_id() != 1".
|
|I suspect SGI cares not one whit about PREEMPT.
|
|> Why does PREEMPT_RT require migrate_disable() but PREEMPT only is fine
|> as-is? This does not look right.
|
|UV is not ok with a PREEMPT config, it's just that for RT it's dirt
|simple to shut it up, whereas for PREEMPT, preempt_disable() across
|uv_bau_init() doesn't cut it due to allocations, and whatever else I
|would have met before ending the whack-a-mole game.
|
|If I were in your shoes, I think I'd just stop caring about UV until a
|real user appears.  AFAIK, I'm the only guy who ever ran RT on UV, and
|I only did so because SUSE asked me to look into it.. years ago now.
|
|        -Mike

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/x86/include/asm/uv/uv_bau.h | 14 +++++++-------
 arch/x86/platform/uv/tlb_uv.c    | 26 +++++++++++++-------------
 arch/x86/platform/uv/uv_time.c   | 20 ++++++++------------
 3 files changed, 28 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/uv/uv_bau.h b/arch/x86/include/asm/uv/uv_bau.h
index 2ac6e347bdc5..7cac79802ad2 100644
--- a/arch/x86/include/asm/uv/uv_bau.h
+++ b/arch/x86/include/asm/uv/uv_bau.h
@@ -643,9 +643,9 @@ struct bau_control {
 	cycles_t		send_message;
 	cycles_t		period_end;
 	cycles_t		period_time;
-	raw_spinlock_t		uvhub_lock;
-	raw_spinlock_t		queue_lock;
-	raw_spinlock_t		disable_lock;
+	spinlock_t		uvhub_lock;
+	spinlock_t		queue_lock;
+	spinlock_t		disable_lock;
 	/* tunables */
 	int			max_concurr;
 	int			max_concurr_const;
@@ -847,15 +847,15 @@ static inline int atom_asr(short i, struct atomic_short *v)
  * to be lowered below the current 'v'.  atomic_add_unless can only stop
  * on equal.
  */
-static inline int atomic_inc_unless_ge(raw_spinlock_t *lock, atomic_t *v, int u)
+static inline int atomic_inc_unless_ge(spinlock_t *lock, atomic_t *v, int u)
 {
-	raw_spin_lock(lock);
+	spin_lock(lock);
 	if (atomic_read(v) >= u) {
-		raw_spin_unlock(lock);
+		spin_unlock(lock);
 		return 0;
 	}
 	atomic_inc(v);
-	raw_spin_unlock(lock);
+	spin_unlock(lock);
 	return 1;
 }
 
diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
index 5607611df740..34f9a9ce6236 100644
--- a/arch/x86/platform/uv/tlb_uv.c
+++ b/arch/x86/platform/uv/tlb_uv.c
@@ -740,9 +740,9 @@ static void destination_plugged(struct bau_desc *bau_desc,
 
 		quiesce_local_uvhub(hmaster);
 
-		raw_spin_lock(&hmaster->queue_lock);
+		spin_lock(&hmaster->queue_lock);
 		reset_with_ipi(&bau_desc->distribution, bcp);
-		raw_spin_unlock(&hmaster->queue_lock);
+		spin_unlock(&hmaster->queue_lock);
 
 		end_uvhub_quiesce(hmaster);
 
@@ -762,9 +762,9 @@ static void destination_timeout(struct bau_desc *bau_desc,
 
 		quiesce_local_uvhub(hmaster);
 
-		raw_spin_lock(&hmaster->queue_lock);
+		spin_lock(&hmaster->queue_lock);
 		reset_with_ipi(&bau_desc->distribution, bcp);
-		raw_spin_unlock(&hmaster->queue_lock);
+		spin_unlock(&hmaster->queue_lock);
 
 		end_uvhub_quiesce(hmaster);
 
@@ -785,7 +785,7 @@ static void disable_for_period(struct bau_control *bcp, struct ptc_stats *stat)
 	cycles_t tm1;
 
 	hmaster = bcp->uvhub_master;
-	raw_spin_lock(&hmaster->disable_lock);
+	spin_lock(&hmaster->disable_lock);
 	if (!bcp->baudisabled) {
 		stat->s_bau_disabled++;
 		tm1 = get_cycles();
@@ -798,7 +798,7 @@ static void disable_for_period(struct bau_control *bcp, struct ptc_stats *stat)
 			}
 		}
 	}
-	raw_spin_unlock(&hmaster->disable_lock);
+	spin_unlock(&hmaster->disable_lock);
 }
 
 static void count_max_concurr(int stat, struct bau_control *bcp,
@@ -861,7 +861,7 @@ static void record_send_stats(cycles_t time1, cycles_t time2,
  */
 static void uv1_throttle(struct bau_control *hmaster, struct ptc_stats *stat)
 {
-	raw_spinlock_t *lock = &hmaster->uvhub_lock;
+	spinlock_t *lock = &hmaster->uvhub_lock;
 	atomic_t *v;
 
 	v = &hmaster->active_descriptor_count;
@@ -995,7 +995,7 @@ static int check_enable(struct bau_control *bcp, struct ptc_stats *stat)
 	struct bau_control *hmaster;
 
 	hmaster = bcp->uvhub_master;
-	raw_spin_lock(&hmaster->disable_lock);
+	spin_lock(&hmaster->disable_lock);
 	if (bcp->baudisabled && (get_cycles() >= bcp->set_bau_on_time)) {
 		stat->s_bau_reenabled++;
 		for_each_present_cpu(tcpu) {
@@ -1007,10 +1007,10 @@ static int check_enable(struct bau_control *bcp, struct ptc_stats *stat)
 				tbcp->period_giveups = 0;
 			}
 		}
-		raw_spin_unlock(&hmaster->disable_lock);
+		spin_unlock(&hmaster->disable_lock);
 		return 0;
 	}
-	raw_spin_unlock(&hmaster->disable_lock);
+	spin_unlock(&hmaster->disable_lock);
 	return -1;
 }
 
@@ -1942,9 +1942,9 @@ static void __init init_per_cpu_tunables(void)
 		bcp->cong_reps			= congested_reps;
 		bcp->disabled_period		= sec_2_cycles(disabled_period);
 		bcp->giveup_limit		= giveup_limit;
-		raw_spin_lock_init(&bcp->queue_lock);
-		raw_spin_lock_init(&bcp->uvhub_lock);
-		raw_spin_lock_init(&bcp->disable_lock);
+		spin_lock_init(&bcp->queue_lock);
+		spin_lock_init(&bcp->uvhub_lock);
+		spin_lock_init(&bcp->disable_lock);
 	}
 }
 
diff --git a/arch/x86/platform/uv/uv_time.c b/arch/x86/platform/uv/uv_time.c
index badf377efc21..b082d71b08ee 100644
--- a/arch/x86/platform/uv/uv_time.c
+++ b/arch/x86/platform/uv/uv_time.c
@@ -57,7 +57,7 @@ static DEFINE_PER_CPU(struct clock_event_device, cpu_ced);
 
 /* There is one of these allocated per node */
 struct uv_rtc_timer_head {
-	raw_spinlock_t	lock;
+	spinlock_t	lock;
 	/* next cpu waiting for timer, local node relative: */
 	int		next_cpu;
 	/* number of cpus on this node: */
@@ -177,7 +177,7 @@ static __init int uv_rtc_allocate_timers(void)
 				uv_rtc_deallocate_timers();
 				return -ENOMEM;
 			}
-			raw_spin_lock_init(&head->lock);
+			spin_lock_init(&head->lock);
 			head->ncpus = uv_blade_nr_possible_cpus(bid);
 			head->next_cpu = -1;
 			blade_info[bid] = head;
@@ -231,7 +231,7 @@ static int uv_rtc_set_timer(int cpu, u64 expires)
 	unsigned long flags;
 	int next_cpu;
 
-	raw_spin_lock_irqsave(&head->lock, flags);
+	spin_lock_irqsave(&head->lock, flags);
 
 	next_cpu = head->next_cpu;
 	*t = expires;
@@ -243,12 +243,12 @@ static int uv_rtc_set_timer(int cpu, u64 expires)
 		if (uv_setup_intr(cpu, expires)) {
 			*t = ULLONG_MAX;
 			uv_rtc_find_next_timer(head, pnode);
-			raw_spin_unlock_irqrestore(&head->lock, flags);
+			spin_unlock_irqrestore(&head->lock, flags);
 			return -ETIME;
 		}
 	}
 
-	raw_spin_unlock_irqrestore(&head->lock, flags);
+	spin_unlock_irqrestore(&head->lock, flags);
 	return 0;
 }
 
@@ -267,7 +267,7 @@ static int uv_rtc_unset_timer(int cpu, int force)
 	unsigned long flags;
 	int rc = 0;
 
-	raw_spin_lock_irqsave(&head->lock, flags);
+	spin_lock_irqsave(&head->lock, flags);
 
 	if ((head->next_cpu == bcpu && uv_read_rtc(NULL) >= *t) || force)
 		rc = 1;
@@ -279,7 +279,7 @@ static int uv_rtc_unset_timer(int cpu, int force)
 			uv_rtc_find_next_timer(head, pnode);
 	}
 
-	raw_spin_unlock_irqrestore(&head->lock, flags);
+	spin_unlock_irqrestore(&head->lock, flags);
 
 	return rc;
 }
@@ -299,17 +299,13 @@ static int uv_rtc_unset_timer(int cpu, int force)
 static u64 uv_read_rtc(struct clocksource *cs)
 {
 	unsigned long offset;
-	u64 cycles;
 
-	preempt_disable();
 	if (uv_get_min_hub_revision_id() == 1)
 		offset = 0;
 	else
 		offset = (uv_blade_processor_id() * L1_CACHE_BYTES) % PAGE_SIZE;
 
-	cycles = (u64)uv_read_local_mmr(UVH_RTC | offset);
-	preempt_enable();
-	return cycles;
+	return (u64)uv_read_local_mmr(UVH_RTC | offset);
 }
 
 /*
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 09/22] Revert "timer: delay waking softirqs from the jiffy tick"
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (7 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion" Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 10/22] irqchip/gic-v3-its: Make its_lock a raw_spin_lock_t Steven Rostedt
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Anna-Maria Gleixner

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Anna-Maria Gleixner <anna-maria@linutronix.de>

[ Upstream commit b5b16907c58280e015d5673dca4c6bd3fde0c348 ]

This patch was required as long as RT tasks were accounted to CFS
load, but this was only a workaround. Upstream commit 17bdcf949d03
("sched: Drop all load weight manipulation for RT tasks") fixed the
accounting of RT tasks into CFS load.

Remove the patch and fix dependencies.

Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/time/timer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index ff1d60d4c0cc..f57106c6e786 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1635,13 +1635,13 @@ void update_process_times(int user_tick)
 
 	/* Note: this timer irq context must be accounted for as well. */
 	account_process_tick(p, user_tick);
-	scheduler_tick();
 	run_local_timers();
 	rcu_check_callbacks(user_tick);
 #if defined(CONFIG_IRQ_WORK)
 	if (in_irq())
 		irq_work_tick();
 #endif
+	scheduler_tick();
 	if (IS_ENABLED(CONFIG_POSIX_TIMERS))
 		run_posix_cpu_timers(p);
 }
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 10/22] irqchip/gic-v3-its: Make its_lock a raw_spin_lock_t
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (8 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 09/22] Revert "timer: delay waking softirqs from the jiffy tick" Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook Steven Rostedt
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit c7a3334c762a9b1dd2e39cb2ded00ce66e8a06d1 ]

The its_lock lock is held while a new device is added to the list and
during setup while the CPU is booted. Even on -RT, CPU bring-up is
performed with interrupts disabled.

Make its_lock a raw_spin_lock_t.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 2ea39a83737f..e8217ebe8c1e 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -148,7 +148,7 @@ static struct {
 } vpe_proxy;
 
 static LIST_HEAD(its_nodes);
-static DEFINE_SPINLOCK(its_lock);
+static DEFINE_RAW_SPINLOCK(its_lock);
 static struct rdists *gic_rdists;
 static struct irq_domain *its_parent;
 
@@ -1850,7 +1850,7 @@ static void its_cpu_init_collection(void)
 	struct its_node *its;
 	int cpu;
 
-	spin_lock(&its_lock);
+	raw_spin_lock(&its_lock);
 	cpu = smp_processor_id();
 
 	list_for_each_entry(its, &its_nodes, entry) {
@@ -1892,7 +1892,7 @@ static void its_cpu_init_collection(void)
 		its_send_invall(its, &its->collections[cpu]);
 	}
 
-	spin_unlock(&its_lock);
+	raw_spin_unlock(&its_lock);
 }
 
 static struct its_device *its_find_device(struct its_node *its, u32 dev_id)
@@ -3041,9 +3041,9 @@ static int __init its_probe_one(struct resource *res,
 	if (err)
 		goto out_free_tables;
 
-	spin_lock(&its_lock);
+	raw_spin_lock(&its_lock);
 	list_add(&its->entry, &its_nodes);
-	spin_unlock(&its_lock);
+	raw_spin_unlock(&its_lock);
 
 	return 0;
 
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (9 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 10/22] irqchip/gic-v3-its: Make its_lock a raw_spin_lock_t Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-06  7:40   ` Sebastian Andrzej Siewior
  2018-09-05 12:28 ` [PATCH RT 12/22] sched/migrate_disable: fallback to preempt_disable() instead barrier() Steven Rostedt
                   ` (11 subsequent siblings)
  22 siblings, 1 reply; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit e083f14dc2e98ced872bf077b4d1cccf95b7e4f8 ]

The AP-GIC-starting hook allocates memory for the ->pend_page while the
CPU is started during boot-up. This callback is invoked on the target
CPU with interrupts disabled.
This does not work on -RT because memory allocations are not possible
with interrupts disabled.
Move the memory allocation to an earlier hotplug step which is invoked
with interrupts enabled on the boot CPU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 60 ++++++++++++++++++++++----------
 1 file changed, 41 insertions(+), 19 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index e8217ebe8c1e..60533a795124 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -165,6 +165,7 @@ static DEFINE_RAW_SPINLOCK(vmovp_lock);
 static DEFINE_IDA(its_vpeid_ida);
 
 #define gic_data_rdist()		(raw_cpu_ptr(gic_rdists->rdist))
+#define gic_data_rdist_cpu(cpu)		(per_cpu_ptr(gic_rdists->rdist, cpu))
 #define gic_data_rdist_rd_base()	(gic_data_rdist()->rd_base)
 #define gic_data_rdist_vlpi_base()	(gic_data_rdist_rd_base() + SZ_128K)
 
@@ -1734,15 +1735,17 @@ static int its_alloc_collections(struct its_node *its)
 	return 0;
 }
 
-static struct page *its_allocate_pending_table(gfp_t gfp_flags)
+static struct page *its_allocate_pending_table(unsigned int cpu)
 {
 	struct page *pend_page;
+	unsigned int order;
 	/*
 	 * The pending pages have to be at least 64kB aligned,
 	 * hence the 'max(LPI_PENDBASE_SZ, SZ_64K)' below.
 	 */
-	pend_page = alloc_pages(gfp_flags | __GFP_ZERO,
-				get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K)));
+	order = get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K));
+	pend_page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
+				     order);
 	if (!pend_page)
 		return NULL;
 
@@ -1758,6 +1761,28 @@ static void its_free_pending_table(struct page *pt)
 		   get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K)));
 }
 
+static int its_alloc_pend_page(unsigned int cpu)
+{
+	struct page *pend_page;
+	phys_addr_t paddr;
+
+	pend_page = gic_data_rdist_cpu(cpu)->pend_page;
+	if (pend_page)
+		return 0;
+
+	pend_page = its_allocate_pending_table(cpu);
+	if (!pend_page) {
+		pr_err("Failed to allocate PENDBASE for CPU%d\n",
+		       smp_processor_id());
+		return -ENOMEM;
+	}
+
+	paddr = page_to_phys(pend_page);
+	pr_info("CPU%d: using LPI pending table @%pa\n", cpu, &paddr);
+	gic_data_rdist_cpu(cpu)->pend_page = pend_page;
+	return 0;
+}
+
 static void its_cpu_init_lpis(void)
 {
 	void __iomem *rbase = gic_data_rdist_rd_base();
@@ -1766,21 +1791,8 @@ static void its_cpu_init_lpis(void)
 
 	/* If we didn't allocate the pending table yet, do it now */
 	pend_page = gic_data_rdist()->pend_page;
-	if (!pend_page) {
-		phys_addr_t paddr;
-
-		pend_page = its_allocate_pending_table(GFP_NOWAIT);
-		if (!pend_page) {
-			pr_err("Failed to allocate PENDBASE for CPU%d\n",
-			       smp_processor_id());
-			return;
-		}
-
-		paddr = page_to_phys(pend_page);
-		pr_info("CPU%d: using LPI pending table @%pa\n",
-			smp_processor_id(), &paddr);
-		gic_data_rdist()->pend_page = pend_page;
-	}
+	if (!pend_page)
+		return;
 
 	/* Disable LPIs */
 	val = readl_relaxed(rbase + GICR_CTLR);
@@ -2599,7 +2611,7 @@ static int its_vpe_init(struct its_vpe *vpe)
 		return vpe_id;
 
 	/* Allocate VPT */
-	vpt_page = its_allocate_pending_table(GFP_KERNEL);
+	vpt_page = its_allocate_pending_table(raw_smp_processor_id());
 	if (!vpt_page) {
 		its_vpe_id_free(vpe_id);
 		return -ENOMEM;
@@ -3282,6 +3294,16 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
 	if (err)
 		return err;
 
+	err = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "irqchip/arm/gicv3:prepare",
+				its_alloc_pend_page, NULL);
+	if (err < 0) {
+		pr_warn("ITS: Can't register CPU-hoplug callback.\n");
+		return err;
+	}
+	err = its_alloc_pend_page(smp_processor_id());
+	if (err < 0)
+		return err;
+
 	list_for_each_entry(its, &its_nodes, entry)
 		has_v4 |= its->is_v4;
 
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 12/22] sched/migrate_disable: fallback to preempt_disable() instead barrier()
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (10 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 13/22] x86/ioapic: Dont let setaffinity unmask threaded EOI interrupt too early Steven Rostedt
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, joe.korty

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 10e90c155bbc7cab420f47694404f8f9fe33c2b2 ]

On SMP + !RT migrate_disable() is still around. It is not part of spin_lock()
anymore, so it has almost no users. However, the futex code has a workaround
for the !in_atomic() part of migrate_disable() which fails because the
matching migrate_disable() is no longer part of spin_lock().

On !SMP + !RT migrate_disable() is reduced to barrier(). This is not optimal
because we have a few spots where a "preempt_disable()" statement was
replaced with "migrate_disable()".

We also used the migration_disable counter to figure out if a sleeping lock is
acquired so RCU does not complain about schedule() during rcu_read_lock() while
a sleeping lock is held. This changed, we no longer use it, we have now a
sleeping_lock counter for the RCU purpose.

This means we can now:
- for SMP + RT_BASE
  full migration support, nothing changes here

- for !SMP + RT_BASE
  the migration counting is no longer required. It used to ensure that the task
  is not migrated to another CPU and that this CPU remains online. !SMP ensures
  that already.
  Move it to CONFIG_SCHED_DEBUG so the counting is done for debugging
  purposes only.

- for all other cases including !RT
  fallback to preempt_disable(). The only remaining users of migrate_disable()
  are those which were converted from preempt_disable() and the futex
  workaround which is already in the preempt_disable() section due to the
  spin_lock that is held.

Cc: stable-rt@vger.kernel.org
Reported-by: joe.korty@concurrent-rt.com
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/preempt.h |  6 +++---
 include/linux/sched.h   |  4 ++--
 kernel/sched/core.c     | 23 +++++++++++------------
 kernel/sched/debug.c    |  2 +-
 4 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 0591df500e9d..6728662a81e8 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -224,7 +224,7 @@ do { \
 
 #define preemptible()	(preempt_count() == 0 && !irqs_disabled())
 
-#ifdef CONFIG_SMP
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 
 extern void migrate_disable(void);
 extern void migrate_enable(void);
@@ -241,8 +241,8 @@ static inline int __migrate_disabled(struct task_struct *p)
 }
 
 #else
-#define migrate_disable()		barrier()
-#define migrate_enable()		barrier()
+#define migrate_disable()		preempt_disable()
+#define migrate_enable()		preempt_enable()
 static inline int __migrate_disabled(struct task_struct *p)
 {
 	return 0;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index c26b5ff005ab..a6ffb552be01 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -626,7 +626,7 @@ struct task_struct {
 	int				nr_cpus_allowed;
 	const cpumask_t			*cpus_ptr;
 	cpumask_t			cpus_mask;
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	int				migrate_disable;
 	int				migrate_disable_update;
 	int				pinned_on_cpu;
@@ -635,8 +635,8 @@ struct task_struct {
 # endif
 
 #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
-	int				migrate_disable;
 # ifdef CONFIG_SCHED_DEBUG
+	int				migrate_disable;
 	int				migrate_disable_atomic;
 # endif
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e7817c6c44d2..fa5b76255f8c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1107,7 +1107,7 @@ void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_ma
 	p->nr_cpus_allowed = cpumask_weight(new_mask);
 }
 
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 int __migrate_disabled(struct task_struct *p)
 {
 	return p->migrate_disable;
@@ -1146,7 +1146,7 @@ static void __do_set_cpus_allowed_tail(struct task_struct *p,
 
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	if (__migrate_disabled(p)) {
 		lockdep_assert_held(&p->pi_lock);
 
@@ -1219,7 +1219,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
 		goto out;
 
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	if (__migrate_disabled(p)) {
 		p->migrate_disable_update = 1;
 		goto out;
@@ -6897,7 +6897,7 @@ const u32 sched_prio_to_wmult[40] = {
  /*  15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
 };
 
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 
 static inline void
 update_nr_migratory(struct task_struct *p, long delta)
@@ -7048,45 +7048,44 @@ EXPORT_SYMBOL(migrate_enable);
 #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 void migrate_disable(void)
 {
+#ifdef CONFIG_SCHED_DEBUG
 	struct task_struct *p = current;
 
 	if (in_atomic() || irqs_disabled()) {
-#ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic++;
-#endif
 		return;
 	}
-#ifdef CONFIG_SCHED_DEBUG
+
 	if (unlikely(p->migrate_disable_atomic)) {
 		tracing_off();
 		WARN_ON_ONCE(1);
 	}
-#endif
 
 	p->migrate_disable++;
+#endif
+	barrier();
 }
 EXPORT_SYMBOL(migrate_disable);
 
 void migrate_enable(void)
 {
+#ifdef CONFIG_SCHED_DEBUG
 	struct task_struct *p = current;
 
 	if (in_atomic() || irqs_disabled()) {
-#ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic--;
-#endif
 		return;
 	}
 
-#ifdef CONFIG_SCHED_DEBUG
 	if (unlikely(p->migrate_disable_atomic)) {
 		tracing_off();
 		WARN_ON_ONCE(1);
 	}
-#endif
 
 	WARN_ON_ONCE(p->migrate_disable <= 0);
 	p->migrate_disable--;
+#endif
+	barrier();
 }
 EXPORT_SYMBOL(migrate_enable);
 #endif
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 3108da1ee253..b5b43861c2b6 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1017,7 +1017,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 		P(dl.runtime);
 		P(dl.deadline);
 	}
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
 	P(migrate_disable);
 #endif
 	P(nr_cpus_allowed);
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 13/22] x86/ioapic: Dont let setaffinity unmask threaded EOI interrupt too early
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (11 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 12/22] sched/migrate_disable: fallback to preempt_disable() instead barrier() Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 14/22] efi: Allow efi=runtime Steven Rostedt
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

[ Upstream commit ac14002317721910204b82b9d8611dadb1cec2bb ]

There is an issue with threaded interrupts which are marked ONESHOT
and use the fasteoi handler.

    if (IS_ONESHOT())
        mask_irq();

    ....
    ....

    cond_unmask_eoi_irq()
        chip->irq_eoi();

So if setaffinity is pending then the interrupt will be moved and then
unmasked, which is wrong as it should be kept masked up to the point where
the threaded handler has finished. It is not a fatal problem: the interrupt will
just be able to fire before the threaded handler has finished, though the irq
masked state will be wrong for a bit.

The patch below should cure the issue. It also renames the horribly
misnomed functions so it becomes clear what they are supposed to do.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
[bigeasy: add the body of the patch, use the same functions in both
          ifdef paths (spotted by Andy Shevchenko)]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/kernel/apic/io_apic.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 5832a9d657f2..c9af5afebc4a 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1688,20 +1688,20 @@ static bool io_apic_level_ack_pending(struct mp_chip_data *data)
 	return false;
 }
 
-static inline bool ioapic_irqd_mask(struct irq_data *data)
+static inline bool ioapic_prepare_move(struct irq_data *data)
 {
 	/* If we are moving the irq we need to mask it */
-	if (unlikely(irqd_is_setaffinity_pending(data) &&
-		     !irqd_irq_inprogress(data))) {
-		mask_ioapic_irq(data);
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		if (!irqd_irq_masked(data))
+			mask_ioapic_irq(data);
 		return true;
 	}
 	return false;
 }
 
-static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
+static inline void ioapic_finish_move(struct irq_data *data, bool moveit)
 {
-	if (unlikely(masked)) {
+	if (unlikely(moveit)) {
 		/* Only migrate the irq if the ack has been received.
 		 *
 		 * On rare occasions the broadcast level triggered ack gets
@@ -1730,15 +1730,17 @@ static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
 		 */
 		if (!io_apic_level_ack_pending(data->chip_data))
 			irq_move_masked_irq(data);
-		unmask_ioapic_irq(data);
+		/* If the irq is masked in the core, leave it */
+		if (!irqd_irq_masked(data))
+			unmask_ioapic_irq(data);
 	}
 }
 #else
-static inline bool ioapic_irqd_mask(struct irq_data *data)
+static inline bool ioapic_prepare_move(struct irq_data *data)
 {
 	return false;
 }
-static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
+static inline void ioapic_finish_move(struct irq_data *data, bool moveit)
 {
 }
 #endif
@@ -1747,11 +1749,11 @@ static void ioapic_ack_level(struct irq_data *irq_data)
 {
 	struct irq_cfg *cfg = irqd_cfg(irq_data);
 	unsigned long v;
-	bool masked;
+	bool moveit;
 	int i;
 
 	irq_complete_move(cfg);
-	masked = ioapic_irqd_mask(irq_data);
+	moveit = ioapic_prepare_move(irq_data);
 
 	/*
 	 * It appears there is an erratum which affects at least version 0x11
@@ -1806,7 +1808,7 @@ static void ioapic_ack_level(struct irq_data *irq_data)
 		eoi_ioapic_pin(cfg->vector, irq_data->chip_data);
 	}
 
-	ioapic_irqd_unmask(irq_data, masked);
+	ioapic_finish_move(irq_data, moveit);
 }
 
 static void ioapic_ir_ack_level(struct irq_data *irq_data)
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 14/22] efi: Allow efi=runtime
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (12 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 13/22] x86/ioapic: Dont let setaffinity unmask threaded EOI interrupt too early Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 15/22] efi: Disable runtime services on RT Steven Rostedt
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 71bef7da4112ed2677d4f10a58202a5a4638fb90 ]

In case the option "efi=noruntime" is the default at build time, the user
can override its state with `efi=runtime' and allow it again.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/firmware/efi/efi.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index c3eefa126e3b..9bd749389f31 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -100,6 +100,9 @@ static int __init parse_efi_cmdline(char *str)
 	if (parse_option_str(str, "noruntime"))
 		disable_runtime = true;
 
+	if (parse_option_str(str, "runtime"))
+		disable_runtime = false;
+
 	return 0;
 }
 early_param("efi", parse_efi_cmdline);
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 15/22] efi: Disable runtime services on RT
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (13 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 14/22] efi: Allow efi=runtime Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 16/22] crypto: cryptd - add a lock instead preempt_disable/local_bh_disable Steven Rostedt
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 55544e1d5eb0d7608e2b41452729649c8ea1607a ]

Based on measurements, the EFI functions get_variable /
get_next_variable take up to 2us, which looks okay.
The functions get_time and set_time take around 10ms. Those 10ms are
too much; even one ms would be too much.
Ard mentioned that SetVariable might trigger even larger latencies if
the firmware erases flash blocks on NOR.

The time functions are used by efi-rtc and can be triggered during
runtime (either via explicit read/write or NTP sync).

The variable write could be used by pstore.
These functions can be disabled without much of a loss. The poweroff /
reboot hooks may be provided by PSCI.

Disable EFI's runtime wrappers.

This was observed on "EFI v2.60 by SoftIron Overdrive 1000".

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 drivers/firmware/efi/efi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index 9bd749389f31..47093745a53c 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -74,7 +74,7 @@ static unsigned long *efi_tables[] = {
 	&efi.mem_attr_table,
 };
 
-static bool disable_runtime;
+static bool disable_runtime = IS_ENABLED(CONFIG_PREEMPT_RT_BASE);
 static int __init setup_noefi(char *arg)
 {
 	disable_runtime = true;
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 16/22] crypto: cryptd - add a lock instead preempt_disable/local_bh_disable
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (14 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 15/22] efi: Disable runtime services on RT Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 17/22] crypto: scompress - serialize RT percpu scratch buffer access with a local lock Steven Rostedt
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit 21aedb30d85979697f79a72a084e5d781e323663 ]

cryptd has a per-CPU lock which is protected with local_bh_disable() and
preempt_disable().
Add an explicit spin_lock to make the locking context more obvious and
visible to lockdep. Since it is a per-CPU lock, there should be no lock
contention on the actual spinlock.
There is a small race window where we could be migrated to another CPU
after the cpu_queue has been obtained. This is not a problem because the
actual resource is protected by the spinlock.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 crypto/cryptd.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 248f6ba41688..54b7985c8caa 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -37,6 +37,7 @@
 struct cryptd_cpu_queue {
 	struct crypto_queue queue;
 	struct work_struct work;
+	spinlock_t qlock;
 };
 
 struct cryptd_queue {
@@ -115,6 +116,7 @@ static int cryptd_init_queue(struct cryptd_queue *queue,
 		cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
 		crypto_init_queue(&cpu_queue->queue, max_cpu_qlen);
 		INIT_WORK(&cpu_queue->work, cryptd_queue_worker);
+		spin_lock_init(&cpu_queue->qlock);
 	}
 	return 0;
 }
@@ -139,8 +141,10 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
 	atomic_t *refcnt;
 	bool may_backlog;
 
-	cpu = get_cpu();
-	cpu_queue = this_cpu_ptr(queue->cpu_queue);
+	cpu_queue = raw_cpu_ptr(queue->cpu_queue);
+	spin_lock_bh(&cpu_queue->qlock);
+	cpu = smp_processor_id();
+
 	err = crypto_enqueue_request(&cpu_queue->queue, request);
 
 	refcnt = crypto_tfm_ctx(request->tfm);
@@ -157,7 +161,7 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
 	atomic_inc(refcnt);
 
 out_put_cpu:
-	put_cpu();
+	spin_unlock_bh(&cpu_queue->qlock);
 
 	return err;
 }
@@ -173,16 +177,11 @@ static void cryptd_queue_worker(struct work_struct *work)
 	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
 	/*
 	 * Only handle one request at a time to avoid hogging crypto workqueue.
-	 * preempt_disable/enable is used to prevent being preempted by
-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
-	 * cryptd_enqueue_request() being accessed from software interrupts.
 	 */
-	local_bh_disable();
-	preempt_disable();
+	spin_lock_bh(&cpu_queue->qlock);
 	backlog = crypto_get_backlog(&cpu_queue->queue);
 	req = crypto_dequeue_request(&cpu_queue->queue);
-	preempt_enable();
-	local_bh_enable();
+	spin_unlock_bh(&cpu_queue->qlock);
 
 	if (!req)
 		return;
-- 
2.18.0



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH RT 17/22] crypto: scompress - serialize RT percpu scratch buffer access with a local lock
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (15 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 16/22] crypto: cryptd - add a lock instead preempt_disable/local_bh_disable Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 18/22] sched/core: Avoid __schedule() being called twice in a row Steven Rostedt
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Mike Galbraith

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <efault@gmx.de>

[ Upstream commit 1a4eff3f8e743d149be26a414822710aef07fe14 ]

| BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:974
| in_atomic(): 1, irqs_disabled(): 0, pid: 1401, name: cryptomgr_test
| Preemption disabled at:
| [<ffff00000849941c>] scomp_acomp_comp_decomp+0x34/0x1a0
| CPU: 21 PID: 1401 Comm: cryptomgr_test Tainted: G        W        4.16.18-rt9-rt #1
| Hardware name: www.cavium.com crb-1s/crb-1s, BIOS 0.3 Apr 25 2017
| Call trace:
|  dump_backtrace+0x0/0x1c8
|  show_stack+0x24/0x30
|  dump_stack+0xac/0xe8
|  ___might_sleep+0x124/0x188
|  rt_spin_lock+0x40/0x88
|  zip_load_instr+0x44/0x170 [thunderx_zip]
|  zip_deflate+0x184/0x378 [thunderx_zip]
|  zip_compress+0xb0/0x130 [thunderx_zip]
|  zip_scomp_compress+0x48/0x60 [thunderx_zip]
|  scomp_acomp_comp_decomp+0xd8/0x1a0
|  scomp_acomp_compress+0x24/0x30
|  test_acomp+0x15c/0x558
|  alg_test_comp+0xc0/0x128
|  alg_test.part.6+0x120/0x2c0
|  alg_test+0x6c/0xa0
|  cryptomgr_test+0x50/0x58
|  kthread+0x134/0x138
|  ret_from_fork+0x10/0x18

Mainline disables preemption to serialize percpu scratch buffer access,
causing the splat above.  Serialize with a local lock for RT instead.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 crypto/scompress.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/crypto/scompress.c b/crypto/scompress.c
index 2075e2c4e7df..c6b4e265c6bf 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -24,6 +24,7 @@
 #include <linux/cryptouser.h>
 #include <net/netlink.h>
 #include <linux/scatterlist.h>
+#include <linux/locallock.h>
 #include <crypto/scatterwalk.h>
 #include <crypto/internal/acompress.h>
 #include <crypto/internal/scompress.h>
@@ -34,6 +35,7 @@ static void * __percpu *scomp_src_scratches;
 static void * __percpu *scomp_dst_scratches;
 static int scomp_scratch_users;
 static DEFINE_MUTEX(scomp_lock);
+static DEFINE_LOCAL_IRQ_LOCK(scomp_scratches_lock);
 
 #ifdef CONFIG_NET
 static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
@@ -193,7 +195,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 	void **tfm_ctx = acomp_tfm_ctx(tfm);
 	struct crypto_scomp *scomp = *tfm_ctx;
 	void **ctx = acomp_request_ctx(req);
-	const int cpu = get_cpu();
+	const int cpu = local_lock_cpu(scomp_scratches_lock);
 	u8 *scratch_src = *per_cpu_ptr(scomp_src_scratches, cpu);
 	u8 *scratch_dst = *per_cpu_ptr(scomp_dst_scratches, cpu);
 	int ret;
@@ -228,7 +230,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
 					 1);
 	}
 out:
-	put_cpu();
+	local_unlock_cpu(scomp_scratches_lock);
 	return ret;
 }
 
-- 
2.18.0




* [PATCH RT 18/22] sched/core: Avoid __schedule() being called twice in a row
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (16 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 17/22] crypto: scompress - serialize RT percpu scratch buffer access with a local lock Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 19/22] Revert "arm64/xen: Make XEN depend on !RT" Steven Rostedt
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Daniel Bristot de Oliveira, Clark Williams,
	Tommaso Cucinotta, Romulo da Silva de Oliveira, Ingo Molnar,
	Peter Zijlstra

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Daniel Bristot de Oliveira <bristot@redhat.com>

[ Upstream commit 2bb94c48b2ffaabf8c15a51e5cc1b4c541988cab ]

If a worker invokes schedule() then we may have the call chain:
 schedule()
 -> sched_submit_work()
    -> wq_worker_sleeping()
       -> wake_up_worker()
	  -> wake_up_process().

The last wakeup may cause a reschedule, which is unnecessary because we
are already in schedule() and will perform the context switch anyway.

Add a preempt_disable() + preempt_enable_no_resched() around
wq_worker_sleeping() so the context switch could be delayed until
__schedule().

Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Tommaso Cucinotta <tommaso.cucinotta@sssup.it>
Cc: Romulo da Silva de Oliveira <romulo.deoliveira@ufsc.br>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
[bigeasy: rewrite changelog]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/sched/core.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fa5b76255f8c..a5ce37b90fca 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3482,10 +3482,15 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	/*
 	 * If a worker went to sleep, notify and ask workqueue whether
 	 * it wants to wake up a task to maintain concurrency.
+	 * As this function is called inside the schedule() context,
+	 * we disable preemption to avoid it calling schedule() again
+	 * in the possible wakeup of a kworker.
 	 */
-	if (tsk->flags & PF_WQ_WORKER)
+	if (tsk->flags & PF_WQ_WORKER) {
+		preempt_disable();
 		wq_worker_sleeping(tsk);
-
+		preempt_enable_no_resched();
+	}
 
 	if (tsk_is_pi_blocked(tsk))
 		return;
-- 
2.18.0




* [PATCH RT 19/22] Revert "arm64/xen: Make XEN depend on !RT"
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (17 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 18/22] sched/core: Avoid __schedule() being called twice in a row Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 20/22] sched: Allow pinned user tasks to be awakened to the CPU they pinned Steven Rostedt
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Iain Hunter

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

[ Upstream commit c0a308b58829bd4066bce841fe49e8277a0cb32b ]

Iain Hunter reported that Xen guest support on arm64 works without
problems on RT, so there is no reason to keep it disabled.

Reported-by: Iain Hunter <drhunter95@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 arch/arm64/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6ccd878c32c2..ebc261c8620b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -792,7 +792,7 @@ config XEN_DOM0
 
 config XEN
 	bool "Xen guest support on ARM64"
-	depends on ARM64 && OF && !PREEMPT_RT_FULL
+	depends on ARM64 && OF
 	select SWIOTLB_XEN
 	select PARAVIRT
 	help
-- 
2.18.0




* [PATCH RT 20/22] sched: Allow pinned user tasks to be awakened to the CPU they pinned
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (18 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 19/22] Revert "arm64/xen: Make XEN depend on !RT" Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
  2018-09-05 12:28 ` [PATCH RT 22/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, stable-rt, Mike Galbraith

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <efault@gmx.de>

[ Upstream commit cd4d35ef89948221f7cd1751cee453943967364c ]

Since commit 7af443ee16976 ("sched/core: Require cpu_active() in
select_task_rq(), for user tasks") select_fallback_rq() will BUG() if
the CPU to which a task has pinned itself becomes !cpu_active() while
the task slept.
Allow such a task to be woken to that CPU: it will continue running on
the to-be-removed CPU, remove itself from that CPU during takedown_cpu()
(while cpuhp_pin_lock is held), and move to another CPU based on its
mask after the migrate_disable() section has been left.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a5ce37b90fca..6e6bd5262f23 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -980,7 +980,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
 	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 		return false;
 
-	if (is_per_cpu_kthread(p))
+	if (is_per_cpu_kthread(p) || __migrate_disabled(p))
 		return cpu_online(cpu);
 
 	return cpu_active(cpu);
-- 
2.18.0




* [PATCH RT 22/22] Linux 4.14.63-rt41-rc1
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (19 preceding siblings ...)
  2018-09-05 12:28 ` [PATCH RT 20/22] sched: Allow pinned user tasks to be awakened to the CPU they pinned Steven Rostedt
@ 2018-09-05 12:28 ` Steven Rostedt
       [not found] ` <20180905122837.830614967@goodmis.org>
  2018-09-06  7:54 ` [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Sebastian Andrzej Siewior
  22 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:28 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

4.14.63-rt41-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 2af6c89aee6d..ad263cec032a 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt40
+-rt41-rc1
-- 
2.18.0




* Re: [PATCH RT 21/22] Drivers: hv: vmbus: include header for get_irq_regs()
       [not found] ` <20180905122837.830614967@goodmis.org>
@ 2018-09-05 12:34   ` Steven Rostedt
  0 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-05 12:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Bernhard Landauer, Ralf Ramsauer


[ It appears that quilt doesn't use the right mime for the strange
  characters in the change log. I'm replying here as it was bounced
  by the mailing lists ]

On Wed, 05 Sep 2018 08:28:15 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> 4.14.63-rt41-rc1 stable review patch.
> If anyone has any objections, please let me know.
> 
> ------------------
> 
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> [ Upstream commit b9fcc1867cc7921bb8441be327ed58461ed12255 ]
> 
> On !RT the header file providing get_irq_regs() gets pulled in via
> other header files. On RT it does not, and the build fails:
> 
>     drivers/hv/vmbus_drv.c:975 implicit declaration of function ‘get_irq_regs’ [-Werror=implicit-function-declaration]
>     drivers/hv/hv.c:115 implicit declaration of function ‘get_irq_regs’ [-Werror=implicit-function-declaration]
> 
> Add the header for get_irq_regs() to a common header file so it is
> pulled in by both vmbus_drv.c and hv.c for their get_irq_regs() usage.
> 
> Reported-by: Bernhard Landauer <oberon@manjaro.org>
> Reported-by: Ralf Ramsauer <ralf.ramsauer@oth-regensburg.de>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  drivers/hv/hyperv_vmbus.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
> index 49569f8fe038..a3608cd52805 100644
> --- a/drivers/hv/hyperv_vmbus.h
> +++ b/drivers/hv/hyperv_vmbus.h
> @@ -30,6 +30,7 @@
>  #include <linux/atomic.h>
>  #include <linux/hyperv.h>
>  #include <linux/interrupt.h>
> +#include <linux/irq.h>
>  
>  /*
>   * Timeout for services such as KVP and fcopy.



* Re: [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion"
  2018-09-05 12:28 ` [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion" Steven Rostedt
@ 2018-09-06  7:35   ` Sebastian Andrzej Siewior
  2018-09-06  8:38     ` Mike Galbraith
  0 siblings, 1 reply; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-09-06  7:35 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On 2018-09-05 08:28:02 [-0400], Steven Rostedt wrote:
> 4.14.63-rt41-rc1 stable review patch.
> If anyone has any objections, please let me know.
> 
> ------------------
> 
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> [ Upstream commit 2a9c45d8f89112458364285cbe2b0729561953f1 ]
> 
> Drop the Ultraviolet patch. UV looks broken upstream for PREEMPT, too.
> Mike is the only person I know that has such a thing and he isn't going
> to fix this upstream (from 1526977462.6491.1.camel@gmx.de):

I don't think that we need to propagate that revert for stable. I
reverted it in the devel tree because nobody wanted this upstream and I
couldn't test it. For that reason I didn't see the point for having it
in the RT tree.
However, if you want to revert it for stable, be my guest. It probably
will have no impact and if it will people might step forward and fix it
properly / upstream.

Sebastian


* Re: [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook
  2018-09-05 12:28 ` [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook Steven Rostedt
@ 2018-09-06  7:40   ` Sebastian Andrzej Siewior
  2018-09-07 19:28     ` Steven Rostedt
  0 siblings, 1 reply; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-09-06  7:40 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On 2018-09-05 08:28:05 [-0400], Steven Rostedt wrote:
> 4.14.63-rt41-rc1 stable review patch.
> If anyone has any objections, please let me know.

could you please take commit d6914631a84f4 ("irqchip/gic-v3-its: Move
pending table allocation to init time")
  https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?h=linux-4.16.y-rt-rebase&id=d6914631a84f47eaf5647da3bb09d58eca156b3f

instead? This was just an intermediate step and was replaced with Marc's
patch (which either went upstream or is about to).

Sebastian


* Re: [PATCH RT 00/22] Linux 4.14.63-rt41-rc1
  2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
                   ` (21 preceding siblings ...)
       [not found] ` <20180905122837.830614967@goodmis.org>
@ 2018-09-06  7:54 ` Sebastian Andrzej Siewior
  2018-09-06 16:43   ` Steven Rostedt
  22 siblings, 1 reply; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-09-06  7:54 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On 2018-09-05 08:27:54 [-0400], Steven Rostedt wrote:
> Dear RT Folks,
> 
> This is the RT stable review cycle of patch 4.14.63-rt41-rc1.
> 
> Please scream at me if I messed something up. Please test the patches too.
> 
> The -rc release will be uploaded to kernel.org and will be deleted when
> the final release is out. This is just a review release (or release candidate).
> 
> The pre-releases will not be pushed to the git repository, only the
> final release is.
> 
> If all goes well, this patch will be converted to the next main release
> on 9/7/2018.

Your tree (v4.14.63) has
  80d20d35af1ed ("nohz: Fix local_timer_softirq_pending()")
  0a0e0829f9901 ("nohz: Fix missing tick reprogram when interrupting an
                 inline softirq")

which means that the patch "nohz: Prevent erroneous tick stop
invocations" can be reverted / dropped. It might have clashed during the
merge of the stable tree.

Sebastian


* Re: [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion"
  2018-09-06  7:35   ` Sebastian Andrzej Siewior
@ 2018-09-06  8:38     ` Mike Galbraith
  2018-09-06 12:58       ` Steven Rostedt
  0 siblings, 1 reply; 31+ messages in thread
From: Mike Galbraith @ 2018-09-06  8:38 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On Thu, 2018-09-06 at 09:35 +0200, Sebastian Andrzej Siewior wrote:
> On 2018-09-05 08:28:02 [-0400], Steven Rostedt wrote:
> > 4.14.63-rt41-rc1 stable review patch.
> > If anyone has any objections, please let me know.
> > 
> > ------------------
> > 
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > 
> > [ Upstream commit 2a9c45d8f89112458364285cbe2b0729561953f1 ]
> > 
> > Drop the Ultraviolet patch. UV looks broken upstream for PREEMPT, too.
> > Mike is the only person I know that has such a thing and he isn't going
> > to fix this upstream (from 1526977462.6491.1.camel@gmx.de):
> 
> I don't think that we need to propagate that revert for stable. I
> reverted it in the devel tree because nobody wanted this upstream and I
> couldn't test it. For that reason I didn't see the point for having it
> in the RT tree.
> However, if you want to revert it for stable, be my guest. It probably
> will have no impact and if it will people might step forward and fix it
> properly / upstream.

I'm in favor of reverting it as useless cruft.  UV has been broken
forever wrt PREEMPT, and nobody cares.  The original interest in UV RT
support evaporated while 2.6.33-rt was still current (and when getting
it working took a bit more than a spinlock conversion). 

	-Mike


* Re: [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion"
  2018-09-06  8:38     ` Mike Galbraith
@ 2018-09-06 12:58       ` Steven Rostedt
  0 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-06 12:58 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Sebastian Andrzej Siewior, linux-kernel, linux-rt-users,
	Thomas Gleixner, Carsten Emde, John Kacur, Paul Gortmaker,
	Julia Cartwright, Daniel Wagner, tom.zanussi

On Thu, 06 Sep 2018 10:38:16 +0200
Mike Galbraith <efault@gmx.de> wrote:

> On Thu, 2018-09-06 at 09:35 +0200, Sebastian Andrzej Siewior wrote:
> > On 2018-09-05 08:28:02 [-0400], Steven Rostedt wrote:  
> > > 4.14.63-rt41-rc1 stable review patch.
> > > If anyone has any objections, please let me know.
> > > 
> > > ------------------
> > > 
> > > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > > 
> > > [ Upstream commit 2a9c45d8f89112458364285cbe2b0729561953f1 ]
> > > 
> > > Drop the Ultraviolet patch. UV looks broken upstream for PREEMPT, too.
> > > Mike is the only person I know that has such a thing and he isn't going
> > > to fix this upstream (from 1526977462.6491.1.camel@gmx.de):  
> > 
> > I don't think that we need to propagate that revert for stable. I
> > reverted it in the devel tree because nobody wanted this upstream and I
> > couldn't test it. For that reason I didn't see the point for having it
> > in the RT tree.
> > However, if you want to revert it for stable, be my guest. It probably
> > will have no impact and if it will people might step forward and fix it
> > properly / upstream.  
> 
> I'm in favor of reverting it as useless cruft.  UV has been broken
> forever wrt PREEMPT, and nobody cares.  The original interest in UV RT
> support evaporated while 2.6.33-rt was still current (and when getting
> it working took a bit more than a spinlock conversion). 
> 

Yeah, I skipped other reverts as I didn't think it was stable relevant,
but this one seemed like a good idea to backport. As Mike is in favor,
and Sebastian said "be my guest", I'll keep this in.

-- Steve


* Re: [PATCH RT 00/22] Linux 4.14.63-rt41-rc1
  2018-09-06  7:54 ` [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Sebastian Andrzej Siewior
@ 2018-09-06 16:43   ` Steven Rostedt
  2018-09-06 19:30     ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 31+ messages in thread
From: Steven Rostedt @ 2018-09-06 16:43 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On Thu, 6 Sep 2018 09:54:34 +0200
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> On 2018-09-05 08:27:54 [-0400], Steven Rostedt wrote:
> > Dear RT Folks,
> > 
> > This is the RT stable review cycle of patch 4.14.63-rt41-rc1.
> > 
> > Please scream at me if I messed something up. Please test the patches too.
> > 
> > The -rc release will be uploaded to kernel.org and will be deleted when
> > the final release is out. This is just a review release (or release candidate).
> > 
> > The pre-releases will not be pushed to the git repository, only the
> > final release is.
> > 
> > If all goes well, this patch will be converted to the next main release
> > on 9/7/2018.  
> 
> Your tree (v4.14.63) has
>   80d20d35af1ed ("nohz: Fix local_timer_softirq_pending()")
>   0a0e0829f9901 ("nohz: Fix missing tick reprogram when interrupting an
>                  inline softirq")
> 
> which means that the patch "nohz: Prevent erroneous tick stop
> invocations" can be reverted / dropped. It might have clashed during the
> emerge of the stable tree.
>

My tree also has this:

commit 5536f5491a2e098 

    softirq: keep the 'softirq pending' check RT-only
    
    The patch "nohz: Prevent erroneous tick stop invocations" was merged
    differently upstream. The original issue where a slow box could lock up
    with a pending timer remained. I currently assume that this is a RT only
    issue and keep the patch as RT only.
    
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>


What needs to be done?

-- Steve


* Re: [PATCH RT 00/22] Linux 4.14.63-rt41-rc1
  2018-09-06 16:43   ` Steven Rostedt
@ 2018-09-06 19:30     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-09-06 19:30 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On 2018-09-06 12:43:49 [-0400], Steven Rostedt wrote:
> > Your tree (v4.14.63) has
> >   80d20d35af1ed ("nohz: Fix local_timer_softirq_pending()")
> >   0a0e0829f9901 ("nohz: Fix missing tick reprogram when interrupting an
> >                  inline softirq")
> > 
> > which means that the patch "nohz: Prevent erroneous tick stop
> > invocations" can be reverted / dropped. It might have clashed during the
> > merge of the stable tree.
> >
> 
> My tree also has this:
> 
> commit 5536f5491a2e098 
> 
>     softirq: keep the 'softirq pending' check RT-only
>     
>     The patch "nohz: Prevent erroneous tick stop invocations" was merged
>     differently upstream. The original issue where a slow box could lock up
>     with a pending timer remained. I currently assume that this is a RT only
>     issue and keep the patch as RT only.
>     
>     Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> 
> What needs to be done?

This commit can be removed / reverted. The two commits (80d20d35af1ed
and 0a0e0829f9901) fix the issue and this duct-tape commit
(5536f5491a2e098) is no longer required.

> -- Steve

Sebastian


* Re: [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook
  2018-09-06  7:40   ` Sebastian Andrzej Siewior
@ 2018-09-07 19:28     ` Steven Rostedt
  0 siblings, 0 replies; 31+ messages in thread
From: Steven Rostedt @ 2018-09-07 19:28 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi

On Thu, 6 Sep 2018 09:40:54 +0200
Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> On 2018-09-05 08:28:05 [-0400], Steven Rostedt wrote:
> > 4.14.63-rt41-rc1 stable review patch.
> > If anyone has any objections, please let me know.  
> 
> could you please take commit d6914631a84f4 ("irqchip/gic-v3-its: Move
> pending table allocation to init time")
>   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?h=linux-4.16.y-rt-rebase&id=d6914631a84f47eaf5647da3bb09d58eca156b3f
> 
> instead? This was just an intermediate step and was replaced with Marc's
> patch (which either went upstream or is about to).

That doesn't apply without this patch. Should I apply this patch and
that one?

-- Steve


end of thread, other threads:[~2018-09-07 19:28 UTC | newest]

Thread overview: 31+ messages
2018-09-05 12:27 [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
2018-09-05 12:27 ` [PATCH RT 01/22] sched/fair: Fix CFS bandwidth control lockdep DEADLOCK report Steven Rostedt
2018-09-05 12:27 ` [PATCH RT 02/22] locallock: provide {get,put}_locked_ptr() variants Steven Rostedt
2018-09-05 12:27 ` [PATCH RT 03/22] squashfs: make use of local lock in multi_cpu decompressor Steven Rostedt
2018-09-05 12:27 ` [PATCH RT 04/22] PM / suspend: Prevent might sleep splats (updated) Steven Rostedt
2018-09-05 12:27 ` [PATCH RT 05/22] PM / wakeup: Make events_lock a RAW_SPINLOCK Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 06/22] PM / s2idle: Make s2idle_wait_head swait based Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 07/22] seqlock: provide the same ordering semantics as mainline Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 08/22] Revert "x86: UV: raw_spinlock conversion" Steven Rostedt
2018-09-06  7:35   ` Sebastian Andrzej Siewior
2018-09-06  8:38     ` Mike Galbraith
2018-09-06 12:58       ` Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 09/22] Revert "timer: delay waking softirqs from the jiffy tick" Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 10/22] irqchip/gic-v3-its: Make its_lock a raw_spin_lock_t Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 11/22] irqchip/gic-v3-its: Move ITS ->pend_page allocation into an early CPU up hook Steven Rostedt
2018-09-06  7:40   ` Sebastian Andrzej Siewior
2018-09-07 19:28     ` Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 12/22] sched/migrate_disable: fallback to preempt_disable() instead barrier() Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 13/22] x86/ioapic: Dont let setaffinity unmask threaded EOI interrupt too early Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 14/22] efi: Allow efi=runtime Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 15/22] efi: Disable runtime services on RT Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 16/22] crypto: cryptd - add a lock instead preempt_disable/local_bh_disable Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 17/22] crypto: scompress - serialize RT percpu scratch buffer access with a local lock Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 18/22] sched/core: Avoid __schedule() being called twice in a row Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 19/22] Revert "arm64/xen: Make XEN depend on !RT" Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 20/22] sched: Allow pinned user tasks to be awakened to the CPU they pinned Steven Rostedt
2018-09-05 12:28 ` [PATCH RT 22/22] Linux 4.14.63-rt41-rc1 Steven Rostedt
     [not found] ` <20180905122837.830614967@goodmis.org>
2018-09-05 12:34   ` [PATCH RT 21/22] Drivers: hv: vmbus: include header for get_irq_regs() Steven Rostedt
2018-09-06  7:54 ` [PATCH RT 00/22] Linux 4.14.63-rt41-rc1 Sebastian Andrzej Siewior
2018-09-06 16:43   ` Steven Rostedt
2018-09-06 19:30     ` Sebastian Andrzej Siewior
