* [PATCH RT 00/15] Linux 4.9.65-rt57-rc1
From: Steven Rostedt @ 2017-12-01 15:49 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi

[ Sorry for the duplicate, I tried to cancel sending, but quilt decided
  to send anyway :-p ]

Dear RT Folks,

This is the RT stable review cycle of patch 4.9.65-rt57-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this patch will be converted to the next main release
on 12/5/2017.

Enjoy,

-- Steve


To build 4.9.65-rt57-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.9.tar.xz

  http://www.kernel.org/pub/linux/kernel/v4.x/patch-4.9.65.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9.65-rt57-rc1.patch.xz

You can also build from 4.9.65-rt56 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/4.9/incr/patch-4.9.65-rt56-rt57-rc1.patch.xz


Changes from 4.9.65-rt56:

---


Alex Shi (1):
      cpu_pm: replace raw_notifier to atomic_notifier

Mike Galbraith (2):
      rtmutex: Fix lock stealing logic
      kernel/hrtimer/hotplug: don't wake ktimersoftd while holding the hrtimer base lock

Sebastian Andrzej Siewior (10):
      Revert "fs: jbd2: pull your plug when waiting for space"
      posixtimer: init timer only with CONFIG_POSIX_TIMERS enabled
      PM / CPU: replace raw_notifier with atomic_notifier (fixup)
      kernel/hrtimer: migrate deferred timer on CPU down
      net: take the tcp_sk_lock lock with BH disabled
      kernel/hrtimer: don't wakeup a process while holding the hrtimer base lock
      Bluetooth: avoid recursive locking in hci_send_to_channel()
      iommu/amd: Use raw_cpu_ptr() instead of get_cpu_ptr() for ->flush_queue
      rt/locking: allow recursive local_trylock()
      locking/rtmutex: don't drop the wait_lock twice

Steven Rostedt (VMware) (2):
      Revert "memcontrol: Prevent scheduling while atomic in cgroup code"
      Linux 4.9.65-rt57-rc1

----
 drivers/iommu/amd_iommu.c |  4 +--
 fs/jbd2/checkpoint.c      |  2 --
 include/linux/init_task.h |  2 +-
 include/linux/locallock.h |  9 ++++++
 kernel/cpu_pm.c           | 50 +++++++++-----------------------
 kernel/locking/rtmutex.c  | 74 +++++++++++++++++++++++------------------------
 kernel/time/hrtimer.c     | 35 ++++++++++++++++------
 localversion-rt           |  2 +-
 mm/memcontrol.c           | 13 ++++-----
 net/bluetooth/hci_sock.c  | 17 +++++++----
 net/ipv4/tcp_ipv4.c       |  8 ++---
 11 files changed, 108 insertions(+), 108 deletions(-)

* [PATCH RT 01/15] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, stable, Haiyang HY1 Tan

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:

       refill_stock()
          get_cpu_var()
          drain_stock()
             res_counter_uncharge()
                res_counter_uncharge_until()
                   spin_lock() <== boom

But commit 3e32cb2e0a12b ("mm: memcontrol: lockless page counters")
replaced the calls to res_counter_uncharge() in drain_stock() with the
lockless function page_counter_uncharge(). There is no spinlock there
anymore and thus no more reason to have that local lock.
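
As an illustration of why plain IRQ-off protection now suffices, here is
a simplified sketch of the post-3e32cb2e0a12b drain path (not the
verbatim 4.9 source): page_counter_uncharge() is a lockless atomic
update, so nothing in the IRQ-off section takes a spinlock:

  /* simplified sketch of mm/memcontrol.c, for illustration only */
  static void drain_stock(struct memcg_stock_pcp *stock)
  {
  	struct mem_cgroup *old = stock->cached;

  	if (stock->nr_pages) {
  		/* lockless atomic update, no spinlock taken */
  		page_counter_uncharge(&old->memory, stock->nr_pages);
  		stock->nr_pages = 0;
  	}
  	stock->cached = NULL;
  }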

Cc: <stable@vger.kernel.org>
Reported-by: Haiyang HY1 Tan <tanhy1@lenovo.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
[bigeasy: That upstream commit appeared in v3.19 and the patch in
  question in v3.18.7-rt2. v3.18 still seems to be maintained, so I
  guess that v3.18 would still need the local locks that we are about
  to remove here. I am not sure if any earlier versions have the patch
  backported.
  The stable tag here is because Haiyang reported (and debugged) a crash
  in 4.4-RT with this patch applied (which has get_cpu_light() instead
  of the local locks it gained in v4.9-RT).
  https://lkml.kernel.org/r/05AA4EC5C6EC1D48BE2CDCFF3AE0B8A637F78A15@CNMAILEX04.lenovo.com
]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/memcontrol.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 12b94909ba7b..c04403033aec 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1698,7 +1698,6 @@ struct memcg_stock_pcp {
 #define FLUSHING_CACHED_CHARGE	0
 };
 static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
-static DEFINE_LOCAL_IRQ_LOCK(memcg_stock_ll);
 static DEFINE_MUTEX(percpu_charge_mutex);
 
 /**
@@ -1721,7 +1720,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	if (nr_pages > CHARGE_BATCH)
 		return ret;
 
-	local_lock_irqsave(memcg_stock_ll, flags);
+	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
@@ -1729,7 +1728,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 		ret = true;
 	}
 
-	local_unlock_irqrestore(memcg_stock_ll, flags);
+	local_irq_restore(flags);
 
 	return ret;
 }
@@ -1756,13 +1755,13 @@ static void drain_local_stock(struct work_struct *dummy)
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
 
-	local_lock_irqsave(memcg_stock_ll, flags);
+	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
 
-	local_unlock_irqrestore(memcg_stock_ll, flags);
+	local_irq_restore(flags);
 }
 
 /*
@@ -1774,7 +1773,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
 
-	local_lock_irqsave(memcg_stock_ll, flags);
+	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached != memcg) { /* reset if necessary */
@@ -1783,7 +1782,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	}
 	stock->nr_pages += nr_pages;
 
-	local_unlock_irqrestore(memcg_stock_ll, flags);
+	local_irq_restore(flags);
 }
 
 /*
-- 
2.13.2

* [PATCH RT 02/15] Revert "fs: jbd2: pull your plug when waiting for space"
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, stable

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

This reverts commit "fs: jbd2: pull your plug when waiting for space".
This was a duct-tape fix which shouldn't be needed since commit
"locking/rt-mutex: fix deadlock in device mapper / block-IO".

Cc: stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 fs/jbd2/checkpoint.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
index 6e18a06aaabe..684996c8a3a4 100644
--- a/fs/jbd2/checkpoint.c
+++ b/fs/jbd2/checkpoint.c
@@ -116,8 +116,6 @@ void __jbd2_log_wait_for_space(journal_t *journal)
 	nblocks = jbd2_space_needed(journal);
 	while (jbd2_log_space_left(journal) < nblocks) {
 		write_unlock(&journal->j_state_lock);
-		if (current->plug)
-			io_schedule();
 		mutex_lock(&journal->j_checkpoint_mutex);
 
 		/*
-- 
2.13.2

* [PATCH RT 03/15] rtmutex: Fix lock stealing logic
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Mike Galbraith

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <efault@gmx.de>

1. When trying to acquire an rtmutex, we first try to grab it without
queueing the waiter, and explicitly check for that initial attempt
in the !waiter path of __try_to_take_rt_mutex().  Checking whether
the lock taker is top waiter before allowing a steal attempt in that
path is a thinko: the lock taker has not yet blocked.

2. It seems wrong to change the definition of rt_mutex_waiter_less()
to mean less or perhaps equal when we have an rt_mutex_waiter_equal().

Remove the thinko, restore rt_mutex_waiter_less(), implement and use
rt_mutex_steal() based upon rt_mutex_waiter_less/equal(), moving all
qualification criteria into the function itself.
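
As a rough user-space model of the resulting steal rule (hypothetical
names, ignoring the deadline comparison used for DL tasks; kernel
priority semantics: a lower value means higher priority):

  #include <stdbool.h>

  struct waiter { int prio; bool is_rt; };

  static bool may_steal(const struct waiter *w, const struct waiter *top,
  			bool lateral)
  {
  	if (w->prio < top->prio)	/* strictly higher priority */
  		return true;
  	if (!lateral || w->is_rt)	/* STEAL_NORMAL, or an RT task */
  		return false;
  	return w->prio == top->prio;	/* lateral steal among peers */
  }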

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/rtmutex.c | 73 ++++++++++++++++++++++++------------------------
 1 file changed, 36 insertions(+), 37 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index b73cd7c87551..5dbf6789383b 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -235,25 +235,18 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
 }
 #endif
 
-#define STEAL_NORMAL  0
-#define STEAL_LATERAL 1
 /*
  * Only use with rt_mutex_waiter_{less,equal}()
  */
-#define task_to_waiter(p)	\
-	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
+#define task_to_waiter(p) &(struct rt_mutex_waiter) \
+	{ .prio = (p)->prio, .deadline = (p)->dl.deadline, .task = (p) }
 
 static inline int
 rt_mutex_waiter_less(struct rt_mutex_waiter *left,
-		     struct rt_mutex_waiter *right, int mode)
+		     struct rt_mutex_waiter *right)
 {
-	if (mode == STEAL_NORMAL) {
-		if (left->prio < right->prio)
-			return 1;
-	} else {
-		if (left->prio <= right->prio)
-			return 1;
-	}
+	if (left->prio < right->prio)
+		return 1;
 
 	/*
 	 * If both waiters have dl_prio(), we check the deadlines of the
@@ -286,6 +279,27 @@ rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
 	return 1;
 }
 
+#define STEAL_NORMAL  0
+#define STEAL_LATERAL 1
+
+static inline int
+rt_mutex_steal(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, int mode)
+{
+	struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
+
+	if (waiter == top_waiter || rt_mutex_waiter_less(waiter, top_waiter))
+		return 1;
+
+	/*
+	 * Note that RT tasks are excluded from lateral-steals
+	 * to prevent the introduction of an unbounded latency.
+	 */
+	if (mode == STEAL_NORMAL || rt_task(waiter->task))
+		return 0;
+
+	return rt_mutex_waiter_equal(waiter, top_waiter);
+}
+
 static void
 rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
 {
@@ -297,7 +311,7 @@ rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
 	while (*link) {
 		parent = *link;
 		entry = rb_entry(parent, struct rt_mutex_waiter, tree_entry);
-		if (rt_mutex_waiter_less(waiter, entry, STEAL_NORMAL)) {
+		if (rt_mutex_waiter_less(waiter, entry)) {
 			link = &parent->rb_left;
 		} else {
 			link = &parent->rb_right;
@@ -336,7 +350,7 @@ rt_mutex_enqueue_pi(struct task_struct *task, struct rt_mutex_waiter *waiter)
 	while (*link) {
 		parent = *link;
 		entry = rb_entry(parent, struct rt_mutex_waiter, pi_tree_entry);
-		if (rt_mutex_waiter_less(waiter, entry, STEAL_NORMAL)) {
+		if (rt_mutex_waiter_less(waiter, entry)) {
 			link = &parent->rb_left;
 		} else {
 			link = &parent->rb_right;
@@ -847,6 +861,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
  * @task:   The task which wants to acquire the lock
  * @waiter: The waiter that is queued to the lock's wait tree if the
  *	    callsite called task_blocked_on_lock(), otherwise NULL
+ * @mode:   Lock steal mode (STEAL_NORMAL, STEAL_LATERAL)
  */
 static int __try_to_take_rt_mutex(struct rt_mutex *lock,
 				  struct task_struct *task,
@@ -886,14 +901,11 @@ static int __try_to_take_rt_mutex(struct rt_mutex *lock,
 	 */
 	if (waiter) {
 		/*
-		 * If waiter is not the highest priority waiter of
-		 * @lock, give up.
+		 * If waiter is not the highest priority waiter of @lock,
+		 * or its peer when lateral steal is allowed, give up.
 		 */
-		if (waiter != rt_mutex_top_waiter(lock)) {
-			/* XXX rt_mutex_waiter_less() ? */
+		if (!rt_mutex_steal(lock, waiter, mode))
 			return 0;
-		}
-
 		/*
 		 * We can acquire the lock. Remove the waiter from the
 		 * lock waiters tree.
@@ -910,25 +922,12 @@ static int __try_to_take_rt_mutex(struct rt_mutex *lock,
 		 * not need to be dequeued.
 		 */
 		if (rt_mutex_has_waiters(lock)) {
-			struct task_struct *pown = rt_mutex_top_waiter(lock)->task;
-
-			if (task != pown)
-				return 0;
-
-			/*
-			 * Note that RT tasks are excluded from lateral-steals
-			 * to prevent the introduction of an unbounded latency.
-			 */
-			if (rt_task(task))
-				mode = STEAL_NORMAL;
 			/*
-			 * If @task->prio is greater than or equal to
-			 * the top waiter priority (kernel view),
-			 * @task lost.
+			 * If @task->prio is greater than the top waiter
+			 * priority (kernel view), or equal to it when a
+			 * lateral steal is forbidden, @task lost.
 			 */
-			if (!rt_mutex_waiter_less(task_to_waiter(task),
-						  rt_mutex_top_waiter(lock),
-						  mode))
+			if (!rt_mutex_steal(lock, task_to_waiter(task), mode))
 				return 0;
 			/*
 			 * The current top waiter stays enqueued. We
-- 
2.13.2

* [PATCH RT 04/15] posixtimer: init timer only with CONFIG_POSIX_TIMERS enabled
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

In v4.11 it is possible to disable the posix timers, so we must not
attempt to initialize the posix timer list in the task_struct on RT
with !CONFIG_POSIX_TIMERS. This patch does so.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/init_task.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index a56e263f5005..526ecfc58909 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -150,7 +150,7 @@ extern struct task_group root_task_group;
 # define INIT_PERF_EVENTS(tsk)
 #endif
 
-#ifdef CONFIG_PREEMPT_RT_BASE
+#if defined(CONFIG_POSIX_TIMERS) && defined(CONFIG_PREEMPT_RT_BASE)
 # define INIT_TIMER_LIST		.posix_timer_list = NULL,
 #else
 # define INIT_TIMER_LIST
-- 
2.13.2

* [PATCH RT 05/15] cpu_pm: replace raw_notifier to atomic_notifier
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Anders Roxell, Rik van Riel,
	Rafael J. Wysocki, Daniel Lezcano

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Alex Shi <alex.shi@linaro.org>

This patch replaces a rwlock and a raw notifier with an atomic notifier,
which is protected by a spinlock and RCU.

The first reason for this replacement is a 'scheduling while atomic'
bug of the RT kernel on arm/arm64 platforms. On arm/arm64, the rwlock
cpu_pm_notifier_lock in cpu_pm causes a potential schedule after irq
disable in the idle call chain:

cpu_startup_entry
  cpu_idle_loop
    local_irq_disable()
    cpuidle_idle_call
      call_cpuidle
        cpuidle_enter
          cpuidle_enter_state
            ->enter :arm_enter_idle_state
              cpu_pm_enter/exit
                CPU_PM_CPU_IDLE_ENTER
                  read_lock(&cpu_pm_notifier_lock); <-- sleep in idle
                     __rt_spin_lock();
                        schedule();

The kernel panic is here:
[    4.609601] BUG: scheduling while atomic: swapper/1/0/0x00000002
[    4.609608] [<ffff0000086fae70>] arm_enter_idle_state+0x18/0x70
[    4.609614] Modules linked in:
[    4.609615] [<ffff0000086f9298>] cpuidle_enter_state+0xf0/0x218
[    4.609620] [<ffff0000086f93f8>] cpuidle_enter+0x18/0x20
[    4.609626] Preemption disabled at:
[    4.609627] [<ffff0000080fa234>] call_cpuidle+0x24/0x40
[    4.609635] [<ffff000008882fa4>] schedule_preempt_disabled+0x1c/0x28
[    4.609639] [<ffff0000080fa49c>] cpu_startup_entry+0x154/0x1f8
[    4.609645] [<ffff00000808e004>] secondary_start_kernel+0x15c/0x1a0

Daniel Lezcano said this notification is needed on arm/arm64 platforms.
Sebastian suggested using an atomic_notifier instead of the rwlock,
which not only removes the sleeping in idle but also gives a latency
improvement.
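
For reference, an atomic notifier chain serializes (un)registration with
an internal spinlock and walks the chain under RCU, so the call path is
usable in atomic context. A minimal usage sketch (hypothetical example
chain, not the cpu_pm code):

  #include <linux/notifier.h>

  static ATOMIC_NOTIFIER_HEAD(example_chain);

  static int example_cb(struct notifier_block *nb, unsigned long event,
  			void *data)
  {
  	return NOTIFY_OK;	/* nothing to do in this sketch */
  }

  static struct notifier_block example_nb = {
  	.notifier_call = example_cb,
  };

  static int example_init(void)
  {
  	/* registration takes the chain's spinlock briefly */
  	atomic_notifier_chain_register(&example_chain, &example_nb);
  	/* the call path is RCU read-side only, fine with IRQs disabled */
  	atomic_notifier_call_chain(&example_chain, 0, NULL);
  	return 0;
  }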

This patch passed Fengguang's 0day testing.

Signed-off-by: Alex Shi <alex.shi@linaro.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Anders Roxell <anders.roxell@linaro.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/cpu_pm.c | 43 ++++++-------------------------------------
 1 file changed, 6 insertions(+), 37 deletions(-)

diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 009cc9a17d95..10f4640f991e 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -22,14 +22,13 @@
 #include <linux/spinlock.h>
 #include <linux/syscore_ops.h>
 
-static DEFINE_RWLOCK(cpu_pm_notifier_lock);
-static RAW_NOTIFIER_HEAD(cpu_pm_notifier_chain);
+static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
 
 static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
 {
 	int ret;
 
-	ret = __raw_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
+	ret = __atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
 		nr_to_call, nr_calls);
 
 	return notifier_to_errno(ret);
@@ -47,14 +46,7 @@ static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
  */
 int cpu_pm_register_notifier(struct notifier_block *nb)
 {
-	unsigned long flags;
-	int ret;
-
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
-	ret = raw_notifier_chain_register(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
-
-	return ret;
+	return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
 
@@ -69,14 +61,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
  */
 int cpu_pm_unregister_notifier(struct notifier_block *nb)
 {
-	unsigned long flags;
-	int ret;
-
-	write_lock_irqsave(&cpu_pm_notifier_lock, flags);
-	ret = raw_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
-	write_unlock_irqrestore(&cpu_pm_notifier_lock, flags);
-
-	return ret;
+	return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
 
@@ -100,7 +85,6 @@ int cpu_pm_enter(void)
 	int nr_calls;
 	int ret = 0;
 
-	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
@@ -108,7 +92,6 @@ int cpu_pm_enter(void)
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
 
 	return ret;
 }
@@ -128,13 +111,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
  */
 int cpu_pm_exit(void)
 {
-	int ret;
-
-	read_lock(&cpu_pm_notifier_lock);
-	ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
-
-	return ret;
+	return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
 }
 EXPORT_SYMBOL_GPL(cpu_pm_exit);
 
@@ -159,7 +136,6 @@ int cpu_cluster_pm_enter(void)
 	int nr_calls;
 	int ret = 0;
 
-	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
 	if (ret)
 		/*
@@ -167,7 +143,6 @@ int cpu_cluster_pm_enter(void)
 		 * PM entry who are notified earlier to prepare for it.
 		 */
 		cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
 
 	return ret;
 }
@@ -190,13 +165,7 @@ EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter);
  */
 int cpu_cluster_pm_exit(void)
 {
-	int ret;
-
-	read_lock(&cpu_pm_notifier_lock);
-	ret = cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
-	read_unlock(&cpu_pm_notifier_lock);
-
-	return ret;
+	return cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL);
 }
 EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit);
 
-- 
2.13.2

* [PATCH RT 06/15] PM / CPU: replace raw_notifier with atomic_notifier (fixup)
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The original patch changed between its posting and what finally went
into Rafael's tree, so here is the delta.
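
For reference, RCU_NONIDLE() in <linux/rcupdate.h> is, in this kernel,
roughly the following; the hunk below open-codes it around the notifier
call because the return value is needed:

  #define RCU_NONIDLE(a) \
  	do { \
  		rcu_irq_enter_irqson(); \
  		do { a; } while (0); \
  		rcu_irq_exit_irqson(); \
  	} while (0)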

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/cpu_pm.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 10f4640f991e..67b02e138a47 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -28,8 +28,15 @@ static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls)
 {
 	int ret;
 
+	/*
+	 * __atomic_notifier_call_chain() has an RCU read-side critical
+	 * section, which could be dysfunctional in cpu idle. Copy the
+	 * RCU_NONIDLE() code to let RCU know this.
+	 */
+	rcu_irq_enter_irqson();
 	ret = __atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL,
 		nr_to_call, nr_calls);
+	rcu_irq_exit_irqson();
 
 	return notifier_to_errno(ret);
 }
-- 
2.13.2

* [PATCH RT 07/15] kernel/hrtimer: migrate deferred timer on CPU down
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, stable-rt, Mike Galbraith

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

hrtimers which were deferred to the softirq context and expire between
softirq shutdown and hrtimer migration are left dangling. If the CPU
goes back up, the list head will be initialized and this corrupts the
timers' list. It remains unnoticed until a hrtimer_cancel().
This patch moves those timers to the new base so they will expire.

Cc: stable-rt@vger.kernel.org
Reported-by: Mike Galbraith <efault@gmx.de>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/time/hrtimer.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 0797bd6eadb4..39e4435b8451 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1860,6 +1860,11 @@ static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
 		 */
 		enqueue_hrtimer(timer, new_base);
 	}
+#ifdef CONFIG_PREEMPT_RT_BASE
+	list_splice_tail(&old_base->expired, &new_base->expired);
+	if (!list_empty(&new_base->expired))
+		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+#endif
 }
 
 int hrtimers_dead_cpu(unsigned int scpu)
-- 
2.13.2

* [PATCH RT 08/15] net: take the tcp_sk_lock lock with BH disabled
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Jacek Konieczny

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Lockdep may complain about an unsafe locking scenario:
|      CPU0                    CPU1
|      ----                    ----
| lock((tcp_sk_lock).lock);
|                              lock(&per_cpu(local_softirq_locks[i], __cpu).lock);
|                              lock((tcp_sk_lock).lock);
| lock(&per_cpu(local_softirq_locks[i], __cpu).lock);

in the call paths:
	do_current_softirqs -> tcp_v4_send_ack()
vs
	tcp_v4_send_reset -> do_current_softirqs().

This should not happen since local_softirq_locks is per CPU. Reversing
the order makes lockdep happy.
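
Schematically, the rule being enforced (matching the hunks below): both
send paths must nest the two locks in the same order, since on RT
local_bh_disable() takes the per-CPU local_softirq_locks lock and
lockdep only requires that the order be consistent across all paths:

  /* in tcp_v4_send_reset() and tcp_v4_send_ack() after the patch: */
  local_bh_disable();		/* on RT: takes local_softirq_locks */
  local_lock(tcp_sk_lock);
  /* ... ip_send_unicast_reply() ... */
  local_unlock(tcp_sk_lock);
  local_bh_enable();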

Reported-by: Jacek Konieczny <jajcus@jajcus.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/ipv4/tcp_ipv4.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 3336e1534bc5..3b7298459c87 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -698,8 +698,8 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
 
 	arg.tos = ip_hdr(skb)->tos;
 
-	local_lock(tcp_sk_lock);
 	local_bh_disable();
+	local_lock(tcp_sk_lock);
 	ip_send_unicast_reply(*this_cpu_ptr(net->ipv4.tcp_sk),
 			      skb, &TCP_SKB_CB(skb)->header.h4.opt,
 			      ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
@@ -707,8 +707,8 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
 
 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
 	__TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
-	local_bh_enable();
 	local_unlock(tcp_sk_lock);
+	local_bh_enable();
 
 #ifdef CONFIG_TCP_MD5SIG
 out:
@@ -784,16 +784,16 @@ static void tcp_v4_send_ack(struct net *net,
 	if (oif)
 		arg.bound_dev_if = oif;
 	arg.tos = tos;
-	local_lock(tcp_sk_lock);
 	local_bh_disable();
+	local_lock(tcp_sk_lock);
 	ip_send_unicast_reply(*this_cpu_ptr(net->ipv4.tcp_sk),
 			      skb, &TCP_SKB_CB(skb)->header.h4.opt,
 			      ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
 			      &arg, arg.iov[0].iov_len);
 
 	__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
-	local_bh_enable();
 	local_unlock(tcp_sk_lock);
+	local_bh_enable();
 }
 
 static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
-- 
2.13.2

* [PATCH RT 09/15] kernel/hrtimer: dont wakeup a process while holding the hrtimer base lock
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Mike Galbraith

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

We must not wake any process (and thus acquire the pi->lock) while
holding the hrtimer's base lock. Usually this does not happen because
the hrtimer callback is invoked in IRQ context and so
raise_softirq_irqoff() does not wake up a process.
During CPU hotplug, however, it might get called from
hrtimers_dead_cpu(), which would wake up the thread immediately.
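
The fix follows the usual pattern of computing the decision while
holding the lock and acting on it only after the lock has been dropped;
schematically (a simplified form of the hunks below):

  /* e.g. in hrtimer_run_queues(): */
  raw_spin_lock(&cpu_base->lock);
  raise = __hrtimer_run_queues(cpu_base, now);	/* now returns a flag */
  raw_spin_unlock(&cpu_base->lock);
  if (raise)	/* may wake ktimersoftd, takes its pi->lock */
  	raise_softirq_irqoff(HRTIMER_SOFTIRQ);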

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/time/hrtimer.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 39e4435b8451..8a20ef6919d9 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1440,7 +1440,7 @@ static inline int hrtimer_rt_defer(struct hrtimer *timer) { return 0; }
 
 static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
 
-static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
+static int __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
 {
 	struct hrtimer_clock_base *base = cpu_base->clock_base;
 	unsigned int active = cpu_base->active_bases;
@@ -1490,8 +1490,7 @@ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
 				raise = 1;
 		}
 	}
-	if (raise)
-		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+	return raise;
 }
 
 #ifdef CONFIG_HIGH_RES_TIMERS
@@ -1505,6 +1504,7 @@ void hrtimer_interrupt(struct clock_event_device *dev)
 	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
 	ktime_t expires_next, now, entry_time, delta;
 	int retries = 0;
+	int raise;
 
 	BUG_ON(!cpu_base->hres_active);
 	cpu_base->nr_events++;
@@ -1523,7 +1523,7 @@ void hrtimer_interrupt(struct clock_event_device *dev)
 	 */
 	cpu_base->expires_next.tv64 = KTIME_MAX;
 
-	__hrtimer_run_queues(cpu_base, now);
+	raise = __hrtimer_run_queues(cpu_base, now);
 
 	/* Reevaluate the clock bases for the next expiry */
 	expires_next = __hrtimer_get_next_event(cpu_base);
@@ -1534,6 +1534,8 @@ void hrtimer_interrupt(struct clock_event_device *dev)
 	cpu_base->expires_next = expires_next;
 	cpu_base->in_hrtirq = 0;
 	raw_spin_unlock(&cpu_base->lock);
+	if (raise)
+		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
 
 	/* Reprogramming necessary ? */
 	if (!tick_program_event(expires_next, 0)) {
@@ -1613,6 +1615,7 @@ void hrtimer_run_queues(void)
 {
 	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
 	ktime_t now;
+	int raise;
 
 	if (__hrtimer_hres_active(cpu_base))
 		return;
@@ -1631,8 +1634,10 @@ void hrtimer_run_queues(void)
 
 	raw_spin_lock(&cpu_base->lock);
 	now = hrtimer_update_base(cpu_base);
-	__hrtimer_run_queues(cpu_base, now);
+	raise = __hrtimer_run_queues(cpu_base, now);
 	raw_spin_unlock(&cpu_base->lock);
+	if (raise)
+		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
 }
 
 /*
-- 
2.13.2

* [PATCH RT 10/15] kernel/hrtimer/hotplug: dont wake ktimersoftd while holding the hrtimer base lock
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Mike Galbraith

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Mike Galbraith <efault@gmx.de>

The patch "kernel/hrtimer: don't wakeup a process while holding the
hrtimer base lock" missed a path, namely hrtimers_dead_cpu() ->
migrate_hrtimer_list().  Defer raising the softirq until after the base
lock has been released there as well.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/time/hrtimer.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 8a20ef6919d9..369203af6406 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1837,7 +1837,7 @@ int hrtimers_prepare_cpu(unsigned int cpu)
 
 #ifdef CONFIG_HOTPLUG_CPU
 
-static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
+static int migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
 				struct hrtimer_clock_base *new_base)
 {
 	struct hrtimer *timer;
@@ -1867,15 +1867,19 @@ static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
 	}
 #ifdef CONFIG_PREEMPT_RT_BASE
 	list_splice_tail(&old_base->expired, &new_base->expired);
-	if (!list_empty(&new_base->expired))
-		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+	/*
+	 * Tell the caller to raise HRTIMER_SOFTIRQ.  We can't safely
+	 * acquire ktimersoftd->pi_lock while the base lock is held.
+	 */
+	return !list_empty(&new_base->expired);
 #endif
+	return 0;
 }
 
 int hrtimers_dead_cpu(unsigned int scpu)
 {
 	struct hrtimer_cpu_base *old_base, *new_base;
-	int i;
+	int i, raise = 0;
 
 	BUG_ON(cpu_online(scpu));
 	tick_cancel_sched_timer(scpu);
@@ -1891,13 +1895,16 @@ int hrtimers_dead_cpu(unsigned int scpu)
 	raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
 
 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		migrate_hrtimer_list(&old_base->clock_base[i],
-				     &new_base->clock_base[i]);
+		raise |= migrate_hrtimer_list(&old_base->clock_base[i],
+					      &new_base->clock_base[i]);
 	}
 
 	raw_spin_unlock(&old_base->lock);
 	raw_spin_unlock(&new_base->lock);
 
+	if (raise)
+		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+
 	/* Check, if we got expired work to do */
 	__hrtimer_peek_ahead_timers();
 	local_irq_enable();
-- 
2.13.2

* [PATCH RT 11/15] Bluetooth: avoid recursive locking in hci_send_to_channel()
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Marcel Holtmann, Johan Hedberg, rt-stable,
	Mart van de Wege

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Mart reported a deadlock in -RT in the call path:
  hci_send_monitor_ctrl_event() -> hci_send_to_channel()

because both functions acquire the same read lock, hci_sk_list.lock.
This is also a mainline issue because the qrwlock implementation is
writer fair (the traditional rwlock implementation is reader biased):
once a writer is queued, a recursive read_lock() on a lock already held
blocks behind that writer and deadlocks.

To avoid the deadlock there is now __hci_send_to_channel(), which
expects the read lock to be held.
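
The fix uses the usual kernel convention of a double-underscore variant
that expects the lock to be held, with the public function reduced to a
lock-taking wrapper (schematic view of the diff below):

  /* caller must hold hci_sk_list.lock */
  static void __hci_send_to_channel(unsigned short channel,
  				  struct sk_buff *skb,
  				  int flag, struct sock *skip_sk);

  void hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
  			 int flag, struct sock *skip_sk)
  {
  	read_lock(&hci_sk_list.lock);
  	__hci_send_to_channel(channel, skb, flag, skip_sk);
  	read_unlock(&hci_sk_list.lock);
  }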

Cc: Marcel Holtmann <marcel@holtmann.org>
Cc: Johan Hedberg <johan.hedberg@intel.com>
Cc: rt-stable@vger.kernel.org
Fixes: 38ceaa00d02d ("Bluetooth: Add support for sending MGMT commands and events to monitor")
Reported-by: Mart van de Wege <mvdwege@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/bluetooth/hci_sock.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
index c88a6007e643..5de85b55a821 100644
--- a/net/bluetooth/hci_sock.c
+++ b/net/bluetooth/hci_sock.c
@@ -251,15 +251,13 @@ void hci_send_to_sock(struct hci_dev *hdev, struct sk_buff *skb)
 }
 
 /* Send frame to sockets with specific channel */
-void hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
-			 int flag, struct sock *skip_sk)
+static void __hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
+				  int flag, struct sock *skip_sk)
 {
 	struct sock *sk;
 
 	BT_DBG("channel %u len %d", channel, skb->len);
 
-	read_lock(&hci_sk_list.lock);
-
 	sk_for_each(sk, &hci_sk_list.head) {
 		struct sk_buff *nskb;
 
@@ -285,6 +283,13 @@ void hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
 			kfree_skb(nskb);
 	}
 
+}
+
+void hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
+			 int flag, struct sock *skip_sk)
+{
+	read_lock(&hci_sk_list.lock);
+	__hci_send_to_channel(channel, skb, flag, skip_sk);
 	read_unlock(&hci_sk_list.lock);
 }
 
@@ -388,8 +393,8 @@ void hci_send_monitor_ctrl_event(struct hci_dev *hdev, u16 event,
 		hdr->index = index;
 		hdr->len = cpu_to_le16(skb->len - HCI_MON_HDR_SIZE);
 
-		hci_send_to_channel(HCI_CHANNEL_MONITOR, skb,
-				    HCI_SOCK_TRUSTED, NULL);
+		__hci_send_to_channel(HCI_CHANNEL_MONITOR, skb,
+				      HCI_SOCK_TRUSTED, NULL);
 		kfree_skb(skb);
 	}
 
-- 
2.13.2

* [PATCH RT 12/15] iommu/amd: Use raw_cpu_ptr() instead of get_cpu_ptr() for ->flush_queue
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, rt-stable, Joerg Roedel, iommu,
	Vinod Adhikary

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

get_cpu_ptr() disables preemption and returns the ->flush_queue object
of the current CPU. raw_cpu_ptr() does the same except that it does not
disable preemption, which means the scheduler can move the task to
another CPU after it has obtained the per-CPU object.
Here this is not a problem because the data structure itself is
protected by a spinlock. The change shouldn't matter on mainline, but
on RT it does, because the sleeping lock can't be acquired with
preemption disabled.
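
For context, get_cpu_ptr() is roughly preempt_disable() plus
this_cpu_ptr(), paired with put_cpu_ptr(), while raw_cpu_ptr() only
computes the pointer; a sketch of the <linux/percpu-defs.h> definitions
and of the pattern after this patch:

  #define get_cpu_ptr(var)  ({ preempt_disable(); this_cpu_ptr(var); })
  #define put_cpu_ptr(var)  do { (void)(var); preempt_enable(); } while (0)

  /* in queue_add(): the queue may belong to another CPU by the time it
   * is used, which is fine because queue->lock protects the data */
  queue = raw_cpu_ptr(&flush_queue);
  spin_lock_irqsave(&queue->lock, flags);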

Cc: rt-stable@vger.kernel.org
Cc: Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org
Reported-by: Vinod Adhikary <vinadhy@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/iommu/amd_iommu.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index a88595b21111..ff5c2424eb9e 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2283,7 +2283,7 @@ static void queue_add(struct dma_ops_domain *dma_dom,
 	pages     = __roundup_pow_of_two(pages);
 	address >>= PAGE_SHIFT;
 
-	queue = get_cpu_ptr(&flush_queue);
+	queue = raw_cpu_ptr(&flush_queue);
 	spin_lock_irqsave(&queue->lock, flags);
 
 	if (queue->next == FLUSH_QUEUE_SIZE)
@@ -2300,8 +2300,6 @@ static void queue_add(struct dma_ops_domain *dma_dom,
 
 	if (atomic_cmpxchg(&queue_timer_on, 0, 1) == 0)
 		mod_timer(&queue_timer, jiffies + msecs_to_jiffies(10));
-
-	put_cpu_ptr(&flush_queue);
 }
 
 
-- 
2.13.2

* [PATCH RT 13/15] rt/locking: allow recursive local_trylock()
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, rt-stable

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Required for a following networking patch which does a recursive
trylock. While at it, add the !RT version of it, because it did not yet
exist.
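
A usage sketch of what the change permits (hypothetical lock and caller;
on RT the nestcnt field tracks the recursion depth, on !RT the new
trylock simply disables preemption and always succeeds):

  static DEFINE_LOCAL_IRQ_LOCK(example_lock);	/* hypothetical */

  static void example_path(void)
  {
  	if (local_trylock(example_lock)) {
  		/* code here may re-enter on this CPU and do another
  		 * local_trylock(example_lock); the nested attempt now
  		 * succeeds (nestcnt++) instead of failing */
  		local_unlock(example_lock);
  	}
  }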

Cc: rt-stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/locallock.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 845c77f1a5ca..280f884a05a3 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -77,6 +77,9 @@ static inline int __local_trylock(struct local_irq_lock *lv)
 		lv->owner = current;
 		lv->nestcnt = 1;
 		return 1;
+	} else if (lv->owner == current) {
+		lv->nestcnt++;
+		return 1;
 	}
 	return 0;
 }
@@ -250,6 +253,12 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
 
 static inline void local_irq_lock_init(int lvar) { }
 
+#define local_trylock(lvar)					\
+	({							\
+		preempt_disable();				\
+		1;						\
+	})
+
 #define local_lock(lvar)			preempt_disable()
 #define local_unlock(lvar)			preempt_enable()
 #define local_lock_irq(lvar)			local_irq_disable()
-- 
2.13.2

* [PATCH RT 14/15] locking/rtmutex: dont drop the wait_lock twice
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi, Gusenleitner Klaus

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Since the futex rework, __rt_mutex_start_proxy_lock() no longer acquires
the wait_lock, so it must not drop it. Otherwise the lock is not only
unlocked twice, but the preemption counter also underflows.

It is okay to remove that line because this function neither disables
interrupts nor acquires the ->wait_lock. The caller does that, so it is
wrong to do it here (after the futex rework).
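
For context, after the futex rework the caller owns the wait_lock around
the call, along the lines of the rt_mutex_start_proxy_lock() wrapper
(simplified):

  raw_spin_lock_irq(&lock->wait_lock);
  ret = __rt_mutex_start_proxy_lock(lock, waiter, task);
  raw_spin_unlock_irq(&lock->wait_lock);	/* the one balanced unlock */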

Cc: rt-stable@vger.kernel.org #v4.9.18-rt14+
Reported-by: Gusenleitner Klaus <gus@keba.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/rtmutex.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 5dbf6789383b..3a8b5d44aaf8 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -2312,7 +2312,6 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 	raw_spin_lock(&task->pi_lock);
 	if (task->pi_blocked_on) {
 		raw_spin_unlock(&task->pi_lock);
-		raw_spin_unlock_irq(&lock->wait_lock);
 		return -EAGAIN;
 	}
 	task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
-- 
2.13.2

* [PATCH RT 15/15] Linux 4.9.65-rt57-rc1
From: Steven Rostedt @ 2017-12-01 15:50 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index fdb0f880c7e9..c12fc9d17724 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt56
+-rt57-rc1
-- 
2.13.2

* Re: [PATCH RT 04/15] posixtimer: init timer only with CONFIG_POSIX_TIMERS enabled
From: Steven Rostedt @ 2017-12-01 17:35 UTC
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
	tom.zanussi, Alex Shi

On Fri, 01 Dec 2017 10:50:03 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> 4.9.65-rt57-rc1 stable review patch.
> If anyone has any objections, please let me know.
> 
> ------------------
> 
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> In v4.11 it is possible to disable the posix timers, so we must not
> attempt to initialize the posix timer list in the task_struct on RT
> with !CONFIG_POSIX_TIMERS. This patch does so.

Hmm, I may have been too greedy in pulling in this patch. I'm going to
remove it from the list, and post a -rc2.

-- Steve

> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  include/linux/init_task.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/init_task.h b/include/linux/init_task.h
> index a56e263f5005..526ecfc58909 100644
> --- a/include/linux/init_task.h
> +++ b/include/linux/init_task.h
> @@ -150,7 +150,7 @@ extern struct task_group root_task_group;
>  # define INIT_PERF_EVENTS(tsk)
>  #endif
>  
> -#ifdef CONFIG_PREEMPT_RT_BASE
> +#if defined(CONFIG_POSIX_TIMERS) && defined(CONFIG_PREEMPT_RT_BASE)
>  # define INIT_TIMER_LIST		.posix_timer_list = NULL,
>  #else
>  # define INIT_TIMER_LIST

