linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched/core: Schedule new worker even if PI-blocked
@ 2019-08-16 16:06 Sebastian Andrzej Siewior
  2019-08-19  9:52 ` [tip:sched/urgent] " tip-bot for Sebastian Andrzej Siewior
  2019-08-20 13:50 ` [PATCH] " Peter Zijlstra
  0 siblings, 2 replies; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-08-16 16:06 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar, Peter Zijlstra, tglx, Sebastian Andrzej Siewior

If a task is PI-blocked (blocking on sleeping spinlock) then we don't want to
schedule a new kworker if we schedule out due to lock contention because !RT
does not do that as well. A spinning spinlock disables preemption and a worker
does not schedule out on lock contention (but spin).

On RT the RW-semaphore implementation uses an rtmutex so
tsk_is_pi_blocked() will return true if a task blocks on it. In this case we
will now start a new worker which may deadlock if one worker is waiting on
progress from another worker. Since a RW-semaphore starts a new worker on !RT,
we should do the same on RT.

XFS is able to trigger this deadlock.

Allow to schedule new worker if the current worker is PI-blocked.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/sched/core.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3945,7 +3945,7 @@ void __noreturn do_task_dead(void)
 
 static inline void sched_submit_work(struct task_struct *tsk)
 {
-	if (!tsk->state || tsk_is_pi_blocked(tsk))
+	if (!tsk->state)
 		return;
 
 	/*
@@ -3961,6 +3961,9 @@ static inline void sched_submit_work(str
 		preempt_enable_no_resched();
 	}
 
+	if (tsk_is_pi_blocked(tsk))
+		return;
+
 	/*
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.

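For reference, this is how sched_submit_work() reads with the patch applied. It is reconstructed from the hunks above plus the surrounding code of that era, so the parts outside the hunks (the PF_WQ_WORKER test and the plug-flush call) are best-effort rather than authoritative:

static inline void sched_submit_work(struct task_struct *tsk)
{
	if (!tsk->state)
		return;

	/*
	 * If a worker went to sleep, notify and ask the workqueue whether
	 * it wants to wake up a task to maintain concurrency.
	 */
	if (tsk->flags & PF_WQ_WORKER) {
		preempt_disable();
		wq_worker_sleeping(tsk);
		preempt_enable_no_resched();
	}

	/*
	 * A PI-blocked task (blocked on an rtmutex) now still reaches
	 * wq_worker_sleeping() above, but skips the plug flush below.
	 */
	if (tsk_is_pi_blocked(tsk))
		return;

	/*
	 * If we are going to sleep and we have plugged IO queued,
	 * make sure to submit it to avoid deadlocks.
	 */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}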

* [tip:sched/urgent] sched/core: Schedule new worker even if PI-blocked
  2019-08-16 16:06 [PATCH] sched/core: Schedule new worker even if PI-blocked Sebastian Andrzej Siewior
@ 2019-08-19  9:52 ` tip-bot for Sebastian Andrzej Siewior
  2019-08-20 13:50 ` [PATCH] " Peter Zijlstra
  1 sibling, 0 replies; 8+ messages in thread
From: tip-bot for Sebastian Andrzej Siewior @ 2019-08-19  9:52 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, tglx, linux-kernel, torvalds, peterz, bigeasy, mingo

Commit-ID:  b0fdc01354f45d43f082025636ef808968a27b36
Gitweb:     https://git.kernel.org/tip/b0fdc01354f45d43f082025636ef808968a27b36
Author:     Sebastian Andrzej Siewior <bigeasy@linutronix.de>
AuthorDate: Fri, 16 Aug 2019 18:06:26 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 19 Aug 2019 10:57:26 +0200

sched/core: Schedule new worker even if PI-blocked

If a task is PI-blocked (blocking on sleeping spinlock) then we don't want to
schedule a new kworker if we schedule out due to lock contention because !RT
does not do that as well. A spinning spinlock disables preemption and a worker
does not schedule out on lock contention (but spin).

On RT the RW-semaphore implementation uses an rtmutex so
tsk_is_pi_blocked() will return true if a task blocks on it. In this case we
will now start a new worker which may deadlock if one worker is waiting on
progress from another worker. Since a RW-semaphore starts a new worker on !RT,
we should do the same on RT.

XFS is able to trigger this deadlock.

Allow to schedule new worker if the current worker is PI-blocked.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190816160626.12742-1-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2b037f195473..010d578118d6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3904,7 +3904,7 @@ void __noreturn do_task_dead(void)
 
 static inline void sched_submit_work(struct task_struct *tsk)
 {
-	if (!tsk->state || tsk_is_pi_blocked(tsk))
+	if (!tsk->state)
 		return;
 
 	/*
@@ -3920,6 +3920,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
 		preempt_enable_no_resched();
 	}
 
+	if (tsk_is_pi_blocked(tsk))
+		return;
+
 	/*
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.

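To make the deadlock described in the changelog above concrete, here is a hypothetical sketch of "one worker waiting on progress from another worker". All names are invented and this is not the actual XFS path; it only illustrates the dependency shape. If demo_item1 is running on the pool's only busy worker when it blocks on the rwsem, then skipping wq_worker_sleeping() (the pre-patch behaviour on RT) means the pool never wakes another worker to run demo_item2, so demo_holder() never releases the semaphore and demo_item1 never unblocks:

#include <linux/workqueue.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(demo_sem);

static void demo_item1_fn(struct work_struct *w)
{
	/*
	 * On PREEMPT_RT this rwsem is rtmutex based, so blocking here
	 * makes tsk_is_pi_blocked() true for the kworker.  Without the
	 * fix, sched_submit_work() returned before wq_worker_sleeping(),
	 * so the pool kept waiting for this worker instead of starting
	 * another one for demo_item2.
	 */
	down_read(&demo_sem);
	up_read(&demo_sem);
}

static void demo_item2_fn(struct work_struct *w)
{
	/* whatever the demo_sem holder needs before it can unlock */
}

static DECLARE_WORK(demo_item1, demo_item1_fn);
static DECLARE_WORK(demo_item2, demo_item2_fn);

/* runs in some other thread while demo_item1 is already on a worker */
static void demo_holder(void)
{
	down_write(&demo_sem);
	queue_work(system_wq, &demo_item2);
	flush_work(&demo_item2);	/* needs a free worker in the pool */
	up_write(&demo_sem);
}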

* Re: [PATCH] sched/core: Schedule new worker even if PI-blocked
  2019-08-16 16:06 [PATCH] sched/core: Schedule new worker even if PI-blocked Sebastian Andrzej Siewior
  2019-08-19  9:52 ` [tip:sched/urgent] " tip-bot for Sebastian Andrzej Siewior
@ 2019-08-20 13:50 ` Peter Zijlstra
  2019-08-20 14:59   ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2019-08-20 13:50 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: linux-kernel, Ingo Molnar, tglx

On Fri, Aug 16, 2019 at 06:06:26PM +0200, Sebastian Andrzej Siewior wrote:
> If a task is PI-blocked (blocking on sleeping spinlock) then we don't want to
> schedule a new kworker if we schedule out due to lock contention because !RT
> does not do that as well.

 s/as well/either/

> A spinning spinlock disables preemption and a worker
> does not schedule out on lock contention (but spin).

I'm not much liking this; it means that rt_mutex and mutex have
different behaviour, and there are 'normal' rt_mutex users in the tree.

> On RT the RW-semaphore implementation uses an rtmutex so
> tsk_is_pi_blocked() will return true if a task blocks on it. In this case we
> will now start a new worker

I'm confused, by bailing out early it does _NOT_ start a new worker; or
am I reading it wrong?

> which may deadlock if one worker is waiting on
> progress from another worker.

> Since a RW-semaphore starts a new worker on !RT, we should do the same on RT.
> 
> XFS is able to trigger this deadlock.
> 
> Allow to schedule new worker if the current worker is PI-blocked.

Which contradicts earlier parts of this changelog.

> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  kernel/sched/core.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3945,7 +3945,7 @@ void __noreturn do_task_dead(void)
>  
>  static inline void sched_submit_work(struct task_struct *tsk)
>  {
> -	if (!tsk->state || tsk_is_pi_blocked(tsk))
> +	if (!tsk->state)
>  		return;
>  
>  	/*
> @@ -3961,6 +3961,9 @@ static inline void sched_submit_work(str
>  		preempt_enable_no_resched();
>  	}
>  
> +	if (tsk_is_pi_blocked(tsk))
> +		return;
> +
>  	/*
>  	 * If we are going to sleep and we have plugged IO queued,
>  	 * make sure to submit it to avoid deadlocks.

What do we need that clause for? Why is pi_blocked special _at_all_?


* Re: [PATCH] sched/core: Schedule new worker even if PI-blocked
  2019-08-20 13:50 ` [PATCH] " Peter Zijlstra
@ 2019-08-20 14:59   ` Sebastian Andrzej Siewior
  2019-08-20 15:20     ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-08-20 14:59 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, tglx

On 2019-08-20 15:50:14 [+0200], Peter Zijlstra wrote:
> On Fri, Aug 16, 2019 at 06:06:26PM +0200, Sebastian Andrzej Siewior wrote:
> > If a task is PI-blocked (blocking on sleeping spinlock) then we don't want to
> > schedule a new kworker if we schedule out due to lock contention because !RT
> > does not do that as well.
> 
>  s/as well/either/
> 
> > A spinning spinlock disables preemption and a worker
> > does not schedule out on lock contention (but spin).
> 
> I'm not much liking this; it means that rt_mutex and mutex have
> different behaviour, and there are 'normal' rt_mutex users in the tree.

There is RCU (boosting) and futex. I'm sceptical about the i2c users…

> > On RT the RW-semaphore implementation uses an rtmutex so
> > tsk_is_pi_blocked() will return true if a task blocks on it. In this case we
> > will now start a new worker
> 
> I'm confused, by bailing out early it does _NOT_ start a new worker; or
> am I reading it wrong?

s@now@not@. Your eyes work fine, sorry for that.

> > which may deadlock if one worker is waiting on
> > progress from another worker.
> 
> > Since a RW-semaphore starts a new worker on !RT, we should do the same on RT.
> > 
> > XFS is able to trigger this deadlock.
> > 
> > Allow to schedule new worker if the current worker is PI-blocked.
> 
> Which contradicts earlier parts of this changelog.
> 
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > ---
> >  kernel/sched/core.c |    5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3945,7 +3945,7 @@ void __noreturn do_task_dead(void)
> >  
> >  static inline void sched_submit_work(struct task_struct *tsk)
> >  {
> > -	if (!tsk->state || tsk_is_pi_blocked(tsk))
> > +	if (!tsk->state)
> >  		return;
> >  
> >  	/*
> > @@ -3961,6 +3961,9 @@ static inline void sched_submit_work(str
> >  		preempt_enable_no_resched();
> >  	}
> >  
> > +	if (tsk_is_pi_blocked(tsk))
> > +		return;
> > +
> >  	/*
> >  	 * If we are going to sleep and we have plugged IO queued,
> >  	 * make sure to submit it to avoid deadlocks.
> 
> What do we need that clause for? Why is pi_blocked special _at_all_?

So on !RT the scheduler does nothing special if a task blocks on a
sleeping lock.
If I remember correctly then blk_schedule_flush_plug() is the problem.
It may require a lock which is held by the task.
It may hold A and wait for B while another task has B and waits for A.
If my memory does not betray me then ext+jbd can lock up without this.

Sebastian
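A timeline sketch of the scenario Sebastian describes (lock names are illustrative; his report concerns the plugged-I/O flush, with ext+jbd as the affected user):

	task T1 (has plugged I/O)              task T2
	-------------------------              -------
	spin_lock(&A);  /* sleeping lock on RT */
	                                       spin_lock(&B);
	spin_lock(&B);  /* blocks, PI-blocked */
	  -> schedule()
	  -> sched_submit_work()
	  -> blk_schedule_flush_plug()
	       /* flushing the plug may need &A,
	          which T1 itself still holds, or
	          may wait on T2, which wants &A */
	                                       spin_lock(&A);  /* blocks */

Keeping the tsk_is_pi_blocked() return in front of the plug flush means blk_schedule_flush_plug() is simply not called from this context.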


* Re: [PATCH] sched/core: Schedule new worker even if PI-blocked
  2019-08-20 14:59   ` Sebastian Andrzej Siewior
@ 2019-08-20 15:20     ` Peter Zijlstra
  2019-08-20 15:54       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2019-08-20 15:20 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: linux-kernel, Ingo Molnar, tglx

On Tue, Aug 20, 2019 at 04:59:26PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-20 15:50:14 [+0200], Peter Zijlstra wrote:
> > On Fri, Aug 16, 2019 at 06:06:26PM +0200, Sebastian Andrzej Siewior wrote:
> > > If a task is PI-blocked (blocking on sleeping spinlock) then we don't want to
> > > schedule a new kworker if we schedule out due to lock contention because !RT
> > > does not do that as well.
> > 
> >  s/as well/either/
> > 
> > > A spinning spinlock disables preemption and a worker
> > > does not schedule out on lock contention (but spin).
> > 
> > I'm not much liking this; it means that rt_mutex and mutex have
> > different behaviour, and there are 'normal' rt_mutex users in the tree.
> 
> There is RCU (boosting) and futex. I'm sceptical about the i2c users…

Well, yes, I too was/am sceptical, but it was tglx who twisted my arm
and said the i2c people were right and rt_mutex is/should-be a generic
usable interface.

This then resulted in the futex specific interface and lockdep support
for rt_mutex:

  5293c2efda37 ("futex,rt_mutex: Provide futex specific rt_mutex API")
  f5694788ad8d ("rt_mutex: Add lockdep annotations")

> > > On RT the RW-semaphore implementation uses an rtmutex so
> > > tsk_is_pi_blocked() will return true if a task blocks on it. In this case we
> > > will now start a new worker
> > 
> > I'm confused, by bailing out early it does _NOT_ start a new worker; or
> > am I reading it wrong?
> 
> s@now@not@. Your eyes work fine, sorry for that.

All good, just trying to make sense of things :-)

> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -3945,7 +3945,7 @@ void __noreturn do_task_dead(void)
> > >  
> > >  static inline void sched_submit_work(struct task_struct *tsk)
> > >  {
> > > -	if (!tsk->state || tsk_is_pi_blocked(tsk))
> > > +	if (!tsk->state)
> > >  		return;
> > >  
> > >  	/*

So this part actually makes rt_mutex less special and is good.

> > > @@ -3961,6 +3961,9 @@ static inline void sched_submit_work(str
> > >  		preempt_enable_no_resched();
> > >  	}
> > >  
> > > +	if (tsk_is_pi_blocked(tsk))
> > > +		return;
> > > +
> > >  	/*
> > >  	 * If we are going to sleep and we have plugged IO queued,
> > >  	 * make sure to submit it to avoid deadlocks.
> > 
> > What do we need that clause for? Why is pi_blocked special _at_all_?
> 
> So on !RT the scheduler does nothing special if a task blocks on a
> sleeping lock.
> If I remember correctly then blk_schedule_flush_plug() is the problem.
> It may require a lock which is held by the task.
> It may hold A and wait for B while another task has B and waits for A.
> If my memory does not betray me then ext+jbd can lock up without this.

And am I right in thinking that that, again, is specific to the
sleeping-spinlocks from PREEMPT_RT? Is there really nothing else that
identifies those more specifically? It's been a while since I looked at
them.

Also, I suppose it would be really good to put that in a comment.


* Re: [PATCH] sched/core: Schedule new worker even if PI-blocked
  2019-08-20 15:20     ` Peter Zijlstra
@ 2019-08-20 15:54       ` Sebastian Andrzej Siewior
  2019-08-20 16:02         ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-08-20 15:54 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, tglx

On 2019-08-20 17:20:25 [+0200], Peter Zijlstra wrote:
> > There is RCU (boosting) and futex. I'm sceptical about the i2c users…
> 
> Well, yes, I too was/am sceptical, but it was tglx who twisted my arm
> and said the i2c people were right and rt_mutex is/should-be a generic
> usable interface.

I don't mind the generic interface, I just find the use case odd. By
now rtmutex is used by the i2c core and not by a single driver, as was
the case the last time I looked at it. But still, why is it (PI-boosting)
important for I2C to use it and not for other subsystems? Moving on…

> > > > --- a/kernel/sched/core.c
> > > > +++ b/kernel/sched/core.c
> > > > @@ -3945,7 +3945,7 @@ void __noreturn do_task_dead(void)
> > > >  
> > > >  static inline void sched_submit_work(struct task_struct *tsk)
> > > >  {
> > > > -	if (!tsk->state || tsk_is_pi_blocked(tsk))
> > > > +	if (!tsk->state)
> > > >  		return;
> > > >  
> > > >  	/*
> 
> So this part actually makes rt_mutex less special and is good.
> 
> > > > @@ -3961,6 +3961,9 @@ static inline void sched_submit_work(str
> > > >  		preempt_enable_no_resched();
> > > >  	}
> > > >  
> > > > +	if (tsk_is_pi_blocked(tsk))
> > > > +		return;
> > > > +
> > > >  	/*
> > > >  	 * If we are going to sleep and we have plugged IO queued,
> > > >  	 * make sure to submit it to avoid deadlocks.
> > > 
> > > What do we need that clause for? Why is pi_blocked special _at_all_?
> > 
> > So on !RT the scheduler does nothing special if a task blocks on a
> > sleeping lock.
> > If I remember correctly then blk_schedule_flush_plug() is the problem.
> > It may require a lock which is held by the task.
> > It may hold A and wait for B while another task has B and waits for A.
> > If my memory does not betray me then ext+jbd can lock up without this.
> 
> And am I right in thinking that that, again, is specific to the
> sleeping-spinlocks from PREEMPT_RT? Is there really nothing else that
> identifies those more specifically? It's been a while since I looked at
> them.

Not really. I hacked "int sleeping_lock" into task_struct which is
incremented each time a "sleeping lock" version of rtmutex is requested.
We have two users as of now:
- RCU, which checks if we schedule() while holding rcu_read_lock() which
  is okay if it is a sleeping lock.

- NOHZ's pending softirq detection while going to idle. It is possible
  that "ksoftirqd" and "current" are blocked on locks and the CPU goes
  to idle (because nothing else is runnable) with pending softirqs.

I wanted to let rtmutex invoke another schedule() function in case of a
sleeping lock to avoid the RCU warning. This would avoid incrementing
"sleeping_lock" in the fast path. But then I had no idea what to do with
the NOHZ thing.

> Also, I suppose it would be really good to put that in a comment.
So, what does that mean for that patch? According to my inbox it has
been applied to an "urgent" branch. Do I resubmit the whole thing or just a
comment on top?

Sebastian
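A minimal sketch of the sleeping_lock counter Sebastian describes above. The field name comes from his mail; the helper names are illustrative and the actual -rt patch may spell this differently:

	/* in struct task_struct, PREEMPT_RT only: */
	int			sleeping_lock;	/* nesting depth of sleeping-spinlock blocks */

	/* bumped by the sleeping-spinlock variants of the rtmutex code
	 * around the actual block: */
	static inline void sleeping_lock_inc(void)
	{
		current->sleeping_lock++;
	}

	static inline void sleeping_lock_dec(void)
	{
		current->sleeping_lock--;
	}

	/* consumers such as RCU's schedule()-in-read-side check or the
	 * NOHZ pending-softirq check at idle can then tell a sleeping-lock
	 * block apart from a real sleep: */
	static inline bool task_blocked_on_sleeping_lock(struct task_struct *p)
	{
		return p->sleeping_lock != 0;
	}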


* Re: [PATCH] sched/core: Schedule new worker even if PI-blocked
  2019-08-20 15:54       ` Sebastian Andrzej Siewior
@ 2019-08-20 16:02         ` Peter Zijlstra
  2019-08-20 16:14           ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2019-08-20 16:02 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: linux-kernel, Ingo Molnar, tglx

On Tue, Aug 20, 2019 at 05:54:01PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-20 17:20:25 [+0200], Peter Zijlstra wrote:

> > And am I right in thinking that that, again, is specific to the
> > sleeping-spinlocks from PREEMPT_RT? Is there really nothing else that
> > identifies those more specifically? It's been a while since I looked at
> > them.
> 
> Not really. I hacked "int sleeping_lock" into task_struct which is
> incremented each time a "sleeping lock" version of rtmutex is requested.
> We have two users as of now:
> - RCU, which checks if we schedule() while holding rcu_read_lock() which
>   is okay if it is a sleeping lock.
> 
> - NOHZ's pending softirq detection while going to idle. It is possible
>   that "ksoftirqd" and "current" are blocked on locks and the CPU goes
>   to idle (because nothing else is runnable) with pending softirqs.
> 
> I wanted to let rtmutex invoke another schedule() function in case of a
> sleeping lock to avoid the RCU warning. This would avoid incrementing
> "sleeping_lock" in the fast path. But then I had no idea what to do with
> the NOHZ thing.

Once upon a time there was also a shadow task->state thing, that was
specific to the sleeping locks, because normally spinlocks don't muck
with task->state and so we have code relying on it not getting trampled.

Can't we use that somehow? Or is that gone?

> > Also, I suppose it would be really good to put that in a comment.
> So, what does that mean for that patch? According to my inbox it has
> been applied to an "urgent" branch. Do I resubmit the whole thing or just a
> comment on top?

Yeah, I'm not sure. I was surprised by that, because afaict all this is
PREEMPT_RT specific and not really /urgent material in the first place.
Ingo?


* Re: [PATCH] sched/core: Schedule new worker even if PI-blocked
  2019-08-20 16:02         ` Peter Zijlstra
@ 2019-08-20 16:14           ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2019-08-20 16:14 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, tglx

On 2019-08-20 18:02:17 [+0200], Peter Zijlstra wrote:
> On Tue, Aug 20, 2019 at 05:54:01PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-08-20 17:20:25 [+0200], Peter Zijlstra wrote:
> 
> > > And am I right in thinking that that, again, is specific to the
> > > sleeping-spinlocks from PREEMPT_RT? Is there really nothing else that
> > > identifies those more specifically? It's been a while since I looked at
> > > them.
> > 
> > Not really. I hacked "int sleeping_lock" into task_struct which is
> > incremented each time a "sleeping lock" version of rtmutex is requested.
> > We have two users as of now:
> > - RCU, which checks if we schedule() while holding rcu_read_lock() which
> >   is okay if it is a sleeping lock.
> > 
> > - NOHZ's pending softirq detection while going to idle. It is possible
> >   that "ksoftirqd" and "current" are blocked on locks and the CPU goes
> >   to idle (because nothing else is runnable) with pending softirqs.
> > 
> > I wanted to let rtmutex invoke another schedule() function in case of a
> > sleeping lock to avoid the RCU warning. This would avoid incrementing
> > "sleeping_lock" in the fast path. But then I had no idea what to do with
> > the NOHZ thing.
> 
> Once upon a time there was also a shadow task->state thing, that was
> specific to the sleeping locks, because normally spinlocks don't muck
> with task->state and so we have code relying on it not getting trampled.
> 
> Can't we use that somehow? Or is that gone?

We have ->state and ->saved_state. While sleeping on a sleeping lock,
->state goes to ->saved_state (usually TASK_RUNNING) and ->state becomes
TASK_UNINTERRUPTIBLE. This is no different from a regular
blocked-on-I/O wait.
We could add a state, say, TASK_LOCK_BLOCK, to identify a task blocking
on a sleeping lock. This shouldn't break anything. After all, only a
regular "unlock" is allowed to wake such a task, and "non-matching" wakes
are redirected to update ->saved_state.

> > > Also, I suppose it would be really good to put that in a comment.
> > So, what does that mean for that patch? According to my inbox it has
> > been applied to an "urgent" branch. Do I resubmit the whole thing or just a
> > comment on top?
> 
> Yeah, I'm not sure. I was surprised by that, because afaict all this is
> PREEMPT_RT specific and not really /urgent material in the first place.
> Ingo?

Sebastian
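A sketch of the ->state/->saved_state dance Sebastian describes above, written against the 2019-era task_struct (->state was still a plain field). The real -rt code does this inside the rtmutex slow path, with the proper barriers, and differs in detail:

	unsigned long flags;

	/* entering the sleeping-spinlock slow path: */
	raw_spin_lock_irqsave(&current->pi_lock, flags);
	current->saved_state = current->state;	/* usually TASK_RUNNING */
	current->state = TASK_UNINTERRUPTIBLE;	/* or a new TASK_LOCK_BLOCK */
	raw_spin_unlock_irqrestore(&current->pi_lock, flags);

	/*
	 * ... block on the lock.  A "non-matching" wake-up only updates
	 * ->saved_state; the lock's own unlock is what really wakes the
	 * task ...
	 */

	/* leaving the slow path: */
	raw_spin_lock_irqsave(&current->pi_lock, flags);
	current->state = current->saved_state;
	current->saved_state = TASK_RUNNING;
	raw_spin_unlock_irqrestore(&current->pi_lock, flags);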


