* [PATCH] task_work: only grab task signal lock when needed
@ 2020-08-11 14:25 Jens Axboe
From: Jens Axboe @ 2020-08-11 14:25 UTC (permalink / raw)
To: linux-kernel; +Cc: Oleg Nesterov, Peter Zijlstra
If JOBCTL_TASK_WORK is already set on the targeted task, then we need
not go through {lock,unlock}_task_sighand() to set it again and queue
a signal wakeup. This is safe as we're checking it _after_ adding the
new task_work with cmpxchg().
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
Tested this with an intensive task_work based io_uring workload, and
the benefits are quite large.
diff --git a/kernel/task_work.c b/kernel/task_work.c
index 5c0848ca1287..cbf8cab6e864 100644
--- a/kernel/task_work.c
+++ b/kernel/task_work.c
@@ -42,7 +42,8 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
set_notify_resume(task);
break;
case TWA_SIGNAL:
- if (lock_task_sighand(task, &flags)) {
+ if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
+ lock_task_sighand(task, &flags)) {
task->jobctl |= JOBCTL_TASK_WORK;
signal_wake_up(task, 0);
unlock_task_sighand(task, &flags);
--
Jens Axboe
* Re: [PATCH] task_work: only grab task signal lock when needed
From: Oleg Nesterov @ 2020-08-11 15:23 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, Peter Zijlstra, Jann Horn
On 08/11, Jens Axboe wrote:
>
> --- a/kernel/task_work.c
> +++ b/kernel/task_work.c
> @@ -42,7 +42,8 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
> set_notify_resume(task);
> break;
> case TWA_SIGNAL:
> - if (lock_task_sighand(task, &flags)) {
> + if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
> + lock_task_sighand(task, &flags)) {
Aaaaah, sorry Jens, now I think this is racy. So I am glad I didn't add
this optimization into the initial version ;)
It is possible that JOBCTL_TASK_WORK is set but ->task_works == NULL. Say,
task_work_add(TWA_SIGNAL) + task_work_cancel(), or the target task can call
task_work_run() before it enters get_signal().
And in this case another task_work_add(tsk, TWA_SIGNAL) can actually race
with get_signal() which does
current->jobctl &= ~JOBCTL_TASK_WORK;
if (unlikely(current->task_works)) {
spin_unlock_irq(&sighand->siglock);
task_work_run();
nothing guarantees that get_signal() sees ->task_works != NULL. Probably
this is what Jann meant.
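Spelled out as one possible interleaving (with no barrier on the
get_signal() side, neither CPU is guaranteed to see the other's store
in time):

```
CPU 0: task_work_add()                  CPU 1: target task, get_signal()
------------------------------------------------------------------------
                                        jobctl &= ~JOBCTL_TASK_WORK
cmpxchg() links in the new work
READ_ONCE(task->jobctl)
  -> reads a stale value, bit still set
  -> skips the lock and signal_wake_up()
                                        if (current->task_works)
                                          -> reads a stale NULL
                                        work is queued, but no wakeup
```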
We can probably add a barrier into get_signal() but I didn't sleep today,
I'll try to think tomorrow.
Oleg.
* Re: [PATCH] task_work: only grab task signal lock when needed
From: Jens Axboe @ 2020-08-11 16:45 UTC (permalink / raw)
To: Oleg Nesterov; +Cc: linux-kernel, Peter Zijlstra, Jann Horn
On 8/11/20 9:23 AM, Oleg Nesterov wrote:
> On 08/11, Jens Axboe wrote:
>>
>> --- a/kernel/task_work.c
>> +++ b/kernel/task_work.c
>> @@ -42,7 +42,8 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
>> set_notify_resume(task);
>> break;
>> case TWA_SIGNAL:
>> - if (lock_task_sighand(task, &flags)) {
>> + if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
>> + lock_task_sighand(task, &flags)) {
>
> Aaaaah, sorry Jens, now I think this is racy. So I am glad I didn't add
> this optimization into the initial version ;)
>
> It is possible that JOBCTL_TASK_WORK is set but ->task_works == NULL. Say,
> task_work_add(TWA_SIGNAL) + task_work_cancel(), or the target task can call
> task_work_run() before it enters get_signal().
>
> And in this case another task_work_add(tsk, TWA_SIGNAL) can actually race
> with get_signal() which does
>
> current->jobctl &= ~JOBCTL_TASK_WORK;
> if (unlikely(current->task_works)) {
> spin_unlock_irq(&sighand->siglock);
> task_work_run();
>
> nothing guarantees that get_signal() sees ->task_works != NULL. Probably
> this is what Jann meant.
>
> We can probably add a barrier into get_signal() but I didn't sleep today,
> I'll try to think tomorrow.
Appreciate you looking into this! Would be pretty critical for me to get
this working.
--
Jens Axboe
* Re: [PATCH] task_work: only grab task signal lock when needed
From: Oleg Nesterov @ 2020-08-12 14:54 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, Peter Zijlstra, Jann Horn
On 08/11, Oleg Nesterov wrote:
>
> On 08/11, Jens Axboe wrote:
> >
> > --- a/kernel/task_work.c
> > +++ b/kernel/task_work.c
> > @@ -42,7 +42,8 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
> > set_notify_resume(task);
> > break;
> > case TWA_SIGNAL:
> > - if (lock_task_sighand(task, &flags)) {
> > + if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
> > + lock_task_sighand(task, &flags)) {
>
> Aaaaah, sorry Jens, now I think this is racy. So I am glad I didn't add
> this optimization into the initial version ;)
>
> It is possible that JOBCTL_TASK_WORK is set but ->task_works == NULL. Say,
> task_work_add(TWA_SIGNAL) + task_work_cancel(), or the target task can call
> task_work_run() before it enters get_signal().
>
> And in this case another task_work_add(tsk, TWA_SIGNAL) can actually race
> with get_signal() which does
>
> current->jobctl &= ~JOBCTL_TASK_WORK;
> if (unlikely(current->task_works)) {
> spin_unlock_irq(&sighand->siglock);
> task_work_run();
>
> nothing guarantees that get_signal() sees ->task_works != NULL. Probably
> this is what Jann meant.
>
> We can probably add a barrier into get_signal() but I didn't sleep today,
> I'll try to think tomorrow.
I see nothing better than the additional change below. Peter, do you see
another solution?
This needs a comment to explain that this mb() pairs with another barrier
provided by cmpxchg() in task_work_add(). It ensures that either get_signal()
sees the new work added by task_work_add(), or task_work_add() sees the
result of "&= ~JOBCTL_TASK_WORK".
Oleg.
--- x/kernel/signal.c
+++ x/kernel/signal.c
@@ -2541,7 +2541,7 @@ bool get_signal(struct ksignal *ksig)
relock:
spin_lock_irq(&sighand->siglock);
- current->jobctl &= ~JOBCTL_TASK_WORK;
+ smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
if (unlikely(current->task_works)) {
spin_unlock_irq(&sighand->siglock);
task_work_run();
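For context, the generic definition of smp_store_mb() is roughly the
following (arch code may override it with something cheaper; fragment
from asm-generic, not standalone code):

```c
/* include/asm-generic/barrier.h, roughly */
#define smp_store_mb(var, value) \
        do { WRITE_ONCE(var, value); smp_mb(); } while (0)
```

so the change turns the plain clear of JOBCTL_TASK_WORK into a store
followed by a full memory barrier.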
* Re: [PATCH] task_work: only grab task signal lock when needed
From: Peter Zijlstra @ 2020-08-12 20:06 UTC (permalink / raw)
To: Oleg Nesterov; +Cc: Jens Axboe, linux-kernel, Jann Horn
On Wed, Aug 12, 2020 at 04:54:23PM +0200, Oleg Nesterov wrote:
> I see nothing better than the additional change below. Peter, do you see
> another solution?
Nope -- although I don't claim to understand the signal code much.
> This needs a comment to explain that this mb() pairs with another barrier
> provided by cmpxchg() in task_work_add(). It ensures that either get_signal()
> sees the new work added by task_work_add(), or task_work_add() sees the
> result of "&= ~JOBCTL_TASK_WORK".
>
> Oleg.
>
> --- x/kernel/signal.c
> +++ x/kernel/signal.c
> @@ -2541,7 +2541,7 @@ bool get_signal(struct ksignal *ksig)
>
> relock:
> spin_lock_irq(&sighand->siglock);
> - current->jobctl &= ~JOBCTL_TASK_WORK;
> + smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
> if (unlikely(current->task_works)) {
> spin_unlock_irq(&sighand->siglock);
> task_work_run();
>
I agree this should work; smp_store_mb() isn't my favourite primitive,
but yes, this seems as good a use of it as there is so why not.
* Re: [PATCH] task_work: only grab task signal lock when needed
From: Jens Axboe @ 2020-08-12 23:13 UTC (permalink / raw)
To: Oleg Nesterov; +Cc: linux-kernel, Peter Zijlstra, Jann Horn
On 8/12/20 8:54 AM, Oleg Nesterov wrote:
> On 08/11, Oleg Nesterov wrote:
>>
>> On 08/11, Jens Axboe wrote:
>>>
>>> --- a/kernel/task_work.c
>>> +++ b/kernel/task_work.c
>>> @@ -42,7 +42,8 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
>>> set_notify_resume(task);
>>> break;
>>> case TWA_SIGNAL:
>>> - if (lock_task_sighand(task, &flags)) {
>>> + if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
>>> + lock_task_sighand(task, &flags)) {
>>
>> Aaaaah, sorry Jens, now I think this is racy. So I am glad I didn't add
>> this optimization into the initial version ;)
>>
>> It is possible that JOBCTL_TASK_WORK is set but ->task_works == NULL. Say,
>> task_work_add(TWA_SIGNAL) + task_work_cancel(), or the target task can call
>> task_work_run() before it enters get_signal().
>>
>> And in this case another task_work_add(tsk, TWA_SIGNAL) can actually race
>> with get_signal() which does
>>
>> current->jobctl &= ~JOBCTL_TASK_WORK;
>> if (unlikely(current->task_works)) {
>> spin_unlock_irq(&sighand->siglock);
>> task_work_run();
>>
>> nothing guarantees that get_signal() sees ->task_works != NULL. Probably
>> this is what Jann meant.
>>
>> We can probably add a barrier into get_signal() but I didn't sleep today,
>> I'll try to think tomorrow.
>
> I see nothing better than the additional change below. Peter, do you see
> another solution?
>
> This needs a comment to explain that this mb() pairs with another barrier
> provided by cmpxchg() in task_work_add(). It ensures that either get_signal()
> sees the new work added by task_work_add(), or task_work_add() sees the
> result of "&= ~JOBCTL_TASK_WORK".
>
> Oleg.
>
> --- x/kernel/signal.c
> +++ x/kernel/signal.c
> @@ -2541,7 +2541,7 @@ bool get_signal(struct ksignal *ksig)
>
> relock:
> spin_lock_irq(&sighand->siglock);
> - current->jobctl &= ~JOBCTL_TASK_WORK;
> + smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
> if (unlikely(current->task_works)) {
> spin_unlock_irq(&sighand->siglock);
> task_work_run();
>
I think this should work when paired with the READ_ONCE() on the
task_work_add() side. I haven't managed to reproduce badness with the
existing one that doesn't have the smp_store_mb() here, so can't verify
much beyond that...
Are you going to send this out as a complete patch?
--
Jens Axboe
* Re: [PATCH] task_work: only grab task signal lock when needed
From: Oleg Nesterov @ 2020-08-13 11:48 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, Peter Zijlstra, Jann Horn
On 08/12, Jens Axboe wrote:
>
> On 8/12/20 8:54 AM, Oleg Nesterov wrote:
> >
> > --- x/kernel/signal.c
> > +++ x/kernel/signal.c
> > @@ -2541,7 +2541,7 @@ bool get_signal(struct ksignal *ksig)
> >
> > relock:
> > spin_lock_irq(&sighand->siglock);
> > - current->jobctl &= ~JOBCTL_TASK_WORK;
> > + smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
> > if (unlikely(current->task_works)) {
> > spin_unlock_irq(&sighand->siglock);
> > task_work_run();
> >
>
> I think this should work when paired with the READ_ONCE() on the
> task_work_add() side.
It pairs with mb (implied by cmpxchg) before READ_ONCE. So we roughly have
task_work_add: get_signal:
STORE(task->task_works, new_work); STORE(task->jobctl);
mb(); mb();
LOAD(task->jobctl); LOAD(task->task_works);
and we can rely on STORE-MB-LOAD.
> I haven't managed to reproduce badness with the
> existing one that doesn't have the smp_store_mb() here, so can't verify
> much beyond that...
Yes, the race is very unlikely. And the problem is minor, the target task
can miss the new work added by TWA_SIGNAL and return from get_signal() without
TIF_SIGPENDING.
> Are you going to send this out as a complete patch?
Jens, could you please send the patch? I am on vacation and travelling.
Feel free to add my ACK.
Oleg.
* [PATCH] task_work: only grab task signal lock when needed
@ 2020-08-13 15:07 Jens Axboe
From: Jens Axboe @ 2020-08-13 15:07 UTC (permalink / raw)
To: linux-kernel; +Cc: Oleg Nesterov, Jann Horn, Peter Zijlstra
If JOBCTL_TASK_WORK is already set on the targeted task, then we need
not go through {lock,unlock}_task_sighand() to set it again and queue
a signal wakeup. This is safe as we're checking it _after_ adding the
new task_work with cmpxchg().
The ordering is as follows:
task_work_add() get_signal()
--------------------------------------------------------------
STORE(task->task_works, new_work); STORE(task->jobctl);
mb(); mb();
LOAD(task->jobctl); LOAD(task->task_works);
This speeds up TWA_SIGNAL handling quite a bit, which is important now
that io_uring is relying on it for all task_work deliveries.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jann Horn <jannh@google.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
diff --git a/kernel/signal.c b/kernel/signal.c
index 6f16f7c5d375..42b67d2cea37 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2541,7 +2541,21 @@ bool get_signal(struct ksignal *ksig)
relock:
spin_lock_irq(&sighand->siglock);
- current->jobctl &= ~JOBCTL_TASK_WORK;
+ /*
+ * Make sure we can safely read ->jobctl in task_work_add(). As Oleg
+ * states:
+ *
+ * It pairs with mb (implied by cmpxchg) before READ_ONCE. So we
+ * roughly have
+ *
+ * task_work_add: get_signal:
+ * STORE(task->task_works, new_work); STORE(task->jobctl);
+ * mb(); mb();
+ * LOAD(task->jobctl); LOAD(task->task_works);
+ *
+ * and we can rely on STORE-MB-LOAD [in task_work_add()].
+ */
+ smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
if (unlikely(current->task_works)) {
spin_unlock_irq(&sighand->siglock);
task_work_run();
diff --git a/kernel/task_work.c b/kernel/task_work.c
index 5c0848ca1287..613b2d634af8 100644
--- a/kernel/task_work.c
+++ b/kernel/task_work.c
@@ -42,7 +42,13 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
set_notify_resume(task);
break;
case TWA_SIGNAL:
- if (lock_task_sighand(task, &flags)) {
+ /*
+ * Only grab the sighand lock if we don't already have some
+ * task_work pending. This pairs with the smp_store_mb()
+ * in get_signal(), see comment there.
+ */
+ if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
+ lock_task_sighand(task, &flags)) {
task->jobctl |= JOBCTL_TASK_WORK;
signal_wake_up(task, 0);
unlock_task_sighand(task, &flags);
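For reference, a caller-side sketch of how this path gets exercised
(kernel code, not standalone; names of the work function and the error
handling are illustrative):

```c
/* runs later in the target task's context, from task_work_run() */
static void my_work_fn(struct callback_head *cb)
{
        /* ... */
}

static struct callback_head my_work;

static void kick_target(struct task_struct *tsk)
{
        init_task_work(&my_work, my_work_fn);
        if (task_work_add(tsk, &my_work, TWA_SIGNAL))
                ; /* task is exiting; the work was not queued */
}
```

With this patch, repeated calls like the above only take the sighand
lock while JOBCTL_TASK_WORK is clear on the target.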
--
Jens Axboe
Thread overview: 8+ messages
2020-08-11 14:25 [PATCH] task_work: only grab task signal lock when needed Jens Axboe
2020-08-11 15:23 ` Oleg Nesterov
2020-08-11 16:45 ` Jens Axboe
2020-08-12 14:54 ` Oleg Nesterov
2020-08-12 20:06 ` Peter Zijlstra
2020-08-12 23:13 ` Jens Axboe
2020-08-13 11:48 ` Oleg Nesterov
2020-08-13 15:07 Jens Axboe