* [patch V2 0/3] signals: Allow caching one sigqueue object per task
From: Thomas Gleixner @ 2021-03-11 13:20 UTC (permalink / raw)
  To: LKML
  Cc: Oleg Nesterov, Sebastian Andrzej Siewior, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming, Eric W. Biederman

This is a follow-up to the initial submission, which can be found here:

  https://lore.kernel.org/r/20210303142025.wbbt2nnr6dtgwjfi@linutronix.de

Signal sending requires a kmem cache allocation at the sender side and the
receiver hands it back to the kmem cache when consuming the signal.

This works pretty well even for realtime workloads except for the case when
the kmem cache allocation has to go into the slow path, which is rare but
happens.

Preempt-RT carries a patch which allows caching of one sigqueue object per
task. The object is not preallocated. It's cached when the task receives a
signal. The cache is freed when the task exits.

The memory overhead for a standard distro setup is pretty small. After boot
there are fewer than 10 objects cached in about 1500 tasks. The speedup for
sending a signal from a cached sigqueue object is small (~3us per signal)
and almost invisible, but for signal-heavy workloads it's definitely
measurable, and for the targeted realtime workloads it's solving a
real-world latency issue.
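
For illustration, the mechanism boils down to a one-slot cache in front of
the kmem cache. A minimal sketch with illustrative function names (patch
3/3 has the real code, including the locking and memory ordering details):

  /* Consumer side: runs on the signal sender under sighand::siglock */
  static struct sigqueue *sigqueue_from_cache(struct task_struct *t)
  {
          struct sigqueue *q = t->sigqueue_cache;

          if (q)
                  t->sigqueue_cache = NULL;
          return q;
  }

  /* Producer side: only the receiving task refills its own slot */
  static void sigqueue_to_cache(struct sigqueue *q)
  {
          if (!current->sigqueue_cache)
                  current->sigqueue_cache = q;
          else
                  kmem_cache_free(sigqueue_cachep, q);
  }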

Changes vs V1:

   - the caching is now unconditional
   - drop the pointless cmpxchg
   - split the patch up for better readability
   - add a proper analysis of the impact and benefits to the changelog

Thanks,

	tglx
---
 include/linux/sched.h  |    1 +
 include/linux/signal.h |    1 +
 kernel/exit.c          |    3 +--
 kernel/fork.c          |    1 +
 kernel/signal.c        |   44 +++++++++++++++++++++++++++++++++-----------
 5 files changed, 37 insertions(+), 13 deletions(-)


* [patch V2 1/3] signal: Provide and use exit_task_sighand()
From: Thomas Gleixner @ 2021-03-11 13:20 UTC (permalink / raw)
  To: LKML
  Cc: Oleg Nesterov, Sebastian Andrzej Siewior, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming, Eric W. Biederman

To prepare for caching a sigqueue per task, implement a dedicated function
to flush the sigqueue of the exiting task.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/signal.h |    1 +
 kernel/exit.c          |    3 +--
 kernel/signal.c        |    9 +++++++++
 3 files changed, 11 insertions(+), 2 deletions(-)

--- a/include/linux/signal.h
+++ b/include/linux/signal.h
@@ -265,6 +265,7 @@ static inline void init_sigpending(struc
 }
 
 extern void flush_sigqueue(struct sigpending *queue);
+extern void exit_task_sighand(struct task_struct *tsk);
 
 /* Test if 'sig' is valid signal. Use this instead of testing _NSIG directly */
 static inline int valid_signal(unsigned long sig)
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -152,8 +152,7 @@ static void __exit_signal(struct task_st
 	 * Do this under ->siglock, we can race with another thread
 	 * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
 	 */
-	flush_sigqueue(&tsk->pending);
-	tsk->sighand = NULL;
+	exit_task_sighand(tsk);
 	spin_unlock(&sighand->siglock);
 
 	__cleanup_sighand(sighand);
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -471,6 +471,15 @@ void flush_sigqueue(struct sigpending *q
 }
 
 /*
+ * Called from __exit_signal. Flush tsk->pending and clear tsk->sighand.
+ */
+void exit_task_sighand(struct task_struct *tsk)
+{
+	flush_sigqueue(&tsk->pending);
+	tsk->sighand = NULL;
+}
+
+/*
  * Flush all pending signals for this kthread.
  */
 void flush_signals(struct task_struct *t)



* [patch V2 2/3] signal: Hand SIGQUEUE_PREALLOC flag to __sigqueue_alloc()
From: Thomas Gleixner @ 2021-03-11 13:20 UTC (permalink / raw)
  To: LKML
  Cc: Oleg Nesterov, Sebastian Andrzej Siewior, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming, Eric W. Biederman

There is no point in having the conditional at the callsite.

Just hand in the allocation mode flag to __sigqueue_alloc() and use it to
initialize sigqueue::flags.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/signal.c |   17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -410,7 +410,8 @@ void task_join_group_stop(struct task_st
  *   appropriate lock must be held to stop the target task from exiting
  */
 static struct sigqueue *
-__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimit)
+__sigqueue_alloc(int sig, struct task_struct *t, gfp_t gfp_flags,
+		 int override_rlimit, const unsigned int sigqueue_flags)
 {
 	struct sigqueue *q = NULL;
 	struct user_struct *user;
@@ -432,7 +433,7 @@ static struct sigqueue *
 	rcu_read_unlock();
 
 	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
-		q = kmem_cache_alloc(sigqueue_cachep, flags);
+		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
 	} else {
 		print_dropped_signal(sig);
 	}
@@ -442,7 +443,7 @@ static struct sigqueue *
 			free_uid(user);
 	} else {
 		INIT_LIST_HEAD(&q->list);
-		q->flags = 0;
+		q->flags = sigqueue_flags;
 		q->user = user;
 	}
 
@@ -1122,7 +1123,8 @@ static int __send_signal(int sig, struct
 	else
 		override_rlimit = 0;
 
-	q = __sigqueue_alloc(sig, t, GFP_ATOMIC, override_rlimit);
+	q = __sigqueue_alloc(sig, t, GFP_ATOMIC, override_rlimit, 0);
+
 	if (q) {
 		list_add_tail(&q->list, &pending->list);
 		switch ((unsigned long) info) {
@@ -1816,12 +1818,7 @@ EXPORT_SYMBOL(kill_pid);
  */
 struct sigqueue *sigqueue_alloc(void)
 {
-	struct sigqueue *q = __sigqueue_alloc(-1, current, GFP_KERNEL, 0);
-
-	if (q)
-		q->flags |= SIGQUEUE_PREALLOC;
-
-	return q;
+	return __sigqueue_alloc(-1, current, GFP_KERNEL, 0, SIGQUEUE_PREALLOC);
 }
 
 void sigqueue_free(struct sigqueue *q)



* [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Thomas Gleixner @ 2021-03-11 13:20 UTC (permalink / raw)
  To: LKML
  Cc: Oleg Nesterov, Sebastian Andrzej Siewior, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming, Eric W. Biederman

From: Thomas Gleixner <tglx@linutronix.de>

The idea for this originates from the real time tree to make signal
delivery for realtime applications more efficient. In quite a few of these
application scenarios a control task signals workers to start their
computations. There is usually only one signal per worker in flight.  This
works nicely as long as the kmem cache allocations do not hit the slow path
and cause latencies.

To cure this an optimistic caching was introduced (limited to RT tasks)
which allows a task to cache a single sigqueue in a pointer in task_struct
instead of handing it back to the kmem cache after consuming a signal. When
the next signal is sent to the task then the cached sigqueue is used
instead of allocating a new one. This solved the problem for this set of
application scenarios nicely.

The task cache is not preallocated so the first signal sent to a task
always goes to the kmem cache allocator. The cached sigqueue stays around
until the task exits and is freed when task::sighand is dropped.

After posting this solution for mainline the discussion came up whether
this would be useful in general and should not be limited to realtime
tasks: https://lore.kernel.org/r/m11rcu7nbr.fsf@fess.ebiederm.org

One concern leading to the original limitation was to avoid a large number
of pointlessly cached sigqueues in live tasks. The other concern was
RLIMIT_SIGPENDING, as these cached sigqueues are not accounted for.

The accounting problem is real, but on the other hand slightly academic.
After gathering some statistics it turned out that after boot of a regular
distro install there are fewer than 10 sigqueues cached in ~1500 tasks.

In case of a 'mass fork and fire signal to child' scenario the extra 80
bytes of memory per task are well in the noise of the overall memory
consumption of the fork bomb.

If this were to be limited then it would need an extra counter in struct
user, more atomic instructions and a separate rlimit. Yet another tunable
which is mostly unused.

The caching is actually used. After boot and a full kernel compile on a
64-CPU machine with make -j128, the number of 'allocations' looks like this:

  From slab: 	   23996
  From task cache: 52223

I.e. it reduces the number of slab cache operations by ~68% (52223 of
76219 total allocations were served from the task cache).

A typical pattern there is:

<...>-58490 __sigqueue_alloc:  for 58488 from slab ffff8881132df460
<...>-58488 __sigqueue_free:   cache ffff8881132df460
<...>-58488 __sigqueue_alloc:  for 1149 from cache ffff8881103dc550
  bash-1149 exit_task_sighand: free ffff8881132df460
  bash-1149 __sigqueue_free:   cache ffff8881103dc550

The interesting sequence is that the exiting task 58488 grabs the sigqueue
from bash's task cache to signal exit and bash sticks it back into its own
cache. Lather, rinse and repeat.

The caching is probably not noticeable for the general use case, but the
benefit for latency-sensitive applications is clear. While kmem caches
usually serve from the fast path, slab merging (the default) can, depending
on the usage pattern of the merged slabs, cause occasional slow path
allocations.

The time saved per cached entry is a few microseconds per signal, which is
not relevant for e.g. a kernel build, but for signal-heavy workloads it's
measurable.

As there is no real downside to this caching mechanism, making it
unconditionally available is preferable to more conditional code or new
magic tunables.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V2: Remove the realtime task restriction and get rid of the cmpxchg()
    (Eric, Oleg)
    Add more information to the changelog.
---
 include/linux/sched.h |    1 +
 kernel/fork.c         |    1 +
 kernel/signal.c       |   22 +++++++++++++++++++---
 3 files changed, 21 insertions(+), 3 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -985,6 +985,7 @@ struct task_struct {
 	/* Signal handlers: */
 	struct signal_struct		*signal;
 	struct sighand_struct __rcu		*sighand;
+	struct sigqueue			*sigqueue_cache;
 	sigset_t			blocked;
 	sigset_t			real_blocked;
 	/* Restored if set_restore_sigmask() was used: */
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1993,6 +1993,7 @@ static __latent_entropy struct task_stru
 	spin_lock_init(&p->alloc_lock);
 
 	init_sigpending(&p->pending);
+	p->sigqueue_cache = NULL;
 
 	p->utime = p->stime = p->gtime = 0;
 #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -433,7 +433,11 @@ static struct sigqueue *
 	rcu_read_unlock();
 
 	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
-		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
+		/* Preallocation does not hold sighand::siglock */
+		if (sigqueue_flags || !t->sigqueue_cache)
+			q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
+		else
+			q = xchg(&t->sigqueue_cache, NULL);
 	} else {
 		print_dropped_signal(sig);
 	}
@@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
 		return;
 	if (atomic_dec_and_test(&q->user->sigpending))
 		free_uid(q->user);
-	kmem_cache_free(sigqueue_cachep, q);
+
+	/* Cache one sigqueue per task */
+	if (!current->sigqueue_cache)
+		current->sigqueue_cache = q;
+	else
+		kmem_cache_free(sigqueue_cachep, q);
 }
 
 void flush_sigqueue(struct sigpending *queue)
@@ -472,12 +481,19 @@ void flush_sigqueue(struct sigpending *q
 }
 
 /*
- * Called from __exit_signal. Flush tsk->pending and clear tsk->sighand.
+ * Called from __exit_signal. Flush tsk->pending, clear tsk->sighand and
+ * free tsk->sigqueue_cache.
  */
 void exit_task_sighand(struct task_struct *tsk)
 {
+	struct sigqueue *q;
+
 	flush_sigqueue(&tsk->pending);
 	tsk->sighand = NULL;
+
+	q = xchg(&tsk->sigqueue_cache, NULL);
+	if (q)
+		kmem_cache_free(sigqueue_cachep, q);
 }
 
 /*



* Re: [patch V2 0/3] signals: Allow caching one sigqueue object per task
From: Eric W. Biederman @ 2021-03-11 21:13 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Oleg Nesterov, Sebastian Andrzej Siewior, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming

Thomas Gleixner <tglx@linutronix.de> writes:

> This is a follow-up to the initial submission, which can be found here:
>
>   https://lore.kernel.org/r/20210303142025.wbbt2nnr6dtgwjfi@linutronix.de
>
> Signal sending requires a kmem cache allocation at the sender side and the
> receiver hands it back to the kmem cache when consuming the signal.
>
> This works pretty well even for realtime workloads except for the case when
> the kmem cache allocation has to go into the slow path, which is rare but
> happens.
>
> Preempt-RT carries a patch which allows caching of one sigqueue object per
> task. The object is not preallocated. It's cached when the task receives a
> signal. The cache is freed when the task exits.

I am probably skimming fast and missed your explanation but is there
a reason the caching is per task (aka thread) and not per signal_struct
(aka process)?

My sense is most signal delivery is per process.  Are there realtime
workloads that extensively use pthread_sigqueue?  The ordinary sigqueue
interface only allows targeting a process.
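
For reference, the two userspace interfaces in question (an illustrative
snippet, not taken from any real workload; the function names are made up):

  #define _GNU_SOURCE
  #include <signal.h>
  #include <pthread.h>

  /* Process-directed: any thread with the signal unblocked may dequeue it */
  void kick_process(pid_t pid)
  {
          union sigval v = { .sival_int = 1 };
          sigqueue(pid, SIGRTMIN, v);
  }

  /* Thread-directed (GNU extension): targets one specific worker thread */
  void kick_thread(pthread_t worker)
  {
          union sigval v = { .sival_int = 1 };
          pthread_sigqueue(worker, SIGRTMIN, v);
  }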

Mostly I am just trying to get a sense of the workloads that are
improved by this.

Eric


* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Sebastian Andrzej Siewior @ 2021-03-12 11:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Oleg Nesterov, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On 2021-03-11 14:20:39 [+0100], Thomas Gleixner wrote:
> --- a/kernel/signal.c
> +++ b/kernel/signal.c
> @@ -433,7 +433,11 @@ static struct sigqueue *
>  	rcu_read_unlock();
>  
>  	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
> -		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
> +		/* Preallocation does not hold sighand::siglock */
> +		if (sigqueue_flags || !t->sigqueue_cache)
> +			q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
> +		else
> +			q = xchg(&t->sigqueue_cache, NULL);

Could it happen that two tasks saw t->sigqueue_cache != NULL, the first
one got the pointer via xchg() and the second got NULL via xchg()?

>  	} else {
>  		print_dropped_signal(sig);
>  	}
> @@ -472,12 +481,19 @@ void flush_sigqueue(struct sigpending *q
>  }
>  
>  /*
> - * Called from __exit_signal. Flush tsk->pending and clear tsk->sighand.
> + * Called from __exit_signal. Flush tsk->pending, clear tsk->sighand and
> + * free tsk->sigqueue_cache.
>   */
>  void exit_task_sighand(struct task_struct *tsk)
>  {
> +	struct sigqueue *q;
> +
>  	flush_sigqueue(&tsk->pending);
>  	tsk->sighand = NULL;
> +
> +	q = xchg(&tsk->sigqueue_cache, NULL);
> +	if (q)
> +		kmem_cache_free(sigqueue_cachep, q);

Do we need this xchg() here? Only the task itself adds something here
and the task is on its way out so it should not add an entry to the
cache.

>  }
>  
>  /*

Sebastian


* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Oleg Nesterov @ 2021-03-12 16:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On 03/11, Thomas Gleixner wrote:
>
> @@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
>  		return;
>  	if (atomic_dec_and_test(&q->user->sigpending))
>  		free_uid(q->user);
> -	kmem_cache_free(sigqueue_cachep, q);
> +
> +	/* Cache one sigqueue per task */
> +	if (!current->sigqueue_cache)
> +		current->sigqueue_cache = q;
> +	else
> +		kmem_cache_free(sigqueue_cachep, q);
>  }

This doesn't look right, note that __exit_signal() does
flush_sigqueue(&sig->shared_pending) at the end, after exit_task_sighand()
was already called.

I'd suggest to not add the new exit_task_sighand() helper and simply free
current->sigqueue_cache at the end of __exit_signal().

Oleg.



* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Oleg Nesterov @ 2021-03-12 16:18 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Thomas Gleixner, LKML, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On 03/12, Sebastian Andrzej Siewior wrote:
>
> On 2021-03-11 14:20:39 [+0100], Thomas Gleixner wrote:
> > --- a/kernel/signal.c
> > +++ b/kernel/signal.c
> > @@ -433,7 +433,11 @@ static struct sigqueue *
> >  	rcu_read_unlock();
> >
> >  	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
> > -		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
> > +		/* Preallocation does not hold sighand::siglock */
> > +		if (sigqueue_flags || !t->sigqueue_cache)
> > +			q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
> > +		else
> > +			q = xchg(&t->sigqueue_cache, NULL);
>
> Could it happen that two tasks saw t->sigqueue_cache != NULL, the first
> one got the pointer via xchg() and the second got NULL via xchg()?

It is called with sighand::siglock held, we don't even need xchg().

Oleg.



* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Thomas Gleixner @ 2021-03-12 19:25 UTC (permalink / raw)
  To: Oleg Nesterov, Sebastian Andrzej Siewior
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming, Eric W. Biederman

On Fri, Mar 12 2021 at 17:18, Oleg Nesterov wrote:
> On 03/12, Sebastian Andrzej Siewior wrote:
>>
>> On 2021-03-11 14:20:39 [+0100], Thomas Gleixner wrote:
>> > --- a/kernel/signal.c
>> > +++ b/kernel/signal.c
>> > @@ -433,7 +433,11 @@ static struct sigqueue *
>> >  	rcu_read_unlock();
>> >
>> >  	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
>> > -		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
>> > +		/* Preallocation does not hold sighand::siglock */
>> > +		if (sigqueue_flags || !t->sigqueue_cache)
>> > +			q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
>> > +		else
>> > +			q = xchg(&t->sigqueue_cache, NULL);
>>
>> Could it happen that two tasks saw t->sigqueue_cache != NULL, the first
>> one got the pointer via xchg() and the second got NULL via xchg()?
>
> It is called with sighand::siglock held, we don't even need xchg().

Yes, it was me being lazy. Lemme open code it as it's actually resulting
in a locked instruction.

Thanks,

        tglx


* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Thomas Gleixner @ 2021-03-12 19:26 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: LKML, Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On Fri, Mar 12 2021 at 17:11, Oleg Nesterov wrote:
> On 03/11, Thomas Gleixner wrote:
>>
>> @@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
>>  		return;
>>  	if (atomic_dec_and_test(&q->user->sigpending))
>>  		free_uid(q->user);
>> -	kmem_cache_free(sigqueue_cachep, q);
>> +
>> +	/* Cache one sigqueue per task */
>> +	if (!current->sigqueue_cache)
>> +		current->sigqueue_cache = q;
>> +	else
>> +		kmem_cache_free(sigqueue_cachep, q);
>>  }
>
> This doesn't look right, note that __exit_signal() does
> flush_sigqueue(&sig->shared_pending) at the end, after exit_task_sighand()
> was already called.
>
> I'd suggest to not add the new exit_task_sighand() helper and simply free
> current->sigqueue_cache at the end of __exit_signal().

Ooops. Thanks for spotting this!


* Re: [patch V2 0/3] signals: Allow caching one sigqueue object per task
From: Thomas Gleixner @ 2021-03-12 20:02 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: LKML, Oleg Nesterov, Sebastian Andrzej Siewior, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Matt Fleming

On Thu, Mar 11 2021 at 15:13, Eric W. Biederman wrote:
> Thomas Gleixner <tglx@linutronix.de> writes:
>
>> This is a follow-up to the initial submission, which can be found here:
>>
>>   https://lore.kernel.org/r/20210303142025.wbbt2nnr6dtgwjfi@linutronix.de
>>
>> Signal sending requires a kmem cache allocation at the sender side and the
>> receiver hands it back to the kmem cache when consuming the signal.
>>
>> This works pretty well even for realtime workloads except for the case when
>> the kmem cache allocation has to go into the slow path, which is rare but
>> happens.
>>
>> Preempt-RT carries a patch which allows caching of one sigqueue object per
>> task. The object is not preallocated. It's cached when the task receives a
>> signal. The cache is freed when the task exits.
>
> I am probably skimming fast and missed your explanation but is there
> a reason the caching is per task (aka thread) and not per signal_struct
> (aka process)?
>
> My sense is most signal delivery is per process.  Are there realtime
> workloads that extensively use pthread_sigqueue?  The ordinary sigqueue
> interface only allows targeting a process.

Unfortunately they use both. The majority is probably process based.

Thanks,

        tglx


* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Thomas Gleixner @ 2021-03-12 21:13 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: LKML, Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On Fri, Mar 12 2021 at 20:26, Thomas Gleixner wrote:
> On Fri, Mar 12 2021 at 17:11, Oleg Nesterov wrote:
>> On 03/11, Thomas Gleixner wrote:
>>>
>>> @@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
>>>  		return;
>>>  	if (atomic_dec_and_test(&q->user->sigpending))
>>>  		free_uid(q->user);
>>> -	kmem_cache_free(sigqueue_cachep, q);
>>> +
>>> +	/* Cache one sigqueue per task */
>>> +	if (!current->sigqueue_cache)
>>> +		current->sigqueue_cache = q;
>>> +	else
>>> +		kmem_cache_free(sigqueue_cachep, q);
>>>  }
>>
>> This doesn't look right, note that __exit_signal() does
>> flush_sigqueue(&sig->shared_pending) at the end, after exit_task_sighand()
>> was already called.
>>
>> I'd suggest to not add the new exit_task_sighand() helper and simply free
>> current->sigqueue_cache at the end of __exit_signal().
>
> Ooops. Thanks for spotting this!

Hrm.

The task which is released is obviously not current, so even if there
are still sigqueues in shared_pending then they won't end up in the
released task's sigqueue_cache. They can only ever end up in
current->sigqueue_cache.

But that brings back the memory of why I had cmpxchg() in the original
version. This code runs without current->sighand->siglock held.

So we need READ/WRITE_ONCE() for that on both sides which is sufficient.
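
In sketch form (an illustrative snippet, not the final diff; the V3 patch
below has the real version with full comments):

  /* Consumer: the signal sender, serialized by t->sighand->siglock */
  q = READ_ONCE(t->sigqueue_cache);
  if (q)
          WRITE_ONCE(t->sigqueue_cache, NULL);

  /* Producer: only @t itself in __sigqueue_free(), no siglock held */
  if (!READ_ONCE(current->sigqueue_cache))
          WRITE_ONCE(current->sigqueue_cache, q);

There is at most one consumer (senders serialize on siglock) and at most
one producer (the task itself) at any time, so marked accesses via
READ_ONCE()/WRITE_ONCE() are sufficient and no atomic read-modify-write is
needed.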

Thanks,

        tglx




* [patch V3 3/3] signal: Allow tasks to cache one sigqueue struct
From: Thomas Gleixner @ 2021-03-13 11:17 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: LKML, Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

From:  Thomas Gleixner <tglx@linutronix.de>
Subject: signal: Allow tasks to cache one sigqueue struct
Date: Wed, 03 Mar 2021 15:20:25 +0100

From: Thomas Gleixner <tglx@linutronix.de>

The idea for this originates from the real time tree to make signal
delivery for realtime applications more efficient. In quite a few of these
application scenarios a control task signals workers to start their
computations. There is usually only one signal per worker in flight.  This
works nicely as long as the kmem cache allocations do not hit the slow path
and cause latencies.

To cure this an optimistic caching was introduced (limited to RT tasks)
which allows a task to cache a single sigqueue in a pointer in task_struct
instead of handing it back to the kmem cache after consuming a signal. When
the next signal is sent to the task then the cached sigqueue is used
instead of allocating a new one. This solved the problem for this set of
application scenarios nicely.

The task cache is not preallocated so the first signal sent to a task
always goes to the kmem cache allocator. The cached sigqueue stays around
until the task exits and is freed when task::sighand is dropped.

After posting this solution for mainline the discussion came up whether
this would be useful in general and should not be limited to realtime
tasks: https://lore.kernel.org/r/m11rcu7nbr.fsf@fess.ebiederm.org

One concern leading to the original limitation was to avoid a large number
of pointlessly cached sigqueues in live tasks. The other concern was
RLIMIT_SIGPENDING, as these cached sigqueues are not accounted for.

The accounting problem is real, but on the other hand slightly academic.
After gathering some statistics it turned out that after boot of a regular
distro install there are fewer than 10 sigqueues cached in ~1500 tasks.

In case of a 'mass fork and fire signal to child' scenario the extra 80
bytes of memory per task are well in the noise of the overall memory
consumption of the fork bomb.

If this were to be limited then it would need an extra counter in struct
user, more atomic instructions and a separate rlimit. Yet another tunable
which is mostly unused.

The caching is actually used. After boot and a full kernel compile on a
64-CPU machine with make -j128, the number of 'allocations' looks like this:

  From slab: 	   23996
  From task cache: 52223

I.e. it reduces the number of slab cache operations by ~68% (52223 of
76219 total allocations were served from the task cache).

A typical pattern there is:

<...>-58490 __sigqueue_alloc:  for 58488 from slab ffff8881132df460
<...>-58488 __sigqueue_free:   cache ffff8881132df460
<...>-58488 __sigqueue_alloc:  for 1149 from cache ffff8881103dc550
  bash-1149 exit_task_sighand: free ffff8881132df460
  bash-1149 __sigqueue_free:   cache ffff8881103dc550

The interesting sequence is that the exiting task 58488 grabs the sigqueue
from bash's task cache to signal exit and bash sticks it back into its own
cache. Lather, rinse and repeat.

The caching is probably not noticeable for the general use case, but the
benefit for latency-sensitive applications is clear. While kmem caches
usually serve from the fast path, slab merging (the default) can, depending
on the usage pattern of the merged slabs, cause occasional slow path
allocations.

The time saved per cached entry is a few microseconds per signal, which is
not relevant for e.g. a kernel build, but for signal-heavy workloads it's
measurable.

As there is no real downside to this caching mechanism, making it
unconditionally available is preferable to more conditional code or new
magic tunables.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V2: Remove the realtime task restriction and get rid of the cmpxchg()
    (Eric, Oleg)
    Add more information to the changelog.

V3: Use READ/WRITE_ONCE() for the cache operations and add commentary
    for it.
---
 include/linux/sched.h |    1 +
 kernel/fork.c         |    1 +
 kernel/signal.c       |   41 ++++++++++++++++++++++++++++++++++++++---
 3 files changed, 40 insertions(+), 3 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -985,6 +985,7 @@ struct task_struct {
 	/* Signal handlers: */
 	struct signal_struct		*signal;
 	struct sighand_struct __rcu		*sighand;
+	struct sigqueue			*sigqueue_cache;
 	sigset_t			blocked;
 	sigset_t			real_blocked;
 	/* Restored if set_restore_sigmask() was used: */
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1995,6 +1995,7 @@ static __latent_entropy struct task_stru
 	spin_lock_init(&p->alloc_lock);
 
 	init_sigpending(&p->pending);
+	p->sigqueue_cache = NULL;
 
 	p->utime = p->stime = p->gtime = 0;
 #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -433,7 +433,16 @@ static struct sigqueue *
 	rcu_read_unlock();
 
 	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
-		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
+		/*
+		 * Preallocation does not hold sighand::siglock so it can't
+		 * use the cache. The lockless caching requires that only
+		 * one consumer and only one producer run at a time.
+		 */
+		q = READ_ONCE(t->sigqueue_cache);
+		if (!q || sigqueue_flags)
+			q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
+		else
+			WRITE_ONCE(t->sigqueue_cache, NULL);
 	} else {
 		print_dropped_signal(sig);
 	}
@@ -450,13 +459,29 @@ static struct sigqueue *
 	return q;
 }
 
+static void sigqueue_cache_or_free(struct sigqueue *q)
+{
+	/*
+	 * Cache one sigqueue per task. This pairs with the consumer side
+	 * in __sigqueue_alloc() and needs READ/WRITE_ONCE() to prevent the
+	 * compiler from store tearing and to tell KCSAN that the data race
+	 * is intentional when run without holding current->sighand->siglock,
+	 * which is fine as current obviously cannot run __sigqueue_free()
+	 * concurrently.
+	 */
+	if (!READ_ONCE(current->sigqueue_cache))
+		WRITE_ONCE(current->sigqueue_cache, q);
+	else
+		kmem_cache_free(sigqueue_cachep, q);
+}
+
 static void __sigqueue_free(struct sigqueue *q)
 {
 	if (q->flags & SIGQUEUE_PREALLOC)
 		return;
 	if (atomic_dec_and_test(&q->user->sigpending))
 		free_uid(q->user);
-	kmem_cache_free(sigqueue_cachep, q);
+	sigqueue_cache_or_free(q);
 }
 
 void flush_sigqueue(struct sigpending *queue)
@@ -472,12 +497,22 @@ void flush_sigqueue(struct sigpending *q
 }
 
 /*
- * Called from __exit_signal. Flush tsk->pending and clear tsk->sighand.
+ * Called from __exit_signal(). Flush tsk->pending, clear tsk->sighand and
+ * free tsk->sigqueue_cache.
  */
 void exit_task_sighand(struct task_struct *tsk)
 {
+	struct sigqueue *q;
+
 	flush_sigqueue(&tsk->pending);
 	tsk->sighand = NULL;
+
+	/* Race free because @tsk is mopped up */
+	q = tsk->sigqueue_cache;
+	if (q) {
+		tsk->sigqueue_cache = NULL;
+		sigqueue_cache_or_free(q);
+	}
 }
 
 /*


* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Oleg Nesterov @ 2021-03-13 16:49 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On 03/12, Thomas Gleixner wrote:
>
> On Fri, Mar 12 2021 at 20:26, Thomas Gleixner wrote:
> > On Fri, Mar 12 2021 at 17:11, Oleg Nesterov wrote:
> >> On 03/11, Thomas Gleixner wrote:
> >>>
> >>> @@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
> >>>  		return;
> >>>  	if (atomic_dec_and_test(&q->user->sigpending))
> >>>  		free_uid(q->user);
> >>> -	kmem_cache_free(sigqueue_cachep, q);
> >>> +
> >>> +	/* Cache one sigqueue per task */
> >>> +	if (!current->sigqueue_cache)
> >>> +		current->sigqueue_cache = q;
> >>> +	else
> >>> +		kmem_cache_free(sigqueue_cachep, q);
> >>>  }
> >>
> >> This doesn't look right, note that __exit_signal() does
> >> flush_sigqueue(&sig->shared_pending) at the end, after exit_task_sighand()
> >> was already called.
> >>
> >> I'd suggest to not add the new exit_task_sighand() helper and simply free
> >> current->sigqueue_cache at the end of __exit_signal().
> >
> > Ooops. Thanks for spotting this!
>
> Hrm.
>
> The task which is released is obviously not current, so even if there
> are still sigqueues in shared_pending then they won't end up in the
> released task's sigqueue_cache. They can only ever end up in
> current->sigqueue_cache.

The task which is released can be "current" if this task autoreaps.
For example, if its parent ignores SIGCHLD. In this case exit_notify()
does release_task(current).
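
A minimal userspace illustration of that autoreap case (assumed behaviour
per POSIX/Linux: with SIGCHLD ignored, the exiting child is reaped without
the parent ever calling wait()):

  #include <signal.h>
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
          /* Parent ignores SIGCHLD: exiting children are autoreaped */
          signal(SIGCHLD, SIG_IGN);

          if (fork() == 0)
                  _exit(0);       /* exit_notify() -> release_task(current) */

          sleep(1);
          /* The child is already reaped: wait() fails with ECHILD */
          if (wait(NULL) == -1)
                  perror("wait");
          return 0;
  }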

> But that brings back the memory of why I had cmpxchg() in the original
> version. This code runs without current->sighand->siglock held.

Yes, I was wrong too. This code runs without exiting_task->sighand->siglock
and this is fine in that it cannot race with send_signal(exiting_task),
but somehow I missed the fact that it always populates current->sigqueue_cache,
not exiting_task->sigqueue_cache.

Oleg.



* Re: [patch V2 3/3] signal: Allow tasks to cache one sigqueue struct
From: Thomas Gleixner @ 2021-03-16 12:36 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: LKML, Sebastian Andrzej Siewior, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Matt Fleming,
	Eric W. Biederman

On Sat, Mar 13 2021 at 17:49, Oleg Nesterov wrote:
> On 03/12, Thomas Gleixner wrote:
>>
>> On Fri, Mar 12 2021 at 20:26, Thomas Gleixner wrote:
>> > On Fri, Mar 12 2021 at 17:11, Oleg Nesterov wrote:
>> >> On 03/11, Thomas Gleixner wrote:
>> >>>
>> >>> @@ -456,7 +460,12 @@ static void __sigqueue_free(struct sigqu
>> >>>  		return;
>> >>>  	if (atomic_dec_and_test(&q->user->sigpending))
>> >>>  		free_uid(q->user);
>> >>> -	kmem_cache_free(sigqueue_cachep, q);
>> >>> +
>> >>> +	/* Cache one sigqueue per task */
>> >>> +	if (!current->sigqueue_cache)
>> >>> +		current->sigqueue_cache = q;
>> >>> +	else
>> >>> +		kmem_cache_free(sigqueue_cachep, q);
>> >>>  }
>> >>
>> >> This doesn't look right, note that __exit_signal() does
>> >> flush_sigqueue(&sig->shared_pending) at the end, after exit_task_sighand()
>> >> was already called.
>> >>
>> >> I'd suggest to not add the new exit_task_sighand() helper and simply free
>> >> current->sigqueue_cache at the end of __exit_signal().
>> >
>> > Ooops. Thanks for spotting this!
>>
>> Hrm.
>>
>> The task which is released is obviously not current, so even if there
>> are still sigqueues in shared_pending then they won't end up in the
>> released task's sigqueue_cache. They can only ever end up in
>> current->sigqueue_cache.
>
> The task which is released can be "current" if this task autoreaps.
> For example, if its parent ignores SIGCHLD. In this case exit_notify()
> does release_task(current).

Bah. Let me stare at it moar.


