From: "Eric W. Biederman" <ebiederm@xmission.com>
To: Mike Christie <michael.christie@oracle.com>
Cc: oleg@redhat.com, linux@leemhuis.info, nicolas.dichtel@6wind.com,
	axboe@kernel.dk, torvalds@linux-foundation.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mst@redhat.com,
	sgarzare@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
	brauner@kernel.org
Subject: Re: [RFC PATCH 1/8] signal: Dequeue SIGKILL even if SIGNAL_GROUP_EXIT/group_exec_task is set
Date: Wed, 17 May 2023 22:49:31 -0500	[thread overview]
Message-ID: <87ednei9is.fsf@email.froward.int.ebiederm.org> (raw)
In-Reply-To: <20230518000920.191583-2-michael.christie@oracle.com> (Mike Christie's message of "Wed, 17 May 2023 19:09:13 -0500")


Long story short.

In the patch below, the first hunk is a no-op.

The code you are bypassing was added to ensure that process termination
(aka SIGKILL) is processed before any other signals.  Other than signal
processing order, there are no substantive differences between the two
code paths.  With all signals except SIGSTOP == 19 and SIGKILL == 9
blocked, SIGKILL should always be processed before SIGSTOP.
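
As a minimal sketch of why (a model only, not the real next_signal(),
which additionally prefers synchronous signals):

	/* Pick the lowest-numbered pending signal that is not blocked;
	 * signal n lives in bit n - 1 of each mask. */
	static int first_pending_unblocked(unsigned long pending,
					   unsigned long blocked)
	{
		unsigned long ready = pending & ~blocked;

		return ready ? __ffs(ready) + 1 : 0;
	}

With only the SIGKILL (9) and SIGSTOP (19) bits clear in "blocked",
bit 8 (SIGKILL) is always found before bit 18 (SIGSTOP).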

Can you try the patch with just the last hunk, the one that does
s/PF_IO_WORKER/PF_USER_WORKER/, and see if that is enough?

I have no objections to the final hunk.

Mike Christie <michael.christie@oracle.com> writes:

> This has us dequeue SIGKILL even if SIGNAL_GROUP_EXIT/group_exec_task is
> set when we are dealing with PF_USER_WORKER tasks.
>
> When a vhost_task gets a SIGKILL, we could have outstanding IO in flight.
> We can easily stop new work/IO from being queued to the vhost_task, but
> for IO that's already been sent to something like the block layer we
> need to wait for the response and then process it. These types of IO
> completions use the vhost_task to process the completion, so we can't
> exit immediately.
>
> We need to wait for and then handle those completions from the
> vhost_task, but when we have a SIGKILL pending, functions like
> schedule() return immediately, so we can't wait as we normally would.
> Functions like vhost_worker() degrade to just a while(1); loop.
>
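
A rough sketch of that failure mode (a hypothetical loop; work_pending()
and run_work() are made-up names, this is not the actual vhost_worker()
code): with a fatal signal pending, a TASK_INTERRUPTIBLE sleep returns
immediately, so the wait degrades into a busy spin:

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (!work_pending(worker))
			schedule();	/* returns at once: SIGKILL pending */
		__set_current_state(TASK_RUNNING);
		while (work_pending(worker))
			run_work(worker);
	}
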
> This patch has get_signal() drop down to the normal code path when
> SIGNAL_GROUP_EXIT/group_exec_task is set, so the caller can still
> detect that there is a SIGKILL but also perform some blocking cleanup.
>
> Note that the chunk I'm now bypassing does:
>
> sigdelset(&current->pending.signal, SIGKILL);
>
> We look to be OK, because in the places where we set SIGNAL_GROUP_EXIT/
> group_exec_task we are already doing that on the threads in the
> group.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> ---
>  kernel/signal.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/signal.c b/kernel/signal.c
> index 8f6330f0e9ca..ae4972eea5db 100644
> --- a/kernel/signal.c
> +++ b/kernel/signal.c
> @@ -2705,9 +2705,18 @@ bool get_signal(struct ksignal *ksig)
>  		struct k_sigaction *ka;
>  		enum pid_type type;
>  
> -		/* Has this task already been marked for death? */
> -		if ((signal->flags & SIGNAL_GROUP_EXIT) ||
> -		     signal->group_exec_task) {
> +		/*
> +		 * Has this task already been marked for death?
> +		 *
> +		 * If this is a PF_USER_WORKER then the task may need to do
> +		 * extra work that requires waiting on running work, so we want
> +		 * to dequeue the signal below and tell the caller it's time to
> +		 * start its exit procedure. When the work has completed then
> +		 * the task will exit.
> +		 */
> +		if (!(current->flags & PF_USER_WORKER) &&
> +		    ((signal->flags & SIGNAL_GROUP_EXIT) ||
> +		     signal->group_exec_task)) {
>  			clear_siginfo(&ksig->info);
>  			ksig->info.si_signo = signr = SIGKILL;
>  			sigdelset(&current->pending.signal, SIGKILL);

This hunk is a confusing no-op.
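
Concretely (a condensed sketch; both paths wind up delivering SIGKILL,
the branch being bypassed just short-circuits the ordinary dequeue):

	/* Bypassed fast path, from the hunk above: */
	ksig->info.si_signo = signr = SIGKILL;
	sigdelset(&current->pending.signal, SIGKILL);

	/* Ordinary path: dequeue_signal() picks the same SIGKILL,
	 * as it is the lowest-numbered unblocked pending signal. */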

> @@ -2861,11 +2870,11 @@ bool get_signal(struct ksignal *ksig)
>  		}
>  
>  		/*
> -		 * PF_IO_WORKER threads will catch and exit on fatal signals
> +		 * PF_USER_WORKER threads will catch and exit on fatal signals
>  		 * themselves. They have cleanup that must be performed, so
>  		 * we cannot call do_exit() on their behalf.
>  		 */
> -		if (current->flags & PF_IO_WORKER)
> +		if (current->flags & PF_USER_WORKER)
>  			goto out;
>  
>  		/*

This hunk is good and makes sense.
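
For context, the worker-side pattern this enables is roughly the
following (a sketch: get_signal() and do_exit() are real, but the
stop/flush helpers are hypothetical names, not the actual vhost code):

	struct ksignal ksig;

	if (get_signal(&ksig)) {
		/* Fatal signal seen: refuse new work, wait for
		 * in-flight completions, then exit on our own. */
		stop_new_work(worker);
		flush_running_work(worker);
		do_exit(0);
	}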

Eric

