From: Mike Christie <michael.christie@oracle.com>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Oleg Nesterov <oleg@redhat.com>,
	linux@leemhuis.info, nicolas.dichtel@6wind.com, axboe@kernel.dk,
	torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, mst@redhat.com,
	sgarzare@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
	brauner@kernel.org
Subject: Re: [RFC PATCH 1/8] signal: Dequeue SIGKILL even if SIGNAL_GROUP_EXIT/group_exec_task is set
Date: Fri, 19 May 2023 18:24:34 -0500
Message-ID: <04f2853d-3bdb-8893-91c8-074893310d1d@oracle.com>
In-Reply-To: <874jo9c5x3.fsf@email.froward.int.ebiederm.org>

On 5/18/23 11:16 PM, Eric W. Biederman wrote:
> Mike Christie <michael.christie@oracle.com> writes:
> 
>> On 5/18/23 1:28 PM, Eric W. Biederman wrote:
>>> Still the big issue seems to be the way get_signal is connected into
>>> these threads so that it keeps getting called.  Calling get_signal after
>>> a fatal signal has been returned happens nowhere else and even if we fix
>>> it today it is likely to lead to bugs in the future because whoever is
>>> testing and updating the code is unlikely to have a vhost test case
>>> they care about.
>>>
>>> diff --git a/kernel/signal.c b/kernel/signal.c
>>> index 8f6330f0e9ca..4d54718cad36 100644
>>> --- a/kernel/signal.c
>>> +++ b/kernel/signal.c
>>> @@ -181,7 +181,9 @@ void recalc_sigpending_and_wake(struct task_struct *t)
>>>  
>>>  void recalc_sigpending(void)
>>>  {
>>> -       if (!recalc_sigpending_tsk(current) && !freezing(current))
>>> +       if ((!recalc_sigpending_tsk(current) && !freezing(current)) ||
>>> +           ((current->signal->flags & SIGNAL_GROUP_EXIT) &&
>>> +                   !__fatal_signal_pending(current)))
>>>                 clear_thread_flag(TIF_SIGPENDING);
>>>  
>>>  }
>>> @@ -1043,6 +1045,13 @@ static void complete_signal(int sig, struct task_struct *p, enum pid_type type)
>>>                  * This signal will be fatal to the whole group.
>>>                  */
>>>                 if (!sig_kernel_coredump(sig)) {
>>> +                       /*
>>> +                        * The signal is being short-circuit delivered;
>>> +                        * don't leave it pending.
>>> +                        */
>>> +                       if (type != PIDTYPE_PID) {
>>> +                               sigdelset(&t->signal->shared_pending,  sig);
>>> +
>>>                         /*
>>>                          * Start a group exit and wake everybody up.
>>>                          * This way we don't have other threads
>>>
>>
>> If I change up your patch so the last part is moved down a bit to where we set t
>> like this:
>>
>> diff --git a/kernel/signal.c b/kernel/signal.c
>> index 0ac48c96ab04..c976a80650db 100644
>> --- a/kernel/signal.c
>> +++ b/kernel/signal.c
>> @@ -181,9 +181,10 @@ void recalc_sigpending_and_wake(struct task_struct *t)
>>  
>>  void recalc_sigpending(void)
>>  {
>> -	if (!recalc_sigpending_tsk(current) && !freezing(current))
>> +	if ((!recalc_sigpending_tsk(current) && !freezing(current)) ||
>> +	    ((current->signal->flags & SIGNAL_GROUP_EXIT) &&
>> +	     !__fatal_signal_pending(current)))
>>  		clear_thread_flag(TIF_SIGPENDING);
>> -
> Can we get rid of this suggested change to recalc_sigpending?  The more I look
> at it the more I am convinced it is not safe.  In particular I believe
> it is incompatible with dump_interrupted() in fs/coredump.c


With your suggestion to call clear_thread_flag in vhost_worker, I don't need
the above chunk.
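
That is, roughly this in the vhost_worker() loop (just a sketch of the idea,
not the final patch; exact placement will be in the RFC):

	if (signal_pending(current)) {
		struct ksignal ksig;

		/*
		 * For a PF_USER_WORKER thread get_signal() returns on a
		 * fatal signal instead of calling do_exit(), so we clear
		 * TIF_SIGPENDING ourselves and keep running to flush the
		 * in-flight IO.
		 */
		if (get_signal(&ksig))
			clear_thread_flag(TIF_SIGPENDING);
	}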


> 
> The code in fs/coredump.c is the closest code we have to what you are
> trying to do with vhost_worker after the session is killed.  It also
> struggles with TIF_SIGPENDING getting set. 
>>  }
>>  EXPORT_SYMBOL(recalc_sigpending);
>>  
>> @@ -1053,6 +1054,17 @@ static void complete_signal(int sig, struct task_struct *p, enum pid_type type)
>>  			signal->group_exit_code = sig;
>>  			signal->group_stop_count = 0;
>>  			t = p;
>> +			/*
>> +			 * The signal is being short-circuit delivered;
>> +			 * don't leave it pending.
>> +			 */
>> +			if (type != PIDTYPE_PID) {
>> +				struct sigpending *pending;
>> +
>> +				pending = &t->signal->shared_pending;
>> +				sigdelset(&pending->signal, sig);
>> +			}
>> +
>>  			do {
>>  				task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
>>  				sigaddset(&t->pending.signal, SIGKILL);
>>
>>
>> Then get_signal() works the way Oleg mentioned it should earlier.
> 
> I am puzzled it makes a difference as t->signal and p->signal should
> point to the same thing, and in fact the code would more clearly read
> sigdelset(&signal->shared_pending, sig);


Yeah, either should work. The original patch had used t before it was
set, so my patch just moved the code down to after we set it. I used signal
directly like you wrote and it works fine.
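
For reference, the hunk I'm testing now reads like this (a sketch against my
tree; shared_pending is a struct sigpending, so it's the .signal member we
clear):

			signal->group_exit_code = sig;
			signal->group_stop_count = 0;
			t = p;
			/*
			 * The signal is being short-circuit delivered;
			 * don't leave it in shared_pending.
			 */
			if (type != PIDTYPE_PID)
				sigdelset(&signal->shared_pending.signal, sig);

			do {
				task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
				sigaddset(&t->pending.signal, SIGKILL);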


> 
> But all of that seems minor.
> 
>> For vhost I just need the code below which is just Linus's patch plus a call
>> to get_signal() in vhost_worker() and the PF_IO_WORKER->PF_USER_WORKER change.
>>
>> Note that when we get SIGKILL, the vhost file_operations->release function is called via
>>
>>             do_exit -> exit_files -> put_files_struct -> close_files
>>
>> and so the vhost release function starts to flush IO and stop the worker/vhost
>> task. In vhost_worker() we then just handle the last completions for already
>> running IO. When the vhost release function detects they are done, it calls
>> vhost_task_stop(); vhost_worker() returns, and then vhost_task_fn() does do_exit().
>> So we don't return immediately when get_signal() returns non-zero.
>>
>> So it works, but it sounds like you don't like vhost relying on the behavior,
>> and it's non-standard to use get_signal() like we are. So I'm not sure how we
>> want to proceed.
> 
> Let me clarify my concern.
> 
> Your code modifies get_signal as:
>  		/*
> -		 * PF_IO_WORKER threads will catch and exit on fatal signals
> +		 * PF_USER_WORKER threads will catch and exit on fatal signals
>  		 * themselves. They have cleanup that must be performed, so
>  		 * we cannot call do_exit() on their behalf.
>  		 */
> -		if (current->flags & PF_IO_WORKER)
> +		if (current->flags & PF_USER_WORKER)
>  			goto out;
>  		/*
>  		 * Death signals, no core dump.
>  		 */
>  		do_group_exit(ksig->info.si_signo);
>  		/* NOTREACHED */
> 
> Which means by modifying get_signal you are logically deleting the
> do_group_exit from get_signal.  As far as that goes that is a perfectly
> reasonable change.  The problem is you wind up calling get_signal again
> after that.  That does not make sense.
> 
> I would suggest doing something like:

I see. I've run some tests today with what you suggested for vhost_worker
plus your signal change, and it works for SIGKILL/STOP/CONT and freeze.
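
To spell out what I tested, the worker loop ends up looking roughly like this
(a simplified sketch; the dead flag and vhost_task_should_stop() are how I
have it in my tree and may change in the RFC):

	static void vhost_worker(struct vhost_task *vtsk)
	{
		bool dead = false;

		for (;;) {
			/* Set by the vhost release function via vhost_task_stop() */
			if (vhost_task_should_stop(vtsk))
				break;

			if (!dead && signal_pending(current)) {
				struct ksignal ksig;

				/*
				 * On SIGKILL this returns true but does not
				 * do_exit() for PF_USER_WORKER threads, so we
				 * keep looping to complete in-flight IO.
				 */
				dead = get_signal(&ksig);
				if (dead)
					clear_thread_flag(TIF_SIGPENDING);
			}

			/* handle queued work and IO completions here */
		}
	}

vhost_worker() returning is what lets vhost_task_fn() go on to do_exit().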

> 
> What is the diff below?  It does not appear to be a revert diff.

It was just the simplest patch needed, along with your signal changes
(and the PF_IO_WORKER -> PF_USER_WORKER signal change), to fix the 2
regressions that were reported. I wanted to give the vhost devs an idea of
what was needed with your signal changes.

Let me do some more testing over the weekend and I'll post an RFC with your
signal change and the minimal changes needed to vhost to handle the 2
regressions that were reported. The vhost developers can then get a better
idea of what needs to be done and decide how they want to proceed.
