* [PATCH v11 1/8] fork: Make IO worker options flag based
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-03 0:14 ` Linus Torvalds
2023-02-02 23:25 ` [PATCH v11 2/8] fork/vm: Move common PF_IO_WORKER behavior to new flag Mike Christie
` (6 subsequent siblings)
7 siblings, 1 reply; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
Cc: Christoph Hellwig
This patchset adds a couple of new options to kernel_clone_args for the
vhost layer, which is going to work like PF_IO_WORKER but will differ
enough that we will need to add several fields to kernel_clone_args. This
patch moves us to a flags-based approach for these types of users.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Suggested-by: Christian Brauner <brauner@kernel.org>
Acked-by: Christian Brauner <brauner@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/sched/task.h | 4 +++-
kernel/fork.c | 4 ++--
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 357e0068497c..a759ce5aa603 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -18,8 +18,11 @@ struct css_set;
/* All the bits taken by the old clone syscall. */
#define CLONE_LEGACY_FLAGS 0xffffffffULL
+#define USER_WORKER_IO BIT(0)
+
struct kernel_clone_args {
u64 flags;
+ u32 worker_flags;
int __user *pidfd;
int __user *child_tid;
int __user *parent_tid;
@@ -31,7 +34,6 @@ struct kernel_clone_args {
/* Number of elements in *set_tid */
size_t set_tid_size;
int cgroup;
- int io_thread;
int kthread;
int idle;
int (*fn)(void *);
diff --git a/kernel/fork.c b/kernel/fork.c
index 9f7fe3541897..b030aefba26c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2100,7 +2100,7 @@ static __latent_entropy struct task_struct *copy_process(
p->flags &= ~PF_KTHREAD;
if (args->kthread)
p->flags |= PF_KTHREAD;
- if (args->io_thread) {
+ if (args->worker_flags & USER_WORKER_IO) {
/*
* Mark us an IO worker, and block any signal that isn't
* fatal or STOP
@@ -2623,7 +2623,7 @@ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node)
.exit_signal = (lower_32_bits(flags) & CSIGNAL),
.fn = fn,
.fn_arg = arg,
- .io_thread = 1,
+ .worker_flags = USER_WORKER_IO,
};
return copy_process(NULL, 0, node, &args);
--
2.25.1
* Re: [PATCH v11 1/8] fork: Make IO worker options flag based
2023-02-02 23:25 ` [PATCH v11 1/8] fork: Make IO worker options flag based Mike Christie
@ 2023-02-03 0:14 ` Linus Torvalds
0 siblings, 0 replies; 42+ messages in thread
From: Linus Torvalds @ 2023-02-03 0:14 UTC (permalink / raw)
To: Mike Christie
Cc: brauner, mst, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, Christoph Hellwig
On Thu, Feb 2, 2023 at 3:25 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> struct kernel_clone_args {
> u64 flags;
> + u32 worker_flags;
> int __user *pidfd;
> int __user *child_tid;
> int __user *parent_tid;
Minor nit: please put this next to "exit_signal".
As it is, you've put a new 32-bit field in between two 64-bit fields
and are generating extra pointless padding.
We have that padding by "exit_signal" already, so let's just use it.
Also, I like moving those flags to a "flags" field, but can we please
make it consistent? We have that "args->kthread" field too, which is
100% analogous to args->io_thread.
So don't make a bit field for io_thread, and then not do the same for kthread.
Finally, why isn't this all just a bitfield - every single case would
seem to prefer something like
if (args->user_worker) ..
instead of
if (args->worker_flags & USER_WORKER)
which would seem to make everything simpler still?
Linus
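For reference, the struct layout Linus is describing would look roughly like
the sketch below. This is illustrative only; the user_worker, no_files, and
ignore_signals bits anticipate later patches in this series:

	struct kernel_clone_args {
		u64 flags;
		int __user *pidfd;
		int __user *child_tid;
		int __user *parent_tid;
		int exit_signal;	/* 32-bit, so the bits below reuse its padding */
		u32 kthread:1;		/* replaces "int kthread" */
		u32 io_thread:1;	/* replaces "int io_thread" */
		u32 user_worker:1;
		u32 no_files:1;
		u32 ignore_signals:1;
		unsigned long stack;
		unsigned long stack_size;
		/* ... remaining members unchanged ... */
	};

Tests then read as plain member accesses, e.g. "if (args->io_thread)".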
* [PATCH v11 2/8] fork/vm: Move common PF_IO_WORKER behavior to new flag
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
2023-02-02 23:25 ` [PATCH v11 1/8] fork: Make IO worker options flag based Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-02 23:25 ` [PATCH v11 3/8] fork: add USER_WORKER flag to not dup/clone files Mike Christie
` (5 subsequent siblings)
7 siblings, 0 replies; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
This adds a new flag, PF_USER_WORKER, that's used for behavior common to
both PF_IO_WORKER and users like vhost, which will use a new helper
instead of create_io_thread because they require different behavior for
operations like signal handling.
The common behavior PF_USER_WORKER covers is the vm reclaim handling.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/sched.h | 2 +-
include/linux/sched/task.h | 3 ++-
kernel/fork.c | 4 ++++
mm/vmscan.c | 4 ++--
4 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 853d08f7562b..2ca9269332c1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1723,7 +1723,7 @@ extern struct pid *cad_pid;
#define PF_MEMALLOC 0x00000800 /* Allocating memory */
#define PF_NPROC_EXCEEDED 0x00001000 /* set_user() noticed that RLIMIT_NPROC was exceeded */
#define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */
-#define PF__HOLE__00004000 0x00004000
+#define PF_USER_WORKER 0x00004000 /* Kernel thread cloned from userspace thread */
#define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */
#define PF__HOLE__00010000 0x00010000
#define PF_KSWAPD 0x00020000 /* I am kswapd */
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index a759ce5aa603..dfc585e0373c 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -18,7 +18,8 @@ struct css_set;
/* All the bits taken by the old clone syscall. */
#define CLONE_LEGACY_FLAGS 0xffffffffULL
-#define USER_WORKER_IO BIT(0)
+#define USER_WORKER BIT(0)
+#define USER_WORKER_IO BIT(1)
struct kernel_clone_args {
u64 flags;
diff --git a/kernel/fork.c b/kernel/fork.c
index b030aefba26c..77d2c527e917 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2100,6 +2100,10 @@ static __latent_entropy struct task_struct *copy_process(
p->flags &= ~PF_KTHREAD;
if (args->kthread)
p->flags |= PF_KTHREAD;
+
+ if (args->worker_flags & USER_WORKER)
+ p->flags |= PF_USER_WORKER;
+
if (args->worker_flags & USER_WORKER_IO) {
/*
* Mark us an IO worker, and block any signal that isn't
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637fcd8f9..54de4adb91cf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1141,12 +1141,12 @@ void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
DEFINE_WAIT(wait);
/*
- * Do not throttle IO workers, kthreads other than kswapd or
+ * Do not throttle user workers, kthreads other than kswapd or
* workqueues. They may be required for reclaim to make
* forward progress (e.g. journalling workqueues or kthreads).
*/
if (!current_is_kswapd() &&
- current->flags & (PF_IO_WORKER|PF_KTHREAD)) {
+ current->flags & (PF_USER_WORKER|PF_KTHREAD)) {
cond_resched();
return;
}
--
2.25.1
* [PATCH v11 3/8] fork: add USER_WORKER flag to not dup/clone files
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
2023-02-02 23:25 ` [PATCH v11 1/8] fork: Make IO worker options flag based Mike Christie
2023-02-02 23:25 ` [PATCH v11 2/8] fork/vm: Move common PF_IO_WORKER behavior to new flag Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-03 0:16 ` Linus Torvalds
2023-02-02 23:25 ` [PATCH v11 4/8] fork: Add USER_WORKER flag to ignore signals Mike Christie
` (4 subsequent siblings)
7 siblings, 1 reply; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
Cc: Christoph Hellwig
Each vhost device gets a thread that is used to perform IO and management
operations. Instead of a thread that is accessing a device, the thread is
part of the device, so when it creates a thread using a helper based on
copy_process we can't dup or clone the parent's files/FDs because it
would do an extra increment on ourselves.
Later, when we do:
Qemu process exits:
do_exit -> exit_files -> put_files_struct -> close_files
we would leak the device's resources because of that extra refcount
on the fd or file_struct.
This patch adds a no_files option so these worker threads can avoid
taking an extra refcount on themselves.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Christian Brauner <brauner@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/sched/task.h | 1 +
kernel/fork.c | 11 +++++++++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index dfc585e0373c..18e614591c24 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -20,6 +20,7 @@ struct css_set;
#define USER_WORKER BIT(0)
#define USER_WORKER_IO BIT(1)
+#define USER_WORKER_NO_FILES BIT(2)
struct kernel_clone_args {
u64 flags;
diff --git a/kernel/fork.c b/kernel/fork.c
index 77d2c527e917..bb98b48bc35c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1624,7 +1624,8 @@ static int copy_fs(unsigned long clone_flags, struct task_struct *tsk)
return 0;
}
-static int copy_files(unsigned long clone_flags, struct task_struct *tsk)
+static int copy_files(unsigned long clone_flags, struct task_struct *tsk,
+ int no_files)
{
struct files_struct *oldf, *newf;
int error = 0;
@@ -1636,6 +1637,11 @@ static int copy_files(unsigned long clone_flags, struct task_struct *tsk)
if (!oldf)
goto out;
+ if (no_files) {
+ tsk->files = NULL;
+ goto out;
+ }
+
if (clone_flags & CLONE_FILES) {
atomic_inc(&oldf->count);
goto out;
@@ -2255,7 +2261,8 @@ static __latent_entropy struct task_struct *copy_process(
retval = copy_semundo(clone_flags, p);
if (retval)
goto bad_fork_cleanup_security;
- retval = copy_files(clone_flags, p);
+ retval = copy_files(clone_flags, p,
+ args->worker_flags & USER_WORKER_NO_FILES);
if (retval)
goto bad_fork_cleanup_semundo;
retval = copy_fs(clone_flags, p);
--
2.25.1
* Re: [PATCH v11 3/8] fork: add USER_WORKER flag to not dup/clone files
2023-02-02 23:25 ` [PATCH v11 3/8] fork: add USER_WORKER flag to not dup/clone files Mike Christie
@ 2023-02-03 0:16 ` Linus Torvalds
0 siblings, 0 replies; 42+ messages in thread
From: Linus Torvalds @ 2023-02-03 0:16 UTC (permalink / raw)
To: Mike Christie
Cc: brauner, mst, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, Christoph Hellwig
On Thu, Feb 2, 2023 at 3:25 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> - retval = copy_files(clone_flags, p);
> + retval = copy_files(clone_flags, p,
> + args->worker_flags & USER_WORKER_NO_FILES);
Just to hit the previous email comment home, adding just another
bitfield case would have made this patch simpler, and this would just
be
retval = copy_files(clone_flags, p, args->no_files);
which seems more legible too.
Linus
* [PATCH v11 4/8] fork: Add USER_WORKER flag to ignore signals
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
` (2 preceding siblings ...)
2023-02-02 23:25 ` [PATCH v11 3/8] fork: add USER_WORKER flag to not dup/clone files Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-03 0:19 ` Linus Torvalds
2023-02-02 23:25 ` [PATCH v11 5/8] fork: allow kernel code to call copy_process Mike Christie
` (3 subsequent siblings)
7 siblings, 1 reply; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
Cc: Christoph Hellwig
From: Christian Brauner <brauner@kernel.org>
Since:
commit 10ab825bdef8 ("change kernel threads to ignore signals instead of
blocking them")
kthreads have been ignoring signals by default, and the vhost layer has
never had a need to change that. This patch adds an option flag,
USER_WORKER_SIG_IGN, handled in copy_process() after copy_sighand()
and copy_signals(), so the vhost_tasks added in the next patches can
continue to ignore signals.
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/sched/task.h | 1 +
kernel/fork.c | 3 +++
2 files changed, 4 insertions(+)
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 18e614591c24..ce6240a006cf 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -21,6 +21,7 @@ struct css_set;
#define USER_WORKER BIT(0)
#define USER_WORKER_IO BIT(1)
#define USER_WORKER_NO_FILES BIT(2)
+#define USER_WORKER_SIG_IGN BIT(3)
struct kernel_clone_args {
u64 flags;
diff --git a/kernel/fork.c b/kernel/fork.c
index bb98b48bc35c..55c77de45271 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2287,6 +2287,9 @@ static __latent_entropy struct task_struct *copy_process(
if (retval)
goto bad_fork_cleanup_io;
+ if (args->worker_flags & USER_WORKER_SIG_IGN)
+ ignore_signals(p);
+
stackleak_task_init(p);
if (pid != &init_struct_pid) {
--
2.25.1
* Re: [PATCH v11 4/8] fork: Add USER_WORKER flag to ignore signals
2023-02-02 23:25 ` [PATCH v11 4/8] fork: Add USER_WORKER flag to ignore signals Mike Christie
@ 2023-02-03 0:19 ` Linus Torvalds
2023-02-05 16:06 ` Mike Christie
0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2023-02-03 0:19 UTC (permalink / raw)
To: Mike Christie
Cc: brauner, mst, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, Christoph Hellwig
On Thu, Feb 2, 2023 at 3:25 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> + if (args->worker_flags & USER_WORKER_SIG_IGN)
> + ignore_signals(p);
Same comment as for the other case.
There are real reasons to avoid bitfields:
- you can't pass addresses to them around
- it's easier to read or assign multiple fields in one go
- they are horrible for ABI issues due to the exact bit ordering and
padding being very subtle
but none of those issues are relevant here, where it's a kernel-internal ABI.
All these use-cases seem to actually be testing one bit at a time, and
the "assignments" are structure initializers for which named bitfields
are actually perfect and just make the initializer more legible.
Linus
* Re: [PATCH v11 4/8] fork: Add USER_WORKER flag to ignore signals
2023-02-03 0:19 ` Linus Torvalds
@ 2023-02-05 16:06 ` Mike Christie
0 siblings, 0 replies; 42+ messages in thread
From: Mike Christie @ 2023-02-05 16:06 UTC (permalink / raw)
To: Linus Torvalds
Cc: brauner, mst, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, Christoph Hellwig
On 2/2/23 6:19 PM, Linus Torvalds wrote:
> On Thu, Feb 2, 2023 at 3:25 PM Mike Christie
> <michael.christie@oracle.com> wrote:
>>
>> + if (args->worker_flags & USER_WORKER_SIG_IGN)
>> + ignore_signals(p);
>
> Same comment as for the other case.
>
> There are real reasons to avoid bitfields:
>
> - you can't pass addresses to them around
>
> - it's easier to read or assign multiple fields in one go
>
> - they are horrible for ABI issues due to the exact bit ordering and
> padding being very subtle
>
> but none of those issues are relevant here, where it's a kernel-internal ABI.
>
> All these use-cases seem to actually be testing one bit at a time, and
> the "assignments" are structure initializers for which named bitfields
> are actually perfect and just make the initializer more legible.
>
Thanks for the comments. I see what you mean and have fixed those instances and
updated kthread as well.
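For illustration, with named bitfields the call sites discussed above read
roughly as follows. This is a sketch of the agreed direction, not a specific
committed revision:

	/* Initializers name each bit directly ... */
	struct kernel_clone_args args = {
		.flags		= CLONE_FS | CLONE_UNTRACED | CLONE_VM,
		.fn		= vhost_task_fn,
		.fn_arg		= vtsk,
		.user_worker	= 1,
		.no_files	= 1,
		.ignore_signals	= 1,
	};

	/* ... and in copy_process(), where args is a pointer, the tests
	 * become plain member reads. */
	if (args->ignore_signals)
		ignore_signals(p);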
* [PATCH v11 5/8] fork: allow kernel code to call copy_process
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
` (3 preceding siblings ...)
2023-02-02 23:25 ` [PATCH v11 4/8] fork: Add USER_WORKER flag to ignore signals Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-02 23:25 ` [PATCH v11 6/8] vhost_task: Allow vhost layer to use copy_process Mike Christie
` (2 subsequent siblings)
7 siblings, 0 replies; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
The next patch adds helpers like create_io_thread, but for use by the
vhost layer. There are several functions, so they are in their own file
instead of cluttering up fork.c. This patch allows that new file to
call copy_process.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/sched/task.h | 2 ++
kernel/fork.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index ce6240a006cf..b0e43a1fd21d 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -94,6 +94,8 @@ extern void exit_files(struct task_struct *);
extern void exit_itimers(struct task_struct *);
extern pid_t kernel_clone(struct kernel_clone_args *kargs);
+struct task_struct *copy_process(struct pid *pid, int trace, int node,
+ struct kernel_clone_args *args);
struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node);
struct task_struct *fork_idle(int);
extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
diff --git a/kernel/fork.c b/kernel/fork.c
index 55c77de45271..93e545b08205 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2013,7 +2013,7 @@ static void rv_task_fork(struct task_struct *p)
* parts of the process environment (as per the clone
* flags). The actual kick-off is left to the caller.
*/
-static __latent_entropy struct task_struct *copy_process(
+__latent_entropy struct task_struct *copy_process(
struct pid *pid,
int trace,
int node,
--
2.25.1
* [PATCH v11 6/8] vhost_task: Allow vhost layer to use copy_process
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
` (4 preceding siblings ...)
2023-02-02 23:25 ` [PATCH v11 5/8] fork: allow kernel code to call copy_process Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-03 0:43 ` Linus Torvalds
2023-02-02 23:25 ` [PATCH v11 7/8] vhost: move worker thread fields to new struct Mike Christie
2023-02-02 23:25 ` [PATCH v11 8/8] vhost: use vhost_tasks for worker threads Mike Christie
7 siblings, 1 reply; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
Qemu will create vhost devices in the kernel which perform network, SCSI,
etc. IO and management operations from worker threads created by the
kthread API. Because the kthread API does a copy_process on the kthreadd
thread, the vhost layer has to use kthread_use_mm to access the Qemu
thread's memory and cgroup_attach_task_all to add itself to the Qemu
thread's cgroups, and it bypasses the RLIMIT_NPROC limit, which can result
in VMs creating more threads than the admin expected.
This patch adds a new struct vhost_task which can be used instead of
kthreads. It allows the vhost layer to use copy_process and inherit
the userspace process's mm and cgroups; the task is accounted for
under the userspace process's nproc count, can be seen in its process
tree, and other features like namespaces work and are inherited by
default.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
MAINTAINERS | 2 +
drivers/vhost/Kconfig | 5 ++
include/linux/sched/vhost_task.h | 23 ++++++
kernel/Makefile | 1 +
kernel/vhost_task.c | 122 +++++++++++++++++++++++++++++++
5 files changed, 153 insertions(+)
create mode 100644 include/linux/sched/vhost_task.h
create mode 100644 kernel/vhost_task.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 8a5c25c20d00..5f7a3b3af7aa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22125,7 +22125,9 @@ L: virtualization@lists.linux-foundation.org
L: netdev@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
+F: kernel/vhost_task.c
F: drivers/vhost/
+F: include/linux/sched/vhost_task.h
F: include/linux/vhost_iotlb.h
F: include/uapi/linux/vhost.h
diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 587fbae06182..b455d9ab6f3d 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -13,9 +13,14 @@ config VHOST_RING
This option is selected by any driver which needs to access
the host side of a virtio ring.
+config VHOST_TASK
+ bool
+ default n
+
config VHOST
tristate
select VHOST_IOTLB
+ select VHOST_TASK
help
This option is selected by any driver which needs to access
the core of vhost.
diff --git a/include/linux/sched/vhost_task.h b/include/linux/sched/vhost_task.h
new file mode 100644
index 000000000000..50d02a25d37b
--- /dev/null
+++ b/include/linux/sched/vhost_task.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_VHOST_TASK_H
+#define _LINUX_VHOST_TASK_H
+
+#include <linux/completion.h>
+
+struct task_struct;
+
+struct vhost_task {
+ int (*fn)(void *data);
+ void *data;
+ struct completion exited;
+ unsigned long flags;
+ struct task_struct *task;
+};
+
+struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg, int node);
+__printf(2, 3)
+void vhost_task_start(struct vhost_task *vtsk, const char namefmt[], ...);
+void vhost_task_stop(struct vhost_task *vtsk);
+bool vhost_task_should_stop(struct vhost_task *vtsk);
+
+#endif
diff --git a/kernel/Makefile b/kernel/Makefile
index 10ef068f598d..6fc72b3afbde 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -15,6 +15,7 @@ obj-y = fork.o exec_domain.o panic.o \
obj-$(CONFIG_USERMODE_DRIVER) += usermode_driver.o
obj-$(CONFIG_MODULES) += kmod.o
obj-$(CONFIG_MULTIUSER) += groups.o
+obj-$(CONFIG_VHOST_TASK) += vhost_task.o
ifdef CONFIG_FUNCTION_TRACER
# Do not trace internal ftrace files
diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
new file mode 100644
index 000000000000..517dd166bb2b
--- /dev/null
+++ b/kernel/vhost_task.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 Oracle Corporation
+ */
+#include <linux/slab.h>
+#include <linux/completion.h>
+#include <linux/sched/task.h>
+#include <linux/sched/vhost_task.h>
+#include <linux/sched/signal.h>
+
+enum vhost_task_flags {
+ VHOST_TASK_FLAGS_STOP,
+};
+
+static int vhost_task_fn(void *data)
+{
+ struct vhost_task *vtsk = data;
+ int ret;
+
+ ret = vtsk->fn(vtsk->data);
+ complete(&vtsk->exited);
+ do_exit(ret);
+}
+
+/**
+ * vhost_task_stop - stop a vhost_task
+ * @vtsk: vhost_task to stop
+ *
+ * Callers must call vhost_task_should_stop and return from their worker
+ * function when it returns true.
+ */
+void vhost_task_stop(struct vhost_task *vtsk)
+{
+ pid_t pid = vtsk->task->pid;
+
+ set_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags);
+ wake_up_process(vtsk->task);
+ /*
+ * Make sure vhost_task_fn is no longer accessing the vhost_task before
+ * freeing it below. If userspace crashed or exited without closing,
+ * then the vhost_task->task could already be marked dead so
+ * kernel_wait will return early.
+ */
+ wait_for_completion(&vtsk->exited);
+ /*
+ * If we are just closing/removing a device and the parent process is
+ * not exiting then reap the task.
+ */
+ kernel_wait4(pid, NULL, __WCLONE, NULL);
+ kfree(vtsk);
+}
+EXPORT_SYMBOL_GPL(vhost_task_stop);
+
+/**
+ * vhost_task_should_stop - should the vhost task return from the work function
+ */
+bool vhost_task_should_stop(struct vhost_task *vtsk)
+{
+ return test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags);
+}
+EXPORT_SYMBOL_GPL(vhost_task_should_stop);
+
+/**
+ * vhost_task_create - create a copy of a process to be used by the kernel
+ * @fn: function to run in the new task
+ * @arg: data to be passed to fn
+ * @node: numa node to allocate task from
+ *
+ * This returns a specialized task for use by the vhost layer or NULL on
+ * failure. The returned task is inactive, and the caller must fire it up
+ * through vhost_task_start().
+ */
+struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg, int node)
+{
+ struct kernel_clone_args args = {
+ .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM,
+ .exit_signal = 0,
+ .worker_flags = USER_WORKER | USER_WORKER_NO_FILES |
+ USER_WORKER_SIG_IGN,
+ .fn = vhost_task_fn,
+ };
+ struct vhost_task *vtsk;
+ struct task_struct *tsk;
+
+ vtsk = kzalloc(sizeof(*vtsk), GFP_KERNEL);
+ if (!vtsk)
+ return NULL;
+ init_completion(&vtsk->exited);
+ vtsk->data = arg;
+ vtsk->fn = fn;
+
+ args.fn_arg = vtsk;
+
+ tsk = copy_process(NULL, 0, node, &args);
+ if (IS_ERR(tsk)) {
+ kfree(vtsk);
+ return NULL;
+ }
+
+ vtsk->task = tsk;
+ return vtsk;
+}
+EXPORT_SYMBOL_GPL(vhost_task_create);
+
+/**
+ * vhost_task_start - start a vhost_task created with vhost_task_create
+ * @vtsk: vhost_task to wake up
+ * @namefmt: printf-style format string for the thread name
+ */
+void vhost_task_start(struct vhost_task *vtsk, const char namefmt[], ...)
+{
+ char name[TASK_COMM_LEN];
+ va_list args;
+
+ va_start(args, namefmt);
+ vsnprintf(name, sizeof(name), namefmt, args);
+ set_task_comm(vtsk->task, name);
+ va_end(args);
+
+ wake_up_new_task(vtsk->task);
+}
+EXPORT_SYMBOL_GPL(vhost_task_start);
--
2.25.1
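To show how the API above is meant to be consumed, here is a minimal usage
sketch. The my_ctx/my_worker names are hypothetical; the real user is
drivers/vhost/vhost.c in the last patch of this series:

	struct my_ctx {
		struct vhost_task *vtsk;
	};

	static int my_worker(void *data)
	{
		struct my_ctx *ctx = data;

		for (;;) {
			set_current_state(TASK_INTERRUPTIBLE);
			if (vhost_task_should_stop(ctx->vtsk)) {
				__set_current_state(TASK_RUNNING);
				break;
			}
			schedule();	/* a real worker consumes its work list here */
		}
		return 0;
	}

	/* Called from the process that owns the device (e.g. a qemu thread). */
	static int my_start(struct my_ctx *ctx)
	{
		struct vhost_task *vtsk;

		vtsk = vhost_task_create(my_worker, ctx, NUMA_NO_NODE);
		if (!vtsk)
			return -ENOMEM;
		ctx->vtsk = vtsk;
		vhost_task_start(vtsk, "my-worker-%d", current->pid);
		return 0;
	}

	/* Teardown: vhost_task_stop() waits for the worker and reaps the task. */
	static void my_stop(struct my_ctx *ctx)
	{
		vhost_task_stop(ctx->vtsk);
	}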
* Re: [PATCH v11 6/8] vhost_task: Allow vhost layer to use copy_process
2023-02-02 23:25 ` [PATCH v11 6/8] vhost_task: Allow vhost layer to use copy_process Mike Christie
@ 2023-02-03 0:43 ` Linus Torvalds
0 siblings, 0 replies; 42+ messages in thread
From: Linus Torvalds @ 2023-02-03 0:43 UTC (permalink / raw)
To: Mike Christie
Cc: brauner, mst, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha
On Thu, Feb 2, 2023 at 3:25 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> +/**
> + * vhost_task_start - start a vhost_task created with vhost_task_create
> + * @vtsk: vhost_task to wake up
> + * @namefmt: printf-style format string for the thread name
> + */
> +void vhost_task_start(struct vhost_task *vtsk, const char namefmt[], ...)
> +{
> + char name[TASK_COMM_LEN];
> + va_list args;
> +
> + va_start(args, namefmt);
> + vsnprintf(name, sizeof(name), namefmt, args);
> + set_task_comm(vtsk->task, name);
> + va_end(args);
> +
> + wake_up_new_task(vtsk->task);
> +}
Ok, I like this more than what we do for the IO workers - they set
their own names themselves once they start running, rather than have
the creator do it like this.
At the same time, my reaction to this was "why do we need to go
through that temporary 'name[]' buffer at all?"
And I think this patch is very much correct to do so, because
"copy_thread()" has already exposed the new thread to the rest of the
world, even though it hasn't actually started running yet.
So I think this is all doing the right thing, and I like how it does
it better than what io_uring does, BUT...
It does make me think that maybe we should make that task name
handling part of copy_process(), and simply create the task name
before we need this careful set_task_comm() with a temporary buffer.
Because if we just did it in copy_process() before the new task has
been exposed anywhere,. we could just do it as
	if (args->name)
		vsnprintf(tsk->comm, TASK_COMM_LEN, "%s-%d", args->name, tsk->pid);
or something like that.
Not a big deal, it was just me reacting to this patch with "do we
really need set_task_comm() when we're creating the task?"
Linus
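A sketch of how that could look: with a kernel_clone_args "name" field
(hypothetical at this point in the series), vhost_task_create() would pass
the formatted name down, copy_process() would set tsk->comm before the task
is visible anywhere, and the start helper above would shrink to roughly:

	void vhost_task_start(struct vhost_task *vtsk)
	{
		wake_up_new_task(vtsk->task);
	}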
* [PATCH v11 7/8] vhost: move worker thread fields to new struct
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
` (5 preceding siblings ...)
2023-02-02 23:25 ` [PATCH v11 6/8] vhost_task: Allow vhost layer to use copy_process Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
2023-02-02 23:25 ` [PATCH v11 8/8] vhost: use vhost_tasks for worker threads Mike Christie
7 siblings, 0 replies; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
Cc: Christoph Hellwig
This is just a prep patch. It moves the worker related fields to a new
vhost_worker struct and moves the code around to create some helpers that
will be used in the next patch.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
drivers/vhost/vhost.c | 98 ++++++++++++++++++++++++++++---------------
drivers/vhost/vhost.h | 11 +++--
2 files changed, 72 insertions(+), 37 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index cbe72bfd2f1f..74378d241f8d 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -255,8 +255,8 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
* sure it was not in the list.
* test_and_set_bit() implies a memory barrier.
*/
- llist_add(&work->node, &dev->work_list);
- wake_up_process(dev->worker);
+ llist_add(&work->node, &dev->worker->work_list);
+ wake_up_process(dev->worker->task);
}
}
EXPORT_SYMBOL_GPL(vhost_work_queue);
@@ -264,7 +264,7 @@ EXPORT_SYMBOL_GPL(vhost_work_queue);
/* A lockless hint for busy polling code to exit the loop */
bool vhost_has_work(struct vhost_dev *dev)
{
- return !llist_empty(&dev->work_list);
+ return dev->worker && !llist_empty(&dev->worker->work_list);
}
EXPORT_SYMBOL_GPL(vhost_has_work);
@@ -335,7 +335,8 @@ static void vhost_vq_reset(struct vhost_dev *dev,
static int vhost_worker(void *data)
{
- struct vhost_dev *dev = data;
+ struct vhost_worker *worker = data;
+ struct vhost_dev *dev = worker->dev;
struct vhost_work *work, *work_next;
struct llist_node *node;
@@ -350,7 +351,7 @@ static int vhost_worker(void *data)
break;
}
- node = llist_del_all(&dev->work_list);
+ node = llist_del_all(&worker->work_list);
if (!node)
schedule();
@@ -360,7 +361,7 @@ static int vhost_worker(void *data)
llist_for_each_entry_safe(work, work_next, node, node) {
clear_bit(VHOST_WORK_QUEUED, &work->flags);
__set_current_state(TASK_RUNNING);
- kcov_remote_start_common(dev->kcov_handle);
+ kcov_remote_start_common(worker->kcov_handle);
work->fn(work);
kcov_remote_stop();
if (need_resched())
@@ -479,7 +480,6 @@ void vhost_dev_init(struct vhost_dev *dev,
dev->byte_weight = byte_weight;
dev->use_worker = use_worker;
dev->msg_handler = msg_handler;
- init_llist_head(&dev->work_list);
init_waitqueue_head(&dev->wait);
INIT_LIST_HEAD(&dev->read_list);
INIT_LIST_HEAD(&dev->pending_list);
@@ -571,10 +571,60 @@ static void vhost_detach_mm(struct vhost_dev *dev)
dev->mm = NULL;
}
+static void vhost_worker_free(struct vhost_dev *dev)
+{
+ struct vhost_worker *worker = dev->worker;
+
+ if (!worker)
+ return;
+
+ dev->worker = NULL;
+ WARN_ON(!llist_empty(&worker->work_list));
+ kthread_stop(worker->task);
+ kfree(worker);
+}
+
+static int vhost_worker_create(struct vhost_dev *dev)
+{
+ struct vhost_worker *worker;
+ struct task_struct *task;
+ int ret;
+
+ worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
+ if (!worker)
+ return -ENOMEM;
+
+ dev->worker = worker;
+ worker->dev = dev;
+ worker->kcov_handle = kcov_common_handle();
+ init_llist_head(&worker->work_list);
+
+ task = kthread_create(vhost_worker, worker, "vhost-%d", current->pid);
+ if (IS_ERR(task)) {
+ ret = PTR_ERR(task);
+ goto free_worker;
+ }
+
+ worker->task = task;
+ wake_up_process(task); /* avoid contributing to loadavg */
+
+ ret = vhost_attach_cgroups(dev);
+ if (ret)
+ goto stop_worker;
+
+ return 0;
+
+stop_worker:
+ kthread_stop(worker->task);
+free_worker:
+ kfree(worker);
+ dev->worker = NULL;
+ return ret;
+}
+
/* Caller should have device mutex */
long vhost_dev_set_owner(struct vhost_dev *dev)
{
- struct task_struct *worker;
int err;
/* Is there an owner already? */
@@ -585,36 +635,21 @@ long vhost_dev_set_owner(struct vhost_dev *dev)
vhost_attach_mm(dev);
- dev->kcov_handle = kcov_common_handle();
if (dev->use_worker) {
- worker = kthread_create(vhost_worker, dev,
- "vhost-%d", current->pid);
- if (IS_ERR(worker)) {
- err = PTR_ERR(worker);
- goto err_worker;
- }
-
- dev->worker = worker;
- wake_up_process(worker); /* avoid contributing to loadavg */
-
- err = vhost_attach_cgroups(dev);
+ err = vhost_worker_create(dev);
if (err)
- goto err_cgroup;
+ goto err_worker;
}
err = vhost_dev_alloc_iovecs(dev);
if (err)
- goto err_cgroup;
+ goto err_iovecs;
return 0;
-err_cgroup:
- if (dev->worker) {
- kthread_stop(dev->worker);
- dev->worker = NULL;
- }
+err_iovecs:
+ vhost_worker_free(dev);
err_worker:
vhost_detach_mm(dev);
- dev->kcov_handle = 0;
err_mm:
return err;
}
@@ -704,12 +739,7 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
dev->iotlb = NULL;
vhost_clear_msg(dev);
wake_up_interruptible_poll(&dev->wait, EPOLLIN | EPOLLRDNORM);
- WARN_ON(!llist_empty(&dev->work_list));
- if (dev->worker) {
- kthread_stop(dev->worker);
- dev->worker = NULL;
- dev->kcov_handle = 0;
- }
+ vhost_worker_free(dev);
vhost_detach_mm(dev);
}
EXPORT_SYMBOL_GPL(vhost_dev_cleanup);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index d9109107af08..2f6beab93784 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -25,6 +25,13 @@ struct vhost_work {
unsigned long flags;
};
+struct vhost_worker {
+ struct task_struct *task;
+ struct llist_head work_list;
+ struct vhost_dev *dev;
+ u64 kcov_handle;
+};
+
/* Poll a file (eventfd or socket) */
/* Note: there's nothing vhost specific about this structure. */
struct vhost_poll {
@@ -147,8 +154,7 @@ struct vhost_dev {
struct vhost_virtqueue **vqs;
int nvqs;
struct eventfd_ctx *log_ctx;
- struct llist_head work_list;
- struct task_struct *worker;
+ struct vhost_worker *worker;
struct vhost_iotlb *umem;
struct vhost_iotlb *iotlb;
spinlock_t iotlb_lock;
@@ -158,7 +164,6 @@ struct vhost_dev {
int iov_limit;
int weight;
int byte_weight;
- u64 kcov_handle;
bool use_worker;
int (*msg_handler)(struct vhost_dev *dev, u32 asid,
struct vhost_iotlb_msg *msg);
--
2.25.1
* [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-02-02 23:25 [PATCH v11 0/8] Use copy_process in vhost layer Mike Christie
` (6 preceding siblings ...)
2023-02-02 23:25 ` [PATCH v11 7/8] vhost: move worker thread fields to new struct Mike Christie
@ 2023-02-02 23:25 ` Mike Christie
[not found] ` <aba6cca4-e66c-768f-375c-b38c8ba5e8a8@6wind.com>
2023-07-20 13:06 ` Michael S. Tsirkin
7 siblings, 2 replies; 42+ messages in thread
From: Mike Christie @ 2023-02-02 23:25 UTC (permalink / raw)
To: hch, stefanha, jasowang, mst, sgarzare, virtualization, brauner,
ebiederm, torvalds, konrad.wilk, linux-kernel
For vhost workers we use the kthread API, which inherits its values from
and checks against the kthreadd thread. This results in the wrong RLIMITs
being checked, so while tools like libvirt try to control the number of
threads based on the nproc rlimit setting, we can end up creating more
threads than the user wanted.
This patch has us use the vhost_task helpers, which will inherit their
values/checks from the thread that owns the device, similar to if we did
a clone in userspace. The vhost threads will now be counted in the nproc
rlimits. And we get features like cgroups and mm sharing automatically,
so we can remove those calls.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
drivers/vhost/vhost.c | 58 ++++++++-----------------------------------
drivers/vhost/vhost.h | 4 +--
2 files changed, 13 insertions(+), 49 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 74378d241f8d..d3c7c37b69a7 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -22,11 +22,11 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/kthread.h>
-#include <linux/cgroup.h>
#include <linux/module.h>
#include <linux/sort.h>
#include <linux/sched/mm.h>
#include <linux/sched/signal.h>
+#include <linux/sched/vhost_task.h>
#include <linux/interval_tree_generic.h>
#include <linux/nospec.h>
#include <linux/kcov.h>
@@ -256,7 +256,7 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
* test_and_set_bit() implies a memory barrier.
*/
llist_add(&work->node, &dev->worker->work_list);
- wake_up_process(dev->worker->task);
+ wake_up_process(dev->worker->vtsk->task);
}
}
EXPORT_SYMBOL_GPL(vhost_work_queue);
@@ -336,17 +336,14 @@ static void vhost_vq_reset(struct vhost_dev *dev,
static int vhost_worker(void *data)
{
struct vhost_worker *worker = data;
- struct vhost_dev *dev = worker->dev;
struct vhost_work *work, *work_next;
struct llist_node *node;
- kthread_use_mm(dev->mm);
-
for (;;) {
/* mb paired w/ kthread_stop */
set_current_state(TASK_INTERRUPTIBLE);
- if (kthread_should_stop()) {
+ if (vhost_task_should_stop(worker->vtsk)) {
__set_current_state(TASK_RUNNING);
break;
}
@@ -368,7 +365,7 @@ static int vhost_worker(void *data)
schedule();
}
}
- kthread_unuse_mm(dev->mm);
+
return 0;
}
@@ -509,31 +506,6 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
}
EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
-struct vhost_attach_cgroups_struct {
- struct vhost_work work;
- struct task_struct *owner;
- int ret;
-};
-
-static void vhost_attach_cgroups_work(struct vhost_work *work)
-{
- struct vhost_attach_cgroups_struct *s;
-
- s = container_of(work, struct vhost_attach_cgroups_struct, work);
- s->ret = cgroup_attach_task_all(s->owner, current);
-}
-
-static int vhost_attach_cgroups(struct vhost_dev *dev)
-{
- struct vhost_attach_cgroups_struct attach;
-
- attach.owner = current;
- vhost_work_init(&attach.work, vhost_attach_cgroups_work);
- vhost_work_queue(dev, &attach.work);
- vhost_dev_flush(dev);
- return attach.ret;
-}
-
/* Caller should have device mutex */
bool vhost_dev_has_owner(struct vhost_dev *dev)
{
@@ -580,14 +552,14 @@ static void vhost_worker_free(struct vhost_dev *dev)
dev->worker = NULL;
WARN_ON(!llist_empty(&worker->work_list));
- kthread_stop(worker->task);
+ vhost_task_stop(worker->vtsk);
kfree(worker);
}
static int vhost_worker_create(struct vhost_dev *dev)
{
struct vhost_worker *worker;
- struct task_struct *task;
+ struct vhost_task *vtsk;
int ret;
worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
@@ -595,27 +567,19 @@ static int vhost_worker_create(struct vhost_dev *dev)
return -ENOMEM;
dev->worker = worker;
- worker->dev = dev;
worker->kcov_handle = kcov_common_handle();
init_llist_head(&worker->work_list);
- task = kthread_create(vhost_worker, worker, "vhost-%d", current->pid);
- if (IS_ERR(task)) {
- ret = PTR_ERR(task);
+ vtsk = vhost_task_create(vhost_worker, worker, NUMA_NO_NODE);
+ if (!vtsk) {
+ ret = -ENOMEM;
goto free_worker;
}
- worker->task = task;
- wake_up_process(task); /* avoid contributing to loadavg */
-
- ret = vhost_attach_cgroups(dev);
- if (ret)
- goto stop_worker;
-
+ worker->vtsk = vtsk;
+ vhost_task_start(vtsk, "vhost-%d", current->pid);
return 0;
-stop_worker:
- kthread_stop(worker->task);
free_worker:
kfree(worker);
dev->worker = NULL;
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 2f6beab93784..3af59c65025e 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -16,6 +16,7 @@
#include <linux/irqbypass.h>
struct vhost_work;
+struct vhost_task;
typedef void (*vhost_work_fn_t)(struct vhost_work *work);
#define VHOST_WORK_QUEUED 1
@@ -26,9 +27,8 @@ struct vhost_work {
};
struct vhost_worker {
- struct task_struct *task;
+ struct vhost_task *vtsk;
struct llist_head work_list;
- struct vhost_dev *dev;
u64 kcov_handle;
};
--
2.25.1
[parent not found: <aba6cca4-e66c-768f-375c-b38c8ba5e8a8@6wind.com>]
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
[not found] ` <aba6cca4-e66c-768f-375c-b38c8ba5e8a8@6wind.com>
@ 2023-05-05 18:22 ` Linus Torvalds
2023-05-05 22:37 ` Mike Christie
0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2023-05-05 18:22 UTC (permalink / raw)
To: nicolas.dichtel, Christian Brauner
Cc: mst, konrad.wilk, linux-kernel, virtualization, hch, ebiederm, stefanha
On Fri, May 5, 2023 at 6:40 AM Nicolas Dichtel
<nicolas.dichtel@6wind.com> wrote:
>
> Is this an intended behavior?
> This breaks some of our scripts.
It doesn't just break your scripts (which counts as a regression), I
think it's really wrong.
The worker threads should show up as threads of the thing that started
them, not as processes.
So they should show up in 'ps' only when one of the "show threads" flags is set.
But I suspect the fix is trivial: the virtio code should likely use
CLONE_THREAD for the copy_process() it does.
It should look more like "create_io_thread()" than "copy_process()", I think.
For example, do virtio worker threads really want their own signals
and files? That sounds wrong. create_io_thread() uses all of
CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_IO
to share much more of the context with the process it is actually run within.
Christian? Mike?
Linus
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-05-05 18:22 ` Linus Torvalds
@ 2023-05-05 22:37 ` Mike Christie
2023-05-06 1:53 ` Linus Torvalds
2023-05-13 12:39 ` Thorsten Leemhuis
0 siblings, 2 replies; 42+ messages in thread
From: Mike Christie @ 2023-05-05 22:37 UTC (permalink / raw)
To: Linus Torvalds, nicolas.dichtel, Christian Brauner
Cc: mst, konrad.wilk, linux-kernel, virtualization, hch, ebiederm, stefanha
On 5/5/23 1:22 PM, Linus Torvalds wrote:
> On Fri, May 5, 2023 at 6:40 AM Nicolas Dichtel
> <nicolas.dichtel@6wind.com> wrote:
>>
>> Is this an intended behavior?
>> This breaks some of our scripts.
>
> It doesn't just break your scripts (which counts as a regression), I
> think it's really wrong.
>
> The worker threads should show up as threads of the thing that started
> them, not as processes.
>
> So they should show up in 'ps' only when one of the "show threads" flag is set.
>
> But I suspect the fix is trivial: the virtio code should likely use
> CLONE_THREAD for the copy_process() it does.
>
> It should look more like "create_io_thread()" than "copy_process()", I think.
>
> For example, do virtio worker threads really want their own signals
> and files? That sounds wrong. create_io_thread() uses all of
>
> CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_IO
>
> to share much more of the context with the process it is actually run within.
>
For the vhost tasks and the CLONE flags:
1. I didn't use CLONE_FILES in the vhost task patches because you are right
and we didn't need our own. We needed it to work like kthreads where there
are no files, so I set the kernel_clone_args.no_files bit to have copy_files
not do a dup or clone (task->files is NULL).
2. vhost tasks didn't use CLONE_SIGHAND, because userspace apps like qemu use
signals for management operations. But the vhost thread's worker functions
assume signals are ignored like they were with kthreads, so if they were doing
IO and got a signal like a SIGHUP they might return early and fail from whatever
network/block function they were calling. And currently the parent, like qemu,
handles something like a SIGSTOP by shutting everything down by calling into
the vhost interface to remove the device.
So, similar to files, I used the kernel_clone_args.ignore_signals bit so
copy_process gives the vhost thread its own signal handler that just ignores
signals.
3. I didn't use CLONE_THREAD because before my patches you could do
"ps -u root" and see all the vhost threads. If we use CLONE_THREAD, then we
can only see them when we do something like "ps -T -p $parent" like you mentioned
above. I guess I messed up and did the reverse and thought it would be a
regression if "ps -u root" no longer showed the vhost threads.
If it's ok to change the behavior of "ps -u root", then we can do this patch:
(Nicolas, I confirmed it fixes the 'ps a' case, but couldn't replicate the 'ps'
case. If you could test the ps only case or give me info on what /usr/bin/example
was doing I can replicate and test here):
diff --git a/kernel/fork.c b/kernel/fork.c
index ed4e01daccaa..eb9ffc58e211 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2269,8 +2269,14 @@ __latent_entropy struct task_struct *copy_process(
/*
* Thread groups must share signals as well, and detached threads
* can only be started up within the thread group.
+ *
+ * A userworker's parent thread will normally have a signal handler
+ * that performs management operations, but the worker will not
+ * because the parent will handle the signal then use a worker
+ * specific interface to manage the thread and related resources.
*/
- if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND))
+ if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND) &&
+ !args->user_worker && !args->ignore_signals)
return ERR_PTR(-EINVAL);
/*
diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
index b7cbd66f889e..3700c21ea39d 100644
--- a/kernel/vhost_task.c
+++ b/kernel/vhost_task.c
@@ -75,7 +78,8 @@ struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg,
const char *name)
{
struct kernel_clone_args args = {
- .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM,
+ .flags = CLONE_FS | CLONE_THREAD | CLONE_VM |
+ CLONE_UNTRACED,
.exit_signal = 0,
.fn = vhost_task_fn,
.name = name,
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-05-05 22:37 ` Mike Christie
@ 2023-05-06 1:53 ` Linus Torvalds
2023-05-13 12:39 ` Thorsten Leemhuis
1 sibling, 0 replies; 42+ messages in thread
From: Linus Torvalds @ 2023-05-06 1:53 UTC (permalink / raw)
To: Mike Christie
Cc: Christian Brauner, mst, konrad.wilk, linux-kernel,
virtualization, hch, ebiederm, stefanha, nicolas.dichtel
On Fri, May 5, 2023 at 3:38 PM Mike Christie
<michael.christie@oracle.com> wrote:
>
> If it's ok to change the behavior of "ps -u root", then we can do this patch:
I think this is the right thing to do.
Making the user worker threads show up as threads with the vhost
process as the parent really seems like a much better model, and more
accurate.
Yes, they used to show up as random kernel threads, and you'd see them
as such (not just for "ps -u root", but simply also with just a normal
"ps ax" kind of thing). But that isn't all that helpful, and it's
really just annoying to see our kernel threads in "ps ax" output, and
I've often wished we didn't do that (think of all the random
"kworker/xyz-kcryptd" etc things that show up).
So I think showing them as the threaded children of the vhost process
is much nicer, and probably the best option.
Because I don't think anything is going to get the *old* behavior of
showing them as the '[vhost-xyz]' system threads (or whatever the old
output ended up being in 'ps ax'), but hopefully nothing wants that
horror anyway.
At a minimum, the parenting is fundamentally going to look different
in the new model.
Linus
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-05-05 22:37 ` Mike Christie
2023-05-06 1:53 ` Linus Torvalds
@ 2023-05-13 12:39 ` Thorsten Leemhuis
2023-05-13 15:08 ` Linus Torvalds
1 sibling, 1 reply; 42+ messages in thread
From: Thorsten Leemhuis @ 2023-05-13 12:39 UTC (permalink / raw)
To: Mike Christie, Linus Torvalds, nicolas.dichtel,
Christian Brauner, Linux kernel regressions list
Cc: mst, konrad.wilk, linux-kernel, virtualization, hch, ebiederm, stefanha
[CCing the regression list]
On 06.05.23 00:37, Mike Christie wrote:
> On 5/5/23 1:22 PM, Linus Torvalds wrote:
>> On Fri, May 5, 2023 at 6:40 AM Nicolas Dichtel
>> <nicolas.dichtel@6wind.com> wrote:
>>>
>>> Is this an intended behavior?
>>> This breaks some of our scripts.
Jumping in here, as I found another problem with that patch: it broke
s2idle on my laptop when a qemu-kvm VM is running, as freezing user
space processes now fails for me:
```
[ 195.442949] PM: suspend entry (s2idle)
[ 195.641271] Filesystems sync: 0.198 seconds
[ 195.833828] Freezing user space processes
[ 215.841084] Freezing user space processes failed after 20.007 seconds (1 tasks refusing to freeze, wq_busy=0):
[ 215.841255] task:vhost-3221 state:R stack:0 pid:3250 ppid:3221 flags:0x00004006
[ 215.841264] Call Trace:
[ 215.841266] <TASK>
[ 215.841270] ? update_rq_clock+0x39/0x270
[ 215.841283] ? _raw_spin_unlock+0x19/0x40
[ 215.841290] ? __schedule+0x3f/0x1510
[ 215.841296] ? sysvec_apic_timer_interrupt+0xaf/0xd0
[ 215.841306] ? schedule+0x61/0xe0
[ 215.841313] ? vhost_worker+0x87/0xb0 [vhost]
[ 215.841329] ? vhost_task_fn+0x1a/0x30
[ 215.841336] ? __pfx_vhost_task_fn+0x10/0x10
[ 215.841341] ? ret_from_fork+0x2c/0x50
[ 215.841352] </TASK>
[ 215.841936] OOM killer enabled.
[ 215.841938] Restarting tasks ... done.
[ 215.844204] random: crng reseeded on system resumption
[ 215.957095] PM: suspend exit
[ 215.957185] PM: suspend entry (s2idle)
[ 215.967646] Filesystems sync: 0.010 seconds
[ 215.971326] Freezing user space processes
[ 235.974400] Freezing user space processes failed after 20.003 seconds (1 tasks refusing to freeze, wq_busy=0):
[ 235.974574] task:vhost-3221 state:R stack:0 pid:3250 ppid:3221 flags:0x00004806
[ 235.974583] Call Trace:
[ 235.974586] <TASK>
[ 235.974593] ? __schedule+0x184/0x1510
[ 235.974605] ? sysvec_apic_timer_interrupt+0xaf/0xd0
[ 235.974616] ? schedule+0x61/0xe0
[ 235.974624] ? vhost_worker+0x87/0xb0 [vhost]
[ 235.974648] ? vhost_task_fn+0x1a/0x30
[ 235.974656] ? __pfx_vhost_task_fn+0x10/0x10
[ 235.974662] ? ret_from_fork+0x2c/0x50
[ 235.974673] </TASK>
[ 235.975190] OOM killer enabled.
[ 235.975192] Restarting tasks ... done.
[ 235.978131] random: crng reseeded on system resumption
[ 236.091219] PM: suspend exit
```
After running into the problem I booted 6.3.1-rc1 again and there s2idle
still worked. Didn't do a bisection, just looked at the vhost commits
during the latest merge window; 6e890c5d502 ("vhost: use vhost_tasks for
worker threads") looked suspicious, so I reverted it on top of latest
mainline and then things worked again. Through a search on lore I arrived
in this thread and found the patch below from Mike. Gave it a try on top of
latest mainline, but it didn't help.
Ciao, Thorsten
> [...]
> If it's ok to change the behavior of "ps -u root", then we can do this patch:
> (Nicolas, I confirmed it fixes the 'ps a' case, but couldn't replicate the 'ps'
> case. If you could test the ps only case or give me info on what /usr/bin/example
> was doing I can replicate and test here):
>
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index ed4e01daccaa..eb9ffc58e211 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -2269,8 +2269,14 @@ __latent_entropy struct task_struct *copy_process(
> /*
> * Thread groups must share signals as well, and detached threads
> * can only be started up within the thread group.
> + *
> + * A userworker's parent thread will normally have a signal handler
> + * that performs management operations, but the worker will not
> + * because the parent will handle the signal then use a worker
> + * specific interface to manage the thread and related resources.
> */
> - if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND))
> + if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND) &&
> + !args->user_worker && !args->ignore_signals)
> return ERR_PTR(-EINVAL);
>
> /*
> diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
> index b7cbd66f889e..3700c21ea39d 100644
> --- a/kernel/vhost_task.c
> +++ b/kernel/vhost_task.c
> @@ -75,7 +78,8 @@ struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg,
> const char *name)
> {
> struct kernel_clone_args args = {
> - .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM,
> + .flags = CLONE_FS | CLONE_THREAD | CLONE_VM |
> + CLONE_UNTRACED,
> .exit_signal = 0,
> .fn = vhost_task_fn,
> .name = name
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-05-13 12:39 ` Thorsten Leemhuis
@ 2023-05-13 15:08 ` Linus Torvalds
[not found] ` <20230515-vollrausch-liebgeworden-2765f3ca3540@brauner>
0 siblings, 1 reply; 42+ messages in thread
From: Linus Torvalds @ 2023-05-13 15:08 UTC (permalink / raw)
To: Thorsten Leemhuis
Cc: Christian Brauner, Linux kernel regressions list, mst,
konrad.wilk, linux-kernel, virtualization, hch, ebiederm,
stefanha, nicolas.dichtel
On Sat, May 13, 2023 at 7:39 AM Thorsten Leemhuis <linux@leemhuis.info> wrote:
>
> Jumping in here, as I found another problem with that patch: it broke
> s2idle on my laptop when a qemu-kvm VM is running, as freezing user
> space processes now fails for me:
Hmm. kthreads have PF_NOFREEZE by default, which is probably the reason.
Adding
current->flags |= PF_NOFREEZE;
to the vhost_task setup might just fix it, but it feels a bit off.
The way io_uring does this is to do
        if (signal_pending(current)) {
                struct ksignal ksig;
                if (!get_signal(&ksig))
                        continue;
                break;
        }
in the main loop, which ends up handling the freezer situation too.
But it should handle things like SIGSTOP etc as well, and also exit on
actual signals.
I get the feeling that the whole "vhost_task_should_stop()" logic
should have the exact logic above, and basically make those threads
killable as well.
Hmm?
Linus
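A rough sketch of the loop Linus is describing, for illustration only; the fn, data, and exited members of struct vhost_task are assumptions here, and this is untested, not a proposed patch:
```c
/* Hypothetical rework of vhost_task_fn() along the lines above. */
static int vhost_task_fn(void *data)
{
	struct vhost_task *vtsk = data;

	for (;;) {
		if (vhost_task_should_stop(vtsk))
			break;

		if (signal_pending(current)) {
			struct ksignal ksig;

			/*
			 * get_signal() participates in freezing and handles
			 * SIGSTOP; it returns true when a fatal signal was
			 * dequeued, in which case the worker exits.
			 */
			if (get_signal(&ksig))
				break;
			continue;
		}

		/* Assumed worker callback fields. */
		vtsk->fn(vtsk->data);
	}

	complete(&vtsk->exited);	/* assumed completion used by the stop path */
	return 0;
}
```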
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-02-02 23:25 ` [PATCH v11 8/8] vhost: use vhost_tasks for worker threads Mike Christie
[not found] ` <aba6cca4-e66c-768f-375c-b38c8ba5e8a8@6wind.com>
@ 2023-07-20 13:06 ` Michael S. Tsirkin
2023-07-23 4:03 ` michael.christie
1 sibling, 1 reply; 42+ messages in thread
From: Michael S. Tsirkin @ 2023-07-20 13:06 UTC (permalink / raw)
To: Mike Christie
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
> For vhost workers we use the kthread API which inherits its values from
> and checks against the kthreadd thread. This results in the wrong RLIMITs
> being checked, so while tools like libvirt try to control the number of
> threads based on the nproc rlimit setting we can end up creating more
> threads than the user wanted.
>
> This patch has us use the vhost_task helpers which will inherit its
> values/checks from the thread that owns the device similar to if we did
> a clone in userspace. The vhost threads will now be counted in the nproc
> rlimits. And we get features like cgroups and mm sharing automatically,
> so we can remove those calls.
>
> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
Hi Mike,
So this seems to have caused a measurable regression in networking
performance (about 30%). Take a look here, and there's a zip file
with detailed measurements attached:
https://bugzilla.redhat.com/show_bug.cgi?id=2222603
Could you take a look please?
You can also ask reporter questions there assuming you
have or can create a (free) account.
> ---
> drivers/vhost/vhost.c | 58 ++++++++-----------------------------------
> drivers/vhost/vhost.h | 4 +--
> 2 files changed, 13 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 74378d241f8d..d3c7c37b69a7 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -22,11 +22,11 @@
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
> #include <linux/kthread.h>
> -#include <linux/cgroup.h>
> #include <linux/module.h>
> #include <linux/sort.h>
> #include <linux/sched/mm.h>
> #include <linux/sched/signal.h>
> +#include <linux/sched/vhost_task.h>
> #include <linux/interval_tree_generic.h>
> #include <linux/nospec.h>
> #include <linux/kcov.h>
> @@ -256,7 +256,7 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> * test_and_set_bit() implies a memory barrier.
> */
> llist_add(&work->node, &dev->worker->work_list);
> - wake_up_process(dev->worker->task);
> + wake_up_process(dev->worker->vtsk->task);
> }
> }
> EXPORT_SYMBOL_GPL(vhost_work_queue);
> @@ -336,17 +336,14 @@ static void vhost_vq_reset(struct vhost_dev *dev,
> static int vhost_worker(void *data)
> {
> struct vhost_worker *worker = data;
> - struct vhost_dev *dev = worker->dev;
> struct vhost_work *work, *work_next;
> struct llist_node *node;
>
> - kthread_use_mm(dev->mm);
> -
> for (;;) {
> /* mb paired w/ kthread_stop */
> set_current_state(TASK_INTERRUPTIBLE);
>
> - if (kthread_should_stop()) {
> + if (vhost_task_should_stop(worker->vtsk)) {
> __set_current_state(TASK_RUNNING);
> break;
> }
> @@ -368,7 +365,7 @@ static int vhost_worker(void *data)
> schedule();
> }
> }
> - kthread_unuse_mm(dev->mm);
> +
> return 0;
> }
>
> @@ -509,31 +506,6 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
> }
> EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
>
> -struct vhost_attach_cgroups_struct {
> - struct vhost_work work;
> - struct task_struct *owner;
> - int ret;
> -};
> -
> -static void vhost_attach_cgroups_work(struct vhost_work *work)
> -{
> - struct vhost_attach_cgroups_struct *s;
> -
> - s = container_of(work, struct vhost_attach_cgroups_struct, work);
> - s->ret = cgroup_attach_task_all(s->owner, current);
> -}
> -
> -static int vhost_attach_cgroups(struct vhost_dev *dev)
> -{
> - struct vhost_attach_cgroups_struct attach;
> -
> - attach.owner = current;
> - vhost_work_init(&attach.work, vhost_attach_cgroups_work);
> - vhost_work_queue(dev, &attach.work);
> - vhost_dev_flush(dev);
> - return attach.ret;
> -}
> -
> /* Caller should have device mutex */
> bool vhost_dev_has_owner(struct vhost_dev *dev)
> {
> @@ -580,14 +552,14 @@ static void vhost_worker_free(struct vhost_dev *dev)
>
> dev->worker = NULL;
> WARN_ON(!llist_empty(&worker->work_list));
> - kthread_stop(worker->task);
> + vhost_task_stop(worker->vtsk);
> kfree(worker);
> }
>
> static int vhost_worker_create(struct vhost_dev *dev)
> {
> struct vhost_worker *worker;
> - struct task_struct *task;
> + struct vhost_task *vtsk;
> int ret;
>
> worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
> @@ -595,27 +567,19 @@ static int vhost_worker_create(struct vhost_dev *dev)
> return -ENOMEM;
>
> dev->worker = worker;
> - worker->dev = dev;
> worker->kcov_handle = kcov_common_handle();
> init_llist_head(&worker->work_list);
>
> - task = kthread_create(vhost_worker, worker, "vhost-%d", current->pid);
> - if (IS_ERR(task)) {
> - ret = PTR_ERR(task);
> + vtsk = vhost_task_create(vhost_worker, worker, NUMA_NO_NODE);
> + if (!vtsk) {
> + ret = -ENOMEM;
> goto free_worker;
> }
>
> - worker->task = task;
> - wake_up_process(task); /* avoid contributing to loadavg */
> -
> - ret = vhost_attach_cgroups(dev);
> - if (ret)
> - goto stop_worker;
> -
> + worker->vtsk = vtsk;
> + vhost_task_start(vtsk, "vhost-%d", current->pid);
> return 0;
>
> -stop_worker:
> - kthread_stop(worker->task);
> free_worker:
> kfree(worker);
> dev->worker = NULL;
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index 2f6beab93784..3af59c65025e 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -16,6 +16,7 @@
> #include <linux/irqbypass.h>
>
> struct vhost_work;
> +struct vhost_task;
> typedef void (*vhost_work_fn_t)(struct vhost_work *work);
>
> #define VHOST_WORK_QUEUED 1
> @@ -26,9 +27,8 @@ struct vhost_work {
> };
>
> struct vhost_worker {
> - struct task_struct *task;
> + struct vhost_task *vtsk;
> struct llist_head work_list;
> - struct vhost_dev *dev;
> u64 kcov_handle;
> };
>
> --
> 2.25.1
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-07-20 13:06 ` Michael S. Tsirkin
@ 2023-07-23 4:03 ` michael.christie
2023-07-23 9:31 ` Michael S. Tsirkin
2023-08-10 18:57 ` Michael S. Tsirkin
0 siblings, 2 replies; 42+ messages in thread
From: michael.christie @ 2023-07-23 4:03 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On 7/20/23 8:06 AM, Michael S. Tsirkin wrote:
> On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
>> For vhost workers we use the kthread API which inherits its values from
>> and checks against the kthreadd thread. This results in the wrong RLIMITs
>> being checked, so while tools like libvirt try to control the number of
>> threads based on the nproc rlimit setting we can end up creating more
>> threads than the user wanted.
>>
>> This patch has us use the vhost_task helpers which will inherit its
>> values/checks from the thread that owns the device similar to if we did
>> a clone in userspace. The vhost threads will now be counted in the nproc
>> rlimits. And we get features like cgroups and mm sharing automatically,
>> so we can remove those calls.
>>
>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
>
> Hi Mike,
> So this seems to have caused a measurable regression in networking
> performance (about 30%). Take a look here, and there's a zip file
> with detailed measurements attached:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=2222603
>
>
> Could you take a look please?
> You can also ask reporter questions there assuming you
> have or can create a (free) account.
>
Sorry for the late reply. I just got home from vacation.
The account creation link seems to be down. I keep getting an
"unable to establish SMTP connection to bz-exim-prod port 25 " error.
Can you give me Quan's email?
I think I can replicate the problem. I just need some extra info from Quan:
1. Just double check that they are using RHEL 9 on the host running the VMs.
2. The kernel config
3. Any tuning that was done. Is tuned running in guest and/or host running the
VMs and what profile is being used in each.
4. Number of vCPUs and virtqueues being used.
5. Can they dump the contents of:
/sys/kernel/debug/sched
and
sysctl -a
on the host running the VMs.
6. With the 6.4 kernel, can they also run a quick test and tell me if they set
the scheduler to batch:
ps -T -o comm,pid,tid $QEMU_THREAD
then for each vhost thread do:
chrt -b -p 0 $VHOST_THREAD
Does that end up increasing perf? When I do this I see throughput go up by
around 50% vs 6.3 when sessions were 16 or more (16 was the number of vCPUs
and virtqueues per net device in the VM). Note that I'm not saying that is a fix.
It's just a difference I noticed when running some other tests.
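For anyone reproducing the test above, the two commands can be wrapped in a small loop. A sketch, assuming a single qemu-kvm instance and 6.4's in-process vhost threads (named "vhost-<pid>"):
```sh
QEMU_PID=$(pgrep -o qemu-kvm)          # assumption: one qemu-kvm instance
ps -T -o comm,pid,tid -p "$QEMU_PID"   # list the qemu threads
# Put every vhost worker thread into SCHED_BATCH at priority 0.
for TID in $(ps -T -o comm,tid -p "$QEMU_PID" | awk '/^vhost-/ {print $2}'); do
        chrt -b -p 0 "$TID"
done
```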
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-07-23 4:03 ` michael.christie
@ 2023-07-23 9:31 ` Michael S. Tsirkin
2023-08-10 18:57 ` Michael S. Tsirkin
1 sibling, 0 replies; 42+ messages in thread
From: Michael S. Tsirkin @ 2023-07-23 9:31 UTC (permalink / raw)
To: michael.christie
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On Sat, Jul 22, 2023 at 11:03:29PM -0500, michael.christie@oracle.com wrote:
> On 7/20/23 8:06 AM, Michael S. Tsirkin wrote:
> > On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
> >> For vhost workers we use the kthread API which inherits its values from
> >> and checks against the kthreadd thread. This results in the wrong RLIMITs
> >> being checked, so while tools like libvirt try to control the number of
> >> threads based on the nproc rlimit setting we can end up creating more
> >> threads than the user wanted.
> >>
> >> This patch has us use the vhost_task helpers which will inherit its
> >> values/checks from the thread that owns the device similar to if we did
> >> a clone in userspace. The vhost threads will now be counted in the nproc
> >> rlimits. And we get features like cgroups and mm sharing automatically,
> >> so we can remove those calls.
> >>
> >> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> >> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >
> >
> > Hi Mike,
> > So this seems to have caused a measurable regression in networking
> > performance (about 30%). Take a look here, and there's a zip file
> > with detailed measurements attached:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=2222603
> >
> >
> > Could you take a look please?
> > You can also ask reporter questions there assuming you
> > have or can create a (free) account.
> >
>
> Sorry for the late reply. I just got home from vacation.
>
> The account creation link seems to be down. I keep getting an
> "unable to establish SMTP connection to bz-exim-prod port 25 " error.
>
> Can you give me Quan's email?
Thanks for getting back! I asked whether it's ok to share the email.
For now I've pasted your request in the bugzilla.
> I think I can replicate the problem. I just need some extra info from Quan:
>
> 1. Just double check that they are using RHEL 9 on the host running the VMs.
> 2. The kernel config
> 3. Any tuning that was done. Is tuned running in guest and/or host running the
> VMs and what profile is being used in each.
> 4. Number of vCPUs and virtqueues being used.
> 5. Can they dump the contents of:
>
> /sys/kernel/debug/sched
>
> and
>
> sysctl -a
>
> on the host running the VMs.
>
> 6. With the 6.4 kernel, can they also run a quick test and tell me if they set
> the scheduler to batch:
>
> ps -T -o comm,pid,tid $QEMU_THREAD
>
> then for each vhost thread do:
>
> chrt -b -p 0 $VHOST_THREAD
>
> Does that end up increasing perf? When I do this I see throughput go up by
> around 50% vs 6.3 when sessions were 16 or more (16 was the number of vCPUs
> and virtqueues per net device in the VM). Note that I'm not saying that is a fix.
> It's just a difference I noticed when running some other tests.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-07-23 4:03 ` michael.christie
2023-07-23 9:31 ` Michael S. Tsirkin
@ 2023-08-10 18:57 ` Michael S. Tsirkin
2023-08-11 18:51 ` Mike Christie
1 sibling, 1 reply; 42+ messages in thread
From: Michael S. Tsirkin @ 2023-08-10 18:57 UTC (permalink / raw)
To: michael.christie
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On Sat, Jul 22, 2023 at 11:03:29PM -0500, michael.christie@oracle.com wrote:
> On 7/20/23 8:06 AM, Michael S. Tsirkin wrote:
> > On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
> >> For vhost workers we use the kthread API which inherits its values from
> >> and checks against the kthreadd thread. This results in the wrong RLIMITs
> >> being checked, so while tools like libvirt try to control the number of
> >> threads based on the nproc rlimit setting we can end up creating more
> >> threads than the user wanted.
> >>
> >> This patch has us use the vhost_task helpers which will inherit its
> >> values/checks from the thread that owns the device similar to if we did
> >> a clone in userspace. The vhost threads will now be counted in the nproc
> >> rlimits. And we get features like cgroups and mm sharing automatically,
> >> so we can remove those calls.
> >>
> >> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> >> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >
> >
> > Hi Mike,
> > So this seems to have caused a measurable regression in networking
> > performance (about 30%). Take a look here, and there's a zip file
> > with detailed measurements attached:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=2222603
> >
> >
> > Could you take a look please?
> > You can also ask reporter questions there assuming you
> > have or can create a (free) account.
> >
>
> Sorry for the late reply. I just got home from vacation.
>
> The account creation link seems to be down. I keep getting an
> "unable to establish SMTP connection to bz-exim-prod port 25 " error.
>
> Can you give me Quan's email?
>
> I think I can replicate the problem. I just need some extra info from Quan:
>
> 1. Just double check that they are using RHEL 9 on the host running the VMs.
> 2. The kernel config
> 3. Any tuning that was done. Is tuned running in guest and/or host running the
> VMs and what profile is being used in each.
> 4. Number of vCPUs and virtqueues being used.
> 5. Can they dump the contents of:
>
> /sys/kernel/debug/sched
>
> and
>
> sysctl -a
>
> on the host running the VMs.
>
> 6. With the 6.4 kernel, can they also run a quick test and tell me if they set
> the scheduler to batch:
>
> ps -T -o comm,pid,tid $QEMU_THREAD
>
> then for each vhost thread do:
>
> chrt -b -p 0 $VHOST_THREAD
>
> Does that end up increasing perf? When I do this I see throughput go up by
> around 50% vs 6.3 when sessions were 16 or more (16 was the number of vCPUs
> and virtqueues per net device in the VM). Note that I'm not saying that is a fix.
> It's just a difference I noticed when running some other tests.
Mike, I'm unsure what to do at this point. Regressions are not nice,
but if the kernel is released with the new userspace API we won't
be able to revert. So what's the plan?
--
MST
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-08-10 18:57 ` Michael S. Tsirkin
@ 2023-08-11 18:51 ` Mike Christie
2023-08-13 19:01 ` Michael S. Tsirkin
0 siblings, 1 reply; 42+ messages in thread
From: Mike Christie @ 2023-08-11 18:51 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On 8/10/23 1:57 PM, Michael S. Tsirkin wrote:
> On Sat, Jul 22, 2023 at 11:03:29PM -0500, michael.christie@oracle.com wrote:
>> On 7/20/23 8:06 AM, Michael S. Tsirkin wrote:
>>> On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
>>>> For vhost workers we use the kthread API which inherits its values from
>>>> and checks against the kthreadd thread. This results in the wrong RLIMITs
>>>> being checked, so while tools like libvirt try to control the number of
>>>> threads based on the nproc rlimit setting we can end up creating more
>>>> threads than the user wanted.
>>>>
>>>> This patch has us use the vhost_task helpers which will inherit its
>>>> values/checks from the thread that owns the device similar to if we did
>>>> a clone in userspace. The vhost threads will now be counted in the nproc
>>>> rlimits. And we get features like cgroups and mm sharing automatically,
>>>> so we can remove those calls.
>>>>
>>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>>>
>>>
>>> Hi Mike,
>>> So this seems to have caused a measurable regression in networking
>>> performance (about 30%). Take a look here, and there's a zip file
>>> with detailed measurements attached:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=2222603
>>>
>>>
>>> Could you take a look please?
>>> You can also ask reporter questions there assuming you
>>> have or can create a (free) account.
>>>
>>
>> Sorry for the late reply. I just got home from vacation.
>>
>> The account creation link seems to be down. I keep getting an
>> "unable to establish SMTP connection to bz-exim-prod port 25 " error.
>>
>> Can you give me Quan's email?
>>
>> I think I can replicate the problem. I just need some extra info from Quan:
>>
>> 1. Just double check that they are using RHEL 9 on the host running the VMs.
>> 2. The kernel config
>> 3. Any tuning that was done. Is tuned running in guest and/or host running the
>> VMs and what profile is being used in each.
>> 4. Number of vCPUs and virtqueues being used.
>> 5. Can they dump the contents of:
>>
>> /sys/kernel/debug/sched
>>
>> and
>>
>> sysctl -a
>>
>> on the host running the VMs.
>>
>> 6. With the 6.4 kernel, can they also run a quick test and tell me if they set
>> the scheduler to batch:
>>
>> ps -T -o comm,pid,tid $QEMU_THREAD
>>
>> then for each vhost thread do:
>>
>> chrt -b -p 0 $VHOST_THREAD
>>
>> Does that end up increasing perf? When I do this I see throughput go up by
>> around 50% vs 6.3 when sessions were 16 or more (16 was the number of vCPUs
>> and virtqueues per net device in the VM). Note that I'm not saying that is a fix.
>> It's just a difference I noticed when running some other tests.
>
>
> Mike, I'm unsure what to do at this point. Regressions are not nice,
> but if the kernel is released with the new userspace API we won't
> be able to revert. So what's the plan?
>
I'm sort of stumped. I still can't replicate the problem out of the box: 6.3 and
6.4 perform the same for me. I've tried your setup and settings with different
combos of tools like tuned and irqbalance.
I can sort of force the issue. In 6.4, the vhost thread inherits its settings
from the parent thread. In 6.3, the vhost thread inherits from kthreadd and we
would then reset the sched settings. So in 6.4, if I just tune the parent
differently I can cause different performance. If we want the 6.3 behavior, we
can do the patch below.
However, I don't think you guys are hitting this, because you are just running
qemu from a normal shell and are not doing anything fancy with the sched
settings.
diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
index da35e5b7f047..f2c2638d1106 100644
--- a/kernel/vhost_task.c
+++ b/kernel/vhost_task.c
@@ -2,6 +2,7 @@
/*
* Copyright (C) 2021 Oracle Corporation
*/
+#include <uapi/linux/sched/types.h>
#include <linux/slab.h>
#include <linux/completion.h>
#include <linux/sched/task.h>
@@ -22,9 +23,16 @@ struct vhost_task {
static int vhost_task_fn(void *data)
{
+ static const struct sched_param param = { .sched_priority = 0 };
struct vhost_task *vtsk = data;
bool dead = false;
+ /*
+ * Don't inherit the parent's sched info, so we maintain compat from
+ * when we used kthreads and it reset this info.
+ */
+ sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
+
for (;;) {
bool did_work;
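If the patch above is applied, one quick way to confirm the reset from userspace is to check a vhost worker's scheduling policy; the "vhost-<pid>" thread naming is an assumption here:
```sh
TID=$(ps -eT -o comm,tid | awk '/^vhost-/ {print $2; exit}')
chrt -p "$TID"   # expect "current scheduling policy: SCHED_OTHER", priority 0
```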
^ permalink raw reply related [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-08-11 18:51 ` Mike Christie
@ 2023-08-13 19:01 ` Michael S. Tsirkin
2023-08-14 3:13 ` michael.christie
0 siblings, 1 reply; 42+ messages in thread
From: Michael S. Tsirkin @ 2023-08-13 19:01 UTC (permalink / raw)
To: Mike Christie
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On Fri, Aug 11, 2023 at 01:51:36PM -0500, Mike Christie wrote:
> On 8/10/23 1:57 PM, Michael S. Tsirkin wrote:
> > On Sat, Jul 22, 2023 at 11:03:29PM -0500, michael.christie@oracle.com wrote:
> >> On 7/20/23 8:06 AM, Michael S. Tsirkin wrote:
> >>> On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
> >>>> For vhost workers we use the kthread API which inherits its values from
> >>>> and checks against the kthreadd thread. This results in the wrong RLIMITs
> >>>> being checked, so while tools like libvirt try to control the number of
> >>>> threads based on the nproc rlimit setting we can end up creating more
> >>>> threads than the user wanted.
> >>>>
> >>>> This patch has us use the vhost_task helpers which will inherit its
> >>>> values/checks from the thread that owns the device similar to if we did
> >>>> a clone in userspace. The vhost threads will now be counted in the nproc
> >>>> rlimits. And we get features like cgroups and mm sharing automatically,
> >>>> so we can remove those calls.
> >>>>
> >>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
> >>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> >>>
> >>>
> >>> Hi Mike,
> >>> So this seems to have caused a measurable regression in networking
> >>> performance (about 30%). Take a look here, and there's a zip file
> >>> with detailed measurements attached:
> >>>
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=2222603
> >>>
> >>>
> >>> Could you take a look please?
> >>> You can also ask reporter questions there assuming you
> >>> have or can create a (free) account.
> >>>
> >>
> >> Sorry for the late reply. I just got home from vacation.
> >>
> >> The account creation link seems to be down. I keep getting an
> >> "unable to establish SMTP connection to bz-exim-prod port 25 " error.
> >>
> >> Can you give me Quan's email?
> >>
> >> I think I can replicate the problem. I just need some extra info from Quan:
> >>
> >> 1. Just double check that they are using RHEL 9 on the host running the VMs.
> >> 2. The kernel config
> >> 3. Any tuning that was done. Is tuned running in guest and/or host running the
> >> VMs and what profile is being used in each.
> >> 4. Number of vCPUs and virtqueues being used.
> >> 5. Can they dump the contents of:
> >>
> >> /sys/kernel/debug/sched
> >>
> >> and
> >>
> >> sysctl -a
> >>
> >> on the host running the VMs.
> >>
> >> 6. With the 6.4 kernel, can they also run a quick test and tell me if they set
> >> the scheduler to batch:
> >>
> >> ps -T -o comm,pid,tid $QEMU_THREAD
> >>
> >> then for each vhost thread do:
> >>
> >> chrt -b -p 0 $VHOST_THREAD
> >>
> >> Does that end up increasing perf? When I do this I see throughput go up by
> >> around 50% vs 6.3 when sessions were 16 or more (16 was the number of vCPUs
> >> and virtqueues per net device in the VM). Note that I'm not saying that is a fix.
> >> It's just a difference I noticed when running some other tests.
> >
> >
> > Mike, I'm unsure what to do at this point. Regressions are not nice,
> > but if the kernel is released with the new userspace API we won't
> > be able to revert. So what's the plan?
> >
>
> I'm sort of stumped. I still can't replicate the problem out of the box: 6.3 and
> 6.4 perform the same for me. I've tried your setup and settings with different
> combos of tools like tuned and irqbalance.
>
> I can sort of force the issue. In 6.4, the vhost thread inherits its settings
> from the parent thread. In 6.3, the vhost thread inherits from kthreadd and we
> would then reset the sched settings. So in 6.4, if I just tune the parent
> differently I can cause different performance. If we want the 6.3 behavior, we
> can do the patch below.
>
> However, I don't think you guys are hitting this, because you are just running
> qemu from a normal shell and are not doing anything fancy with the sched
> settings.
>
>
> diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
> index da35e5b7f047..f2c2638d1106 100644
> --- a/kernel/vhost_task.c
> +++ b/kernel/vhost_task.c
> @@ -2,6 +2,7 @@
> /*
> * Copyright (C) 2021 Oracle Corporation
> */
> +#include <uapi/linux/sched/types.h>
> #include <linux/slab.h>
> #include <linux/completion.h>
> #include <linux/sched/task.h>
> @@ -22,9 +23,16 @@ struct vhost_task {
>
> static int vhost_task_fn(void *data)
> {
> + static const struct sched_param param = { .sched_priority = 0 };
> struct vhost_task *vtsk = data;
> bool dead = false;
>
> + /*
> + * Don't inherit the parent's sched info, so we maintain compat from
> + * when we used kthreads and it reset this info.
> + */
> > + sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
> +
> for (;;) {
> bool did_work;
>
>
>
Yes, seems unlikely. Still, attach this to the bugzilla so it can be
tested?
And what will help you debug? Any traces to enable?
Also, wasn't there another issue with a non-standard config?
Maybe if we fix that it will by chance fix this one too?
>
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v11 8/8] vhost: use vhost_tasks for worker threads
2023-08-13 19:01 ` Michael S. Tsirkin
@ 2023-08-14 3:13 ` michael.christie
0 siblings, 0 replies; 42+ messages in thread
From: michael.christie @ 2023-08-14 3:13 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: brauner, konrad.wilk, linux-kernel, virtualization, hch,
ebiederm, stefanha, torvalds
On 8/13/23 2:01 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 11, 2023 at 01:51:36PM -0500, Mike Christie wrote:
>> On 8/10/23 1:57 PM, Michael S. Tsirkin wrote:
>>> On Sat, Jul 22, 2023 at 11:03:29PM -0500, michael.christie@oracle.com wrote:
>>>> On 7/20/23 8:06 AM, Michael S. Tsirkin wrote:
>>>>> On Thu, Feb 02, 2023 at 05:25:17PM -0600, Mike Christie wrote:
>>>>>> For vhost workers we use the kthread API which inherits its values from
>>>>>> and checks against the kthreadd thread. This results in the wrong RLIMITs
>>>>>> being checked, so while tools like libvirt try to control the number of
>>>>>> threads based on the nproc rlimit setting we can end up creating more
>>>>>> threads than the user wanted.
>>>>>>
>>>>>> This patch has us use the vhost_task helpers which will inherit its
>>>>>> values/checks from the thread that owns the device similar to if we did
>>>>>> a clone in userspace. The vhost threads will now be counted in the nproc
>>>>>> rlimits. And we get features like cgroups and mm sharing automatically,
>>>>>> so we can remove those calls.
>>>>>>
>>>>>> Signed-off-by: Mike Christie <michael.christie@oracle.com>
>>>>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>>>>>
>>>>>
>>>>> Hi Mike,
>>>>> So this seems to have caused a measurable regression in networking
>>>>> performance (about 30%). Take a look here, and there's a zip file
>>>>> with detailed measurements attached:
>>>>>
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2222603
>>>>>
>>>>>
>>>>> Could you take a look please?
>>>>> You can also ask reporter questions there assuming you
>>>>> have or can create a (free) account.
>>>>>
>>>>
>>>> Sorry for the late reply. I just got home from vacation.
>>>>
>>>> The account creation link seems to be down. I keep getting an
>>>> "unable to establish SMTP connection to bz-exim-prod port 25 " error.
>>>>
>>>> Can you give me Quan's email?
>>>>
>>>> I think I can replicate the problem. I just need some extra info from Quan:
>>>>
>>>> 1. Just double check that they are using RHEL 9 on the host running the VMs.
>>>> 2. The kernel config
>>>> 3. Any tuning that was done. Is tuned running in guest and/or host running the
>>>> VMs and what profile is being used in each.
>>>> 4. Number of vCPUs and virtqueues being used.
>>>> 5. Can they dump the contents of:
>>>>
>>>> /sys/kernel/debug/sched
>>>>
>>>> and
>>>>
>>>> sysctl -a
>>>>
>>>> on the host running the VMs.
>>>>
>>>> 6. With the 6.4 kernel, can they also run a quick test and tell me if they set
>>>> the scheduler to batch:
>>>>
>>>> ps -T -o comm,pid,tid $QEMU_THREAD
>>>>
>>>> then for each vhost thread do:
>>>>
>>>> chrt -b -p 0 $VHOST_THREAD
>>>>
>>>> Does that end up increasing perf? When I do this I see throughput go up by
>>>> around 50% vs 6.3 when sessions were 16 or more (16 was the number of vCPUs
>>>> and virtqueues per net device in the VM). Note that I'm not saying that is a fix.
>>>> It's just a difference I noticed when running some other tests.
>>>
>>>
>>> Mike, I'm unsure what to do at this point. Regressions are not nice,
>>> but if the kernel is released with the new userspace API we won't
>>> be able to revert. So what's the plan?
>>>
>>
>> I'm sort of stumped. I still can't replicate the problem out of the box: 6.3 and
>> 6.4 perform the same for me. I've tried your setup and settings with different
>> combos of tools like tuned and irqbalance.
>>
>> I can sort of force the issue. In 6.4, the vhost thread inherits its settings
>> from the parent thread. In 6.3, the vhost thread inherits from kthreadd and we
>> would then reset the sched settings. So in 6.4, if I just tune the parent
>> differently I can cause different performance. If we want the 6.3 behavior, we
>> can do the patch below.
>>
>> However, I don't think you guys are hitting this, because you are just running
>> qemu from a normal shell and are not doing anything fancy with the sched
>> settings.
>>
>>
>> diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
>> index da35e5b7f047..f2c2638d1106 100644
>> --- a/kernel/vhost_task.c
>> +++ b/kernel/vhost_task.c
>> @@ -2,6 +2,7 @@
>> /*
>> * Copyright (C) 2021 Oracle Corporation
>> */
>> +#include <uapi/linux/sched/types.h>
>> #include <linux/slab.h>
>> #include <linux/completion.h>
>> #include <linux/sched/task.h>
>> @@ -22,9 +23,16 @@ struct vhost_task {
>>
>> static int vhost_task_fn(void *data)
>> {
>> + static const struct sched_param param = { .sched_priority = 0 };
>> struct vhost_task *vtsk = data;
>> bool dead = false;
>>
>> + /*
>> + * Don't inherit the parent's sched info, so we maintain compat from
>> + * when we used kthreads and it reset this info.
>> + */
>> + sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
>> +
>> for (;;) {
>> bool did_work;
>>
>>
>>
>
> Yes, seems unlikely. Still, attach this to the bugzilla so it can be
> tested?
>
> And what will help you debug? Any traces to enable?
I added the patch and asked for a perf trace.
>
> Also, wasn't there another issue with a non-standard config?
> Maybe if we fix that it will by chance fix this one too?
>
It was when CONFIG_RT_GROUP_SCHED was enabled in the kernel config;
I would then see a large drop in IOPS/throughput.
In the current 6.5-rc6 I don't see the problem anymore, but I haven't had a
chance to narrow down what fixed it.
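To check whether a given host kernel has that option enabled, either of the usual config locations works:
```sh
grep CONFIG_RT_GROUP_SCHED /boot/config-"$(uname -r)"
# Or, if the running kernel exposes its config (CONFIG_IKCONFIG_PROC):
zgrep CONFIG_RT_GROUP_SCHED /proc/config.gz
```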
^ permalink raw reply [flat|nested] 42+ messages in thread