From: Mike Christie <michael.christie@oracle.com>
To: target-devel@vger.kernel.org, linux-scsi@vger.kernel.org, stefanha@redhat.com, pbonzini@redhat.com, jasowang@redhat.com, mst@redhat.com, sgarzare@redhat.com, virtualization@lists.linux-foundation.org
Cc: Mike Christie <michael.christie@oracle.com>
Subject: [PATCH 9/9] vhost: support sharing workers across devs
Date: Tue, 25 May 2021 13:06:00 -0500
Message-ID: <20210525180600.6349-10-michael.christie@oracle.com>
In-Reply-To: <20210525180600.6349-1-michael.christie@oracle.com>

This allows a worker to handle multiple devices' vqs.

TODO:
- The worker is attached to the cgroup of the device that created it.
  With this patch you can share a worker between devices with different
  owners, which could be in different cgroups. Do we want to restrict
  sharing to devices that have the same owner (dev->mm value)?

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/vhost.c | 16 +++++++---------
 drivers/vhost/vhost.h |  1 -
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index eb16eb2bbee0..c32f72b1901c 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -388,12 +388,10 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 static int vhost_worker(void *data)
 {
 	struct vhost_worker *worker = data;
-	struct vhost_dev *dev = worker->dev;
 	struct vhost_work *work, *work_next;
+	struct vhost_dev *dev;
 	struct llist_node *node;
 
-	kthread_use_mm(dev->mm);
-
 	for (;;) {
 		/* mb paired w/ kthread_stop */
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -412,15 +410,20 @@ static int vhost_worker(void *data)
 		smp_wmb();
 		llist_for_each_entry_safe(work, work_next, node, node) {
 			clear_bit(VHOST_WORK_QUEUED, &work->flags);
+			dev = work->dev;
+
+			kthread_use_mm(dev->mm);
+
 			__set_current_state(TASK_RUNNING);
 			kcov_remote_start_common(dev->kcov_handle);
 			work->fn(work);
 			kcov_remote_stop();
 			if (need_resched())
 				schedule();
+
+			kthread_unuse_mm(dev->mm);
 		}
 	}
-	kthread_unuse_mm(dev->mm);
 	return 0;
 }
 
@@ -667,7 +670,6 @@ static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
 		return NULL;
 
 	worker->id = dev->num_workers;
-	worker->dev = dev;
 	init_llist_head(&worker->work_list);
 	INIT_HLIST_NODE(&worker->h_node);
 	refcount_set(&worker->refcount, 1);
@@ -702,10 +704,6 @@ static struct vhost_worker *vhost_worker_find(struct vhost_dev *dev, pid_t pid)
 	spin_lock(&vhost_workers_lock);
 	hash_for_each_possible(vhost_workers, worker, h_node, pid) {
 		if (worker->task->pid == pid) {
-			/* tmp - next patch allows sharing across devs */
-			if (worker->dev != dev)
-				break;
-
 			found_worker = worker;
 			refcount_inc(&worker->refcount);
 			break;
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 75ad3aa5adca..40c400172a84 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -32,7 +32,6 @@ struct vhost_worker {
 	struct llist_head work_list;
 	struct hlist_node h_node;
 	refcount_t refcount;
-	struct vhost_dev *dev;
 	int id;
 };
-- 
2.25.1