From: Mike Christie <michael.christie@oracle.com>
To: target-devel@vger.kernel.org, linux-scsi@vger.kernel.org,
	stefanha@redhat.com, jasowang@redhat.com, mst@redhat.com,
	sgarzare@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH V5 00/12] vhost: multiple worker support
Date: Mon,  6 Dec 2021 20:51:05 -0600	[thread overview]
Message-ID: <20211207025117.23551-1-michael.christie@oracle.com> (raw)

The following patches apply on top of Linus's tree and the user_worker
patchset here:

https://lore.kernel.org/virtualization/20211129194707.5863-1-michael.christie@oracle.com/T/#t

which allows us to check the vhost owner thread's RLIMITs. They are also
built on top of Andrey's flush cleanups:

https://lore.kernel.org/virtualization/20211207024510.23292-1-michael.christie@oracle.com/T/#t

The patches allow us to support multiple vhost workers per device. The
design is a modified version of Stefan's original idea, where userspace has
the kernel create a worker and we pass back the pid. Since V4, instead of
passing a pid between user and kernel space, we use a worker_id, which is
just an integer managed by the vhost driver, and we allow userspace to
create and free workers and then attach them to virtqueues at setup time
or while IO is running.
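
To make that flow concrete, below is a rough userspace sketch of creating a
worker and attaching it to a virtqueue. The ioctl names, ioctl numbers and
struct layouts are assumptions based on this cover letter and the changelog
below, not necessarily the exact uapi added by patches 11 and 12:

/*
 * Hypothetical userspace sketch only: the ioctl numbers and struct
 * layouts are assumed for illustration and may differ from the series.
 */
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#define VHOST_VIRTIO 0xAF

struct vhost_worker_state {		/* assumed layout */
	unsigned int worker_id;
};

struct vhost_vring_worker {		/* assumed layout */
	unsigned int index;		/* virtqueue index */
	unsigned int worker_id;
};

/* assumed ioctl definitions */
#define VHOST_NEW_WORKER	  _IOR(VHOST_VIRTIO, 0x8, struct vhost_worker_state)
#define VHOST_ATTACH_VRING_WORKER _IOW(VHOST_VIRTIO, 0x15, struct vhost_vring_worker)

/* give each virtqueue of an already owned vhost dev fd its own worker */
static int setup_worker_per_vq(int vhost_fd, unsigned int nr_vqs)
{
	unsigned int i;

	for (i = 0; i < nr_vqs; i++) {
		struct vhost_worker_state state = { 0 };
		struct vhost_vring_worker vring_worker;

		/* the kernel creates a worker and returns its worker_id */
		if (ioctl(vhost_fd, VHOST_NEW_WORKER, &state) < 0)
			return -1;

		vring_worker.index = i;
		vring_worker.worker_id = state.worker_id;
		/* bind the new worker to virtqueue i */
		if (ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &vring_worker) < 0)
			return -1;
	}
	return 0;
}

In this design a worker that is no longer needed can be released with the
corresponding free cmd, and attach can also be issued while the vqs are
running.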

All review comments from the past reviews should be handled. If I didn't
reply to a review comment, I agreed with the comment and should have
handled it in this posting. Let me know if I missed one.

Results (IOPS):
---------------

fio jobs        1       2       4       8       12      16
----------------------------------------------------------
1 worker        84k    492k    510k    -       -       -
worker per vq   184k   380k    744k    1422k   2256k   2434k

Notes:
0. This used a simple fio command:

fio --name=test --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=$JOBS_ABOVE

and I used a VM with 16 vCPUs and 16 virtqueues.

1. The patches were tested with emulate_pr=0 and these patches:

https://lore.kernel.org/all/yq1tuhge4bg.fsf@ca-mkp.ca.oracle.com/t/

which are in mkp's scsi branches for the next kernel. They fix the perf
issues where IOPS dropped at 12 vqs/jobs.

2. Because we have a hard limit of 1024 cmds, if num jobs * iodepth was
greater than 1024, I decreased the iodepth so the product stayed at or under
the limit. So 12 jobs used an iodepth of 85 (12 * 85 = 1020), and 16 jobs
used 64 (16 * 64 = 1024).

3. The perf issue above at 2 jobs, where worker per vq is slower than 1
worker, is because when we only have 1 worker we execute more cmds per
vhost_work, since all vqs funnel to one worker. This results in fewer
context switches and better batching without having to tweak any settings.
I'm working on patches to add back batching during LIO completion and to do
polling on the submission side.

We will still want the threading patches, because if we batch at the fio
level and also use the vhost threading patches, we see a big boost like
below. So hopefully doing the batching in the kernel will allow apps to just
work without having to be smart like fio.

fio using io_uring and batching with the iodepth_batch* settings:

fio jobs        1       2       4       8       12      16
-------------------------------------------------------------
1 worker        494k    520k    -       -       -       -
worker per vq   496k    878k    1542k   2436k   2304k   2590k

V5:
- Rebase against user_worker patchset.
- Rebase against flush patchset.
- Redo vhost-scsi tmf flush handling so it doesn't access vq->worker.
V4:
- fix vhost-sock VSOCK_VQ_RX use.
- name functions called directly by ioctl cmds to match the ioctl cmd.
- break up VHOST_SET_VRING_WORKER into separate new, free and attach cmds.
- document worker lifetime, and cgroup, namespace, mm and rlimit
inheritance; make it clear we currently only support sharing within the
device.
- add support to attach workers while IO is running.
- instead of passing the pid_t of the kernel thread, pass an int allocated
by the vhost layer with an idr (see the sketch after this changelog).

V3:
- fully convert vhost code to use vq based APIs instead of leaving it
half per dev and half per vq.
- rebase against kernel worker API.
- Drop delayed worker creation. We always create the default worker at
VHOST_SET_OWNER time. Userspace can create and bind workers after that.

V2:
- change the loop in which we take a refcount to the worker.
- replaced pid == -1 with a define.
- fixed tabbing/spacing coding style issues.
- use a hash instead of a list to look up workers.
- I dropped the patch that added an ioctl cmd to get a vq's worker's
pid, since it looks like we might do a generic netlink interface instead.
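
As a reference for the worker_id handling mentioned in the V4 notes, a
minimal sketch of idr-based id allocation might look roughly like the below.
The struct fields (dev->worker_idr, worker->id) and the vhost_worker_start()
helper are assumptions for illustration, not the actual code from patch 11:

/*
 * Hypothetical kernel-side sketch: field and helper names are assumed
 * and are not taken verbatim from the patches.
 */
static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
{
	struct vhost_worker *worker;
	int id;

	worker = kzalloc(sizeof(*worker), GFP_KERNEL);
	if (!worker)
		return NULL;

	/* hand userspace a small integer id instead of a kernel pid */
	id = idr_alloc(&dev->worker_idr, worker, 0, INT_MAX, GFP_KERNEL);
	if (id < 0)
		goto free_worker;
	worker->id = id;
	worker->dev = dev;

	/* start the thread that runs this worker's work list */
	if (vhost_worker_start(worker))
		goto free_id;

	return worker;

free_id:
	idr_remove(&dev->worker_idr, worker->id);
free_worker:
	kfree(worker);
	return NULL;
}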





Thread overview: 28+ messages

2021-12-07  2:51 [PATCH V5 00/12] vhost: multiple worker support Mike Christie [this message]
2021-12-07  2:51 ` [PATCH V5 01/12] vhost: add vhost_worker pointer to vhost_virtqueue Mike Christie
2021-12-07  2:51 ` [PATCH V5 02/12] vhost, vhost-net: add helper to check if vq has work Mike Christie
2021-12-07  2:51 ` [PATCH V5 03/12] vhost: take worker or vq instead of dev for queueing Mike Christie
2021-12-07  2:51 ` [PATCH V5 04/12] vhost: take worker or vq instead of dev for flushing Mike Christie
2021-12-07  2:51 ` [PATCH V5 05/12] vhost: convert poll work to be vq based Mike Christie
2021-12-07  2:51 ` [PATCH V5 06/12] vhost-sock: convert to vhost_vq_work_queue Mike Christie
2021-12-07  2:51 ` [PATCH V5 07/12] vhost-scsi: make SCSI cmd completion per vq Mike Christie
2021-12-07  2:51 ` [PATCH V5 08/12] vhost-scsi: convert to vhost_vq_work_queue Mike Christie
2021-12-07  2:51 ` [PATCH V5 09/12] vhost: remove vhost_work_queue Mike Christie
2021-12-07  2:51 ` [PATCH V5 10/12] vhost-scsi: flush IO vqs then send TMF rsp Mike Christie
2021-12-07  2:51 ` [PATCH V5 11/12] vhost: allow userspace to create workers Mike Christie
2021-12-07  2:51 ` [PATCH V5 12/12] vhost: allow worker attachment after initial setup Mike Christie
2021-12-08  4:24 ` [PATCH V5 00/12] vhost: multiple worker support Jason Wang
