From: Mike Christie <michael.christie@oracle.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: target-devel@vger.kernel.org, linux-scsi@vger.kernel.org,
	pbonzini@redhat.com, jasowang@redhat.com, mst@redhat.com,
	sgarzare@redhat.com, virtualization@lists.linux-foundation.org
Subject: Re: vhost: multiple worker support
Date: Thu, 3 Jun 2021 17:16:42 -0500	[thread overview]
Message-ID: <523e6207-c380-9a9d-7a5d-7b7ee554d7f2@oracle.com> (raw)
In-Reply-To: <YLjpMXbfJsaLrgF5@stefanha-x1.localdomain>

On 6/3/21 9:37 AM, Stefan Hajnoczi wrote:
> On Tue, May 25, 2021 at 01:05:51PM -0500, Mike Christie wrote:
>> The following patches apply over linus's tree or mst's vhost branch
>> and my cleanup patchset:
>>
>> https://lists.linuxfoundation.org/pipermail/virtualization/2021-May/054354.html
>>
>> These patches allow us to support multiple vhost workers per device. I
>> ended up just doing Stefan's original idea where userspace has the
>> kernel create a worker and we pass back the pid. This has the benefit
>> over the workqueue and userspace thread approaches that we only have
>> one'ish code path in the kernel during setup to detect old tools. The
>> main IO paths and device/vq setup/teardown paths all use common code.
>>
>> The kernel patches here allow us to then do N workers per device and
>> also share workers across devices.
>>
>> I've also included a patch for qemu so you can get an idea of how it
>> works. If we are ok with the kernel code then I'll break that up into
>> a patchset and send to qemu-devel.
> 
> It seems risky to allow userspace process A to "share" a vhost worker
> thread with userspace process B based on a matching pid alone. Should
> they have ptrace_may_access() or similar?
> 

I'm not sure. I already made it a little restrictive in this posting, but
it may not be enough depending on what's possible and what we want to allow.

Right now, to share a worker, the userspace process doing the
VHOST_SET_VRING_WORKER ioctl has to be the owner. Before we handle a
VHOST_SET_VRING_WORKER, vhost_dev_ioctl() calls vhost_dev_check_owner(),
so we will fail if two unrelated processes try to share.
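
For reference, the check is just the existing mm comparison from
drivers/vhost/vhost.c in mainline:

	/* Caller should have the device mutex. Only the process that did
	 * VHOST_SET_OWNER, i.e. whose mm is stashed in dev->mm, passes. */
	long vhost_dev_check_owner(struct vhost_dev *dev)
	{
		return dev->mm == current->mm ? 0 : -EPERM;
	}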

So we can share a worker across a vhost dev's N virtqueues, or share a
worker across multiple vhost devs and their virtqueues, but the devs have
to be managed by the same VM/qemu process. If that's all we want to
support, is the owner check enough?
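
To make that concrete, the qemu flow is roughly the following. The struct
and field names here are illustrative only; the exact uAPI is in the
kernel/qemu patches:

	/* Illustrative sketch: ask the kernel to create a worker for vq 0
	 * (it passes back the worker's pid), then bind vq 1 to the same
	 * worker by handing that pid back in. */
	struct vhost_vring_worker {
		unsigned int index;	/* virtqueue index */
		int pid;		/* worker pid; -1 = create a new worker */
	};

	struct vhost_vring_worker w = { .index = 0, .pid = -1 };

	ioctl(dev_fd, VHOST_SET_VRING_WORKER, &w);  /* kernel fills in w.pid */

	w.index = 1;
	ioctl(dev_fd, VHOST_SET_VRING_WORKER, &w);  /* vq 1 reuses vq 0's worker */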

If we want to share workers across VMs, then I think we definitely want
something like ptrace_may_access().
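
Something along these lines (a hypothetical sketch, not in this patchset;
vhost_may_share_worker() is a made-up helper and the mode choice would
need discussion):

	#include <linux/ptrace.h>

	/* Before letting the caller attach to a worker owned by another
	 * task, require the same access that a ptrace attach would,
	 * similar to what process_vm_readv() does. */
	static bool vhost_may_share_worker(struct task_struct *owner)
	{
		return ptrace_may_access(owner, PTRACE_MODE_ATTACH_REALCREDS);
	}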


Thread overview: 64+ messages

2021-05-25 18:05 vhost: multiple worker support Mike Christie
2021-05-25 18:05 ` [PATCH 1/9] vhost: move worker thread fields to new struct Mike Christie
2021-06-03 10:16   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 2/9] vhost: move vhost worker creation to kick setup Mike Christie
2021-06-03 10:28   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 3/9] vhost: modify internal functions to take a vhost_worker Mike Christie
2021-06-03 10:45   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 4/9] vhost: allow vhost_polls to use different vhost_workers Mike Christie
2021-06-03 13:51   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 5/9] vhost-scsi: flush IO vqs then send TMF rsp Mike Christie
2021-06-03 13:54   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 6/9] vhost-scsi: make SCSI cmd completion per vq Mike Christie
2021-06-03 13:57   ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 7/9] vhost: allow userspace to create workers Mike Christie
2021-06-03 14:30   ` Stefan Hajnoczi
2021-06-05 23:53     ` michael.christie
2021-06-07 15:19       ` Stefan Hajnoczi
2021-06-09 21:03         ` Mike Christie
2021-06-10  8:06           ` Stefan Hajnoczi
2021-06-18  2:49             ` Mike Christie
2021-06-21 13:41               ` Stefan Hajnoczi
2021-05-25 18:05 ` [PATCH 8/9] vhost: add vhost_dev pointer to vhost_work Mike Christie
2021-06-03 14:31   ` Stefan Hajnoczi
2021-05-25 18:06 ` [PATCH 9/9] vhost: support sharing workers across devs Mike Christie
2021-06-03 14:32   ` Stefan Hajnoczi
2021-06-07  2:18     ` Jason Wang
2021-06-03 10:13 ` vhost: multiple worker support Stefan Hajnoczi
2021-06-03 18:45   ` Mike Christie
2021-06-03 14:37 ` Stefan Hajnoczi
2021-06-03 22:16   ` Mike Christie [this message]
2021-06-05 22:40     ` michael.christie
2021-06-07 15:23       ` Stefan Hajnoczi

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=523e6207-c380-9a9d-7a5d-7b7ee554d7f2@oracle.com \
    --to=michael.christie@oracle.com \
    --cc=jasowang@redhat.com \
    --cc=linux-scsi@vger.kernel.org \
    --cc=mst@redhat.com \
    --cc=pbonzini@redhat.com \
    --cc=sgarzare@redhat.com \
    --cc=stefanha@redhat.com \
    --cc=target-devel@vger.kernel.org \
    --cc=virtualization@lists.linux-foundation.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.