From: Jason Wang <jasowang@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Mike Christie <michael.christie@oracle.com>,
	target-devel@vger.kernel.org, linux-scsi@vger.kernel.org,
	pbonzini <pbonzini@redhat.com>, mst <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	virtualization <virtualization@lists.linux-foundation.org>
Subject: Re: [PATCH V3 11/11] vhost: allow userspace to create workers
Date: Wed, 27 Oct 2021 10:55:04 +0800
Message-ID: <CACGkMEsD=JwjWgTM4XpcKVy+ZKs6siW_1Q=3zzB8jZ3vq1CyZA@mail.gmail.com>
In-Reply-To: <YXgiYFIUTKtoRJWW@stefanha-x1.localdomain>

On Tue, Oct 26, 2021 at 11:45 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Tue, Oct 26, 2021 at 01:37:14PM +0800, Jason Wang wrote:
> >
> > On 2021/10/22 1:19 PM, Mike Christie wrote:
> > > This patch allows userspace to create workers and bind them to vqs. You
> > > can have N workers per dev and also share N workers with M vqs.
> > >
> > > Signed-off-by: Mike Christie <michael.christie@oracle.com>
> >
> >
> > A question, who is the best one to determine the binding? Is it the VMM
> > (Qemu etc) or the management stack? If the latter, it looks to me it's
> > better to expose this via sysfs?
>
> A few options that let the management stack control vhost worker CPU
> affinity:
>
> 1. The management tool opens the vhost device node, calls
>    ioctl(VHOST_SET_VRING_WORKER), sets up CPU affinity, and then passes
>    the fd to the VMM. In this case the VMM is still able to call the
>    ioctl, which may be undesirable from an attack surface perspective.

Yes, and with this approach we can't do any post-launch or dynamic
reconfiguration after the VM has started?
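
To make option 1 concrete, a rough sketch of the management tool side
could look like this. Note it is only an illustration: the struct
layout, the ioctl number and the "pid == -1 means create a new worker"
convention below are placeholders I made up, not the uapi proposed in
this series, and the usual vhost setup (VHOST_SET_OWNER etc.) is
skipped.

/* Sketch: management tool pre-configures the vhost fd (option 1). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct vhost_vring_worker {		/* placeholder layout */
	unsigned int index;		/* vq index to bind */
	int pid;			/* worker to bind; -1 = create a new one */
};
#define VHOST_SET_VRING_WORKER _IOWR(0xAF /* VHOST_VIRTIO */, 0x15, struct vhost_vring_worker)

int main(void)
{
	int vhost_fd = open("/dev/vhost-scsi", O_RDWR);
	struct vhost_vring_worker w = { .index = 0, .pid = -1 };
	cpu_set_t mask;

	if (vhost_fd < 0)
		return 1;

	/* Create a new worker and bind vq 0 to it; assume the kernel
	 * reports the worker's pid back in w.pid. */
	if (ioctl(vhost_fd, VHOST_SET_VRING_WORKER, &w) < 0)
		return 1;

	/* Pin that worker thread to CPU 2. */
	CPU_ZERO(&mask);
	CPU_SET(2, &mask);
	sched_setaffinity(w.pid, sizeof(mask), &mask);

	/*
	 * Then pass vhost_fd to the VMM, e.g. over a UNIX socket with
	 * SCM_RIGHTS, and let it do the rest of the vhost setup. Nothing
	 * prevents the VMM from calling the same ioctl later, which is
	 * the attack surface issue above.
	 */
	return 0;
}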

>
> 2. The VMM calls ioctl(VHOST_SET_VRING_WORKER) itself and the management
>    tool queries the vq:worker details from the VMM (e.g. a new QEMU QMP
>    query-vhost-workers command similar to query-iothreads). The
>    management tool can then control CPU affinity on the vhost worker
>    threads.
>
>    (This is how CPU affinity works in QEMU and libvirt today.)

Then we also need a "bind-vhost-workers" command.
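
For example, something along these lines (hypothetical QMP; the command
names are the ones being discussed in this thread, the arguments and
fields are made up just to show the shape of the interface):

-> { "execute": "query-vhost-workers" }
<- { "return": [ { "device": "vhost-scsi0", "vq": 0, "worker": 1234 },
                 { "device": "vhost-scsi0", "vq": 1, "worker": 1234 } ] }

-> { "execute": "bind-vhost-workers",
     "arguments": { "device": "vhost-scsi0", "vq": 1, "worker": "new" } }
<- { "return": {} }

That way the management tool can both read the vq:worker mapping and
change it without ever touching the vhost fd itself.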

>
> 3. The sysfs approach you suggested. Does sysfs export vq-0/, vq-1/, etc
>    directories with a "worker" attribute?

Something like this.
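
Roughly like the following, just to illustrate the layout. The
/sys/class/vhost path and attribute names are made up here; since vhost
has no sysfs presence today, the actual location would need to be
worked out.

/sys/class/vhost/vhost-scsi-0/vq-0/worker
/sys/class/vhost/vhost-scsi-0/vq-1/worker
...

Reading the "worker" attribute would report which worker currently
serves that vq, and writing it would rebind the vq. The management
stack could then restrict permissions on those attributes so that only
it, and not the VMM, can change the binding.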

> Do we need to define a point
>    when the VMM has set up vqs and the management stack is able to query
>    them?

It could be the point at which the vhost fd is opened.

>  Vhost devices currently pre-allocate the maximum number of vqs
>    and I'm not sure how to determine the number of vqs that will
>    actually be used?

That requires more information to be exposed. But before that, we
should allow dynamic binding between a vq and a worker.
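
With that in place, rebinding at run time would just be another ioctl
on the live device, e.g. a small helper like this (reusing the
placeholder uapi from the sketch above; "worker" is whatever handle the
final uapi uses to name a worker):

static int rebind_vq(int vhost_fd, unsigned int vq, int worker)
{
	struct vhost_vring_worker w = { .index = vq, .pid = worker };

	/* Move the vq onto an already running worker while the VM is up. */
	return ioctl(vhost_fd, VHOST_SET_VRING_WORKER, &w);
}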

>
>    One advantage of this is that access to the vq:worker mapping can be
>    limited to the management stack and the VMM cannot access it. But it
>    seems a little tricky because the vhost model today doesn't use sysfs
>    or define a lifecycle where the management stack can configure
>    devices.

Yes.

Thanks

>
> Stefan

