From: Jason Wang <jasowang@redhat.com>
To: Mike Christie <michael.christie@oracle.com>
Cc: virtualization@lists.linux-foundation.org, stefanha@redhat.com,
	mst@redhat.com
Subject: Re: [PATCH v6 11/11] vhost: allow userspace to create workers
Date: Wed, 12 Apr 2023 15:56:20 +0800
Message-ID: <CACGkMEsYK5yWuFECrUM-_fLA1V4H=WayxfovT2ezMeQCmwcDoQ@mail.gmail.com>
In-Reply-To: <c7da1c60-1b89-5fab-942a-1e74214f03d5@oracle.com>

On Wed, Apr 12, 2023 at 6:15 AM Mike Christie
<michael.christie@oracle.com> wrote:
>
> On 4/10/23 10:00 PM, Jason Wang wrote:
> >>> vhost_zerocopy_callback(). But since you want to limit the call before
> >>> set_backend, another question comes, do we really need the dynamic
> >>> attaching/creating in this case? How about a simple one ioctl that is
> >>> used to build the queue to workers mapping instead?
> >>
> >>
> >> I didn't think we need the dynamic case. It was from a review comment.
> >
> > Right, so we actually don't need three new ioctls; a single one is
> > sufficient?
> >
> > struct vhost_worker_state {
> >       __u16 workers;
> >       __u16 queue_to_work_map[];
> > };
> >
> > And limiting this to be called before the datapath can run is sufficient?
> > (sorry I missed some of the previous comments).
>
> It's been like 3 years since this was last discussed, so no problem :)
>

I'm really sorry for that, -ENOMEM :(
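
To make the mapping idea quoted above concrete, here is a minimal sketch
of how that single ioctl could be used. The struct is the one proposed
above; the ioctl name, number, and the usage are only assumptions for
illustration:

/*
 * Single "build the queue-to-workers mapping" ioctl, as proposed
 * above. The layout is from the proposal; everything else is a guess.
 */
struct vhost_worker_state {
        __u16 workers;                  /* number of workers to create */
        __u16 queue_to_work_map[];      /* per-virtqueue worker index */
};

/*
 * Hypothetical userspace usage, done once before the datapath runs:
 *
 *      struct vhost_worker_state *s;
 *
 *      s = malloc(sizeof(*s) + nvqs * sizeof(__u16));
 *      s->workers = 2;
 *      s->queue_to_work_map[0] = 0;    // vq0 -> worker 0
 *      s->queue_to_work_map[1] = 1;    // vq1 -> worker 1
 *      ioctl(vhost_fd, VHOST_SET_WORKERS, s);  // hypothetical ioctl
 */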

> Yeah, what you describe is all I need. Originally I just had the one
> ioctl:
>
> +#define VHOST_SET_VRING_WORKER _IOWR(VHOST_VIRTIO, 0x15, struct vhost_vring_worker)
>
> The VHOST_SET_VRING_WORKER ioctl created a worker for the virtqueue
> specified in the vhost_vring_worker struct.
>
>
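
As a rough userspace sketch of that original single-ioctl flow (the
define is quoted from the patch; the struct layout and the worker_id
convention are assumptions for illustration):

#include <sys/ioctl.h>
#include <linux/vhost.h>                /* VHOST_VIRTIO */

struct vhost_vring_worker {
        unsigned int index;             /* virtqueue index (assumed layout) */
        int worker_id;                  /* in: requested worker; out: worker used */
};

#define VHOST_SET_VRING_WORKER _IOWR(VHOST_VIRTIO, 0x15, struct vhost_vring_worker)

static int set_vring_worker(int vhost_fd, unsigned int vq_index)
{
        struct vhost_vring_worker w = {
                .index = vq_index,
                .worker_id = -1,        /* hypothetical: ask for a new worker */
        };

        /* _IOWR: the kernel can pass back the id of the worker it used. */
        if (ioctl(vhost_fd, VHOST_SET_VRING_WORKER, &w) < 0)
                return -1;
        return w.worker_id;
}
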
> >>>>
> >>>>>> -       default:
> >>>>>> +       case VHOST_ATTACH_VRING_WORKER:
> >>>>>> +               if (copy_from_user(&w, argp, sizeof(w))) {
> >>>>>> +                       r = -EFAULT;
> >>>>>> +                       break;
> >>>>>> +               }
> >>>>>> +               r = vhost_vq_attach_worker(vq, &w);
> >>>>>> +               if (!r && copy_to_user(argp, &w, sizeof(w)))
> >>>>>> +                       r = -EFAULT;
> >>>>>> +               break;
> >>>>>
> >>>>> It's a little odd that we have new and attach but only a free.
> >>>>
> >>>> Do you mean we would also want a detach? I've been debating that.
> >>>> I'm not sure what the user wanted though.
> >>>>
> >>>> Would it leave the virtqueue with no worker? Or, does it pick the first
> >>>> worker it finds? If there is no worker because we did the former or
> >>>> because userspace detached all of them, then do we just drop work that
> >>>> gets queued?
> >>>>
> >>>> It seemed like a user would never want to drop work, so I made it so
> >>>> you can only tell it to use new workers (attach, which I guess is more
> >>>> like a worker swap).
> >>>
> >>> So it indeed swaps in the new worker and frees the old one?
> >>
> >> I didn't understand the question.
> >
> > I mean, if I understand the code correctly, the code tries to drop the
> > refcount and free the old worker during attach.
>
> I see. We actually don't free until VHOST_FREE_WORKER.
>
> When we create the worker from VHOST_NEW_WORKER we set the refcount
> to 1. Then each time a virtqueue and worker are attached to each other
> we increase the refcount.
>
> When you do vhost_vq_detach_worker, it drops the refcount taken at
> attach time. Then, if you have detached the worker from all the
> virtqueues, you still have the refcount=1 from the VHOST_NEW_WORKER call.
>
> The refcount decrement in VHOST_FREE_WORKER via vhost_worker_put would
> be the final put. Note that if userspace just closes the dev without
> doing a put, then we do the final puts for it.

Right, I misread the code; you did:

        /*
         * We can free the worker if it's not attached to any virtqueues.
         */
        if (refcount_read(&worker->refcount) != 1)
                return -EBUSY;

And I thought it was a dec-and-test.
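
For reference, here is the lifecycle described above as a simplified
kernel-style sketch; the struct members and function names are
illustrative, not the patch's exact code:

#include <linux/refcount.h>
#include <linux/slab.h>

struct vhost_worker {
        refcount_t refcount;
        /* ... */
};

struct vhost_virtqueue {
        struct vhost_worker *worker;
        /* ... */
};

/* VHOST_NEW_WORKER: the worker starts out with refcount == 1. */
static struct vhost_worker *worker_create(void)
{
        struct vhost_worker *worker = kzalloc(sizeof(*worker), GFP_KERNEL);

        if (worker)
                refcount_set(&worker->refcount, 1);
        return worker;
}

/* VHOST_ATTACH_VRING_WORKER: each attached vq holds a reference. */
static void vq_attach_worker(struct vhost_virtqueue *vq,
                             struct vhost_worker *worker)
{
        refcount_inc(&worker->refcount);
        vq->worker = worker;
}

/* Detach drops the reference taken at attach time. */
static void vq_detach_worker(struct vhost_virtqueue *vq)
{
        refcount_dec(&vq->worker->refcount);
        vq->worker = NULL;
}

/*
 * VHOST_FREE_WORKER: only succeeds once no vq holds a reference,
 * i.e. only the creation reference remains.
 */
static int worker_free(struct vhost_worker *worker)
{
        if (refcount_read(&worker->refcount) != 1)
                return -EBUSY;
        kfree(worker);          /* drops the creation reference and frees */
        return 0;
}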

>
> Note that I originally didn't have the free. I don't need it for any
> specific reason; it was just from a review comment. I originally just had
> the one create-worker type of ioctl. Then it was suggested to do the split
> attach/new/free design.

I see.
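
For comparison, a rough userspace sketch of the split design, reusing
the vhost_vring_worker layout from the sketch above. The ioctl names
come from this thread; the request numbers and the vhost_worker_state
layout are placeholders, not necessarily what the patch defines:

/* Proposed uapi from this series; the numbers here are guesses. */
struct vhost_worker_state {
        unsigned int worker_id;         /* assumed layout */
};

#define VHOST_NEW_WORKER          _IOR(VHOST_VIRTIO, 0x8d, struct vhost_worker_state)
#define VHOST_FREE_WORKER         _IOW(VHOST_VIRTIO, 0x8e, struct vhost_worker_state)
#define VHOST_ATTACH_VRING_WORKER _IOW(VHOST_VIRTIO, 0x15, struct vhost_vring_worker)

static int setup_private_worker(int vhost_fd, unsigned int vq_index)
{
        struct vhost_worker_state state = { 0 };
        struct vhost_vring_worker w;

        /* 1. Create a worker; the kernel fills in its id (refcount == 1). */
        if (ioctl(vhost_fd, VHOST_NEW_WORKER, &state) < 0)
                return -1;

        /* 2. Attach it to a virtqueue (takes a per-vq reference). */
        w.index = vq_index;
        w.worker_id = state.worker_id;
        if (ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &w) < 0) {
                /* Drop the creation reference if attach fails. */
                ioctl(vhost_fd, VHOST_FREE_WORKER, &state);
                return -1;
        }

        /* 3. VHOST_FREE_WORKER comes later, after detaching everywhere. */
        return state.worker_id;
}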

>
> I can spin another patchset with the single-ioctl design so we can compare.

So I'm fine with this approach. One last question: I see this comment in the patch:

/* By default, a device gets one vhost_worker that its virtqueues share. This */

I'm wondering if it would be better to have a vhost_get_worker() that
returns the worker_id of a specific queue. In the future, this might be
useful for devices that have multiple workers by default?
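
Something along these lines, mirroring the attach case quoted earlier
(the ioctl name and the worker id field are hypothetical, not proposed
uapi):

        case VHOST_GET_VRING_WORKER:
                if (copy_from_user(&w, argp, sizeof(w))) {
                        r = -EFAULT;
                        break;
                }
                /* Report the worker currently serving this vq. */
                w.worker_id = vq->worker ? vq->worker->id : -1;
                if (copy_to_user(argp, &w, sizeof(w)))
                        r = -EFAULT;
                break;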

Thanks

>

