From: Stefan Hajnoczi <stefanha@redhat.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: qemu-devel@nongnu.org, vgoyal@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
Date: Thu, 22 Aug 2019 09:52:37 +0100	[thread overview]
Message-ID: <20190822085237.GA20491@stefanha-x1.localdomain> (raw)
In-Reply-To: <20190821191118.GN3309@work-vm>

On Wed, Aug 21, 2019 at 08:11:18PM +0100, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst@redhat.com) wrote:
> > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > > +{
> > > +    /* Do nothing */
> > 
> > Why is this safe?  Is this because this never triggers?  assert(0) then?
> > If it triggers then backend won't be notified, which might
> > cause it to get stuck.
> 
> We never process these queues in qemu - always in the guest; so am I
> correct in thinking those shouldn't be used?

s/guest/vhost-user backend process/

vuf_handle_output() should never be called.
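
If we want that expectation enforced rather than silently ignored, the empty
handler could report the condition instead.  A minimal sketch using QEMU's
virtio_error() helper (a softer alternative to the assert(0) Michael
suggested; not what the patch currently does):

static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    /*
     * All virtqueue processing happens in the vhost-user backend, so a
     * kick handled here indicates a bug.  Mark the device broken so the
     * problem is visible without aborting QEMU.
     */
    virtio_error(vdev, "unexpected virtqueue processing in QEMU");
}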

> > > +}
> > > +
> > > +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > +                                            bool mask)
> > > +{
> > > +    VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > +
> > > +    vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> > > +}
> > > +
> > > +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> > > +{
> > > +    VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > +
> > > +    return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> > > +}
> > > +
> > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > +{
> > > +    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > +    VHostUserFS *fs = VHOST_USER_FS(dev);
> > > +    unsigned int i;
> > > +    size_t len;
> > > +    int ret;
> > > +
> > > +    if (!fs->conf.chardev.chr) {
> > > +        error_setg(errp, "missing chardev");
> > > +        return;
> > > +    }
> > > +
> > > +    if (!fs->conf.tag) {
> > > +        error_setg(errp, "missing tag property");
> > > +        return;
> > > +    }
> > > +    len = strlen(fs->conf.tag);
> > > +    if (len == 0) {
> > > +        error_setg(errp, "tag property cannot be empty");
> > > +        return;
> > > +    }
> > > +    if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > +        error_setg(errp, "tag property must be %zu bytes or less",
> > > +                   sizeof_field(struct virtio_fs_config, tag));
> > > +        return;
> > > +    }
> > > +
> > > +    if (fs->conf.num_queues == 0) {
> > > +        error_setg(errp, "num-queues property must be larger than 0");
> > > +        return;
> > > +    }
> > 
> > The strange thing is that actual # of queues is this number + 2.
> > And this affects an optimal number of vectors (see patch 2).
> > Not sure what a good solution is - include the
> > mandatory queues in the number?
> > Needs to be documented in some way.
> 
> Should we be doing nvectors the same way virtio-scsi-pci does it;
> with a magic 'unspecified' default where it sets the nvectors based on
> the number of queues?
> 
> I think my preference is not to show the users the mandatory queues.

I agree.  Users want to control multiqueue, not the absolute number of
virtqueues including the mandatory ones.
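
For reference, a sketch of the virtio-scsi-pci approach applied to
vhost-user-fs-pci (type/field names assumed from patch 2; the "+ 2 + 1"
accounting for the two mandatory queues plus the config change vector is my
guess, not something the patch does):

    if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
        /* one vector per request queue, two mandatory queues, one config */
        vpci_dev->nvectors = dev->vdev.conf.num_queues + 2 + 1;
    }

placed in the -pci device's realize before realizing the virtio device, so
users who don't set vectors= still get a sensible MSI-X count.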

> > > +
> > > +    if (!is_power_of_2(fs->conf.queue_size)) {
> > > +        error_setg(errp, "queue-size property must be a power of 2");
> > > +        return;
> > > +    }
> > 
> > Hmm packed ring allows non power of 2 ...
> > We need to look into a generic helper to support VQ
> > size checks.
> 
> Which would also have to take into account the negotiation of whether it's
> doing packed ring?

It's impossible to perform this check at .realize() time since the
packed virtqueue layout is negotiated via a VIRTIO feature bit.  This
puts us in the awkward position of either failing when the guest has
already booted or rounding up the queue size for split ring layouts
(with a warning message?).
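
As a concrete sketch of the rounding-up option (vuf_effective_queue_size()
is a hypothetical helper, not part of the patch; it relies on is_power_of_2()
and pow2ceil() from "qemu/host-utils.h" and warn_report() from
"qemu/error-report.h"):

static uint16_t vuf_effective_queue_size(uint16_t requested, bool packed_ring)
{
    /* Caller is expected to have capped requested at VIRTQUEUE_MAX_SIZE. */
    if (packed_ring || is_power_of_2(requested)) {
        return requested;
    }
    warn_report("queue-size %u rounded up to %u for split virtqueue layout",
                (unsigned)requested, (unsigned)pow2ceil(requested));
    return pow2ceil(requested);
}

The catch is that this can only run once VIRTIO_F_RING_PACKED has been
negotiated, i.e. after the guest has booted, which is exactly the awkward
part.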
