From: Cornelia Huck <cohuck@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
Eduardo Habkost <ehabkost@redhat.com>,
qemu-block@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v2 2/4] virtio-scsi: default num_queues to -smp N
Date: Mon, 3 Feb 2020 11:51:50 +0100 [thread overview]
Message-ID: <20200203115150.46bd27a3.cohuck@redhat.com> (raw)
In-Reply-To: <20200203102529.3op54zggtquoguuo@dritchie>
On Mon, 3 Feb 2020 11:25:29 +0100
Sergio Lopez <slp@redhat.com> wrote:
> On Thu, Jan 30, 2020 at 10:52:35AM +0000, Stefan Hajnoczi wrote:
> > On Thu, Jan 30, 2020 at 01:29:16AM +0100, Paolo Bonzini wrote:
> > > On 29/01/20 16:44, Stefan Hajnoczi wrote:
> > > > On Mon, Jan 27, 2020 at 02:10:31PM +0100, Cornelia Huck wrote:
> > > >> On Fri, 24 Jan 2020 10:01:57 +0000
> > > >> Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > > >>> @@ -47,10 +48,15 @@ static void vhost_scsi_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> > > >>> {
> > > >>> VHostSCSIPCI *dev = VHOST_SCSI_PCI(vpci_dev);
> > > >>> DeviceState *vdev = DEVICE(&dev->vdev);
> > > >>> - VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
> > > >>> + VirtIOSCSIConf *conf = &dev->vdev.parent_obj.parent_obj.conf;
> > > >>> +
> > > >>> + /* 1:1 vq to vcpu mapping is ideal because it avoids IPIs */
> > > >>> + if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
> > > >>> + conf->num_queues = current_machine->smp.cpus;
> > > >> This now maps the request vqs 1:1 to the vcpus. What about the fixed
> > > >> vqs? If they don't really matter, amend the comment to explain that?
> > > > The fixed vqs don't matter. They are typically not involved in the data
> > > > path, only the control path where performance doesn't matter.
> > >
> > > Should we put a limit on the number of vCPUs? For anything above ~128
> > > the guest is probably not going to be disk or network bound.
> >
> > Michael Tsirkin pointed out there's a hard limit of VIRTIO_QUEUE_MAX
> > (1024). We need to at least stay under that limit.
> >
> > Should the guest have >128 virtqueues? Each virtqueue requires guest
> > RAM and 2 host eventfds. Eventually these resource requirements will
> > become a scalability problem, but how do we choose a hard limit and what
> > happens to guest performance above that limit?
>
> From the UX perspective, I think it's safer to use a rather low upper
> limit for the automatic configuration.
>
> Users of large VMs (>=32 vCPUs) aiming for optimal performance already
> need to tune other aspects manually (or rely on software to do it for
> them), like vNUMA, IOThreads and CPU pinning, so I don't think we
> should focus on this group.
>
> On the other hand, the increase in host resource requirements may have
> unforeseen consequences in some environments, especially for virtio-blk
> users with multiple disks.
Yes... what happens on systems that have both a lot of vcpus and a lot
of disks? We don't know how many other disks there are in the
configuration, and they might be hotplugged later, anyway.
>
> All in all, I don't have data that would justify setting the limit to
> one value or the other. The only argument I can put on the table is
> that, so far, we have only had one VQ per device, so perhaps a
> conservative value (4? 8?) would make sense from a safety and
> compatibility point of view.
The more I think about it, the more I agree. Aiming a bit lower will
hopefully still give a good performance improvement, with less
opportunity for unforeseen breakage due to resource exhaustion.