From: Cornelia Huck <cohuck@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	Eduardo Habkost <ehabkost@redhat.com>,
	qemu-block@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@gmail.com>,
	qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v2 2/4] virtio-scsi: default num_queues to -smp N
Date: Thu, 30 Jan 2020 12:03:14 +0100	[thread overview]
Message-ID: <20200130120314.5d4ad113.cohuck@redhat.com> (raw)
In-Reply-To: <20200130105235.GC176651@stefanha-x1.localdomain>

On Thu, 30 Jan 2020 10:52:35 +0000
Stefan Hajnoczi <stefanha@redhat.com> wrote:

> On Thu, Jan 30, 2020 at 01:29:16AM +0100, Paolo Bonzini wrote:
> > On 29/01/20 16:44, Stefan Hajnoczi wrote:  
> > > On Mon, Jan 27, 2020 at 02:10:31PM +0100, Cornelia Huck wrote:  
> > >> On Fri, 24 Jan 2020 10:01:57 +0000
> > >> Stefan Hajnoczi <stefanha@redhat.com> wrote:  
> > >>> @@ -47,10 +48,15 @@ static void vhost_scsi_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> > >>>  {
> > >>>      VHostSCSIPCI *dev = VHOST_SCSI_PCI(vpci_dev);
> > >>>      DeviceState *vdev = DEVICE(&dev->vdev);
> > >>> -    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
> > >>> +    VirtIOSCSIConf *conf = &dev->vdev.parent_obj.parent_obj.conf;
> > >>> +
> > >>> +    /* 1:1 vq to vcpu mapping is ideal because it avoids IPIs */
> > >>> +    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
> > >>> +        conf->num_queues = current_machine->smp.cpus;  
> > >> This now maps the request vqs 1:1 to the vcpus. What about the fixed
> > >> vqs? If they don't really matter, amend the comment to explain that?  
> > > The fixed vqs don't matter.  They are typically not involved in the data
> > > path, only the control path where performance doesn't matter.  
> > 
> > Should we put a limit on the number of vCPUs?  For anything above ~128
> > the guest is probably not going to be disk or network bound.  
> 
> Michael Tsirkin pointed out there's a hard limit of VIRTIO_QUEUE_MAX
> (1024).  We need to at least stay under that limit.
> 
> Should the guest have >128 virtqueues?  Each virtqueue requires guest
> RAM and 2 host eventfds.  Eventually these resource requirements will
> become a scalability problem, but how do we choose a hard limit and what
> happens to guest performance above that limit?

There are probably two kinds of limits involved here:

- a hard limit (we cannot do more), which should be checked even for
  user-specified values, and
- a soft limit (it does not make sense to go beyond this for the
  default case), which can be overridden if explicitly specified.

VIRTIO_QUEUE_MAX (two fewer for virtio-scsi, because of the fixed
virtqueues) sounds like a hard limit; maybe 128 is a reasonable
candidate for a soft limit.

(I would expect systems that give 128 vcpus to the guest to also be
generously sized in other respects.)
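
For illustration, the two limits could be combined in the realize path
roughly like this (a rough sketch, not the posted patch:
VIRTIO_SCSI_SOFT_MAX_QUEUES is a made-up name for a 128 soft cap, and
I'm assuming the fixed-virtqueue constant from patch 1 is called
VIRTIO_SCSI_VQ_NUM_FIXED and counts the control and event queues):

    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
        /* soft limit: only applied when the user did not set a value */
        conf->num_queues = MIN(current_machine->smp.cpus,
                               VIRTIO_SCSI_SOFT_MAX_QUEUES);
    }
    /* hard limit: checked even for user-specified values */
    if (conf->num_queues + VIRTIO_SCSI_VQ_NUM_FIXED > VIRTIO_QUEUE_MAX) {
        error_setg(errp, "num_queues must not exceed %d",
                   VIRTIO_QUEUE_MAX - VIRTIO_SCSI_VQ_NUM_FIXED);
        return;
    }

That way the default stays within the soft cap, while explicit values
the transport cannot address are still rejected.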

Thread overview: 24+ messages
2020-01-24 10:01 [PATCH v2 0/4] virtio-pci: enable blk and scsi multi-queue by default Stefan Hajnoczi
2020-01-24 10:01 ` [PATCH v2 1/4] virtio-scsi: introduce a constant for fixed virtqueues Stefan Hajnoczi
2020-01-27 12:59   ` Cornelia Huck
2020-01-24 10:01 ` [PATCH v2 2/4] virtio-scsi: default num_queues to -smp N Stefan Hajnoczi
2020-01-27 13:10   ` Cornelia Huck
2020-01-29 15:44     ` Stefan Hajnoczi
2020-01-30  0:29       ` Paolo Bonzini
2020-01-30 10:52         ` Stefan Hajnoczi
2020-01-30 11:03           ` Cornelia Huck [this message]
2020-02-03 10:25           ` Sergio Lopez
2020-02-03 10:35             ` Michael S. Tsirkin
2020-02-03 10:51             ` Cornelia Huck
2020-02-03 10:57             ` Daniel P. Berrangé
2020-02-03 11:39               ` Sergio Lopez
2020-02-03 12:53                 ` Michael S. Tsirkin
2020-02-11 16:20                 ` Stefan Hajnoczi
2020-02-11 16:31                   ` Michael S. Tsirkin
2020-02-12 11:18                     ` Stefan Hajnoczi
2020-02-21 10:55                       ` Stefan Hajnoczi
2020-01-24 10:01 ` [PATCH v2 3/4] virtio-blk: " Stefan Hajnoczi
2020-01-27 13:14   ` Cornelia Huck
2020-01-24 10:01 ` [PATCH v2 4/4] vhost-user-blk: " Stefan Hajnoczi
2020-01-27 13:17   ` Cornelia Huck
2020-01-27  9:59 ` [PATCH v2 0/4] virtio-pci: enable blk and scsi multi-queue by default Stefano Garzarella
