From: "Michael S. Tsirkin" <mst@redhat.com>
To: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: qemu-devel@nongnu.org, Greg Kurz <groug@kaod.org>
Subject: Re: virtio 4M limit
Date: Mon, 4 Oct 2021 15:59:18 -0400
Message-ID: <20211004155652-mutt-send-email-mst@kernel.org>
In-Reply-To: <25522755.cqRP6gS9TD@silver>

On Mon, Oct 04, 2021 at 12:44:21PM +0200, Christian Schoenebeck wrote:
> On Sonntag, 3. Oktober 2021 22:27:03 CEST Michael S. Tsirkin wrote:
> > On Sun, Oct 03, 2021 at 08:14:55PM +0200, Christian Schoenebeck wrote:
> > > On Freitag, 1. Oktober 2021 13:21:23 CEST Christian Schoenebeck wrote:
> > > > Hi Michael,
> > > > 
> > > > while testing the following kernel patches I realized there is
> > > > currently a size limitation of 4 MB with virtio on the QEMU side:
> > > > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/
> > > > 
> > > > So with those kernel patches applied I can mount 9pfs on a Linux
> > > > guest with the 9p 'msize' (maximum message size) option set to a
> > > > value of up to 4186112 successfully. If I try to go higher with
> > > > 'msize' then the system hangs with the following QEMU error:
> > > > 
> > > >   qemu-system-x86_64: virtio: too many write descriptors in indirect
> > > >   table
> > > > 
> > > > Which apparently is due to the number of scatter-gather lists on the
> > > > QEMU virtio side currently being hard coded to 1024 (i.e. multiplied
> > > > by 4k page size => 4 MB):
> > > > 
> > > >   ./include/hw/virtio/virtio.h:
> > > >   #define VIRTQUEUE_MAX_SIZE 1024
> > > > 
> > > > Is that hard coded limit carved into stone for some reason or would
> > > > it be OK if I changed that into a runtime variable?
> > > 
> > > After reviewing the code and protocol specs, it seems that this
> > > value is simply too small. I will therefore send a patch suggesting
> > > to raise this value to 32768, as this is the maximum possible value
> > > according to the virtio specs:
> > > 
> > > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006
> > I think it's too aggressive to change it for all devices.
> > Pls find a way to only have it affect 9pfs.
> 
> So basically I should rather introduce a variable that would be used in
> most places instead of the macro VIRTQUEUE_MAX_SIZE?

I guess so.
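A minimal sketch of what that could look like (all names here are invented for illustration, not the actual QEMU API; the real change would presumably wire the limit through a device property):

```c
/* Sketch only: invented field/function names, not the actual QEMU API. */
#define VIRTQUEUE_LEGACY_MAX_SIZE  1024   /* current hard-coded default */
#define VIRTQUEUE_SPEC_MAX_SIZE   32768   /* ceiling per virtio 1.1 */

typedef struct VirtIODevice {
    unsigned int queue_max_size;          /* replaces the macro at call sites */
} VirtIODevice;

static void virtio_device_init(VirtIODevice *vdev)
{
    /* All devices keep the old limit by default ... */
    vdev->queue_max_size = VIRTQUEUE_LEGACY_MAX_SIZE;
}

static void virtio_9p_device_init(VirtIODevice *vdev)
{
    virtio_device_init(vdev);
    /* ... and only virtio-9p opts into the larger queue. */
    vdev->queue_max_size = VIRTQUEUE_SPEC_MAX_SIZE;
}
```

That way every other device type is unaffected, which seems to be the point of the objection above.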

> > > > If that would be OK, maybe something similar to what I did with
> > > > those kernel patches, i.e. retaining 1024 as an initial default
> > > > value and, if indicated from the guest side that more is needed,
> > > > increasing the SG list amount subsequently according to whatever is
> > > > needed by the guest?
> > > 
> > > Further changes are probably not necessary.
> > > 
> > > The only thing I have spotted that probably should be changed is
> > > that at a few locations a local array is allocated on the stack with
> > > VIRTQUEUE_MAX_SIZE as the array size, e.g.:
> > > 
> > > static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
> > > {
> > > 
> > >     ...
> > >     hwaddr addr[VIRTQUEUE_MAX_SIZE];
> > >     struct iovec iov[VIRTQUEUE_MAX_SIZE];
> > >     ...
> > > 
> > > }
> 
> What about these allocations on the stack? Is it OK to disregard this
> as a theoretical issue for now and just retain them on the stack, just
> with the runtime variable instead of the macro as the array size?

I think it's not a big deal ... why do you think it is? Are we running
out of stack?
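For what it's worth, the stack question stops being theoretical once the limit can reach 32768: on a 64-bit host the two arrays quoted above would then occupy roughly 32768 × (8 + 16) = 768 KiB of stack. One way out, sketched here with invented names (simplified, not the real QEMU code), is to move them to the heap:

```c
#include <stdlib.h>
#include <sys/uio.h>              /* struct iovec */

typedef unsigned long hwaddr;     /* stand-in for QEMU's hwaddr */

/* Sketch: heap-allocate the descriptor arrays instead of placing
 * queue_max_size-sized arrays on the stack. Returns 0 on success,
 * -1 if an allocation fails. */
static int pop_sketch(unsigned int queue_max_size)
{
    hwaddr *addr = calloc(queue_max_size, sizeof(*addr));
    struct iovec *iov = calloc(queue_max_size, sizeof(*iov));
    int ret = (addr && iov) ? 0 : -1;

    if (ret == 0) {
        /* ... walk the descriptor chain as virtqueue_split_pop()
         * does, filling addr[] and iov[] ... */
    }
    free(addr);
    free(iov);
    return ret;
}
```

In QEMU itself this would presumably use g_new0()/g_free() rather than raw calloc()/free().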

> > > 
> > > > And as I am not too familiar with the virtio protocol, is that
> > > > current limit already visible to the guest side? Because obviously
> > > > it would make sense if I changed my kernel patches so that they
> > > > automatically limit to whatever QEMU supports instead of causing a
> > > > hang.
> > > 
> > > Apparently the value of VIRTQUEUE_MAX_SIZE (the maximum number of
> > > scatter-gather lists, or the maximum queue size ever possible) is not
> > > visible to the guest.
> > > 
> > > I thought about a hack to make the guest Linux kernel aware of
> > > whether the host side has the old limit of 1024 or rather the correct
> > > value 32768, but it's probably not worth it.
> > > 
> > > Best regards,
> > > Christian Schoenebeck
> 

Thread overview: 8+ messages
2021-10-01 11:21 virtio 4M limit Christian Schoenebeck
2021-10-03 18:14 ` Christian Schoenebeck
2021-10-03 18:15   ` [PATCH] virtio: increase VIRTQUEUE_MAX_SIZE to 32k Christian Schoenebeck
2021-10-03 20:31     ` Michael S. Tsirkin
2021-10-03 20:27   ` virtio 4M limit Michael S. Tsirkin
2021-10-04 10:44     ` Christian Schoenebeck
2021-10-04 19:59       ` Michael S. Tsirkin [this message]
2021-10-04 20:10         ` Christian Schoenebeck
