* virtio 4M limit
@ 2021-10-01 11:21 Christian Schoenebeck
  2021-10-03 18:14 ` Christian Schoenebeck
  0 siblings, 1 reply; 8+ messages in thread
From: Christian Schoenebeck @ 2021-10-01 11:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: Michael S. Tsirkin, Greg Kurz

Hi Michael,

while testing the following kernel patches I realized there is currently a
4 MB size limitation with virtio on the QEMU side:
https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/

So with those kernel patches applied I can successfully mount 9pfs on a Linux
guest with the 9p 'msize' (maximum message size) option set to a value of up
to 4186112. If I try to go higher with 'msize', the system hangs with the
following QEMU error:

  qemu-system-x86_64: virtio: too many write descriptors in indirect table

This is apparently due to the number of scatter-gather list entries on the
QEMU virtio side currently being hard-coded to 1024 (i.e. multiplied by the
4k page size => 4 MB):

  ./include/hw/virtio/virtio.h:
  #define VIRTQUEUE_MAX_SIZE 1024
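
For reference, the back-of-the-envelope arithmetic behind those numbers (the
interpretation of the 8 KiB gap to the msize value above is just my
assumption, presumably transport/header overhead; I have not verified it):

/* rough sketch of the arithmetic, assuming each of the 1024 descriptors
 * maps one 4 KiB page */
#define EXAMPLE_PAGE_SIZE   4096
#define EXAMPLE_MAX_DESC    1024    /* VIRTQUEUE_MAX_SIZE */
#define EXAMPLE_HARD_LIMIT  (EXAMPLE_MAX_DESC * EXAMPLE_PAGE_SIZE)
/* => 4194304 bytes (4 MiB); observed usable msize: 4186112 = 4194304 - 8192 */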

Is that hard-coded limit carved in stone for some reason, or would it be OK
if I changed it into a runtime variable?

If that would be OK, maybe I should do something similar to what I did with
those kernel patches, i.e. retain 1024 as the initial default value and, if
the guest indicates that more is needed, subsequently increase the SG list
size to whatever the guest requires?
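
To illustrate what I mean, a purely hypothetical sketch (none of these names
exist in QEMU, they are placeholders only):

/* hypothetical sketch: keep 1024 as the default and only grow the
 * per-queue maximum, capped at whatever the virtio spec allows */
typedef struct ExampleQueueLimit {
    unsigned int max_sg;    /* starts at 1024 */
} ExampleQueueLimit;

static void example_grow_sg_limit(ExampleQueueLimit *lim,
                                  unsigned int needed,
                                  unsigned int protocol_max)
{
    if (needed > protocol_max) {
        needed = protocol_max;  /* never exceed the protocol ceiling */
    }
    if (needed > lim->max_sg) {
        lim->max_sg = needed;   /* grow only on demand from the guest */
    }
}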

And as I am not too familiar with the virtio protocol: is that current limit
already visible to the guest side? Because it would obviously make sense if I
changed my kernel patches so that they automatically limit themselves to
whatever QEMU supports instead of causing a hang.

Best regards,
Christian Schoenebeck





* Re: virtio 4M limit
  2021-10-01 11:21 virtio 4M limit Christian Schoenebeck
@ 2021-10-03 18:14 ` Christian Schoenebeck
  2021-10-03 18:15   ` [PATCH] virtio: increase VIRTQUEUE_MAX_SIZE to 32k Christian Schoenebeck
  2021-10-03 20:27   ` virtio 4M limit Michael S. Tsirkin
  0 siblings, 2 replies; 8+ messages in thread
From: Christian Schoenebeck @ 2021-10-03 18:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Michael S. Tsirkin, Greg Kurz

On Freitag, 1. Oktober 2021 13:21:23 CEST Christian Schoenebeck wrote:
> Hi Michael,
> 
> while testing the following kernel patches I realized there is currently a
> size limitation of 4 MB with virtio on QEMU side:
> https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/
> 
> So with those kernel patches applied I can mount 9pfs on Linux guest with
> the 9p 'msize' (maximum message size) option with a value of up to 4186112
> successfully. If I try to go higher with 'msize' then the system would hang
> with the following QEMU error:
> 
>   qemu-system-x86_64: virtio: too many write descriptors in indirect table
> 
> Which apparently is due to the amount of scatter gather lists on QEMU virtio
> side currently being hard coded to 1024 (i.e. multiplied by 4k page size =>
> 4 MB):
> 
>   ./include/hw/virtio/virtio.h:
>   #define VIRTQUEUE_MAX_SIZE 1024
> 
> Is that hard coded limit carved into stone for some reason or would it be OK
> if I change that into a runtime variable?

After reviewing the code and protocol specs, it seems that this value is
simply too small. I will therefore send a patch suggesting to raise this value
to 32768, as this is the maximum possible value according to the virtio specs.

https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006

> If that would be Ok, maybe something similar that I did with those kernel
> patches, i.e. retaining 1024 as an initial default value and if indicated
> from guest side that more is needed, increasing the SG list amount
> subsequently according to whatever is needed by guest?

Further changes are probably not necessary.

The only thing I have spotted that probably should be changed is that at a
few locations, a local array is allocated on the stack with VIRTQUEUE_MAX_SIZE
as the array size, e.g.:

static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
{
    ...
    hwaddr addr[VIRTQUEUE_MAX_SIZE];
    struct iovec iov[VIRTQUEUE_MAX_SIZE];
    ...
}
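
If VIRTQUEUE_MAX_SIZE were raised substantially, those arrays would get rather
big (at 32768 entries roughly 32768 * 8 + 32768 * 16 bytes = 768 KiB per call
on a 64-bit host). A rough sketch of the obvious heap-based alternative (not a
patch, just to illustrate the idea; 'queue_size' stands for the actual ring
size):

#include <glib.h>       /* g_new(), g_free() */
#include <sys/uio.h>    /* struct iovec */
#include <stdint.h>

typedef uint64_t example_hwaddr;    /* stand-in for QEMU's hwaddr */

static void example_pop_buffers(unsigned int queue_size)
{
    /* sized by the actual queue size instead of VIRTQUEUE_MAX_SIZE */
    example_hwaddr *addr = g_new(example_hwaddr, queue_size);
    struct iovec *iov    = g_new(struct iovec, queue_size);

    /* ... fill addr[]/iov[] from the descriptor chain, as
     * virtqueue_split_pop() does with its stack arrays ... */

    g_free(iov);
    g_free(addr);
}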

> And as I am not too familiar with the virtio protocol, is that current limit
> already visible to guest side? Because obviously it would make sense if I
> change my kernel patches so that they automatically limit to whatever QEMU
> supports instead of causing a hang.

Apparently the value of VIRTQUEUE_MAX_SIZE (the maximum number of
scatter-gather list entries, i.e. the maximum queue size ever possible) is not
visible to the guest.

I thought about adding a hack to make the guest Linux kernel aware of whether
the host side has the old limit of 1024 or rather the correct value of 32768,
but it is probably not worth it.

Best regards,
Christian Schoenebeck





* [PATCH] virtio: increase VIRTQUEUE_MAX_SIZE to 32k
  2021-10-03 18:14 ` Christian Schoenebeck
@ 2021-10-03 18:15   ` Christian Schoenebeck
  2021-10-03 20:31     ` Michael S. Tsirkin
  2021-10-03 20:27   ` virtio 4M limit Michael S. Tsirkin
  1 sibling, 1 reply; 8+ messages in thread
From: Christian Schoenebeck @ 2021-10-03 18:15 UTC (permalink / raw)
  To: qemu-devel; +Cc: Michael S. Tsirkin, Greg Kurz

VIRTQUEUE_MAX_SIZE reflects the absolute theoretical maximum
queue size possible, which is actually the maximum queue size
allowed by the virtio protocol. The appropriate value for
VIRTQUEUE_MAX_SIZE is therefore 32768:

https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006

Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
---
 include/hw/virtio/virtio.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 8bab9cfb75..1f18efa0bc 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -48,7 +48,7 @@ size_t virtio_feature_get_config_size(const VirtIOFeature *features,
 
 typedef struct VirtQueue VirtQueue;
 
-#define VIRTQUEUE_MAX_SIZE 1024
+#define VIRTQUEUE_MAX_SIZE 32768
 
 typedef struct VirtQueueElement
 {
-- 
2.20.1




* Re: virtio 4M limit
  2021-10-03 18:14 ` Christian Schoenebeck
  2021-10-03 18:15   ` [PATCH] virtio: increase VIRTQUEUE_MAX_SIZE to 32k Christian Schoenebeck
@ 2021-10-03 20:27   ` Michael S. Tsirkin
  2021-10-04 10:44     ` Christian Schoenebeck
  1 sibling, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2021-10-03 20:27 UTC (permalink / raw)
  To: Christian Schoenebeck; +Cc: qemu-devel, Greg Kurz

On Sun, Oct 03, 2021 at 08:14:55PM +0200, Christian Schoenebeck wrote:
> On Freitag, 1. Oktober 2021 13:21:23 CEST Christian Schoenebeck wrote:
> > Hi Michael,
> > 
> > while testing the following kernel patches I realized there is currently a
> > size limitation of 4 MB with virtio on QEMU side:
> > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/
> > 
> > So with those kernel patches applied I can mount 9pfs on Linux guest with
> > the 9p 'msize' (maximum message size) option with a value of up to 4186112
> > successfully. If I try to go higher with 'msize' then the system would hang
> > with the following QEMU error:
> > 
> >   qemu-system-x86_64: virtio: too many write descriptors in indirect table
> > 
> > Which apparently is due to the amount of scatter gather lists on QEMU virtio
> > side currently being hard coded to 1024 (i.e. multiplied by 4k page size =>
> > 4 MB):
> > 
> >   ./include/hw/virtio/virtio.h:
> >   #define VIRTQUEUE_MAX_SIZE 1024
> > 
> > Is that hard coded limit carved into stone for some reason or would it be OK
> > if I change that into a runtime variable?
> 
> After reviewing the code and protocol specs, it seems that this value is
> simply too small. I will therefore send a patch suggesting to raise this value
> to 32768, as this is the maximum possible value according to the virtio specs.
> 
> https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006

I think it's too aggressive to change it for all devices.
Pls find a way to only have it affect 9pfs.

> > If that would be Ok, maybe something similar that I did with those kernel
> > patches, i.e. retaining 1024 as an initial default value and if indicated
> > from guest side that more is needed, increasing the SG list amount
> > subsequently according to whatever is needed by guest?
> 
> Further changes are probably not necessary.
> 
> The only thing I have spotted that probably should be changed is that at some
> few locations, a local array is allocated on the stack with VIRTQUEUE_MAX_SIZE
> as array size, e.g.:
> 
> static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
> {
>     ...
>     hwaddr addr[VIRTQUEUE_MAX_SIZE];
>     struct iovec iov[VIRTQUEUE_MAX_SIZE];
>     ...
> }
> 
> > And as I am not too familiar with the virtio protocol, is that current limit
> > already visible to guest side? Because obviously it would make sense if I
> > change my kernel patches so that they automatically limit to whatever QEMU
> > supports instead of causing a hang.
> 
> Apparently the value of VIRTQUEUE_MAX_SIZE (the maximum amount of scatter
> gather lists or the maximum queue size ever possible) is not visible to guest.
> 
> I thought about making a hack to make the guest Linux kernel aware whether
> host side has the old limit of 1024 or rather the correct value 32768, but
> probably not worth it.
> 
> Best regards,
> Christian Schoenebeck
> 




* Re: [PATCH] virtio: increase VIRTQUEUE_MAX_SIZE to 32k
  2021-10-03 18:15   ` [PATCH] virtio: increase VIRTQUEUE_MAX_SIZE to 32k Christian Schoenebeck
@ 2021-10-03 20:31     ` Michael S. Tsirkin
  0 siblings, 0 replies; 8+ messages in thread
From: Michael S. Tsirkin @ 2021-10-03 20:31 UTC (permalink / raw)
  To: Christian Schoenebeck; +Cc: qemu-devel, Greg Kurz

On Sun, Oct 03, 2021 at 08:15:36PM +0200, Christian Schoenebeck wrote:
> VIRTQUEUE_MAX_SIZE reflects the absolute theoretical maximum
> queue size possible, which is actually the maximum queue size
> allowed by the virtio protocol. The appropriate value for
> VIRTQUEUE_MAX_SIZE is therefore 32768:
> 
> https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006
> 
> Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>

The problem is that this then exceeds UIO_MAXIOV, and e.g. virtio-net
assumes that an iovec it gets from the guest can be passed directly
to Linux. Either we need to remove that restriction
(e.g. by doing an extra copy if the iov size is bigger)
or add the limitation in net-specific code. blk and scsi
might be affected too, but these have a per-device
limit which can be tweaked.
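
Roughly the kind of check I mean (hypothetical helper, not code from the
tree):

#include <sys/uio.h>    /* UIO_MAXIOV */
#include <stdbool.h>
#include <stddef.h>

/* A device that hands guest iovecs straight to readv()/writev() has to stay
 * within the kernel's UIO_MAXIOV (1024) entries; anything larger would need
 * to be merged or bounce-copied first. */
static bool example_iov_fits_kernel_limit(size_t out_num, size_t in_num)
{
    return out_num + in_num <= UIO_MAXIOV;
}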

> ---
>  include/hw/virtio/virtio.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 8bab9cfb75..1f18efa0bc 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -48,7 +48,7 @@ size_t virtio_feature_get_config_size(const VirtIOFeature *features,
>  
>  typedef struct VirtQueue VirtQueue;
>  
> -#define VIRTQUEUE_MAX_SIZE 1024
> +#define VIRTQUEUE_MAX_SIZE 32768
>  
>  typedef struct VirtQueueElement
>  {
> -- 
> 2.20.1




* Re: virtio 4M limit
  2021-10-03 20:27   ` virtio 4M limit Michael S. Tsirkin
@ 2021-10-04 10:44     ` Christian Schoenebeck
  2021-10-04 19:59       ` Michael S. Tsirkin
  0 siblings, 1 reply; 8+ messages in thread
From: Christian Schoenebeck @ 2021-10-04 10:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Michael S. Tsirkin, Greg Kurz

On Sonntag, 3. Oktober 2021 22:27:03 CEST Michael S. Tsirkin wrote:
> On Sun, Oct 03, 2021 at 08:14:55PM +0200, Christian Schoenebeck wrote:
> > On Freitag, 1. Oktober 2021 13:21:23 CEST Christian Schoenebeck wrote:
> > > Hi Michael,
> > > 
> > > while testing the following kernel patches I realized there is currently
> > > a
> > > size limitation of 4 MB with virtio on QEMU side:
> > > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.
> > > com/
> > > 
> > > So with those kernel patches applied I can mount 9pfs on Linux guest
> > > with
> > > the 9p 'msize' (maximum message size) option with a value of up to
> > > 4186112
> > > successfully. If I try to go higher with 'msize' then the system would
> > > hang with the following QEMU error:
> > >   qemu-system-x86_64: virtio: too many write descriptors in indirect
> > >   table
> > > 
> > > Which apparently is due to the amount of scatter gather lists on QEMU
> > > virtio side currently being hard coded to 1024 (i.e. multiplied by 4k
> > > page size => 4 MB):
> > >   ./include/hw/virtio/virtio.h:
> > >   #define VIRTQUEUE_MAX_SIZE 1024
> > > 
> > > Is that hard coded limit carved into stone for some reason or would it
> > > be OK if I change that into a runtime variable?
> > 
> > After reviewing the code and protocol specs, it seems that this value is
> > simply too small. I will therefore send a patch suggesting to raise this
> > value to 32768, as this is the maximum possible value according to the
> > virtio specs.
> > 
> > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#
> > x1-240006
> I think it's too aggressive to change it for all devices.
> Pls find a way to only have it affect 9pfs.

So basically I should rather introduce a variable that would be used in most
places instead of the macro VIRTQUEUE_MAX_SIZE?
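
Something along these lines perhaps (very rough sketch, all names are
placeholders rather than existing QEMU API):

#define EXAMPLE_VIRTQUEUE_DEFAULT_SIZE 1024     /* today's VIRTQUEUE_MAX_SIZE */
#define EXAMPLE_VIRTQUEUE_PROTOCOL_MAX 32768    /* virtio spec ceiling */

typedef struct ExampleVirtIODevice {
    unsigned int queue_max_size;   /* consulted where the macro is used today */
} ExampleVirtIODevice;

static void example_set_queue_max_size(ExampleVirtIODevice *vdev,
                                       unsigned int size)
{
    vdev->queue_max_size = (size <= EXAMPLE_VIRTQUEUE_PROTOCOL_MAX)
                         ? size : EXAMPLE_VIRTQUEUE_PROTOCOL_MAX;
}

/* e.g. virtio-9p could opt in with
 *     example_set_queue_max_size(vdev, 32768);
 * while every other device keeps EXAMPLE_VIRTQUEUE_DEFAULT_SIZE. */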

> > > If that would be Ok, maybe something similar that I did with those
> > > kernel
> > > patches, i.e. retaining 1024 as an initial default value and if
> > > indicated
> > > from guest side that more is needed, increasing the SG list amount
> > > subsequently according to whatever is needed by guest?
> > 
> > Further changes are probably not necessary.
> > 
> > The only thing I have spotted that probably should be changed is that at
> > some few locations, a local array is allocated on the stack with
> > VIRTQUEUE_MAX_SIZE as array size, e.g.:
> > 
> > static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
> > {
> > 
> >     ...
> >     hwaddr addr[VIRTQUEUE_MAX_SIZE];
> >     struct iovec iov[VIRTQUEUE_MAX_SIZE];
> >     ...
> > 
> > }

What about these allocations on the stack? Is it OK to disregard this as a
theoretical issue for now and just retain them on the stack, with the runtime
variable instead of the macro as the array size?

> > 
> > > And as I am not too familiar with the virtio protocol, is that current
> > > limit already visible to guest side? Because obviously it would make
> > > sense if I change my kernel patches so that they automatically limit to
> > > whatever QEMU supports instead of causing a hang.
> > 
> > Apparently the value of VIRTQUEUE_MAX_SIZE (the maximum amount of scatter
> > gather lists or the maximum queue size ever possible) is not visible to
> > guest.
> > 
> > I thought about making a hack to make the guest Linux kernel aware whether
> > host side has the old limit of 1024 or rather the correct value 32768, but
> > probably not worth it.
> > 
> > Best regards,
> > Christian Schoenebeck





* Re: virtio 4M limit
  2021-10-04 10:44     ` Christian Schoenebeck
@ 2021-10-04 19:59       ` Michael S. Tsirkin
  2021-10-04 20:10         ` Christian Schoenebeck
  0 siblings, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2021-10-04 19:59 UTC (permalink / raw)
  To: Christian Schoenebeck; +Cc: qemu-devel, Greg Kurz

On Mon, Oct 04, 2021 at 12:44:21PM +0200, Christian Schoenebeck wrote:
> On Sonntag, 3. Oktober 2021 22:27:03 CEST Michael S. Tsirkin wrote:
> > On Sun, Oct 03, 2021 at 08:14:55PM +0200, Christian Schoenebeck wrote:
> > > On Freitag, 1. Oktober 2021 13:21:23 CEST Christian Schoenebeck wrote:
> > > > Hi Michael,
> > > > 
> > > > while testing the following kernel patches I realized there is currently
> > > > a
> > > > size limitation of 4 MB with virtio on QEMU side:
> > > > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.
> > > > com/
> > > > 
> > > > So with those kernel patches applied I can mount 9pfs on Linux guest
> > > > with
> > > > the 9p 'msize' (maximum message size) option with a value of up to
> > > > 4186112
> > > > successfully. If I try to go higher with 'msize' then the system would
> > > > hang with the following QEMU error:
> > > >   qemu-system-x86_64: virtio: too many write descriptors in indirect
> > > >   table
> > > > 
> > > > Which apparently is due to the amount of scatter gather lists on QEMU
> > > > virtio side currently being hard coded to 1024 (i.e. multiplied by 4k
> > > > page size => 4 MB):
> > > >   ./include/hw/virtio/virtio.h:
> > > >   #define VIRTQUEUE_MAX_SIZE 1024
> > > > 
> > > > Is that hard coded limit carved into stone for some reason or would it
> > > > be OK if I change that into a runtime variable?
> > > 
> > > After reviewing the code and protocol specs, it seems that this value is
> > > simply too small. I will therefore send a patch suggesting to raise this
> > > value to 32768, as this is the maximum possible value according to the
> > > virtio specs.
> > > 
> > > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#
> > > x1-240006
> > I think it's too aggressive to change it for all devices.
> > Pls find a way to only have it affect 9pfs.
> 
> So basically I should rather introduce a variable that would be used at most 
> places instead of using the macro VIRTQUEUE_MAX_SIZE?

I guess so.

> > > > If that would be Ok, maybe something similar that I did with those
> > > > kernel
> > > > patches, i.e. retaining 1024 as an initial default value and if
> > > > indicated
> > > > from guest side that more is needed, increasing the SG list amount
> > > > subsequently according to whatever is needed by guest?
> > > 
> > > Further changes are probably not necessary.
> > > 
> > > The only thing I have spotted that probably should be changed is that at
> > > some few locations, a local array is allocated on the stack with
> > > VIRTQUEUE_MAX_SIZE as array size, e.g.:
> > > 
> > > static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
> > > {
> > > 
> > >     ...
> > >     hwaddr addr[VIRTQUEUE_MAX_SIZE];
> > >     struct iovec iov[VIRTQUEUE_MAX_SIZE];
> > >     ...
> > > 
> > > }
> 
> What about these allocations on the stack? Is it Ok to disregard this as 
> theoretical issue for now and just retain them on the stack, just with the 
> runtime variable instead of macro as array size?

I think it's not a big deal ... why do you think it is? Are we running
out of stack?

> > > 
> > > > And as I am not too familiar with the virtio protocol, is that current
> > > > limit already visible to guest side? Because obviously it would make
> > > > sense if I change my kernel patches so that they automatically limit to
> > > > whatever QEMU supports instead of causing a hang.
> > > 
> > > Apparently the value of VIRTQUEUE_MAX_SIZE (the maximum amount of scatter
> > > gather lists or the maximum queue size ever possible) is not visible to
> > > guest.
> > > 
> > > I thought about making a hack to make the guest Linux kernel aware whether
> > > host side has the old limit of 1024 or rather the correct value 32768, but
> > > probably not worth it.
> > > 
> > > Best regards,
> > > Christian Schoenebeck
> 




* Re: virtio 4M limit
  2021-10-04 19:59       ` Michael S. Tsirkin
@ 2021-10-04 20:10         ` Christian Schoenebeck
  0 siblings, 0 replies; 8+ messages in thread
From: Christian Schoenebeck @ 2021-10-04 20:10 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: qemu-devel, Greg Kurz

On Montag, 4. Oktober 2021 21:59:18 CEST Michael S. Tsirkin wrote:
> On Mon, Oct 04, 2021 at 12:44:21PM +0200, Christian Schoenebeck wrote:
> > On Sonntag, 3. Oktober 2021 22:27:03 CEST Michael S. Tsirkin wrote:
> > > On Sun, Oct 03, 2021 at 08:14:55PM +0200, Christian Schoenebeck wrote:
> > > > On Freitag, 1. Oktober 2021 13:21:23 CEST Christian Schoenebeck wrote:
> > > > > Hi Michael,
> > > > > 
> > > > > while testing the following kernel patches I realized there is
> > > > > currently
> > > > > a
> > > > > size limitation of 4 MB with virtio on QEMU side:
> > > > > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudeb
> > > > > yte.
> > > > > com/
> > > > > 
> > > > > So with those kernel patches applied I can mount 9pfs on Linux guest
> > > > > with
> > > > > the 9p 'msize' (maximum message size) option with a value of up to
> > > > > 4186112
> > > > > successfully. If I try to go higher with 'msize' then the system
> > > > > would hang with the following QEMU error:
> > > > >   qemu-system-x86_64: virtio: too many write descriptors in indirect
> > > > >   table
> > > > > 
> > > > > Which apparently is due to the amount of scatter gather lists on
> > > > > QEMU
> > > > > virtio side currently being hard coded to 1024 (i.e. multiplied by
> > > > > 4k page size => 4 MB):
> > > > >   ./include/hw/virtio/virtio.h:
> > > > >   #define VIRTQUEUE_MAX_SIZE 1024
> > > > > 
> > > > > Is that hard coded limit carved into stone for some reason or would
> > > > > it
> > > > > be OK if I change that into a runtime variable?
> > > > 
> > > > After reviewing the code and protocol specs, it seems that this value
> > > > is
> > > > simply too small. I will therefore send a patch suggesting to raise
> > > > this
> > > > value to 32768, as this is the maximum possible value according to the
> > > > virtio specs.
> > > > 
> > > > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.h
> > > > tml#
> > > > x1-240006
> > > 
> > > I think it's too aggressive to change it for all devices.
> > > Pls find a way to only have it affect 9pfs.
> > 
> > So basically I should rather introduce a variable that would be used at
> > most places instead of using the macro VIRTQUEUE_MAX_SIZE?
> 
> I guess so.

Good, because I just sent out a v2 series minutes ago.

> > > > > If that would be Ok, maybe something similar that I did with those
> > > > > kernel
> > > > > patches, i.e. retaining 1024 as an initial default value and if
> > > > > indicated
> > > > > from guest side that more is needed, increasing the SG list amount
> > > > > subsequently according to whatever is needed by guest?
> > > > 
> > > > Further changes are probably not necessary.
> > > > 
> > > > The only thing I have spotted that probably should be changed is that
> > > > at
> > > > some few locations, a local array is allocated on the stack with
> > > > VIRTQUEUE_MAX_SIZE as array size, e.g.:
> > > > 
> > > > static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
> > > > {
> > > > 
> > > >     ...
> > > >     hwaddr addr[VIRTQUEUE_MAX_SIZE];
> > > >     struct iovec iov[VIRTQUEUE_MAX_SIZE];
> > > >     ...
> > > > 
> > > > }
> > 
> > What about these allocations on the stack? Is it Ok to disregard this as
> > theoretical issue for now and just retain them on the stack, just with the
> > runtime variable instead of macro as array size?
> 
> I think it's not a big deal ... why do you think it is? Are we running
> out of stack?

No no. :) That was just a theoretical consideration for who knows what kind of 
platform and usage. I have preserved it on the stack in today's v2.
 
> > > > > And as I am not too familiar with the virtio protocol, is that
> > > > > current
> > > > > limit already visible to guest side? Because obviously it would make
> > > > > sense if I change my kernel patches so that they automatically limit
> > > > > to
> > > > > whatever QEMU supports instead of causing a hang.
> > > > 
> > > > Apparently the value of VIRTQUEUE_MAX_SIZE (the maximum amount of
> > > > scatter
> > > > gather lists or the maximum queue size ever possible) is not visible
> > > > to
> > > > guest.
> > > > 
> > > > I thought about making a hack to make the guest Linux kernel aware
> > > > whether
> > > > host side has the old limit of 1024 or rather the correct value 32768,
> > > > but
> > > > probably not worth it.
> > > > 
> > > > Best regards,
> > > > Christian Schoenebeck




