From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: "Stefan Hajnoczi" <stefanha@redhat.com>,
"Kevin Wolf" <kwolf@redhat.com>,
"Laurent Vivier" <lvivier@redhat.com>,
qemu-block@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
"Jason Wang" <jasowang@redhat.com>, "Amit Shah" <amit@kernel.org>,
"David Hildenbrand" <david@redhat.com>,
"Greg Kurz" <groug@kaod.org>,
"Raphael Norwitz" <raphael.norwitz@nutanix.com>,
virtio-fs@redhat.com, "Eric Auger" <eric.auger@redhat.com>,
"Hanna Reitz" <hreitz@redhat.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
"Gerd Hoffmann" <kraxel@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Fam Zheng" <fam@euphon.net>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v2 0/3] virtio: increase VIRTQUEUE_MAX_SIZE to 32k
Date: Thu, 07 Oct 2021 14:51:55 +0200 [thread overview]
Message-ID: <2233456.PtHKNz60go@silver> (raw)
In-Reply-To: <YV6EbwMFmcIEC+za@stefanha-x1.localdomain>
On Thursday, 7 October 2021 07:23:59 CEST Stefan Hajnoczi wrote:
> On Mon, Oct 04, 2021 at 09:38:00PM +0200, Christian Schoenebeck wrote:
> > At the moment the maximum transfer size with virtio is limited to 4M
> > (1024 * PAGE_SIZE). This series raises this limit to its maximum
> > theoretical possible transfer size of 128M (32k pages) according to the
> > virtio specs:
> >
> > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006
> Hi Christian,
> I took a quick look at the code:
>
> - The Linux 9p driver restricts descriptor chains to 128 elements
> (net/9p/trans_virtio.c:VIRTQUEUE_NUM)
Yes, that's the limitation that I am about to remove (WIP); current kernel
patches:
https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/
> - The QEMU 9pfs code passes iovecs directly to preadv(2) and will fail
> with EINVAL when called with more than IOV_MAX iovecs
> (hw/9pfs/9p.c:v9fs_read())
Hmm, that makes me wonder why I never encountered this error during testing.
In practice, most people use the 9p QEMU 'local' fs driver backend, so for
most users that v9fs_read() call translates to this implementation on the
QEMU side (hw/9pfs/9p-local.c):
static ssize_t local_preadv(FsContext *ctx, V9fsFidOpenState *fs,
                            const struct iovec *iov,
                            int iovcnt, off_t offset)
{
#ifdef CONFIG_PREADV
    return preadv(fs->fd, iov, iovcnt, offset);
#else
    int err = lseek(fs->fd, offset, SEEK_SET);
    if (err == -1) {
        return err;
    } else {
        return readv(fs->fd, iov, iovcnt);
    }
#endif
}
> Unless I misunderstood the code, neither side can take advantage of the
> new 32k descriptor chain limit?
>
> Thanks,
> Stefan
I need to check that when I have some more time. One possible explanation
might be that the preadv() implementation already wraps this into a loop
internally to circumvent a limit like IOV_MAX. It might be another "it
works, but is not portable" issue, but I am not sure.
There are still a bunch of other issues I have to resolve. If you look at
net/9p/client.c on the kernel side, you'll notice that it currently does
essentially
kmalloc(msize);
for every 9p request. So not only does it allocate much more memory per
request than actually required (e.g. if 9pfs was mounted with msize=8M, a 9p
request that would only need 1k still allocates 8M), but it also allocates
more than PAGE_SIZE, which obviously may fail at any time.
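To illustrate how little a typical request actually needs compared to msize:
the 9p wire format lets you compute the exact message size up front. Below is
a hypothetical sketch (the helper name p9_twalk_size is invented; the field
layout follows the 9p2000 Twalk message: size[4] type[1] tag[2] fid[4]
newfid[4] nwname[2] nwname*(wname[s])):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* 9p message header: 4-byte size, 1-byte type, 2-byte tag */
enum { P9_HDRSZ = 4 + 1 + 2 };

/* Exact wire size of a Twalk request: header + fid[4] + newfid[4] +
 * nwname[2], plus each path element as a 2-byte length prefix + bytes. */
static size_t p9_twalk_size(uint16_t nwname, const size_t *wname_lens)
{
    size_t sz = P9_HDRSZ + 4 + 4 + 2;
    for (uint16_t i = 0; i < nwname; i++) {
        sz += 2 + wname_lens[i];
    }
    return sz;
}
```

A Twalk with a single short path element comes out at a few dozen bytes,
yet the current client allocates the full msize (potentially megabytes)
for it.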
With those kernel patches above and QEMU patched with this series as well, I
can now go above 4M msize, and the test system runs stably as long as 9pfs is
mounted with an msize that is not too high. If I try to mount 9pfs with a very
high msize, the kmalloc() issue described above kicks in and causes an
immediate kernel oops when mounting. So that's a high-priority issue that I
still need to resolve.
Best regards,
Christian Schoenebeck