From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: "Greg Kurz" <groug@kaod.org>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Kevin Wolf" <kwolf@redhat.com>,
	"Laurent Vivier" <lvivier@redhat.com>,
	qemu-block@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>, "Amit Shah" <amit@kernel.org>,
	"David Hildenbrand" <david@redhat.com>,
	"Raphael Norwitz" <raphael.norwitz@nutanix.com>,
	virtio-fs@redhat.com, "Eric Auger" <eric.auger@redhat.com>,
	"Hanna Reitz" <hreitz@redhat.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Fam Zheng" <fam@euphon.net>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v2 0/3] virtio: increase VIRTQUEUE_MAX_SIZE to 32k
Date: Fri, 08 Oct 2021 18:08:48 +0200
Message-ID: <2657765.azTuxnmmC0@silver>
In-Reply-To: <1853723.Wj769PA2Ue@silver>

On Friday, 8 October 2021 16:24:42 CEST Christian Schoenebeck wrote:
> On Friday, 8 October 2021 09:25:33 CEST Greg Kurz wrote:
> > On Thu, 7 Oct 2021 16:42:49 +0100, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > > On Thu, Oct 07, 2021 at 02:51:55PM +0200, Christian Schoenebeck wrote:
> > > > > On Thursday, 7 October 2021 07:23:59 CEST Stefan Hajnoczi wrote:
> > > > > > On Mon, Oct 04, 2021 at 09:38:00PM +0200, Christian Schoenebeck wrote:
> > > > > > At the moment the maximum transfer size with virtio is limited
> > > > > > to 4M (1024 * PAGE_SIZE). This series raises this limit to its
> > > > > > maximum theoretically possible transfer size of 128M (32k pages)
> > > > > > according to the virtio spec:
> > > > > > 
> > > > > > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006
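(As an aside, the 128M figure is just the arithmetic of the spec limit:
32768 descriptors x 4096 bytes per page = 134217728 bytes = 128 MiB,
versus 1024 x 4096 = 4 MiB for the current limit.)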
> > > > > 
> > > > > Hi Christian,
> > > > > 
> > > > > I took a quick look at the code:
> > 
> > Hi,
> > 
> > Thanks Stefan for sharing virtio expertise and helping Christian!
> > 
> > > > > - The Linux 9p driver restricts descriptor chains to 128 elements
> > > > > 
> > > > >   (net/9p/trans_virtio.c:VIRTQUEUE_NUM)
> > > > 
> > > > Yes, that's the limitation that I am about to remove (WIP); current
> > > > kernel patches:
> > > > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/
> > > 
> > > I haven't read the patches yet but I'm concerned that today the driver
> > > is pretty well-behaved and this new patch series introduces a spec
> > > violation. Not fixing existing spec violations is okay, but adding new
> > > ones is a red flag. I think we need to figure out a clean solution.
> 
> Nobody has reviewed the kernel patches yet. My main concern therefore is
> that the kernel patches are already too complex, because currently only
> Dominique is handling 9p patches on the kernel side, and he barely has
> time for 9p anymore.
> 
> All the more reason for me to catch up on the current kernel code and
> step in as a reviewer of 9p on the kernel side ASAP, independent of this
> issue.
> 
> As for the current kernel patches' complexity: I can certainly drop patch
> 7 entirely, as it is probably just overkill. Patch 4 is then the biggest
> chunk; I have to see if I can simplify it, and whether it would make
> sense to squash it with patch 3.
> 
> > > > > - The QEMU 9pfs code passes iovecs directly to preadv(2) and will
> > > > >   fail with EINVAL when called with more than IOV_MAX iovecs
> > > > >   (hw/9pfs/9p.c:v9fs_read())
> > > > 
> > > > Hmm, which makes me wonder why I never encountered this error
> > > > during testing.
> > > > 
> > > > Most people will use the 9p QEMU 'local' fs driver backend in
> > > > practice, so for most people that v9fs_read() call would translate
> > > > to this implementation on the QEMU side (hw/9pfs/9p-local.c):
> > > > 
> > > > static ssize_t local_preadv(FsContext *ctx, V9fsFidOpenState *fs,
> > > >                             const struct iovec *iov,
> > > >                             int iovcnt, off_t offset)
> > > > {
> > > > #ifdef CONFIG_PREADV
> > > >     return preadv(fs->fd, iov, iovcnt, offset);
> > > > #else
> > > >     int err = lseek(fs->fd, offset, SEEK_SET);
> > > >     if (err == -1) {
> > > >         return err;
> > > >     } else {
> > > >         return readv(fs->fd, iov, iovcnt);
> > > >     }
> > > > #endif
> > > > }
> > > > 
> > > > > Unless I misunderstood the code, neither side can take advantage
> > > > > of the new 32k descriptor chain limit?
> > > > > 
> > > > > Thanks,
> > > > > Stefan
> > > > 
> > > > I need to check that when I have some more time. One possible
> > > > explanation might be that preadv() already has this wrapped into a
> > > > loop in its implementation to circumvent a limit like IOV_MAX. It
> > > > might be another "it works, but is not portable" issue, but I'm not
> > > > sure.
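To illustrate what I mean, here is a minimal, untested sketch (not actual
QEMU code; the function name preadv_chunked is made up, and error handling
is simplified) of how such a wrapper could loop to stay below IOV_MAX per
syscall:

#include <limits.h>
#include <sys/uio.h>

/* Call preadv() repeatedly with at most IOV_MAX iovecs per call, so the
 * syscall never fails with EINVAL due to an oversized iovec array. */
static ssize_t preadv_chunked(int fd, const struct iovec *iov,
                              int iovcnt, off_t offset)
{
    ssize_t total = 0;

    while (iovcnt > 0) {
        int i, n = (iovcnt > IOV_MAX) ? IOV_MAX : iovcnt;
        size_t chunk = 0;
        ssize_t ret;

        /* how many bytes this chunk covers, to detect short reads */
        for (i = 0; i < n; i++) {
            chunk += iov[i].iov_len;
        }
        ret = preadv(fd, iov, n, offset);
        if (ret < 0) {
            return total ? total : ret;
        }
        total += ret;
        offset += ret;
        if ((size_t)ret < chunk) {
            break; /* short read (e.g. EOF), stop here */
        }
        iov += n;
        iovcnt -= n;
    }
    return total;
}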
> > > > 
> > > > There are still a bunch of other issues I have to resolve. If you
> > > > look at net/9p/client.c on the kernel side, you'll notice that it
> > > > basically does this ATM:
> > > > 
> > > >     kmalloc(msize);
> > 
> > Note that this is done twice: once for the T message (client request)
> > and once for the R message (server answer). The 9p driver could adjust
> > the size of the T message to what's really needed instead of allocating
> > the full msize. The R message size is not known in advance, though.
> 
> Would it make sense to add a second virtio ring, dedicated to server
> responses, to solve this? IIRC the 9p server already calculates
> appropriate exact sizes for each response type, so the server could just
> push exactly the space that's really needed for its responses.
> 
> > > > for every 9p request. So not only does it allocate much more memory
> > > > for every request than actually required (i.e. if 9pfs was mounted
> > > > with msize=8M, then a 9p request that would actually just need 1k
> > > > would nevertheless allocate 8M), but it also allocates > PAGE_SIZE,
> > > > which obviously may fail at any time.
> > > 
> > > The PAGE_SIZE limitation sounds like a kmalloc() vs vmalloc() situation.
> 
> Huh, I didn't even consider vmalloc(). I just tried the kvmalloc()
> wrapper as a quick & dirty test, but it crashed in the same way as
> kmalloc() with large msize values, immediately on mounting:
> 
> diff --git a/net/9p/client.c b/net/9p/client.c
> index a75034fa249b..cfe300a4b6ca 100644
> --- a/net/9p/client.c
> +++ b/net/9p/client.c
> @@ -227,15 +227,18 @@ static int parse_opts(char *opts, struct p9_client *clnt)
>  static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc,
>                          int alloc_msize)
>  {
> -       if (likely(c->fcall_cache) && alloc_msize == c->msize) {
> +       //if (likely(c->fcall_cache) && alloc_msize == c->msize) {
> +       if (false) {
>                 fc->sdata = kmem_cache_alloc(c->fcall_cache, GFP_NOFS);
>                 fc->cache = c->fcall_cache;
>         } else {
> -               fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
> +               fc->sdata = kvmalloc(alloc_msize, GFP_NOFS);

Ok, GFP_NOFS -> GFP_KERNEL did the trick.

Now I get:

   virtio: bogus descriptor or out of resources

So, still some work ahead on both ends.
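For reference, the variant that got past the allocation failure was
simply this change on top of the hunk above (still just a local
experiment, not a proper patch):

-               fc->sdata = kvmalloc(alloc_msize, GFP_NOFS);
+               fc->sdata = kvmalloc(alloc_msize, GFP_KERNEL);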

>                 fc->cache = NULL;
>         }
> -       if (!fc->sdata)
> +       if (!fc->sdata) {
> +               pr_info("%s !fc->sdata", __func__);
>                 return -ENOMEM;
> +       }
>         fc->capacity = alloc_msize;
>         return 0;
>  }
> 
> I'll try to look at this over the weekend; I would have expected this
> hack to bypass the issue.
> 
> > > I saw zerocopy code in the 9p guest driver but didn't investigate when
> > > it's used. Maybe that should be used for large requests (file
> > > reads/writes)?
> > 
> > This is already the case: zero-copy is only used for
> > reads/writes/readdir if the requested size is 1k or more.
> > 
> > Also you'll note that in this case the 9p driver doesn't allocate msize
> > for the T/R messages but only 4k, which is more than enough to hold the
> > header.
> > 
> > 	/*
> > 	 * We allocate a inline protocol data of only 4k bytes.
> > 	 * The actual content is passed in zero-copy fashion.
> > 	 */
> > 	req = p9_client_prepare_req(c, type, P9_ZC_HDR_SZ, fmt, ap);
> > 
> > and
> > 
> > /* size of header for zero copy read/write */
> > #define P9_ZC_HDR_SZ 4096
> > 
> > A huge msize only makes sense for Twrite, Rread and Rreaddir because
> > of the amount of data they convey. All other messages certainly fit
> > in a couple of kilobytes only (sorry, don't remember the numbers).
> > 
> > A first change should be to allocate MIN(XXX, msize) for the
> > regular non-zc case, where XXX could be a reasonable fixed
> > value (8k?). In the case of T messages, it is even possible
> > to adjust the size to what's exactly needed, à la snprintf(NULL).
> 
> Good idea actually! That would limit this problem to reviewing the 9p
> specs and picking one reasonable max value, because you are right: those
> message types are tiny. It's probably not worth piling up new code to
> calculate exact message sizes for each of them.
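To make that concrete, here is a minimal, untested sketch of the MIN()
idea (the constant name P9_MSG_MAX and the 8k value are placeholders, and
the fcall_cache fast path of the real p9_fcall_init() is omitted for
brevity):

#define P9_MSG_MAX (8 * 1024)  /* placeholder, value to be reviewed */

static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc,
                         int alloc_msize)
{
    /* Zero-copy requests already pass only P9_ZC_HDR_SZ (4k) here, so
     * this cap effectively only affects the regular non-zc case, where
     * requests are small anyway. */
    alloc_msize = min_t(int, alloc_msize, P9_MSG_MAX);

    fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
    if (!fc->sdata)
        return -ENOMEM;
    fc->cache = NULL;
    fc->capacity = alloc_msize;
    return 0;
}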
> 
> Adding some safety net would make sense though, to ensure that if e.g. a
> new message type is added in the future, this value is reviewed as well.
> Something like:
> 
> static int max_msg_size(enum p9_msg_t msg_type) {
>     switch (msg_type) {
>     /* large zero-copy messages */
>     case P9_TWRITE:
>     case P9_TREAD:
>     case P9_TREADDIR:
>         BUG_ON(true);
> 
>     /* small messages */
>     case P9_TVERSION:
>     ....
>         return 8 * 1024; /* to be replaced with appropriate max value */
>     }
> }
> 
> That way the compiler would bark on future additions (with -Wswitch, a
> switch over an enum without a default case warns about unhandled
> values). But on second thought, a simple comment on the message type
> enum might do as well.
> 
> > > virtio-blk/scsi don't memcpy data into a new buffer, they
> > > directly access page cache or O_DIRECT pinned pages.
> > > 
> > > Stefan
> > 
> > Cheers,
> > 
> > --
> > Greg



