From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: Dominique Martinet <asmadeus@codewreck.org>,
	"cdupontd@redhat.com" <cdupontd@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Venegas Munoz,
	Jose Carlos" <jose.carlos.venegas.munoz@intel.com>,
	Greg Kurz <groug@kaod.org>, virtio-fs-list <virtio-fs@redhat.com>,
	Vivek Goyal <vgoyal@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	v9fs-developer@lists.sourceforge.net, "Shinde,
	Archana M" <archana.m.shinde@intel.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: Can not set high msize with virtio-9p (Was: Re: virtiofs vs 9p performance)
Date: Fri, 26 Feb 2021 14:49:12 +0100
Message-ID: <1918692.k70u9Ml6kK@silver>
In-Reply-To: <20210224154357.GA12207@tyr>

On Wednesday, 24 February 2021 16:43:57 CET Dominique Martinet wrote:
> Christian Schoenebeck wrote on Wed, Feb 24, 2021 at 04:16:52PM +0100:
> > Misapprehension + typo(s) in my previous message, sorry Michael. That's
> > 500k of course (not 5k), yes.
> > 
> > Let me rephrase that question: are you aware of something in virtio that
> > would per se mandate an absolute hard-coded message size limit (e.g. from
> > a virtio specs perspective, or maybe some compatibility issue)?
> > 
> > If not, we would try getting rid of that hard-coded limit of the 9p
> > client on the kernel side in the first place, because the kernel's 9p
> > client already has a dynamic runtime option 'msize' and that hard-coded
> > enforced limit (500k) is a performance bottleneck, like I said.
> 
> We could probably set it at init time through virtio_max_dma_size(vdev)
> like virtio_blk does (I just tried and get 2^64 so we can probably
> expect virtually no limit there)
> 
> I'm not too familiar with virtio, feel free to try and if it works send
> me a patch -- the size drop from 512k to 500k is old enough that things
> probably have changed in the background since then.

Yes, agreed. I'm not too familiar with virtio either, nor with the Linux 9p
client code yet. For that reason I consider a minimally invasive change as a
first step at least. AFAICS a "split virtqueue" setup is currently used:

https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006

Right now the client uses a hard-coded number of 128 elements. So what about
replacing VIRTQUEUE_NUM with a variable that is initialized at init time with
a value derived from the user's requested 'msize' option?
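
Something along these lines maybe; a rough sketch only, not actual kernel
code (the helper name and the extra-element reserve are made up):

#include <linux/kernel.h>	/* DIV_ROUND_UP, clamp */
#include <linux/mm.h>		/* PAGE_SIZE */

#define VIRTQUEUE_SPEC_MAX	32768	/* max. queue size per virtio spec */

/* Sketch: derive the number of virtqueue elements from the requested
 * 'msize' instead of the hard-coded VIRTQUEUE_NUM (128). */
static unsigned int p9_virtio_nr_elems(unsigned int msize)
{
	/* one scatter-gather element per page of payload, plus a small
	 * reserve for the 9p message header */
	unsigned int n = DIV_ROUND_UP(msize, PAGE_SIZE) + 2;

	return clamp(n, 128u, (unsigned int)VIRTQUEUE_SPEC_MAX);
}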

According to the virtio specs the maximum number of elements in a virtqueue
is 32768. So 32768 * 4k = 128M as the new upper limit would already be a
significant improvement and would not require too many changes to the client
code, right?
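
As for your virtio_max_dma_size(vdev) suggestion: I guess the probe-time cap
would look roughly like what virtio_blk does. A minimal, untested sketch
(the helper name is made up):

#include <linux/virtio.h>	/* virtio_max_dma_size() */

/* Sketch: cap the negotiated msize by what the device can handle in a
 * single DMA transfer, analogous to virtio_blk at probe time. */
static void p9_virtio_cap_msize(struct virtio_device *vdev,
				unsigned int *msize)
{
	size_t max_dma = virtio_max_dma_size(vdev);

	if (*msize > max_dma)
		*msize = max_dma;
}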

> On the 9p side itself, unrelated to virtio, we don't want to make it
> *too* big as the client code doesn't use any scatter-gather and will
> want to allocate upfront contiguous buffers of the size that got
> negotiated -- that can get ugly quite fast, but we can leave it up to
> users to decide.

By "ugly" do you just mean that this memory is occupied for good as long as
the driver is loaded, or is there some runtime performance penalty to be
aware of as well?

> One of my very-long-term goals would be to tend to that; if someone has
> cycles to work on it I'd gladly review any patch in that area.
> A possible implementation path would be to have transports define
> themselves whether they support it or not and handle it accordingly until
> all transports have migrated, so one wouldn't need to care about e.g. rdma
> or xen if one doesn't have hardware to test in the short term.

Sounds like something Greg suggested before for a slightly different, though
related issue: right now the default 'msize' on the Linux client side is 8k,
which really hurts performance-wise, as virtually all 9p payloads have to be
split into a huge number of request and response messages. OTOH you don't
want to set this default value too high either. So Greg noted that virtio
could suggest a default msize, i.e. a value that would suit the host's
storage hardware appropriately.
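
Purely hypothetical sketch of what I mean; the ->suggested_msize callback
does not exist in today's struct p9_trans_module, the names are made up:

#include <net/9p/client.h>	/* struct p9_client */
#include <net/9p/transport.h>	/* struct p9_trans_module */

/* Hypothetical: let the transport (e.g. virtio) suggest a default
 * msize when the user did not pass one at mount time. */
static unsigned int p9_pick_msize(struct p9_client *clnt,
				  unsigned int user_msize)
{
	if (user_msize)		/* an explicit msize= option always wins */
		return user_msize;
	if (clnt->trans_mod->suggested_msize)	/* hypothetical hook */
		return clnt->trans_mod->suggested_msize(clnt);
	return 8192;		/* today's hard-coded default */
}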

> The next best thing would be David's netfs helpers and sending
> concurrent requests if you use cache, but that's not merged yet either
> so it'll be a few cycles as well.

So right now the Linux client always handles just one request at a time; it
sends a 9p request and waits for its response before processing the next
request?

If so, is there a reason to limit the planned concurrent request handling
feature to one of the cached modes? I mean, ordering of requests is already
handled on the 9p server side, so the client could just pass all messages
through in a lightweight way and assume the server takes care of it.

Best regards,
Christian Schoenebeck
