qemu-devel.nongnu.org archive mirror
From: "Venegas Munoz, Jose Carlos" <jose.carlos.venegas.munoz@intel.com>
To: Vivek Goyal <vgoyal@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: virtio-fs-list <virtio-fs@redhat.com>,
	"Shinde, Archana M" <archana.m.shinde@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"cdupontd@redhat.com" <cdupontd@redhat.com>
Subject: Re: tools/virtiofs: Multi threading seems to hurt performance
Date: Thu, 24 Sep 2020 21:33:01 +0000	[thread overview]
Message-ID: <46D726A6-72F3-40FE-9382-A189513F783D@intel.com> (raw)
In-Reply-To: <20200922174733.GD57620@redhat.com>

[-- Attachment #1: Type: text/plain, Size: 4115 bytes --]

Hi Folks,

Sorry for the delay in explaining how to reproduce the `fio` data.

I have some code to automate testing across multiple Kata configs and to collect info such as:
- kata-env output, the Kata configuration.toml, the qemu command line, and the virtiofsd command line.

See: 
https://github.com/jcvenegas/mrunner/
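
For reference, a minimal fio invocation in the spirit of what we run inside the guest (the exact job files are in the attached results; the mount point, size, and runtime here are illustrative placeholders, not the actual job):

```shell
# 4k random-read job against the shared filesystem mounted in the guest.
# /mnt/shared stands in for the virtiofs/9p mount point.
fio --name=randread-4k \
    --directory=/mnt/shared \
    --rw=randread --bs=4k --size=1g \
    --ioengine=libaio --direct=1 \
    --runtime=30 --time_based
```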


Last time we agreed to narrow down the cases and configs to compare virtiofs and 9pfs.

The configs were the following:

- qemu + virtiofs (cache=auto, dax=0), a.k.a. `kata-qemu-virtiofs`, WITHOUT xattr
- qemu + 9pfs, a.k.a. `kata-qemu`
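
Roughly, the two configs correspond to invocations like the following (a sketch using the standard virtiofsd/qemu options; the actual command lines Kata generates are what mrunner collects, and the paths here are placeholders):

```shell
# Config 1: virtiofs with cache=auto (C virtiofsd from the qemu tree)
./virtiofsd --socket-path=/tmp/vhostqemu -o source=/path/to/shared -o cache=auto &

qemu-system-x86_64 \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,chardev=char0,tag=kataShared \
    -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on \
    -numa node,memdev=mem \
    ...   # rest of the guest config

# Config 2: 9pfs; mmap mode is selected guest-side at mount time
qemu-system-x86_64 \
    -virtfs local,path=/path/to/shared,mount_tag=kataShared,security_model=none,id=fs0 \
    ...
# In the guest:
#   mount -t 9p -o trans=virtio,version=9p2000.L,cache=mmap kataShared /mnt
```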

Please take a look at the HTML and raw results attached to this mail.

## Can I say that the current status is:
- As David's tests and Vivek's analysis point out, for the fio workload you are using, the best candidate seems to be cache=none.
   - In the comparison I used cache=auto, as Vivek suggested; this makes sense, as it seems that will be the default for Kata.
   - Even though cache=none works better in this case, can I assume that cache=auto with dax=0 will be better than any 9pfs config (once we find the root cause)?

- Vivek is taking a look at 9pfs's mmap mode, to see how it differs from the current virtiofs implementation. This is what we use by default for 9pfs in Kata.

## What should be next in the debugging/testing?

- Should I try to narrow things down by testing with qemu only (without the Kata stack)?
- Should I first try any patch you already have?
- Should I try a qemu that is not statically built?
- Should I repeat the same tests with thread-pool-size=1?
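
For the thread-pool-size=1 run, I understand it is just a matter of adding one option to the virtiofsd command line (a sketch; the socket path and source directory are placeholders):

```shell
# Same virtiofsd invocation, but with the request thread pool limited to 1
./virtiofsd --socket-path=/tmp/vhostqemu \
    -o source=/path/to/shared -o cache=auto \
    --thread-pool-size=1
```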

Please let me know how I can help.

Cheers.

On 22/09/20 12:47, "Vivek Goyal" <vgoyal@redhat.com> wrote:

    On Tue, Sep 22, 2020 at 11:25:31AM +0100, Dr. David Alan Gilbert wrote:
    > * Dr. David Alan Gilbert (dgilbert@redhat.com) wrote:
    > > Hi,
    > >   I've been doing some of my own perf tests and I think I agree
    > > about the thread pool size;  my test is a kernel build
    > > and I've tried a bunch of different options.
    > > 
    > > My config:
    > >   Host: 16 core AMD EPYC (32 thread), 128G RAM,
    > >      5.9.0-rc4 kernel, rhel 8.2ish userspace.
    > >   5.1.0 qemu/virtiofsd built from git.
    > >   Guest: Fedora 32 from cloud image with just enough extra installed for
    > > a kernel build.
    > > 
    > >   git cloned and checked out v5.8 of Linux into /dev/shm/linux on the host
    > > fresh before each test.  Then log into the guest, make defconfig,
    > > time make -j 16 bzImage, make clean; time make -j 16 bzImage.
    > > The numbers below are the 'real' time in the guest from the initial make
    > > (the subsequent makes don't vary much)
    > > 
    > > Below are the details of what each of these means, but here are the
    > > numbers first
    > > 
    > > virtiofsdefault        4m0.978s
    > > 9pdefault              9m41.660s
    > > virtiofscache=none    10m29.700s
    > > 9pmmappass             9m30.047s
    > > 9pmbigmsize           12m4.208s
    > > 9pmsecnone             9m21.363s
    > > virtiofscache=noneT1   7m17.494s
    > > virtiofsdefaultT1      3m43.326s
    > > 
    > > So the winner there by far is the 'virtiofsdefaultT1' - that's
    > > the default virtiofs settings, but with --thread-pool-size=1 - so
    > > yes it gives a small benefit.
    > > But interestingly the cache=none virtiofs performance is pretty bad,
    > > but thread-pool-size=1 on that makes a BIG improvement.
    > 
    > Here are fio runs that Vivek asked me to run in my same environment
    > (there are some 0's in some of the mmap cases, and I've not investigated
    > why yet).

    cache=none does not allow mmap in case of virtiofs. That's when you
    are seeing 0.

    > virtiofs is looking good here in I think all of the cases;
    > there's some division over which config; cache=none
    > seems faster in some cases, which surprises me.

    I know cache=none is faster in case of write workloads. It forces
    direct write where we don't call file_remove_privs(). While cache=auto
    goes through file_remove_privs() and that adds a GETXATTR request to
    every WRITE request.

    Vivek



[-- Attachment #2: results.tar.gz --]
[-- Type: application/x-gzip, Size: 18156 bytes --]

[-- Attachment #3: vitiofs 9pfs fio comparsion.html --]
[-- Type: text/html, Size: 29758 bytes --]


Thread overview: 55+ messages
2020-09-18 21:34 tools/virtiofs: Multi threading seems to hurt performance Vivek Goyal
2020-09-21  8:39 ` Stefan Hajnoczi
2020-09-21 13:39   ` Vivek Goyal
2020-09-21 16:57     ` Stefan Hajnoczi
2020-09-21  8:50 ` Dr. David Alan Gilbert
2020-09-21 13:35   ` Vivek Goyal
2020-09-21 14:08     ` Daniel P. Berrangé
2020-09-21 15:32 ` Dr. David Alan Gilbert
2020-09-22 10:25   ` Dr. David Alan Gilbert
2020-09-22 17:47     ` Vivek Goyal
2020-09-24 21:33       ` Venegas Munoz, Jose Carlos [this message]
2020-09-24 22:10         ` virtiofs vs 9p performance(Re: tools/virtiofs: Multi threading seems to hurt performance) Vivek Goyal
2020-09-25  8:06           ` virtiofs vs 9p performance Christian Schoenebeck
2020-09-25 13:13             ` Vivek Goyal
2020-09-25 15:47               ` Christian Schoenebeck
2021-02-19 16:08             ` Can not set high msize with virtio-9p (Was: Re: virtiofs vs 9p performance) Vivek Goyal
2021-02-19 17:33               ` Christian Schoenebeck
2021-02-19 19:01                 ` Vivek Goyal
2021-02-20 15:38                   ` Christian Schoenebeck
2021-02-22 12:18                     ` Greg Kurz
2021-02-22 15:08                       ` Christian Schoenebeck
2021-02-22 17:11                         ` Greg Kurz
2021-02-23 13:39                           ` Christian Schoenebeck
2021-02-23 14:07                             ` Michael S. Tsirkin
2021-02-24 15:16                               ` Christian Schoenebeck
2021-02-24 15:43                                 ` Dominique Martinet
2021-02-26 13:49                                   ` Christian Schoenebeck
2021-02-27  0:03                                     ` Dominique Martinet
2021-03-03 14:04                                       ` Christian Schoenebeck
2021-03-03 14:50                                         ` Dominique Martinet
2021-03-05 14:57                                           ` Christian Schoenebeck
2020-09-25 12:41           ` virtiofs vs 9p performance(Re: tools/virtiofs: Multi threading seems to hurt performance) Dr. David Alan Gilbert
2020-09-25 13:04             ` Christian Schoenebeck
2020-09-25 13:05               ` Dr. David Alan Gilbert
2020-09-25 16:05                 ` Christian Schoenebeck
2020-09-25 16:33                   ` Christian Schoenebeck
2020-09-25 18:51                   ` Dr. David Alan Gilbert
2020-09-27 12:14                     ` Christian Schoenebeck
2020-09-29 13:03                       ` Vivek Goyal
2020-09-29 13:28                         ` Christian Schoenebeck
2020-09-29 13:49                           ` Vivek Goyal
2020-09-29 13:59                             ` Christian Schoenebeck
2020-09-29 13:17             ` Vivek Goyal
2020-09-29 13:49               ` [Virtio-fs] " Miklos Szeredi
2020-09-29 14:01                 ` Vivek Goyal
2020-09-29 14:54                   ` Miklos Szeredi
2020-09-29 15:28                 ` Vivek Goyal
2020-09-25 12:11       ` tools/virtiofs: Multi threading seems to hurt performance Dr. David Alan Gilbert
2020-09-25 13:11         ` Vivek Goyal
2020-09-21 20:16 ` Vivek Goyal
2020-09-22 11:09   ` Dr. David Alan Gilbert
2020-09-22 22:56     ` Vivek Goyal
2020-09-23 12:50 ` [Virtio-fs] " Chirantan Ekbote
2020-09-23 12:59   ` Vivek Goyal
2020-09-25 11:35   ` Dr. David Alan Gilbert
