From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: virtio-fs-list <virtio-fs@redhat.com>,
	qemu-devel@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: tools/virtiofs: Multi threading seems to hurt performance
Date: Mon, 21 Sep 2020 09:50:19 +0100	[thread overview]
Message-ID: <20200921085019.GB3221@work-vm> (raw)
In-Reply-To: <20200918213436.GA3520@redhat.com>

* Vivek Goyal (vgoyal@redhat.com) wrote:
> Hi All,
> 
> virtiofsd's default thread pool size is 64. To me it feels that, in most
> cases, a thread pool size of 1 performs better than 64.
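> 
> (For anyone wanting to reproduce this: the pool size is set on the
> virtiofsd command line. A minimal sketch; the socket path and source
> directory below are placeholders for your own setup:
> 
>   ./virtiofsd --socket-path=/tmp/vhostqemu -o source=/mnt/shared \
>       -o cache=auto --thread-pool-size=1
> )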
> 
> I ran virtiofs-tests.
> 
> https://github.com/rhvgoyal/virtiofs-tests
> 
> And here are the comparison results. To me it seems that we should
> switch to 1 thread by default (till we can figure out how to make
> multi-threaded performance better even when a single process is doing
> I/O in the client).
> 
> I am especially interested in getting better performance for a single
> process in the client. If that suffers, then it is pretty bad.
> 
> Look especially at the randread, randwrite and seqwrite numbers; seqread
> seems pretty good anyway.
> 
> If I don't run the whole test suite and just run the randread-psync
> job, my throughput jumps from around 40MB/s to 60MB/s. That's a huge
> jump, I would say.
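> 
> (randread-psync is a small fio job; roughly, and assuming 4k blocks,
> a 4g file and a placeholder mount directory for illustration --
> the exact parameters in virtiofs-tests may differ:
> 
>   [randread-psync]
>   ioengine=psync
>   rw=randread
>   bs=4k
>   size=4g
>   directory=/mnt/virtiofs
> )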
> 
> Thoughts?

What's your host setup; how many cores has the host got and how many did
you give the guest?
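(i.e. 'nproc' on the host, and whatever you passed to qemu's -smp)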

Dave

> Thanks
> Vivek
> 
> 
> NAME                    WORKLOAD                Bandwidth       IOPS            
> cache-auto              seqread-psync           690(MiB/s)      172k            
> cache-auto-1-thread     seqread-psync           729(MiB/s)      182k            
> 
> cache-auto              seqread-psync-multi     2578(MiB/s)     644k            
> cache-auto-1-thread     seqread-psync-multi     2597(MiB/s)     649k            
> 
> cache-auto              seqread-mmap            660(MiB/s)      165k            
> cache-auto-1-thread     seqread-mmap            672(MiB/s)      168k            
> 
> cache-auto              seqread-mmap-multi      2499(MiB/s)     624k            
> cache-auto-1-thread     seqread-mmap-multi      2618(MiB/s)     654k            
> 
> cache-auto              seqread-libaio          286(MiB/s)      71k             
> cache-auto-1-thread     seqread-libaio          260(MiB/s)      65k             
> 
> cache-auto              seqread-libaio-multi    1508(MiB/s)     377k            
> cache-auto-1-thread     seqread-libaio-multi    986(MiB/s)      246k            
> 
> cache-auto              randread-psync          35(MiB/s)       9191            
> cache-auto-1-thread     randread-psync          55(MiB/s)       13k             
> 
> cache-auto              randread-psync-multi    179(MiB/s)      44k             
> cache-auto-1-thread     randread-psync-multi    209(MiB/s)      52k             
> 
> cache-auto              randread-mmap           32(MiB/s)       8273            
> cache-auto-1-thread     randread-mmap           50(MiB/s)       12k             
> 
> cache-auto              randread-mmap-multi     161(MiB/s)      40k             
> cache-auto-1-thread     randread-mmap-multi     185(MiB/s)      46k             
> 
> cache-auto              randread-libaio         268(MiB/s)      67k             
> cache-auto-1-thread     randread-libaio         254(MiB/s)      63k             
> 
> cache-auto              randread-libaio-multi   256(MiB/s)      64k             
> cache-auto-1-thread     randread-libaio-multi   155(MiB/s)      38k             
> 
> cache-auto              seqwrite-psync          23(MiB/s)       6026            
> cache-auto-1-thread     seqwrite-psync          30(MiB/s)       7925            
> 
> cache-auto              seqwrite-psync-multi    100(MiB/s)      25k             
> cache-auto-1-thread     seqwrite-psync-multi    154(MiB/s)      38k             
> 
> cache-auto              seqwrite-mmap           343(MiB/s)      85k             
> cache-auto-1-thread     seqwrite-mmap           355(MiB/s)      88k             
> 
> cache-auto              seqwrite-mmap-multi     408(MiB/s)      102k            
> cache-auto-1-thread     seqwrite-mmap-multi     438(MiB/s)      109k            
> 
> cache-auto              seqwrite-libaio         41(MiB/s)       10k             
> cache-auto-1-thread     seqwrite-libaio         65(MiB/s)       16k             
> 
> cache-auto              seqwrite-libaio-multi   137(MiB/s)      34k             
> cache-auto-1-thread     seqwrite-libaio-multi   214(MiB/s)      53k             
> 
> cache-auto              randwrite-psync         22(MiB/s)       5801            
> cache-auto-1-thread     randwrite-psync         30(MiB/s)       7927            
> 
> cache-auto              randwrite-psync-multi   100(MiB/s)      25k             
> cache-auto-1-thread     randwrite-psync-multi   151(MiB/s)      37k             
> 
> cache-auto              randwrite-mmap          31(MiB/s)       7984            
> cache-auto-1-thread     randwrite-mmap          55(MiB/s)       13k             
> 
> cache-auto              randwrite-mmap-multi    124(MiB/s)      31k             
> cache-auto-1-thread     randwrite-mmap-multi    213(MiB/s)      53k             
> 
> cache-auto              randwrite-libaio        40(MiB/s)       10k             
> cache-auto-1-thread     randwrite-libaio        64(MiB/s)       16k             
> 
> cache-auto              randwrite-libaio-multi  139(MiB/s)      34k             
> cache-auto-1-thread     randwrite-libaio-multi  212(MiB/s)      53k             
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



Thread overview: 55+ messages

2020-09-18 21:34 tools/virtiofs: Multi threading seems to hurt performance Vivek Goyal
2020-09-21  8:39 ` Stefan Hajnoczi
2020-09-21 13:39   ` Vivek Goyal
2020-09-21 16:57     ` Stefan Hajnoczi
2020-09-21  8:50 ` Dr. David Alan Gilbert [this message]
2020-09-21 13:35   ` Vivek Goyal
2020-09-21 14:08     ` Daniel P. Berrangé
2020-09-21 15:32 ` Dr. David Alan Gilbert
2020-09-22 10:25   ` Dr. David Alan Gilbert
2020-09-22 17:47     ` Vivek Goyal
2020-09-24 21:33       ` Venegas Munoz, Jose Carlos
2020-09-24 22:10         ` virtiofs vs 9p performance(Re: tools/virtiofs: Multi threading seems to hurt performance) Vivek Goyal
2020-09-25  8:06           ` virtiofs vs 9p performance Christian Schoenebeck
2020-09-25 13:13             ` Vivek Goyal
2020-09-25 15:47               ` Christian Schoenebeck
2021-02-19 16:08             ` Can not set high msize with virtio-9p (Was: Re: virtiofs vs 9p performance) Vivek Goyal
2021-02-19 17:33               ` Christian Schoenebeck
2021-02-19 19:01                 ` Vivek Goyal
2021-02-20 15:38                   ` Christian Schoenebeck
2021-02-22 12:18                     ` Greg Kurz
2021-02-22 15:08                       ` Christian Schoenebeck
2021-02-22 17:11                         ` Greg Kurz
2021-02-23 13:39                           ` Christian Schoenebeck
2021-02-23 14:07                             ` Michael S. Tsirkin
2021-02-24 15:16                               ` Christian Schoenebeck
2021-02-24 15:43                                 ` Dominique Martinet
2021-02-26 13:49                                   ` Christian Schoenebeck
2021-02-27  0:03                                     ` Dominique Martinet
2021-03-03 14:04                                       ` Christian Schoenebeck
2021-03-03 14:50                                         ` Dominique Martinet
2021-03-05 14:57                                           ` Christian Schoenebeck
2020-09-25 12:41           ` virtiofs vs 9p performance(Re: tools/virtiofs: Multi threading seems to hurt performance) Dr. David Alan Gilbert
2020-09-25 13:04             ` Christian Schoenebeck
2020-09-25 13:05               ` Dr. David Alan Gilbert
2020-09-25 16:05                 ` Christian Schoenebeck
2020-09-25 16:33                   ` Christian Schoenebeck
2020-09-25 18:51                   ` Dr. David Alan Gilbert
2020-09-27 12:14                     ` Christian Schoenebeck
2020-09-29 13:03                       ` Vivek Goyal
2020-09-29 13:28                         ` Christian Schoenebeck
2020-09-29 13:49                           ` Vivek Goyal
2020-09-29 13:59                             ` Christian Schoenebeck
2020-09-29 13:17             ` Vivek Goyal
2020-09-29 13:49               ` [Virtio-fs] " Miklos Szeredi
2020-09-29 14:01                 ` Vivek Goyal
2020-09-29 14:54                   ` Miklos Szeredi
2020-09-29 15:28                 ` Vivek Goyal
2020-09-25 12:11       ` tools/virtiofs: Multi threading seems to hurt performance Dr. David Alan Gilbert
2020-09-25 13:11         ` Vivek Goyal
2020-09-21 20:16 ` Vivek Goyal
2020-09-22 11:09   ` Dr. David Alan Gilbert
2020-09-22 22:56     ` Vivek Goyal
2020-09-23 12:50 ` [Virtio-fs] " Chirantan Ekbote
2020-09-23 12:59   ` Vivek Goyal
2020-09-25 11:35   ` Dr. David Alan Gilbert
