From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: jose.carlos.venegas.munoz@intel.com, qemu-devel@nongnu.org,
cdupontd@redhat.com, virtio-fs-list <virtio-fs@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
archana.m.shinde@intel.com
Subject: Re: tools/virtiofs: Multi threading seems to hurt performance
Date: Mon, 21 Sep 2020 16:32:43 +0100
Message-ID: <20200921153243.GK3221@work-vm>
In-Reply-To: <20200918213436.GA3520@redhat.com>
Hi,
I've been doing some of my own perf tests and I think I agree
about the thread pool size; my test is a kernel build
and I've tried a bunch of different options.
My config:
Host: 16 core AMD EPYC (32 thread), 128G RAM,
5.9.0-rc4 kernel, rhel 8.2ish userspace.
5.1.0 qemu/virtiofsd built from git.
Guest: Fedora 32 from cloud image with just enough extra installed for
a kernel build.
Linux v5.8 is git cloned and checked out into /dev/shm/linux on the host,
fresh before each test. Then log in to the guest and run (scripted below):
make defconfig; time make -j 16 bzImage; make clean; time make -j 16 bzImage
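For reference, here's the guest-side run as a small script (a sketch of
what I type by hand; it assumes the export is already mounted on /mnt):

cd /mnt
make defconfig
time make -j 16 bzImage   # the 'real' time from this run is what's reported below
make clean
time make -j 16 bzImage   # second build; doesn't vary much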
The numbers below are the 'real' time in the guest from the initial make
(the subsequent makes don't vary much).
Below are the details of what each of these means, but here are the
numbers first:
virtiofsdefault          4m0.978s
9pdefault                9m41.660s
virtiofscache=none      10m29.700s
9pmmappass               9m30.047s
9pmbigmsize             12m4.208s
9pmsecnone               9m21.363s
virtiofscache=noneT1     7m17.494s
virtiofsdefaultT1        3m43.326s
So the winner there by far is 'virtiofsdefaultT1' - that's the default
virtiofs settings but with --thread-pool-size=1 - so yes, dropping to a
single thread gives a small benefit over the default.
But interestingly the cache=none virtiofs performance is pretty bad,
and --thread-pool-size=1 makes a BIG improvement on that (numbers below).
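Putting rough numbers on that (just the totals above converted to
seconds; a quick awk sketch):

# percentage improvement from --thread-pool-size=1, using the times above
awk 'BEGIN {
    printf "virtiofsdefaultT1 vs virtiofsdefault:       %.1f%% faster\n", 100*(1 - 223.326/240.978)
    printf "virtiofscache=noneT1 vs virtiofscache=none: %.1f%% faster\n", 100*(1 - 437.494/629.700)
}'
# prints ~7.3% for the default case and ~30.5% for cache=none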
virtiofsdefault:
./virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/linux
./x86_64-softmmu/qemu-system-x86_64 -M pc,memory-backend=mem,accel=kvm -smp 8 -cpu host -m 32G,maxmem=64G,slots=1 -object memory-backend-memfd,id=mem,size=32G,share=on -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=kernel
mount -t virtiofs kernel /mnt
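(A sanity check, not part of the timed run: confirm in the guest that
/mnt really is virtiofs with the expected tag:)

grep virtiofs /proc/mounts
# expect a line roughly like: kernel /mnt virtiofs rw,relatime 0 0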
9pdefault:
./x86_64-softmmu/qemu-system-x86_64 -M pc,accel=kvm -smp 8 -cpu host -m 32G -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -virtfs local,path=/dev/shm/linux,mount_tag=kernel,security_model=passthrough
mount -t 9p -o trans=virtio kernel /mnt -oversion=9p2000.L
virtiofscache=none:
./virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/linux -o cache=none
./x86_64-softmmu/qemu-system-x86_64 -M pc,memory-backend=mem,accel=kvm -smp 8 -cpu host -m 32G,maxmem=64G,slots=1 -object memory-backend-memfd,id=mem,size=32G,share=on -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=kernel
mount -t virtiofs kernel /mnt
9pmmappass:
./x86_64-softmmu/qemu-system-x86_64 -M pc,accel=kvm -smp 8 -cpu host -m 32G -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -virtfs local,path=/dev/shm/linux,mount_tag=kernel,security_model=passthrough
mount -t 9p -o trans=virtio kernel /mnt -oversion=9p2000.L,cache=mmap
9pmbigmsize:
./x86_64-softmmu/qemu-system-x86_64 -M pc,accel=kvm -smp 8 -cpu host -m 32G -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -virtfs local,path=/dev/shm/linux,mount_tag=kernel,security_model=passthrough
mount -t 9p -o trans=virtio kernel /mnt -oversion=9p2000.L,cache=mmap,msize=1048576
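(Again as a sanity check, one can read back the 9p options that were
actually negotiated - the big msize should show up here if it took
effect:)

grep 9p /proc/mounts
# the options field should include trans=virtio and msize=1048576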
9pmsecnone:
./x86_64-softmmu/qemu-system-x86_64 -M pc,accel=kvm -smp 8 -cpu host -m 32G -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -virtfs local,path=/dev/shm/linux,mount_tag=kernel,security_model=none
mount -t 9p -o trans=virtio kernel /mnt -oversion=9p2000.L
virtiofscache=noneT1:
./virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/linux -o cache=none --thread-pool-size=1
(qemu command line as for virtiofscache=none)
mount -t virtiofs kernel /mnt
virtiofsdefaultT1:
./virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/linux --thread-pool-size=1
./x86_64-softmmu/qemu-system-x86_64 -M pc,memory-backend=mem,accel=kvm -smp 8 -cpu host -m 32G,maxmem=64G,slots=1 -object memory-backend-memfd,id=mem,size=32G,share=on -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic -chardev socket,id=char0,path=/tmp/vhostqemu -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=kernel
mount -t virtiofs kernel /mnt
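For anyone reproducing this: virtiofsd has to be up and listening on the
socket before qemu starts, or qemu will fail to connect to the chardev.
A minimal wrapper (a sketch; the sleep is crude, a loop on 'test -S'
would be nicer):

#!/bin/sh
# start virtiofsd first so the vhost-user socket exists when qemu connects
./virtiofsd --socket-path=/tmp/vhostqemu -o source=/dev/shm/linux --thread-pool-size=1 &
VFSD=$!
sleep 1   # give virtiofsd time to create /tmp/vhostqemu
./x86_64-softmmu/qemu-system-x86_64 -M pc,memory-backend=mem,accel=kvm \
  -smp 8 -cpu host -m 32G,maxmem=64G,slots=1 \
  -object memory-backend-memfd,id=mem,size=32G,share=on \
  -drive if=virtio,file=/home/images/f-32-kernel.qcow2 -nographic \
  -chardev socket,id=char0,path=/tmp/vhostqemu \
  -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=kernel
kill $VFSD 2>/dev/null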
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK