From: Feng Li <lifeng1519@gmail.com>
To: amit@kernel.org
Cc: dgilbert@redhat.com, virtualization@lists.linux-foundation.org,
	linux-kernel <linux-kernel@vger.kernel.org>,
	qemu-discuss@nongnu.org, qemu-devel@nongnu.org,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>
Subject: Re: [Qemu-devel] virtio-console downgrade the virtio-pci-blk performance
Date: Tue, 16 Oct 2018 10:26:08 +0800
Message-ID: <CAEK8JBDpy+AvCfEZ5UAM+FojFBK2Cy1BqsrctkX-6QD6jvTK4w@mail.gmail.com>
In-Reply-To: <20181015185115.GA3247@grmbl.mre>

Hi Amit,

Thanks for your response.

See inline comments.

Amit Shah <amit@kernel.org> wrote on Tue, Oct 16, 2018 at 2:51 AM:
>
> On (Thu) 11 Oct 2018 [18:15:41], Feng Li wrote:
> > Add Amit Shah.
> >
> > After some tests, we found:
> > - the virtio-serial port count is inversely proportional to the iSCSI
> > virtio-blk-pci performance.
> > If we set the virtio-serial ports to 2 ("<controller
> > type='virtio-serial' index='0' ports='2'/>"), the performance downgrade
> > is minimal.
>
> If you use multiple virtio-net (or blk) devices -- just register, not
> necessarily use -- does that also bring the performance down?  I

Yes. Even if we just register the virtio-serial device and do not use it, it
brings the virtio-blk performance down.
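
(For reference: if I understand the libvirt mapping correctly, the
ports='2' setting above corresponds to the max_ports property of
virtio-serial-pci on the QEMU command line, something like:

  # limit the controller to 2 ports (max_ports defaults to 31)
  -device virtio-serial-pci,id=virtio-serial0,max_ports=2,bus=pci.0,addr=0x5
)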

> suspect it's the number of interrupts that get allocated for the
> ports.  Also, could you check if MSI is enabled?  Can you try with and
> without?  Can you also reproduce if you have multiple virtio-serial
> controllers with 2 ports each (totalling up to whatever number that
> reproduces the issue).

This is the full cmd:
/usr/libexec/qemu-kvm -name
guest=6a798fde-c5d0-405a-b495-f2726f9d12d5,debug-threads=on -machine
pc-i440fx-rhel7.5.0,accel=kvm,usb=off,dump-guest-core=off -cpu host -m
size=2097152k,slots=255,maxmem=4194304000k   -uuid
702bb5bc-2aa3-4ded-86eb-7b9cf5c1e2d9 -drive
file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=74,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-drive file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=182,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=2,write-cache=on
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -vnc
0.0.0.0:100 -netdev user,id=fl.1,hostfwd=tcp::5555-:22 -device
e1000,netdev=fl.1 -msg timestamp=on   -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5

qemu version: qemu-kvm-2.10.0-21

I believe MSI is enabled, based on these logs:
[    2.230194] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    3.556376] virtio-pci 0000:00:05.0: irq 24 for MSI/MSI-X
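
(To double-check how many vectors are actually allocated, I can run the
following inside the guest; 00:05.0 is the virtio-serial controller from
the command line above:

  # count the MSI/MSI-X interrupts assigned to virtio devices
  grep virtio /proc/interrupts
  # confirm MSI-X is enabled on the virtio-serial controller
  lspci -vv -s 00:05.0 | grep -i msi
)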

The issue can be reproduced easily with one virtio-serial controller with 31
ports, which is the default port number.
I don't think it's necessary to reproduce it with multiple controllers.
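
(To test "with and without" MSI as you suggested, I think I can either boot
the guest with pci=nomsi on the kernel command line, or disable MSI-X for
just this device on the QEMU side; I haven't re-measured yet, but the latter
would look something like:

  # request no MSI-X vectors for the virtio-serial controller
  -device virtio-serial-pci,id=virtio-serial0,vectors=0,bus=pci.0,addr=0x5
)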

>
>                 Amit
>
> >
> > - when using a local disk or RAM disk as the virtio-blk-pci backend, the
> > performance downgrade is still obvious.
> >
> >
> > Could anyone offer some help with this issue?
> >
> > > Feng Li <lifeng1519@gmail.com> wrote on Mon, Oct 1, 2018 at 10:58 PM:
> > >
> > > Hi Dave,
> > > My comments are in-line.
> > >
> > > > Dr. David Alan Gilbert <dgilbert@redhat.com> wrote on Mon, Oct 1, 2018 at 7:41 PM:
> > > >
> > > > * Feng Li (lifeng1519@gmail.com) wrote:
> > > > > Hi,
> > > > > I found an obvious performance downgrade when virtio-console is combined
> > > > > with virtio-blk-pci.
> > > > >
> > > > > This phenomenon exists in nearly all Qemu versions and all Linux
> > > > > (CentOS7, Fedora 28, Ubuntu 18.04) distros.
> > > > >
> > > > > This is a disk cmd:
> > > > > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > > > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > > > >
> > > > > If I add "-device
> > > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5  ", the virtio
> > > > > disk 4k iops (randread/randwrite) would downgrade from 60k to 40k.
> > > > >
> > > > > In the VM, if I rmmod virtio-console, the performance goes back to normal.
> > > > >
> > > > > Any idea about this issue?
> > > > >
> > > > > I don't know whether this is a QEMU issue or a kernel issue.
> > > >
> > > > It sounds odd;  can you provide more details on:
> > > >   a) The benchmark you're using.
> > > I'm using fio, the config is:
> > > [global]
> > > ioengine=libaio
> > > iodepth=128
> > > runtime=120
> > > time_based
> > > direct=1
> > >
> > > [randread]
> > > stonewall
> > > bs=4k
> > > filename=/dev/vdb
> > > rw=randread
> > >
> > > >   b) the host and the guest config (number of cpus etc)
> > > The qemu cmd is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
> > > --enable-kvm -cpu host -smp 8
> > > or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
> > > host -smp 8
> > >
> > > The result is the same.
> > >
> > > >   c) Why are you running it with iscsi back to the same host - why not
> > > >      just simplify the test back to a simple file?
> > > >
> > >
> > > Because my iSCSI target can deliver high IOPS.
> > > With a slow disk, the performance downgrade would not be so obvious.
> > > It's easy to see; you could try it yourself.
> > >
> > >
> > > > Dave
> > > >
> > > > >
> > > > > Thanks in advance.
> > > > > --
> > > > > Thanks and Best Regards,
> > > > > Alex
> > > > >
> > > > --
> > > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > >
> > >
> > >
> > > --
> > > Thanks and Best Regards,
> > > Feng Li(Alex)
> >
> >
> >
> > --
> > Thanks and Best Regards,
> > Feng Li(Alex)
>
>                 Amit
> --
> http://amitshah.net/



-- 
Thanks and Best Regards,
Feng Li(Alex)
