linux-kernel.vger.kernel.org archive mirror
* virtio-console downgrades the virtio-blk-pci performance
@ 2018-09-30  5:25 Feng Li
  2018-10-01 11:41 ` [Qemu-devel] " Dr. David Alan Gilbert
  0 siblings, 1 reply; 6+ messages in thread
From: Feng Li @ 2018-09-30  5:25 UTC (permalink / raw)
  To: linux-kernel, qemu-discuss, qemu-devel, linux-scsi

Hi,
I found an obvious performance downgrade when virtio-console is combined
with virtio-blk-pci.

This phenomenon exists in nearly all QEMU versions and in all the Linux
distros I tested (CentOS 7, Fedora 28, Ubuntu 18.04).

This is the disk cmd:
-drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on

If I add "-device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.

In the VM, if I rmmod virtio_console, the performance goes back to normal.
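
A minimal sketch of the toggle inside the guest (assuming the mainline
module name, virtio_console):

  modprobe -r virtio_console   # remove the driver; IOPS recover
  modprobe virtio_console      # load it again; IOPS drop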

Any idea about this issue?

I don't know whether this is a QEMU issue or a kernel issue.


Thanks in advance.
-- 
Thanks and Best Regards,
Alex


* Re: [Qemu-devel] virtio-console downgrades the virtio-blk-pci performance
  2018-09-30  5:25 virtio-console downgrades the virtio-blk-pci performance Feng Li
@ 2018-10-01 11:41 ` Dr. David Alan Gilbert
  2018-10-01 14:58   ` Feng Li
  0 siblings, 1 reply; 6+ messages in thread
From: Dr. David Alan Gilbert @ 2018-10-01 11:41 UTC (permalink / raw)
  To: Feng Li; +Cc: linux-kernel, qemu-discuss, qemu-devel, linux-scsi

* Feng Li (lifeng1519@gmail.com) wrote:
> Hi,
> I found an obvious performance downgrade when virtio-console is combined
> with virtio-blk-pci.
> 
> This phenomenon exists in nearly all QEMU versions and in all the Linux
> distros I tested (CentOS 7, Fedora 28, Ubuntu 18.04).
> 
> This is the disk cmd:
> -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> 
> If I add "-device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
> disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.
> 
> In the VM, if I rmmod virtio_console, the performance goes back to normal.
> 
> Any idea about this issue?
> 
> I don't know whether this is a QEMU issue or a kernel issue.

It sounds odd;  can you provide more details on:
  a) The benchmark you're using.
  b) the host and the guest config (number of cpus etc)
  c) Why are you running it with iscsi back to the same host - why not
     just simplify the test back to a simple file?

Dave

> 
> Thanks in advance.
> -- 
> Thanks and Best Regards,
> Alex
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] virtio-console downgrades the virtio-blk-pci performance
  2018-10-01 11:41 ` [Qemu-devel] " Dr. David Alan Gilbert
@ 2018-10-01 14:58   ` Feng Li
  2018-10-11 10:15     ` Feng Li
  0 siblings, 1 reply; 6+ messages in thread
From: Feng Li @ 2018-10-01 14:58 UTC (permalink / raw)
  To: dgilbert; +Cc: linux-kernel, qemu-discuss, qemu-devel, linux-scsi

Hi Dave,
My comments are in-line.

Dr. David Alan Gilbert <dgilbert@redhat.com> wrote on Mon, Oct 1, 2018 at 7:41 PM:
>
> * Feng Li (lifeng1519@gmail.com) wrote:
> > Hi,
> > I found an obvious performance downgrade when virtio-console is combined
> > with virtio-blk-pci.
> >
> > This phenomenon exists in nearly all QEMU versions and in all the Linux
> > distros I tested (CentOS 7, Fedora 28, Ubuntu 18.04).
> >
> > This is the disk cmd:
> > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> >
> > If I add "-device
> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
> > disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.
> >
> > In the VM, if I rmmod virtio_console, the performance goes back to normal.
> >
> > Any idea about this issue?
> >
> > I don't know whether this is a QEMU issue or a kernel issue.
>
> It sounds odd;  can you provide more details on:
>   a) The benchmark you're using.
I'm using fio; the config is:
[global]
ioengine=libaio
iodepth=128
runtime=120
time_based
direct=1

[randread]
stonewall
bs=4k
filename=/dev/vdb
rw=randread
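
I run the job in the guest against the virtio disk, e.g. (assuming the
config above is saved as randread.fio):

  fio randread.fio

The randwrite number comes from the same job with rw=randwrite.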

>   b) the host and the guest config (number of cpus etc)
The qemu cmd is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
--enable-kvm -cpu host -smp 8
or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
host -smp 8

The result is the same.

>   c) Why are you running it with iscsi back to the same host - why not
>      just simplify the test back to a simple file?
>

Because my iSCSI target can supply high IOPS.
With a slow disk, the performance downgrade would not be so obvious.
It's easy to see; you could try it.


> Dave
>
> >
> > Thanks in advance.
> > --
> > Thanks and Best Regards,
> > Alex
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



-- 
Thanks and Best Regards,
Feng Li(Alex)


* Re: [Qemu-devel] virtio-console downgrades the virtio-blk-pci performance
  2018-10-01 14:58   ` Feng Li
@ 2018-10-11 10:15     ` Feng Li
  2018-10-15 18:51       ` Amit Shah
  0 siblings, 1 reply; 6+ messages in thread
From: Feng Li @ 2018-10-11 10:15 UTC (permalink / raw)
  To: dgilbert, amit, virtualization
  Cc: linux-kernel, qemu-discuss, qemu-devel, linux-scsi

Adding Amit Shah.

After some tests, we found:
- The virtio-serial port count is inversely proportional to the iSCSI
virtio-blk-pci performance.
If we set the virtio-serial ports to 2 ("<controller
type='virtio-serial' index='0' ports='2'/>"), the performance downgrade
is minimal (see the command-line sketch below).

- Using a local disk/RAM disk as the virtio-blk-pci backend, the
performance downgrade is still obvious.
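
For reference, a sketch of the same port limit on a plain QEMU command
line; max_ports is the virtio-serial-pci property that libvirt's ports=
attribute maps to (the default is 31), if I read the device properties
right:

  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5,max_ports=2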


Could anyone help with this issue?

Feng Li <lifeng1519@gmail.com> wrote on Mon, Oct 1, 2018 at 10:58 PM:
>
> Hi Dave,
> My comments are in-line.
>
> Dr. David Alan Gilbert <dgilbert@redhat.com> wrote on Mon, Oct 1, 2018 at 7:41 PM:
> >
> > * Feng Li (lifeng1519@gmail.com) wrote:
> > > Hi,
> > > I found an obvious performance downgrade when virtio-console is combined
> > > with virtio-blk-pci.
> > >
> > > This phenomenon exists in nearly all QEMU versions and in all the Linux
> > > distros I tested (CentOS 7, Fedora 28, Ubuntu 18.04).
> > >
> > > This is the disk cmd:
> > > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > >
> > > If I add "-device
> > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
> > > disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.
> > >
> > > In the VM, if I rmmod virtio_console, the performance goes back to normal.
> > >
> > > Any idea about this issue?
> > >
> > > I don't know whether this is a QEMU issue or a kernel issue.
> >
> > It sounds odd;  can you provide more details on:
> >   a) The benchmark you're using.
> I'm using fio; the config is:
> [global]
> ioengine=libaio
> iodepth=128
> runtime=120
> time_based
> direct=1
>
> [randread]
> stonewall
> bs=4k
> filename=/dev/vdb
> rw=randread
>
> >   b) the host and the guest config (number of cpus etc)
> The qemu cmd is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
> --enable-kvm -cpu host -smp 8
> or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
> host -smp 8
>
> The result is the same.
>
> >   c) Why are you running it with iscsi back to the same host - why not
> >      just simplify the test back to a simple file?
> >
>
> Because my iSCSI target can supply high IOPS.
> With a slow disk, the performance downgrade would not be so obvious.
> It's easy to see; you could try it.
>
>
> > Dave
> >
> > >
> > > Thanks in advance.
> > > --
> > > Thanks and Best Regards,
> > > Alex
> > >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
>
>
> --
> Thanks and Best Regards,
> Feng Li(Alex)



--
Thanks and Best Regards,
Feng Li(Alex)


* Re: [Qemu-devel] virtio-console downgrades the virtio-blk-pci performance
  2018-10-11 10:15     ` Feng Li
@ 2018-10-15 18:51       ` Amit Shah
  2018-10-16  2:26         ` Feng Li
  0 siblings, 1 reply; 6+ messages in thread
From: Amit Shah @ 2018-10-15 18:51 UTC (permalink / raw)
  To: Feng Li
  Cc: dgilbert, amit, virtualization, linux-kernel, qemu-discuss,
	qemu-devel, linux-scsi

On (Thu) 11 Oct 2018 [18:15:41], Feng Li wrote:
> Adding Amit Shah.
> 
> After some tests, we found:
> - The virtio-serial port count is inversely proportional to the iSCSI
> virtio-blk-pci performance.
> If we set the virtio-serial ports to 2 ("<controller
> type='virtio-serial' index='0' ports='2'/>"), the performance downgrade
> is minimal.

If you use multiple virtio-net (or blk) devices -- just register, not
necessarily use -- does that also bring the performance down?  I
suspect it's the number of interrupts that get allocated for the
ports.  Also, could you check if MSI is enabled?  Can you try with and
without?  Can you also reproduce it with multiple virtio-serial
controllers with 2 ports each (totalling up to whatever number
reproduces the issue)?
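
Something like this from inside the guest would show the MSI state (a
sketch; adjust the PCI address to wherever the virtio-serial controller
sits):

  lspci -vv -s 00:05.0 | grep MSI                 # "MSI-X: Enable+" means on
  ls /sys/bus/pci/devices/0000:00:05.0/msi_irqs/  # vectors actually allocated

For the "without" case, booting the guest with pci=nomsi, or setting
vectors=0 on the virtio-serial-pci device, should both work (untested
suggestions).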

		Amit

> 
> - Using a local disk/RAM disk as the virtio-blk-pci backend, the
> performance downgrade is still obvious.
> 
> 
> Could anyone help with this issue?
> 
> Feng Li <lifeng1519@gmail.com> wrote on Mon, Oct 1, 2018 at 10:58 PM:
> >
> > Hi Dave,
> > My comments are in-line.
> >
> > Dr. David Alan Gilbert <dgilbert@redhat.com> wrote on Mon, Oct 1, 2018 at 7:41 PM:
> > >
> > > * Feng Li (lifeng1519@gmail.com) wrote:
> > > > Hi,
> > > > I found an obvious performance downgrade when virtio-console is combined
> > > > with virtio-blk-pci.
> > > >
> > > > This phenomenon exists in nearly all QEMU versions and in all the Linux
> > > > distros I tested (CentOS 7, Fedora 28, Ubuntu 18.04).
> > > >
> > > > This is the disk cmd:
> > > > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > > >
> > > > If I add "-device
> > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
> > > > disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.
> > > >
> > > > In the VM, if I rmmod virtio_console, the performance goes back to normal.
> > > >
> > > > Any idea about this issue?
> > > >
> > > > I don't know whether this is a QEMU issue or a kernel issue.
> > >
> > > It sounds odd;  can you provide more details on:
> > >   a) The benchmark you're using.
> > I'm using fio; the config is:
> > [global]
> > ioengine=libaio
> > iodepth=128
> > runtime=120
> > time_based
> > direct=1
> >
> > [randread]
> > stonewall
> > bs=4k
> > filename=/dev/vdb
> > rw=randread
> >
> > >   b) the host and the guest config (number of cpus etc)
> > The qemu cmd is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
> > --enable-kvm -cpu host -smp 8
> > or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
> > host -smp 8
> >
> > The result is the same.
> >
> > >   c) Why are you running it with iscsi back to the same host - why not
> > >      just simplify the test back to a simple file?
> > >
> >
> > Because my iSCSI target can supply high IOPS.
> > With a slow disk, the performance downgrade would not be so obvious.
> > It's easy to see; you could try it.
> >
> >
> > > Dave
> > >
> > > >
> > > > Thanks in advance.
> > > > --
> > > > Thanks and Best Regards,
> > > > Alex
> > > >
> > > --
> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
> >
> >
> > --
> > Thanks and Best Regards,
> > Feng Li(Alex)
> 
> 
> 
> --
> Thanks and Best Regards,
> Feng Li(Alex)

		Amit
-- 
http://amitshah.net/


* Re: [Qemu-devel] virtio-console downgrades the virtio-blk-pci performance
  2018-10-15 18:51       ` Amit Shah
@ 2018-10-16  2:26         ` Feng Li
  0 siblings, 0 replies; 6+ messages in thread
From: Feng Li @ 2018-10-16  2:26 UTC (permalink / raw)
  To: amit
  Cc: dgilbert, virtualization, linux-kernel, qemu-discuss, qemu-devel,
	linux-scsi

Hi Amit,

Thanks for your response.

See inline comments.

Amit Shah <amit@kernel.org> wrote on Tue, Oct 16, 2018 at 2:51 AM:
>
> On (Thu) 11 Oct 2018 [18:15:41], Feng Li wrote:
> > Adding Amit Shah.
> >
> > After some tests, we found:
> > - The virtio-serial port count is inversely proportional to the iSCSI
> > virtio-blk-pci performance.
> > If we set the virtio-serial ports to 2 ("<controller
> > type='virtio-serial' index='0' ports='2'/>"), the performance downgrade
> > is minimal.
>
> If you use multiple virtio-net (or blk) devices -- just register, not
> necessarily use -- does that also bring the performance down?  I

Yes. We just register the virtio-serial and don't use it, and it still
brings the virtio-blk performance down.

> suspect it's the number of interrupts that get allocated for the
> ports.  Also, could you check if MSI is enabled?  Can you try with and
> without?  Can you also reproduce it with multiple virtio-serial
> controllers with 2 ports each (totalling up to whatever number
> reproduces the issue)?

This is the full cmd:
/usr/libexec/qemu-kvm -name
guest=6a798fde-c5d0-405a-b495-f2726f9d12d5,debug-threads=on -machine
pc-i440fx-rhel7.5.0,accel=kvm,usb=off,dump-guest-core=off -cpu host -m
size=2097152k,slots=255,maxmem=4194304000k -uuid
702bb5bc-2aa3-4ded-86eb-7b9cf5c1e2d9 -drive
file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=74,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-drive file.driver=iscsi,file.portal=127.0.0.1:3260,file.target=iqn.2016-02.com.smartx:system:zbs-iscsi-datastore-1537958580215k,file.lun=182,file.transport=tcp,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=2,write-cache=on
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -vnc
0.0.0.0:100 -netdev user,id=fl.1,hostfwd=tcp::5555-:22 -device
e1000,netdev=fl.1 -msg timestamp=on -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5

qemu version: qemu-kvm-2.10.0-21

I guess MSI is enabled; I can see some logs:
[    2.230194] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    3.556376] virtio-pci 0000:00:05.0: irq 24 for MSI/MSI-X

The issue reproduces easily with one virtio-serial controller with 31
ports, which is the default port number.
I think it's not necessary to reproduce it with multiple controllers.
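
To count what the controller actually allocates in the guest (a sketch;
00:05.0 matches addr=0x5 in the cmd above):

  grep virtio /proc/interrupts                            # virtio interrupt lines
  ls /sys/bus/pci/devices/0000:00:05.0/msi_irqs/ | wc -l  # vectors for the controller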

>
>                 Amit
>
> >
> > - Using a local disk/RAM disk as the virtio-blk-pci backend, the
> > performance downgrade is still obvious.
> >
> >
> > Could anyone help with this issue?
> >
> > Feng Li <lifeng1519@gmail.com> wrote on Mon, Oct 1, 2018 at 10:58 PM:
> > >
> > > Hi Dave,
> > > My comments are in-line.
> > >
> > > Dr. David Alan Gilbert <dgilbert@redhat.com> wrote on Mon, Oct 1, 2018 at 7:41 PM:
> > > >
> > > > * Feng Li (lifeng1519@gmail.com) wrote:
> > > > > Hi,
> > > > > I found an obvious performance downgrade when virtio-console is combined
> > > > > with virtio-blk-pci.
> > > > >
> > > > > This phenomenon exists in nearly all QEMU versions and in all the Linux
> > > > > distros I tested (CentOS 7, Fedora 28, Ubuntu 18.04).
> > > > >
> > > > > This is the disk cmd:
> > > > > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > > > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > > > >
> > > > > If I add "-device
> > > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5", the virtio
> > > > > disk's 4k IOPS (randread/randwrite) drop from 60k to 40k.
> > > > >
> > > > > In the VM, if I rmmod virtio_console, the performance goes back to normal.
> > > > >
> > > > > Any idea about this issue?
> > > > >
> > > > > I don't know whether this is a QEMU issue or a kernel issue.
> > > >
> > > > It sounds odd;  can you provide more details on:
> > > >   a) The benchmark you're using.
> > > I'm using fio; the config is:
> > > [global]
> > > ioengine=libaio
> > > iodepth=128
> > > runtime=120
> > > time_based
> > > direct=1
> > >
> > > [randread]
> > > stonewall
> > > bs=4k
> > > filename=/dev/vdb
> > > rw=randread
> > >
> > > >   b) the host and the guest config (number of cpus etc)
> > > The qemu cmd is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
> > > --enable-kvm -cpu host -smp 8
> > > or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
> > > host -smp 8
> > >
> > > The result is the same.
> > >
> > > >   c) Why are you running it with iscsi back to the same host - why not
> > > >      just simplify the test back to a simple file?
> > > >
> > >
> > > Because my iSCSI target can supply high IOPS.
> > > With a slow disk, the performance downgrade would not be so obvious.
> > > It's easy to see; you could try it.
> > >
> > >
> > > > Dave
> > > >
> > > > >
> > > > > Thanks in advance.
> > > > > --
> > > > > Thanks and Best Regards,
> > > > > Alex
> > > > >
> > > > --
> > > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > >
> > >
> > >
> > > --
> > > Thanks and Best Regards,
> > > Feng Li(Alex)
> >
> >
> >
> > --
> > Thanks and Best Regards,
> > Feng Li(Alex)
>
>                 Amit
> --
> http://amitshah.net/



-- 
Thanks and Best Regards,
Feng Li(Alex)

