xen-devel.lists.xenproject.org archive mirror
 help / color / mirror / Atom feed
* RE: Read Performance issue when Xen Hypervisor is activated
@ 2016-12-27 14:26 Michael Schinzel
  2016-12-30 16:34 ` Dario Faggioli
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Schinzel @ 2016-12-27 14:26 UTC (permalink / raw)
  To: Xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1425 bytes --]

Over the last few days, we have kept searching for the cause of this performance issue.

In cooperation with the datacenter, we swapped some hardware to check whether the problem persists. We moved the RAID controller, including all RAID arrays, to another Supermicro mainboard (an X10SLM-F with only one CPU); there, we got 400 MB/s read speed. So it seems there is an issue between the server's mainboard/CPU and the Xen hypervisor. However, we also swapped the mainboard for a Supermicro X9DR3-F with the current BIOS version 3.2a, and that did not solve the performance problem either.

What we also have done:

-          Upgraded the hypervisor from the Debian 8 default (4.4.1) to 4.8.

-          Tested various kernel boot configurations

With a non-hypervisor kernel, the system also uses the controller's read cache: after a few read operations on the same file, it gets 1.2 GB/s back from the cache. With the Xen hypervisor kernel, the system does not seem to use any caching. I also tested a bit with hdparm:

root@v7:~# hdparm -Tt /dev/sdb

/dev/sdb:
Timing cached reads:   14060 MB in  1.99 seconds = 7076.16 MB/sec
Timing buffered disk reads: 304 MB in  3.01 seconds = 100.85 MB/sec

This performance is horrible. It is a RAID 10 with read/write cache and SSD caching functions.

Does anybody know how Xen handles such caching setups?


Yours sincerely

Michael Schinzel



[-- Attachment #1.2: Type: text/html, Size: 7150 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Read Performance issue when Xen Hypervisor is activated
  2016-12-27 14:26 Read Performance issue when Xen Hypervisor is activated Michael Schinzel
@ 2016-12-30 16:34 ` Dario Faggioli
  2016-12-30 16:53   ` Michael Schinzel
                     ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Dario Faggioli @ 2016-12-30 16:34 UTC (permalink / raw)
  To: Michael Schinzel, Xen-devel; +Cc: Roger Pau Monne


[-- Attachment #1.1: Type: text/plain, Size: 2097 bytes --]

[Cc-ing someone who has done disk benchmarks fairly recently]

On Tue, 2016-12-27 at 14:26 +0000, Michael Schinzel wrote:
> Over the last few days, we have kept searching for the cause of this
> performance issue.
>  
> In cooperation with the datacenter, we swapped some hardware to check
> whether the problem persists. We moved the RAID controller, including
> all RAID arrays, to another Supermicro mainboard (an X10SLM-F with
> only one CPU); there, we got 400 MB/s read speed. So it seems there
> is an issue between the server's mainboard/CPU and the Xen hypervisor.
> However, we also swapped the mainboard for a Supermicro X9DR3-F
> with the current BIOS version 3.2a, and that did not solve the
> performance problem either.
>  
> What we also have done:
> -          Upgraded the hypervisor from the Debian 8 default (4.4.1) to 4.8.
> -          Tested various kernel boot configurations
>
I think it would be useful to know more about your configuration, e.g.,
are these tests being done in Dom0? How many vCPUs and memory does Dom0
have?

> With a non-hypervisor kernel, the system also uses the controller's
> read cache: after a few read operations on the same file, it gets
> 1.2 GB/s back from the cache. With the Xen hypervisor kernel, the
> system does not seem to use any caching. I also tested a bit with
> hdparm:
>  
> root@v7:~# hdparm -Tt /dev/sdb
>  
> /dev/sdb:
> Timing cached reads:   14060 MB in  1.99 seconds = 7076.16 MB/sec
> Timing buffered disk reads: 304 MB in  3.01 seconds = 100.85 MB/sec
>  
> This performance is horrible. It is a RAID 10 with read/write cache
> and SSD caching functions.
>  
> Does anybody know how Xen handles such caching setups?
>  
>  
> Yours sincerely
>  
> Michael Schinzel
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Read Performance issue when Xen Hypervisor is activated
  2016-12-30 16:34 ` Dario Faggioli
@ 2016-12-30 16:53   ` Michael Schinzel
  2016-12-31  9:07   ` Michael Schinzel
  2017-01-02  7:15   ` Michael Schinzel
  2 siblings, 0 replies; 8+ messages in thread
From: Michael Schinzel @ 2016-12-30 16:53 UTC (permalink / raw)
  To: Dario Faggioli, Xen-devel; +Cc: Roger Pau Monne

Hello,

we have tried some more tests with the system. It seems the CPU speed is the problem in this case.

I have installed the tool UnixBench. When the hypervisor kernel is booted, UnixBench scores 2,900 benchmark points. When we boot the same kernel without the hypervisor, it scores 7,900 benchmark points.

So it seems the hypervisor limits the CPU speed, or something conflicts with it. I then checked the Intel ME version of the BIOS and saw that it is a very old version, but there is no newer BIOS version available; maybe Supermicro has stopped update support for this mainboard - https://www.supermicro.nl/products/motherboard/Xeon/C600/X9DR3-F.cfm

So I will try to update the ME version from a Windows installation that currently sits on another disk.

For the benchmark, we removed all CPU limitations and gave dom0 all available CPU resources, i.e. no core limit. Normally we restrict dom0 via the GRUB boot configuration:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=16192M dom0_max_vcpus=4 dom0_vcpus_pin"

I will update you once we have tested this last thing we had not tried yet.


Kind regards

Michael Schinzel
- Managing Director -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn 
Telefon: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de
Geschäftsführer: Michael Schinzel
Registergericht Würzburg: HRA 6798
Komplementär: IP-Projects Verwaltungs GmbH




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Read Performance issue when Xen Hypervisor is activated
  2016-12-30 16:34 ` Dario Faggioli
  2016-12-30 16:53   ` Michael Schinzel
@ 2016-12-31  9:07   ` Michael Schinzel
  2017-01-02  7:15   ` Michael Schinzel
  2 siblings, 0 replies; 8+ messages in thread
From: Michael Schinzel @ 2016-12-31  9:07 UTC (permalink / raw)
  To: Dario Faggioli, Xen-devel; +Cc: Roger Pau Monne

OK, it seems there is no way to upgrade the Intel ME version. In the latest Supermicro BIOS version it is:

-------[ ME Analyzer v1.7.0_35 RC ]-------
            Database r75_3

File:     X9DRi5.709

Firmware: Intel SPS
Version:  02.01.07.231.1
Release:  Production
Type:     Region
Mode:     Dual OPR
Date:     10/05/2013
Size:     0x2F0000

I also don't know whether this is really the issue, because with a non-hypervisor kernel everything works fine.


Kind regards

Michael Schinzel
- Managing Director -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn 
Telefon: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de
Geschäftsführer: Michael Schinzel
Registergericht Würzburg: HRA 6798
Komplementär: IP-Projects Verwaltungs GmbH




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Read Performance issue when Xen Hypervisor is activated
  2016-12-30 16:34 ` Dario Faggioli
  2016-12-30 16:53   ` Michael Schinzel
  2016-12-31  9:07   ` Michael Schinzel
@ 2017-01-02  7:15   ` Michael Schinzel
  2017-01-12 17:03     ` Dario Faggioli
  2 siblings, 1 reply; 8+ messages in thread
From: Michael Schinzel @ 2017-01-02  7:15 UTC (permalink / raw)
  To: Dario Faggioli, Xen-devel; +Cc: Thomas Toka, Roger Pau Monne

Good Morning,

we have done some more tests over the last days and tried different mainboard settings, among other things.


As a further test, we also looked at some more data points and ran the following commands to measure I/O performance:


dd if=/dev/zero of=/root/testfile bs=1G count=2 oflag=direct
dd if=/dev/zero of=/root/testfile bs=512 count=4000 oflag=direct
ioping /dev/sda
hdparm -tT --direct /dev/sda

These make a better test case.
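Bundled into one script, those four checks look roughly like this. This is only a sketch: sizes are scaled down so it can run unprivileged, the test file goes to /tmp, conv=fdatasync stands in for oflag=direct (which some filesystems reject), and the ioping/hdparm steps are skipped when those tools are missing. DEV is a placeholder for the device under test.

```shell
#!/bin/sh
# Sketch of the four I/O checks above, sized down for a quick run.
DEV=${DEV:-/dev/sda}
F=${F:-/tmp/io_bench_testfile}

# Sequential write throughput (large blocks; the flush is included
# in dd's timing via conv=fdatasync).
dd if=/dev/zero of="$F" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Small 512-byte writes: latency-bound, exposes per-request overhead.
dd if=/dev/zero of="$F" bs=512 count=4000 conv=fdatasync 2>&1 | tail -n 1

# Request latency and cached vs. raw read bandwidth, when tools exist
# (both typically need root to access the raw device).
command -v ioping >/dev/null 2>&1 && ioping -c 5 "$DEV"
command -v hdparm >/dev/null 2>&1 && hdparm -tT --direct "$DEV"

rm -f "$F"
```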


When we boot the host system without Xen support, we get the following results as a reference:

root@v7:~# dd if=/dev/zero of=/root/testfile bs=1G count=2 oflag=direct

2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 5.16157 s, 416 MB/s

root@v7:~# dd if=/dev/zero of=/root/testfile bs=512 count=4000 oflag=direct
4000+0 records in
4000+0 records out
2048000 bytes (2.0 MB) copied, 0.510736 s, 4.0 MB/s

root@v7:~# ioping /dev/sda
--- /dev/sda (block device 8.19 TiB) ioping statistics ---
21 requests completed in 20.7 s, 59 iops, 236.8 KiB/s
min/avg/max/mdev = 4.17 ms / 16.9 ms / 87.5 ms / 16.3 ms


root@v7:~# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   8394 MB in  2.00 seconds = 4198.15 MB/sec
 Timing O_DIRECT disk reads: 1414 MB in  3.00 seconds = 470.99 MB/sec





After this, I also tested another standard Debian 8 system with different hardware to check whether there might be an issue on the hardware side.

Intel Xeon E5-2620v4
64 GB DDR4 ECC Reg
LSI 9361-8i RAID
BBU
2x 4 TB WD RE
4x 2 TB WD RE
2x 512 GB SSD

This hardware is quite similar to the Xen node's. The server is installed with Proxmox on Debian 8. The HDDs are not connected via a backplane; they are attached directly to the RAID controller.

root@node-2-newcolo:~# dd if=/dev/zero of=/root/testfile bs=1G count=2 oflag=direct
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 5.76812 s, 372 MB/s

root@node-2-newcolo:~# dd if=/dev/zero of=/root/testfile bs=512 count=4000 oflag=direct
4000+0 records in
4000+0 records out
2048000 bytes (2.0 MB) copied, 0.491542 s, 4.2 MB/s

root@node-2-newcolo:~# ioping /dev/sdb
--- /dev/sdb (block device 3.64 TiB) ioping statistics ---
13 requests completed in 12.5 s, 25 iops, 103.0 KiB/s
min/avg/max/mdev = 149 us / 38.8 ms / 170.2 ms / 48.5 ms

root@node-2-newcolo:~# hdparm -tT --direct /dev/sdb

/dev/sdb:
 Timing O_DIRECT cached reads:   1294 MB in  2.00 seconds = 646.29 MB/sec
 Timing O_DIRECT disk reads: 1068 MB in  3.00 seconds = 355.87 MB/sec

This is OK, because a few (2-3) VMs are running there.



After this, I booted the vServer Xen node with the Xen hypervisor kernel.

root@v7:~# dd if=/dev/zero of=/root/testfile bs=1G count=2 oflag=direct
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 5.63201 s, 381 MB/s

root@v7:~# dd if=/dev/zero of=/root/testfile bs=512 count=4000 oflag=direct
4000+0 records in
4000+0 records out
2048000 bytes (2.0 MB) copied, 0.522344 s, 3.9 MB/s

root@v7:~# ioping /dev/sda
--- /dev/sda (block device 8.19 TiB) ioping statistics ---
10 requests completed in 10.2 s, 41 iops, 165.3 KiB/s
min/avg/max/mdev = 9.26 ms / 24.2 ms / 115.1 ms / 30.4 ms

root@v7:~# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   4814 MB in  1.99 seconds = 2414.92 MB/sec
 Timing O_DIRECT disk reads: 1386 MB in  3.00 seconds = 461.41 MB/sec


You can see that in the default Xen configuration, the most important number in the read performance test -> 2414.92 MB/sec <- shows the cache being used at half the rate of the same host booted without the hypervisor. We searched and searched and finally found the cause: xen_acpi_processor

By default, Xen was running the CPUs at 1,200 MHz. It is like driving a Ferrari at 30 mph all the time :) So we changed the performance parameter to

 xenpm set-scaling-governor all performance

After this, the benchmark results sped up a little:

root@v7:~# dd if=/dev/zero of=/root/testfile bs=1G count=2 oflag=direct

2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 5.25885 s, 408 MB/s

root@v7:~# dd if=/dev/zero of=/root/testfile bs=512 count=4000 oflag=direct
4000+0 records in
4000+0 records out
2048000 bytes (2.0 MB) copied, 0.23312 s, 8.8 MB/s

root@v7:~# ioping /dev/sda
--- /dev/sda (block device 8.19 TiB) ioping statistics ---
9 requests completed in 8.69 s, 73 iops, 293.9 KiB/s
min/avg/max/mdev = 10.3 ms / 13.6 ms / 22.3 ms / 3.65 ms

root@v7:~# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   6398 MB in  2.00 seconds = 3206.43 MB/sec
 Timing O_DIRECT disk reads: 1396 MB in  3.01 seconds = 464.53 MB/sec


There you see the write performance improved very nicely: 8.8 MB/s, and 293.9 KiB/s in ioping, are good figures. The read performance is a bit better, but still not as good as with the standard kernel. Keep in mind that nothing else is running on this server.

After searching around a little more, I also found a parameter for the I/O scheduler.

root@v7:~# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

I changed the scheduler to deadline. After this change:

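For reference, the scheduler switch itself is a runtime sysfs write along these lines (a sketch: it needs root, is not persistent across reboots, and the device name and available schedulers depend on the system):

```shell
# Show the available I/O schedulers; the bracketed one is active.
cat /sys/block/sda/queue/scheduler      # e.g. "noop deadline [cfq]"

# Select deadline for sda. Lost on reboot unless made persistent,
# e.g. via a udev rule or the "elevator=" kernel parameter.
echo deadline > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler      # now "noop [deadline] cfq"
```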

root@v7:~#  dd if=/dev/zero of=/root/testfile bs=1G count=2 oflag=direct
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 5.29884 s, 405 MB/s

root@v7:~# dd if=/dev/zero of=/root/testfile bs=512 count=4000 oflag=direct
4000+0 records in
4000+0 records out
2048000 bytes (2.0 MB) copied, 0.209492 s, 9.8 MB/s

root@v7:~# ioping /dev/sda
--- /dev/sda (block device 8.19 TiB) ioping statistics ---
16 requests completed in 15.5 s, 52 iops, 210.8 KiB/s
min/avg/max/mdev = 4.44 ms / 19.0 ms / 109.2 ms / 23.8 ms

root@v7:~# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   6418 MB in  2.00 seconds = 3215.70 MB/sec
 Timing O_DIRECT disk reads: 1406 MB in  3.00 seconds = 468.61 MB/sec


It seems large-block write performance got slightly worse, but small writes sped up a little; ioping, however, is now at 210 KiB/s instead of 293. Read performance did not change.


When plainly reading a 10 GB file, I now get:

root@v7:~# time dd if=datei of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 51.2435 s, 210 MB/s

real    0m51.246s
user    0m10.464s
sys     0m33.880s


Consistently 210 MB/s, while the default kernel reaches somewhere above 400 MB/s when reading a big file. I don't know what hdparm does differently from dd to get 468 MB/s. It seems there is something in the hypervisor, for example IRQ management or something else, that causes this.



The kernel start parameters are: GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=16192M dom0_max_vcpus=4 dom0_vcpus_pin cpufreq=dom0"

We have already tried removing the CPU reservation, the memory limit and so on, but this doesn't change anything. Upgrading the hypervisor doesn't change anything about this performance issue either.
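For reference, the governor change mentioned above is done from dom0 with the xenpm tool that ships with the Xen tools; roughly like this (a sketch: CPU numbering and the set of available governors are system-specific):

```shell
# Inspect current cpufreq settings (governor, frequency table,
# current frequency) for physical CPU 0.
xenpm get-cpufreq-para 0

# Pin every physical CPU to the highest frequency.
xenpm set-scaling-governor all performance
```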
 


Kind regards

Michael Schinzel
- Managing Director -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn 
Telefon: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de
Geschäftsführer: Michael Schinzel
Registergericht Würzburg: HRA 6798
Komplementär: IP-Projects Verwaltungs GmbH




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Read Performance issue when Xen Hypervisor is activated
  2017-01-02  7:15   ` Michael Schinzel
@ 2017-01-12 17:03     ` Dario Faggioli
  2017-01-13 13:32       ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 8+ messages in thread
From: Dario Faggioli @ 2017-01-12 17:03 UTC (permalink / raw)
  To: Michael Schinzel, Xen-devel; +Cc: Roger Pau Monne, Thomas Toka


[-- Attachment #1.1: Type: text/plain, Size: 5058 bytes --]

On Mon, 2017-01-02 at 07:15 +0000, Michael Schinzel wrote:
> Good Morning,
>
I'm back, although, as anticipated, I can't be terribly useful, I'm
afraid...

> You can see that in the default Xen configuration, the most important
> number in the read performance test -> 2414.92 MB/sec <- shows the
> cache being used at half the rate of the same host booted without
> the hypervisor. We searched and searched and finally found the
> cause: xen_acpi_processor
> 
> By default, Xen was running the CPUs at 1,200 MHz. It is like
> driving a Ferrari at 30 mph all the time :) So we changed the
> performance parameter to
> 
>  xenpm set-scaling-governor all performance
> 
Well, yes, this will have an impact, but it's unlikely what you're
looking for. In fact, something similar would apply also to baremetal
Linux.

> After a little bit searching around, i also find a parameter for the
> scheduler.
> 
> root@v7:~# cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
> 
> I changed the scheduler to deadline.  After this Change
> 
Well, ISTR [noop] could be even better. But I don't think this will
make much difference either, in this case.

> We have already tried removing the CPU reservation, the memory limit
> and so on, but this doesn't change anything. Upgrading the hypervisor
> doesn't change anything about this performance issue either.
>  
Well, these are all sequential benchmarks, so it could indeed have
been expected that adding more vCPUs wouldn't change things much.

I decided to re-run some of your tests on my test hardware (which is
way lower end than yours, especially as far as storage is concerned).

These are my results:

 hdparm -Tt /dev/sda                   Without Xen (baremetal Linux)                    With Xen (from within dom0)
 Timing cached reads         14074 MB in  2.00 seconds = 7043.05 MB/sec     14694 MB in  1.99 seconds = 7382.22 MB/sec
 Timing buffered disk reads    364 MB in  3.01 seconds =  120.78 MB/sec       364 MB in  3.00 seconds =  121.22 MB/sec


 dd_obs_test.sh datei                  transfer rate
 block size   Without Xen (baremetal Linux)   With Xen (from within dom0)
        512            279 MB/s                      123 MB/s
       1024            454 MB/s                      217 MB/s
       2048            275 MB/s                      359 MB/s
       4096            888 MB/s                      532 MB/s
       8192            987 MB/s                      659 MB/s
      16384            1.0 GB/s                      685 MB/s
      32768            1.1 GB/s                      773 MB/s
      65536            1.1 GB/s                      846 MB/s
     131072            1.1 GB/s                      749 MB/s
     262144            327 MB/s                      844 MB/s
     524288            1.1 GB/s                      783 MB/s
    1048576            420 MB/s                      823 MB/s
    2097152            485 MB/s                      305 MB/s
    4194304            409 MB/s                      783 MB/s
    8388608            380 MB/s                      776 MB/s
   16777216            950 MB/s                      703 MB/s
   33554432            916 MB/s                      297 MB/s
   67108864            856 MB/s                      492 MB/s


time dd if=/dev/zero of=datei bs=1M count=10240
  Without Xen (baremetal Linux)   With Xen (from within dom0)
       73.7224 s, 146 MB/s            97.6948 s, 110 MB/s
            real 1m13.724s                 real 1m37.700s
             user 0m0.000s                 user  0m0.068s
             sys  0m9.364s                 sys  0m15.180s


root@Zhaman:~# time dd if=datei of=/dev/null
  Without Xen (baremetal Linux)   With Xen (from within dom0)
       9.92787 s, 1.1 GB/s            95.1827 s, 113 MB/s
             real 0m9.953s                 real 1m35.194s
             user 0m2.096s                 user 0m10.632s
              sys 0m7.300s                  sys 0m51.820s

Which confirms that, when running the tests inside a Xen Dom0, things
are indeed slower.

Let me say something, though: the purpose of Xen is not to achieve the
best possible performance in Dom0, but to achieve the best possible
aggregate performance across a number of guest domains.

The fact that virtualization has an overhead, and that Dom0 pays quite
a high price, is well known. Have you tried, for instance, running
some of the tests in a DomU?

Now, whether what both you and I are seeing is to be considered
"normal", I can't tell. Maybe Roger can (or he can tell us who to
bother for that).

In general, I don't think updating random system and firmware
components is useful at all... This is not a BIOS issue, IMO.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Read Performance issue when Xen Hypervisor is activated
  2017-01-12 17:03     ` Dario Faggioli
@ 2017-01-13 13:32       ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 8+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-01-13 13:32 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Xen-devel, Michael Schinzel, Thomas Toka, Roger Pau Monne

> time dd if=/dev/zero of=datei bs=1M count=10240
>   Without Xen (baremetal Linux)   With Xen (from within dom0)
>        73.7224 s, 146 MB/s            97.6948 s, 110 MB/s
>             real 1m13.724s                 real 1m37.700s
>              user 0m0.000s                 user  0m0.068s
>              sys  0m9.364s                 sys  0m15.180s
> 
> 
> root@Zhaman:~# time dd if=datei of=/dev/null
>   Without Xen (baremetal Linux)   With Xen (from within dom0)
>        9.92787 s, 1.1 GB/s            95.1827 s, 113 MB/s
>              real 0m9.953s                 real 1m35.194s
>              user 0m2.096s                 user 0m10.632s
>               sys 0m7.300s                  sys 0m51.820s
> 
> Which confirms that, when running the tests inside a Xen Dom0, things
> are indeed slower.

.. and which PVH should fix. Also, 'dd' is a poor tool for benchmarking;
please use 'fio', which measures storage speeds more consistently.
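A hypothetical fio invocation covering roughly the same cases as the dd/hdparm runs earlier in the thread might look like this (fio must be installed; the file name, size, and queue depth here are illustrative placeholders, not tuned values):

```shell
# Sequential direct reads, 1 MiB blocks - comparable to the
# "hdparm -tT --direct" disk-read figure.
fio --name=seqread --filename=/root/testfile --size=2G \
    --rw=read --bs=1M --direct=1 --ioengine=libaio

# Small random direct reads with a deeper queue - closer to the
# mixed workload a set of guest domains generates.
fio --name=randread --filename=/root/testfile --size=2G \
    --rw=randread --bs=4k --iodepth=32 --direct=1 --ioengine=libaio
```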

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Read Performance issue when Xen Hypervisor is activated
@ 2016-12-26 11:48 Michael Schinzel
  0 siblings, 0 replies; 8+ messages in thread
From: Michael Schinzel @ 2016-12-26 11:48 UTC (permalink / raw)
  To: xen-devel; +Cc: Thomas Toka


[-- Attachment #1.1: Type: text/plain, Size: 3009 bytes --]

Hello,

first of all, merry Christmas to everybody reading this email on the mailing list.

For about 14 days now, I have had a problem with a Xen vServer host. Hardware specs:

2x Intel Xeon E5-2620v2
256 GB DDR3 ECC Reg Memory
6x 3 TB WD RE Drives – RAID 10 – HDD Array
2x 256 GB SSDs – RAID 1 – SWAP Drives
2x 600 GB SAS HDDs – RAID 1 – Backup Temp
2x 1 TB SSDs – RAID 1 – CacheCade
LSI MegaRAID 9361-8i
MegaRAID Kit LSICVM02

When I boot the server (OS: Debian 8) with a kernel without Xen hypervisor support, I get the following performance on the RAID 10 HDD array (the test script is from http://blog.tdg5.com/tuning-dd-block-size/):


root@v7:~# ./dd_obs_test.sh datei
block size : transfer rate
     512 : 293 MB/s
    1024 : 469 MB/s
    2048 : 603 MB/s
    4096 : 812 MB/s
    8192 : 866 MB/s
   16384 : 845 MB/s
   32768 : 875 MB/s
   65536 : 929 MB/s
  131072 : 923 MB/s
  262144 : 898 MB/s
  524288 : 956 MB/s
1048576 : 883 MB/s
2097152 : 956 MB/s
4194304 : 925 MB/s
8388608 : 879 MB/s
16777216 : 811 MB/s
33554432 : 826 MB/s
67108864 : 753 MB/s

root@v7:/home#
time dd if=/dev/zero of=datei bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 26.3243 s, 408 MB/s

real 0m26.327s
user 0m0.012s
sys 0m17.140s

root@v7:/home#
time dd if=datei of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 11.7919 s, 911 MB/s

real 0m11.794s
user 0m2.612s
sys 0m9.176s


The read performance from the cache goes up to 1.2-1.6 GB/s.

When I reboot the server into a kernel with hypervisor support, I get the following performance figures:

root@v7:~# ./dd_obs_test.sh datei
block size : transfer rate
     512 : 9 MB/s
    1024 : 163 MB/s
    2048 : 256 MB/s
    4096 : 368 MB/s
    8192 : 490 MB/s
   16384 : 569 MB/s
   32768 : 618 MB/s
   65536 : 684 MB/s
  131072 : 717 MB/s
  262144 : 709 MB/s
  524288 : 715 MB/s
1048576 : 742 MB/s
2097152 : 728 MB/s
4194304 : 730 MB/s
8388608 : 696 MB/s
16777216 : 655 MB/s
33554432 : 626 MB/s
67108864 : 558 MB/s


root@v7:/home#
time dd if=/dev/zero of=datei bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 31.6254 s, 340 MB/s

real 0m31.712s
user 0m0.040s
sys 0m15.744s

root@v7:/home#
time dd if=datei of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 50.6099 s, 212 MB/s


I have upgraded the controller firmware and the BIOS, and changed the mainboard, the hard drives, and the RAID controller. I cannot find the reason why performance with the hypervisor kernel is so bad.

I have tested kernel 4.9, kernel 4.4, and the default Debian kernel 3.16.36. All versions show the same values.

Installed is the default hypervisor version 4.4.1. Does anybody have the same problem and could help me solve it?


Yours sincerely

Michael Schinzel


[-- Attachment #1.2: Type: text/html, Size: 8297 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2017-01-13 13:32 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-27 14:26 Read Performance issue when Xen Hypervisor is activated Michael Schinzel
2016-12-30 16:34 ` Dario Faggioli
2016-12-30 16:53   ` Michael Schinzel
2016-12-31  9:07   ` Michael Schinzel
2017-01-02  7:15   ` Michael Schinzel
2017-01-12 17:03     ` Dario Faggioli
2017-01-13 13:32       ` Konrad Rzeszutek Wilk
  -- strict thread matches above, loose matches on Subject: below --
2016-12-26 11:48 Michael Schinzel

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).