* performance save/restore under xen-4.3.2 compared to kvm/qemu
@ 2014-08-25 12:06 ustermann.max
2014-08-25 12:23 ` Andrew Cooper
0 siblings, 1 reply; 8+ messages in thread
From: ustermann.max @ 2014-08-25 12:06 UTC (permalink / raw)
To: xen-devel
Hello everybody,
I hope this is the right place for my question.
I have a VM with 1 GB of main memory under Xen 4.3.2. If I measure the times for save and restore via "time", I get the following values:
save:
real 0m12.136s
user 0m0.175s
sys 0m2.662s
restore:
real 0m8.639s
user 0m0.468s
sys 0m1.807s
If I do the same with a VM under kvm/qemu (1 GB main memory), I get these values:
save:
real 0m10.024s
user 0m0.008s
sys 0m0.003s
restore:
real 0m0.525s
user 0m0.015s
sys 0m0.004s
The host hardware is the same in both cases.
I am really surprised about the huge difference in the time needed for restore, and also that Xen spends much more time in kernel mode (sys).
Can anyone give me some hints on where this difference comes from?
Is there a way to speed up the restore process in Xen?
I am thankful for every hint.
All the best,
max
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 12:06 performance save/restore under xen-4.3.2 compared to kvm/qemu ustermann.max
@ 2014-08-25 12:23 ` Andrew Cooper
2014-08-25 12:35 ` Jan Beulich
2014-08-25 13:35 ` max ustermann
0 siblings, 2 replies; 8+ messages in thread
From: Andrew Cooper @ 2014-08-25 12:23 UTC (permalink / raw)
To: ustermann.max, xen-devel
On 25/08/14 13:06, ustermann.max@web.de wrote:
> Hello everybody,
>
> I hope this is the right place for my question.
>
> I have a VM with 1 GB of main memory under Xen 4.3.2. If I measure the times for save and restore via "time", I get the following values:
>
> save:
> real 0m12.136s
> user 0m0.175s
> sys 0m2.662s
>
> restore:
> real 0m8.639s
> user 0m0.468s
> sys 0m1.807s
>
>
> If I do the same with a VM under kvm/qemu (1 GB main memory), I get these values:
>
> save:
> real 0m10.024s
> user 0m0.008s
> sys 0m0.003s
>
> restore:
> real 0m0.525s
> user 0m0.015s
> sys 0m0.004s
>
>
> The host hardware is the same in both cases.
> I am really surprised about the huge difference in the time needed for restore, and also that Xen spends much more time in kernel mode (sys).
> Can anyone give me some hints on where this difference comes from?
> Is there a way to speed up the restore process in Xen?
>
> I am thankful for every hint.
>
> All the best,
> max
Xen and KVM are two different types of hypervisor. By its nature,
kvm/qemu has less to do for migration, as it already has full access to
the VM's memory.
It looks plausible that qemu's restore is mmap()ing the restore file and
running straight from there. You are never going to manage this under
Xen, because of the extra isolation inherent in the Xen model.
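[Editorial illustration: the mmap()-based restore described above can be sketched minimally, with Python's mmap module standing in for what qemu plausibly does in C; the file contents and sizes here are invented.]

```python
import mmap
import os
import tempfile

PAGE = 4096
NPAGES = 256  # 1 MiB stand-in for the guest's saved memory image

# Create a stand-in "saved image" file.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\x42" * (NPAGES * PAGE), 0)

# "Restore" by mapping the file instead of copying it: no page is read
# from disk until something actually touches it (demand paging), which
# is why such a restore can return almost immediately.
with mmap.mmap(fd, NPAGES * PAGE, prot=mmap.PROT_READ) as guest_ram:
    first_page = guest_ram[:PAGE]  # faults in just this one page
    assert first_page == b"\x42" * PAGE

os.close(fd)
os.unlink(path)
```

A hypervisor that cannot simply map the image into the guest's address space has to copy every page up front instead, which is the asymmetry being described.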
In terms of raw speed, my migration v2 series (still in development) has
fixed several performance problems in the old migration code.
Particularly in the case of your example, my new code will be 4 times
faster, as it does not map everything up to 4GB in the VM.
I have also identified a bottleneck in the Linux PVOps kernel where the
mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
hypercalls of batch size 1. Fixing this will certainly make the
mapping/unmapping faster.
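[Editorial illustration: the batching bottleneck can be sketched with a hypothetical cost model; the overhead numbers are invented and this is not the actual privcmd code, but it shows why 1024 hypercalls of batch size 1 lose badly to one hypercall of batch size 1024.]

```python
HYPERCALL_OVERHEAD_US = 1.0  # fixed cost per hypercall (made-up number)
PER_PAGE_COST_US = 0.1       # marginal cost per mapped page (made-up)

def cost(batch_sizes):
    """Total cost of mapping pages, given the batch size of each hypercall."""
    return sum(HYPERCALL_OVERHEAD_US + PER_PAGE_COST_US * n
               for n in batch_sizes)

# Described PVOps behaviour: 1024 hypercalls, each carrying one entry.
unbatched = cost([1] * 1024)

# Fixed behaviour: one hypercall carrying the whole 1024-entry batch.
batched = cost([1024])

# The fixed per-hypercall overhead dominates the unbatched case.
assert batched < unbatched
print(f"unbatched: {unbatched:.1f}us, batched: {batched:.1f}us")
```

With these made-up constants the overhead term is paid 1024 times in one case and once in the other, so the ratio is driven almost entirely by the fixed per-hypercall cost.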
~Andrew
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 12:23 ` Andrew Cooper
@ 2014-08-25 12:35 ` Jan Beulich
2014-08-25 12:38 ` Andrew Cooper
2014-08-26 10:29 ` David Vrabel
2014-08-25 13:35 ` max ustermann
1 sibling, 2 replies; 8+ messages in thread
From: Jan Beulich @ 2014-08-25 12:35 UTC (permalink / raw)
To: Andrew Cooper; +Cc: ustermann.max, xen-devel
>>> On 25.08.14 at 14:23, <andrew.cooper3@citrix.com> wrote:
> I have also identified a bottleneck in the Linux PVOps kernel where the
> mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
> hypercalls of batch size 1. Fixing this will certainly make the
> mapping/unmapping faster.
Didn't Mats Petersson post a patch for this quite a while ago? Did
this perhaps never get applied?
Jan
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 12:35 ` Jan Beulich
@ 2014-08-25 12:38 ` Andrew Cooper
2014-08-25 13:02 ` Jan Beulich
2014-08-26 10:29 ` David Vrabel
1 sibling, 1 reply; 8+ messages in thread
From: Andrew Cooper @ 2014-08-25 12:38 UTC (permalink / raw)
To: Jan Beulich; +Cc: ustermann.max, xen-devel
On 25/08/14 13:35, Jan Beulich wrote:
>>>> On 25.08.14 at 14:23, <andrew.cooper3@citrix.com> wrote:
>> I have also identified a bottleneck in the Linux PVOps kernel where the
>> mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
>> hypercalls of batch size 1. Fixing this will certainly make the
>> mapping/unmapping faster.
> Didn't Mats Petersson post a patch for this quite a while ago? Did
> this perhaps never get applied?
>
> Jan
>
That was a different issue if I remember correctly. I think it was to
do with the classic kernel using ioremap() for foreign domain memory.
~Andrew
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 12:38 ` Andrew Cooper
@ 2014-08-25 13:02 ` Jan Beulich
0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2014-08-25 13:02 UTC (permalink / raw)
To: Andrew Cooper; +Cc: ustermann.max, xen-devel
>>> On 25.08.14 at 14:38, <andrew.cooper3@citrix.com> wrote:
> On 25/08/14 13:35, Jan Beulich wrote:
>>>>> On 25.08.14 at 14:23, <andrew.cooper3@citrix.com> wrote:
>>> I have also identified a bottleneck in the Linux PVOps kernel where the
>>> mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
>>> hypercalls of batch size 1. Fixing this will certainly make the
>>> mapping/unmapping faster.
>> Didn't Mats Petersson post a patch for this quite a while ago? Did
>> this perhaps never get applied?
>
> That was a different issue if I remember correctly. I think it was to
> do with the classic kernel using ioremap() for foreign domain memory.
I certainly remember that patch having been against pv-ops...
Jan
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 12:23 ` Andrew Cooper
2014-08-25 12:35 ` Jan Beulich
@ 2014-08-25 13:35 ` max ustermann
2014-08-25 13:50 ` Andrew Cooper
1 sibling, 1 reply; 8+ messages in thread
From: max ustermann @ 2014-08-25 13:35 UTC (permalink / raw)
To: xen-devel lists.xen.org
Hi,
first, thank you for the information.
For clarity, I should mention that I did the tests with an HVM guest (Windows XP), not with a PV guest. Does this make any difference to your explanations?
I would also like to ask: are your modifications available anywhere? Are they in the unstable tree of Xen?
All the best,
max
----------------Original message-----------------
From: "Andrew Cooper" andrew.cooper3@citrix.com
To: ustermann.max@web.de, xen-devel@lists.xen.org
Date: Mon, 25 Aug 2014 13:23:09 +0100
-------------------------------------------------
> On 25/08/14 13:06, ustermann.max@web.de wrote:
>> Hello everybody,
>>
>> I hope this is the right place for my question.
>>
>> I have a VM with 1 GB of main memory under Xen 4.3.2. If I measure the times for
>> save and restore via "time", I get the following values:
>>
>> save:
>> real 0m12.136s
>> user 0m0.175s
>> sys 0m2.662s
>>
>> restore:
>> real 0m8.639s
>> user 0m0.468s
>> sys 0m1.807s
>>
>>
>> If I do the same with a VM under kvm/qemu (1 GB main memory), I get these values:
>>
>> save:
>> real 0m10.024s
>> user 0m0.008s
>> sys 0m0.003s
>>
>> restore:
>> real 0m0.525s
>> user 0m0.015s
>> sys 0m0.004s
>>
>>
>> The host hardware is the same in both cases.
>> I am really surprised about the huge difference in the time needed for restore,
>> and also that Xen spends much more time in kernel mode (sys).
>> Can anyone give me some hints on where this difference comes from?
>> Is there a way to speed up the restore process in Xen?
>>
>> I am thankful for every hint.
>>
>> All the best,
>> max
>
> Xen and KVM are two different types of hypervisor. By its nature,
> kvm/qemu has less to do for migration, as it already has full access to
> the VM's memory.
>
>
> It looks plausible that qemu's restore is mmap()ing the restore file and
> running straight from there. You are never going to manage this under
> Xen, because of the extra isolation inherent in the Xen model.
>
> In terms of raw speed, my migration v2 series (still in development) has
> fixed several performance problems in the old migration code.
> Particularly in the case of your example, my new code will be 4 times
> faster, as it does not map everything up to 4GB in the VM.
>
> I have also identified a bottleneck in the Linux PVOps kernel where the
> mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
> hypercalls of batch size 1. Fixing this will certainly make the
> mapping/unmapping faster.
>
> ~Andrew
>
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 13:35 ` max ustermann
@ 2014-08-25 13:50 ` Andrew Cooper
0 siblings, 0 replies; 8+ messages in thread
From: Andrew Cooper @ 2014-08-25 13:50 UTC (permalink / raw)
To: max ustermann, xen-devel lists.xen.org
On 25/08/14 14:35, max ustermann wrote:
> Hi,
>
> First, thank you for the information.
> For clarity, I should mention that I did the tests with an HVM guest (Windows XP), not with a PV guest. Does this make any difference to your explanations?
Not at all. My explanation still stands, especially the bit about 4
times faster.
>
> I would also like to ask: are your modifications available anywhere? Are they in the unstable tree of Xen?
I am trying to get them included in Xen 4.5 before the code freeze, but
there is currently nothing in xen-unstable.
Work-in-progress is available on my personal xenbits tree:
http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/saverestore2-v6.1
~Andrew
* Re: performance save/restore under xen-4.3.2 compared to kvm/qemu
2014-08-25 12:35 ` Jan Beulich
2014-08-25 12:38 ` Andrew Cooper
@ 2014-08-26 10:29 ` David Vrabel
1 sibling, 0 replies; 8+ messages in thread
From: David Vrabel @ 2014-08-26 10:29 UTC (permalink / raw)
To: Jan Beulich, Andrew Cooper; +Cc: ustermann.max, xen-devel
On 25/08/14 13:35, Jan Beulich wrote:
>>>> On 25.08.14 at 14:23, <andrew.cooper3@citrix.com> wrote:
>> I have also identified a bottleneck in the Linux PVOps kernel where the
>> mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
>> hypercalls of batch size 1. Fixing this will certainly make the
>> mapping/unmapping faster.
>
> Didn't Mats Petersson post a patch for this quite a while ago? Did
> this perhaps never get applied?
I think it needed some reworking and no one else has picked it up yet.
David
Thread overview: 8+ messages
2014-08-25 12:06 performance save/restore under xen-4.3.2 compared to kvm/qemu ustermann.max
2014-08-25 12:23 ` Andrew Cooper
2014-08-25 12:35 ` Jan Beulich
2014-08-25 12:38 ` Andrew Cooper
2014-08-25 13:02 ` Jan Beulich
2014-08-26 10:29 ` David Vrabel
2014-08-25 13:35 ` max ustermann
2014-08-25 13:50 ` Andrew Cooper