From: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
To: Avi Kivity <avi@redhat.com>
Cc: "Fernando Luis Vázquez Cao" <fernando@oss.ntt.co.jp>,
	kvm@vger.kernel.org, qemu-devel@nongnu.org,
	"大村圭(oomura kei)" <ohmura.kei@lab.ntt.co.jp>,
	"Takuya Yoshikawa" <yoshikawa.takuya@oss.ntt.co.jp>,
	anthony@codemonkey.ws, "Andrea Arcangeli" <aarcange@redhat.com>,
	"Chris Wright" <chrisw@redhat.com>
Subject: Re: [RFC] KVM Fault Tolerance: Kemari for KVM
Date: Wed, 18 Nov 2009 22:28:46 +0900	[thread overview]
Message-ID: <87e9effc0911180528s5546c8bt383a6674b382890d@mail.gmail.com> (raw)
In-Reply-To: <4B028334.1070004@lab.ntt.co.jp>

2009/11/17 Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>:
> Avi Kivity wrote:
>>
>> On 11/16/2009 04:18 PM, Fernando Luis Vázquez Cao wrote:
>>>
>>> Avi Kivity wrote:
>>>>
>>>> On 11/09/2009 05:53 AM, Fernando Luis Vázquez Cao wrote:
>>>>>
>>>>> Kemari runs paired virtual machines in an active-passive configuration
>>>>> and achieves whole-system replication by continuously copying the
>>>>> state of the system (dirty pages and the state of the virtual devices)
>>>>> from the active node to the passive node. An interesting implication
>>>>> of this is that during normal operation only the active node is
>>>>> actually executing code.
>>>>>
>>>>
>>>> Can you characterize the performance impact for various workloads?  I
>>>> assume you are running continuously in log-dirty mode.  Doesn't this make
>>>> memory intensive workloads suffer?
>>>
>>> Yes, we're running continuously in log-dirty mode.
>>>
>>> We still do not have numbers to show for KVM, but
>>> the snippets below from several runs of lmbench
>>> using Xen+Kemari will give you an idea of what you
>>> can expect in terms of overhead. All the tests were
>>> run using a fully virtualized Debian guest with
>>> hardware nested paging enabled.
>>>
>>>                     fork exec   sh    P/F  C/S   [us]
>>> ------------------------------------------------------
>>> Base                  114  349 1197 1.2845  8.2
>>> Kemari(10GbE) + FC    141  403 1280 1.2835 11.6
>>> Kemari(10GbE) + DRBD  161  415 1388 1.3145 11.6
>>> Kemari(1GbE) + FC     151  410 1335 1.3370 11.5
>>> Kemari(1GbE) + DRBD   162  413 1318 1.3239 11.6
>>> * P/F=page fault, C/S=context switch
>>>
>>> The benchmarks above are memory intensive and, as you
>>> can see, the overhead varies widely from 7% to 40%.
>>> We also measured CPU bound operations, but, as expected,
>>> Kemari incurred almost no overhead.
>>
>> Is lmbench fork that memory intensive?
>>
>> Do you have numbers for benchmarks that use significant anonymous RSS?
>>  Say, a parallel kernel build.
>>
>> Note that scaling vcpus will increase a guest's memory-dirtying power but
>> snapshot rate will not scale in the same way.
>
> I don't think lmbench is intensive but it's sensitive to memory latency.
> We'll measure kernel build time with minimum config, and post it later.
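[Editorial note: the replication scheme quoted above — continuous log-dirty tracking on the active node, with state shipped to the passive node at synchronization points — can be sketched roughly as follows. This is an illustrative model only; all class and method names are hypothetical and do not reflect the actual Xen/KVM implementation.]

```python
# Hypothetical sketch of Kemari-style active-passive replication.
# All names are illustrative; the real code lives in the hypervisor.

class PassiveNode:
    """Receives state; executes no guest code until failover."""
    def __init__(self):
        self.memory = {}
        self.device_state = {}

    def receive(self, pages, devices):
        # Only state is copied over; no guest execution happens here.
        self.memory.update(pages)
        self.device_state = devices

class ActiveNode:
    """Runs the guest continuously in log-dirty mode."""
    def __init__(self, passive):
        self.passive = passive
        self.dirty_pages = {}     # pfn -> page contents (dirty-page log)
        self.device_state = {}

    def write_page(self, pfn, data):
        # Every guest memory write is tracked while in log-dirty mode;
        # this is the source of the memory-latency overhead discussed above.
        self.dirty_pages[pfn] = data

    def on_io_event(self):
        # Synchronization point: before an outgoing I/O (disk write,
        # network packet) becomes externally visible, ship the dirty
        # pages and device state to the passive node.
        self.passive.receive(dict(self.dirty_pages), dict(self.device_state))
        self.dirty_pages.clear()  # begin a fresh dirty-page epoch
```

In this model, failover simply means the passive node starts executing from the last synchronized state, so no externally visible I/O is ever lost.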

Here are some quick numbers for parallel kernel compile time.
The number of vcpus is 1, just for convenience.

time make -j 2 all
-----------------------------------------------------------------------------
Base:    real 1m13.950s (user 1m2.742s, sys 0m10.446s)
Kemari: real 1m22.720s (user 1m5.882s, sys 0m10.882s)

time make -j 4 all
-----------------------------------------------------------------------------
Base:    real 1m11.234s (user 1m2.582s, sys 0m8.643s)
Kemari: real 1m26.964s (user 1m6.530s, sys 0m12.194s)

The Kemari results include everything, i.e., dirty page tracking and
synchronization upon I/O operations to the disk.
The compile time with -j 4 under Kemari was worse than that with -j 2,
but I'm not sure whether this is due to dirty page tracking or the sync interval.
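[Editorial note: for clarity, the relative overhead implied by the wall-clock times above works out as a quick back-of-the-envelope calculation:]

```python
# Overhead of Kemari vs. base, from the "real" times reported above.

def seconds(minutes, secs):
    """Convert an 'XmY.Zs' time to seconds."""
    return minutes * 60 + secs

def overhead_pct(base, kemari):
    """Relative slowdown of kemari over base, in percent."""
    return (kemari / base - 1.0) * 100.0

# make -j 2: base 1m13.950s, Kemari 1m22.720s
j2 = overhead_pct(seconds(1, 13.950), seconds(1, 22.720))
# make -j 4: base 1m11.234s, Kemari 1m26.964s
j4 = overhead_pct(seconds(1, 11.234), seconds(1, 26.964))

print(f"-j 2 overhead: {j2:.1f}%")   # ~11.9%
print(f"-j 4 overhead: {j4:.1f}%")   # ~22.1%
```

So the overhead roughly doubles going from -j 2 to -j 4, consistent with the dirty-page rate (and hence synchronization cost) growing with parallelism.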

Thanks,

Yoshi
