* [Qemu-devel] Migration speed throttling, max_throttle in migration.c
@ 2011-02-09 18:13 Thomas Treutner
  2011-02-09 20:02 ` Anthony Liguori
  0 siblings, 1 reply; 4+ messages in thread
From: Thomas Treutner @ 2011-02-09 18:13 UTC (permalink / raw)
  To: qemu-devel

Hi,

I was reading qemu's (qemu-kvm-0.13.0's, to be specific) live migration
code to understand how the iterative dirty page transfer is implemented.
While doing so I noticed that ram_save_live in arch_init.c is called
quite often - more often than I expected (approx. 200 times for an idle
500 MiB VM). I found out that this is because qemu_file_rate_limit(f)
very often evaluates to true, so the while (!qemu_file_rate_limit(f))
loop exits early, and as there are still dirty pages remaining,
ram_save_live is called again.
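
To make the call count less surprising, here is a self-contained toy
model of that pattern (not QEMU code). The 100 ms interval at which the
rate-limit budget is replenished is an assumption on my side, so the
exact number is only indicative:

/* Toy model of the rate-limited, iterative RAM transfer (not QEMU code).
 * Each "call" mimics one invocation of the save handler: it sends pages
 * until the per-interval budget -- the analogue of qemu_file_rate_limit()
 * returning true -- is used up, then returns so it gets called again.
 * The 100 ms replenish interval is an assumption, not taken from the tree. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096ULL
#define GUEST_RAM   (500ULL << 20)       /* idle 500 MiB guest              */
#define RATE_LIMIT  (32ULL << 20)        /* default max_throttle, bytes/s   */
#define TICK_MS     100ULL               /* assumed budget replenish period */

int main(void)
{
    uint64_t budget = RATE_LIMIT * TICK_MS / 1000;   /* bytes per interval */
    uint64_t remaining = GUEST_RAM;
    unsigned calls = 0;

    while (remaining > 0) {
        uint64_t sent = 0;
        calls++;                                     /* one save-handler call */
        while (sent < budget && remaining > 0) {     /* while (!rate_limit)   */
            remaining -= PAGE_SIZE;
            sent += PAGE_SIZE;
        }
    }
    printf("save handler called %u times\n", calls); /* prints ~157 here */
    return 0;
}

That lands in the same ballpark as the ~200 calls I observed; the real
count is somewhat higher because pages dirtied between rounds have to be
resent.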

As I had set no bandwidth limit in the libvirt call, I dug deeper and
found a hard-coded maximum bandwidth in migration.c:

/* Migration speed throttling */
static uint32_t max_throttle = (32 << 20);

Using a packet sniffer I verified that max_throttle is in bytes per
second, here of course 32 MiB/s. It also translates directly to network
bandwidth - I was not sure about that beforehand, as the bandwidth
measured in ram_save_live looks more like buffer/memory subsystem
bandwidth than network bandwidth.

Anyway, I'm wondering why exactly *this* value was chosen as a hard-coded
limit. 32 MiB/s is roughly 270 Mbit/s, which is *both* much more than
100 Mbit/s Ethernet and much less than Gbit Ethernet can cope with. So in
the first case TCP congestion control takes over anyway, and in the
second, roughly 3/4 of the available bandwidth is thrown away.
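
For reference, the arithmetic behind that figure (a throwaway check,
nothing QEMU-specific):

/* Convert the hard-coded limit from bytes per second to Mbit/s. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t max_throttle = 32ULL << 20;        /* 32 MiB/s, in bytes/s */
    printf("%llu byte/s = %.0f Mbit/s\n",
           (unsigned long long)max_throttle,
           max_throttle * 8 / 1e6);             /* prints ~268 Mbit/s   */
    return 0;
}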

As I'm using Gbit Ethernet, I experimented with different values. With
max_throttle = (112 << 20); - which is roughly 940 Mbit/s - my Gbit
network is nicely saturated, and live migrations of a rather idle 700 MiB
VM take ~5s instead of ~15s, without any problems, which is very nice.
Much more important is the fact that VMs with higher memory activity, and
therefore higher rates of page dirtying, are transferred more easily
without additional manual intervention. The default maxdowntime is 30ms,
which is often unreachable in such situations, and there is no evasive
action built in, such as a maximum number of iterations with a forced
last iteration, or aborting the migration when that limit is reached.
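
To illustrate why a 30 ms downtime target is hard to reach at 32 MiB/s,
here is a small standalone sketch of the kind of convergence check
involved. It reflects my reading of ram_save_live's final test; the
helper name and units are illustrative, not copied from the QEMU source:

/* Sketch of the convergence test that decides when the final iteration
 * may run: finish only if the remaining dirty RAM could be pushed within
 * the allowed downtime at the currently measured bandwidth. This mirrors
 * the check in ram_save_live() in spirit only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool would_converge(uint64_t dirty_bytes_left,
                           double bandwidth_bytes_per_ns,
                           double max_downtime_ns)
{
    double expected_downtime_ns = dirty_bytes_left / bandwidth_bytes_per_ns;
    return expected_downtime_ns <= max_downtime_ns;
}

int main(void)
{
    double max_downtime_ns = 30e6;            /* default maxdowntime: 30 ms */
    double bw = (32 << 20) / 1e9;             /* 32 MiB/s in bytes per ns   */

    /* At 32 MiB/s only ~0.96 MiB fits into a 30 ms window, so a guest that
     * keeps dirtying more than that per round never converges unless the
     * throttle or the allowed downtime is raised. */
    printf("512 KiB dirty: %s\n",
           would_converge(512ULL << 10, bw, max_downtime_ns)
           ? "last iteration allowed" : "keeps iterating");
    printf("10 MiB dirty:  %s\n",
           would_converge(10ULL << 20, bw, max_downtime_ns)
           ? "last iteration allowed" : "keeps iterating");
    return 0;
}

Raising the throttle directly widens the set of workloads that can pass
this test without manual intervention.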

So I'm asking: is there a good reason *not* to change max_throttle to a
value that aims at saturating a Gbit network, given that 100 Mbit
networks are "flooded" anyway by the current setting?


thanks & regards,
-t


* Re: [Qemu-devel] Migration speed throttling, max_throttle in migration.c
  2011-02-09 18:13 [Qemu-devel] Migration speed throttling, max_throttle in migration.c Thomas Treutner
@ 2011-02-09 20:02 ` Anthony Liguori
  2011-02-09 21:18   ` Thomas Treutner
  0 siblings, 1 reply; 4+ messages in thread
From: Anthony Liguori @ 2011-02-09 20:02 UTC (permalink / raw)
  To: Thomas Treutner; +Cc: qemu-devel

On 02/09/2011 07:13 PM, Thomas Treutner wrote:
> Hi,
>
> I was reading qemu's (qemu-kvm-0.13.0's, to be specific) live migration
> code to understand how the iterative dirty page transfer is implemented.
> While doing so I noticed that ram_save_live in arch_init.c is called
> quite often - more often than I expected (approx. 200 times for an idle
> 500 MiB VM). I found out that this is because qemu_file_rate_limit(f)
> very often evaluates to true, so the while (!qemu_file_rate_limit(f))
> loop exits early, and as there are still dirty pages remaining,
> ram_save_live is called again.
>
> As I had set no bandwidth limit in the libvirt call, I dug deeper and
> found a hard-coded maximum bandwidth in migration.c:
>
> /* Migration speed throttling */
> static uint32_t max_throttle = (32 << 20);
>
> Using a packet sniffer I verified that max_throttle is in bytes per
> second, here of course 32 MiB/s. It also translates directly to network
> bandwidth - I was not sure about that beforehand, as the bandwidth
> measured in ram_save_live looks more like buffer/memory subsystem
> bandwidth than network bandwidth.

Because that was roughly how fast my laptop's hard drive was. We had a
very bad implementation of migration to file, and the limit was the only
way to avoid pausing the guest.

The reason it's still this today is mainly historic.  I've thought about 
making the default limit unlimited.  I'm not sure if anyone has strong 
opinions.

Regards,

Anthony Liguori


* Re: [Qemu-devel] Migration speed throttling, max_throttle in migration.c
  2011-02-09 20:02 ` Anthony Liguori
@ 2011-02-09 21:18   ` Thomas Treutner
  2011-02-10  5:52     ` Yoshiaki Tamura
  0 siblings, 1 reply; 4+ messages in thread
From: Thomas Treutner @ 2011-02-09 21:18 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: qemu-devel

On 09.02.2011 21:02, Anthony Liguori wrote:
> The reason it's still this today is mainly historic. I've thought about
> making the default limit unlimited. I'm not sure if anyone has strong
> opinions.

Personally, I'd appreciate that. TCP's congestion control seems to work
fine on 100 Mbit Ethernet (as there are no complaints?), so I see no
reason why the default shouldn't be unlimited, letting TCP adapt to the
available bandwidth. If one wants to limit bandwidth manually, that is
still possible, of course.
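
For completeness, a sketch of what that manual override amounts to
conceptually - as far as I can tell QEMU exposes it through the
migrate_set_speed monitor command, which ends up adjusting the rate
limit on the open migration stream. The function below is illustrative,
not the actual migration.c code:

/* Illustrative runtime override of the default throttle; names and
 * signatures are placeholders, not copied from the tree. */
#include <stdint.h>

static int64_t max_throttle = 32 << 20;      /* built-in default, bytes/s */

void set_migration_speed(int64_t bytes_per_sec)
{
    if (bytes_per_sec < 0) {
        bytes_per_sec = 0;
    }
    max_throttle = bytes_per_sec;
    /* A real implementation would also push the new value down to an
     * already-open migration stream, e.g. via qemu_file_set_rate_limit(). */
}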

I don't know whether such a change would affect the other existing ways
of saving a VM, with different consequences?


regards,
thomas


* Re: [Qemu-devel] Migration speed throttling, max_throttle in migration.c
  2011-02-09 21:18   ` Thomas Treutner
@ 2011-02-10  5:52     ` Yoshiaki Tamura
  0 siblings, 0 replies; 4+ messages in thread
From: Yoshiaki Tamura @ 2011-02-10  5:52 UTC (permalink / raw)
  To: Thomas Treutner; +Cc: qemu-devel

2011/2/10 Thomas Treutner <thomas@scripty.at>:
> On 09.02.2011 21:02, Anthony Liguori wrote:
>>
>> The reason it's still this today is mainly historic. I've thought about
>> making the default limit unlimited. I'm not sure if anyone has strong
>> opinions.
>
> Personally, I'd appreciate that. TCP's congestion control seems to work
> fine on 100 Mbit Ethernet (as there are no complaints?), so I see no
> reason why the default shouldn't be unlimited, letting TCP adapt to the
> available bandwidth. If one wants to limit bandwidth manually, that is
> still possible, of course.
>
> I don't know whether such a change would affect the other existing ways
> of saving a VM, with different consequences?

I would prefer to have the live migration speed unlimited, but there
might be cases where the responsiveness of the qemu monitor suffers in
low-spec environments?  Especially with block migration, the overhead
should be far higher than for a usual live migration.

Thanks,

Yoshi

>
>
> regards,
> thomas
>
>

