From: Wen Congyang <wency@cn.fujitsu.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: den@openvz.org, qemu-devel@nongnu.org, lizhijian@cn.fujitsu.com,
	Juan Quintela <quintela@redhat.com>
Subject: Re: [Qemu-devel] safety of migration_bitmap_extend
Date: Thu, 12 Nov 2015 16:33:49 +0800
Message-ID: <56444EED.4060104@cn.fujitsu.com>
In-Reply-To: <20151104091946.GB2702@work-vm>

On 11/04/2015 05:19 PM, Dr. David Alan Gilbert wrote:
> * Wen Congyang (wency@cn.fujitsu.com) wrote:
>> On 11/04/2015 05:05 PM, Dr. David Alan Gilbert wrote:
>>> * Wen Congyang (wency@cn.fujitsu.com) wrote:
>>>> On 11/03/2015 09:47 PM, Dr. David Alan Gilbert wrote:
>>>>> * Juan Quintela (quintela@redhat.com) wrote:
>>>>>> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>>>>>>> Hi,
>>>>>>>   I'm trying to understand why migration_bitmap_extend is correct/safe;
>>>>>>> If I understand correctly, you're arguing that:
>>>>>>>
>>>>>>>   1) the migration_bitmap_mutex around the extend, stops any sync's happening
>>>>>>>      and so no new bits will be set during the extend.
>>>>>>>
>>>>>>>   2) If migration sends a page and clears a bitmap entry, it doesn't
>>>>>>>      matter if we lose the 'clear' because we're copying it as
>>>>>>>      we extend it, because losing the clear just means the page
>>>>>>>      gets resent, and so the data is OK.
>>>>>>>
>>>>>>> However, doesn't (2) mean that migration_dirty_pages might be wrong?
>>>>>>> If a page was sent, the bit cleared, and migration_dirty_pages decremented,
>>>>>>> then if we copy over that bitmap and 'set' that bit again, migration_dirty_pages
>>>>>>> is too small; that means that either migration would finish too early,
>>>>>>> or more likely, migration_dirty_pages would wrap-around -ve and
>>>>>>> never finish.
>>>>>>>
>>>>>>> Is there a reason it's really safe?
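
To make the race being asked about concrete, here is a standalone toy
sketch. It is plain C, nothing from the QEMU tree; the two threads'
steps are serialized into one timeline, with a byte per page standing
in for a real bitmap:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define OLD_PAGES 4

int main(void)
{
    uint8_t old_map[OLD_PAGES] = {1, 1, 1, 1};
    uint8_t new_map[OLD_PAGES * 2] = {0};
    uint64_t migration_dirty_pages = 4;

    /* The extend copies the old bitmap into the new, larger one... */
    memcpy(new_map, old_map, sizeof(old_map));

    /* ...but the migration thread sent page 3 concurrently: the clear
     * lands in the old bitmap after the copy already read it, and the
     * counter is decremented.  The clear is lost in new_map. */
    old_map[3] = 0;
    migration_dirty_pages--;

    /* new_map still shows page 3 dirty, so it simply gets resent (the
     * guest data stays correct), but the counter is now smaller than
     * the number of set bits: */
    uint64_t set_bits = 0;
    for (size_t i = 0; i < sizeof(new_map); i++) {
        set_bits += new_map[i];
    }
    printf("set bits = %" PRIu64 ", migration_dirty_pages = %" PRIu64 "\n",
           set_bits, migration_dirty_pages);   /* prints 4 vs 3 */
    return 0;
}
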
>>>>>>
>>>>>> No.  It is reasonably safe.  Various values of reasonably.
>>>>>>
>>>>>> migration_dirty_pages should never arrive at values near zero, because
>>>>>> we move to the completion stage way before it gets that low.
>>>>>> (We could have very, very bad luck, as in it is not safe).
>>>>>
>>>>> That's only true if we hit the qemu_file_rate_limit() in ram_save_iterate;
>>>>> if we don't hit the rate limit (e.g. because we're CPU or network limited
>>>>> to slower than the set limit) then I think ram_save_iterate will go all the
>>>>> way to sending every page; if that happens, it'll go once more
>>>>> around the main migration loop, call the pending routine, and now get
>>>>> a -ve (hence very +ve) number of pending pages, so it will do ram_save_iterate
>>>>> continuously.
>>>>>
>>>>> We've had that type of bug before when we messed up the dirty-pages calculation
>>>>> during hotplug.
>>>>
>>>> IIUC, migration_bitmap_extend() is called when we hotplug a device while
>>>> migration is running.
>>>>
>>>> In this case, I think we hold the iothread mutex when migration_bitmap_extend() is called.
>>>>
>>>> ram_save_complete() is also protected by the iothread mutex.
>>>>
>>>> So if migration_bitmap_extend() is called, the migration thread may be blocked in
>>>> migration_completion(), waiting for it. qemu_savevm_state_complete() will be called after
>>>> migration_completion() returns.
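
A compressed picture of that locking claim (the call chains here are
illustrative rather than exact traces):

  hotplug (main thread)              migration thread
  ---------------------              ----------------
  qemu_mutex_lock_iothread()         migration_completion()
    ...                                qemu_mutex_lock_iothread()
    migration_bitmap_extend()            qemu_savevm_state_complete()
  qemu_mutex_unlock_iothread()               -> ram_save_complete()
                                       qemu_mutex_unlock_iothread()

So extend and complete are serialized against each other. As the reply
below points out, though, ram_save_iterate runs without that lock.
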
>>>
>>> But I don't think ram_save_iterate is protected by that lock, and my concern
>>> is that the dirty-pages calculation is wrong during the iteration phase, and then
>>> the iteration phase will never exit and never try and get to ram_save_complete.
>>
>> Yes, the dirty-pages count may be wrong. But it would be smaller, not larger, than the
>> exact value. Why would the iteration phase never exit?
> 
> Imagine that migration_dirty_pages is slightly too small and we enter ram_save_iterate;
> ram_save_iterate now sends *all* its pages, decrementing migration_dirty_pages for
> every page sent.  At the end of ram_save_iterate, migration_dirty_pages would be negative.
> But migration_dirty_pages is *u*int64_t; so we exit ram_save_iterate,
> go around the main migration_thread loop again and call qemu_savevm_state_pending, and
> it returns a very large number (because it's actually a negative number), so we keep
> going around the loop, because it never gets smaller.
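
Concretely, here is a minimal standalone sketch of that wrap-around.
It is not QEMU code; the variable name just mirrors the counter in
ram.c:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The counter is one too small: we account for 2 dirty pages, but
     * the bitmap really has 3 bits set, so iterate sends 3 pages. */
    uint64_t migration_dirty_pages = 2;

    for (int sent = 0; sent < 3; sent++) {
        migration_dirty_pages--;   /* third decrement: 0 - 1 wraps */
    }

    /* The "pending" value the migration loop sees afterwards: */
    printf("pending = %" PRIu64 "\n", migration_dirty_pages);
    /* prints 18446744073709551615, so the loop never converges */
    return 0;
}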

I don't know how to trigger the problem in practice. I think storing migration_dirty_pages
in BitmapRcu can fix this problem.
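
Roughly along these lines (a sketch only, not a tested patch: the
dirty_pages member and the recount helper are assumptions, while the
rcu/bmap fields, the mutex, and the copy/set logic mirror what ram.c
already does):

struct BitmapRcu {
    struct rcu_head rcu;
    unsigned long *bmap;
    uint64_t dirty_pages;   /* assumed: moved in from the global */
};

static void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
{
    struct BitmapRcu *old_bitmap = atomic_rcu_read(&migration_bitmap_rcu);
    struct BitmapRcu *bitmap = g_new(struct BitmapRcu, 1);

    bitmap->bmap = bitmap_new(new);

    qemu_mutex_lock(&migration_bitmap_mutex);
    bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
    bitmap_set(bitmap->bmap, old, new - old);
    /* Recount under the mutex instead of inheriting the global, so a
     * 'clear' lost in the copy cannot leave the counter smaller than
     * the number of set bits.  count_set_bits() is a hypothetical
     * helper standing in for a bitmap population count. */
    bitmap->dirty_pages = count_set_bits(bitmap->bmap, new);
    atomic_rcu_set(&migration_bitmap_rcu, bitmap);
    qemu_mutex_unlock(&migration_bitmap_mutex);
    g_free_rcu(old_bitmap, rcu);
}

This way a reader that does atomic_rcu_read() always sees a bitmap and
a counter that are consistent with each other.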

Thanks
Wen Congyang

> 
> Dave
> 
>>
>> Thanks
>> Wen Congyang
>>
>>>
>>> Dave
>>>
>>>>
>>>> Thanks
>>>> Wen Congyang
>>>>
>>>>>
>>>>>> Now, do we really care if migration_dirty_pages is exact?  Not really,
>>>>>> we just use it to calculate whether we should start the throttle or not.
>>>>>> That is only tested once per second, so if we have written a couple of
>>>>>> pages that we are not accounting for, things should be reasonably safe.
>>>>>>
>>>>>> Having said that, I don't know why we didn't catch this problem during
>>>>>> review (yes, I am guilty here).  Not sure how to really fix it,
>>>>>> though.  I think that the problem is more theoretical than real, but
>>>>>
>>>>> Dave
>>>>>
>>>>>> ....
>>>>>>
>>>>>> Thanks, Juan.
>>>>>>
>>>>>>>
>>>>>>> Dave
>>>>>>>
>>>>>>> --
>>>>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>>>> --
>>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>>>>
>>>>> .
>>>>>
>>>>
>>> --
>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>>> .
>>>
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> .
> 
