References: <57D90289.6020003@huawei.com> <57E3D9AF.4060502@huawei.com>
From: "Herongguang (Stephen)"
Message-ID: <57E4814D.4020607@huawei.com>
Date: Fri, 23 Sep 2016 09:11:41 +0800
In-Reply-To: <57E3D9AF.4060502@huawei.com>
Subject: Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
To: Paolo Bonzini, qemu-devel@nongnu.org, quintela@redhat.com, amit.shah@redhat.com
Cc: arei.gonglei@huawei.com, "Huangweidong (C)"

On 2016/9/22 21:16, Herongguang (Stephen) wrote:
>
> On 2016/9/14 17:05, Paolo Bonzini wrote:
>>
>> On 14/09/2016 09:55, Herongguang (Stephen) wrote:
>>> Hi,
>>> We found a problem: when a Red Hat 6 VM reboots (at the grub countdown
>>> UI), migrating the VM leaves a difference between the source-side and
>>> destination-side memory. The difference always resides in GPA
>>> 0xA0000~0xC0000, i.e. the SMRAM area.
>>>
>>> Occasionally this results in a VM instruction emulation error on the
>>> destination side.
>>>
>>> After some digging, I think this is because in the migration code, in
>>> migration_bitmap_sync(), only the dirty bitmaps of the memory slots in
>>> the address_space_memory address space are fetched from kvm-kmod, while
>>> the dirty bitmap of the SMRAM memory slot, in the smram_address_space
>>> address space, is not fetched from kvm-kmod; thus modifications to SMRAM
>>> on the source side are not sent to the destination side.
>>>
>>> I tried the following patch, and this phenomenon does not happen anymore.
>>> Do you think this patch is OK, or do you have a better idea? Thanks.
>>
>> Nice catch!
>>
>> I think the right solution here is to sync all RAM memory regions
>> instead of the address spaces. You can do that by putting a notifier in
>> MemoryRegion; register the notifier in all the RAM creation functions
>> (basically after every mr->ram=true or mr->rom_device=true), and
>> unregister it in memory_region_destructor_ram.
>>
>> Thanks,
>>
>> Paolo
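If I understand the notifier idea correctly, the rough shape would be
something like the sketch below. (RAMRegionNotifier and the helper
functions are names I invented just to illustrate; they do not exist in
the tree, so please treat this as a sketch of your suggestion, not a
claim about the actual API.)

/*
 * Hypothetical notifier for RAM region creation/destruction -- the type
 * and function names below are illustrative only.
 */
#include "qemu/osdep.h"
#include "qemu/queue.h"
#include "exec/memory.h"

typedef struct RAMRegionNotifier RAMRegionNotifier;
struct RAMRegionNotifier {
    /* added == true on creation, false from memory_region_destructor_ram */
    void (*notify)(RAMRegionNotifier *n, MemoryRegion *mr, bool added);
    QLIST_ENTRY(RAMRegionNotifier) node;
};

static QLIST_HEAD(, RAMRegionNotifier) ram_region_notifiers =
    QLIST_HEAD_INITIALIZER(ram_region_notifiers);

void ram_region_notifier_register(RAMRegionNotifier *n)
{
    QLIST_INSERT_HEAD(&ram_region_notifiers, n, node);
}

/* Call after every mr->ram = true / mr->rom_device = true, and from
 * memory_region_destructor_ram() with added == false. */
static void ram_region_notify(MemoryRegion *mr, bool added)
{
    RAMRegionNotifier *n;

    QLIST_FOREACH(n, &ram_region_notifiers, node) {
        n->notify(n, mr, added);
    }
}

The point, as I understand it, is that the migration code could then
learn about RAM regions that are not reachable through
address_space_memory, such as the SMRAM region.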
> I have some concerns:
> 1. For example, vhost does not know about as_id; I wonder whether guests
> in SMM can operate a disk or ethernet card, since in that case vhost
> would not log dirty pages correctly without knowing the as_id.
>
> 2. If a memory region is disabled/enabled/disabled frequently, then,
> since disabled memory regions are removed from the memory slots in
> kvm-kmod, dirty pages would be discarded in kvm-kmod and qemu while the
> region is disabled, and thus go missing. Is my assumption correct?

After reviewing the code, I think question 2 does not exist, as qemu syncs
dirty pages before removing memory slots in kvm_set_phys_mem.

> 3. I agree with your opinion that the right solution is to get dirty-page
> information for all memory regions from kvm-kmod. But I found it somewhat
> hard to implement, since kvm_log_sync() expects a MemoryRegionSection*
> parameter. Do you have a good idea?
>
> As to all the RAM memory regions, I think they are all in
> ram_list.blocks, so there is no need to create a notifier. Is this
> correct?
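To make that concrete, here is the rough, untested shape I had in mind.
(migration_bitmap_sync_all_ram() is a name I made up for this sketch;
memory_region_sync_dirty_bitmap() does exist, and if I read memory.c
correctly it already walks every address space -- including
smram_address_space -- for the given region.)

/*
 * Untested sketch: sync the dirty bitmap of every RAM block instead of
 * walking the address spaces from migration_bitmap_sync().
 */
#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qemu/rcu_queue.h"
#include "exec/memory.h"
#include "exec/ram_addr.h"

static void migration_bitmap_sync_all_ram(void)
{
    RAMBlock *block;

    rcu_read_lock();
    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
        memory_region_sync_dirty_bitmap(block->mr);
    }
    rcu_read_unlock();
}

If my reading is right, this would also sidestep the kvm_log_sync()
difficulty from point 3, since kvm_log_sync() would then be invoked
through the memory listeners with a proper MemoryRegionSection, rather
than having to be called directly.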