* [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
@ 2016-09-14  7:55 Herongguang (Stephen)
  2016-09-14  9:05 ` Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-14  7:55 UTC (permalink / raw)
  To: qemu-devel, pbonzini, quintela, amit.shah; +Cc: arei.gonglei, Huangweidong (C)

Hi,
We found a problem: when a Red Hat 6 VM reboots (while in the GRUB countdown UI), migrating the VM results in a memory difference between the source and destination sides. The difference always resides in GPA 0xA0000~0xC0000, i.e. the SMRAM area.

Occasionally this results in a VM instruction emulation error on the destination side.

After some digging, I think this is because in the migration code, migration_bitmap_sync() only fetches the dirty bitmap of the memory slots in the address_space_memory address space from kvm-kmod; the dirty bitmap of the SMRAM memory slot, which lives in the smram_address_space address space, is never fetched, so modifications to SMRAM on the source side are not sent to the destination.

I tried the following patch, and the problem no longer occurs. Do you think this patch is OK, or do you have a better idea? Thanks.

diff --git a/migration/ram.c b/migration/ram.c
index a3d70c4..1cc4360 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -607,6 +607,8 @@ static void migration_bitmap_sync_init(void)
      iterations_prev = 0;
  }

+extern AddressSpace smram_address_space;
+
  static void migration_bitmap_sync(void)
  {
      RAMBlock *block;
@@ -627,6 +629,7 @@ static void migration_bitmap_sync(void)

      trace_migration_bitmap_sync_start();
      address_space_sync_dirty_bitmap(&address_space_memory);
+    address_space_sync_dirty_bitmap(&smram_address_space);

      qemu_mutex_lock(&migration_bitmap_mutex);
      rcu_read_lock();
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index d1a25c5..b98fe22 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -1111,7 +1111,7 @@ static int kvm_get_supported_msrs(KVMState *s)

  static Notifier smram_machine_done;
  static KVMMemoryListener smram_listener;
-static AddressSpace smram_address_space;
+AddressSpace smram_address_space;
  static MemoryRegion smram_as_root;
  static MemoryRegion smram_as_mem;

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-14  7:55 [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination Herongguang (Stephen)
@ 2016-09-14  9:05 ` Paolo Bonzini
  2016-09-22 13:16   ` Herongguang (Stephen)
  2016-09-22  7:51   ` [Qemu-devel] " Herongguang (Stephen)
  2016-09-22  7:56 ` [Qemu-devel] [RFC/PATCH 3] kvm: fix events.flags (KVM_VCPUEVENT_VALID_SMM) overwritten by 0 Herongguang (Stephen)
  2 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2016-09-14  9:05 UTC (permalink / raw)
  To: Herongguang (Stephen), qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 14/09/2016 09:55, Herongguang (Stephen) wrote:
> Hi,
> We found a problem: when a Red Hat 6 VM reboots (while in the GRUB
> countdown UI), migrating the VM results in a memory difference between
> the source and destination sides. The difference always resides in GPA
> 0xA0000~0xC0000, i.e. the SMRAM area.
>
> Occasionally this results in a VM instruction emulation error on the
> destination side.
>
> After some digging, I think this is because in the migration code,
> migration_bitmap_sync() only fetches the dirty bitmap of the memory
> slots in the address_space_memory address space from kvm-kmod; the
> dirty bitmap of the SMRAM memory slot, which lives in the
> smram_address_space address space, is never fetched, so modifications
> to SMRAM on the source side are not sent to the destination.
>
> I tried the following patch, and the problem no longer occurs. Do you
> think this patch is OK, or do you have a better idea? Thanks.

Nice catch!

I think the right solution here is to sync all RAM memory regions
instead of the address spaces.  You can do that by putting a notifier in
MemoryRegion; register the notifier in all the RAM creation functions
(basically after every mr->ram=true or mr->rom_device=true), and
unregister it in memory_region_destructor_ram.
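
A rough sketch of that shape, using the Notifier API from
include/qemu/notify.h (the hook names and call sites below are
illustrative assumptions, not actual QEMU code):

    #include "qemu/notify.h"

    /* Hypothetical: fires with the MemoryRegion* whenever a RAM or
     * rom_device region is created, so migration can track every RAM
     * region regardless of which AddressSpace it is visible in. */
    static NotifierList ram_mr_added_notifiers =
        NOTIFIER_LIST_INITIALIZER(ram_mr_added_notifiers);

    /* Would be called after every mr->ram = true / mr->rom_device = true;
     * the matching notifier_remove() would go in
     * memory_region_destructor_ram. */
    static void memory_region_notify_ram_added(MemoryRegion *mr)
    {
        notifier_list_notify(&ram_mr_added_notifiers, mr);
    }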

Thanks,

Paolo

> diff --git a/migration/ram.c b/migration/ram.c
> index a3d70c4..1cc4360 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -607,6 +607,8 @@ static void migration_bitmap_sync_init(void)
>      iterations_prev = 0;
>  }
> 
> +extern AddressSpace smram_address_space;
> +
>  static void migration_bitmap_sync(void)
>  {
>      RAMBlock *block;
> @@ -627,6 +629,7 @@ static void migration_bitmap_sync(void)
> 
>      trace_migration_bitmap_sync_start();
>      address_space_sync_dirty_bitmap(&address_space_memory);
> +    address_space_sync_dirty_bitmap(&smram_address_space);
> 
>      qemu_mutex_lock(&migration_bitmap_mutex);
>      rcu_read_lock();
> diff --git a/target-i386/kvm.c b/target-i386/kvm.c
> index d1a25c5..b98fe22 100644
> --- a/target-i386/kvm.c
> +++ b/target-i386/kvm.c
> @@ -1111,7 +1111,7 @@ static int kvm_get_supported_msrs(KVMState *s)
> 
>  static Notifier smram_machine_done;
>  static KVMMemoryListener smram_listener;
> -static AddressSpace smram_address_space;
> +AddressSpace smram_address_space;
>  static MemoryRegion smram_as_root;
>  static MemoryRegion smram_as_mem;
> 
> 
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC/PATCH 2] kvm: x86: handle KVM_SET_VCPU_EVENTS/KVM_VCPUEVENT_VALID_SMM properly
  2016-09-14  7:55 [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination Herongguang (Stephen)
@ 2016-09-22  7:51   ` Herongguang (Stephen)
  2016-09-22  7:51   ` [Qemu-devel] " Herongguang (Stephen)
  2016-09-22  7:56 ` [Qemu-devel] [RFC/PATCH 3] kvm: fix events.flags (KVM_VCPUEVENT_VALID_SMM) overwritten by 0 Herongguang (Stephen)
  2 siblings, 0 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-22  7:51 UTC (permalink / raw)
  To: pbonzini, quintela, amit.shah, rkrcmar, kvm, herongguang.he
  Cc: qemu-devel, arei.gonglei, Huangweidong (C)

After making memory consistent between source and destination (https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg03069.html), an instruction
emulation failure can still be reproduced on the destination side when the VM is migrated while in the GRUB stage:

[2016-09-15 06:29:24] monitor_qapi_event_emit:478 {"timestamp": {"seconds": 1473892164, "microseconds": 99652}, "event": "RESUME"}
KVM internal error. Suberror: 1
emulation failure
EAX=000000b5 EBX=00008fc6 ECX=00005678 EDX=00000000
ESI=000f254b EDI=00000000 EBP=0000f958 ESP=000ee958
EIP=00008000 EFL=00010002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 ffffffff 00809300
CS =a000 000a0000 ffffffff 00809300
SS =0000 00000000 ffffffff 00809300
DS =0000 00000000 ffffffff 00809300
FS =0000 00000000 ffffffff 00809300
GS =0000 00000000 ffffffff 00809300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     000f71a0 00000037
IDT=     00000000 00000000
CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff <ff> ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[2016-09-15 06:29:24] virtio_set_status:712 virtio-blk device status is 7 that means DRIVER OK
[2016-09-15 06:29:24] monitor_qapi_event_emit:478 {"timestamp": {"seconds": 1473892164, "microseconds": 106738}, "event": "STOP"}


Note that CS+EIP=0xA8000, which is within SMRAM, yet SMM=0, which is confusing! I found that in kvm_put_vcpu_events, the KVM_VCPUEVENT_VALID_SMM bit in
events.flags is overwritten by a later assignment of events.flags to 0, which is obviously a mistake.
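
Abridged, the problematic ordering in kvm_put_vcpu_events is (the full diff appears in patch 3 below):

    if (has_msr_smbase) {
        ...
        events.flags |= KVM_VCPUEVENT_VALID_SMM;  /* bit set here... */
    }

    events.flags = 0;       /* ...then unconditionally wiped out here */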

So it comes to patch 3. With patch 3 applied, however, kvm-kmod crashes:
[69328.761479] kvm [51353]: vcpu7, guest rIP: 0xfd31c unhandled wrmsr: 0xd1 data 0
[69337.406083] BUG: unable to handle kernel NULL pointer dereference at           (null)
[69337.414193] IP: [<ffffffffa056a59d>] gfn_to_rmap+0xcd/0xe0 [kvm]
[69337.420357] PGD 0
[69337.422514] Oops: 0000 [#1] SMP
[69337.425783] Modules linked in: rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache nfsd auth_rpcgss nfs_acl lockd grace sunrpc xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat xt_conntrack ipt_REJECT nf_reject_ipv4 bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter openvswitch nf_conntrack_ipv6 nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat nf_conntrack libcrc32c ipmi_devintf ipmi_si ipmi_msghandler intel_rapl sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw kvm_intel gf128mul iTCO_wdt kvm glue_helper iTCO_vendor_support ablk_helper cryptd pcspkr sg mei_me mei ioatdma shpchp lpc_ich wmi mfd_core i2c_i801 vhost_net tun vhost macvtap macvlan ip_tables ext4 jbd2 mbcache sr_mod sd_mod cdrom isci libsas igb ahci libahci scsi_transport_sas crc32c_intel libata ptp pps_core serio_raw i2c_algo_bit megaraid_sas i2c_core dca dm_mod vfio_pci irqbypass vfio_virqfd vfio_iommu_type1 vfio
[69337.524020] CPU: 3 PID: 51375 Comm: CPU 0/KVM Not tainted 4.7.0 #1
[69337.530315] Hardware name: Huawei Technologies Co., Ltd. Tecal RH2288 V2-8S/BC11SRSB1, BIOS RMIBV365 09/06/2013
[69337.540644] task: ffff88104bcc8000 ti: ffff881055650000 task.ti: ffff881055650000
[69337.548382] RIP: 0010:[<ffffffffa056a59d>]  [<ffffffffa056a59d>] gfn_to_rmap+0xcd/0xe0 [kvm]
[69337.557105] RSP: 0018:ffff881055653a88  EFLAGS: 00010202
[69337.562541] RAX: ffffc9000b3cf428 RBX: ffff88085624a538 RCX: 000000000000000c
[69337.569788] RDX: ffff88083a6e2bd0 RSI: 00000000000000a7 RDI: 00000000000000a0
[69337.577036] RBP: ffff881055653a88 R08: 000000000000000c R09: ffffc9000b3cf008
[69337.584283] R10: ffffc9000b3cf000 R11: 0000000000000037 R12: ffff8810569d0000
[69337.591530] R13: 0000000000000000 R14: 00000000000000a7 R15: ffff8808d624a538
[69337.598777] FS:  00007faeb900d700(0000) GS:ffff88085e0c0000(0000) knlGS:0000000000000000
[69337.607116] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[69337.612978] CR2: 0000000000000000 CR3: 000000104c203000 CR4: 00000000000426e0
[69337.620225] Stack:
[69337.622369]  ffff881055653af0 ffffffffa056b28d ffffffff00000000 0000000000000001
[69337.630059]  0000000000000001 00007fadb50a7000 0100000000000000 ffff88083a6e2bd0
[69337.637751]  ffff8810569d0000 0000000000000001 000ffffffffff000 ffff880000000000
[69337.645442] Call Trace:
[69337.648029]  [<ffffffffa056b28d>] mmu_set_spte+0x19d/0x280 [kvm]
[69337.654166]  [<ffffffffa056f56b>] __direct_map.part.122+0x19b/0x210 [kvm]
[69337.661078]  [<ffffffffa056fa29>] tdp_page_fault+0x239/0x260 [kvm]
[69337.667385]  [<ffffffffa0568bf0>] kvm_mmu_page_fault+0x60/0xf0 [kvm]
[69337.673856]  [<ffffffffa03f8656>] handle_ept_violation+0x96/0x190 [kvm_intel]
[69337.681106]  [<ffffffffa03ff740>] vmx_handle_exit+0x1f0/0xc30 [kvm_intel]
[69337.688019]  [<ffffffffa055d9ce>] vcpu_enter_guest+0x90e/0x1190 [kvm]
[69337.694585]  [<ffffffffa056444d>] kvm_arch_vcpu_ioctl_run+0xcd/0x3f0 [kvm]
[69337.701581]  [<ffffffffa0549fb5>] kvm_vcpu_ioctl+0x295/0x610 [kvm]
[69337.707880]  [<ffffffff8110b6cc>] ? do_futex+0x11c/0x530
[69337.713314]  [<ffffffff8122e436>] do_vfs_ioctl+0xa6/0x5c0
[69337.718833]  [<ffffffff81003276>] ? do_audit_syscall_entry+0x66/0x70
[69337.725303]  [<ffffffff810037cf>] ? syscall_trace_enter_phase1+0x11f/0x140
[69337.732291]  [<ffffffff8122e9c9>] SyS_ioctl+0x79/0x90
[69337.737470]  [<ffffffff81003b12>] do_syscall_64+0x62/0x110
[69337.743085]  [<ffffffff816c88a1>] entry_SYSCALL64_slow_path+0x25/0x25
[69337.749641] Code: 8b 38 0f b6 52 28 5d 83 e2 0f 83 ea 01 8d 0c d2 48 63 d2 48 8b 44 d0 18 48 d3 ee 48 d3 ef 48 29 fe 48 8d 04 f0 c3 8d 48 01 eb 80 <48> 8b 3c 25 00 00 00 00 31 c0 eb cb 0f 1f 80 00 00 00 00 66 66
[69337.769707] RIP  [<ffffffffa056a59d>] gfn_to_rmap+0xcd/0xe0 [kvm]
[69337.775938]  RSP <ffff881055653a88>
[69337.779552] CR2: 0000000000000000
[69337.783467] ---[ end trace 5350f10b8de91e83 ]---
[69337.843957] Kernel panic - not syncing: Fatal exception
[69337.849430] Kernel Offset: disabled

I found that when the kernel crashes, in the call chain tdp_page_fault -> __direct_map -> mmu_set_spte -> rmap_add -> gfn_to_rmap -> __gfn_to_memslot & __gfn_to_rmap,
__gfn_to_memslot returns NULL, and __gfn_to_rmap uses this NULL pointer (slot) without checking, which is bad.


After some investigation, I found that when the kernel crashes, the gfn is 0xA7, again in the SMRAM region. And at this time the role.smm bit that
kvm_memslots_for_spte_role consults is false, while (vcpu->arch.hflags & HF_SMM_MASK) is actually true! That is, the MMU context still carries the
pre-SMM role, so the rmap lookup goes to the non-SMM memslots, where gfn 0xA7 has no slot.
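
For reference, the helper picks the memslot set from the cached page role, not from the vcpu's live hflags (4.7-era arch/x86/kvm/mmu.h, abridged):

    static inline struct kvm_memslots *
    kvm_memslots_for_spte_role(struct kvm *kvm, union kvm_mmu_page_role role)
    {
            /* role.smm selects the SMM vs. non-SMM memslot set */
            return __kvm_memslots(kvm, role.smm);
    }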

So I think the handling of KVM_VCPUEVENT_VALID_SMM in kvm-kmod's kvm_vcpu_ioctl_x86_set_vcpu_events is lacking. I tried the following patch,
and it seems to work fine.

Do you think this patch is appropriate, or is it not enough? Thanks.

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 19f9f9e..f39e839 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3013,8 +3013,10 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
                 vcpu->arch.apic->sipi_vector = events->sipi_vector;

         if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
-               if (events->smi.smm)
+               if (events->smi.smm) {
                         vcpu->arch.hflags |= HF_SMM_MASK;
+                       kvm_mmu_reset_context(vcpu);
+               }
                 else
                         vcpu->arch.hflags &= ~HF_SMM_MASK;
                 vcpu->arch.smi_pending = events->smi.pending;



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Qemu-devel] [RFC/PATCH 3] kvm: fix events.flags (KVM_VCPUEVENT_VALID_SMM) overwritten by 0
  2016-09-14  7:55 [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination Herongguang (Stephen)
  2016-09-14  9:05 ` Paolo Bonzini
  2016-09-22  7:51   ` [Qemu-devel] " Herongguang (Stephen)
@ 2016-09-22  7:56 ` Herongguang (Stephen)
  2 siblings, 0 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-22  7:56 UTC (permalink / raw)
  To: qemu-devel, pbonzini, quintela, amit.shah, rth, ehabkost, mtosatti
  Cc: arei.gonglei, Huangweidong (C)

Fix the KVM_VCPUEVENT_VALID_SMM bit in events.flags being overwritten by 0.

Signed-off-by: He Rongguang <herongguang.he@huawei.com>
---
Note that without patch 2, this results in a kvm-kmod crash, as described in patch 2
---
  target-i386/kvm.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index d1a25c5..7db33d2 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -2456,6 +2456,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)

      events.sipi_vector = env->sipi_vector;

+    events.flags = 0;
      if (has_msr_smbase) {
          events.smi.smm = !!(env->hflags & HF_SMM_MASK);
          events.smi.smm_inside_nmi = !!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK);
@@ -2474,7 +2475,6 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
          events.flags |= KVM_VCPUEVENT_VALID_SMM;
      }

-    events.flags = 0;
      if (level >= KVM_PUT_RESET_STATE) {
          events.flags |=
              KVM_VCPUEVENT_VALID_NMI_PENDING | KVM_VCPUEVENT_VALID_SIPI_VECTOR;
--
1.8.3.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [RFC/PATCH 2] kvm: x86: handle KVM_SET_VCPU_EVENTS/KVM_VCPUEVENT_VALID_SMM properly
  2016-09-22  7:51   ` [Qemu-devel] " Herongguang (Stephen)
@ 2016-09-22  9:29     ` Paolo Bonzini
  -1 siblings, 0 replies; 17+ messages in thread
From: Paolo Bonzini @ 2016-09-22  9:29 UTC (permalink / raw)
  To: Herongguang (Stephen), quintela, amit.shah, rkrcmar, kvm
  Cc: qemu-devel, arei.gonglei, Huangweidong (C)



On 22/09/2016 09:51, Herongguang (Stephen) wrote:
> After making memory consistent between source and destination
> (https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg03069.html),
> an instruction emulation failure can still be reproduced on the
> destination side when the VM is migrated while in the GRUB stage:

Hi!  Did you follow up on that patch, by the way?

> So I think the handling of KVM_VCPUEVENT_VALID_SMM in kvm-kmod's
> kvm_vcpu_ioctl_x86_set_vcpu_events is lacking. I tried the following
> patch, and it seems to work fine.
>
> Do you think this patch is appropriate, or is it not enough? Thanks.

Yes.  I would just call kvm_mmu_reset_context unconditionally at the end
of kvm_vcpu_ioctl_x86_set_vcpu_events.  Please send this patch as
non-RFC.
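
In other words, the end of kvm_vcpu_ioctl_x86_set_vcpu_events would look
roughly like this (sketch only, untested):

	if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
		if (events->smi.smm)
			vcpu->arch.hflags |= HF_SMM_MASK;
		else
			vcpu->arch.hflags &= ~HF_SMM_MASK;
		vcpu->arch.smi_pending = events->smi.pending;
		/* ... */
	}

	/* hflags may have changed above, and role.smm is derived from
	 * it, so rebuild the MMU context regardless of the new value */
	kvm_mmu_reset_context(vcpu);

	return 0;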

Patch 3 is also okay, please send it separately.

Thanks,

Paolo

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 19f9f9e..f39e839 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3013,8 +3013,10 @@ static int
> kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>                 vcpu->arch.apic->sipi_vector = events->sipi_vector;
> 
>         if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
> -               if (events->smi.smm)
> +               if (events->smi.smm) {
>                         vcpu->arch.hflags |= HF_SMM_MASK;
> +                       kvm_mmu_reset_context(vcpu);
> +               }
>                 else
>                         vcpu->arch.hflags &= ~HF_SMM_MASK;
>                 vcpu->arch.smi_pending = events->smi.pending;
> 
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-14  9:05 ` Paolo Bonzini
@ 2016-09-22 13:16   ` Herongguang (Stephen)
  2016-09-23  1:11     ` Herongguang (Stephen)
  2016-09-23  7:17     ` Paolo Bonzini
  0 siblings, 2 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-22 13:16 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 2016/9/14 17:05, Paolo Bonzini wrote:
>
>
> On 14/09/2016 09:55, Herongguang (Stephen) wrote:
>> Hi,
>> We found a problem: when a Red Hat 6 VM reboots (while in the GRUB
>> countdown UI), migrating the VM results in a memory difference between
>> the source and destination sides. The difference always resides in GPA
>> 0xA0000~0xC0000, i.e. the SMRAM area.
>>
>> Occasionally this results in a VM instruction emulation error on the
>> destination side.
>>
>> After some digging, I think this is because in the migration code,
>> migration_bitmap_sync() only fetches the dirty bitmap of the memory
>> slots in the address_space_memory address space from kvm-kmod; the
>> dirty bitmap of the SMRAM memory slot, which lives in the
>> smram_address_space address space, is never fetched, so modifications
>> to SMRAM on the source side are not sent to the destination.
>>
>> I tried the following patch, and the problem no longer occurs. Do you
>> think this patch is OK, or do you have a better idea? Thanks.
>
> Nice catch!
>
> I think the right solution here is to sync all RAM memory regions
> instead of the address spaces.  You can do that by putting a notifier in
> MemoryRegion; register the notifier in all the RAM creation functions
> (basically after every mr->ram=true or mr->rom_device=true), and
> unregister it in memory_region_destructor_ram.
>
> Thanks,
>
> Paolo
>

I have some concerns:
1. For example, vhost does not know about as_id. I wonder whether guests in SMM can operate a disk or Ethernet card, as in
that case vhost would not log dirty pages correctly, not knowing the as_id.

2. If a memory region is disabled/enabled/disabled frequently, then since disabled memory regions are removed
from the memory slots in kvm-kmod, their dirty pages would be discarded by kvm-kmod and qemu while disabled, and thus lost.
Is my assumption correct?

3. I agree with your opinion that the right solution is to get dirty-page information for all memory regions from
kvm-kmod. But I found it somewhat hard to implement, since kvm_log_sync() expects a MemoryRegionSection*
parameter. Do you have a good idea?

As to all the RAM memory regions, I think they are all in ram_list.blocks, so there is no need to create
a notifier. Is this correct?

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC/PATCH 2] kvm: x86: handle KVM_SET_VCPU_EVENTS/KVM_VCPUEVENT_VALID_SMM properly
  2016-09-22  9:29     ` [Qemu-devel] " Paolo Bonzini
@ 2016-09-22 13:19       ` Herongguang (Stephen)
  -1 siblings, 0 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-22 13:19 UTC (permalink / raw)
  To: Paolo Bonzini, quintela, amit.shah, rkrcmar, kvm
  Cc: qemu-devel, arei.gonglei, Huangweidong (C)



On 2016/9/22 17:29, Paolo Bonzini wrote:
>
>
> On 22/09/2016 09:51, Herongguang (Stephen) wrote:
>> After making memory consistent between source and destination
>> (https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg03069.html),
>> an instruction emulation failure can still be reproduced on the
>> destination side when the VM is migrated while in the GRUB stage:
>
> Hi!  Did you follow up on that patch, by the way?
Yes, I have some concerns; see that post.

>
>> So I think the handling of KVM_VCPUEVENT_VALID_SMM in kvm-kmod's
>> kvm_vcpu_ioctl_x86_set_vcpu_events is lacking. I tried the following
>> patch, and it seems to work fine.
>>
>> Do you think this patch is appropriate, or is it not enough? Thanks.
>
> Yes.  I would just call kvm_mmu_reset_context unconditionally at the end
> of kvm_vcpu_ioctl_x86_set_vcpu_events.  Please send this patch as
> non-RFC.
>
> Patch 3 is also okay, please send it separately.
Ok, I will test and post it tomorrow, thanks!

>
> Thanks,
>
> Paolo
>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 19f9f9e..f39e839 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -3013,8 +3013,10 @@ static int
>> kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>>                  vcpu->arch.apic->sipi_vector = events->sipi_vector;
>>
>>          if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
>> -               if (events->smi.smm)
>> +               if (events->smi.smm) {
>>                          vcpu->arch.hflags |= HF_SMM_MASK;
>> +                       kvm_mmu_reset_context(vcpu);
>> +               }
>>                  else
>>                          vcpu->arch.hflags &= ~HF_SMM_MASK;
>>                  vcpu->arch.smi_pending = events->smi.pending;
>>
>>
>
> .
>


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-22 13:16   ` Herongguang (Stephen)
@ 2016-09-23  1:11     ` Herongguang (Stephen)
  2016-09-23  7:17     ` Paolo Bonzini
  1 sibling, 0 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-23  1:11 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 2016/9/22 21:16, Herongguang (Stephen) wrote:
>
>
> On 2016/9/14 17:05, Paolo Bonzini wrote:
>>
>>
>> On 14/09/2016 09:55, Herongguang (Stephen) wrote:
>>> Hi,
>>> We found a problem: when a Red Hat 6 VM reboots (while in the GRUB
>>> countdown UI), migrating the VM results in a memory difference between
>>> the source and destination sides. The difference always resides in GPA
>>> 0xA0000~0xC0000, i.e. the SMRAM area.
>>>
>>> Occasionally this results in a VM instruction emulation error on the
>>> destination side.
>>>
>>> After some digging, I think this is because in the migration code,
>>> migration_bitmap_sync() only fetches the dirty bitmap of the memory
>>> slots in the address_space_memory address space from kvm-kmod; the
>>> dirty bitmap of the SMRAM memory slot, which lives in the
>>> smram_address_space address space, is never fetched, so modifications
>>> to SMRAM on the source side are not sent to the destination.
>>>
>>> I tried the following patch, and the problem no longer occurs. Do you
>>> think this patch is OK, or do you have a better idea? Thanks.
>>
>> Nice catch!
>>
>> I think the right solution here is to sync all RAM memory regions
>> instead of the address spaces.  You can do that by putting a notifier in
>> MemoryRegion; register the notifier in all the RAM creation functions
>> (basically after every mr->ram=true or mr->rom_device=true), and
>> unregister it in memory_region_destructor_ram.
>>
>> Thanks,
>>
>> Paolo
>>
>
> I have some concerns:
> 1. For example, vhost does not know about as_id. I wonder whether guests in SMM can operate a disk or Ethernet card, as in
> that case vhost would not log dirty pages correctly, not knowing the as_id.
>
> 2. If a memory region is disabled/enabled/disabled frequently, then since disabled memory regions are removed
> from the memory slots in kvm-kmod, their dirty pages would be discarded by kvm-kmod and qemu while disabled, and thus lost.
> Is my assumption correct?
After reviewing the code, I think question 2 is not an issue, since qemu syncs dirty pages before removing memory slots in kvm_set_phys_mem.

>
> 3. I agree with your opinion that the right solution is to get dirty-page information for all memory regions from
> kvm-kmod. But I found it somewhat hard to implement, since kvm_log_sync() expects a MemoryRegionSection*
> parameter. Do you have a good idea?
>
> As to all the RAM memory regions, I think they are all in ram_list.blocks, so there is no need to create
> a notifier. Is this correct?

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-22 13:16   ` Herongguang (Stephen)
  2016-09-23  1:11     ` Herongguang (Stephen)
@ 2016-09-23  7:17     ` Paolo Bonzini
  2016-09-23  8:51       ` Herongguang (Stephen)
  2016-09-25 11:33       ` Herongguang (Stephen)
  1 sibling, 2 replies; 17+ messages in thread
From: Paolo Bonzini @ 2016-09-23  7:17 UTC (permalink / raw)
  To: Herongguang (Stephen), qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 22/09/2016 15:16, Herongguang (Stephen) wrote:
> I have some concerns:
> 1. For example, vhost does not know about as_id. I wonder whether
> guests in SMM can operate a disk or Ethernet card, as in
> that case vhost would not log dirty pages correctly, not knowing
> the as_id.

In the end memory is logged by ram_addr_t, not by address space.  So if
vhost_sync_dirty_bitmap is called on the right region everything works.

Guests in SMM can operate on storage devices, but storage devices cannot
write to 0xA0000-0xBFFFF so that's safe.
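
As an illustration (not a quote of the vhost code; the helper name is
only a sketch), this is the shape of how any dirty-log writer marks
pages:

    /* memory_region_set_dirty() translates (mr, offset) to a ram_addr_t
     * and sets bits in the per-RAMBlock dirty bitmap, so the same bits
     * are set no matter which AddressSpace the region is seen through. */
    static void log_write(MemoryRegionSection *section,
                          hwaddr offset, hwaddr len)
    {
        memory_region_set_dirty(section->mr,
                                section->offset_within_region + offset,
                                len);
    }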

> 2. If a memory region is disabled/enabled/disabled frequently, then
> since disabled memory regions are removed from the memory slots in
> kvm-kmod, their dirty pages would be discarded by kvm-kmod and qemu
> while disabled, and thus lost.
> Is my assumption correct?

As you found, this is handled by kvm_set_phys_mem:

        if (mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
            kvm_physical_sync_dirty_bitmap(kml, section);
        }

> 3. I agree with your opinion that the right solution is to get
> dirty-page information for all memory regions from
> kvm-kmod. But I found it somewhat hard to implement, since
> kvm_log_sync() expects a MemoryRegionSection*
> parameter. Do you have a good idea?

You're right, memory_region_sync_dirty_bitmap handles this but it's 
inefficient.

So your patch is correct, but it breaks for !x86 as you probably know.  
We need to look at the address spaces which have a listener that 
implements log_sync.

> As to all the RAM memory regions, I think they are all in
> ram_list.blocks, so there is no need to create
> a notifier. Is this correct?

Yes, makes sense.

The patch here is a bit of a hack but is efficient.  It looks at
each address space just once and calls log_sync just once per
FlatRange.

Paolo

diff --git a/include/exec/memory.h b/include/exec/memory.h
index 3e4d416..23b086d 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1154,12 +1154,11 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
                                        hwaddr addr, uint64_t size);
 
 /**
- * address_space_sync_dirty_bitmap: synchronize the dirty log for all memory
+ * memory_global_dirty_log_sync: synchronize the dirty log for all memory
  *
- * Synchronizes the dirty page log for an entire address space.
- * @as: the address space that contains the memory being synchronized
+ * Synchronizes the dirty page log for all address spaces.
  */
-void address_space_sync_dirty_bitmap(AddressSpace *as);
+void memory_global_dirty_log_sync(void);
 
 /**
  * memory_region_transaction_begin: Start a transaction.
diff --git a/memory.c b/memory.c
index 1a1baf5..6cac674 100644
--- a/memory.c
+++ b/memory.c
@@ -158,15 +158,11 @@ static bool memory_listener_match(MemoryListener *listener,
 
 /* No need to ref/unref .mr, the FlatRange keeps it alive.  */
 #define MEMORY_LISTENER_UPDATE_REGION(fr, as, dir, callback, _args...)  \
-    MEMORY_LISTENER_CALL(callback, dir, (&(MemoryRegionSection) {       \
-        .mr = (fr)->mr,                                                 \
-        .address_space = (as),                                          \
-        .offset_within_region = (fr)->offset_in_region,                 \
-        .size = (fr)->addr.size,                                        \
-        .offset_within_address_space = int128_get64((fr)->addr.start),  \
-        .readonly = (fr)->readonly,                                     \
-              }), ##_args)
-
+    do {                                                                \
+        MemoryRegionSection mrs = section_from_flat_range(fr, as);      \
+        MEMORY_LISTENER_CALL(callback, dir, &mrs, ##_args);             \
+    } while(0)
+
 struct CoalescedMemoryRange {
     AddrRange addr;
     QTAILQ_ENTRY(CoalescedMemoryRange) link;
@@ -245,6 +241,19 @@ typedef struct AddressSpaceOps AddressSpaceOps;
 #define FOR_EACH_FLAT_RANGE(var, view)          \
     for (var = (view)->ranges; var < (view)->ranges + (view)->nr; ++var)
 
+static inline MemoryRegionSection
+section_from_flat_range(FlatRange *fr, AddressSpace *as)
+{
+    return (MemoryRegionSection) {
+        .mr = fr->mr,
+        .address_space = as,
+        .offset_within_region = fr->offset_in_region,
+        .size = fr->addr.size,
+        .offset_within_address_space = int128_get64(fr->addr.start),
+        .readonly = fr->readonly,
+    };
+}
+
 static bool flatrange_equal(FlatRange *a, FlatRange *b)
 {
     return a->mr == b->mr
@@ -2124,16 +2133,25 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
     return mr && mr != container;
 }
 
-void address_space_sync_dirty_bitmap(AddressSpace *as)
+void memory_global_dirty_log_sync(void)
 {
+    MemoryListener *listener;
+    AddressSpace *as;
     FlatView *view;
     FlatRange *fr;
 
-    view = address_space_get_flatview(as);
-    FOR_EACH_FLAT_RANGE(fr, view) {
-        MEMORY_LISTENER_UPDATE_REGION(fr, as, Forward, log_sync);
+    QTAILQ_FOREACH(listener, &memory_listeners, link) {
+        if (!listener->log_sync || !listener->address_space_filter) {
+            continue;
+        }
+        as = listener->address_space_filter;
+        view = address_space_get_flatview(as);
+        FOR_EACH_FLAT_RANGE(fr, view) {
+            MemoryRegionSection mrs = section_from_flat_range(fr, as);
+            listener->log_sync(listener, &mrs);
+        }
+        flatview_unref(view);
     }
-    flatview_unref(view);
 }
 
 void memory_global_dirty_log_start(void)
diff --git a/migration/ram.c b/migration/ram.c
index a6e1c63..e33ff5e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -626,7 +626,7 @@ static void migration_bitmap_sync(void)
     }
 
     trace_migration_bitmap_sync_start();
-    address_space_sync_dirty_bitmap(&address_space_memory);
+    memory_global_dirty_log_sync();
 
     qemu_mutex_lock(&migration_bitmap_mutex);
     rcu_read_lock();

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-23  7:17     ` Paolo Bonzini
@ 2016-09-23  8:51       ` Herongguang (Stephen)
  2016-09-23  8:59         ` Paolo Bonzini
  2016-09-25 11:33       ` Herongguang (Stephen)
  1 sibling, 1 reply; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-23  8:51 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 2016/9/23 15:17, Paolo Bonzini wrote:
>
>
> On 22/09/2016 15:16, Herongguang (Stephen) wrote:
>> I have some concerns:
>> 1. For example, vhost does not know about as_id. I wonder whether
>> guests in SMM can operate a disk or Ethernet card, as in
>> that case vhost would not log dirty pages correctly, not knowing
>> the as_id.
>
> In the end memory is logged by ram_addr_t, not by address space.  So if
> vhost_sync_dirty_bitmap is called on the right region everything works.
>
> Guests in SMM can operate on storage devices, but storage devices cannot
> write to 0xA0000-0xBFFFF so that's safe.
You are right, as vhost does not have an smram address space listener. So is this
a flaw in qemu, that virtual devices cannot write to SMRAM?

>
>> 2. If a memory region is disabled/enabled/disabled frequently, then
>> since disabled memory regions are removed from the memory slots in
>> kvm-kmod, their dirty pages would be discarded by kvm-kmod and qemu
>> while disabled, and thus lost.
>> Is my assumption correct?
>
> As you found, this is handled by kvm_set_phys_mem:
>
>          if (mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
>              kvm_physical_sync_dirty_bitmap(kml, section);
>          }
>
>> 3. I agree with your opinion that the right solution is to get
>> dirty-page information for all memory regions from
>> kvm-kmod. But I found it somewhat hard to implement, since
>> kvm_log_sync() expects a MemoryRegionSection*
>> parameter. Do you have a good idea?
>
> You're right, memory_region_sync_dirty_bitmap handles this but it's
> inefficient.
>
> So your patch is correct, but it breaks for !x86 as you probably know.
> We need to look at the address spaces which have a listener that
> implements log_sync.
>
>> As to all the RAM memory regions, I think they are all in
>> ram_list.blocks, so there is no need to create
>> a notifier. Is this correct?
>
> Yes, makes sense.
>
> The patch here is a bit of a hack but is efficient.  It looks at
> each address space just once and calls log_sync just once per
> FlatRange.
>
> Paolo
>
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 3e4d416..23b086d 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -1154,12 +1154,11 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
>                                          hwaddr addr, uint64_t size);
>
>   /**
> - * address_space_sync_dirty_bitmap: synchronize the dirty log for all memory
> + * memory_global_dirty_log_sync: synchronize the dirty log for all memory
>    *
> - * Synchronizes the dirty page log for an entire address space.
> - * @as: the address space that contains the memory being synchronized
> + * Synchronizes the dirty page log for all address spaces.
>    */
> -void address_space_sync_dirty_bitmap(AddressSpace *as);
> +void memory_global_dirty_log_sync(void);
>
>   /**
>    * memory_region_transaction_begin: Start a transaction.
> diff --git a/memory.c b/memory.c
> index 1a1baf5..6cac674 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -158,15 +158,11 @@ static bool memory_listener_match(MemoryListener *listener,
>
>   /* No need to ref/unref .mr, the FlatRange keeps it alive.  */
>   #define MEMORY_LISTENER_UPDATE_REGION(fr, as, dir, callback, _args...)  \
> -    MEMORY_LISTENER_CALL(callback, dir, (&(MemoryRegionSection) {       \
> -        .mr = (fr)->mr,                                                 \
> -        .address_space = (as),                                          \
> -        .offset_within_region = (fr)->offset_in_region,                 \
> -        .size = (fr)->addr.size,                                        \
> -        .offset_within_address_space = int128_get64((fr)->addr.start),  \
> -        .readonly = (fr)->readonly,                                     \
> -              }), ##_args)
> -
> +    do {                                                                \
> +        MemoryRegionSection mrs = section_from_flat_range(fr, as);      \
> +        MEMORY_LISTENER_CALL(callback, dir, &mrs, ##_args);             \
> +    } while(0)
> +
>   struct CoalescedMemoryRange {
>       AddrRange addr;
>       QTAILQ_ENTRY(CoalescedMemoryRange) link;
> @@ -245,6 +241,19 @@ typedef struct AddressSpaceOps AddressSpaceOps;
>   #define FOR_EACH_FLAT_RANGE(var, view)          \
>       for (var = (view)->ranges; var < (view)->ranges + (view)->nr; ++var)
>
> +static inline MemoryRegionSection
> +section_from_flat_range(FlatRange *fr, AddressSpace *as)
> +{
> +    return (MemoryRegionSection) {
> +        .mr = fr->mr,
> +        .address_space = as,
> +        .offset_within_region = fr->offset_in_region,
> +        .size = fr->addr.size,
> +        .offset_within_address_space = int128_get64(fr->addr.start),
> +        .readonly = fr->readonly,
> +    };
> +}
> +
>   static bool flatrange_equal(FlatRange *a, FlatRange *b)
>   {
>       return a->mr == b->mr
> @@ -2124,16 +2133,25 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
>       return mr && mr != container;
>   }
>
> -void address_space_sync_dirty_bitmap(AddressSpace *as)
> +void memory_global_dirty_log_sync(void)
>   {
> +    MemoryListener *listener;
> +    AddressSpace *as;
>       FlatView *view;
>       FlatRange *fr;
>
> -    view = address_space_get_flatview(as);
> -    FOR_EACH_FLAT_RANGE(fr, view) {
> -        MEMORY_LISTENER_UPDATE_REGION(fr, as, Forward, log_sync);
> +    QTAILQ_FOREACH(listener, &memory_listeners, link) {
> +        if (!listener->log_sync || !listener->address_space_filter) {
> +            continue;
> +        }
> +        as = listener->address_space_filter;
> +        view = address_space_get_flatview(as);
> +        FOR_EACH_FLAT_RANGE(fr, view) {
> +            MemoryRegionSection mrs = section_from_flat_range(fr, as);
> +            listener->log_sync(listener, &mrs);
> +        }
> +        flatview_unref(view);
>       }
> -    flatview_unref(view);
>   }
>
>   void memory_global_dirty_log_start(void)
> diff --git a/migration/ram.c b/migration/ram.c
> index a6e1c63..e33ff5e 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -626,7 +626,7 @@ static void migration_bitmap_sync(void)
>       }
>
>       trace_migration_bitmap_sync_start();
> -    address_space_sync_dirty_bitmap(&address_space_memory);
> +    memory_global_dirty_log_sync();
>
>       qemu_mutex_lock(&migration_bitmap_mutex);
>       rcu_read_lock();
>
Fine, your patch is simpler than I thought and works correctly.

Reviewed-by: He Rongguang <herongguang.he@huawei.com>

> .
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-23  8:51       ` Herongguang (Stephen)
@ 2016-09-23  8:59         ` Paolo Bonzini
  2016-09-23  9:14           ` Herongguang (Stephen)
  0 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2016-09-23  8:59 UTC (permalink / raw)
  To: Herongguang (Stephen), qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 23/09/2016 10:51, Herongguang (Stephen) wrote:
> 
> 
> On 2016/9/23 15:17, Paolo Bonzini wrote:
>>
>>
>> On 22/09/2016 15:16, Herongguang (Stephen) wrote:
>>> I have some concerns:
>>> 1. For example, vhost does not know about as_id. I wonder whether
>>> guests in SMM can operate a disk or Ethernet card, as in
>>> that case vhost would not log dirty pages correctly, not knowing
>>> the as_id.
>>
>> In the end memory is logged by ram_addr_t, not by address space.  So if
>> vhost_sync_dirty_bitmap is called on the right region everything works.
>>
>> Guests in SMM can operate on storage devices, but storage devices cannot
>> write to 0xA0000-0xBFFFF so that's safe.
> You are right, as vhost does not have an smram address space listener. So
> is this a flaw in qemu, that virtual devices cannot write to SMRAM?

No, it's how it works in real hardware.  However, it is a (minor) bug
that vhost doesn't have the equivalent of kvm_set_phys_mem's call to
kvm_physical_sync_dirty_bitmap.
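
The missing piece would look roughly like the following in
hw/virtio/vhost.c (a sketch that assumes the flush belongs in the
region_del path, mirroring kvm_set_phys_mem; untested):

    static void vhost_region_del(MemoryListener *listener,
                                 MemoryRegionSection *section)
    {
        struct vhost_dev *dev = container_of(listener, struct vhost_dev,
                                             memory_listener);

        /* Flush this region's dirty bits before the region goes away,
         * the way kvm_set_phys_mem calls
         * kvm_physical_sync_dirty_bitmap(). */
        if (dev->log_enabled) {
            vhost_sync_dirty_bitmap(dev, section, 0x0, ~0x0ULL);
        }

        /* ... existing removal logic ... */
    }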

> Fine, your patch is simpler than I thought and works correctly.

Great!

> Reviewed-by: He Rongguang <herongguang.he@huawei.com>

Also Tested-by?

Thanks,

Paolo
> 
>> .
>>
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-23  8:59         ` Paolo Bonzini
@ 2016-09-23  9:14           ` Herongguang (Stephen)
  0 siblings, 0 replies; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-23  9:14 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 2016/9/23 16:59, Paolo Bonzini wrote:
>
>
> On 23/09/2016 10:51, Herongguang (Stephen) wrote:
>>
>>
>> On 2016/9/23 15:17, Paolo Bonzini wrote:
>>>
>>>
>>> On 22/09/2016 15:16, Herongguang (Stephen) wrote:
>>>> I have some concerns:
>>>> 1. For example, vhost does not know about as_id. I wonder whether
>>>> guests in SMM can operate a disk or Ethernet card, as in
>>>> that case vhost would not log dirty pages correctly, not knowing
>>>> the as_id.
>>>
>>> In the end memory is logged by ram_addr_t, not by address space.  So if
>>> vhost_sync_dirty_bitmap is called on the right region everything works.
>>>
>>> Guests in SMM can operate on storage devices, but storage devices cannot
>>> write to 0xA0000-0xBFFFF so that's safe.
>> You are right, as vhost does not have an smram address space listener. So
>> is this a flaw in qemu, that virtual devices cannot write to SMRAM?
>
> No, it's how it works in real hardware.  However, it is a (minor) bug
> that vhost doesn't have the equivalent of kvm_set_phys_mem's call to
> kvm_physical_sync_dirty_bitmap.
>
>> Fine, your patch is simpler than I thought and works correctly.
>
> Great!
>
>> Reviewed-by: He Rongguang <herongguang.he@huawei.com>
>
> Also Tested-by?
Yes, I will test this patch in these days.

>
> Thanks,
>
> Paolo
>>
>>> .
>>>
>>
>
> .
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-23  7:17     ` Paolo Bonzini
  2016-09-23  8:51       ` Herongguang (Stephen)
@ 2016-09-25 11:33       ` Herongguang (Stephen)
  2016-09-26  7:15         ` Paolo Bonzini
  1 sibling, 1 reply; 17+ messages in thread
From: Herongguang (Stephen) @ 2016-09-25 11:33 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 2016/9/23 15:17, Paolo Bonzini wrote:
>
>
> On 22/09/2016 15:16, Herongguang (Stephen) wrote:
>> I have some concerns:
>> 1. For example, vhost does not know about as_id. I wonder whether
>> guests in SMM can operate a disk or Ethernet card, as in
>> that case vhost would not log dirty pages correctly, not knowing
>> the as_id.
>
> In the end memory is logged by ram_addr_t, not by address space.  So if
> vhost_sync_dirty_bitmap is called on the right region everything works.
>
> Guests in SMM can operate on storage devices, but storage devices cannot
> write to 0xA0000-0xBFFFF so that's safe.
>
>> 2. If a memory region is disabled/enabled/disabled frequently, then
>> since disabled memory regions are removed from the memory slots in
>> kvm-kmod, their dirty pages would be discarded by kvm-kmod and qemu
>> while disabled, and thus lost.
>> Is my assumption correct?
>
> As you found, this is handled by kvm_set_phys_mem:
>
>          if (mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
>              kvm_physical_sync_dirty_bitmap(kml, section);
>          }
>
>> 3. I agree with your opinion that the right solution is to get
>> dirty-page information for all memory regions from
>> kvm-kmod. But I found it somewhat hard to implement, since
>> kvm_log_sync() expects a MemoryRegionSection*
>> parameter. Do you have a good idea?
>
> You're right, memory_region_sync_dirty_bitmap handles this but it's
> inefficient.
>
> So your patch is correct, but it breaks for !x86 as you probably know.
> We need to look at the address spaces which have a listener that
> implements log_sync.
>
>> As to all the RAM memory regions, I think they are all in
>> ram_list.blocks, so there is no need to create
>> a notifier. Is this correct?
>
> Yes, makes sense.
>
> The patch here is a bit of a hack but is efficient.  It looks at
> each address space just once and calls log_sync just once per
> FlatRange.
>
> Paolo
>
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 3e4d416..23b086d 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -1154,12 +1154,11 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
>                                          hwaddr addr, uint64_t size);
>
>   /**
> - * address_space_sync_dirty_bitmap: synchronize the dirty log for all memory
> + * memory_global_dirty_log_sync: synchronize the dirty log for all memory
>    *
> - * Synchronizes the dirty page log for an entire address space.
> - * @as: the address space that contains the memory being synchronized
> + * Synchronizes the dirty page log for all address spaces.
>    */
> -void address_space_sync_dirty_bitmap(AddressSpace *as);
> +void memory_global_dirty_log_sync(void);
>
>   /**
>    * memory_region_transaction_begin: Start a transaction.
> diff --git a/memory.c b/memory.c
> index 1a1baf5..6cac674 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -158,15 +158,11 @@ static bool memory_listener_match(MemoryListener *listener,
>
>   /* No need to ref/unref .mr, the FlatRange keeps it alive.  */
>   #define MEMORY_LISTENER_UPDATE_REGION(fr, as, dir, callback, _args...)  \
> -    MEMORY_LISTENER_CALL(callback, dir, (&(MemoryRegionSection) {       \
> -        .mr = (fr)->mr,                                                 \
> -        .address_space = (as),                                          \
> -        .offset_within_region = (fr)->offset_in_region,                 \
> -        .size = (fr)->addr.size,                                        \
> -        .offset_within_address_space = int128_get64((fr)->addr.start),  \
> -        .readonly = (fr)->readonly,                                     \
> -              }), ##_args)
> -
> +    do {                                                                \
> +        MemoryRegionSection mrs = section_from_flat_range(fr, as);      \
> +        MEMORY_LISTENER_CALL(callback, dir, &mrs, ##_args);             \
> +    } while(0)
> +
>   struct CoalescedMemoryRange {
>       AddrRange addr;
>       QTAILQ_ENTRY(CoalescedMemoryRange) link;
> @@ -245,6 +241,19 @@ typedef struct AddressSpaceOps AddressSpaceOps;
>   #define FOR_EACH_FLAT_RANGE(var, view)          \
>       for (var = (view)->ranges; var < (view)->ranges + (view)->nr; ++var)
>
> +static inline MemoryRegionSection
> +section_from_flat_range(FlatRange *fr, AddressSpace *as)
> +{
> +    return (MemoryRegionSection) {
> +        .mr = fr->mr,
> +        .address_space = as,
> +        .offset_within_region = fr->offset_in_region,
> +        .size = fr->addr.size,
> +        .offset_within_address_space = int128_get64(fr->addr.start),
> +        .readonly = fr->readonly,
> +    };
> +}
> +
>   static bool flatrange_equal(FlatRange *a, FlatRange *b)
>   {
>       return a->mr == b->mr
> @@ -2124,16 +2133,25 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
>       return mr && mr != container;
>   }
>
> -void address_space_sync_dirty_bitmap(AddressSpace *as)
> +void memory_global_dirty_log_sync(void)
>   {
> +    MemoryListener *listener;
> +    AddressSpace *as;
>       FlatView *view;
>       FlatRange *fr;
>
> -    view = address_space_get_flatview(as);
> -    FOR_EACH_FLAT_RANGE(fr, view) {
> -        MEMORY_LISTENER_UPDATE_REGION(fr, as, Forward, log_sync);
> +    QTAILQ_FOREACH(listener, &memory_listeners, link) {
> +        if (!listener->log_sync || !listener->address_space_filter) {
> +            continue;
> +        }
> +        as = listener->address_space_filter;
> +        view = address_space_get_flatview(as);
> +        FOR_EACH_FLAT_RANGE(fr, view) {
> +            MemoryRegionSection mrs = section_from_flat_range(fr, as);
> +            listener->log_sync(listener, &mrs);
> +        }
> +        flatview_unref(view);
>       }
> -    flatview_unref(view);
>   }
>
>   void memory_global_dirty_log_start(void)
> diff --git a/migration/ram.c b/migration/ram.c
> index a6e1c63..e33ff5e 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -626,7 +626,7 @@ static void migration_bitmap_sync(void)
>       }
>
>       trace_migration_bitmap_sync_start();
> -    address_space_sync_dirty_bitmap(&address_space_memory);
> +    memory_global_dirty_log_sync();
>
>       qemu_mutex_lock(&migration_bitmap_mutex);
>       rcu_read_lock();
>
Tested-by: He Rongguang <herongguang.he@huawei.com>

> .
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination
  2016-09-25 11:33       ` Herongguang (Stephen)
@ 2016-09-26  7:15         ` Paolo Bonzini
  0 siblings, 0 replies; 17+ messages in thread
From: Paolo Bonzini @ 2016-09-26  7:15 UTC (permalink / raw)
  To: Herongguang (Stephen), qemu-devel, quintela, amit.shah
  Cc: arei.gonglei, Huangweidong (C)



On 25/09/2016 13:33, Herongguang (Stephen) wrote:
> Tested-by: He Rongguang <herongguang.he@huawei.com>

Thanks!

Paolo

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2016-09-26  7:16 UTC | newest]

Thread overview: 17+ messages
2016-09-14  7:55 [Qemu-devel] [RFC/PATCH] migration: SMRAM dirty bitmap not fetched from kvm-kmod and not sent to destination Herongguang (Stephen)
2016-09-14  9:05 ` Paolo Bonzini
2016-09-22 13:16   ` Herongguang (Stephen)
2016-09-23  1:11     ` Herongguang (Stephen)
2016-09-23  7:17     ` Paolo Bonzini
2016-09-23  8:51       ` Herongguang (Stephen)
2016-09-23  8:59         ` Paolo Bonzini
2016-09-23  9:14           ` Herongguang (Stephen)
2016-09-25 11:33       ` Herongguang (Stephen)
2016-09-26  7:15         ` Paolo Bonzini
2016-09-22  7:51 ` [RFC/PATCH 2] kvm: x86: handle KVM_SET_VCPU_EVENTS/KVM_VCPUEVENT_VALID_SMM properly Herongguang (Stephen)
2016-09-22  7:51   ` [Qemu-devel] " Herongguang (Stephen)
2016-09-22  9:29   ` Paolo Bonzini
2016-09-22  9:29     ` [Qemu-devel] " Paolo Bonzini
2016-09-22 13:19     ` Herongguang (Stephen)
2016-09-22 13:19       ` [Qemu-devel] " Herongguang (Stephen)
2016-09-22  7:56 ` [Qemu-devel] [RFC/PATCH 3] kvm: fix events.flags (KVM_VCPUEVENT_VALID_SMM) overwritten by 0 Herongguang (Stephen)
