* [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
@ 2019-09-20 10:21 Paolo Bonzini
  2019-09-20 10:21 ` [PATCH 1/2] kvm: extract kvm_log_clear_one_slot Paolo Bonzini
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Paolo Bonzini @ 2019-09-20 10:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, peterx

A single ram_addr (representing a host-virtual address) could be aliased
to multiple guest physical addresses.  Since the KVM dirty page reporting
works on guest physical addresses, we need to clear all of the aliases
when a page is migrated, or there is a risk of losing writes to the
aliases that were not cleared.

Paolo

Paolo Bonzini (2):
  kvm: extract kvm_log_clear_one_slot
  kvm: clear dirty bitmaps from all overlapping memslots

 accel/kvm/kvm-all.c | 114 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 66 insertions(+), 48 deletions(-)

-- 
1.8.3.1




* [PATCH 1/2] kvm: extract kvm_log_clear_one_slot
  2019-09-20 10:21 [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots Paolo Bonzini
@ 2019-09-20 10:21 ` Paolo Bonzini
  2019-09-20 12:11   ` Peter Xu
  2019-09-20 10:21 ` [PATCH 2/2] kvm: clear dirty bitmaps from all overlapping memslots Paolo Bonzini
  2019-09-20 12:19 ` [PATCH 0/2] " Peter Xu
  2 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2019-09-20 10:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, peterx, qemu-stable

We may need to clear the dirty bitmap for more than one KVM memslot.
First do some code movement with no semantic change.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 accel/kvm/kvm-all.c | 102 ++++++++++++++++++++++++++++------------------------
 1 file changed, 56 insertions(+), 46 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index b09bad0..e9e6086 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -575,55 +575,13 @@ out:
 #define KVM_CLEAR_LOG_ALIGN  (qemu_real_host_page_size << KVM_CLEAR_LOG_SHIFT)
 #define KVM_CLEAR_LOG_MASK   (-KVM_CLEAR_LOG_ALIGN)
 
-/**
- * kvm_physical_log_clear - Clear the kernel's dirty bitmap for range
- *
- * NOTE: this will be a no-op if we haven't enabled manual dirty log
- * protection in the host kernel because in that case this operation
- * will be done within log_sync().
- *
- * @kml:     the kvm memory listener
- * @section: the memory range to clear dirty bitmap
- */
-static int kvm_physical_log_clear(KVMMemoryListener *kml,
-                                  MemoryRegionSection *section)
+static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start, uint64_t size)
 {
     KVMState *s = kvm_state;
+    uint64_t end, bmap_start, start_delta, bmap_npages;
     struct kvm_clear_dirty_log d;
-    uint64_t start, end, bmap_start, start_delta, bmap_npages, size;
     unsigned long *bmap_clear = NULL, psize = qemu_real_host_page_size;
-    KVMSlot *mem = NULL;
-    int ret, i;
-
-    if (!s->manual_dirty_log_protect) {
-        /* No need to do explicit clear */
-        return 0;
-    }
-
-    start = section->offset_within_address_space;
-    size = int128_get64(section->size);
-
-    if (!size) {
-        /* Nothing more we can do... */
-        return 0;
-    }
-
-    kvm_slots_lock(kml);
-
-    /* Find any possible slot that covers the section */
-    for (i = 0; i < s->nr_slots; i++) {
-        mem = &kml->slots[i];
-        if (mem->start_addr <= start &&
-            start + size <= mem->start_addr + mem->memory_size) {
-            break;
-        }
-    }
-
-    /*
-     * We should always find one memslot until this point, otherwise
-     * there could be something wrong from the upper layer
-     */
-    assert(mem && i != s->nr_slots);
+    int ret;
 
     /*
      * We need to extend either the start or the size or both to
@@ -694,7 +652,7 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
     /* It should never overflow.  If it happens, say something */
     assert(bmap_npages <= UINT32_MAX);
     d.num_pages = bmap_npages;
-    d.slot = mem->slot | (kml->as_id << 16);
+    d.slot = mem->slot | (as_id << 16);
 
     if (kvm_vm_ioctl(s, KVM_CLEAR_DIRTY_LOG, &d) == -1) {
         ret = -errno;
@@ -717,6 +675,58 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
                  size / psize);
     /* This handles the NULL case well */
     g_free(bmap_clear);
+    return ret;
+}
+
+
+/**
+ * kvm_physical_log_clear - Clear the kernel's dirty bitmap for range
+ *
+ * NOTE: this will be a no-op if we haven't enabled manual dirty log
+ * protection in the host kernel because in that case this operation
+ * will be done within log_sync().
+ *
+ * @kml:     the kvm memory listener
+ * @section: the memory range to clear dirty bitmap
+ */
+static int kvm_physical_log_clear(KVMMemoryListener *kml,
+                                  MemoryRegionSection *section)
+{
+    KVMState *s = kvm_state;
+    uint64_t start, size;
+    KVMSlot *mem = NULL;
+    int ret, i;
+
+    if (!s->manual_dirty_log_protect) {
+        /* No need to do explicit clear */
+        return 0;
+    }
+
+    start = section->offset_within_address_space;
+    size = int128_get64(section->size);
+
+    if (!size) {
+        /* Nothing more we can do... */
+        return 0;
+    }
+
+    kvm_slots_lock(kml);
+
+    /* Find any possible slot that covers the section */
+    for (i = 0; i < s->nr_slots; i++) {
+        mem = &kml->slots[i];
+        if (mem->start_addr <= start &&
+            start + size <= mem->start_addr + mem->memory_size) {
+            break;
+        }
+    }
+
+    /*
+     * We should always find one memslot until this point, otherwise
+     * there could be something wrong from the upper layer
+     */
+    assert(mem && i != s->nr_slots);
+    ret = kvm_log_clear_one_slot(mem, kml->as_id, start, size);
 
     kvm_slots_unlock(kml);
 
-- 
1.8.3.1





* [PATCH 2/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-20 10:21 [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots Paolo Bonzini
  2019-09-20 10:21 ` [PATCH 1/2] kvm: extract kvm_log_clear_one_slot Paolo Bonzini
@ 2019-09-20 10:21 ` Paolo Bonzini
  2019-09-20 12:18   ` Peter Xu
  2019-09-20 12:19 ` [PATCH 0/2] " Peter Xu
  2 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2019-09-20 10:21 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, peterx, qemu-stable

Since the KVM dirty page reporting works on guest physical addresses,
we need to clear all of the aliases when a page is migrated, or there
is a risk of losing writes to the aliases that were not cleared.

Note that this is only an issue for manual clearing of the bitmap;
if the bitmap is cleared at the same time as it's retrieved, all
the aliases get cleared correctly.
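
To make the per-slot arithmetic concrete, here is a standalone sketch of
just the (offset, count) computation used below; the FakeSlot type, the
slot layout and the printf stand-in for kvm_log_clear_one_slot() are
made up for illustration and are not part of the patch:

  /* Toy model: a section [0x1000, 0x4000) spanning two slots. */
  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  typedef struct {
      uint64_t start_addr;
      uint64_t memory_size;
  } FakeSlot;

  int main(void)
  {
      FakeSlot slots[] = {
          { 0x0000, 0x2000 },   /* slot 0: [0x0000, 0x2000) */
          { 0x2000, 0x4000 },   /* slot 1: [0x2000, 0x6000) */
      };
      uint64_t start = 0x1000, size = 0x3000;
      uint64_t offset, count;

      for (unsigned i = 0; i < sizeof(slots) / sizeof(slots[0]); i++) {
          FakeSlot *mem = &slots[i];
          /* Skip empty slots and slots that do not overlap the section. */
          if (!mem->memory_size ||
              mem->start_addr > start + size - 1 ||
              start > mem->start_addr + mem->memory_size - 1) {
              continue;
          }
          if (start >= mem->start_addr) {
              /* The section starts inside (or at the start of) this slot. */
              offset = start - mem->start_addr;
              count = MIN(mem->memory_size - offset, size);
          } else {
              /* This slot starts after the section's start. */
              offset = 0;
              count = MIN(mem->memory_size, size - (mem->start_addr - start));
          }
          /* Stand-in for kvm_log_clear_one_slot(mem, as_id, offset, count). */
          printf("slot %u: clear offset=0x%" PRIx64 " count=0x%" PRIx64 "\n",
                 i, offset, count);
      }
      return 0;
  }

This prints a clear of 0x1000 bytes at offset 0x1000 in slot 0 and of
0x2000 bytes at offset 0 in slot 1, which together cover exactly the
0x3000-byte section.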

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Fixes: ff4aa11419242c835b03d274f08f797c129ed7ba
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 accel/kvm/kvm-all.c | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index e9e6086..315a915 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -588,8 +588,8 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start, uint6
      * satisfy the KVM interface requirement.  Firstly, do the start
      * page alignment on 64 host pages
      */
-    bmap_start = (start - mem->start_addr) & KVM_CLEAR_LOG_MASK;
-    start_delta = start - mem->start_addr - bmap_start;
+    bmap_start = start & KVM_CLEAR_LOG_MASK;
+    start_delta = start - bmap_start;
     bmap_start /= psize;
 
     /*
@@ -693,8 +693,8 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
                                   MemoryRegionSection *section)
 {
     KVMState *s = kvm_state;
-    uint64_t start, size;
-    KVMSlot *mem = NULL;
+    uint64_t start, size, offset, count;
+    KVMSlot *mem;
     int ret, i;
 
     if (!s->manual_dirty_log_protect) {
@@ -712,22 +712,30 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
 
     kvm_slots_lock(kml);
 
-    /* Find any possible slot that covers the section */
     for (i = 0; i < s->nr_slots; i++) {
         mem = &kml->slots[i];
-        if (mem->start_addr <= start &&
-            start + size <= mem->start_addr + mem->memory_size) {
+        /* Discard slots that are empty or do not overlap the section */
+        if (!mem->memory_size ||
+            mem->start_addr > start + size - 1 ||
+            start > mem->start_addr + mem->memory_size - 1) {
+            continue;
+        }
+
+        if (start >= mem->start_addr) {
+            /* The slot starts before section or is aligned to it.  */
+            offset = start - mem->start_addr;
+            count = MIN(mem->memory_size - offset, size);
+        } else {
+            /* The slot starts after section.  */
+            offset = 0;
+            count = MIN(mem->memory_size, size - (mem->start_addr - start));
+        }
+        ret = kvm_log_clear_one_slot(mem, kml->as_id, offset, count);
+        if (ret < 0) {
             break;
         }
     }
 
-    /*
-     * We should always find one memslot until this point, otherwise
-     * there could be something wrong from the upper layer
-     */
-    assert(mem && i != s->nr_slots);
-    ret = kvm_log_clear_one_slot(mem, kml->as_id, start, size);
-
     kvm_slots_unlock(kml);
 
     return ret;
-- 
1.8.3.1




* Re: [PATCH 1/2] kvm: extract kvm_log_clear_one_slot
  2019-09-20 10:21 ` [PATCH 1/2] kvm: extract kvm_log_clear_one_slot Paolo Bonzini
@ 2019-09-20 12:11   ` Peter Xu
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Xu @ 2019-09-20 12:11 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-stable, qemu-devel, dgilbert

On Fri, Sep 20, 2019 at 12:21:21PM +0200, Paolo Bonzini wrote:
> We may need to clear the dirty bitmap for more than one KVM memslot.
> First do some code movement with no semantic change.
> 
> Cc: qemu-stable@nongnu.org
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [PATCH 2/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-20 10:21 ` [PATCH 2/2] kvm: clear dirty bitmaps from all overlapping memslots Paolo Bonzini
@ 2019-09-20 12:18   ` Peter Xu
  2019-09-20 14:03     ` Paolo Bonzini
  0 siblings, 1 reply; 12+ messages in thread
From: Peter Xu @ 2019-09-20 12:18 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-stable, qemu-devel, dgilbert

On Fri, Sep 20, 2019 at 12:21:22PM +0200, Paolo Bonzini wrote:
> Since the KVM dirty page reporting works on guest physical addresses,
> we need to clear all of the aliases when a page is migrated, or there
> is a risk of losing writes to the aliases that were not cleared.

The patch content looks perfect to me, though I just want to make sure
I understand the issue behind it, and the commit message...

IMHO we've got two issues to cover for log_clear():

  (1) memory region aliasing, hence multiple GPAs can point to the same
      HVA/HPA, so we need to clear the memslot dirty bits on all the
      mapped GPAs, and,

  (2) a large log_clear() request which can cover more than one valid
      kvm memslot.  Note that in this case the memslots can really
      have different HVAs, so imho it should be a different issue
      compared to (1)

The commit message says it's solving problem (1).  However, from what I
understand, we are actually doing well on issue (1), because in
memory_region_clear_dirty_bitmap() we iterate over all the flat views,
so we should have caught all the aliasing memory regions if there
are any.  However this patch should perfectly fix problem (2).  Am I
right?

Thanks,

-- 
Peter Xu



* Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-20 10:21 [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots Paolo Bonzini
  2019-09-20 10:21 ` [PATCH 1/2] kvm: extract kvm_log_clear_one_slot Paolo Bonzini
  2019-09-20 10:21 ` [PATCH 2/2] kvm: clear dirty bitmaps from all overlapping memslots Paolo Bonzini
@ 2019-09-20 12:19 ` Peter Xu
  2019-09-20 13:58   ` Igor Mammedov
  2 siblings, 1 reply; 12+ messages in thread
From: Peter Xu @ 2019-09-20 12:19 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Igor Mammedov, qemu-devel, dgilbert

On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> A single ram_addr (representing a host-virtual address) could be aliased
> to multiple guest physical addresses.  Since the KVM dirty page reporting
> works on guest physical addresses, we need to clear all of the aliases
> when a page is migrated, or there is a risk of losing writes to the
> aliases that were not cleared.

(CCing Igor too so Igor would be aware of these changes that might
 conflict with the recent memslot split work)

-- 
Peter Xu



* Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-20 12:19 ` [PATCH 0/2] " Peter Xu
@ 2019-09-20 13:58   ` Igor Mammedov
  2019-09-23  1:29     ` Peter Xu
  0 siblings, 1 reply; 12+ messages in thread
From: Igor Mammedov @ 2019-09-20 13:58 UTC (permalink / raw)
  To: Peter Xu; +Cc: Paolo Bonzini, qemu-devel, dgilbert

On Fri, 20 Sep 2019 20:19:51 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> > A single ram_addr (representing a host-virtual address) could be aliased
> > to multiple guest physical addresses.  Since the KVM dirty page reporting
> > works on guest physical addresses, we need to clear all of the aliases
> > when a page is migrated, or there is a risk of losing writes to the
> > aliases that were not cleared.  
> 
> (CCing Igor too so Igor would be aware of these changes that might
>  conflict with the recent memslot split work)
> 

Thanks Peter,
I'll rebase on top of this series and do some more testing



* Re: [PATCH 2/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-20 12:18   ` Peter Xu
@ 2019-09-20 14:03     ` Paolo Bonzini
  0 siblings, 0 replies; 12+ messages in thread
From: Paolo Bonzini @ 2019-09-20 14:03 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-stable, qemu-devel, dgilbert

On 20/09/19 14:18, Peter Xu wrote:
>   (1) memory region aliasing, hence multiple GPAs can point to the same
>       HVA/HPA, so we need to clear the memslot dirty bits on all the
>       mapped GPAs, and,
> 
>   (2) a large log_clear() request which can cover more than one valid
>       kvm memslot.  Note that in this case the memslots can really
>       have different HVAs, so imho it should be a different issue
>       compared to (1)
> 
> The commit message says it's solving problem (1).  However, from what I
> understand, we are actually doing well on issue (1), because in
> memory_region_clear_dirty_bitmap() we iterate over all the flat views,
> so we should have caught all the aliasing memory regions if there
> are any.

There could be two addresses pointing to the same HVA *in the same
flatview*.  See for example 0xe0000..0xfffff and 0xffffe000..0xffffffff
when a PC guest is started.  In this particular case
0xffffe000..0xffffffff is ROM, so it's not an issue, but in other cases
it may be an issue.
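
Concretely, at the KVM memslot level such an alias looks roughly like
the sketch below; the addresses, sizes and bios_hva value are made up
for illustration and are not the exact PC layout:

  #include <stdio.h>
  #include <stdint.h>
  #include <linux/kvm.h>

  int main(void)
  {
      uint64_t bios_hva = 0x7f0000000000ULL;   /* pretend mmap() result */

      struct kvm_userspace_memory_region low = {
          .slot            = 0,
          .guest_phys_addr = 0x000e0000,       /* low alias */
          .memory_size     = 0x20000,
          .userspace_addr  = bios_hva,
      };
      struct kvm_userspace_memory_region high = {
          .slot            = 1,
          .guest_phys_addr = 0xfffe0000,       /* high alias */
          .memory_size     = 0x20000,
          .userspace_addr  = bios_hva,         /* same host backing */
      };

      /* KVM keeps a dirty bitmap per slot (per GPA), and
       * KVM_CLEAR_DIRTY_LOG acts on one slot at a time, so clearing
       * slot 0 by itself does not touch slot 1's bitmap for the same
       * host pages -- hence all aliases need clearing. */
      printf("slot %u and slot %u share HVA 0x%llx\n",
             low.slot, high.slot, (unsigned long long)low.userspace_addr);
      return 0;
  }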

> However this patch should perfectly fix problem (2).  Am I right?

I hadn't thought of problem (2).  I guess without Igor's work for s390
it does not exist?  But yes, it fixes it just the same.

Paolo



* Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-20 13:58   ` Igor Mammedov
@ 2019-09-23  1:29     ` Peter Xu
  2019-09-23 16:15       ` Igor Mammedov
  0 siblings, 1 reply; 12+ messages in thread
From: Peter Xu @ 2019-09-23  1:29 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: Paolo Bonzini, qemu-devel, dgilbert

On Fri, Sep 20, 2019 at 03:58:51PM +0200, Igor Mammedov wrote:
> On Fri, 20 Sep 2019 20:19:51 +0800
> Peter Xu <peterx@redhat.com> wrote:
> 
> > On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> > > A single ram_addr (representing a host-virtual address) could be aliased
> > > to multiple guest physical addresses.  Since the KVM dirty page reporting
> > > works on guest physical addresses, we need to clear all of the aliases
> > > when a page is migrated, or there is a risk of losing writes to the
> > > aliases that were not cleared.  
> > 
> > (CCing Igor too so Igor would be aware of these changes that might
> >  conflict with the recent memslot split work)
> > 
> 
> Thanks Peter,
> I'll rebase on top of this series and do some more testing

Igor,

It turns out that this series is probably not required for the current
tree, because memory_region_clear_dirty_bitmap() should have handled
the aliasing issue correctly; but this patchset will then be a
prerequisite for your split series, because once we split memory slots
it becomes possible for log_clear() to be applied to multiple
KVM memslots.

Would you like to pick these two patches directly into your series?
The first paragraph of the second patch is probably inaccurate and
needs amending (as mentioned).

Thanks,

-- 
Peter Xu



* Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-23  1:29     ` Peter Xu
@ 2019-09-23 16:15       ` Igor Mammedov
  2019-09-23 16:49         ` Paolo Bonzini
  0 siblings, 1 reply; 12+ messages in thread
From: Igor Mammedov @ 2019-09-23 16:15 UTC (permalink / raw)
  To: Peter Xu; +Cc: Paolo Bonzini, qemu-devel, dgilbert

On Mon, 23 Sep 2019 09:29:46 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Fri, Sep 20, 2019 at 03:58:51PM +0200, Igor Mammedov wrote:
> > On Fri, 20 Sep 2019 20:19:51 +0800
> > Peter Xu <peterx@redhat.com> wrote:
> >   
> > > On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:  
> > > > A single ram_addr (representing a host-virtual address) could be aliased
> > > > to multiple guest physical addresses.  Since the KVM dirty page reporting
> > > > works on guest physical addresses, we need to clear all of the aliases
> > > > when a page is migrated, or there is a risk of losing writes to the
> > > > aliases that were not cleared.    
> > > 
> > > (CCing Igor too so Igor would be aware of these changes that might
> > >  conflict with the recent memslot split work)
> > >   
> > 
> > Thanks Peter,
> > I'll rebase on top of this series and do some more testing  
> 
> Igor,
> 
> It turns out that this series is probably not required for the current
> tree because memory_region_clear_dirty_bitmap() should have handled
> the aliasing issue correctly, but then this patchset will be a
> pre-requisite of your split series because when we split memory slots
> it starts to be possible that log_clear() will be applied to multiple
> kvm memslots.
> 
> Would you like to pick these two patches directly into your series?
> The 1st paragraph in the 2nd patch could probably be inaccurate and
> need amending (as mentioned).

Yep, the commit message doesn't fit the patch.  How about the
following description:
"
Currently a MemoryRegionSection has a 1:1 mapping to a KVMSlot.
However, the next patch will allow splitting a MemoryRegionSection
into several KVMSlots, so make sure that kvm_physical_log_slot_clear()
is able to handle such a 1:N mapping.
"

> 
> Thanks,
> 




* Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-23 16:15       ` Igor Mammedov
@ 2019-09-23 16:49         ` Paolo Bonzini
  2019-09-24  2:53           ` Peter Xu
  0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2019-09-23 16:49 UTC (permalink / raw)
  To: Igor Mammedov, Peter Xu; +Cc: qemu-devel, dgilbert

On 23/09/19 18:15, Igor Mammedov wrote:
> Yep, the commit message doesn't fit the patch.  How about the
> following description:
> "
> Currently a MemoryRegionSection has a 1:1 mapping to a KVMSlot.
> However, the next patch will allow splitting a MemoryRegionSection
> into several KVMSlots, so make sure that kvm_physical_log_slot_clear()
> is able to handle such a 1:N mapping.
> "

Yes, that's great.

Paolo



* Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
  2019-09-23 16:49         ` Paolo Bonzini
@ 2019-09-24  2:53           ` Peter Xu
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Xu @ 2019-09-24  2:53 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Igor Mammedov, qemu-devel, dgilbert

On Mon, Sep 23, 2019 at 06:49:12PM +0200, Paolo Bonzini wrote:
> On 23/09/19 18:15, Igor Mammedov wrote:
> > Yep, commit message doesn't fit patch, how about following description:
> > "
> > Currently MemoryRegionSection has 1:1 mapping to KVMSlot.
> > However next patch will allow splitting MemoryRegionSection into
> > several KVMSlot-s, make sure that kvm_physical_log_slot_clear()
> > is able to handle such 1:N mapping.
> > "
> 
> Yes, that's great.

Please feel free to add my r-b directly on patch 2 with that amended.

Thanks,

-- 
Peter Xu


