* [PATCH] kvm: Take into account the unaligned section size when preparing bitmap
@ 2020-12-08 11:40 Zenghui Yu
From: Zenghui Yu @ 2020-12-08 11:40 UTC (permalink / raw)
  To: qemu-devel, pbonzini; +Cc: Zenghui Yu, wanghaibin.wang, peterx

The kernel KVM_CLEAR_DIRTY_LOG interface has alignment requirements on both
the start and the size of the given range of pages. We have been careful to
handle the unaligned cases when performing CLEAR on one slot, but it seems
that we forgot to take the unaligned *size* case into account when preparing
the bitmap for the interface, and we may end up clearing dirty status for
pages outside of [start, start + size).

If the size is unaligned, let's go through the slow path and manipulate a
temporary bitmap for the interface, so that we won't be bothered by those
unaligned bits at the end of the bitmap.

I don't think this can happen in practice, since the upper layer should
provide us with the alignment guarantee, but I'm not sure kvm-all can rely
on it. This patch is mainly intended to address the correctness of the
specific algorithm used inside kvm_log_clear_one_slot().
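
For illustration only (this standalone sketch is not part of the patch; the
names PSIZE and CLEAR_ALIGN, and the sample values, are made up for the
example): with a 64-page clear granule, an unaligned size gets rounded up
before being handed to the ioctl, so the tail pages beyond the caller's
range would have their dirty state cleared as well.

    #include <stdint.h>
    #include <stdio.h>

    #define PSIZE          4096ULL                  /* assumed host page size */
    #define BITS_PER_LONG  64ULL
    #define CLEAR_ALIGN    (PSIZE * BITS_PER_LONG)  /* clear granule in bytes */

    int main(void)
    {
        uint64_t start = 0;                 /* start is 64-page aligned     */
        uint64_t size  = 100 * PSIZE;       /* size is NOT 64-page aligned  */

        uint64_t start_delta = start & (CLEAR_ALIGN - 1);
        /* Round the covered range up to the granule the kernel expects. */
        uint64_t bmap_npages = (size + start_delta + CLEAR_ALIGN - 1)
                               / CLEAR_ALIGN * BITS_PER_LONG;

        printf("pages requested: %llu, pages passed to the ioctl: %llu\n",
               (unsigned long long)(size / PSIZE),
               (unsigned long long)bmap_npages);
        /* Prints 100 vs 128: without the slow path, the dirty bits of the
         * extra 28 pages at the tail would be cleared too. */
        return 0;
    }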

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
---
 accel/kvm/kvm-all.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index bed2455ca5..05d323ba1f 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -747,7 +747,7 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start,
     assert(bmap_start % BITS_PER_LONG == 0);
     /* We should never do log_clear before log_sync */
     assert(mem->dirty_bmap);
-    if (start_delta) {
+    if (start_delta || bmap_npages - size / psize) {
         /* Slow path - we need to manipulate a temp bitmap */
         bmap_clear = bitmap_new(bmap_npages);
         bitmap_copy_with_src_offset(bmap_clear, mem->dirty_bmap,
@@ -760,7 +760,10 @@ static int kvm_log_clear_one_slot(KVMSlot *mem, int as_id, uint64_t start,
         bitmap_clear(bmap_clear, 0, start_delta);
         d.dirty_bitmap = bmap_clear;
     } else {
-        /* Fast path - start address aligns well with BITS_PER_LONG */
+        /*
+         * Fast path - both start and size align well with BITS_PER_LONG
+         * (or the end of memory slot)
+         */
         d.dirty_bitmap = mem->dirty_bmap + BIT_WORD(bmap_start);
     }
 
-- 
2.19.1




Thread overview: 16 messages
2020-12-08 11:40 [PATCH] kvm: Take into account the unaligned section size when preparing bitmap Zenghui Yu
2020-12-08 15:16 ` Peter Xu
2020-12-09  2:33   ` Zenghui Yu
2020-12-09 21:09     ` Peter Xu
2020-12-10  4:23       ` Zenghui Yu
2020-12-10  1:46     ` zhukeqian
2020-12-10  2:08       ` Peter Xu
2020-12-10  2:53         ` zhukeqian
2020-12-10  3:31           ` Zenghui Yu
2020-12-10 14:50           ` Peter Xu
2020-12-11  1:13             ` zhukeqian
2020-12-11 15:25               ` Peter Xu
2020-12-14  2:14                 ` zhukeqian
2020-12-14 15:36                   ` Peter Xu
2020-12-15  7:23                     ` zhukeqian
2020-12-15  7:39                       ` Zenghui Yu
