On 02.08.19 20:58, Vladimir Sementsov-Ogievskiy wrote:
> hbitmap_reset is broken: it rounds up the requested region, which leads
> to the following bug, shown by the fixed test:
>
> assume granularity = 2
> set(0, 3)   # count becomes 4
> reset(0, 1) # count becomes 2
>
> But a user of the interface assumes that virtual bit 1 should still be
> dirty, so hbitmap should report a count of 4!
>
> In other words, because of granularity, when we set one "virtual" bit,
> yes, we make all "virtual" bits in the same chunk dirty. But this
> should not be so for reset.
>
> Fix this by aligning the bounds correctly.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
>
> Hi all!
>
> Hmm, is it a bug or a feature? :)
> I don't have a test for mirror yet, but I think that sync mirror may be
> broken because of this, as do_sync_target_write() seems to use an
> unaligned reset.
>
>  tests/test-hbitmap.c |  2 +-
>  util/hbitmap.c       | 24 +++++++++++++++++++-----
>  2 files changed, 20 insertions(+), 6 deletions(-)
>
> diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
> index 592d8219db..0008025a9f 100644
> --- a/tests/test-hbitmap.c
> +++ b/tests/test-hbitmap.c
> @@ -424,7 +424,7 @@ static void test_hbitmap_granularity(TestHBitmapData *data,
>      hbitmap_test_set(data, 0, 3);
>      g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
>      hbitmap_test_reset(data, 0, 1);
> -    g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
> +    g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
>  }
>
>  static void test_hbitmap_iter_granularity(TestHBitmapData *data,
> diff --git a/util/hbitmap.c b/util/hbitmap.c
> index 7905212a8b..61a813994a 100644
> --- a/util/hbitmap.c
> +++ b/util/hbitmap.c
> @@ -473,15 +473,29 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>  {
>      /* Compute range in the last layer.  */
>      uint64_t first;
> -    uint64_t last = start + count - 1;
> +    uint64_t last;
> +    uint64_t end = start + count;
> +    uint64_t gran = UINT64_C(1) << hb->granularity;
>
> -    trace_hbitmap_reset(hb, start, count,
> -                        start >> hb->granularity, last >> hb->granularity);
> +    /*
> +     * We should clear only bits fully covered by the requested region.
> +     * Otherwise we may clear something that is actually still dirty.
> +     */
> +    first = DIV_ROUND_UP(start, gran);
>
> -    first = start >> hb->granularity;
> -    last >>= hb->granularity;
> +    if (end == hb->orig_size) {

This should be “>=”.  There are callers that don’t make sure that
start + count <= hb->orig_size (e.g. the backup job just calls it with
multiples of cluster_size, which may or may not end up at the image
end; and hbitmap_truncate() just uses “hb->size << hb->granularity” as
the end, which is arguably not ideal, but that’s how it is).

Max

> +        end = DIV_ROUND_UP(end, gran);
> +    } else {
> +        end = end >> hb->granularity;
> +    }
> +    if (end <= first) {
> +        return;
> +    }
> +    last = end - 1;
>      assert(last < hb->size);
>
> +    trace_hbitmap_reset(hb, start, count, first, last);
> +
>      hb->count -= hb_count_between(hb, first, last);
>      if (hb_reset_between(hb, HBITMAP_LEVELS - 1, first, last) &&
>          hb->meta) {
>
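To make the semantics concrete, here is a minimal standalone sketch of
the alignment rule under discussion (hypothetical helper names, plain C
rather than QEMU code; plain division stands in for the shift by
hb->granularity). It rounds the start up and the end down, except that
a request reaching orig_size covers the short final granule and so
rounds the end up, per the ">=" point above:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Round x up to the next multiple of the power-of-two granule size. */
static uint64_t gran_round_up(uint64_t x, uint64_t gran)
{
    return (x + gran - 1) / gran;
}

/*
 * Hypothetical helper: compute the half-open granule range
 * [*first, *end) that is fully covered by the byte range
 * [start, start + count).  Returns false if no granule is fully
 * covered, i.e. there is nothing that may safely be cleared.
 */
static bool covered_granules(uint64_t start, uint64_t count,
                             uint64_t gran, uint64_t orig_size,
                             uint64_t *first, uint64_t *end)
{
    uint64_t limit = start + count;

    *first = gran_round_up(start, gran);
    if (limit >= orig_size) {
        /*
         * The final granule may extend past orig_size; a request
         * reaching the end of the bitmap covers it entirely (">=",
         * not "==", since callers may pass start + count beyond
         * orig_size).
         */
        *end = gran_round_up(limit, gran);
    } else {
        *end = limit / gran;
    }
    return *end > *first;
}

int main(void)
{
    uint64_t first, end;

    /* gran = 2, orig_size = 10: reset(0, 1) covers no whole granule,
     * so virtual bit 1 stays dirty and the count stays 4. */
    assert(!covered_granules(0, 1, 2, 10, &first, &end));

    /* reset(0, 2) fully covers granule 0 and may clear it. */
    assert(covered_granules(0, 2, 2, 10, &first, &end));
    assert(first == 0 && end == 1);

    /* reset(8, 2) reaches orig_size, so the final granule counts. */
    assert(covered_granules(8, 2, 2, 10, &first, &end));
    assert(first == 4 && end == 5);

    return 0;
}

The half-open [first, end) form makes the empty case a simple
end <= first check, which is what the patch's early return does before
computing last = end - 1.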