From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, stefanha@linux.vnet.ibm.com, lcapitulino@redhat.com
Subject: [Qemu-devel] [RFC PATCH 30/36] add hierarchical bitmap data type and test cases
Date: Fri, 15 Jun 2012 17:05:53 +0200
Message-ID: <1339772759-31004-31-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1339772759-31004-1-git-send-email-pbonzini@redhat.com>

HBitmap provides an array of bits.  The bits are stored as usual in an
array of unsigned longs, but HBitmap is also optimized to provide fast
iteration over set bits; going from one bit to the next is O(log log n)
worst case, and since the maximum size is bounded, the number of levels
is in fact fixed.

In order to do this, it stacks multiple bitmaps with progressively coarser
granularity; in all levels except the last, bit N is set iff the N-th
unsigned long of the level immediately below is nonzero.  When iteration
exhausts a word on the last level, it can examine the second-to-last level
to quickly skip entire words, and even do so recursively to skip blocks of
64 words or powers thereof (32 on 32-bit machines).
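
For example, with 64-bit words: if bits 0 and 130 are set in the last
level, then bits 0 and 2 are set in the level above it (words 0 and 2 of
the last level are nonzero), and so on up the stack.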

Given an index into the bitmap, it can be split into groups of bits like
this (for the 64-bit case):

     bits 0-57 => word in the last bitmap     | bits 58-63 => bit in the word
     bits 0-51 => word in the 2nd-last bitmap | bits 52-57 => bit in the word
     bits 0-45 => word in the 3rd-last bitmap | bits 46-51 => bit in the word

So it is easy to move up simply by shifting the index right by
log2(BITS_PER_LONG) bits.  To move down, you shift the index left
similarly, and add the word index within the group.  Iteration uses
ffs (find first set bit) to find the next word to examine; this
operation can be done in constant time on most current architectures.
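
For illustration only (these helpers are not part of the patch), the
up/down movement can be sketched in C, assuming BITS_PER_LEVEL equals
log2(BITS_PER_LONG):

    /* Hypothetical sketch, not defined anywhere in this series.  */
    static inline size_t hb_parent(size_t i)
    {
        return i >> BITS_PER_LEVEL;           /* drop the in-word bit index */
    }

    static inline size_t hb_child(size_t i, unsigned bit)
    {
        return (i << BITS_PER_LEVEL) + bit;   /* re-add the in-word offset */
    }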

Setting or clearing a range of m bits on all levels requires
O(m + m/W + m/W^2 + ...) operations, where W is BITS_PER_LONG; this is
O(m), just as on a regular bitmap.

When iterating on a bitmap, each bit (on any level) is visited at most
once.  Hence, the total cost of visiting a bitmap with m set bits is the
number of bits that are set across all levels.  Unless the bitmap is
extremely sparse, this is also O(m + m/W + m/W^2 + ...), so the amortized
cost of advancing from one bit to the next is usually constant.
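
As a usage sketch of the API added by this patch (sizes and positions
are arbitrary examples):

    HBitmap *hb = hbitmap_alloc(1 << 20, 0);   /* 1M items, granularity 0 */
    HBitmapIter hbi;
    int64_t next;

    hbitmap_set(hb, 100, 50);                  /* mark items 100..149 */
    hbitmap_iter_init(&hbi, hb, 0);            /* iterate from item 0 */
    while ((next = hbitmap_iter_next(&hbi)) >= 0) {
        /* next visits the set items in order: 100, 101, ..., 149 */
    }
    hbitmap_free(hb);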

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hbitmap.c            |  371 ++++++++++++++++++++++++++++++++++++++++++++++++++
 hbitmap.h            |   51 +++++++
 tests/Makefile       |    2 +
 tests/test-hbitmap.c |  369 +++++++++++++++++++++++++++++++++++++++++++++++++
 trace-events         |    5 +
 5 files changed, 798 insertions(+)
 create mode 100644 hbitmap.c
 create mode 100644 hbitmap.h
 create mode 100644 tests/test-hbitmap.c

diff --git a/hbitmap.c b/hbitmap.c
new file mode 100644
index 0000000..be1a852
--- /dev/null
+++ b/hbitmap.c
@@ -0,0 +1,371 @@
+/*
+ * Hierarchical Bitmap Data Type
+ *
+ * Copyright Red Hat, Inc., 2012
+ *
+ * Author: Paolo Bonzini <pbonzini@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#include "osdep.h"
+#include "hbitmap.h"
+#include "host-utils.h"
+#include "trace.h"
+#include <string.h>
+#include <glib.h>
+#include <assert.h>
+
+/* HBitmap provides an array of bits.  The bits are stored as usual in an
+ * array of unsigned longs, but HBitmap is also optimized to provide fast
+ * iteration over set bits; going from one bit to the next is O(log log n)
+ * worst case, and since the maximum size is bounded, the number of levels
+ * is in fact fixed.
+ *
+ * In order to do this, it stacks multiple bitmaps with progressively coarser
+ * granularity; in all levels except the last, bit N is set iff the N-th
+ * unsigned long of the level immediately below is nonzero.  When iteration
+ * exhausts a word on the last level, it can examine the second-to-last level
+ * to quickly skip entire words, and even do so recursively to skip blocks of
+ * 64 words or powers thereof (32 on 32-bit machines).
+ *
+ * Given an index into the bitmap, it can be split into groups of bits like
+ * this (for the 64-bit case):
+ *
+ *   bits 0-57 => word in the last bitmap     | bits 58-63 => bit in the word
+ *   bits 0-51 => word in the 2nd-last bitmap | bits 52-57 => bit in the word
+ *   bits 0-45 => word in the 3rd-last bitmap | bits 46-51 => bit in the word
+ *
+ * So it is easy to move up simply by shifting the index right by
+ * log2(BITS_PER_LONG) bits.  To move down, you shift the index left
+ * similarly, and add the word index within the group.  Iteration uses
+ * ffs (find first set bit) to find the next word to examine; this
+ * operation can be done in constant time on most current architectures.
+ *
+ * Setting or clearing a range of m bits on all levels requires
+ * O(m + m/W + m/W^2 + ...) operations, where W is BITS_PER_LONG; this is
+ * O(m), just as on a regular bitmap.
+ *
+ * When iterating on a bitmap, each bit (on any level) is visited at most
+ * once.  Hence, the total cost of visiting a bitmap with m set bits is the
+ * number of bits that are set across all levels.  Unless the bitmap is
+ * extremely sparse, this is also O(m + m/W + m/W^2 + ...), so the amortized
+ * cost of advancing from one bit to the next is usually constant (worst case
+ * O(log log n) as in the non-amortized complexity).
+ */
+
+struct HBitmap {
+    /* Number of bits in the bottom level.  */
+    uint64_t size;
+    /* Number of set bits in the bottom level.  */
+    uint64_t count;
+    /* One bit of the bitmap covers 1 << granularity items.  */
+    int granularity;
+    /* Coarsest-to-finest bitmaps; levels[HBITMAP_LEVELS - 1] holds the
+     * actual bits.
+     */
+    unsigned long *levels[HBITMAP_LEVELS];
+};
+
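+/* Advance the iterator and return the position, in bottom-level bits, of
+ * the next set bit; return -1 if iteration is over.
+ */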
+static int64_t hbi_next_internal(HBitmapIter *hbi)
+{
+    unsigned long cur = hbi->cur[HBITMAP_LEVELS - 1];
+    size_t pos = hbi->pos;
+
+    if (cur == 0) {
+        HBitmap *hb = hbi->hb;
+        int i = HBITMAP_LEVELS - 1;
+
+        do {
+            cur = hbi->cur[--i];
+            pos >>= BITS_PER_LEVEL;
+        } while (cur == 0);
+
+        /* Check for end of iteration.  We only use up to
+         * BITS_PER_LEVEL bits (actually less) in the level 0 bitmap,
+         * and a sentinel is placed in hbitmap_alloc that ends the
+         * above loop.
+         */
+
+        if (i == 0 && (cur & (BITS_PER_LONG - 1)) == 0) {
+            return -1;
+        }
+        for (; i < HBITMAP_LEVELS - 1; i++) {
+            /* Find the least significant set bit in the word and use
+             * it to add back the shifted-out bits to pos.
+             */
+            pos = (pos << BITS_PER_LEVEL) + ffsl(cur) - 1;
+            hbi->cur[i] = cur & (cur - 1);
+
+            /* Set up next level for iteration.  */
+            cur = hb->levels[i + 1][pos];
+        }
+
+        hbi->pos = pos;
+        hbi->cur[HBITMAP_LEVELS - 1] = cur & (cur - 1);
+    } else {
+        hbi->cur[HBITMAP_LEVELS - 1] &= cur - 1;
+    }
+    return ((uint64_t)pos << BITS_PER_LEVEL) + ffsl(cur) - 1;
+}
+
+static inline int popcountl(unsigned long l)
+{
+    return BITS_PER_LONG == 32 ? ctpop32(l) : ctpop64(l);
+}
+
+static int hbi_count_towards(HBitmapIter *hbi, uint64_t last)
+{
+    uint64_t next = hbi_next_internal(hbi);
+    int n;
+
+    /* Take it easy with the last few bits; an end-of-iteration return
+     * of -1 wraps to UINT64_MAX here and compares greater than last.
+     */
+    if (next >= (last & -BITS_PER_LONG)) {
+        return (next > last ? 0 : 1);
+    }
+
+    /* Process one word at a time; hbi_next_internal takes care of
+     * skipping large all-zero blocks.  Add one to account for the
+     * value that was just returned.
+     */
+    n = popcountl(hbi->cur[HBITMAP_LEVELS - 1]) + 1;
+    hbi->cur[HBITMAP_LEVELS - 1] = 0;
+    return n;
+}
+
+int64_t hbitmap_iter_next(HBitmapIter *hbi)
+{
+    int64_t next = hbi_next_internal(hbi);
+    trace_hbitmap_iter_next(hbi->hb, hbi, next << hbi->granularity, next);
+
+    return next << hbi->granularity;
+}
+
+void hbitmap_iter_init(HBitmapIter *hbi, HBitmap *hb, uint64_t first)
+{
+    int i, bit;
+    size_t pos;
+
+    hbi->hb = hb;
+    pos = first;
+    for (i = HBITMAP_LEVELS; --i >= 0; ) {
+        bit = pos & (BITS_PER_LONG - 1);
+        pos >>= BITS_PER_LEVEL;
+
+        /* Drop bits representing items before first.  */
+        hbi->cur[i] = hb->levels[i][pos] & ~((1UL << bit) - 1);
+
+        /* We have already added level i+1, so the lowest set bit has
+         * been processed.  Clear it.
+         */
+        if (i != HBITMAP_LEVELS - 1) {
+            hbi->cur[i] &= ~(1UL << bit);
+        }
+    }
+
+    hbi->pos = first >> BITS_PER_LEVEL;
+    hbi->granularity = hb->granularity;
+}
+
+bool hbitmap_empty(HBitmap *hb)
+{
+    return hb->count == 0;
+}
+
+uint64_t hbitmap_count(HBitmap *hb)
+{
+    return hb->count << hb->granularity;
+}
+
+/* Count the number of set bits between start and end, not accounting for
+ * the granularity.
+ */
+static uint64_t hb_count_between(HBitmap *hb, uint64_t start, uint64_t end)
+{
+    HBitmapIter hbi;
+    uint64_t count = 0, more;
+
+    hbitmap_iter_init(&hbi, hb, start);
+    do {
+        more = hbi_count_towards(&hbi, end);
+        count += more;
+    } while (more > 0);
+    return count;
+}
+
+/* Setting starts at the last layer and propagates up if an element
+ * changes from zero to non-zero.
+ */
+static inline bool hb_set_elem(unsigned long *elem, uint64_t start, uint64_t end)
+{
+    unsigned long mask;
+    bool changed;
+
+    assert((end & -BITS_PER_LONG) == (start & -BITS_PER_LONG));
+
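+    /* The mask is (2UL << end_bit) - (1UL << start_bit).  When end_bit
+     * is BITS_PER_LONG - 1, "2UL << end_bit" wraps to zero and the
+     * subtraction still yields a mask covering [start_bit, end_bit].
+     */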
+    mask = 2UL << (end & (BITS_PER_LONG - 1));
+    mask -= 1UL << (start & (BITS_PER_LONG - 1));
+    changed = (*elem == 0);
+    *elem |= mask;
+    return changed;
+}
+
+/* The recursive workhorse... */
+static void hb_set_between(HBitmap *hb, int level, uint64_t start, uint64_t end)
+{
+    size_t pos = start >> BITS_PER_LEVEL;
+    size_t endpos = end >> BITS_PER_LEVEL;
+    bool changed = false;
+    size_t i;
+
+    i = pos;
+    if (i < endpos) {
+        uint64_t next = (start | (BITS_PER_LONG - 1)) + 1;
+        changed |= hb_set_elem(&hb->levels[level][i], start, next - 1);
+        for (;;) {
+            start = next;
+            next += BITS_PER_LONG;
+            if (++i == endpos) {
+                break;
+            }
+            changed |= (hb->levels[level][i] == 0);
+            hb->levels[level][i] = ~0UL;
+        }
+    }
+    changed |= hb_set_elem(&hb->levels[level][i], start, end);
+
+    /* If there was any change in this layer, we may have to update
+     * the one above.
+     */
+    if (level > 0 && changed) {
+        return hb_set_between(hb, level - 1, pos, endpos);
+    }
+}
+
+void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count)
+{
+    /* Compute range in the last layer.  */
+    uint64_t last = start + count - 1;
+
+    trace_hbitmap_set(hb, start, count,
+                      start >> hb->granularity, last >> hb->granularity);
+
+    start >>= hb->granularity;
+    last >>= hb->granularity;
+    count = last - start + 1;
+
+    hb->count += count - hb_count_between(hb, start, last);
+    hb_set_between(hb, HBITMAP_LEVELS - 1, start, last);
+}
+
+/* Resetting works the other way round: propagate up if the new
+ * value is zero.
+ */
+static inline bool hb_reset_elem(unsigned long *elem, uint64_t start, uint64_t end)
+{
+    unsigned long mask;
+    bool blanked;
+
+    assert((end & -BITS_PER_LONG) == (start & -BITS_PER_LONG));
+
+    mask = 2UL << (end & (BITS_PER_LONG - 1));
+    mask -= 1UL << (start & (BITS_PER_LONG - 1));
+    blanked = *elem != 0 && ((*elem & ~mask) == 0);
+    *elem &= ~mask;
+    return blanked;
+}
+
+/* The recursive workhorse... */
+static void hb_reset_between(HBitmap *hb, int level, uint64_t start, uint64_t end)
+{
+    size_t pos = start >> BITS_PER_LEVEL;
+    size_t endpos = end >> BITS_PER_LEVEL;
+    bool changed = false;
+    size_t i;
+
+    i = pos;
+    if (i < endpos) {
+        uint64_t next = (start | (BITS_PER_LONG - 1)) + 1;
+
+        /* Here we need a more complex test than when setting bits.  Even if
+         * something was changed, we must not blank bits in the upper level
+         * unless the lower-level word became entirely zero.  So, remove pos
+         * from the upper-level range if bits remain set.
+         */
+        if (hb_reset_elem(&hb->levels[level][i], start, next - 1)) {
+            changed = true;
+        } else {
+            pos++;
+        }
+
+        for (;;) {
+            start = next;
+            next += BITS_PER_LONG;
+            if (++i == endpos) {
+                break;
+            }
+            changed |= (hb->levels[level][i] != 0);
+            hb->levels[level][i] = 0UL;
+        }
+    }
+
+    /* Same as above, this time for endpos.  */
+    if (hb_reset_elem(&hb->levels[level][i], start, end)) {
+        changed = true;
+    } else {
+        endpos--;
+    }
+
+    if (level > 0 && changed) {
+        return hb_reset_between(hb, level - 1, pos, endpos);
+    }
+}
+
+void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
+{
+    /* Compute range in the last layer.  */
+    uint64_t last = start + count - 1;
+
+    trace_hbitmap_reset(hb, start, count,
+                        start >> hb->granularity, last >> hb->granularity);
+
+    start >>= hb->granularity;
+    last >>= hb->granularity;
+
+    hb->count -= hb_count_between(hb, start, last);
+    hb_reset_between(hb, HBITMAP_LEVELS - 1, start, last);
+}
+
+bool hbitmap_get(HBitmap *hb, uint64_t item)
+{
+    /* Compute position and bit in the last layer.  */
+    uint64_t pos = item >> hb->granularity;
+    unsigned long bit = 1UL << (pos & (BITS_PER_LONG - 1));
+
+    return (hb->levels[HBITMAP_LEVELS - 1][pos >> BITS_PER_LEVEL] & bit) != 0;
+}
+
+void hbitmap_free(HBitmap *hb)
+{
+    int i;
+    for (i = HBITMAP_LEVELS; --i >= 0; ) {
+        g_free(hb->levels[i]);
+    }
+    g_free(hb);
+}
+
+HBitmap *hbitmap_alloc(uint64_t size, int granularity)
+{
+    HBitmap *hb = g_malloc0(sizeof (struct HBitmap));
+    int i;
+
+    size = (size + (1 << granularity) - 1) >> granularity;
+    assert(size <= ((uint64_t)1 << HBITMAP_LOG_MAX_SIZE));
+
+    hb->size = size;
+    hb->granularity = granularity;
+    for (i = HBITMAP_LEVELS; --i >= 0; ) {
+        size = MAX((size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
+        hb->levels[i] = g_malloc0(size * sizeof(unsigned long));
+    }
+
+    /* Add a sentinel in the level 0 bitmap.  We only use up to
+     * BITS_PER_LEVEL bits in level 0, so it's safe.
+     */
+    assert(size == 1);
+    hb->levels[0][0] |= 1UL << (BITS_PER_LONG - 1);
+    return hb;
+}
diff --git a/hbitmap.h b/hbitmap.h
new file mode 100644
index 0000000..2b717b0
--- /dev/null
+++ b/hbitmap.h
@@ -0,0 +1,51 @@
+/*
+ * Hierarchical Bitmap Data Type
+ *
+ * Copyright Red Hat, Inc., 2012
+ *
+ * Author: Paolo Bonzini <pbonzini@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+
+#ifndef HBITMAP_H
+#define HBITMAP_H 1
+
+#include <limits.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include "bitops.h"
+
+typedef struct HBitmap HBitmap;
+typedef struct HBitmapIter HBitmapIter;
+
+#define BITS_PER_LEVEL         (BITS_PER_LONG == 32 ? 5 : 6)
+
+/* For 32-bit, the largest that fits in a 4 GiB address space.
+ * For 64-bit, the number of sectors in 1 PiB.  Good luck, in
+ * either case... :)
+ */
+#define HBITMAP_LOG_MAX_SIZE   (BITS_PER_LONG == 32 ? 34 : 41)
+
+/* Leave an extra bit for a sentinel.  */
+#define HBITMAP_LEVELS         ((HBITMAP_LOG_MAX_SIZE / BITS_PER_LEVEL) + 1)
+
+struct HBitmapIter {
+    HBitmap *hb;
+    size_t pos;
+    int granularity;
+    unsigned long cur[HBITMAP_LEVELS];
+};
+
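+/* A bitmap of M items allocates one bit per 1 << granularity items;
+ * hbitmap_set, hbitmap_reset and hbitmap_get take item positions and
+ * apply the scaling internally.
+ */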
+int64_t hbitmap_iter_next(HBitmapIter *hbi);
+void hbitmap_iter_init(HBitmapIter *hbi, HBitmap *hb, uint64_t first);
+bool hbitmap_empty(HBitmap *hb);
+uint64_t hbitmap_count(HBitmap *hb);
+void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
+void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
+bool hbitmap_get(HBitmap *hb, uint64_t item);
+void hbitmap_free(HBitmap *hb);
+HBitmap *hbitmap_alloc(uint64_t size, int granularity);
+
+#endif
diff --git a/tests/Makefile b/tests/Makefile
index d66ab19..3932b68 100644
--- a/tests/Makefile
+++ b/tests/Makefile
@@ -14,6 +14,7 @@ check-unit-y += tests/test-string-input-visitor$(EXESUF)
 check-unit-y += tests/test-string-output-visitor$(EXESUF)
 check-unit-y += tests/test-coroutine$(EXESUF)
 check-unit-y += tests/test-visitor-serialization$(EXESUF)
+check-unit-y += tests/test-hbitmap$(EXESUF)
 
 check-block-$(CONFIG_POSIX) += tests/qemu-iotests-quick.sh
 
@@ -47,6 +48,7 @@ tests/check-qlist$(EXESUF): tests/check-qlist.o qlist.o qint.o $(tools-obj-y)
 tests/check-qfloat$(EXESUF): tests/check-qfloat.o qfloat.o $(tools-obj-y)
 tests/check-qjson$(EXESUF): tests/check-qjson.o $(qobject-obj-y) $(tools-obj-y)
 tests/test-coroutine$(EXESUF): tests/test-coroutine.o $(coroutine-obj-y) $(tools-obj-y)
+tests/test-hbitmap$(EXESUF): tests/test-hbitmap.o hbitmap.o $(trace-obj-y)
 
 tests/test-qapi-types.c tests/test-qapi-types.h :\
 $(SRC_PATH)/qapi-schema-test.json $(SRC_PATH)/scripts/qapi-types.py
diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
new file mode 100644
index 0000000..450fba3
--- /dev/null
+++ b/tests/test-hbitmap.c
@@ -0,0 +1,369 @@
+/*
+ * Hierarchical bitmap unit-tests.
+ *
+ * Copyright (C) 2012 Red Hat Inc.
+ *
+ * Author: Paolo Bonzini <pbonzini@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include <glib.h>
+#include <stdarg.h>
+#include "hbitmap.h"
+
+#define LOG_BITS_PER_LONG          (BITS_PER_LONG == 32 ? 5 : 6)
+
+#define L1                         BITS_PER_LONG
+#define L2                         (BITS_PER_LONG * L1)
+#define L3                         (BITS_PER_LONG * L2)
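+
+/* L1 is the number of bits covered by one word of the bottom level of the
+ * hierarchy, L2 by one word of the level above it, and L3 by one word of
+ * the level above that.
+ */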
+
+typedef struct TestHBitmapData {
+    HBitmap       *hb;
+    unsigned long *bits;
+    size_t         size;
+    int            granularity;
+} TestHBitmapData;
+
+
+/* Check that the HBitmap and the shadow bitmap contain the same data,
+ * ignoring the bits before position "first".
+ */
+static void hbitmap_test_check(TestHBitmapData *data,
+                               uint64_t first)
+{
+    uint64_t count = 0;
+    size_t pos;
+    int bit;
+    HBitmapIter hbi;
+    int64_t next;
+
+    hbitmap_iter_init(&hbi, data->hb, first);
+
+    for (;;) {
+        next = hbitmap_iter_next(&hbi);
+        if (next < 0) {
+            next = data->size;
+        }
+
+        while (first < next) {
+            pos = first >> LOG_BITS_PER_LONG;
+            bit = first & (BITS_PER_LONG - 1);
+            first++;
+            g_assert_cmpint(data->bits[pos] & (1UL << bit), ==, 0);
+        }
+
+        if (next == data->size) {
+            break;
+        }
+
+        pos = first >> LOG_BITS_PER_LONG;
+        bit = first & (BITS_PER_LONG - 1);
+        first++;
+        count++;
+        g_assert_cmpint(data->bits[pos] & (1UL << bit), !=, 0);
+    }
+
+    if (first == 0) {
+        g_assert_cmpint(count * data->granularity, ==, hbitmap_count(data->hb));
+    }
+}
+
+/* This is provided instead of a test setup function so that the sizes
+ * are kept in the test functions (and not in main()).  */
+static void hbitmap_test_init(TestHBitmapData *data,
+                              uint64_t size, int granularity)
+{
+    size_t n;
+    data->hb = hbitmap_alloc(size, granularity);
+
+    n = (size + BITS_PER_LONG - 1) / BITS_PER_LONG;
+    if (n == 0) {
+        n = 1;
+    }
+    data->bits = g_new0(unsigned long, n);
+    data->size = size;
+    data->granularity = granularity;
+    hbitmap_test_check(data, 0);
+}
+
+static void hbitmap_test_teardown(TestHBitmapData *data,
+                                  const void *unused)
+{
+    if (data->hb) {
+        hbitmap_free(data->hb);
+        data->hb = NULL;
+    }
+    if (data->bits) {
+        g_free(data->bits);
+        data->bits = NULL;
+    }
+}
+
+/* Set a range in the HBitmap and in the shadow "simple" bitmap.
+ * The two bitmaps are then tested against each other.
+ */
+static void hbitmap_test_set(TestHBitmapData *data,
+                             uint64_t first, uint64_t count)
+{
+    hbitmap_set(data->hb, first, count);
+    while (count-- != 0) {
+        size_t pos = first >> LOG_BITS_PER_LONG;
+        int bit = first & (BITS_PER_LONG - 1);
+        first++;
+
+        data->bits[pos] |= 1UL << bit;
+    }
+
+    if (data->granularity == 0) {
+        hbitmap_test_check(data, 0);
+    }
+}
+
+/* Reset a range in the HBitmap and in the shadow "simple" bitmap.
+ */
+static void hbitmap_test_reset(TestHBitmapData *data,
+                               uint64_t first, uint64_t count)
+{
+    hbitmap_reset(data->hb, first, count);
+    while (count-- != 0) {
+        size_t pos = first >> LOG_BITS_PER_LONG;
+        int bit = first & (BITS_PER_LONG - 1);
+        first++;
+
+        data->bits[pos] &= ~(1UL << bit);
+    }
+
+    if (data->granularity == 0) {
+        hbitmap_test_check(data, 0);
+    }
+}
+
+static void hbitmap_test_check_get(TestHBitmapData *data)
+{
+    uint64_t count = 0;
+    uint64_t i;
+
+    for (i = 0; i < data->size; i++) {
+        size_t pos = i >> LOG_BITS_PER_LONG;
+        int bit = i & (BITS_PER_LONG - 1);
+        unsigned long val = data->bits[pos] & (1UL << bit);
+        count += hbitmap_get(data->hb, i);
+        g_assert_cmpint(hbitmap_get(data->hb, i), ==, val != 0);
+    }
+    g_assert_cmpint(count, ==, hbitmap_count(data->hb));
+}
+
+static void test_hbitmap_zero(TestHBitmapData *data,
+                               const void *unused)
+{
+    hbitmap_test_init(data, 0, 0);
+}
+
+static void test_hbitmap_unaligned(TestHBitmapData *data,
+                                   const void *unused)
+{
+    hbitmap_test_init(data, L3 + 23, 0);
+    hbitmap_test_set(data, 0, 1);
+    hbitmap_test_set(data, L3 + 22, 1);
+}
+
+static void test_hbitmap_iter_empty(TestHBitmapData *data,
+                                    const void *unused)
+{
+    hbitmap_test_init(data, L1, 0);
+}
+
+static void test_hbitmap_iter_partial(TestHBitmapData *data,
+                                      const void *unused)
+{
+    hbitmap_test_init(data, L3, 0);
+    hbitmap_test_set(data, 0, L3);
+    hbitmap_test_check(data, 1);
+    hbitmap_test_check(data, L1 - 1);
+    hbitmap_test_check(data, L1);
+    hbitmap_test_check(data, L1 * 2 - 1);
+    hbitmap_test_check(data, L2 - 1);
+    hbitmap_test_check(data, L2);
+    hbitmap_test_check(data, L2 + 1);
+    hbitmap_test_check(data, L2 + L1);
+    hbitmap_test_check(data, L2 + L1 * 2 - 1);
+    hbitmap_test_check(data, L2 * 2 - 1);
+    hbitmap_test_check(data, L2 * 2);
+    hbitmap_test_check(data, L2 * 2 + 1);
+    hbitmap_test_check(data, L2 * 2 + L1);
+    hbitmap_test_check(data, L2 * 2 + L1 * 2 - 1);
+    hbitmap_test_check(data, L3 / 2);
+}
+
+static void test_hbitmap_iter_past(TestHBitmapData *data,
+                                    const void *unused)
+{
+    hbitmap_test_init(data, L3, 0);
+    hbitmap_test_set(data, 0, L3);
+    hbitmap_test_check(data, L3);
+}
+
+static void test_hbitmap_set_all(TestHBitmapData *data,
+                                 const void *unused)
+{
+    hbitmap_test_init(data, L3, 0);
+    hbitmap_test_set(data, 0, L3);
+}
+
+static void test_hbitmap_get_all(TestHBitmapData *data,
+                                 const void *unused)
+{
+    hbitmap_test_init(data, L3, 0);
+    hbitmap_test_set(data, 0, L3);
+    hbitmap_test_check_get(data);
+}
+
+static void test_hbitmap_get_some(TestHBitmapData *data,
+                                  const void *unused)
+{
+    hbitmap_test_init(data, 2 * L2, 0);
+    hbitmap_test_set(data, 10, 1);
+    hbitmap_test_check_get(data);
+    hbitmap_test_set(data, L1 - 1, 1);
+    hbitmap_test_check_get(data);
+    hbitmap_test_set(data, L1, 1);
+    hbitmap_test_check_get(data);
+    hbitmap_test_set(data, L2 - 1, 1);
+    hbitmap_test_check_get(data);
+    hbitmap_test_set(data, L2, 1);
+    hbitmap_test_check_get(data);
+}
+
+static void test_hbitmap_set_one(TestHBitmapData *data,
+                                 const void *unused)
+{
+    hbitmap_test_init(data, 2 * L2, 0);
+    hbitmap_test_set(data, 10, 1);
+    hbitmap_test_set(data, L1 - 1, 1);
+    hbitmap_test_set(data, L1, 1);
+    hbitmap_test_set(data, L2 - 1, 1);
+    hbitmap_test_set(data, L2, 1);
+}
+
+static void test_hbitmap_set_two_elem(TestHBitmapData *data,
+                                      const void *unused)
+{
+    hbitmap_test_init(data, 2 * L2, 0);
+    hbitmap_test_set(data, L1 - 1, 2);
+    hbitmap_test_set(data, L1 * 2 - 1, 4);
+    hbitmap_test_set(data, L1 * 4, L1 + 1);
+    hbitmap_test_set(data, L1 * 8 - 1, L1 + 1);
+    hbitmap_test_set(data, L2 - 1, 2);
+    hbitmap_test_set(data, L2 + L1 - 1, 8);
+    hbitmap_test_set(data, L2 + L1 * 4, L1 + 1);
+    hbitmap_test_set(data, L2 + L1 * 8 - 1, L1 + 1);
+}
+
+static void test_hbitmap_set(TestHBitmapData *data,
+                             const void *unused)
+{
+    hbitmap_test_init(data, L3 * 2, 0);
+    hbitmap_test_set(data, L1 - 1, L1 + 2);
+    hbitmap_test_set(data, L1 * 3 - 1, L1 + 2);
+    hbitmap_test_set(data, L1 * 5, L1 * 2 + 1);
+    hbitmap_test_set(data, L1 * 8 - 1, L1 * 2 + 1);
+    hbitmap_test_set(data, L2 - 1, L1 + 2);
+    hbitmap_test_set(data, L2 + L1 * 2 - 1, L1 + 2);
+    hbitmap_test_set(data, L2 + L1 * 4, L1 * 2 + 1);
+    hbitmap_test_set(data, L2 + L1 * 7 - 1, L1 * 2 + 1);
+    hbitmap_test_set(data, L2 * 2 - 1, L3 * 2 - L2 * 2);
+}
+
+static void test_hbitmap_set_overlap(TestHBitmapData *data,
+                                     const void *unused)
+{
+    hbitmap_test_init(data, L3 * 2, 0);
+    hbitmap_test_set(data, L1 - 1, L1 + 2);
+    hbitmap_test_set(data, L1 * 2 - 1, L1 * 2 + 2);
+    hbitmap_test_set(data, 0, L1 * 3);
+    hbitmap_test_set(data, L1 * 8 - 1, L2);
+    hbitmap_test_set(data, L2, L1);
+    hbitmap_test_set(data, L2 - L1 - 1, L1 * 8 + 2);
+    hbitmap_test_set(data, L2, L3 - L2 + 1);
+    hbitmap_test_set(data, L3 - L1, L1 * 3);
+    hbitmap_test_set(data, L3 - 1, 3);
+    hbitmap_test_set(data, L3 - 1, L2);
+}
+
+static void test_hbitmap_reset_empty(TestHBitmapData *data,
+                                     const void *unused)
+{
+    hbitmap_test_init(data, L3, 0);
+    hbitmap_test_reset(data, 0, L3);
+}
+
+static void test_hbitmap_reset(TestHBitmapData *data,
+                               const void *unused)
+{
+    hbitmap_test_init(data, L3 * 2, 0);
+    hbitmap_test_set(data, L1 - 1, L1 + 2);
+    hbitmap_test_reset(data, L1 * 2 - 1, L1 * 2 + 2);
+    hbitmap_test_set(data, 0, L1 * 3);
+    hbitmap_test_reset(data, L1 * 8 - 1, L2);
+    hbitmap_test_set(data, L2, L1);
+    hbitmap_test_reset(data, L2 - L1 - 1, L1 * 8 + 2);
+    hbitmap_test_set(data, L2, L3 - L2 + 1);
+    hbitmap_test_reset(data, L3 - L1, L1 * 3);
+    hbitmap_test_set(data, L3 - 1, 3);
+    hbitmap_test_reset(data, L3 - 1, L2);
+    hbitmap_test_set(data, 0, L3 * 2);
+    hbitmap_test_reset(data, 0, L1);
+    hbitmap_test_reset(data, 0, L2);
+    hbitmap_test_reset(data, L3, L3);
+    hbitmap_test_set(data, L3 / 2, L3);
+}
+
+static void test_hbitmap_granularity(TestHBitmapData *data,
+                                     const void *unused)
+{
+    /* Note that hbitmap_test_check has to be invoked manually in this test.  */
+    hbitmap_test_init(data, L1, 1);
+    hbitmap_test_set(data, 0, 1);
+    hbitmap_test_check(data, 0);
+    hbitmap_test_set(data, 2, 1);
+    hbitmap_test_check(data, 0);
+    hbitmap_test_set(data, 0, 3);
+    g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
+    hbitmap_test_reset(data, 0, 1);
+    g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
+}
+
+static void hbitmap_test_add(const char *testpath,
+                                   TestHBitmapData *data,
+                                   void (*test_func)(TestHBitmapData *data, const void *user_data))
+{
+    g_test_add(testpath, TestHBitmapData, data, NULL, test_func,
+               hbitmap_test_teardown);
+}
+
+int main(int argc, char **argv)
+{
+    TestHBitmapData hbitmap_data;
+
+    g_test_init(&argc, &argv, NULL);
+    hbitmap_test_add("/hbitmap/size/0", &hbitmap_data, test_hbitmap_zero);
+    hbitmap_test_add("/hbitmap/size/unaligned", &hbitmap_data, test_hbitmap_unaligned);
+    hbitmap_test_add("/hbitmap/iter/empty", &hbitmap_data, test_hbitmap_iter_empty);
+    hbitmap_test_add("/hbitmap/iter/past", &hbitmap_data, test_hbitmap_iter_past);
+    hbitmap_test_add("/hbitmap/iter/partial", &hbitmap_data, test_hbitmap_iter_partial);
+    hbitmap_test_add("/hbitmap/get/all", &hbitmap_data, test_hbitmap_get_all);
+    hbitmap_test_add("/hbitmap/get/some", &hbitmap_data, test_hbitmap_get_some);
+    hbitmap_test_add("/hbitmap/set/all", &hbitmap_data, test_hbitmap_set_all);
+    hbitmap_test_add("/hbitmap/set/one", &hbitmap_data, test_hbitmap_set_one);
+    hbitmap_test_add("/hbitmap/set/two-elem", &hbitmap_data, test_hbitmap_set_two_elem);
+    hbitmap_test_add("/hbitmap/set/general", &hbitmap_data, test_hbitmap_set);
+    hbitmap_test_add("/hbitmap/set/overlap", &hbitmap_data, test_hbitmap_set_overlap);
+    hbitmap_test_add("/hbitmap/reset/empty", &hbitmap_data, test_hbitmap_reset_empty);
+    hbitmap_test_add("/hbitmap/reset/general", &hbitmap_data, test_hbitmap_reset);
+    hbitmap_test_add("/hbitmap/granularity", &hbitmap_data, test_hbitmap_granularity);
+    g_test_run();
+
+    return 0;
+}
diff --git a/trace-events b/trace-events
index 65db490..cda332e 100644
--- a/trace-events
+++ b/trace-events
@@ -848,3 +848,8 @@ qxl_render_blit_guest_primary_initialized(void) ""
 qxl_render_blit(int32_t stride, int32_t left, int32_t right, int32_t top, int32_t bottom) "stride=%d [%d, %d, %d, %d]"
 qxl_render_guest_primary_resized(int32_t width, int32_t height, int32_t stride, int32_t bytes_pp, int32_t bits_pp) "%dx%d, stride %d, bpp %d, depth %d"
 qxl_render_update_area_done(void *cookie) "%p"
+
+# hbitmap.c
+hbitmap_iter_next(void *hb, void *hbi, int64_t item, int64_t bit) "hb %p hbi %p item %"PRId64" bit %"PRId64
+hbitmap_reset(void *hb, uint64_t start, uint64_t count, uint64_t sbit, uint64_t ebit) "hb %p items %"PRIu64",%"PRIu64" bits %"PRIu64"..%"PRIu64
+hbitmap_set(void *hb, uint64_t start, uint64_t count, uint64_t sbit, uint64_t ebit) "hb %p items %"PRIu64",%"PRIu64" bits %"PRIu64"..%"PRIu64
-- 
1.7.10.2
