* [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
@ 2018-08-14 17:01 Vladimir Sementsov-Ogievskiy
2018-08-16 15:05 ` no-reply
` (5 more replies)
0 siblings, 6 replies; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-14 17:01 UTC (permalink / raw)
To: qemu-devel, qemu-block
Cc: eblake, armbru, mreitz, kwolf, famz, jsnow, pbonzini, stefanha,
den, vsementsov
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
[v2 is just a resend. I forgot to add Den and myself to Cc, and I don't see the
letter in my thunderbird at all. Strange; sorry for that.]
Hi all!
Here is an idea and a kind of proof-of-concept of how to unify and improve
push/pull backup schemes.
Let's start with fleecing, a way of exporting a point-in-time snapshot without
creating a real snapshot. Currently we do it with the help of backup(sync=none).
Proposal:
For fleecing we need two nodes:
1. Fleecing hook. It's a filter which should be inserted on top of the active
disk. Its main purpose is handling guest writes with a copy-on-write operation,
i.e. it's a substitution for the write notifier in the backup job.
2. Fleecing cache. It's the target node for COW operations by the fleecing hook.
It also represents a point-in-time snapshot of the active disk for readers.
The simplest implementation of the fleecing cache is a temporary qcow2 image,
backed by the active disk, i.e.:
+-------+
| Guest |
+---+---+
|
v
+---+-----------+ file +-----------------------+
| Fleecing hook +---------->+ Fleecing cache(qcow2) |
+---+-----------+ +---+-------------------+
| |
backing | |
v |
+---+---------+ backing |
| Active disk +<----------------+
+-------------+
Hm. No, because of permissions I can't do so; I have to do it like this:
+-------+
| Guest |
+---+---+
|
v
+---+-----------+ file +-----------------------+
| Fleecing hook +---------->+ Fleecing cache(qcow2) |
+---+-----------+ +-----+-----------------+
| |
backing | | backing
v v
+---+---------+ backing +-----+---------------------+
| Active disk +<------------+ hack children permissions |
+-------------+ | filter node |
+---------------------------+
Ok, this works; it's an image fleecing scheme without any block jobs.
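For comparison with the test diff further down, the whole setup reduces to two
blockdev-add calls, in QAPI example notation (node names and the qcow2 filename
here are placeholders):

```
-> { "execute": "blockdev-add",
     "arguments": { "driver": "qcow2", "node-name": "tgt",
                    "file": { "driver": "file", "filename": "fleece.qcow2" } } }
<- { "return": {} }
-> { "execute": "blockdev-add",
     "arguments": { "driver": "fleecing-hook", "node-name": "hook",
                    "file": "tgt", "backing": "src" } }
<- { "return": {} }
```

The second command both inserts the filter above "src" and sets up the backing
links automatically (see problem 3 below).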
Problems with the implementation:
1. What to do with the hack-permissions node? What is the right way to implement
something like this? How to tune permissions to avoid this additional node?
2. Inserting/removing the filter. Do we have a working way, or developments on
it?
3. Interesting: we can't set up the backing link to the active disk before
inserting the fleecing hook; otherwise it will damage this link on insertion.
This means that we can't create the fleecing cache node in advance with a
backing link ready to reference when creating the fleecing hook. And we can't
prepare all the nodes in advance and then insert the filter. We have to either:
1. create all the nodes with all links in one big json, or
2. set backing links/create nodes automatically, as is done in this RFC
(a bad way, I think: not clear, not transparent)
4. Is it a good idea to use "backing" and "file" links in such a way?
Benefits, or what can be done:
1. We can implement a special fleecing-cache filter driver, which will be a real
cache: it will store some recently written clusters in RAM, it can have a
backing (or file?) qcow2 child to flush some clusters to disk, etc. So,
for each cluster of the active disk we will have the following characteristics:
- changed (changed in the active disk since backup start)
- copy (we need this cluster for the fleecing user; for example, in the RFC
patch all clusters are "copy": cow_bitmap is initialized to all ones. We can use
some existing bitmap to initialize cow_bitmap, and it will provide "incremental"
fleecing (for use in incremental backup, push or pull))
- cached in RAM
- cached on disk
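A toy model (not QEMU code) of these four per-cluster flags; the names and the
write rule are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    changed: bool = False      # changed in active disk since backup start
    copy: bool = True          # still needed by the fleecing user (cow_bitmap)
    ram_cached: bool = False   # cached in RAM
    disk_cached: bool = False  # cached on disk

def guest_write(cl: Cluster) -> bool:
    """Return True if a COW copy must be made before the guest write."""
    need_cow = cl.copy and not (cl.ram_cached or cl.disk_cached)
    if need_cow:
        cl.ram_cached = True   # old data saved into the cache first
    cl.changed = True          # already-cached clusters are only flagged
    return need_cow

cl = Cluster()
guest_write(cl)   # first write triggers COW
guest_write(cl)   # second write: already cached, no COW needed
```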
On top of these characteristics we can implement the following features:
1. COR: we can cache clusters not only on writes but on reads too, if we have
free space in the RAM cache (and if not, don't cache at all, don't write to the
disk cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESSARY)
2. Benefit for the guest: if a cluster is unchanged and RAM-cached, we can skip
reading from the device
3. If needed, we can drop unchanged RAM-cached clusters from the RAM cache
4. On guest write, if the cluster is already cached, we just mark it "changed"
5. Lazy discards: in some setups, discards are not guaranteed to do anything,
so we can at least defer some discards to the end of backup if the RAM cache is
full.
6. We can implement a discard operation in the fleecing cache, to mark a
cluster as not needed (drop it from the cache, drop its "copy" flag), so
further reads of this cluster will return an error. The fleecing client may
then read clusters one by one and discard them to reduce the COW load on the
drive. We can even combine read and discard into one command, something like
"read-once", or it may be a flag on the fleecing cache that all reads are
"read-once".
7. We can provide recommendations on which clusters the fleecing client should
copy first. Examples:
a. copy RAM-cached clusters first (obvious: to unload the cache and reduce io
overhead)
b. copy zero clusters last (they don't occupy space in the cache, so let's copy
other clusters first)
c. copy disk-cached clusters last (if we don't care about disk space,
we can say that for disk-cached clusters we already pay the maximum
io overhead, so let's copy other clusters first)
d. copy disk-cached clusters with high priority (but after RAM-cached ones),
if we don't have enough disk space
So, there is a wide range of possible policies. How to provide these
recommendations?
1. block_status
2. create separate interface
3. internal backup job may access shared fleecing object directly.
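One possible encoding of recommendations (a), (b) and (d) above as a copy
priority for the fleecing client; the Cluster fields are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    copy: bool = True          # still needed by the fleecing user
    ram_cached: bool = False
    disk_cached: bool = False
    is_zero: bool = False

def copy_priority(cl: Cluster) -> int:
    """Lower value = copy earlier."""
    if not cl.copy:
        return 4               # nothing to do: already copied or discarded
    if cl.ram_cached:
        return 0               # (a) unload the RAM cache first
    if cl.disk_cached:
        return 1               # (d) disk cache next, when disk space is scarce
    if cl.is_zero:
        return 3               # (b) zero clusters last
    return 2                   # ordinary uncached clusters

clusters = [Cluster(is_zero=True), Cluster(ram_cached=True), Cluster()]
order = sorted(range(len(clusters)), key=lambda i: copy_priority(clusters[i]))
# order starts with the RAM-cached cluster and ends with the zero cluster
```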
About internal backup:
Of course, we need a job which will copy clusters. But it will be simplified:
it should not care about guest writes; it copies clusters from a kind of
snapshot which does not change over time. This job should follow the
recommendations from the fleecing scheme [7].
What about the target?
We can use a separate node as the target and copy from the fleecing cache to
the target. If we have only a RAM cache, this is equivalent to the current
approach (data is copied directly to the target, even on COW). If we have both
RAM and disk caches, it's a good solution for a slow target: instead of making
the guest wait for a long write to the backup target (when the RAM cache is
full), we can write to the disk cache, which is local and fast.
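The slow-target idea can be sketched like this: COW data lands in a bounded RAM
cache when possible, spills to a local disk cache otherwise, and a background
job drains both (RAM first) to the slow target. All names and sizes here are
made up:

```python
class FleecingCache:
    def __init__(self, ram_limit: int):
        self.ram_limit = ram_limit
        self.ram = {}    # cluster -> data, fast tier
        self.disk = {}   # cluster -> data, local spill tier

    def cow_write(self, cluster: int, data: bytes) -> None:
        if len(self.ram) < self.ram_limit:
            self.ram[cluster] = data     # guest barely delayed
        else:
            self.disk[cluster] = data    # still local and fast vs. the target

    def drain_one(self, target: dict) -> bool:
        """Move one cluster to the slow target; RAM-cached ones first."""
        tier = self.ram if self.ram else self.disk
        if not tier:
            return False
        cluster, data = tier.popitem()
        target[cluster] = data           # slow write, off the guest write path
        return True

cache = FleecingCache(ram_limit=1)
cache.cow_write(0, b"a")   # fits in RAM
cache.cow_write(1, b"b")   # RAM full: spills to the disk cache
```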
Another option is to combine the fleecing cache and the target somehow (I
haven't really thought this through).
Finally, with one or two (three?) special filters we can implement all current
fleecing/backup schemes in a unified and very configurable way, and add a lot
more cool features and possibilities.
What do you think?
I really need help with creating/inserting/destroying the fleecing graph; my
code for it is a hack. I don't like it; it just works.
About testing: to show that this works, I use the existing fleecing test, 222,
slightly tuned (drop the block job and use the new qmp command to remove the
filter).
Based on:
[PATCH v3 0/8] dirty-bitmap: rewrite bdrv_dirty_iter_next_area
and
[PATCH 0/2] block: make .bdrv_close optional
qapi/block-core.json | 23 +++-
block/fleecing-hook.c | 280 +++++++++++++++++++++++++++++++++++++++++++++
blockdev.c | 37 ++++++
block/Makefile.objs | 2 +
tests/qemu-iotests/222 | 21 ++--
tests/qemu-iotests/222.out | 1 -
6 files changed, 352 insertions(+), 12 deletions(-)
create mode 100644 block/fleecing-hook.c
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 5b9084a394..70849074b3 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2549,7 +2549,8 @@
'host_cdrom', 'host_device', 'http', 'https', 'iscsi', 'luks',
'nbd', 'nfs', 'null-aio', 'null-co', 'nvme', 'parallels', 'qcow',
'qcow2', 'qed', 'quorum', 'raw', 'rbd', 'replication', 'sheepdog',
- 'ssh', 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat', 'vxhs' ] }
+ 'ssh', 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat', 'vxhs',
+ 'fleecing-hook'] }
##
# @BlockdevOptionsFile:
@@ -3636,7 +3637,8 @@
'vmdk': 'BlockdevOptionsGenericCOWFormat',
'vpc': 'BlockdevOptionsGenericFormat',
'vvfat': 'BlockdevOptionsVVFAT',
- 'vxhs': 'BlockdevOptionsVxHS'
+ 'vxhs': 'BlockdevOptionsVxHS',
+ 'fleecing-hook': 'BlockdevOptionsGenericCOWFormat'
} }
##
@@ -3757,6 +3759,23 @@
{ 'command': 'blockdev-del', 'data': { 'node-name': 'str' } }
##
+# @x-drop-fleecing:
+#
+# Deletes the fleecing-hook filter from the top of the backing chain.
+#
+# @node-name: Name of the fleecing-hook node.
+#
+# Since: 3.1
+#
+# -> { "execute": "x-drop-fleecing",
+# "arguments": { "node-name": "fleece0" }
+# }
+# <- { "return": {} }
+#
+##
+{ 'command': 'x-drop-fleecing', 'data': { 'node-name': 'str' } }
+
+##
# @BlockdevCreateOptionsFile:
#
# Driver specific image creation options for file.
diff --git a/block/fleecing-hook.c b/block/fleecing-hook.c
new file mode 100644
index 0000000000..1728d503a7
--- /dev/null
+++ b/block/fleecing-hook.c
@@ -0,0 +1,280 @@
+#include "qemu/osdep.h"
+#include "qemu/cutils.h"
+#include "qemu-common.h"
+#include "qapi/error.h"
+#include "block/blockjob.h"
+#include "block/block_int.h"
+#include "block/block_backup.h"
+#include "block/qdict.h"
+#include "sysemu/block-backend.h"
+
+typedef struct BDRVFleecingHookState {
+ HBitmap *cow_bitmap; /* what should be copied to @file on guest write. */
+
+ /* use of common BlockDriverState fields:
+ * @backing: link to active disk. Fleecing hook is a filter, which should
+ * replace active disk in block tree. Fleecing hook then transfers
+ * requests to active disk through @backing link.
+ * @file: Fleecing cache. It's a storage for COW. @file should look like a
+ * point-in-time snapshot of active disk for readers.
+ */
+} BDRVFleecingHookState;
+
+static coroutine_fn int fleecing_hook_co_preadv(BlockDriverState *bs,
+ uint64_t offset, uint64_t bytes,
+ QEMUIOVector *qiov, int flags)
+{
+ /* Features to be implemented:
+ * F1. COR. save read data to fleecing cache for fast access
+ * (to reduce reads)
+ * F2. read from fleecing cache if data is in ram-cache and is unchanged
+ */
+
+ return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
+}
+
+static coroutine_fn int fleecing_hook_cow(BlockDriverState *bs, uint64_t offset,
+ uint64_t bytes)
+{
+ int ret = 0;
+ BDRVFleecingHookState *s = bs->opaque;
+ uint64_t gran = 1UL << hbitmap_granularity(s->cow_bitmap);
+ uint64_t end = QEMU_ALIGN_UP(offset + bytes, gran);
+ uint64_t off = QEMU_ALIGN_DOWN(offset, gran), len;
+ size_t align = MAX(bdrv_opt_mem_align(bs->backing->bs),
+ bdrv_opt_mem_align(bs->file->bs));
+ struct iovec iov = {
+ .iov_base = qemu_memalign(align, end - off),
+ .iov_len = end - off
+ };
+ QEMUIOVector qiov;
+
+ qemu_iovec_init_external(&qiov, &iov, 1);
+
+ /* Features to be implemented:
+ * F3. parallelize copying loop
+ * F4. detect zeros
+ * F5. use block_status ?
+ * F6. don't cache clusters which are already cached by COR [see F1]
+ */
+
+ while (hbitmap_next_dirty_area(s->cow_bitmap, &off, end, &len)) {
+ iov.iov_len = qiov.size = len;
+ ret = bdrv_co_preadv(bs->backing, off, len, &qiov,
+ BDRV_REQ_NO_SERIALISING);
+ if (ret < 0) {
+ goto finish;
+ }
+
+ ret = bdrv_co_pwritev(bs->file, off, len, &qiov, BDRV_REQ_SERIALISING);
+ if (ret < 0) {
+ goto finish;
+ }
+ hbitmap_reset(s->cow_bitmap, off, len);
+ }
+
+finish:
+ qemu_vfree(iov.iov_base);
+
+ return ret;
+}
+
+static int coroutine_fn fleecing_hook_co_pdiscard(
+ BlockDriverState *bs, int64_t offset, int bytes)
+{
+ int ret = fleecing_hook_cow(bs, offset, bytes);
+ if (ret < 0) {
+ return ret;
+ }
+
+ /* Features to be implemented:
+ * F7. possibility of lazy discard: just defer the discard after fleecing
+ * completion. If write (or new discard) occurs to the same area, just
+ * drop deferred discard.
+ */
+
+ return bdrv_co_pdiscard(bs->backing, offset, bytes);
+}
+
+static int coroutine_fn fleecing_hook_co_pwrite_zeroes(BlockDriverState *bs,
+ int64_t offset, int bytes, BdrvRequestFlags flags)
+{
+ int ret = fleecing_hook_cow(bs, offset, bytes);
+ if (ret < 0) {
+ /* F8. Additional option to break fleecing instead of breaking guest
+ * write here */
+ return ret;
+ }
+
+ return bdrv_co_pwrite_zeroes(bs->backing, offset, bytes, flags);
+}
+
+static coroutine_fn int fleecing_hook_co_pwritev(BlockDriverState *bs,
+ uint64_t offset,
+ uint64_t bytes,
+ QEMUIOVector *qiov, int flags)
+{
+ int ret = fleecing_hook_cow(bs, offset, bytes);
+ if (ret < 0) {
+ return ret;
+ }
+
+ return bdrv_co_pwritev(bs->backing, offset, bytes, qiov, flags);
+}
+
+static int coroutine_fn fleecing_hook_co_flush(BlockDriverState *bs)
+{
+ if (!bs->backing) {
+ return 0;
+ }
+
+ return bdrv_co_flush(bs->backing->bs);
+}
+
+static void fleecing_hook_refresh_filename(BlockDriverState *bs, QDict *opts)
+{
+ if (bs->backing == NULL) {
+ /* we can be here after failed bdrv_attach_child in
+ * bdrv_set_backing_hd */
+ return;
+ }
+ bdrv_refresh_filename(bs->backing->bs);
+ pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
+ bs->backing->bs->filename);
+}
+
+static void fleecing_hook_child_perm(BlockDriverState *bs, BdrvChild *c,
+ const BdrvChildRole *role,
+ BlockReopenQueue *reopen_queue,
+ uint64_t perm, uint64_t shared,
+ uint64_t *nperm, uint64_t *nshared)
+{
+ *nperm = BLK_PERM_CONSISTENT_READ;
+ *nshared = BLK_PERM_ALL;
+}
+
+static coroutine_fn int fleecing_cheat_co_preadv(BlockDriverState *bs,
+ uint64_t offset, uint64_t bytes,
+ QEMUIOVector *qiov, int flags)
+{
+ return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
+}
+
+static int coroutine_fn fleecing_cheat_co_pdiscard(
+ BlockDriverState *bs, int64_t offset, int bytes)
+{
+ return -EINVAL;
+}
+
+static coroutine_fn int fleecing_cheat_co_pwritev(BlockDriverState *bs,
+ uint64_t offset,
+ uint64_t bytes,
+ QEMUIOVector *qiov, int flags)
+{
+ return -EINVAL;
+}
+
+BlockDriver bdrv_fleecing_cheat = {
+ .format_name = "fleecing-cheat",
+
+ .bdrv_co_preadv = fleecing_cheat_co_preadv,
+ .bdrv_co_pwritev = fleecing_cheat_co_pwritev,
+ .bdrv_co_pdiscard = fleecing_cheat_co_pdiscard,
+
+ .bdrv_co_block_status = bdrv_co_block_status_from_backing,
+
+ .bdrv_refresh_filename = fleecing_hook_refresh_filename,
+ .bdrv_child_perm = fleecing_hook_child_perm,
+};
+
+static int fleecing_hook_open(BlockDriverState *bs, QDict *options, int flags,
+ Error **errp)
+{
+ BDRVFleecingHookState *s = bs->opaque;
+ Error *local_err = NULL;
+ const char *backing;
+ BlockDriverState *backing_bs, *cheat;
+
+ backing = qdict_get_try_str(options, "backing");
+ if (!backing) {
+ error_setg(errp, "No backing option");
+ return -EINVAL;
+ }
+
+ backing_bs = bdrv_lookup_bs(backing, backing, errp);
+ if (!backing_bs) {
+ return -EINVAL;
+ }
+
+ qdict_del(options, "backing");
+
+ bs->file = bdrv_open_child(NULL, options, "file", bs, &child_file,
+ false, errp);
+ if (!bs->file) {
+ return -EINVAL;
+ }
+
+ bs->total_sectors = backing_bs->total_sectors;
+ bdrv_set_aio_context(bs, bdrv_get_aio_context(backing_bs));
+ bdrv_set_aio_context(bs->file->bs, bdrv_get_aio_context(backing_bs));
+
+ cheat = bdrv_new_open_driver(&bdrv_fleecing_cheat, "cheat",
+ BDRV_O_RDWR, errp);
+ cheat->total_sectors = backing_bs->total_sectors;
+ bdrv_set_aio_context(cheat, bdrv_get_aio_context(backing_bs));
+
+ bdrv_drained_begin(backing_bs);
+ bdrv_ref(bs);
+ bdrv_append(bs, backing_bs, &local_err);
+
+ bdrv_set_backing_hd(cheat, backing_bs, &error_abort);
+ bdrv_set_backing_hd(bs->file->bs, cheat, &error_abort);
+ bdrv_unref(cheat);
+
+ bdrv_drained_end(backing_bs);
+
+ if (local_err) {
+ error_propagate(errp, local_err);
+ return -EINVAL;
+ }
+
+ s->cow_bitmap = hbitmap_alloc(bdrv_getlength(backing_bs), 16);
+ hbitmap_set(s->cow_bitmap, 0, bdrv_getlength(backing_bs));
+
+ return 0;
+}
+
+static void fleecing_hook_close(BlockDriverState *bs)
+{
+ BDRVFleecingHookState *s = bs->opaque;
+
+ if (s->cow_bitmap) {
+ hbitmap_free(s->cow_bitmap);
+ }
+}
+
+BlockDriver bdrv_fleecing_hook_filter = {
+ .format_name = "fleecing-hook",
+ .instance_size = sizeof(BDRVFleecingHookState),
+
+ .bdrv_co_preadv = fleecing_hook_co_preadv,
+ .bdrv_co_pwritev = fleecing_hook_co_pwritev,
+ .bdrv_co_pwrite_zeroes = fleecing_hook_co_pwrite_zeroes,
+ .bdrv_co_pdiscard = fleecing_hook_co_pdiscard,
+ .bdrv_co_flush = fleecing_hook_co_flush,
+
+ .bdrv_co_block_status = bdrv_co_block_status_from_backing,
+
+ .bdrv_refresh_filename = fleecing_hook_refresh_filename,
+ .bdrv_open = fleecing_hook_open,
+ .bdrv_close = fleecing_hook_close,
+
+ .bdrv_child_perm = bdrv_filter_default_perms,
+};
+
+static void bdrv_fleecing_hook_init(void)
+{
+ bdrv_register(&bdrv_fleecing_hook_filter);
+}
+
+block_init(bdrv_fleecing_hook_init);
diff --git a/blockdev.c b/blockdev.c
index dcf8c8d2ab..0b734fa670 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -4284,6 +4284,43 @@ out:
aio_context_release(aio_context);
}
+void qmp_x_drop_fleecing(const char *node_name, Error **errp)
+{
+ AioContext *aio_context;
+ BlockDriverState *bs;
+
+ bs = bdrv_find_node(node_name);
+ if (!bs) {
+ error_setg(errp, "Cannot find node %s", node_name);
+ return;
+ }
+
+ if (!bdrv_has_blk(bs)) {
+ error_setg(errp, "Node %s is not inserted", node_name);
+ return;
+ }
+
+ if (!bs->backing) {
+ error_setg(errp, "'%s' has no backing", node_name);
+ return;
+ }
+
+ aio_context = bdrv_get_aio_context(bs);
+ aio_context_acquire(aio_context);
+
+ bdrv_drained_begin(bs);
+
+ bdrv_child_try_set_perm(bs->backing, 0, BLK_PERM_ALL, &error_abort);
+ bdrv_replace_node(bs, backing_bs(bs), &error_abort);
+ bdrv_set_backing_hd(bs, NULL, &error_abort);
+
+ bdrv_drained_end(bs);
+
+ qmp_blockdev_del(node_name, &error_abort);
+
+ aio_context_release(aio_context);
+}
+
static BdrvChild *bdrv_find_child(BlockDriverState *parent_bs,
const char *child_name)
{
diff --git a/block/Makefile.objs b/block/Makefile.objs
index c8337bf186..081720b14f 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -31,6 +31,8 @@ block-obj-y += throttle.o copy-on-read.o
block-obj-y += crypto.o
+block-obj-y += fleecing-hook.o
+
common-obj-y += stream.o
nfs.o-libs := $(LIBNFS_LIBS)
diff --git a/tests/qemu-iotests/222 b/tests/qemu-iotests/222
index 0ead56d574..bafb426f67 100644
--- a/tests/qemu-iotests/222
+++ b/tests/qemu-iotests/222
@@ -86,14 +86,19 @@ with iotests.FilePath('base.img') as base_img_path, \
"driver": "file",
"filename": fleece_img_path,
},
- "backing": src_node,
+ # backing is unset, otherwise we can't insert filter,
+ # instead, fleecing_hook will set backing link for
+ # tgt_node automatically.
}))
- # Establish COW from source to fleecing node
- log(vm.qmp("blockdev-backup",
- device=src_node,
- target=tgt_node,
- sync="none"))
+ # Establish COW from source to fleecing node, also,
+ # source becomes backing file of target.
+ log(vm.qmp("blockdev-add", **{
+ "driver": "fleecing-hook",
+ "node-name": "hook",
+ "file": tgt_node,
+ "backing": src_node,
+ }))
log('')
log('--- Setting up NBD Export ---')
@@ -137,10 +142,8 @@ with iotests.FilePath('base.img') as base_img_path, \
log('--- Cleanup ---')
log('')
- log(vm.qmp('block-job-cancel', device=src_node))
- log(vm.event_wait('BLOCK_JOB_CANCELLED'),
- filters=[iotests.filter_qmp_event])
log(vm.qmp('nbd-server-stop'))
+ log(vm.qmp('x-drop-fleecing', node_name="hook"))
log(vm.qmp('blockdev-del', node_name=tgt_node))
vm.shutdown()
diff --git a/tests/qemu-iotests/222.out b/tests/qemu-iotests/222.out
index 48f336a02b..be925601a8 100644
--- a/tests/qemu-iotests/222.out
+++ b/tests/qemu-iotests/222.out
@@ -50,7 +50,6 @@ read -P0 0x3fe0000 64k
--- Cleanup ---
{u'return': {}}
-{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'drive0', u'type': u'backup', u'speed': 0, u'len': 67108864, u'offset': 393216}, u'event': u'BLOCK_JOB_CANCELLED'}
{u'return': {}}
{u'return': {}}
--
2.11.1
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
@ 2018-08-16 15:05 ` no-reply
2018-08-16 17:28 ` Vladimir Sementsov-Ogievskiy
2018-08-16 15:09 ` no-reply
` (4 subsequent siblings)
5 siblings, 1 reply; 15+ messages in thread
From: no-reply @ 2018-08-16 15:05 UTC (permalink / raw)
To: vsementsov; +Cc: famz, qemu-devel, qemu-block, kwolf
Hi,
This series failed docker-mingw@fedora build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
Type: series
Message-id: 20180814170126.56461-1-vsementsov@virtuozzo.com
Subject: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
=== TEST SCRIPT BEGIN ===
#!/bin/bash
time make docker-test-mingw@fedora SHOW_ENV=1 J=8
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
fede2b479e new, node-graph-based fleecing and backup
=== OUTPUT BEGIN ===
BUILD fedora
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-axs4wus0/src'
GEN /var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar
Cloning into '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar.vroot'...
done.
Checking out files: 100% (6322/6322), done.
Your branch is up-to-date with 'origin/test'.
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar.vroot/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered for path 'ui/keycodemapdb'
Cloning into '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar.vroot/ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out '6b3d716e2b6472eb7189d3220552280ef3d832ce'
COPY RUNNER
RUN test-mingw in qemu:fedora
Packages installed:
SDL2-devel-2.0.8-5.fc28.x86_64
bc-1.07.1-5.fc28.x86_64
bison-3.0.4-9.fc28.x86_64
bluez-libs-devel-5.49-3.fc28.x86_64
brlapi-devel-0.6.7-12.fc28.x86_64
bzip2-1.0.6-26.fc28.x86_64
bzip2-devel-1.0.6-26.fc28.x86_64
ccache-3.4.2-2.fc28.x86_64
clang-6.0.0-5.fc28.x86_64
device-mapper-multipath-devel-0.7.4-2.git07e7bd5.fc28.x86_64
findutils-4.6.0-19.fc28.x86_64
flex-2.6.1-7.fc28.x86_64
gcc-8.1.1-1.fc28.x86_64
gcc-c++-8.1.1-1.fc28.x86_64
gettext-0.19.8.1-14.fc28.x86_64
git-2.17.1-2.fc28.x86_64
glib2-devel-2.56.1-3.fc28.x86_64
glusterfs-api-devel-4.0.2-1.fc28.x86_64
gnutls-devel-3.6.2-1.fc28.x86_64
gtk3-devel-3.22.30-1.fc28.x86_64
hostname-3.20-3.fc28.x86_64
libaio-devel-0.3.110-11.fc28.x86_64
libasan-8.1.1-1.fc28.x86_64
libattr-devel-2.4.47-23.fc28.x86_64
libcap-devel-2.25-9.fc28.x86_64
libcap-ng-devel-0.7.9-1.fc28.x86_64
libcurl-devel-7.59.0-3.fc28.x86_64
libfdt-devel-1.4.6-4.fc28.x86_64
libpng-devel-1.6.34-3.fc28.x86_64
librbd-devel-12.2.5-1.fc28.x86_64
libssh2-devel-1.8.0-7.fc28.x86_64
libubsan-8.1.1-1.fc28.x86_64
libusbx-devel-1.0.21-6.fc28.x86_64
libxml2-devel-2.9.7-4.fc28.x86_64
llvm-6.0.0-11.fc28.x86_64
lzo-devel-2.08-12.fc28.x86_64
make-4.2.1-6.fc28.x86_64
mingw32-SDL2-2.0.5-3.fc27.noarch
mingw32-bzip2-1.0.6-9.fc27.noarch
mingw32-curl-7.57.0-1.fc28.noarch
mingw32-glib2-2.54.1-1.fc28.noarch
mingw32-gmp-6.1.2-2.fc27.noarch
mingw32-gnutls-3.5.13-2.fc27.noarch
mingw32-gtk3-3.22.16-1.fc27.noarch
mingw32-libjpeg-turbo-1.5.1-3.fc27.noarch
mingw32-libpng-1.6.29-2.fc27.noarch
mingw32-libssh2-1.8.0-3.fc27.noarch
mingw32-libtasn1-4.13-1.fc28.noarch
mingw32-nettle-3.3-3.fc27.noarch
mingw32-pixman-0.34.0-3.fc27.noarch
mingw32-pkg-config-0.28-9.fc27.x86_64
mingw64-SDL2-2.0.5-3.fc27.noarch
mingw64-bzip2-1.0.6-9.fc27.noarch
mingw64-curl-7.57.0-1.fc28.noarch
mingw64-glib2-2.54.1-1.fc28.noarch
mingw64-gmp-6.1.2-2.fc27.noarch
mingw64-gnutls-3.5.13-2.fc27.noarch
mingw64-gtk3-3.22.16-1.fc27.noarch
mingw64-libjpeg-turbo-1.5.1-3.fc27.noarch
mingw64-libpng-1.6.29-2.fc27.noarch
mingw64-libssh2-1.8.0-3.fc27.noarch
mingw64-libtasn1-4.13-1.fc28.noarch
mingw64-nettle-3.3-3.fc27.noarch
mingw64-pixman-0.34.0-3.fc27.noarch
mingw64-pkg-config-0.28-9.fc27.x86_64
ncurses-devel-6.1-5.20180224.fc28.x86_64
nettle-devel-3.4-2.fc28.x86_64
nss-devel-3.36.1-1.1.fc28.x86_64
numactl-devel-2.0.11-8.fc28.x86_64
package PyYAML is not installed
package libjpeg-devel is not installed
perl-5.26.2-411.fc28.x86_64
pixman-devel-0.34.0-8.fc28.x86_64
python3-3.6.5-1.fc28.x86_64
snappy-devel-1.1.7-5.fc28.x86_64
sparse-0.5.2-1.fc28.x86_64
spice-server-devel-0.14.0-4.fc28.x86_64
systemtap-sdt-devel-3.2-11.fc28.x86_64
tar-1.30-3.fc28.x86_64
usbredir-devel-0.7.1-7.fc28.x86_64
virglrenderer-devel-0.6.0-4.20170210git76b3da97b.fc28.x86_64
vte3-devel-0.36.5-6.fc28.x86_64
which-2.21-8.fc28.x86_64
xen-devel-4.10.1-3.fc28.x86_64
zlib-devel-1.2.11-8.fc28.x86_64
Environment variables:
TARGET_LIST=
PACKAGES=ccache gettext git tar PyYAML sparse flex bison python3 bzip2 hostname gcc gcc-c++ llvm clang make perl which bc findutils glib2-devel libaio-devel pixman-devel zlib-devel libfdt-devel libasan libubsan bluez-libs-devel brlapi-devel bzip2-devel device-mapper-multipath-devel glusterfs-api-devel gnutls-devel gtk3-devel libattr-devel libcap-devel libcap-ng-devel libcurl-devel libjpeg-devel libpng-devel librbd-devel libssh2-devel libusbx-devel libxml2-devel lzo-devel ncurses-devel nettle-devel nss-devel numactl-devel SDL2-devel snappy-devel spice-server-devel systemtap-sdt-devel usbredir-devel virglrenderer-devel vte3-devel xen-devel mingw32-pixman mingw32-glib2 mingw32-gmp mingw32-SDL2 mingw32-pkg-config mingw32-gtk3 mingw32-gnutls mingw32-nettle mingw32-libtasn1 mingw32-libjpeg-turbo mingw32-libpng mingw32-curl mingw32-libssh2 mingw32-bzip2 mingw64-pixman mingw64-glib2 mingw64-gmp mingw64-SDL2 mingw64-pkg-config mingw64-gtk3 mingw64-gnutls mingw64-nettle mingw64-libtasn1 mingw64-libjpeg-turbo mingw64-libpng mingw64-curl mingw64-libssh2 mingw64-bzip2
J=8
V=
HOSTNAME=2d66dc589b5e
DEBUG=
SHOW_ENV=1
PWD=/
HOME=/
CCACHE_DIR=/var/tmp/ccache
DISTTAG=f28container
QEMU_CONFIGURE_OPTS=--python=/usr/bin/python3
FGC=f28
TEST_DIR=/tmp/qemu-test
SHLVL=1
FEATURES=mingw clang pyyaml asan dtc
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAKEFLAGS= -j8
EXTRA_CONFIGURE_OPTS=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/tmp/qemu-test/install --python=/usr/bin/python3 --cross-prefix=x86_64-w64-mingw32- --enable-trace-backends=simple --enable-gnutls --enable-nettle --enable-curl --enable-vnc --enable-bzip2 --enable-guest-agent --with-sdlabi=2.0 --with-gtkabi=3.0
Install prefix /tmp/qemu-test/install
BIOS directory /tmp/qemu-test/install
firmware path /tmp/qemu-test/install/share/qemu-firmware
binary directory /tmp/qemu-test/install
library directory /tmp/qemu-test/install/lib
module directory /tmp/qemu-test/install/lib
libexec directory /tmp/qemu-test/install/libexec
include directory /tmp/qemu-test/install/include
config directory /tmp/qemu-test/install
local state directory queried at runtime
Windows SDK no
Source path /tmp/qemu-test/src
GIT binary git
GIT submodules
C compiler x86_64-w64-mingw32-gcc
Host C compiler cc
C++ compiler x86_64-w64-mingw32-g++
Objective-C compiler clang
ARFLAGS rv
CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
QEMU_CFLAGS -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -Werror -DHAS_LIBSSH2_SFTP_FSYNC -mms-bitfields -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/glib-2.0 -I/usr/x86_64-w64-mingw32/sys-root/mingw/lib/glib-2.0/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -m64 -mcx16 -mthreads -D__USE_MINGW_ANSI_STDIO=1 -DWIN32_LEAN_AND_MEAN -DWINVER=0x501 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wexpansion-to-defined -Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/p11-kit-1 -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/libpng16
LDFLAGS -Wl,--nxcompat -Wl,--no-seh -Wl,--dynamicbase -Wl,--warn-common -m64 -g
QEMU_LDFLAGS -L$(BUILD_DIR)/dtc/libfdt
make make
install install
python /usr/bin/python3 -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
gprof enabled no
sparse enabled no
strip binaries yes
profiler no
static build no
SDL support yes (2.0.5)
GTK support yes (3.22.16)
GTK GL support no
VTE support no
TLS priority NORMAL
GNUTLS support yes
GNUTLS rnd yes
libgcrypt no
libgcrypt kdf no
nettle yes (3.3)
nettle kdf yes
libtasn1 yes
curses support no
virgl support no
curl support yes
mingw32 support yes
Audio drivers dsound
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
Multipath support no
VNC support yes
VNC SASL support no
VNC JPEG support yes
VNC PNG support yes
xen support no
brlapi support no
bluez support no
Documentation no
PIE no
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support no
Install blobs yes
KVM support no
HAX support yes
HVF support no
WHPX support no
TCG support yes
TCG debug enabled no
TCG interpreter no
malloc trim support no
RDMA support no
fdt support git
membarrier no
preadv support no
fdatasync no
madvise no
posix_madvise no
posix_memalign no
libcap-ng support no
vhost-net support no
vhost-crypto support no
vhost-scsi support no
vhost-vsock support no
vhost-user support no
Trace backends simple
Trace output file trace-<pid>
spice support no
rbd support no
xfsctl support no
smartcard support no
libusb no
usb net redir no
OpenGL support no
OpenGL dmabufs no
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info yes
QGA MSI support no
seccomp support no
coroutine backend win32
coroutine pool yes
debug stack usage no
mutex debugging no
crypto afalg no
GlusterFS support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support yes
TPM passthrough no
TPM emulator no
QOM debugging yes
Live block migration yes
lzo support no
snappy support no
bzip2 support yes
NUMA host support no
libxml2 no
tcmalloc support no
jemalloc support no
avx2 optimization yes
replication support yes
VxHS block device no
capstone no
docker no
NOTE: cross-compilers enabled: 'x86_64-w64-mingw32-gcc'
GEN x86_64-softmmu/config-devices.mak.tmp
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qemu-options.def
GEN qapi-gen
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
GEN aarch64-softmmu/config-devices.mak
GEN x86_64-softmmu/config-devices.mak
GEN trace/generated-helpers.c
GEN module_block.h
GEN ui/input-keymap-atset1-to-qcode.c
GEN ui/input-keymap-linux-to-qcode.c
GEN ui/input-keymap-qcode-to-atset1.c
GEN ui/input-keymap-qcode-to-atset2.c
GEN ui/input-keymap-qcode-to-atset3.c
GEN ui/input-keymap-qcode-to-linux.c
GEN ui/input-keymap-qcode-to-qnum.c
GEN ui/input-keymap-qcode-to-sun.c
GEN ui/input-keymap-qnum-to-qcode.c
GEN ui/input-keymap-usb-to-qcode.c
GEN ui/input-keymap-win32-to-qcode.c
GEN ui/input-keymap-x11-to-qcode.c
GEN ui/input-keymap-xorgevdev-to-qcode.c
GEN ui/input-keymap-xorgkbd-to-qcode.c
GEN ui/input-keymap-xorgxquartz-to-qcode.c
GEN ui/input-keymap-xorgxwin-to-qcode.c
GEN ui/input-keymap-osx-to-qcode.c
GEN tests/test-qapi-gen
GEN trace-root.h
GEN accel/kvm/trace.h
GEN accel/tcg/trace.h
GEN audio/trace.h
GEN block/trace.h
GEN chardev/trace.h
GEN crypto/trace.h
GEN hw/9pfs/trace.h
GEN hw/acpi/trace.h
GEN hw/alpha/trace.h
GEN hw/arm/trace.h
GEN hw/audio/trace.h
GEN hw/block/trace.h
GEN hw/block/dataplane/trace.h
GEN hw/char/trace.h
GEN hw/display/trace.h
GEN hw/dma/trace.h
GEN hw/hppa/trace.h
GEN hw/i2c/trace.h
GEN hw/i386/trace.h
GEN hw/i386/xen/trace.h
GEN hw/ide/trace.h
GEN hw/input/trace.h
GEN hw/intc/trace.h
GEN hw/isa/trace.h
GEN hw/mem/trace.h
GEN hw/misc/trace.h
GEN hw/misc/macio/trace.h
GEN hw/net/trace.h
GEN hw/nvram/trace.h
GEN hw/pci/trace.h
GEN hw/pci-host/trace.h
GEN hw/ppc/trace.h
GEN hw/rdma/trace.h
GEN hw/rdma/vmw/trace.h
GEN hw/s390x/trace.h
GEN hw/scsi/trace.h
GEN hw/sd/trace.h
GEN hw/sparc/trace.h
GEN hw/sparc64/trace.h
GEN hw/timer/trace.h
GEN hw/tpm/trace.h
GEN hw/usb/trace.h
GEN hw/vfio/trace.h
GEN hw/virtio/trace.h
GEN hw/xen/trace.h
GEN io/trace.h
GEN linux-user/trace.h
GEN migration/trace.h
GEN nbd/trace.h
GEN net/trace.h
GEN qapi/trace.h
GEN qom/trace.h
GEN scsi/trace.h
GEN target/arm/trace.h
GEN target/i386/trace.h
GEN target/mips/trace.h
GEN target/ppc/trace.h
GEN target/s390x/trace.h
GEN target/sparc/trace.h
GEN ui/trace.h
GEN util/trace.h
GEN trace-root.c
GEN accel/kvm/trace.c
GEN accel/tcg/trace.c
GEN audio/trace.c
GEN block/trace.c
GEN chardev/trace.c
GEN crypto/trace.c
GEN hw/9pfs/trace.c
GEN hw/acpi/trace.c
GEN hw/alpha/trace.c
GEN hw/arm/trace.c
GEN hw/audio/trace.c
GEN hw/block/trace.c
GEN hw/block/dataplane/trace.c
GEN hw/char/trace.c
GEN hw/display/trace.c
GEN hw/dma/trace.c
GEN hw/hppa/trace.c
GEN hw/i2c/trace.c
GEN hw/i386/trace.c
GEN hw/i386/xen/trace.c
GEN hw/ide/trace.c
GEN hw/input/trace.c
GEN hw/intc/trace.c
GEN hw/isa/trace.c
GEN hw/mem/trace.c
GEN hw/misc/trace.c
GEN hw/misc/macio/trace.c
GEN hw/net/trace.c
GEN hw/nvram/trace.c
GEN hw/pci/trace.c
GEN hw/pci-host/trace.c
GEN hw/ppc/trace.c
GEN hw/rdma/trace.c
GEN hw/rdma/vmw/trace.c
GEN hw/s390x/trace.c
GEN hw/scsi/trace.c
GEN hw/sd/trace.c
GEN hw/sparc/trace.c
GEN hw/sparc64/trace.c
GEN hw/timer/trace.c
GEN hw/tpm/trace.c
GEN hw/usb/trace.c
GEN hw/vfio/trace.c
GEN hw/virtio/trace.c
GEN hw/xen/trace.c
GEN io/trace.c
GEN linux-user/trace.c
GEN migration/trace.c
GEN nbd/trace.c
GEN net/trace.c
GEN qapi/trace.c
GEN qom/trace.c
GEN scsi/trace.c
GEN target/arm/trace.c
GEN target/i386/trace.c
GEN target/mips/trace.c
GEN target/ppc/trace.c
GEN target/s390x/trace.c
GEN target/sparc/trace.c
GEN ui/trace.c
GEN util/trace.c
GEN config-all-devices.mak
DEP /tmp/qemu-test/src/dtc/tests/dumptrees.c
DEP /tmp/qemu-test/src/dtc/tests/trees.S
DEP /tmp/qemu-test/src/dtc/tests/testutils.c
DEP /tmp/qemu-test/src/dtc/tests/value-labels.c
DEP /tmp/qemu-test/src/dtc/tests/asm_tree_dump.c
DEP /tmp/qemu-test/src/dtc/tests/truncated_property.c
DEP /tmp/qemu-test/src/dtc/tests/check_path.c
DEP /tmp/qemu-test/src/dtc/tests/overlay_bad_fixup.c
DEP /tmp/qemu-test/src/dtc/tests/overlay.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/property_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/integer-expressions.c
DEP /tmp/qemu-test/src/dtc/tests/utilfdt_test.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset_aliases.c
DEP /tmp/qemu-test/src/dtc/tests/add_subnode_with_nops.c
DEP /tmp/qemu-test/src/dtc/tests/dtb_reverse.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_unordered.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_ordered.c
DEP /tmp/qemu-test/src/dtc/tests/extra-terminating-null.c
DEP /tmp/qemu-test/src/dtc/tests/incbin.c
DEP /tmp/qemu-test/src/dtc/tests/boot-cpuid.c
DEP /tmp/qemu-test/src/dtc/tests/phandle_format.c
DEP /tmp/qemu-test/src/dtc/tests/path-references.c
DEP /tmp/qemu-test/src/dtc/tests/references.c
DEP /tmp/qemu-test/src/dtc/tests/string_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/propname_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop2.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop1.c
DEP /tmp/qemu-test/src/dtc/tests/del_node.c
DEP /tmp/qemu-test/src/dtc/tests/del_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop.c
DEP /tmp/qemu-test/src/dtc/tests/set_name.c
DEP /tmp/qemu-test/src/dtc/tests/rw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/open_pack.c
DEP /tmp/qemu-test/src/dtc/tests/nopulate.c
DEP /tmp/qemu-test/src/dtc/tests/mangle-layout.c
DEP /tmp/qemu-test/src/dtc/tests/move_and_save.c
DEP /tmp/qemu-test/src/dtc/tests/nop_node.c
DEP /tmp/qemu-test/src/dtc/tests/sw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/nop_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop_inplace.c
DEP /tmp/qemu-test/src/dtc/tests/stringlist.c
DEP /tmp/qemu-test/src/dtc/tests/addr_size_cells.c
DEP /tmp/qemu-test/src/dtc/tests/notfound.c
DEP /tmp/qemu-test/src/dtc/tests/sized_cells.c
DEP /tmp/qemu-test/src/dtc/tests/char_literal.c
DEP /tmp/qemu-test/src/dtc/tests/get_alias.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_check_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/parent_offset.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_prop_value.c
DEP /tmp/qemu-test/src/dtc/tests/supernode_atdepth_offset.c
DEP /tmp/qemu-test/src/dtc/tests/get_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/get_path.c
DEP /tmp/qemu-test/src/dtc/tests/getprop.c
DEP /tmp/qemu-test/src/dtc/tests/get_name.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_offset.c
DEP /tmp/qemu-test/src/dtc/tests/find_property.c
DEP /tmp/qemu-test/src/dtc/tests/root_node.c
DEP /tmp/qemu-test/src/dtc/tests/get_mem_rsv.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_overlay.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_addresses.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_empty_tree.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_strerror.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_rw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_sw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_wip.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_ro.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt.c
DEP /tmp/qemu-test/src/dtc/fdtoverlay.c
DEP /tmp/qemu-test/src/dtc/util.c
DEP /tmp/qemu-test/src/dtc/fdtput.c
DEP /tmp/qemu-test/src/dtc/fdtget.c
DEP /tmp/qemu-test/src/dtc/fdtdump.c
LEX convert-dtsv0-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/srcpos.c
BISON dtc-parser.tab.c
LEX dtc-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/treesource.c
DEP /tmp/qemu-test/src/dtc/livetree.c
DEP /tmp/qemu-test/src/dtc/fstree.c
DEP /tmp/qemu-test/src/dtc/flattree.c
DEP /tmp/qemu-test/src/dtc/dtc.c
DEP /tmp/qemu-test/src/dtc/data.c
DEP /tmp/qemu-test/src/dtc/checks.c
DEP convert-dtsv0-lexer.lex.c
DEP dtc-parser.tab.c
DEP dtc-lexer.lex.c
CHK version_gen.h
UPD version_gen.h
DEP /tmp/qemu-test/src/dtc/util.c
CC libfdt/fdt.o
CC libfdt/fdt_ro.o
CC libfdt/fdt_wip.o
CC libfdt/fdt_rw.o
CC libfdt/fdt_sw.o
CC libfdt/fdt_strerror.o
CC libfdt/fdt_empty_tree.o
CC libfdt/fdt_addresses.o
CC libfdt/fdt_overlay.o
AR libfdt/libfdt.a
x86_64-w64-mingw32-ar: creating libfdt/libfdt.a
a - libfdt/fdt.o
a - libfdt/fdt_ro.o
a - libfdt/fdt_wip.o
a - libfdt/fdt_sw.o
a - libfdt/fdt_rw.o
a - libfdt/fdt_strerror.o
a - libfdt/fdt_empty_tree.o
a - libfdt/fdt_addresses.o
a - libfdt/fdt_overlay.o
RC version.o
GEN qga/qapi-generated/qapi-gen
CC qapi/qapi-types.o
CC qapi/qapi-types-block-core.o
CC qapi/qapi-types-block.o
CC qapi/qapi-builtin-types.o
CC qapi/qapi-types-common.o
CC qapi/qapi-types-char.o
CC qapi/qapi-types-crypto.o
CC qapi/qapi-types-introspect.o
CC qapi/qapi-types-job.o
CC qapi/qapi-types-migration.o
CC qapi/qapi-types-misc.o
CC qapi/qapi-types-net.o
CC qapi/qapi-types-rocker.o
CC qapi/qapi-types-run-state.o
CC qapi/qapi-types-sockets.o
CC qapi/qapi-types-tpm.o
CC qapi/qapi-types-transaction.o
CC qapi/qapi-types-ui.o
CC qapi/qapi-types-trace.o
CC qapi/qapi-builtin-visit.o
CC qapi/qapi-visit.o
CC qapi/qapi-visit-block-core.o
CC qapi/qapi-visit-block.o
CC qapi/qapi-visit-char.o
CC qapi/qapi-visit-common.o
CC qapi/qapi-visit-crypto.o
CC qapi/qapi-visit-introspect.o
CC qapi/qapi-visit-job.o
CC qapi/qapi-visit-migration.o
CC qapi/qapi-visit-misc.o
CC qapi/qapi-visit-net.o
CC qapi/qapi-visit-rocker.o
CC qapi/qapi-visit-run-state.o
CC qapi/qapi-visit-sockets.o
CC qapi/qapi-visit-tpm.o
CC qapi/qapi-visit-trace.o
CC qapi/qapi-visit-transaction.o
CC qapi/qapi-events-block-core.o
CC qapi/qapi-events.o
CC qapi/qapi-visit-ui.o
CC qapi/qapi-events-block.o
CC qapi/qapi-events-char.o
CC qapi/qapi-events-common.o
CC qapi/qapi-events-crypto.o
CC qapi/qapi-events-introspect.o
CC qapi/qapi-events-job.o
CC qapi/qapi-events-migration.o
CC qapi/qapi-events-misc.o
CC qapi/qapi-events-net.o
CC qapi/qapi-events-rocker.o
CC qapi/qapi-events-sockets.o
CC qapi/qapi-events-tpm.o
CC qapi/qapi-events-transaction.o
CC qapi/qapi-events-run-state.o
CC qapi/qapi-events-ui.o
CC qapi/qapi-events-trace.o
CC qapi/qapi-introspect.o
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qmp-registry.o
CC qapi/qmp-dispatch.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/string-input-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qnum.o
CC qobject/qstring.o
CC qobject/qdict.o
CC qobject/qlist.o
CC qobject/qbool.o
CC qobject/qlit.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
CC qobject/block-qdict.o
CC trace/simple.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/lockcnt.o
CC util/bufferiszero.o
CC util/aiocb.o
CC util/async.o
CC util/aio-wait.o
CC util/thread-pool.o
CC util/qemu-timer.o
CC util/main-loop.o
CC util/iohandler.o
CC util/aio-win32.o
CC util/event_notifier-win32.o
CC util/oslib-win32.o
CC util/qemu-thread-win32.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/host-utils.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/cacheinfo.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/notify.o
CC util/uri.o
CC util/qemu-progress.o
CC util/qemu-option.o
CC util/keyval.o
CC util/hexdump.o
CC util/crc32c.o
CC util/uuid.o
CC util/throttle.o
CC util/readline.o
CC util/getauxval.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-win32.o
CC util/timed-average.o
CC util/buffer.o
CC util/base64.o
CC util/log.o
CC util/pagesize.o
CC util/qdist.o
CC util/qht.o
CC util/range.o
CC util/stats64.o
CC util/systemd.o
CC util/iova-tree.o
CC trace-root.o
CC accel/kvm/trace.o
CC accel/tcg/trace.o
CC block/trace.o
CC audio/trace.o
CC chardev/trace.o
CC crypto/trace.o
CC hw/9pfs/trace.o
CC hw/acpi/trace.o
CC hw/alpha/trace.o
CC hw/arm/trace.o
CC hw/audio/trace.o
CC hw/block/trace.o
CC hw/block/dataplane/trace.o
CC hw/char/trace.o
CC hw/display/trace.o
CC hw/dma/trace.o
CC hw/hppa/trace.o
CC hw/i2c/trace.o
CC hw/i386/trace.o
CC hw/i386/xen/trace.o
CC hw/ide/trace.o
CC hw/input/trace.o
CC hw/intc/trace.o
CC hw/isa/trace.o
CC hw/mem/trace.o
CC hw/misc/trace.o
CC hw/misc/macio/trace.o
CC hw/net/trace.o
CC hw/nvram/trace.o
CC hw/pci/trace.o
CC hw/pci-host/trace.o
CC hw/rdma/trace.o
CC hw/ppc/trace.o
CC hw/rdma/vmw/trace.o
CC hw/s390x/trace.o
CC hw/scsi/trace.o
CC hw/sd/trace.o
CC hw/sparc/trace.o
CC hw/sparc64/trace.o
CC hw/timer/trace.o
CC hw/tpm/trace.o
CC hw/usb/trace.o
CC hw/vfio/trace.o
CC hw/virtio/trace.o
CC io/trace.o
CC hw/xen/trace.o
CC linux-user/trace.o
CC nbd/trace.o
CC net/trace.o
CC qapi/trace.o
CC migration/trace.o
CC qom/trace.o
CC scsi/trace.o
CC target/arm/trace.o
CC target/i386/trace.o
CC target/mips/trace.o
CC target/ppc/trace.o
CC target/s390x/trace.o
CC ui/trace.o
CC target/sparc/trace.o
CC util/trace.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/error-printf.o
CC stubs/dump.o
CC stubs/fdset.o
CC stubs/gdbstub.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/change-state-handler.o
CC stubs/monitor.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/tpm.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/fd-register.o
CC stubs/qmp_memory_device.o
CC stubs/target-monitor-defs.o
CC stubs/target-get-monitor-def.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/vmgenid.o
CC stubs/xen-common.o
CC stubs/xen-hvm.o
CC stubs/pci-host-piix.o
CC stubs/ram-block.o
GEN qemu-img-cmds.h
CC blockjob.o
CC block.o
CC job.o
CC replication.o
CC qemu-io-cmds.o
CC block/raw-format.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vvfat.o
CC block/vpc.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-bitmap.o
CC block/qcow2-cache.o
CC block/qed-l2-cache.o
CC block/qed.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blkreplay.o
CC block/blklogwrites.o
CC block/block-backend.o
CC block/snapshot.o
CC block/file-win32.o
CC block/win32-aio.o
CC block/null.o
CC block/qapi.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/create.o
CC block/throttle-groups.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/throttle.o
CC block/copy-on-read.o
CC block/crypto.o
CC block/fleecing-hook.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC scsi/utils.o
CC scsi/pr-manager-stub.o
CC block/curl.o
CC block/ssh.o
CC block/dmg-bz2.o
CC crypto/init.o
CC crypto/hash-nettle.o
CC crypto/hash.o
CC crypto/hmac.o
CC crypto/hmac-nettle.o
CC crypto/aes.o
CC crypto/desrfb.o
CC crypto/cipher.o
CC crypto/tlscreds.o
CC crypto/tlscredsanon.o
CC crypto/tlscredspsk.o
CC crypto/tlscredsx509.o
CC crypto/tlssession.o
CC crypto/secret.o
CC crypto/random-gnutls.o
CC crypto/pbkdf.o
CC crypto/pbkdf-nettle.o
CC crypto/ivgen.o
CC crypto/ivgen-essiv.o
CC crypto/ivgen-plain.o
CC crypto/ivgen-plain64.o
CC crypto/afsplit.o
CC crypto/xts.o
CC crypto/block.o
CC crypto/block-qcow.o
CC crypto/block-luks.o
CC io/channel.o
CC io/channel-buffer.o
CC io/channel-command.o
CC io/channel-file.o
CC io/channel-socket.o
CC io/channel-tls.o
CC io/channel-watch.o
CC io/channel-websock.o
CC io/channel-util.o
CC io/dns-resolver.o
CC io/net-listener.o
CC io/task.o
CC qom/object.o
CC qom/container.o
CC qom/qom-qobject.o
CC qom/object_interfaces.o
CC qemu-io.o
CC blockdev.o
CC blockdev-nbd.o
CC bootdevice.o
CC iothread.o
CC job-qmp.o
CC qdev-monitor.o
CC device-hotplug.o
/tmp/qemu-test/src/block/fleecing-hook.c: In function 'fleecing_hook_cow':
/tmp/qemu-test/src/block/fleecing-hook.c:61:12: error: implicit declaration of function 'hbitmap_next_dirty_area'; did you mean 'hbitmap_next_zero'? [-Werror=implicit-function-declaration]
while (hbitmap_next_dirty_area(s->cow_bitmap, &off, end, &len)) {
^~~~~~~~~~~~~~~~~~~~~~~
hbitmap_next_zero
/tmp/qemu-test/src/block/fleecing-hook.c:61:12: error: nested extern declaration of 'hbitmap_next_dirty_area' [-Werror=nested-externs]
cc1: all warnings being treated as errors
make: *** [/tmp/qemu-test/src/rules.mak:69: block/fleecing-hook.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
File "./tests/docker/docker.py", line 565, in <module>
sys.exit(main())
File "./tests/docker/docker.py", line 562, in main
return args.cmdobj.run(args, argv)
File "./tests/docker/docker.py", line 308, in run
return Docker().run(argv, args.keep, quiet=args.quiet)
File "./tests/docker/docker.py", line 276, in run
quiet=quiet)
File "./tests/docker/docker.py", line 183, in _do_check
return subprocess.check_call(self._command + cmd, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 186, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=b3b1183ca16511e8a5bd52540069c830', '-u', '1000', '--security-opt', 'seccomp=unconfined', '--rm', '--net=none', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=8', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit status 2
make[1]: *** [tests/docker/Makefile.include:213: docker-run] Error 1
make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-axs4wus0/src'
make: *** [tests/docker/Makefile.include:247: docker-run-test-mingw@fedora] Error 2
real 1m13.823s
user 0m4.993s
sys 0m3.502s
=== OUTPUT END ===
Test command exited with code: 2
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
2018-08-16 15:05 ` no-reply
@ 2018-08-16 15:09 ` no-reply
2018-08-17 18:21 ` Vladimir Sementsov-Ogievskiy
` (3 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: no-reply @ 2018-08-16 15:09 UTC (permalink / raw)
To: vsementsov; +Cc: famz, qemu-devel, qemu-block, kwolf
Hi,
This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
Type: series
Message-id: 20180814170126.56461-1-vsementsov@virtuozzo.com
Subject: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
=== TEST SCRIPT BEGIN ===
#!/bin/bash
time make docker-test-quick@centos7 SHOW_ENV=1 J=8
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
fede2b479e new, node-graph-based fleecing and backup
=== OUTPUT BEGIN ===
BUILD centos7
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-5cpgkzz3/src'
GEN /var/tmp/patchew-tester-tmp-5cpgkzz3/src/docker-src.2018-08-16-11.07.56.6514/qemu.tar
Cloning into '/var/tmp/patchew-tester-tmp-5cpgkzz3/src/docker-src.2018-08-16-11.07.56.6514/qemu.tar.vroot'...
done.
Checking out files: 100% (6322/6322), done.
Your branch is up-to-date with 'origin/test'.
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-5cpgkzz3/src/docker-src.2018-08-16-11.07.56.6514/qemu.tar.vroot/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered for path 'ui/keycodemapdb'
Cloning into '/var/tmp/patchew-tester-tmp-5cpgkzz3/src/docker-src.2018-08-16-11.07.56.6514/qemu.tar.vroot/ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out '6b3d716e2b6472eb7189d3220552280ef3d832ce'
COPY RUNNER
RUN test-quick in qemu:centos7
Packages installed:
SDL-devel-1.2.15-14.el7.x86_64
bison-3.0.4-1.el7.x86_64
bzip2-devel-1.0.6-13.el7.x86_64
ccache-3.3.4-1.el7.x86_64
csnappy-devel-0-6.20150729gitd7bc683.el7.x86_64
flex-2.5.37-3.el7.x86_64
gcc-4.8.5-16.el7_4.2.x86_64
gettext-0.19.8.1-2.el7.x86_64
git-1.8.3.1-12.el7_4.x86_64
glib2-devel-2.50.3-3.el7.x86_64
libepoxy-devel-1.3.1-1.el7.x86_64
libfdt-devel-1.4.6-1.el7.x86_64
lzo-devel-2.06-8.el7.x86_64
make-3.82-23.el7.x86_64
mesa-libEGL-devel-17.0.1-6.20170307.el7.x86_64
mesa-libgbm-devel-17.0.1-6.20170307.el7.x86_64
package g++ is not installed
package librdmacm-devel is not installed
pixman-devel-0.34.0-1.el7.x86_64
spice-glib-devel-0.33-6.el7_4.1.x86_64
spice-server-devel-0.12.8-2.el7.1.x86_64
tar-1.26-32.el7.x86_64
vte-devel-0.28.2-10.el7.x86_64
xen-devel-4.6.6-10.el7.x86_64
zlib-devel-1.2.7-17.el7.x86_64
Environment variables:
PACKAGES=bison bzip2-devel ccache csnappy-devel flex g++ gcc gettext git glib2-devel libepoxy-devel libfdt-devel librdmacm-devel lzo-devel make mesa-libEGL-devel mesa-libgbm-devel pixman-devel SDL-devel spice-glib-devel spice-server-devel tar vte-devel xen-devel zlib-devel
HOSTNAME=7bda933cbb0b
MAKEFLAGS= -j8
J=8
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
TARGET_LIST=
SHLVL=1
HOME=/home/patchew
TEST_DIR=/tmp/qemu-test
FEATURES= dtc
DEBUG=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/tmp/qemu-test/install
No C++ compiler available; disabling C++ specific optional code
Install prefix /tmp/qemu-test/install
BIOS directory /tmp/qemu-test/install/share/qemu
firmware path /tmp/qemu-test/install/share/qemu-firmware
binary directory /tmp/qemu-test/install/bin
library directory /tmp/qemu-test/install/lib
module directory /tmp/qemu-test/install/lib/qemu
libexec directory /tmp/qemu-test/install/libexec
include directory /tmp/qemu-test/install/include
config directory /tmp/qemu-test/install/etc
local state directory /tmp/qemu-test/install/var
Manual directory /tmp/qemu-test/install/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path /tmp/qemu-test/src
GIT binary git
GIT submodules
C compiler cc
Host C compiler cc
C++ compiler
Objective-C compiler cc
ARFLAGS rv
CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
QEMU_CFLAGS -I/usr/include/pixman-1 -Werror -pthread -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wendif-labels -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -Wno-missing-braces -I/usr/include/libpng15 -I/usr/include/spice-server -I/usr/include/cacard -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/nss3 -I/usr/include/nspr4 -I/usr/include/spice-1
LDFLAGS -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
QEMU_LDFLAGS
make make
install install
python python -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
gprof enabled no
sparse enabled no
strip binaries yes
profiler no
static build no
SDL support yes (1.2.15)
GTK support yes (2.24.31)
GTK GL support no
VTE support yes (0.28.2)
TLS priority NORMAL
GNUTLS support no
GNUTLS rnd no
libgcrypt no
libgcrypt kdf no
nettle no
nettle kdf no
libtasn1 no
curses support yes
virgl support no
curl support no
mingw32 support no
Audio drivers oss
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
Multipath support no
VNC support yes
VNC SASL support no
VNC JPEG support no
VNC PNG support yes
xen support yes
xen ctrl version 40600
pv dom build no
brlapi support no
bluez support no
Documentation no
PIE yes
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support yes
Install blobs yes
KVM support yes
HAX support no
HVF support no
WHPX support no
TCG support yes
TCG debug enabled no
TCG interpreter no
malloc trim support yes
RDMA support yes
fdt support system
membarrier no
preadv support yes
fdatasync yes
madvise yes
posix_madvise yes
posix_memalign yes
libcap-ng support no
vhost-net support yes
vhost-crypto support yes
vhost-scsi support yes
vhost-vsock support yes
vhost-user support yes
Trace backends log
spice support yes (0.12.12/0.12.8)
rbd support no
xfsctl support no
smartcard support yes
libusb no
usb net redir no
OpenGL support yes
OpenGL dmabufs yes
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info no
QGA MSI support no
seccomp support no
coroutine backend ucontext
coroutine pool yes
debug stack usage no
mutex debugging no
crypto afalg no
GlusterFS support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support no
TPM passthrough yes
TPM emulator yes
QOM debugging yes
Live block migration yes
lzo support yes
snappy support no
bzip2 support yes
NUMA host support no
libxml2 no
tcmalloc support no
jemalloc support no
avx2 optimization yes
replication support yes
VxHS block device no
capstone no
docker no
WARNING: Use of GTK 2.0 is deprecated and will be removed in
WARNING: future releases. Please switch to using GTK 3.0
WARNING: Use of SDL 1.2 is deprecated and will be removed in
WARNING: future releases. Please switch to using SDL 2.0
NOTE: cross-compilers enabled: 'cc'
GEN x86_64-softmmu/config-devices.mak.tmp
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qemu-options.def
GEN qapi-gen
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
GEN x86_64-softmmu/config-devices.mak
GEN aarch64-softmmu/config-devices.mak
GEN trace/generated-helpers.c
GEN module_block.h
GEN ui/input-keymap-atset1-to-qcode.c
GEN ui/input-keymap-linux-to-qcode.c
GEN ui/input-keymap-qcode-to-atset1.c
GEN ui/input-keymap-qcode-to-atset2.c
GEN ui/input-keymap-qcode-to-atset3.c
GEN ui/input-keymap-qcode-to-linux.c
GEN ui/input-keymap-qcode-to-qnum.c
GEN ui/input-keymap-qcode-to-sun.c
GEN ui/input-keymap-qnum-to-qcode.c
GEN ui/input-keymap-usb-to-qcode.c
GEN ui/input-keymap-win32-to-qcode.c
GEN ui/input-keymap-x11-to-qcode.c
GEN ui/input-keymap-xorgevdev-to-qcode.c
GEN ui/input-keymap-xorgkbd-to-qcode.c
GEN ui/input-keymap-xorgxquartz-to-qcode.c
GEN ui/input-keymap-xorgxwin-to-qcode.c
GEN ui/input-keymap-osx-to-qcode.c
GEN tests/test-qapi-gen
GEN trace-root.h
GEN accel/kvm/trace.h
GEN accel/tcg/trace.h
GEN audio/trace.h
GEN block/trace.h
GEN chardev/trace.h
GEN crypto/trace.h
GEN hw/9pfs/trace.h
GEN hw/acpi/trace.h
GEN hw/alpha/trace.h
GEN hw/arm/trace.h
GEN hw/audio/trace.h
GEN hw/block/trace.h
GEN hw/block/dataplane/trace.h
GEN hw/char/trace.h
GEN hw/display/trace.h
GEN hw/dma/trace.h
GEN hw/hppa/trace.h
GEN hw/i2c/trace.h
GEN hw/i386/trace.h
GEN hw/i386/xen/trace.h
GEN hw/ide/trace.h
GEN hw/input/trace.h
GEN hw/intc/trace.h
GEN hw/isa/trace.h
GEN hw/mem/trace.h
GEN hw/misc/trace.h
GEN hw/misc/macio/trace.h
GEN hw/net/trace.h
GEN hw/nvram/trace.h
GEN hw/pci/trace.h
GEN hw/pci-host/trace.h
GEN hw/ppc/trace.h
GEN hw/rdma/trace.h
GEN hw/rdma/vmw/trace.h
GEN hw/s390x/trace.h
GEN hw/scsi/trace.h
GEN hw/sd/trace.h
GEN hw/sparc/trace.h
GEN hw/sparc64/trace.h
GEN hw/timer/trace.h
GEN hw/tpm/trace.h
GEN hw/usb/trace.h
GEN hw/vfio/trace.h
GEN hw/virtio/trace.h
GEN hw/xen/trace.h
GEN io/trace.h
GEN linux-user/trace.h
GEN migration/trace.h
GEN nbd/trace.h
GEN net/trace.h
GEN qapi/trace.h
GEN qom/trace.h
GEN scsi/trace.h
GEN target/arm/trace.h
GEN target/i386/trace.h
GEN target/mips/trace.h
GEN target/ppc/trace.h
GEN target/s390x/trace.h
GEN target/sparc/trace.h
GEN ui/trace.h
GEN util/trace.h
GEN trace-root.c
GEN accel/kvm/trace.c
GEN accel/tcg/trace.c
GEN audio/trace.c
GEN block/trace.c
GEN chardev/trace.c
GEN crypto/trace.c
GEN hw/9pfs/trace.c
GEN hw/acpi/trace.c
GEN hw/alpha/trace.c
GEN hw/arm/trace.c
GEN hw/audio/trace.c
GEN hw/block/trace.c
GEN hw/block/dataplane/trace.c
GEN hw/char/trace.c
GEN hw/display/trace.c
GEN hw/dma/trace.c
GEN hw/hppa/trace.c
GEN hw/i2c/trace.c
GEN hw/i386/trace.c
GEN hw/i386/xen/trace.c
GEN hw/ide/trace.c
GEN hw/input/trace.c
GEN hw/intc/trace.c
GEN hw/isa/trace.c
GEN hw/mem/trace.c
GEN hw/misc/trace.c
GEN hw/misc/macio/trace.c
GEN hw/net/trace.c
GEN hw/nvram/trace.c
GEN hw/pci/trace.c
GEN hw/pci-host/trace.c
GEN hw/ppc/trace.c
GEN hw/rdma/trace.c
GEN hw/rdma/vmw/trace.c
GEN hw/s390x/trace.c
GEN hw/scsi/trace.c
GEN hw/sd/trace.c
GEN hw/sparc/trace.c
GEN hw/sparc64/trace.c
GEN hw/timer/trace.c
GEN hw/tpm/trace.c
GEN hw/usb/trace.c
GEN hw/vfio/trace.c
GEN hw/virtio/trace.c
GEN hw/xen/trace.c
GEN io/trace.c
GEN linux-user/trace.c
GEN migration/trace.c
GEN nbd/trace.c
GEN net/trace.c
GEN qapi/trace.c
GEN qom/trace.c
GEN scsi/trace.c
GEN target/arm/trace.c
GEN target/i386/trace.c
GEN target/mips/trace.c
GEN target/ppc/trace.c
GEN target/s390x/trace.c
GEN target/sparc/trace.c
GEN ui/trace.c
GEN util/trace.c
GEN config-all-devices.mak
CC tests/qemu-iotests/socket_scm_helper.o
GEN qga/qapi-generated/qapi-gen
CC qapi/qapi-builtin-types.o
CC qapi/qapi-types.o
CC qapi/qapi-types-block.o
CC qapi/qapi-types-char.o
CC qapi/qapi-types-block-core.o
CC qapi/qapi-types-common.o
CC qapi/qapi-types-crypto.o
CC qapi/qapi-types-introspect.o
CC qapi/qapi-types-job.o
CC qapi/qapi-types-migration.o
CC qapi/qapi-types-misc.o
CC qapi/qapi-types-net.o
CC qapi/qapi-types-rocker.o
CC qapi/qapi-types-run-state.o
CC qapi/qapi-types-sockets.o
CC qapi/qapi-types-tpm.o
CC qapi/qapi-types-trace.o
CC qapi/qapi-types-transaction.o
CC qapi/qapi-types-ui.o
CC qapi/qapi-builtin-visit.o
CC qapi/qapi-visit.o
CC qapi/qapi-visit-block-core.o
CC qapi/qapi-visit-block.o
CC qapi/qapi-visit-char.o
CC qapi/qapi-visit-common.o
CC qapi/qapi-visit-crypto.o
CC qapi/qapi-visit-introspect.o
CC qapi/qapi-visit-job.o
CC qapi/qapi-visit-migration.o
CC qapi/qapi-visit-misc.o
CC qapi/qapi-visit-net.o
CC qapi/qapi-visit-rocker.o
CC qapi/qapi-visit-run-state.o
CC qapi/qapi-visit-sockets.o
CC qapi/qapi-visit-tpm.o
CC qapi/qapi-visit-trace.o
CC qapi/qapi-visit-transaction.o
CC qapi/qapi-visit-ui.o
CC qapi/qapi-events.o
CC qapi/qapi-events-block-core.o
CC qapi/qapi-events-block.o
CC qapi/qapi-events-char.o
CC qapi/qapi-events-common.o
CC qapi/qapi-events-crypto.o
CC qapi/qapi-events-introspect.o
CC qapi/qapi-events-job.o
CC qapi/qapi-events-migration.o
CC qapi/qapi-events-misc.o
CC qapi/qapi-events-net.o
CC qapi/qapi-events-run-state.o
CC qapi/qapi-events-rocker.o
CC qapi/qapi-events-sockets.o
CC qapi/qapi-events-trace.o
CC qapi/qapi-events-tpm.o
CC qapi/qapi-events-transaction.o
CC qapi/qapi-events-ui.o
CC qapi/qapi-introspect.o
CC qapi/qapi-visit-core.o
CC qapi/qobject-input-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qmp-registry.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qmp-dispatch.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnum.o
CC qobject/qnull.o
CC qobject/qstring.o
CC qobject/qdict.o
CC qobject/qlist.o
CC qobject/qbool.o
CC qobject/qlit.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
CC qobject/block-qdict.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/bufferiszero.o
CC util/aiocb.o
CC util/async.o
CC util/lockcnt.o
CC util/aio-wait.o
CC util/thread-pool.o
CC util/qemu-timer.o
CC util/main-loop.o
CC util/iohandler.o
CC util/aio-posix.o
CC util/compatfd.o
CC util/event_notifier-posix.o
CC util/qemu-openpty.o
CC util/oslib-posix.o
CC util/mmap-alloc.o
CC util/qemu-thread-posix.o
CC util/memfd.o
CC util/path.o
CC util/module.o
CC util/envlist.o
CC util/host-utils.o
CC util/bitmap.o
CC util/hbitmap.o
CC util/bitops.o
CC util/fifo8.o
CC util/acl.o
CC util/cacheinfo.o
CC util/error.o
CC util/qemu-error.o
CC util/iov.o
CC util/id.o
CC util/qemu-config.o
CC util/uri.o
CC util/qemu-sockets.o
CC util/notify.o
CC util/qemu-progress.o
CC util/keyval.o
CC util/qemu-option.o
CC util/hexdump.o
CC util/crc32c.o
CC util/uuid.o
CC util/throttle.o
CC util/getauxval.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-ucontext.o
CC util/buffer.o
CC util/timed-average.o
CC util/base64.o
CC util/log.o
CC util/pagesize.o
CC util/qdist.o
CC util/range.o
CC util/qht.o
CC util/stats64.o
CC util/systemd.o
CC util/iova-tree.o
CC util/vfio-helpers.o
CC trace-root.o
CC accel/kvm/trace.o
CC accel/tcg/trace.o
CC audio/trace.o
CC block/trace.o
CC chardev/trace.o
CC crypto/trace.o
CC hw/9pfs/trace.o
CC hw/acpi/trace.o
CC hw/alpha/trace.o
CC hw/arm/trace.o
CC hw/audio/trace.o
CC hw/block/trace.o
CC hw/block/dataplane/trace.o
CC hw/char/trace.o
CC hw/display/trace.o
CC hw/hppa/trace.o
CC hw/dma/trace.o
CC hw/i386/trace.o
CC hw/i386/xen/trace.o
CC hw/i2c/trace.o
CC hw/ide/trace.o
CC hw/input/trace.o
CC hw/intc/trace.o
CC hw/isa/trace.o
CC hw/misc/trace.o
CC hw/mem/trace.o
CC hw/misc/macio/trace.o
CC hw/net/trace.o
CC hw/nvram/trace.o
CC hw/pci/trace.o
CC hw/pci-host/trace.o
CC hw/ppc/trace.o
CC hw/rdma/vmw/trace.o
CC hw/rdma/trace.o
CC hw/sd/trace.o
CC hw/scsi/trace.o
CC hw/s390x/trace.o
CC hw/sparc/trace.o
CC hw/sparc64/trace.o
CC hw/timer/trace.o
CC hw/tpm/trace.o
CC hw/usb/trace.o
CC hw/vfio/trace.o
CC hw/virtio/trace.o
CC hw/xen/trace.o
CC io/trace.o
CC linux-user/trace.o
CC migration/trace.o
CC nbd/trace.o
CC net/trace.o
CC qapi/trace.o
CC qom/trace.o
CC scsi/trace.o
CC target/arm/trace.o
CC target/i386/trace.o
CC target/mips/trace.o
CC target/ppc/trace.o
CC target/sparc/trace.o
CC ui/trace.o
CC target/s390x/trace.o
CC util/trace.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/blk-commit-all.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/iothread.o
CC stubs/gdbstub.o
CC stubs/fdset.o
CC stubs/iothread-lock.o
CC stubs/get-vm-name.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/change-state-handler.o
CC stubs/monitor.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/set-fd-handler.o
CC stubs/runstate-check.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/tpm.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/qmp_memory_device.o
CC stubs/vmstate.o
CC stubs/target-monitor-defs.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/vmgenid.o
CC stubs/target-get-monitor-def.o
CC stubs/xen-common.o
CC stubs/xen-hvm.o
CC stubs/pci-host-piix.o
CC stubs/ram-block.o
CC contrib/ivshmem-client/ivshmem-client.o
CC contrib/ivshmem-server/ivshmem-server.o
CC contrib/ivshmem-client/main.o
CC contrib/ivshmem-server/main.o
CC qemu-nbd.o
CC block.o
CC blockjob.o
CC job.o
CC replication.o
CC qemu-io-cmds.o
CC block/raw-format.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qcow2-bitmap.o
CC block/qed.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blklogwrites.o
CC block/blkreplay.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/file-posix.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/throttle-groups.o
CC block/create.o
CC block/nvme.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/throttle.o
CC block/copy-on-read.o
CC block/crypto.o
CC nbd/server.o
CC block/fleecing-hook.o
CC nbd/client.o
CC scsi/utils.o
CC nbd/common.o
CC scsi/pr-manager.o
CC scsi/pr-manager-helper.o
CC block/dmg-bz2.o
CC crypto/hash.o
CC crypto/init.o
CC crypto/hash-glib.o
CC crypto/hmac.o
CC crypto/hmac-glib.o
CC crypto/aes.o
CC crypto/desrfb.o
CC crypto/cipher.o
CC crypto/tlscreds.o
CC crypto/tlscredsanon.o
CC crypto/tlscredspsk.o
CC crypto/tlscredsx509.o
CC crypto/tlssession.o
CC crypto/secret.o
CC crypto/random-platform.o
CC crypto/pbkdf.o
CC crypto/ivgen.o
CC crypto/ivgen-essiv.o
CC crypto/ivgen-plain.o
CC crypto/ivgen-plain64.o
CC crypto/afsplit.o
CC crypto/xts.o
CC crypto/block.o
CC crypto/block-qcow.o
CC crypto/block-luks.o
CC io/channel.o
CC io/channel-buffer.o
CC io/channel-command.o
CC io/channel-file.o
CC io/channel-socket.o
CC io/channel-tls.o
CC io/channel-watch.o
/tmp/qemu-test/src/block/fleecing-hook.c: In function 'fleecing_hook_cow':
/tmp/qemu-test/src/block/fleecing-hook.c:61:5: error: implicit declaration of function 'hbitmap_next_dirty_area' [-Werror=implicit-function-declaration]
while (hbitmap_next_dirty_area(s->cow_bitmap, &off, end, &len)) {
^
/tmp/qemu-test/src/block/fleecing-hook.c:61:5: error: nested extern declaration of 'hbitmap_next_dirty_area' [-Werror=nested-externs]
cc1: all warnings being treated as errors
make: *** [block/fleecing-hook.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
File "./tests/docker/docker.py", line 565, in <module>
sys.exit(main())
File "./tests/docker/docker.py", line 562, in main
return args.cmdobj.run(args, argv)
File "./tests/docker/docker.py", line 308, in run
return Docker().run(argv, args.keep, quiet=args.quiet)
File "./tests/docker/docker.py", line 276, in run
quiet=quiet)
File "./tests/docker/docker.py", line 183, in _do_check
return subprocess.check_call(self._command + cmd, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 186, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=2d6893f8a16611e890fb52540069c830', '-u', '1000', '--security-opt', 'seccomp=unconfined', '--rm', '--net=none', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=8', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-5cpgkzz3/src/docker-src.2018-08-16-11.07.56.6514:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2
make[1]: *** [tests/docker/Makefile.include:213: docker-run] Error 1
make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-5cpgkzz3/src'
make: *** [tests/docker/Makefile.include:247: docker-run-test-quick@centos7] Error 2
real 1m8.384s
user 0m4.634s
sys 0m3.408s
=== OUTPUT END ===
Test command exited with code: 2
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-16 15:05 ` no-reply
@ 2018-08-16 17:28 ` Vladimir Sementsov-Ogievskiy
2018-08-16 17:58 ` Eric Blake
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-16 17:28 UTC (permalink / raw)
To: qemu-devel
Cc: famz, qemu-block, kwolf, armbru, mreitz, stefanha, den, pbonzini, jsnow
Hmm, how should I properly set based-on, if there are two series under this one?
Based on:
[PATCH v3 0/8] dirty-bitmap: rewrite bdrv_dirty_iter_next_area
and
[PATCH 0/2] block: make .bdrv_close optional
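The build failure above ("implicit declaration of function 'hbitmap_next_dirty_area'") happens because that helper is introduced by the dirty-bitmap series this RFC is based on, so a tree without that series applied has no declaration for it. For readers unfamiliar with the intended semantics, here is a hypothetical, self-contained sketch of what such a dirty-area iterator does. It is not QEMU's HBitmap code: a plain uint64_t stands in for the bitmap, one bit per granule, purely for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for hbitmap_next_dirty_area(): find the next
 * contiguous run of dirty bits in [*off, end).  On success, update
 * *off and *len to describe the run and return true; return false
 * when no dirty area remains before 'end'.
 */
bool next_dirty_area(uint64_t bits, uint64_t *off, uint64_t end,
                     uint64_t *len)
{
    uint64_t i = *off;

    /* Skip clean granules up to the first dirty one. */
    while (i < end && !(bits & (1ull << i))) {
        i++;
    }
    if (i >= end) {
        return false;           /* nothing dirty left in range */
    }
    *off = i;

    /* Extend over the contiguous dirty granules. */
    while (i < end && (bits & (1ull << i))) {
        i++;
    }
    *len = i - *off;
    return true;
}
```

A fleecing-hook-style caller would loop over this, copying each returned [*off, *off + *len) area to the cache node and then advancing *off past it, until the function returns false.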
16.08.2018 18:05, no-reply@patchew.org wrote:
> Hi,
>
> This series failed docker-mingw@fedora build test. Please find the testing commands and
> their output below. If you have Docker installed, you can probably reproduce it
> locally.
>
> Type: series
> Message-id: 20180814170126.56461-1-vsementsov@virtuozzo.com
> Subject: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
>
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> time make docker-test-mingw@fedora SHOW_ENV=1 J=8
> === TEST SCRIPT END ===
>
> Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
> Switched to a new branch 'test'
> fede2b479e new, node-graph-based fleecing and backup
>
> === OUTPUT BEGIN ===
> BUILD fedora
> make[1]: Entering directory '/var/tmp/patchew-tester-tmp-axs4wus0/src'
> GEN /var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar
> Cloning into '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar.vroot'...
> done.
> Checking out files: 100% (6322/6322), done.
> Your branch is up-to-date with 'origin/test'.
> Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
> Cloning into '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar.vroot/dtc'...
> Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
> Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered for path 'ui/keycodemapdb'
> Cloning into '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398/qemu.tar.vroot/ui/keycodemapdb'...
> Submodule path 'ui/keycodemapdb': checked out '6b3d716e2b6472eb7189d3220552280ef3d832ce'
> COPY RUNNER
> RUN test-mingw in qemu:fedora
> Packages installed:
> SDL2-devel-2.0.8-5.fc28.x86_64
> bc-1.07.1-5.fc28.x86_64
> bison-3.0.4-9.fc28.x86_64
> bluez-libs-devel-5.49-3.fc28.x86_64
> brlapi-devel-0.6.7-12.fc28.x86_64
> bzip2-1.0.6-26.fc28.x86_64
> bzip2-devel-1.0.6-26.fc28.x86_64
> ccache-3.4.2-2.fc28.x86_64
> clang-6.0.0-5.fc28.x86_64
> device-mapper-multipath-devel-0.7.4-2.git07e7bd5.fc28.x86_64
> findutils-4.6.0-19.fc28.x86_64
> flex-2.6.1-7.fc28.x86_64
> gcc-8.1.1-1.fc28.x86_64
> gcc-c++-8.1.1-1.fc28.x86_64
> gettext-0.19.8.1-14.fc28.x86_64
> git-2.17.1-2.fc28.x86_64
> glib2-devel-2.56.1-3.fc28.x86_64
> glusterfs-api-devel-4.0.2-1.fc28.x86_64
> gnutls-devel-3.6.2-1.fc28.x86_64
> gtk3-devel-3.22.30-1.fc28.x86_64
> hostname-3.20-3.fc28.x86_64
> libaio-devel-0.3.110-11.fc28.x86_64
> libasan-8.1.1-1.fc28.x86_64
> libattr-devel-2.4.47-23.fc28.x86_64
> libcap-devel-2.25-9.fc28.x86_64
> libcap-ng-devel-0.7.9-1.fc28.x86_64
> libcurl-devel-7.59.0-3.fc28.x86_64
> libfdt-devel-1.4.6-4.fc28.x86_64
> libpng-devel-1.6.34-3.fc28.x86_64
> librbd-devel-12.2.5-1.fc28.x86_64
> libssh2-devel-1.8.0-7.fc28.x86_64
> libubsan-8.1.1-1.fc28.x86_64
> libusbx-devel-1.0.21-6.fc28.x86_64
> libxml2-devel-2.9.7-4.fc28.x86_64
> llvm-6.0.0-11.fc28.x86_64
> lzo-devel-2.08-12.fc28.x86_64
> make-4.2.1-6.fc28.x86_64
> mingw32-SDL2-2.0.5-3.fc27.noarch
> mingw32-bzip2-1.0.6-9.fc27.noarch
> mingw32-curl-7.57.0-1.fc28.noarch
> mingw32-glib2-2.54.1-1.fc28.noarch
> mingw32-gmp-6.1.2-2.fc27.noarch
> mingw32-gnutls-3.5.13-2.fc27.noarch
> mingw32-gtk3-3.22.16-1.fc27.noarch
> mingw32-libjpeg-turbo-1.5.1-3.fc27.noarch
> mingw32-libpng-1.6.29-2.fc27.noarch
> mingw32-libssh2-1.8.0-3.fc27.noarch
> mingw32-libtasn1-4.13-1.fc28.noarch
> mingw32-nettle-3.3-3.fc27.noarch
> mingw32-pixman-0.34.0-3.fc27.noarch
> mingw32-pkg-config-0.28-9.fc27.x86_64
> mingw64-SDL2-2.0.5-3.fc27.noarch
> mingw64-bzip2-1.0.6-9.fc27.noarch
> mingw64-curl-7.57.0-1.fc28.noarch
> mingw64-glib2-2.54.1-1.fc28.noarch
> mingw64-gmp-6.1.2-2.fc27.noarch
> mingw64-gnutls-3.5.13-2.fc27.noarch
> mingw64-gtk3-3.22.16-1.fc27.noarch
> mingw64-libjpeg-turbo-1.5.1-3.fc27.noarch
> mingw64-libpng-1.6.29-2.fc27.noarch
> mingw64-libssh2-1.8.0-3.fc27.noarch
> mingw64-libtasn1-4.13-1.fc28.noarch
> mingw64-nettle-3.3-3.fc27.noarch
> mingw64-pixman-0.34.0-3.fc27.noarch
> mingw64-pkg-config-0.28-9.fc27.x86_64
> ncurses-devel-6.1-5.20180224.fc28.x86_64
> nettle-devel-3.4-2.fc28.x86_64
> nss-devel-3.36.1-1.1.fc28.x86_64
> numactl-devel-2.0.11-8.fc28.x86_64
> package PyYAML is not installed
> package libjpeg-devel is not installed
> perl-5.26.2-411.fc28.x86_64
> pixman-devel-0.34.0-8.fc28.x86_64
> python3-3.6.5-1.fc28.x86_64
> snappy-devel-1.1.7-5.fc28.x86_64
> sparse-0.5.2-1.fc28.x86_64
> spice-server-devel-0.14.0-4.fc28.x86_64
> systemtap-sdt-devel-3.2-11.fc28.x86_64
> tar-1.30-3.fc28.x86_64
> usbredir-devel-0.7.1-7.fc28.x86_64
> virglrenderer-devel-0.6.0-4.20170210git76b3da97b.fc28.x86_64
> vte3-devel-0.36.5-6.fc28.x86_64
> which-2.21-8.fc28.x86_64
> xen-devel-4.10.1-3.fc28.x86_64
> zlib-devel-1.2.11-8.fc28.x86_64
>
> Environment variables:
> TARGET_LIST=
> PACKAGES=ccache gettext git tar PyYAML sparse flex bison python3 bzip2 hostname gcc gcc-c++ llvm clang make perl which bc findutils glib2-devel libaio-devel pixman-devel zlib-devel libfdt-devel libasan libubsan bluez-libs-devel brlapi-devel bzip2-devel device-mapper-multipath-devel glusterfs-api-devel gnutls-devel gtk3-devel libattr-devel libcap-devel libcap-ng-devel libcurl-devel libjpeg-devel libpng-devel librbd-devel libssh2-devel libusbx-devel libxml2-devel lzo-devel ncurses-devel nettle-devel nss-devel numactl-devel SDL2-devel snappy-devel spice-server-devel systemtap-sdt-devel usbredir-devel virglrenderer-devel vte3-devel xen-devel mingw32-pixman mingw32-glib2 mingw32-gmp mingw32-SDL2 mingw32-pkg-config mingw32-gtk3 mingw32-gnutls mingw32-nettle mingw32-libtasn1 mingw32-libjpeg-turbo mingw32-libpng mingw32-curl mingw32-libssh2 mingw32-bzip2 mingw64-pixman mingw64-glib2 mingw64-gmp mingw64-SDL2 mingw64-pkg-config mingw64-gtk3 mingw64-gnutls mingw64-nettle mingw64-libtasn1 mingw64-libjpeg-turbo mingw64-libpng mingw64-curl mingw64-libssh2 mingw64-bzip2
> J=8
> V=
> HOSTNAME=2d66dc589b5e
> DEBUG=
> SHOW_ENV=1
> PWD=/
> HOME=/
> CCACHE_DIR=/var/tmp/ccache
> DISTTAG=f28container
> QEMU_CONFIGURE_OPTS=--python=/usr/bin/python3
> FGC=f28
> TEST_DIR=/tmp/qemu-test
> SHLVL=1
> FEATURES=mingw clang pyyaml asan dtc
> PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> MAKEFLAGS= -j8
> EXTRA_CONFIGURE_OPTS=
> _=/usr/bin/env
>
> Configure options:
> --enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/tmp/qemu-test/install --python=/usr/bin/python3 --cross-prefix=x86_64-w64-mingw32- --enable-trace-backends=simple --enable-gnutls --enable-nettle --enable-curl --enable-vnc --enable-bzip2 --enable-guest-agent --with-sdlabi=2.0 --with-gtkabi=3.0
> Install prefix /tmp/qemu-test/install
> BIOS directory /tmp/qemu-test/install
> firmware path /tmp/qemu-test/install/share/qemu-firmware
> binary directory /tmp/qemu-test/install
> library directory /tmp/qemu-test/install/lib
> module directory /tmp/qemu-test/install/lib
> libexec directory /tmp/qemu-test/install/libexec
> include directory /tmp/qemu-test/install/include
> config directory /tmp/qemu-test/install
> local state directory queried at runtime
> Windows SDK no
> Source path /tmp/qemu-test/src
> GIT binary git
> GIT submodules
> C compiler x86_64-w64-mingw32-gcc
> Host C compiler cc
> C++ compiler x86_64-w64-mingw32-g++
> Objective-C compiler clang
> ARFLAGS rv
> CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
> QEMU_CFLAGS -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -Werror -DHAS_LIBSSH2_SFTP_FSYNC -mms-bitfields -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/glib-2.0 -I/usr/x86_64-w64-mingw32/sys-root/mingw/lib/glib-2.0/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -m64 -mcx16 -mthreads -D__USE_MINGW_ANSI_STDIO=1 -DWIN32_LEAN_AND_MEAN -DWINVER=0x501 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wexpansion-to-defined -Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/p11-kit-1 -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/libpng16
> LDFLAGS -Wl,--nxcompat -Wl,--no-seh -Wl,--dynamicbase -Wl,--warn-common -m64 -g
> QEMU_LDFLAGS -L$(BUILD_DIR)/dtc/libfdt
> make make
> install install
> python /usr/bin/python3 -B
> smbd /usr/sbin/smbd
> module support no
> host CPU x86_64
> host big endian no
> target list x86_64-softmmu aarch64-softmmu
> gprof enabled no
> sparse enabled no
> strip binaries yes
> profiler no
> static build no
> SDL support yes (2.0.5)
> GTK support yes (3.22.16)
> GTK GL support no
> VTE support no
> TLS priority NORMAL
> GNUTLS support yes
> GNUTLS rnd yes
> libgcrypt no
> libgcrypt kdf no
> nettle yes (3.3)
> nettle kdf yes
> libtasn1 yes
> curses support no
> virgl support no
> curl support yes
> mingw32 support yes
> Audio drivers dsound
> Block whitelist (rw)
> Block whitelist (ro)
> VirtFS support no
> Multipath support no
> VNC support yes
> VNC SASL support no
> VNC JPEG support yes
> VNC PNG support yes
> xen support no
> brlapi support no
> bluez support no
> Documentation no
> PIE no
> vde support no
> netmap support no
> Linux AIO support no
> ATTR/XATTR support no
> Install blobs yes
> KVM support no
> HAX support yes
> HVF support no
> WHPX support no
> TCG support yes
> TCG debug enabled no
> TCG interpreter no
> malloc trim support no
> RDMA support no
> fdt support git
> membarrier no
> preadv support no
> fdatasync no
> madvise no
> posix_madvise no
> posix_memalign no
> libcap-ng support no
> vhost-net support no
> vhost-crypto support no
> vhost-scsi support no
> vhost-vsock support no
> vhost-user support no
> Trace backends simple
> Trace output file trace-<pid>
> spice support no
> rbd support no
> xfsctl support no
> smartcard support no
> libusb no
> usb net redir no
> OpenGL support no
> OpenGL dmabufs no
> libiscsi support no
> libnfs support no
> build guest agent yes
> QGA VSS support no
> QGA w32 disk info yes
> QGA MSI support no
> seccomp support no
> coroutine backend win32
> coroutine pool yes
> debug stack usage no
> mutex debugging no
> crypto afalg no
> GlusterFS support no
> gcov gcov
> gcov enabled no
> TPM support yes
> libssh2 support yes
> TPM passthrough no
> TPM emulator no
> QOM debugging yes
> Live block migration yes
> lzo support no
> snappy support no
> bzip2 support yes
> NUMA host support no
> libxml2 no
> tcmalloc support no
> jemalloc support no
> avx2 optimization yes
> replication support yes
> VxHS block device no
> capstone no
> docker no
>
> NOTE: cross-compilers enabled: 'x86_64-w64-mingw32-gcc'
> GEN x86_64-softmmu/config-devices.mak.tmp
> GEN aarch64-softmmu/config-devices.mak.tmp
> GEN config-host.h
> GEN qemu-options.def
> GEN qapi-gen
> GEN trace/generated-tcg-tracers.h
> GEN trace/generated-helpers-wrappers.h
> GEN trace/generated-helpers.h
> GEN aarch64-softmmu/config-devices.mak
> GEN x86_64-softmmu/config-devices.mak
> GEN trace/generated-helpers.c
> GEN module_block.h
> [...]
> GEN config-all-devices.mak
> DEP /tmp/qemu-test/src/dtc/tests/dumptrees.c
> DEP /tmp/qemu-test/src/dtc/tests/trees.S
> DEP /tmp/qemu-test/src/dtc/tests/testutils.c
> DEP /tmp/qemu-test/src/dtc/tests/value-labels.c
> DEP /tmp/qemu-test/src/dtc/tests/asm_tree_dump.c
> DEP /tmp/qemu-test/src/dtc/tests/truncated_property.c
> DEP /tmp/qemu-test/src/dtc/tests/check_path.c
> DEP /tmp/qemu-test/src/dtc/tests/overlay_bad_fixup.c
> DEP /tmp/qemu-test/src/dtc/tests/overlay.c
> DEP /tmp/qemu-test/src/dtc/tests/subnode_iterate.c
> DEP /tmp/qemu-test/src/dtc/tests/property_iterate.c
> DEP /tmp/qemu-test/src/dtc/tests/integer-expressions.c
> DEP /tmp/qemu-test/src/dtc/tests/utilfdt_test.c
> DEP /tmp/qemu-test/src/dtc/tests/path_offset_aliases.c
> DEP /tmp/qemu-test/src/dtc/tests/add_subnode_with_nops.c
> DEP /tmp/qemu-test/src/dtc/tests/dtb_reverse.c
> DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_unordered.c
> DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_ordered.c
> DEP /tmp/qemu-test/src/dtc/tests/extra-terminating-null.c
> DEP /tmp/qemu-test/src/dtc/tests/incbin.c
> DEP /tmp/qemu-test/src/dtc/tests/boot-cpuid.c
> DEP /tmp/qemu-test/src/dtc/tests/phandle_format.c
> DEP /tmp/qemu-test/src/dtc/tests/path-references.c
> DEP /tmp/qemu-test/src/dtc/tests/references.c
> DEP /tmp/qemu-test/src/dtc/tests/string_escapes.c
> DEP /tmp/qemu-test/src/dtc/tests/propname_escapes.c
> DEP /tmp/qemu-test/src/dtc/tests/appendprop2.c
> DEP /tmp/qemu-test/src/dtc/tests/appendprop1.c
> DEP /tmp/qemu-test/src/dtc/tests/del_node.c
> DEP /tmp/qemu-test/src/dtc/tests/del_property.c
> DEP /tmp/qemu-test/src/dtc/tests/setprop.c
> DEP /tmp/qemu-test/src/dtc/tests/set_name.c
> DEP /tmp/qemu-test/src/dtc/tests/rw_tree1.c
> DEP /tmp/qemu-test/src/dtc/tests/open_pack.c
> DEP /tmp/qemu-test/src/dtc/tests/nopulate.c
> DEP /tmp/qemu-test/src/dtc/tests/mangle-layout.c
> DEP /tmp/qemu-test/src/dtc/tests/move_and_save.c
> DEP /tmp/qemu-test/src/dtc/tests/nop_node.c
> DEP /tmp/qemu-test/src/dtc/tests/sw_tree1.c
> DEP /tmp/qemu-test/src/dtc/tests/nop_property.c
> DEP /tmp/qemu-test/src/dtc/tests/setprop_inplace.c
> DEP /tmp/qemu-test/src/dtc/tests/stringlist.c
> DEP /tmp/qemu-test/src/dtc/tests/addr_size_cells.c
> DEP /tmp/qemu-test/src/dtc/tests/notfound.c
> DEP /tmp/qemu-test/src/dtc/tests/sized_cells.c
> DEP /tmp/qemu-test/src/dtc/tests/char_literal.c
> DEP /tmp/qemu-test/src/dtc/tests/get_alias.c
> DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_compatible.c
> DEP /tmp/qemu-test/src/dtc/tests/node_check_compatible.c
> DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_phandle.c
> DEP /tmp/qemu-test/src/dtc/tests/parent_offset.c
> DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_prop_value.c
> DEP /tmp/qemu-test/src/dtc/tests/supernode_atdepth_offset.c
> DEP /tmp/qemu-test/src/dtc/tests/get_phandle.c
> DEP /tmp/qemu-test/src/dtc/tests/get_path.c
> DEP /tmp/qemu-test/src/dtc/tests/getprop.c
> DEP /tmp/qemu-test/src/dtc/tests/get_name.c
> DEP /tmp/qemu-test/src/dtc/tests/path_offset.c
> DEP /tmp/qemu-test/src/dtc/tests/subnode_offset.c
> DEP /tmp/qemu-test/src/dtc/tests/find_property.c
> DEP /tmp/qemu-test/src/dtc/tests/root_node.c
> DEP /tmp/qemu-test/src/dtc/tests/get_mem_rsv.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_overlay.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_addresses.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_empty_tree.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_strerror.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_rw.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_sw.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_wip.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt_ro.c
> DEP /tmp/qemu-test/src/dtc/libfdt/fdt.c
> DEP /tmp/qemu-test/src/dtc/fdtoverlay.c
> DEP /tmp/qemu-test/src/dtc/util.c
> DEP /tmp/qemu-test/src/dtc/fdtput.c
> DEP /tmp/qemu-test/src/dtc/fdtget.c
> DEP /tmp/qemu-test/src/dtc/fdtdump.c
> LEX convert-dtsv0-lexer.lex.c
> DEP /tmp/qemu-test/src/dtc/srcpos.c
> BISON dtc-parser.tab.c
> LEX dtc-lexer.lex.c
> DEP /tmp/qemu-test/src/dtc/treesource.c
> DEP /tmp/qemu-test/src/dtc/livetree.c
> DEP /tmp/qemu-test/src/dtc/fstree.c
> DEP /tmp/qemu-test/src/dtc/flattree.c
> DEP /tmp/qemu-test/src/dtc/dtc.c
> DEP /tmp/qemu-test/src/dtc/data.c
> DEP /tmp/qemu-test/src/dtc/checks.c
> DEP convert-dtsv0-lexer.lex.c
> DEP dtc-parser.tab.c
> DEP dtc-lexer.lex.c
> CHK version_gen.h
> UPD version_gen.h
> DEP /tmp/qemu-test/src/dtc/util.c
> CC libfdt/fdt.o
> CC libfdt/fdt_ro.o
> CC libfdt/fdt_wip.o
> CC libfdt/fdt_rw.o
> CC libfdt/fdt_sw.o
> CC libfdt/fdt_strerror.o
> CC libfdt/fdt_empty_tree.o
> CC libfdt/fdt_addresses.o
> CC libfdt/fdt_overlay.o
> AR libfdt/libfdt.a
> x86_64-w64-mingw32-ar: creating libfdt/libfdt.a
> a - libfdt/fdt.o
> a - libfdt/fdt_ro.o
> a - libfdt/fdt_wip.o
> a - libfdt/fdt_sw.o
> a - libfdt/fdt_rw.o
> a - libfdt/fdt_strerror.o
> a - libfdt/fdt_empty_tree.o
> a - libfdt/fdt_addresses.o
> a - libfdt/fdt_overlay.o
> RC version.o
> GEN qga/qapi-generated/qapi-gen
> CC qapi/qapi-types.o
> CC qapi/qapi-types-block-core.o
> CC qapi/qapi-types-block.o
> CC qapi/qapi-builtin-types.o
> CC qapi/qapi-types-common.o
> CC qapi/qapi-types-char.o
> CC qapi/qapi-types-crypto.o
> CC qapi/qapi-types-introspect.o
> CC qapi/qapi-types-job.o
> CC qapi/qapi-types-migration.o
> CC qapi/qapi-types-misc.o
> CC qapi/qapi-types-net.o
> CC qapi/qapi-types-rocker.o
> CC qapi/qapi-types-run-state.o
> CC qapi/qapi-types-sockets.o
> CC qapi/qapi-types-tpm.o
> CC qapi/qapi-types-transaction.o
> CC qapi/qapi-types-ui.o
> CC qapi/qapi-types-trace.o
> CC qapi/qapi-builtin-visit.o
> CC qapi/qapi-visit.o
> CC qapi/qapi-visit-block-core.o
> CC qapi/qapi-visit-block.o
> CC qapi/qapi-visit-char.o
> CC qapi/qapi-visit-common.o
> CC qapi/qapi-visit-crypto.o
> CC qapi/qapi-visit-introspect.o
> CC qapi/qapi-visit-job.o
> CC qapi/qapi-visit-migration.o
> CC qapi/qapi-visit-misc.o
> CC qapi/qapi-visit-net.o
> CC qapi/qapi-visit-rocker.o
> CC qapi/qapi-visit-run-state.o
> CC qapi/qapi-visit-sockets.o
> CC qapi/qapi-visit-tpm.o
> CC qapi/qapi-visit-trace.o
> CC qapi/qapi-visit-transaction.o
> CC qapi/qapi-events-block-core.o
> CC qapi/qapi-events.o
> CC qapi/qapi-visit-ui.o
> CC qapi/qapi-events-block.o
> CC qapi/qapi-events-char.o
> CC qapi/qapi-events-common.o
> CC qapi/qapi-events-crypto.o
> CC qapi/qapi-events-introspect.o
> CC qapi/qapi-events-job.o
> CC qapi/qapi-events-migration.o
> CC qapi/qapi-events-misc.o
> CC qapi/qapi-events-net.o
> CC qapi/qapi-events-rocker.o
> CC qapi/qapi-events-sockets.o
> CC qapi/qapi-events-tpm.o
> CC qapi/qapi-events-transaction.o
> CC qapi/qapi-events-run-state.o
> CC qapi/qapi-events-ui.o
> CC qapi/qapi-events-trace.o
> CC qapi/qapi-introspect.o
> CC qapi/qapi-visit-core.o
> CC qapi/qapi-dealloc-visitor.o
> CC qapi/qobject-input-visitor.o
> CC qapi/qobject-output-visitor.o
> CC qapi/qmp-registry.o
> CC qapi/qmp-dispatch.o
> CC qapi/string-output-visitor.o
> CC qapi/opts-visitor.o
> CC qapi/string-input-visitor.o
> CC qapi/qapi-clone-visitor.o
> CC qapi/qmp-event.o
> CC qapi/qapi-util.o
> CC qobject/qnull.o
> CC qobject/qnum.o
> CC qobject/qstring.o
> CC qobject/qdict.o
> CC qobject/qlist.o
> CC qobject/qbool.o
> CC qobject/qlit.o
> CC qobject/qjson.o
> CC qobject/qobject.o
> CC qobject/json-lexer.o
> CC qobject/json-streamer.o
> CC qobject/json-parser.o
> CC qobject/block-qdict.o
> CC trace/simple.o
> CC trace/control.o
> CC trace/qmp.o
> CC util/osdep.o
> CC util/cutils.o
> CC util/unicode.o
> CC util/qemu-timer-common.o
> CC util/lockcnt.o
> CC util/bufferiszero.o
> CC util/aiocb.o
> CC util/async.o
> CC util/aio-wait.o
> CC util/thread-pool.o
> CC util/qemu-timer.o
> CC util/main-loop.o
> CC util/iohandler.o
> CC util/aio-win32.o
> CC util/event_notifier-win32.o
> CC util/oslib-win32.o
> CC util/qemu-thread-win32.o
> CC util/envlist.o
> CC util/path.o
> CC util/module.o
> CC util/host-utils.o
> CC util/bitmap.o
> CC util/bitops.o
> CC util/hbitmap.o
> CC util/fifo8.o
> CC util/acl.o
> CC util/cacheinfo.o
> CC util/error.o
> CC util/qemu-error.o
> CC util/id.o
> CC util/iov.o
> CC util/qemu-config.o
> CC util/qemu-sockets.o
> CC util/notify.o
> CC util/uri.o
> CC util/qemu-progress.o
> CC util/qemu-option.o
> CC util/keyval.o
> CC util/hexdump.o
> CC util/crc32c.o
> CC util/uuid.o
> CC util/throttle.o
> CC util/readline.o
> CC util/getauxval.o
> CC util/rcu.o
> CC util/qemu-coroutine.o
> CC util/qemu-coroutine-lock.o
> CC util/qemu-coroutine-io.o
> CC util/qemu-coroutine-sleep.o
> CC util/coroutine-win32.o
> CC util/timed-average.o
> CC util/buffer.o
> CC util/base64.o
> CC util/log.o
> CC util/pagesize.o
> CC util/qdist.o
> CC util/qht.o
> CC util/range.o
> CC util/stats64.o
> CC util/systemd.o
> CC util/iova-tree.o
> CC trace-root.o
> CC accel/kvm/trace.o
> CC accel/tcg/trace.o
> CC block/trace.o
> CC audio/trace.o
> CC chardev/trace.o
> CC crypto/trace.o
> CC hw/9pfs/trace.o
> CC hw/acpi/trace.o
> CC hw/alpha/trace.o
> CC hw/arm/trace.o
> CC hw/audio/trace.o
> CC hw/block/trace.o
> CC hw/block/dataplane/trace.o
> CC hw/char/trace.o
> CC hw/display/trace.o
> CC hw/dma/trace.o
> CC hw/hppa/trace.o
> CC hw/i2c/trace.o
> CC hw/i386/trace.o
> CC hw/i386/xen/trace.o
> CC hw/ide/trace.o
> CC hw/input/trace.o
> CC hw/intc/trace.o
> CC hw/isa/trace.o
> CC hw/mem/trace.o
> CC hw/misc/trace.o
> CC hw/misc/macio/trace.o
> CC hw/net/trace.o
> CC hw/nvram/trace.o
> CC hw/pci/trace.o
> CC hw/pci-host/trace.o
> CC hw/rdma/trace.o
> CC hw/ppc/trace.o
> CC hw/rdma/vmw/trace.o
> CC hw/s390x/trace.o
> CC hw/scsi/trace.o
> CC hw/sd/trace.o
> CC hw/sparc/trace.o
> CC hw/sparc64/trace.o
> CC hw/timer/trace.o
> CC hw/tpm/trace.o
> CC hw/usb/trace.o
> CC hw/vfio/trace.o
> CC hw/virtio/trace.o
> CC io/trace.o
> CC hw/xen/trace.o
> CC linux-user/trace.o
> CC nbd/trace.o
> CC net/trace.o
> CC qapi/trace.o
> CC migration/trace.o
> CC qom/trace.o
> CC scsi/trace.o
> CC target/arm/trace.o
> CC target/i386/trace.o
> CC target/mips/trace.o
> CC target/ppc/trace.o
> CC target/s390x/trace.o
> CC ui/trace.o
> CC target/sparc/trace.o
> CC util/trace.o
> CC crypto/pbkdf-stub.o
> CC stubs/arch-query-cpu-def.o
> CC stubs/arch-query-cpu-model-expansion.o
> CC stubs/arch-query-cpu-model-comparison.o
> CC stubs/arch-query-cpu-model-baseline.o
> CC stubs/bdrv-next-monitor-owned.o
> CC stubs/blk-commit-all.o
> CC stubs/blockdev-close-all-bdrv-states.o
> CC stubs/clock-warp.o
> CC stubs/cpu-get-clock.o
> CC stubs/cpu-get-icount.o
> CC stubs/error-printf.o
> CC stubs/dump.o
> CC stubs/fdset.o
> CC stubs/gdbstub.o
> CC stubs/get-vm-name.o
> CC stubs/iothread.o
> CC stubs/iothread-lock.o
> CC stubs/is-daemonized.o
> CC stubs/machine-init-done.o
> CC stubs/migr-blocker.o
> CC stubs/change-state-handler.o
> CC stubs/monitor.o
> CC stubs/notify-event.o
> CC stubs/qtest.o
> CC stubs/replay.o
> CC stubs/runstate-check.o
> CC stubs/set-fd-handler.o
> CC stubs/slirp.o
> CC stubs/sysbus.o
> CC stubs/tpm.o
> CC stubs/trace-control.o
> CC stubs/uuid.o
> CC stubs/vm-stop.o
> CC stubs/vmstate.o
> CC stubs/fd-register.o
> CC stubs/qmp_memory_device.o
> CC stubs/target-monitor-defs.o
> CC stubs/target-get-monitor-def.o
> CC stubs/pc_madt_cpu_entry.o
> CC stubs/vmgenid.o
> CC stubs/xen-common.o
> CC stubs/xen-hvm.o
> CC stubs/pci-host-piix.o
> CC stubs/ram-block.o
> GEN qemu-img-cmds.h
> CC blockjob.o
> CC block.o
> CC job.o
> CC replication.o
> CC qemu-io-cmds.o
> CC block/raw-format.o
> CC block/qcow.o
> CC block/vdi.o
> CC block/vmdk.o
> CC block/cloop.o
> CC block/bochs.o
> CC block/vvfat.o
> CC block/vpc.o
> CC block/dmg.o
> CC block/qcow2.o
> CC block/qcow2-refcount.o
> CC block/qcow2-cluster.o
> CC block/qcow2-snapshot.o
> CC block/qcow2-bitmap.o
> CC block/qcow2-cache.o
> CC block/qed-l2-cache.o
> CC block/qed.o
> CC block/qed-table.o
> CC block/qed-cluster.o
> CC block/qed-check.o
> CC block/vhdx.o
> CC block/vhdx-endian.o
> CC block/vhdx-log.o
> CC block/quorum.o
> CC block/parallels.o
> CC block/blkdebug.o
> CC block/blkverify.o
> CC block/blkreplay.o
> CC block/blklogwrites.o
> CC block/block-backend.o
> CC block/snapshot.o
> CC block/file-win32.o
> CC block/win32-aio.o
> CC block/null.o
> CC block/qapi.o
> CC block/mirror.o
> CC block/commit.o
> CC block/io.o
> CC block/create.o
> CC block/throttle-groups.o
> CC block/nbd.o
> CC block/nbd-client.o
> CC block/sheepdog.o
> CC block/accounting.o
> CC block/dirty-bitmap.o
> CC block/write-threshold.o
> CC block/backup.o
> CC block/replication.o
> CC block/throttle.o
> CC block/copy-on-read.o
> CC block/crypto.o
> CC block/fleecing-hook.o
> CC nbd/server.o
> CC nbd/client.o
> CC nbd/common.o
> CC scsi/utils.o
> CC scsi/pr-manager-stub.o
> CC block/curl.o
> CC block/ssh.o
> CC block/dmg-bz2.o
> CC crypto/init.o
> CC crypto/hash-nettle.o
> CC crypto/hash.o
> CC crypto/hmac.o
> CC crypto/hmac-nettle.o
> CC crypto/aes.o
> CC crypto/desrfb.o
> CC crypto/cipher.o
> CC crypto/tlscreds.o
> CC crypto/tlscredsanon.o
> CC crypto/tlscredspsk.o
> CC crypto/tlscredsx509.o
> CC crypto/tlssession.o
> CC crypto/secret.o
> CC crypto/random-gnutls.o
> CC crypto/pbkdf.o
> CC crypto/pbkdf-nettle.o
> CC crypto/ivgen.o
> CC crypto/ivgen-essiv.o
> CC crypto/ivgen-plain.o
> CC crypto/ivgen-plain64.o
> CC crypto/afsplit.o
> CC crypto/xts.o
> CC crypto/block.o
> CC crypto/block-qcow.o
> CC crypto/block-luks.o
> CC io/channel.o
> CC io/channel-buffer.o
> CC io/channel-command.o
> CC io/channel-file.o
> CC io/channel-socket.o
> CC io/channel-tls.o
> CC io/channel-watch.o
> CC io/channel-websock.o
> CC io/channel-util.o
> CC io/dns-resolver.o
> CC io/net-listener.o
> CC io/task.o
> CC qom/object.o
> CC qom/container.o
> CC qom/qom-qobject.o
> CC qom/object_interfaces.o
> CC qemu-io.o
> CC blockdev.o
> CC blockdev-nbd.o
> CC bootdevice.o
> CC iothread.o
> CC job-qmp.o
> CC qdev-monitor.o
> CC device-hotplug.o
> /tmp/qemu-test/src/block/fleecing-hook.c: In function 'fleecing_hook_cow':
> /tmp/qemu-test/src/block/fleecing-hook.c:61:12: error: implicit declaration of function 'hbitmap_next_dirty_area'; did you mean 'hbitmap_next_zero'? [-Werror=implicit-function-declaration]
> while (hbitmap_next_dirty_area(s->cow_bitmap, &off, end, &len)) {
> ^~~~~~~~~~~~~~~~~~~~~~~
> hbitmap_next_zero
> /tmp/qemu-test/src/block/fleecing-hook.c:61:12: error: nested extern declaration of 'hbitmap_next_dirty_area' [-Werror=nested-externs]
> cc1: all warnings being treated as errors
> make: *** [/tmp/qemu-test/src/rules.mak:69: block/fleecing-hook.o] Error 1
> make: *** Waiting for unfinished jobs....
> Traceback (most recent call last):
> File "./tests/docker/docker.py", line 565, in <module>
> sys.exit(main())
> File "./tests/docker/docker.py", line 562, in main
> return args.cmdobj.run(args, argv)
> File "./tests/docker/docker.py", line 308, in run
> return Docker().run(argv, args.keep, quiet=args.quiet)
> File "./tests/docker/docker.py", line 276, in run
> quiet=quiet)
> File "./tests/docker/docker.py", line 183, in _do_check
> return subprocess.check_call(self._command + cmd, **kwargs)
> File "/usr/lib64/python2.7/subprocess.py", line 186, in check_call
> raise CalledProcessError(retcode, cmd)
> subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=b3b1183ca16511e8a5bd52540069c830', '-u', '1000', '--security-opt', 'seccomp=unconfined', '--rm', '--net=none', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=8', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-axs4wus0/src/docker-src.2018-08-16-11.04.31.27398:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit status 2
> make[1]: *** [tests/docker/Makefile.include:213: docker-run] Error 1
> make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-axs4wus0/src'
> make: *** [tests/docker/Makefile.include:247: docker-run-test-mingw@fedora] Error 2
>
> real 1m13.823s
> user 0m4.993s
> sys 0m3.502s
> === OUTPUT END ===
>
> Test command exited with code: 2
>
>
> ---
> Email generated automatically by Patchew [http://patchew.org/].
> Please send your feedback to patchew-devel@redhat.com
--
Best regards,
Vladimir
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-16 17:28 ` Vladimir Sementsov-Ogievskiy
@ 2018-08-16 17:58 ` Eric Blake
0 siblings, 0 replies; 15+ messages in thread
From: Eric Blake @ 2018-08-16 17:58 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy, qemu-devel
Cc: kwolf, famz, qemu-block, armbru, mreitz, stefanha, pbonzini, den, jsnow
On 08/16/2018 12:28 PM, Vladimir Sementsov-Ogievskiy wrote:
> Hmm, how should I properly set based-on if there are two series under this
> one?
>
> Based on:
> [PATCH v3 0/8] dirty-bitmap: rewrite bdrv_dirty_iter_next_area
> and
> [PATCH 0/2] block: make .bdrv_close optional
Patchew goes off message ids, so I'll rewrite the above in a
machine-readable format:
Based-on: <20180814121443.33114-1-vsementsov@virtuozzo.com>
Based-on: <20180814124320.39481-1-vsementsov@virtuozzo.com>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
2018-08-16 15:05 ` no-reply
2018-08-16 15:09 ` no-reply
@ 2018-08-17 18:21 ` Vladimir Sementsov-Ogievskiy
2018-08-17 20:56 ` no-reply
` (2 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-17 18:21 UTC (permalink / raw)
To: qemu-devel, qemu-block
Cc: eblake, armbru, mreitz, kwolf, famz, jsnow, pbonzini, stefanha, den
[-- Attachment #1: Type: text/plain, Size: 125 bytes --]
Attached is the generated block graph (produced with the help of "[PATCH v2 0/3]
block nodes graph visualization").
--
Best regards,
Vladimir
[-- Attachment #2: out.dot.png --]
[-- Type: image/png, Size: 83603 bytes --]
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
` (2 preceding siblings ...)
2018-08-17 18:21 ` Vladimir Sementsov-Ogievskiy
@ 2018-08-17 20:56 ` no-reply
2018-08-17 21:01 ` no-reply
2018-08-17 21:50 ` Max Reitz
5 siblings, 0 replies; 15+ messages in thread
From: no-reply @ 2018-08-17 20:56 UTC (permalink / raw)
To: vsementsov; +Cc: famz, qemu-devel, qemu-block, kwolf
Hi,
This series failed the docker-mingw@fedora build test. Please find the testing
commands and their output below. If you have Docker installed, you can probably
reproduce it locally.
Type: series
Message-id: 20180814170126.56461-1-vsementsov@virtuozzo.com
Subject: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
=== TEST SCRIPT BEGIN ===
#!/bin/bash
time make docker-test-mingw@fedora SHOW_ENV=1 J=8
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
d307c74eae new, node-graph-based fleecing and backup
=== OUTPUT BEGIN ===
BUILD fedora
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-askh7zyi/src'
GEN /var/tmp/patchew-tester-tmp-askh7zyi/src/docker-src.2018-08-17-16.54.21.21368/qemu.tar
Cloning into '/var/tmp/patchew-tester-tmp-askh7zyi/src/docker-src.2018-08-17-16.54.21.21368/qemu.tar.vroot'...
done.
Your branch is up-to-date with 'origin/test'.
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-askh7zyi/src/docker-src.2018-08-17-16.54.21.21368/qemu.tar.vroot/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered for path 'ui/keycodemapdb'
Cloning into '/var/tmp/patchew-tester-tmp-askh7zyi/src/docker-src.2018-08-17-16.54.21.21368/qemu.tar.vroot/ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out '6b3d716e2b6472eb7189d3220552280ef3d832ce'
COPY RUNNER
RUN test-mingw in qemu:fedora
Packages installed:
SDL2-devel-2.0.8-5.fc28.x86_64
bc-1.07.1-5.fc28.x86_64
bison-3.0.4-9.fc28.x86_64
bluez-libs-devel-5.49-3.fc28.x86_64
brlapi-devel-0.6.7-12.fc28.x86_64
bzip2-1.0.6-26.fc28.x86_64
bzip2-devel-1.0.6-26.fc28.x86_64
ccache-3.4.2-2.fc28.x86_64
clang-6.0.0-5.fc28.x86_64
device-mapper-multipath-devel-0.7.4-2.git07e7bd5.fc28.x86_64
findutils-4.6.0-19.fc28.x86_64
flex-2.6.1-7.fc28.x86_64
gcc-8.1.1-1.fc28.x86_64
gcc-c++-8.1.1-1.fc28.x86_64
gettext-0.19.8.1-14.fc28.x86_64
git-2.17.1-2.fc28.x86_64
glib2-devel-2.56.1-3.fc28.x86_64
glusterfs-api-devel-4.0.2-1.fc28.x86_64
gnutls-devel-3.6.2-1.fc28.x86_64
gtk3-devel-3.22.30-1.fc28.x86_64
hostname-3.20-3.fc28.x86_64
libaio-devel-0.3.110-11.fc28.x86_64
libasan-8.1.1-1.fc28.x86_64
libattr-devel-2.4.47-23.fc28.x86_64
libcap-devel-2.25-9.fc28.x86_64
libcap-ng-devel-0.7.9-1.fc28.x86_64
libcurl-devel-7.59.0-3.fc28.x86_64
libfdt-devel-1.4.6-4.fc28.x86_64
libpng-devel-1.6.34-3.fc28.x86_64
librbd-devel-12.2.5-1.fc28.x86_64
libssh2-devel-1.8.0-7.fc28.x86_64
libubsan-8.1.1-1.fc28.x86_64
libusbx-devel-1.0.21-6.fc28.x86_64
libxml2-devel-2.9.7-4.fc28.x86_64
llvm-6.0.0-11.fc28.x86_64
lzo-devel-2.08-12.fc28.x86_64
make-4.2.1-6.fc28.x86_64
mingw32-SDL2-2.0.5-3.fc27.noarch
mingw32-bzip2-1.0.6-9.fc27.noarch
mingw32-curl-7.57.0-1.fc28.noarch
mingw32-glib2-2.54.1-1.fc28.noarch
mingw32-gmp-6.1.2-2.fc27.noarch
mingw32-gnutls-3.5.13-2.fc27.noarch
mingw32-gtk3-3.22.16-1.fc27.noarch
mingw32-libjpeg-turbo-1.5.1-3.fc27.noarch
mingw32-libpng-1.6.29-2.fc27.noarch
mingw32-libssh2-1.8.0-3.fc27.noarch
mingw32-libtasn1-4.13-1.fc28.noarch
mingw32-nettle-3.3-3.fc27.noarch
mingw32-pixman-0.34.0-3.fc27.noarch
mingw32-pkg-config-0.28-9.fc27.x86_64
mingw64-SDL2-2.0.5-3.fc27.noarch
mingw64-bzip2-1.0.6-9.fc27.noarch
mingw64-curl-7.57.0-1.fc28.noarch
mingw64-glib2-2.54.1-1.fc28.noarch
mingw64-gmp-6.1.2-2.fc27.noarch
mingw64-gnutls-3.5.13-2.fc27.noarch
mingw64-gtk3-3.22.16-1.fc27.noarch
mingw64-libjpeg-turbo-1.5.1-3.fc27.noarch
mingw64-libpng-1.6.29-2.fc27.noarch
mingw64-libssh2-1.8.0-3.fc27.noarch
mingw64-libtasn1-4.13-1.fc28.noarch
mingw64-nettle-3.3-3.fc27.noarch
mingw64-pixman-0.34.0-3.fc27.noarch
mingw64-pkg-config-0.28-9.fc27.x86_64
ncurses-devel-6.1-5.20180224.fc28.x86_64
nettle-devel-3.4-2.fc28.x86_64
nss-devel-3.36.1-1.1.fc28.x86_64
numactl-devel-2.0.11-8.fc28.x86_64
package PyYAML is not installed
package libjpeg-devel is not installed
perl-5.26.2-411.fc28.x86_64
pixman-devel-0.34.0-8.fc28.x86_64
python3-3.6.5-1.fc28.x86_64
snappy-devel-1.1.7-5.fc28.x86_64
sparse-0.5.2-1.fc28.x86_64
spice-server-devel-0.14.0-4.fc28.x86_64
systemtap-sdt-devel-3.2-11.fc28.x86_64
tar-1.30-3.fc28.x86_64
usbredir-devel-0.7.1-7.fc28.x86_64
virglrenderer-devel-0.6.0-4.20170210git76b3da97b.fc28.x86_64
vte3-devel-0.36.5-6.fc28.x86_64
which-2.21-8.fc28.x86_64
xen-devel-4.10.1-3.fc28.x86_64
zlib-devel-1.2.11-8.fc28.x86_64
Environment variables:
TARGET_LIST=
PACKAGES=ccache gettext git tar PyYAML sparse flex bison python3 bzip2 hostname gcc gcc-c++ llvm clang make perl which bc findutils glib2-devel libaio-devel pixman-devel zlib-devel libfdt-devel libasan libubsan bluez-libs-devel brlapi-devel bzip2-devel device-mapper-multipath-devel glusterfs-api-devel gnutls-devel gtk3-devel libattr-devel libcap-devel libcap-ng-devel libcurl-devel libjpeg-devel libpng-devel librbd-devel libssh2-devel libusbx-devel libxml2-devel lzo-devel ncurses-devel nettle-devel nss-devel numactl-devel SDL2-devel snappy-devel spice-server-devel systemtap-sdt-devel usbredir-devel virglrenderer-devel vte3-devel xen-devel mingw32-pixman mingw32-glib2 mingw32-gmp mingw32-SDL2 mingw32-pkg-config mingw32-gtk3 mingw32-gnutls mingw32-nettle mingw32-libtasn1 mingw32-libjpeg-turbo mingw32-libpng mingw32-curl mingw32-libssh2 mingw32-bzip2 mingw64-pixman mingw64-glib2 mingw64-gmp mingw64-SDL2 mingw64-pkg-config mingw64-gtk3 mingw64-gnutls mingw64-nettle mingw64-libtasn1 mingw64-libjpeg-turbo mingw64-libpng mingw64-curl mingw64-libssh2 mingw64-bzip2
J=8
V=
HOSTNAME=3394a850b887
DEBUG=
SHOW_ENV=1
PWD=/
HOME=/
CCACHE_DIR=/var/tmp/ccache
DISTTAG=f28container
QEMU_CONFIGURE_OPTS=--python=/usr/bin/python3
FGC=f28
TEST_DIR=/tmp/qemu-test
SHLVL=1
FEATURES=mingw clang pyyaml asan dtc
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAKEFLAGS= -j8
EXTRA_CONFIGURE_OPTS=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/tmp/qemu-test/install --python=/usr/bin/python3 --cross-prefix=x86_64-w64-mingw32- --enable-trace-backends=simple --enable-gnutls --enable-nettle --enable-curl --enable-vnc --enable-bzip2 --enable-guest-agent --with-sdlabi=2.0 --with-gtkabi=3.0
Install prefix /tmp/qemu-test/install
BIOS directory /tmp/qemu-test/install
firmware path /tmp/qemu-test/install/share/qemu-firmware
binary directory /tmp/qemu-test/install
library directory /tmp/qemu-test/install/lib
module directory /tmp/qemu-test/install/lib
libexec directory /tmp/qemu-test/install/libexec
include directory /tmp/qemu-test/install/include
config directory /tmp/qemu-test/install
local state directory queried at runtime
Windows SDK no
Source path /tmp/qemu-test/src
GIT binary git
GIT submodules
C compiler x86_64-w64-mingw32-gcc
Host C compiler cc
C++ compiler x86_64-w64-mingw32-g++
Objective-C compiler clang
ARFLAGS rv
CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
QEMU_CFLAGS -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -Werror -DHAS_LIBSSH2_SFTP_FSYNC -mms-bitfields -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/glib-2.0 -I/usr/x86_64-w64-mingw32/sys-root/mingw/lib/glib-2.0/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -m64 -mcx16 -mthreads -D__USE_MINGW_ANSI_STDIO=1 -DWIN32_LEAN_AND_MEAN -DWINVER=0x501 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wexpansion-to-defined -Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/p11-kit-1 -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include -I/usr/x86_64-w64-mingw32/sys-root/mingw/include/libpng16
LDFLAGS -Wl,--nxcompat -Wl,--no-seh -Wl,--dynamicbase -Wl,--warn-common -m64 -g
QEMU_LDFLAGS -L$(BUILD_DIR)/dtc/libfdt
make make
install install
python /usr/bin/python3 -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
gprof enabled no
sparse enabled no
strip binaries yes
profiler no
static build no
SDL support yes (2.0.5)
GTK support yes (3.22.16)
GTK GL support no
VTE support no
TLS priority NORMAL
GNUTLS support yes
GNUTLS rnd yes
libgcrypt no
libgcrypt kdf no
nettle yes (3.3)
nettle kdf yes
libtasn1 yes
curses support no
virgl support no
curl support yes
mingw32 support yes
Audio drivers dsound
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
Multipath support no
VNC support yes
VNC SASL support no
VNC JPEG support yes
VNC PNG support yes
xen support no
brlapi support no
bluez support no
Documentation no
PIE no
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support no
Install blobs yes
KVM support no
HAX support yes
HVF support no
WHPX support no
TCG support yes
TCG debug enabled no
TCG interpreter no
malloc trim support no
RDMA support no
fdt support git
membarrier no
preadv support no
fdatasync no
madvise no
posix_madvise no
posix_memalign no
libcap-ng support no
vhost-net support no
vhost-crypto support no
vhost-scsi support no
vhost-vsock support no
vhost-user support no
Trace backends simple
Trace output file trace-<pid>
spice support no
rbd support no
xfsctl support no
smartcard support no
libusb no
usb net redir no
OpenGL support no
OpenGL dmabufs no
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info yes
QGA MSI support no
seccomp support no
coroutine backend win32
coroutine pool yes
debug stack usage no
mutex debugging no
crypto afalg no
GlusterFS support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support yes
TPM passthrough no
TPM emulator no
QOM debugging yes
Live block migration yes
lzo support no
snappy support no
bzip2 support yes
NUMA host support no
libxml2 no
tcmalloc support no
jemalloc support no
avx2 optimization yes
replication support yes
VxHS block device no
capstone no
docker no
NOTE: cross-compilers enabled: 'x86_64-w64-mingw32-gcc'
GEN x86_64-softmmu/config-devices.mak.tmp
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qemu-options.def
GEN qapi-gen
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-helpers.h
GEN trace/generated-helpers.c
GEN module_block.h
GEN x86_64-softmmu/config-devices.mak
GEN aarch64-softmmu/config-devices.mak
GEN ui/input-keymap-atset1-to-qcode.c
GEN ui/input-keymap-linux-to-qcode.c
GEN ui/input-keymap-qcode-to-atset1.c
GEN ui/input-keymap-qcode-to-atset2.c
GEN ui/input-keymap-qcode-to-atset3.c
GEN ui/input-keymap-qcode-to-linux.c
GEN ui/input-keymap-qcode-to-qnum.c
GEN ui/input-keymap-qcode-to-sun.c
GEN ui/input-keymap-qnum-to-qcode.c
GEN ui/input-keymap-win32-to-qcode.c
GEN ui/input-keymap-usb-to-qcode.c
GEN ui/input-keymap-x11-to-qcode.c
GEN ui/input-keymap-xorgevdev-to-qcode.c
GEN ui/input-keymap-xorgkbd-to-qcode.c
GEN ui/input-keymap-xorgxquartz-to-qcode.c
GEN ui/input-keymap-xorgxwin-to-qcode.c
GEN ui/input-keymap-osx-to-qcode.c
GEN tests/test-qapi-gen
GEN trace-root.h
GEN accel/kvm/trace.h
GEN accel/tcg/trace.h
GEN audio/trace.h
GEN block/trace.h
GEN chardev/trace.h
GEN crypto/trace.h
GEN hw/9pfs/trace.h
GEN hw/acpi/trace.h
GEN hw/alpha/trace.h
GEN hw/arm/trace.h
GEN hw/audio/trace.h
GEN hw/block/trace.h
GEN hw/block/dataplane/trace.h
GEN hw/char/trace.h
GEN hw/display/trace.h
GEN hw/dma/trace.h
GEN hw/hppa/trace.h
GEN hw/i2c/trace.h
GEN hw/i386/trace.h
GEN hw/i386/xen/trace.h
GEN hw/ide/trace.h
GEN hw/input/trace.h
GEN hw/intc/trace.h
GEN hw/isa/trace.h
GEN hw/mem/trace.h
GEN hw/misc/trace.h
GEN hw/misc/macio/trace.h
GEN hw/net/trace.h
GEN hw/nvram/trace.h
GEN hw/pci/trace.h
GEN hw/pci-host/trace.h
GEN hw/ppc/trace.h
GEN hw/rdma/trace.h
GEN hw/rdma/vmw/trace.h
GEN hw/s390x/trace.h
GEN hw/scsi/trace.h
GEN hw/sd/trace.h
GEN hw/sparc/trace.h
GEN hw/sparc64/trace.h
GEN hw/timer/trace.h
GEN hw/tpm/trace.h
GEN hw/usb/trace.h
GEN hw/vfio/trace.h
GEN hw/virtio/trace.h
GEN hw/xen/trace.h
GEN io/trace.h
GEN linux-user/trace.h
GEN migration/trace.h
GEN nbd/trace.h
GEN net/trace.h
GEN qapi/trace.h
GEN qom/trace.h
GEN scsi/trace.h
GEN target/arm/trace.h
GEN target/i386/trace.h
GEN target/mips/trace.h
GEN target/ppc/trace.h
GEN target/s390x/trace.h
GEN target/sparc/trace.h
GEN ui/trace.h
GEN util/trace.h
GEN trace-root.c
GEN accel/kvm/trace.c
GEN accel/tcg/trace.c
GEN audio/trace.c
GEN block/trace.c
GEN chardev/trace.c
GEN crypto/trace.c
GEN hw/9pfs/trace.c
GEN hw/acpi/trace.c
GEN hw/alpha/trace.c
GEN hw/arm/trace.c
GEN hw/audio/trace.c
GEN hw/block/trace.c
GEN hw/block/dataplane/trace.c
GEN hw/char/trace.c
GEN hw/display/trace.c
GEN hw/dma/trace.c
GEN hw/hppa/trace.c
GEN hw/i2c/trace.c
GEN hw/i386/trace.c
GEN hw/i386/xen/trace.c
GEN hw/ide/trace.c
GEN hw/input/trace.c
GEN hw/intc/trace.c
GEN hw/isa/trace.c
GEN hw/mem/trace.c
GEN hw/misc/trace.c
GEN hw/misc/macio/trace.c
GEN hw/net/trace.c
GEN hw/nvram/trace.c
GEN hw/pci/trace.c
GEN hw/pci-host/trace.c
GEN hw/ppc/trace.c
GEN hw/rdma/trace.c
GEN hw/rdma/vmw/trace.c
GEN hw/s390x/trace.c
GEN hw/scsi/trace.c
GEN hw/sd/trace.c
GEN hw/sparc/trace.c
GEN hw/sparc64/trace.c
GEN hw/timer/trace.c
GEN hw/tpm/trace.c
GEN hw/usb/trace.c
GEN hw/vfio/trace.c
GEN hw/virtio/trace.c
GEN hw/xen/trace.c
GEN io/trace.c
GEN linux-user/trace.c
GEN migration/trace.c
GEN nbd/trace.c
GEN net/trace.c
GEN qapi/trace.c
GEN qom/trace.c
GEN scsi/trace.c
GEN target/arm/trace.c
GEN target/i386/trace.c
GEN target/mips/trace.c
GEN target/ppc/trace.c
GEN target/s390x/trace.c
GEN target/sparc/trace.c
GEN ui/trace.c
GEN util/trace.c
GEN config-all-devices.mak
DEP /tmp/qemu-test/src/dtc/tests/dumptrees.c
DEP /tmp/qemu-test/src/dtc/tests/trees.S
DEP /tmp/qemu-test/src/dtc/tests/testutils.c
DEP /tmp/qemu-test/src/dtc/tests/value-labels.c
DEP /tmp/qemu-test/src/dtc/tests/asm_tree_dump.c
DEP /tmp/qemu-test/src/dtc/tests/truncated_property.c
DEP /tmp/qemu-test/src/dtc/tests/check_path.c
DEP /tmp/qemu-test/src/dtc/tests/overlay_bad_fixup.c
DEP /tmp/qemu-test/src/dtc/tests/overlay.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/property_iterate.c
DEP /tmp/qemu-test/src/dtc/tests/utilfdt_test.c
DEP /tmp/qemu-test/src/dtc/tests/integer-expressions.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset_aliases.c
DEP /tmp/qemu-test/src/dtc/tests/add_subnode_with_nops.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_unordered.c
DEP /tmp/qemu-test/src/dtc/tests/dtb_reverse.c
DEP /tmp/qemu-test/src/dtc/tests/dtbs_equal_ordered.c
DEP /tmp/qemu-test/src/dtc/tests/extra-terminating-null.c
DEP /tmp/qemu-test/src/dtc/tests/incbin.c
DEP /tmp/qemu-test/src/dtc/tests/boot-cpuid.c
DEP /tmp/qemu-test/src/dtc/tests/phandle_format.c
DEP /tmp/qemu-test/src/dtc/tests/path-references.c
DEP /tmp/qemu-test/src/dtc/tests/references.c
DEP /tmp/qemu-test/src/dtc/tests/propname_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/string_escapes.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop2.c
DEP /tmp/qemu-test/src/dtc/tests/appendprop1.c
DEP /tmp/qemu-test/src/dtc/tests/del_node.c
DEP /tmp/qemu-test/src/dtc/tests/del_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop.c
DEP /tmp/qemu-test/src/dtc/tests/set_name.c
DEP /tmp/qemu-test/src/dtc/tests/rw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/open_pack.c
DEP /tmp/qemu-test/src/dtc/tests/nopulate.c
DEP /tmp/qemu-test/src/dtc/tests/mangle-layout.c
DEP /tmp/qemu-test/src/dtc/tests/move_and_save.c
DEP /tmp/qemu-test/src/dtc/tests/nop_node.c
DEP /tmp/qemu-test/src/dtc/tests/sw_tree1.c
DEP /tmp/qemu-test/src/dtc/tests/nop_property.c
DEP /tmp/qemu-test/src/dtc/tests/setprop_inplace.c
DEP /tmp/qemu-test/src/dtc/tests/stringlist.c
DEP /tmp/qemu-test/src/dtc/tests/addr_size_cells.c
DEP /tmp/qemu-test/src/dtc/tests/notfound.c
DEP /tmp/qemu-test/src/dtc/tests/sized_cells.c
DEP /tmp/qemu-test/src/dtc/tests/char_literal.c
DEP /tmp/qemu-test/src/dtc/tests/get_alias.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_check_compatible.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/node_offset_by_prop_value.c
DEP /tmp/qemu-test/src/dtc/tests/parent_offset.c
DEP /tmp/qemu-test/src/dtc/tests/supernode_atdepth_offset.c
DEP /tmp/qemu-test/src/dtc/tests/get_path.c
DEP /tmp/qemu-test/src/dtc/tests/get_phandle.c
DEP /tmp/qemu-test/src/dtc/tests/get_name.c
DEP /tmp/qemu-test/src/dtc/tests/getprop.c
DEP /tmp/qemu-test/src/dtc/tests/path_offset.c
DEP /tmp/qemu-test/src/dtc/tests/subnode_offset.c
DEP /tmp/qemu-test/src/dtc/tests/root_node.c
DEP /tmp/qemu-test/src/dtc/tests/find_property.c
DEP /tmp/qemu-test/src/dtc/tests/get_mem_rsv.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_overlay.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_addresses.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_empty_tree.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_strerror.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_rw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_sw.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_wip.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt_ro.c
DEP /tmp/qemu-test/src/dtc/libfdt/fdt.c
DEP /tmp/qemu-test/src/dtc/util.c
DEP /tmp/qemu-test/src/dtc/fdtoverlay.c
DEP /tmp/qemu-test/src/dtc/fdtput.c
DEP /tmp/qemu-test/src/dtc/fdtget.c
DEP /tmp/qemu-test/src/dtc/fdtdump.c
LEX convert-dtsv0-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/srcpos.c
BISON dtc-parser.tab.c
LEX dtc-lexer.lex.c
DEP /tmp/qemu-test/src/dtc/livetree.c
DEP /tmp/qemu-test/src/dtc/treesource.c
DEP /tmp/qemu-test/src/dtc/fstree.c
DEP /tmp/qemu-test/src/dtc/flattree.c
DEP /tmp/qemu-test/src/dtc/dtc.c
DEP /tmp/qemu-test/src/dtc/data.c
DEP /tmp/qemu-test/src/dtc/checks.c
DEP convert-dtsv0-lexer.lex.c
DEP dtc-parser.tab.c
DEP dtc-lexer.lex.c
CHK version_gen.h
UPD version_gen.h
DEP /tmp/qemu-test/src/dtc/util.c
CC libfdt/fdt.o
CC libfdt/fdt_ro.o
CC libfdt/fdt_wip.o
CC libfdt/fdt_rw.o
CC libfdt/fdt_empty_tree.o
CC libfdt/fdt_sw.o
CC libfdt/fdt_addresses.o
CC libfdt/fdt_strerror.o
CC libfdt/fdt_overlay.o
AR libfdt/libfdt.a
x86_64-w64-mingw32-ar: creating libfdt/libfdt.a
a - libfdt/fdt.o
a - libfdt/fdt_ro.o
a - libfdt/fdt_wip.o
a - libfdt/fdt_sw.o
a - libfdt/fdt_rw.o
a - libfdt/fdt_strerror.o
a - libfdt/fdt_empty_tree.o
a - libfdt/fdt_addresses.o
a - libfdt/fdt_overlay.o
RC version.o
GEN qga/qapi-generated/qapi-gen
CC qapi/qapi-builtin-types.o
CC qapi/qapi-types.o
CC qapi/qapi-types-char.o
CC qapi/qapi-types-crypto.o
CC qapi/qapi-types-block-core.o
CC qapi/qapi-types-common.o
CC qapi/qapi-types-block.o
CC qapi/qapi-types-introspect.o
CC qapi/qapi-types-job.o
CC qapi/qapi-types-migration.o
CC qapi/qapi-types-misc.o
CC qapi/qapi-types-net.o
CC qapi/qapi-types-rocker.o
CC qapi/qapi-types-run-state.o
CC qapi/qapi-types-sockets.o
CC qapi/qapi-types-tpm.o
CC qapi/qapi-types-trace.o
CC qapi/qapi-types-transaction.o
CC qapi/qapi-types-ui.o
CC qapi/qapi-visit.o
CC qapi/qapi-builtin-visit.o
CC qapi/qapi-visit-block-core.o
CC qapi/qapi-visit-block.o
CC qapi/qapi-visit-char.o
CC qapi/qapi-visit-common.o
CC qapi/qapi-visit-crypto.o
CC qapi/qapi-visit-introspect.o
CC qapi/qapi-visit-job.o
CC qapi/qapi-visit-migration.o
CC qapi/qapi-visit-misc.o
CC qapi/qapi-visit-net.o
CC qapi/qapi-visit-rocker.o
CC qapi/qapi-visit-run-state.o
CC qapi/qapi-visit-sockets.o
CC qapi/qapi-visit-tpm.o
CC qapi/qapi-visit-trace.o
CC qapi/qapi-visit-transaction.o
CC qapi/qapi-visit-ui.o
CC qapi/qapi-events.o
CC qapi/qapi-events-block-core.o
CC qapi/qapi-events-block.o
CC qapi/qapi-events-char.o
CC qapi/qapi-events-common.o
CC qapi/qapi-events-crypto.o
CC qapi/qapi-events-introspect.o
CC qapi/qapi-events-job.o
CC qapi/qapi-events-migration.o
CC qapi/qapi-events-misc.o
CC qapi/qapi-events-net.o
CC qapi/qapi-events-rocker.o
CC qapi/qapi-events-run-state.o
CC qapi/qapi-events-sockets.o
CC qapi/qapi-events-tpm.o
CC qapi/qapi-events-trace.o
CC qapi/qapi-events-transaction.o
CC qapi/qapi-events-ui.o
CC qapi/qapi-introspect.o
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qmp-registry.o
CC qapi/qmp-dispatch.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qnum.o
CC qobject/qstring.o
CC qobject/qdict.o
CC qobject/qlist.o
CC qobject/qbool.o
CC qobject/qlit.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
CC qobject/block-qdict.o
CC trace/simple.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/bufferiszero.o
CC util/lockcnt.o
CC util/aiocb.o
CC util/async.o
CC util/aio-wait.o
CC util/thread-pool.o
CC util/qemu-timer.o
CC util/main-loop.o
CC util/iohandler.o
CC util/aio-win32.o
CC util/event_notifier-win32.o
CC util/oslib-win32.o
CC util/qemu-thread-win32.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/host-utils.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/cacheinfo.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/uri.o
CC util/notify.o
CC util/qemu-option.o
CC util/qemu-progress.o
CC util/keyval.o
CC util/hexdump.o
CC util/crc32c.o
CC util/uuid.o
CC util/throttle.o
CC util/getauxval.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-win32.o
CC util/buffer.o
CC util/timed-average.o
CC util/base64.o
CC util/log.o
CC util/pagesize.o
CC util/qdist.o
CC util/qht.o
CC util/range.o
CC util/stats64.o
CC util/systemd.o
CC util/iova-tree.o
CC trace-root.o
CC accel/kvm/trace.o
CC accel/tcg/trace.o
CC audio/trace.o
CC block/trace.o
CC chardev/trace.o
CC crypto/trace.o
CC hw/9pfs/trace.o
CC hw/acpi/trace.o
CC hw/alpha/trace.o
CC hw/arm/trace.o
CC hw/audio/trace.o
CC hw/block/trace.o
CC hw/block/dataplane/trace.o
CC hw/char/trace.o
CC hw/display/trace.o
CC hw/dma/trace.o
CC hw/hppa/trace.o
CC hw/i2c/trace.o
CC hw/i386/trace.o
CC hw/i386/xen/trace.o
CC hw/ide/trace.o
CC hw/input/trace.o
CC hw/intc/trace.o
CC hw/isa/trace.o
CC hw/mem/trace.o
CC hw/misc/trace.o
CC hw/misc/macio/trace.o
CC hw/net/trace.o
CC hw/nvram/trace.o
CC hw/pci/trace.o
CC hw/pci-host/trace.o
CC hw/ppc/trace.o
CC hw/rdma/trace.o
CC hw/rdma/vmw/trace.o
CC hw/s390x/trace.o
CC hw/scsi/trace.o
CC hw/sd/trace.o
CC hw/sparc/trace.o
CC hw/sparc64/trace.o
CC hw/timer/trace.o
CC hw/tpm/trace.o
CC hw/usb/trace.o
CC hw/vfio/trace.o
CC hw/virtio/trace.o
CC hw/xen/trace.o
CC io/trace.o
CC linux-user/trace.o
CC migration/trace.o
CC nbd/trace.o
CC net/trace.o
CC qapi/trace.o
CC qom/trace.o
CC scsi/trace.o
CC target/arm/trace.o
CC target/i386/trace.o
CC target/mips/trace.o
CC target/ppc/trace.o
CC target/s390x/trace.o
CC target/sparc/trace.o
CC ui/trace.o
CC util/trace.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/fdset.o
CC stubs/gdbstub.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/machine-init-done.o
CC stubs/migr-blocker.o
CC stubs/change-state-handler.o
CC stubs/monitor.o
CC stubs/notify-event.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/tpm.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/fd-register.o
CC stubs/qmp_memory_device.o
CC stubs/target-monitor-defs.o
CC stubs/target-get-monitor-def.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/vmgenid.o
CC stubs/xen-common.o
CC stubs/xen-hvm.o
CC stubs/pci-host-piix.o
CC stubs/ram-block.o
GEN qemu-img-cmds.h
CC block.o
CC blockjob.o
CC job.o
CC qemu-io-cmds.o
CC replication.o
CC block/raw-format.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qcow2-bitmap.o
CC block/qed.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blkreplay.o
CC block/blklogwrites.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/file-win32.o
CC block/win32-aio.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/create.o
CC block/throttle-groups.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/throttle.o
CC block/copy-on-read.o
CC block/crypto.o
CC block/fleecing-hook.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC scsi/utils.o
CC scsi/pr-manager-stub.o
CC block/curl.o
CC block/ssh.o
CC block/dmg-bz2.o
/tmp/qemu-test/src/block/fleecing-hook.c: In function 'fleecing_hook_cow':
/tmp/qemu-test/src/block/fleecing-hook.c:61:12: error: implicit declaration of function 'hbitmap_next_dirty_area'; did you mean 'hbitmap_next_zero'? [-Werror=implicit-function-declaration]
while (hbitmap_next_dirty_area(s->cow_bitmap, &off, end, &len)) {
^~~~~~~~~~~~~~~~~~~~~~~
hbitmap_next_zero
/tmp/qemu-test/src/block/fleecing-hook.c:61:12: error: nested extern declaration of 'hbitmap_next_dirty_area' [-Werror=nested-externs]
cc1: all warnings being treated as errors
make: *** [/tmp/qemu-test/src/rules.mak:69: block/fleecing-hook.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
File "./tests/docker/docker.py", line 565, in <module>
sys.exit(main())
File "./tests/docker/docker.py", line 562, in main
return args.cmdobj.run(args, argv)
File "./tests/docker/docker.py", line 308, in run
return Docker().run(argv, args.keep, quiet=args.quiet)
File "./tests/docker/docker.py", line 276, in run
quiet=quiet)
File "./tests/docker/docker.py", line 183, in _do_check
return subprocess.check_call(self._command + cmd, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 186, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=c06a5f2aa25f11e8b87852540069c830', '-u', '1000', '--security-opt', 'seccomp=unconfined', '--rm', '--net=none', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=8', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-askh7zyi/src/docker-src.2018-08-17-16.54.21.21368:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit status 2
make[1]: *** [tests/docker/Makefile.include:213: docker-run] Error 1
make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-askh7zyi/src'
make: *** [tests/docker/Makefile.include:247: docker-run-test-mingw@fedora] Error 2
real 2m20.483s
user 0m4.614s
sys 0m3.418s
=== OUTPUT END ===
Test command exited with code: 2
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
` (3 preceding siblings ...)
2018-08-17 20:56 ` no-reply
@ 2018-08-17 21:01 ` no-reply
2018-08-17 21:50 ` Max Reitz
5 siblings, 0 replies; 15+ messages in thread
From: no-reply @ 2018-08-17 21:01 UTC (permalink / raw)
To: vsementsov; +Cc: famz, qemu-devel, qemu-block, kwolf
Hi,
This series failed docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
Type: series
Message-id: 20180814170126.56461-1-vsementsov@virtuozzo.com
Subject: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
=== TEST SCRIPT BEGIN ===
#!/bin/bash
time make docker-test-quick@centos7 SHOW_ENV=1 J=8
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
d307c74eae new, node-graph-based fleecing and backup
=== OUTPUT BEGIN ===
BUILD centos7
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-yl0gza68/src'
GEN /var/tmp/patchew-tester-tmp-yl0gza68/src/docker-src.2018-08-17-16.59.54.2505/qemu.tar
Cloning into '/var/tmp/patchew-tester-tmp-yl0gza68/src/docker-src.2018-08-17-16.59.54.2505/qemu.tar.vroot'...
done.
Checking out files: 100% (6322/6322), done.
Your branch is up-to-date with 'origin/test'.
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-yl0gza68/src/docker-src.2018-08-17-16.59.54.2505/qemu.tar.vroot/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered for path 'ui/keycodemapdb'
Cloning into '/var/tmp/patchew-tester-tmp-yl0gza68/src/docker-src.2018-08-17-16.59.54.2505/qemu.tar.vroot/ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out '6b3d716e2b6472eb7189d3220552280ef3d832ce'
COPY RUNNER
RUN test-quick in qemu:centos7
Packages installed:
SDL-devel-1.2.15-14.el7.x86_64
bison-3.0.4-1.el7.x86_64
bzip2-devel-1.0.6-13.el7.x86_64
ccache-3.3.4-1.el7.x86_64
csnappy-devel-0-6.20150729gitd7bc683.el7.x86_64
flex-2.5.37-3.el7.x86_64
gcc-4.8.5-16.el7_4.2.x86_64
gettext-0.19.8.1-2.el7.x86_64
git-1.8.3.1-12.el7_4.x86_64
glib2-devel-2.50.3-3.el7.x86_64
libepoxy-devel-1.3.1-1.el7.x86_64
libfdt-devel-1.4.6-1.el7.x86_64
lzo-devel-2.06-8.el7.x86_64
make-3.82-23.el7.x86_64
mesa-libEGL-devel-17.0.1-6.20170307.el7.x86_64
mesa-libgbm-devel-17.0.1-6.20170307.el7.x86_64
package g++ is not installed
package librdmacm-devel is not installed
pixman-devel-0.34.0-1.el7.x86_64
spice-glib-devel-0.33-6.el7_4.1.x86_64
spice-server-devel-0.12.8-2.el7.1.x86_64
tar-1.26-32.el7.x86_64
vte-devel-0.28.2-10.el7.x86_64
xen-devel-4.6.6-10.el7.x86_64
zlib-devel-1.2.7-17.el7.x86_64
Environment variables:
PACKAGES=bison bzip2-devel ccache csnappy-devel flex g++ gcc gettext git glib2-devel libepoxy-devel libfdt-devel librdmacm-devel lzo-devel make mesa-libEGL-devel mesa-libgbm-devel pixman-devel SDL-devel spice-glib-devel spice-server-devel tar vte-devel xen-devel zlib-devel
HOSTNAME=e9a9f6bac41c
MAKEFLAGS= -j8
J=8
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
TARGET_LIST=
SHLVL=1
HOME=/home/patchew
TEST_DIR=/tmp/qemu-test
FEATURES= dtc
DEBUG=
_=/usr/bin/env
Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu --prefix=/tmp/qemu-test/install
No C++ compiler available; disabling C++ specific optional code
Install prefix /tmp/qemu-test/install
BIOS directory /tmp/qemu-test/install/share/qemu
firmware path /tmp/qemu-test/install/share/qemu-firmware
binary directory /tmp/qemu-test/install/bin
library directory /tmp/qemu-test/install/lib
module directory /tmp/qemu-test/install/lib/qemu
libexec directory /tmp/qemu-test/install/libexec
include directory /tmp/qemu-test/install/include
config directory /tmp/qemu-test/install/etc
local state directory /tmp/qemu-test/install/var
Manual directory /tmp/qemu-test/install/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path /tmp/qemu-test/src
GIT binary git
GIT submodules
C compiler cc
Host C compiler cc
C++ compiler
Objective-C compiler cc
ARFLAGS rv
CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
QEMU_CFLAGS -I/usr/include/pixman-1 -Werror -pthread -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wendif-labels -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -Wno-missing-braces -I/usr/include/libpng15 -I/usr/include/spice-server -I/usr/include/cacard -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/nss3 -I/usr/include/nspr4 -I/usr/include/spice-1
LDFLAGS -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
QEMU_LDFLAGS
make make
install install
python python -B
smbd /usr/sbin/smbd
module support no
host CPU x86_64
host big endian no
target list x86_64-softmmu aarch64-softmmu
gprof enabled no
sparse enabled no
strip binaries yes
profiler no
static build no
SDL support yes (1.2.15)
GTK support yes (2.24.31)
GTK GL support no
VTE support yes (0.28.2)
TLS priority NORMAL
GNUTLS support no
GNUTLS rnd no
libgcrypt no
libgcrypt kdf no
nettle no
nettle kdf no
libtasn1 no
curses support yes
virgl support no
curl support no
mingw32 support no
Audio drivers oss
Block whitelist (rw)
Block whitelist (ro)
VirtFS support no
Multipath support no
VNC support yes
VNC SASL support no
VNC JPEG support no
VNC PNG support yes
xen support yes
xen ctrl version 40600
pv dom build no
brlapi support no
bluez support no
Documentation no
PIE yes
vde support no
netmap support no
Linux AIO support no
ATTR/XATTR support yes
Install blobs yes
KVM support yes
HAX support no
HVF support no
WHPX support no
TCG support yes
TCG debug enabled no
TCG interpreter no
malloc trim support yes
RDMA support yes
fdt support system
membarrier no
preadv support yes
fdatasync yes
madvise yes
posix_madvise yes
posix_memalign yes
libcap-ng support no
vhost-net support yes
vhost-crypto support yes
vhost-scsi support yes
vhost-vsock support yes
vhost-user support yes
Trace backends log
spice support yes (0.12.12/0.12.8)
rbd support no
xfsctl support no
smartcard support yes
libusb no
usb net redir no
OpenGL support yes
OpenGL dmabufs yes
libiscsi support no
libnfs support no
build guest agent yes
QGA VSS support no
QGA w32 disk info no
QGA MSI support no
seccomp support no
coroutine backend ucontext
coroutine pool yes
debug stack usage no
mutex debugging no
crypto afalg no
GlusterFS support no
gcov gcov
gcov enabled no
TPM support yes
libssh2 support no
TPM passthrough yes
TPM emulator yes
QOM debugging yes
Live block migration yes
lzo support yes
snappy support no
bzip2 support yes
NUMA host support no
libxml2 no
tcmalloc support no
jemalloc support no
avx2 optimization yes
replication support yes
VxHS block device no
capstone no
docker no
WARNING: Use of GTK 2.0 is deprecated and will be removed in
WARNING: future releases. Please switch to using GTK 3.0
WARNING: Use of SDL 1.2 is deprecated and will be removed in
WARNING: future releases. Please switch to using SDL 2.0
NOTE: cross-compilers enabled: 'cc'
GEN x86_64-softmmu/config-devices.mak.tmp
GEN aarch64-softmmu/config-devices.mak.tmp
GEN config-host.h
GEN qemu-options.def
GEN qapi-gen
GEN trace/generated-helpers-wrappers.h
GEN trace/generated-tcg-tracers.h
GEN trace/generated-helpers.h
GEN trace/generated-helpers.c
GEN module_block.h
GEN x86_64-softmmu/config-devices.mak
GEN ui/input-keymap-atset1-to-qcode.c
GEN aarch64-softmmu/config-devices.mak
GEN ui/input-keymap-linux-to-qcode.c
GEN ui/input-keymap-qcode-to-atset1.c
GEN ui/input-keymap-qcode-to-atset2.c
GEN ui/input-keymap-qcode-to-atset3.c
GEN ui/input-keymap-qcode-to-linux.c
GEN ui/input-keymap-qcode-to-qnum.c
GEN ui/input-keymap-qcode-to-sun.c
GEN ui/input-keymap-qnum-to-qcode.c
GEN ui/input-keymap-win32-to-qcode.c
GEN ui/input-keymap-usb-to-qcode.c
GEN ui/input-keymap-x11-to-qcode.c
GEN ui/input-keymap-xorgevdev-to-qcode.c
GEN ui/input-keymap-xorgkbd-to-qcode.c
GEN ui/input-keymap-xorgxquartz-to-qcode.c
GEN ui/input-keymap-xorgxwin-to-qcode.c
GEN ui/input-keymap-osx-to-qcode.c
GEN tests/test-qapi-gen
GEN trace-root.h
GEN accel/kvm/trace.h
GEN accel/tcg/trace.h
GEN audio/trace.h
GEN block/trace.h
GEN chardev/trace.h
GEN crypto/trace.h
GEN hw/9pfs/trace.h
GEN hw/acpi/trace.h
GEN hw/alpha/trace.h
GEN hw/arm/trace.h
GEN hw/audio/trace.h
GEN hw/block/trace.h
GEN hw/block/dataplane/trace.h
GEN hw/char/trace.h
GEN hw/display/trace.h
GEN hw/dma/trace.h
GEN hw/hppa/trace.h
GEN hw/i2c/trace.h
GEN hw/i386/trace.h
GEN hw/i386/xen/trace.h
GEN hw/ide/trace.h
GEN hw/input/trace.h
GEN hw/intc/trace.h
GEN hw/isa/trace.h
GEN hw/mem/trace.h
GEN hw/misc/trace.h
GEN hw/misc/macio/trace.h
GEN hw/net/trace.h
GEN hw/nvram/trace.h
GEN hw/pci/trace.h
GEN hw/pci-host/trace.h
GEN hw/ppc/trace.h
GEN hw/rdma/trace.h
GEN hw/rdma/vmw/trace.h
GEN hw/s390x/trace.h
GEN hw/scsi/trace.h
GEN hw/sd/trace.h
GEN hw/sparc/trace.h
GEN hw/sparc64/trace.h
GEN hw/timer/trace.h
GEN hw/tpm/trace.h
GEN hw/usb/trace.h
GEN hw/vfio/trace.h
GEN hw/virtio/trace.h
GEN hw/xen/trace.h
GEN io/trace.h
GEN linux-user/trace.h
GEN migration/trace.h
GEN nbd/trace.h
GEN net/trace.h
GEN qapi/trace.h
GEN qom/trace.h
GEN scsi/trace.h
GEN target/arm/trace.h
GEN target/i386/trace.h
GEN target/mips/trace.h
GEN target/ppc/trace.h
GEN target/s390x/trace.h
GEN target/sparc/trace.h
GEN ui/trace.h
GEN util/trace.h
GEN trace-root.c
GEN accel/kvm/trace.c
GEN accel/tcg/trace.c
GEN audio/trace.c
GEN block/trace.c
GEN chardev/trace.c
GEN crypto/trace.c
GEN hw/9pfs/trace.c
GEN hw/acpi/trace.c
GEN hw/alpha/trace.c
GEN hw/arm/trace.c
GEN hw/audio/trace.c
GEN hw/block/trace.c
GEN hw/block/dataplane/trace.c
GEN hw/char/trace.c
GEN hw/display/trace.c
GEN hw/dma/trace.c
GEN hw/hppa/trace.c
GEN hw/i2c/trace.c
GEN hw/i386/trace.c
GEN hw/i386/xen/trace.c
GEN hw/ide/trace.c
GEN hw/input/trace.c
GEN hw/intc/trace.c
GEN hw/isa/trace.c
GEN hw/mem/trace.c
GEN hw/misc/trace.c
GEN hw/misc/macio/trace.c
GEN hw/net/trace.c
GEN hw/nvram/trace.c
GEN hw/pci/trace.c
GEN hw/pci-host/trace.c
GEN hw/ppc/trace.c
GEN hw/rdma/trace.c
GEN hw/rdma/vmw/trace.c
GEN hw/s390x/trace.c
GEN hw/scsi/trace.c
GEN hw/sd/trace.c
GEN hw/sparc/trace.c
GEN hw/sparc64/trace.c
GEN hw/timer/trace.c
GEN hw/tpm/trace.c
GEN hw/usb/trace.c
GEN hw/vfio/trace.c
GEN hw/virtio/trace.c
GEN hw/xen/trace.c
GEN io/trace.c
GEN linux-user/trace.c
GEN migration/trace.c
GEN nbd/trace.c
GEN net/trace.c
GEN qapi/trace.c
GEN qom/trace.c
GEN scsi/trace.c
GEN target/arm/trace.c
GEN target/i386/trace.c
GEN target/mips/trace.c
GEN target/ppc/trace.c
GEN target/s390x/trace.c
GEN target/sparc/trace.c
GEN ui/trace.c
GEN util/trace.c
GEN config-all-devices.mak
CC tests/qemu-iotests/socket_scm_helper.o
GEN qga/qapi-generated/qapi-gen
CC qapi/qapi-types.o
CC qapi/qapi-builtin-types.o
CC qapi/qapi-types-block-core.o
CC qapi/qapi-types-block.o
CC qapi/qapi-types-char.o
CC qapi/qapi-types-common.o
CC qapi/qapi-types-crypto.o
CC qapi/qapi-types-introspect.o
CC qapi/qapi-types-job.o
CC qapi/qapi-types-migration.o
CC qapi/qapi-types-misc.o
CC qapi/qapi-types-net.o
CC qapi/qapi-types-rocker.o
CC qapi/qapi-types-run-state.o
CC qapi/qapi-types-sockets.o
CC qapi/qapi-types-tpm.o
CC qapi/qapi-types-trace.o
CC qapi/qapi-types-transaction.o
CC qapi/qapi-types-ui.o
CC qapi/qapi-builtin-visit.o
CC qapi/qapi-visit.o
CC qapi/qapi-visit-block-core.o
CC qapi/qapi-visit-block.o
CC qapi/qapi-visit-char.o
CC qapi/qapi-visit-common.o
CC qapi/qapi-visit-crypto.o
CC qapi/qapi-visit-introspect.o
CC qapi/qapi-visit-job.o
CC qapi/qapi-visit-migration.o
CC qapi/qapi-visit-misc.o
CC qapi/qapi-visit-net.o
CC qapi/qapi-visit-rocker.o
CC qapi/qapi-visit-run-state.o
CC qapi/qapi-visit-sockets.o
CC qapi/qapi-visit-tpm.o
CC qapi/qapi-visit-trace.o
CC qapi/qapi-visit-transaction.o
CC qapi/qapi-visit-ui.o
CC qapi/qapi-events.o
CC qapi/qapi-events-block-core.o
CC qapi/qapi-events-block.o
CC qapi/qapi-events-char.o
CC qapi/qapi-events-common.o
CC qapi/qapi-events-crypto.o
CC qapi/qapi-events-introspect.o
CC qapi/qapi-events-job.o
CC qapi/qapi-events-migration.o
CC qapi/qapi-events-misc.o
CC qapi/qapi-events-net.o
CC qapi/qapi-events-rocker.o
CC qapi/qapi-events-run-state.o
CC qapi/qapi-events-sockets.o
CC qapi/qapi-events-tpm.o
CC qapi/qapi-events-trace.o
CC qapi/qapi-events-transaction.o
CC qapi/qapi-events-ui.o
CC qapi/qapi-introspect.o
CC qapi/qapi-visit-core.o
CC qapi/qapi-dealloc-visitor.o
CC qapi/qobject-input-visitor.o
CC qapi/qobject-output-visitor.o
CC qapi/qmp-registry.o
CC qapi/qmp-dispatch.o
CC qapi/string-input-visitor.o
CC qapi/string-output-visitor.o
CC qapi/opts-visitor.o
CC qapi/qapi-clone-visitor.o
CC qapi/qmp-event.o
CC qapi/qapi-util.o
CC qobject/qnull.o
CC qobject/qnum.o
CC qobject/qdict.o
CC qobject/qstring.o
CC qobject/qlist.o
CC qobject/qbool.o
CC qobject/qlit.o
CC qobject/qjson.o
CC qobject/qobject.o
CC qobject/json-lexer.o
CC qobject/json-streamer.o
CC qobject/json-parser.o
CC qobject/block-qdict.o
CC trace/control.o
CC trace/qmp.o
CC util/osdep.o
CC util/cutils.o
CC util/unicode.o
CC util/qemu-timer-common.o
CC util/bufferiszero.o
CC util/lockcnt.o
CC util/aiocb.o
CC util/async.o
CC util/aio-wait.o
CC util/thread-pool.o
CC util/qemu-timer.o
CC util/main-loop.o
CC util/iohandler.o
CC util/aio-posix.o
CC util/compatfd.o
CC util/event_notifier-posix.o
CC util/mmap-alloc.o
CC util/oslib-posix.o
CC util/qemu-openpty.o
CC util/qemu-thread-posix.o
CC util/memfd.o
CC util/envlist.o
CC util/path.o
CC util/module.o
CC util/host-utils.o
CC util/bitmap.o
CC util/bitops.o
CC util/hbitmap.o
CC util/fifo8.o
CC util/acl.o
CC util/cacheinfo.o
CC util/error.o
CC util/qemu-error.o
CC util/id.o
CC util/iov.o
CC util/qemu-config.o
CC util/qemu-sockets.o
CC util/uri.o
CC util/notify.o
CC util/qemu-option.o
CC util/qemu-progress.o
CC util/keyval.o
CC util/hexdump.o
CC util/crc32c.o
CC util/uuid.o
CC util/throttle.o
CC util/getauxval.o
CC util/readline.o
CC util/rcu.o
CC util/qemu-coroutine.o
CC util/qemu-coroutine-lock.o
CC util/qemu-coroutine-io.o
CC util/qemu-coroutine-sleep.o
CC util/coroutine-ucontext.o
CC util/buffer.o
CC util/timed-average.o
CC util/base64.o
CC util/log.o
CC util/pagesize.o
CC util/qdist.o
CC util/qht.o
CC util/range.o
CC util/stats64.o
CC util/systemd.o
CC util/iova-tree.o
CC util/vfio-helpers.o
CC trace-root.o
CC accel/kvm/trace.o
CC accel/tcg/trace.o
CC audio/trace.o
CC block/trace.o
CC chardev/trace.o
CC crypto/trace.o
CC hw/9pfs/trace.o
CC hw/acpi/trace.o
CC hw/alpha/trace.o
CC hw/arm/trace.o
CC hw/audio/trace.o
CC hw/block/trace.o
CC hw/block/dataplane/trace.o
CC hw/char/trace.o
CC hw/display/trace.o
CC hw/dma/trace.o
CC hw/hppa/trace.o
CC hw/i2c/trace.o
CC hw/i386/trace.o
CC hw/i386/xen/trace.o
CC hw/ide/trace.o
CC hw/input/trace.o
CC hw/intc/trace.o
CC hw/isa/trace.o
CC hw/mem/trace.o
CC hw/misc/trace.o
CC hw/misc/macio/trace.o
CC hw/net/trace.o
CC hw/nvram/trace.o
CC hw/pci/trace.o
CC hw/pci-host/trace.o
CC hw/ppc/trace.o
CC hw/rdma/trace.o
CC hw/rdma/vmw/trace.o
CC hw/s390x/trace.o
CC hw/scsi/trace.o
CC hw/sd/trace.o
CC hw/sparc/trace.o
CC hw/sparc64/trace.o
CC hw/timer/trace.o
CC hw/tpm/trace.o
CC hw/usb/trace.o
CC hw/vfio/trace.o
CC hw/virtio/trace.o
CC hw/xen/trace.o
CC io/trace.o
CC linux-user/trace.o
CC migration/trace.o
CC nbd/trace.o
CC net/trace.o
CC qapi/trace.o
CC qom/trace.o
CC scsi/trace.o
CC target/arm/trace.o
CC target/i386/trace.o
CC target/mips/trace.o
CC target/ppc/trace.o
CC target/s390x/trace.o
CC target/sparc/trace.o
CC ui/trace.o
CC util/trace.o
CC crypto/pbkdf-stub.o
CC stubs/arch-query-cpu-def.o
CC stubs/arch-query-cpu-model-expansion.o
CC stubs/arch-query-cpu-model-comparison.o
CC stubs/arch-query-cpu-model-baseline.o
CC stubs/bdrv-next-monitor-owned.o
CC stubs/blk-commit-all.o
CC stubs/blockdev-close-all-bdrv-states.o
CC stubs/clock-warp.o
CC stubs/cpu-get-clock.o
CC stubs/cpu-get-icount.o
CC stubs/dump.o
CC stubs/error-printf.o
CC stubs/fdset.o
CC stubs/gdbstub.o
CC stubs/get-vm-name.o
CC stubs/iothread.o
CC stubs/iothread-lock.o
CC stubs/is-daemonized.o
CC stubs/migr-blocker.o
CC stubs/machine-init-done.o
CC stubs/monitor.o
CC stubs/notify-event.o
CC stubs/change-state-handler.o
CC stubs/qtest.o
CC stubs/replay.o
CC stubs/runstate-check.o
CC stubs/set-fd-handler.o
CC stubs/slirp.o
CC stubs/sysbus.o
CC stubs/tpm.o
CC stubs/trace-control.o
CC stubs/uuid.o
CC stubs/vm-stop.o
CC stubs/vmstate.o
CC stubs/qmp_memory_device.o
CC stubs/target-get-monitor-def.o
CC stubs/target-monitor-defs.o
CC stubs/pc_madt_cpu_entry.o
CC stubs/vmgenid.o
CC stubs/xen-hvm.o
CC stubs/xen-common.o
CC stubs/pci-host-piix.o
CC stubs/ram-block.o
CC contrib/ivshmem-client/ivshmem-client.o
CC contrib/ivshmem-client/main.o
CC contrib/ivshmem-server/ivshmem-server.o
CC contrib/ivshmem-server/main.o
CC qemu-nbd.o
CC block.o
CC blockjob.o
CC job.o
CC qemu-io-cmds.o
CC replication.o
CC block/raw-format.o
CC block/qcow.o
CC block/vdi.o
CC block/vmdk.o
CC block/cloop.o
CC block/bochs.o
CC block/vpc.o
CC block/vvfat.o
CC block/dmg.o
CC block/qcow2.o
CC block/qcow2-refcount.o
CC block/qcow2-cluster.o
CC block/qcow2-snapshot.o
CC block/qcow2-cache.o
CC block/qcow2-bitmap.o
CC block/qed.o
CC block/qed-l2-cache.o
CC block/qed-table.o
CC block/qed-cluster.o
CC block/qed-check.o
CC block/vhdx.o
CC block/vhdx-endian.o
CC block/vhdx-log.o
CC block/quorum.o
CC block/parallels.o
CC block/blkdebug.o
CC block/blkverify.o
CC block/blkreplay.o
CC block/blklogwrites.o
CC block/block-backend.o
CC block/snapshot.o
CC block/qapi.o
CC block/file-posix.o
CC block/null.o
CC block/mirror.o
CC block/commit.o
CC block/io.o
CC block/create.o
CC block/throttle-groups.o
CC block/nvme.o
CC block/nbd.o
CC block/nbd-client.o
CC block/sheepdog.o
CC block/accounting.o
CC block/dirty-bitmap.o
CC block/write-threshold.o
CC block/backup.o
CC block/replication.o
CC block/throttle.o
CC block/copy-on-read.o
CC block/crypto.o
CC block/fleecing-hook.o
CC nbd/server.o
CC nbd/client.o
CC nbd/common.o
CC scsi/utils.o
CC scsi/pr-manager.o
CC scsi/pr-manager-helper.o
/tmp/qemu-test/src/block/fleecing-hook.c: In function 'fleecing_hook_cow':
/tmp/qemu-test/src/block/fleecing-hook.c:61:5: error: implicit declaration of function 'hbitmap_next_dirty_area' [-Werror=implicit-function-declaration]
while (hbitmap_next_dirty_area(s->cow_bitmap, &off, end, &len)) {
^
/tmp/qemu-test/src/block/fleecing-hook.c:61:5: error: nested extern declaration of 'hbitmap_next_dirty_area' [-Werror=nested-externs]
cc1: all warnings being treated as errors
make: *** [block/fleecing-hook.o] Error 1
make: *** Waiting for unfinished jobs....
Traceback (most recent call last):
File "./tests/docker/docker.py", line 565, in <module>
sys.exit(main())
File "./tests/docker/docker.py", line 562, in main
return args.cmdobj.run(args, argv)
File "./tests/docker/docker.py", line 308, in run
return Docker().run(argv, args.keep, quiet=args.quiet)
File "./tests/docker/docker.py", line 276, in run
quiet=quiet)
File "./tests/docker/docker.py", line 183, in _do_check
return subprocess.check_call(self._command + cmd, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 186, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=8266ec4ca26011e8a15752540069c830', '-u', '1000', '--security-opt', 'seccomp=unconfined', '--rm', '--net=none', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=8', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-yl0gza68/src/docker-src.2018-08-17-16.59.54.2505:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2
make[1]: *** [tests/docker/Makefile.include:213: docker-run] Error 1
make[1]: Leaving directory '/var/tmp/patchew-tester-tmp-yl0gza68/src'
make: *** [tests/docker/Makefile.include:247: docker-run-test-quick@centos7] Error 2
real 1m42.142s
user 0m4.759s
sys 0m3.341s
=== OUTPUT END ===
Test command exited with code: 2
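For context, the undeclared helper in the failure above scans a dirty bitmap for the next contiguous dirty region; it only existed in a separate, not-yet-merged series at the time. A minimal stand-alone sketch of the intended semantics (the name, signature, and flat `bool` array are invented for illustration; this is not the QEMU HBitmap implementation) might look like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for hbitmap_next_dirty_area(): starting at
 * *start, find the next run of set bits below end.  On success, return
 * true and update *start/*length to describe the dirty area. */
static bool next_dirty_area(const bool *bits, uint64_t *start, uint64_t end,
                            uint64_t *length)
{
    uint64_t off = *start;

    while (off < end && !bits[off]) {
        off++;                       /* skip the clean prefix */
    }
    if (off >= end) {
        return false;                /* no dirty bits left */
    }
    *start = off;
    while (off < end && bits[off]) {
        off++;                       /* measure the dirty run */
    }
    *length = off - *start;
    return true;
}
```

With `bits = {0, 1, 1, 0, 1}`, successive calls report the runs starting at 1 (length 2) and at 4 (length 1), then return false.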
---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
` (4 preceding siblings ...)
2018-08-17 21:01 ` no-reply
@ 2018-08-17 21:50 ` Max Reitz
2018-08-20 9:42 ` Vladimir Sementsov-Ogievskiy
5 siblings, 1 reply; 15+ messages in thread
From: Max Reitz @ 2018-08-17 21:50 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>
> [v2 is just a resend. I forget to add Den an me to cc, and I don't see the
> letter in my thunderbird at all. strange. sorry for that]
>
> Hi all!
>
> Here is an idea and kind of proof-of-concept of how to unify and improve
> push/pull backup schemes.
>
> Let's start from fleecing, a way of importing a point-in-time snapshot not
> creating a real snapshot. Now we do it with help of backup(sync=none)..
>
> Proposal:
>
> For fleecing we need two nodes:
>
> 1. fleecing hook. It's a filter which should be inserted on top of active
> disk. It's main purpose is handling guest writes by copy-on-write operation,
> i.e. it's a substitution for write-notifier in backup job.
>
> 2. fleecing cache. It's a target node for COW operations by fleecing-hook.
> It also represents a point-in-time snapshot of active disk for the readers.
It's not really COW, it's copy-before-write, isn't it? It's something
else entirely. COW is about writing data to an overlay *instead* of
writing it to the backing file. Ideally, you don't copy anything,
actually. It's just a side effect that you need to copy things if your
cluster size doesn't happen to match exactly what you're overwriting.
CBW is about copying everything to the overlay, and then leaving it
alone, instead writing the data to the backing file.
I'm not sure how important it is, I just wanted to make a note so we
don't misunderstand what's going on, somehow.
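The distinction can be sketched in a few lines of C (a toy model over a single 4-byte "cluster", with invented names; neither function is QEMU code):

```c
#include <assert.h>
#include <string.h>

enum { CLUSTER = 4 };

/* Copy-on-write (qcow2-style overlay): the guest write goes to the
 * overlay; the backing file is never modified.  Copying happens only as
 * a side effect, to fill the parts of the cluster the write misses. */
static void cow_write(unsigned char *overlay, const unsigned char *backing,
                      int off, const unsigned char *data, int len)
{
    memcpy(overlay, backing, CLUSTER);  /* fill the rest from backing */
    memcpy(overlay + off, data, len);   /* new data lands in the overlay */
}

/* Copy-before-write (fleecing hook): the old data is copied out to the
 * snapshot target first, then the write proceeds to the active disk. */
static void cbw_write(unsigned char *active, unsigned char *snapshot,
                      int off, const unsigned char *data, int len)
{
    memcpy(snapshot, active, CLUSTER);  /* preserve point-in-time data */
    memcpy(active + off, data, len);    /* then update the active disk */
}
```

In the COW case the backing data is never modified; in the CBW case the active disk is modified, but only after the old contents are safe in the snapshot target.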
The fleecing hook sounds good to me, but I'm asking myself why we don't
just add that behavior to the backup filter node. That is, re-implement
backup without before-write notifiers by making the filter node actually
do something (I think there was some reason, but I don't remember).
> The simplest realization of fleecing cache is a qcow2 temporary image, backed
> by active disk, i.e.:
>
> +-------+
> | Guest |
> +---+---+
> |
> v
> +---+-----------+ file +-----------------------+
> | Fleecing hook +---------->+ Fleecing cache(qcow2) |
> +---+-----------+ +---+-------------------+
> | |
> backing | |
> v |
> +---+---------+ backing |
> | Active disk +<----------------+
> +-------------+
>
> Hm. No, because of permissions I can't do so, I have to do like this:
>
> +-------+
> | Guest |
> +---+---+
> |
> v
> +---+-----------+ file +-----------------------+
> | Fleecing hook +---------->+ Fleecing cache(qcow2) |
> +---+-----------+ +-----+-----------------+
> | |
> backing | | backing
> v v
> +---+---------+ backing +-----+---------------------+
> | Active disk +<------------+ hack children permissions |
> +-------------+ | filter node |
> +---------------------------+
>
> Ok, this works, it's an image fleecing scheme without any block jobs.
So this is the goal? Hm. How useful is that really?
I suppose technically you could allow blockdev-add'ing a backup filter
node (though only with sync=none) and that would give you the same.
> Problems with realization:
>
> 1 What to do with hack-permissions-node? What is a true way to implement
> something like this? How to tune permissions to avoid this additional node?
Hm, how is that different from what we currently do? Because the block
job takes care of it?
Well, the user would have to guarantee the permissions. And they can
only do that by manually adding a filter node in the backing chain, I
suppose.
Or they just start a block job which guarantees the permissions work...
So maybe it's best to just stay with a block job as it is.
> 2 Inserting/removing the filter. Do we have working way or developments on
> it?
Berto has posted patches for an x-blockdev-reopen QMP command.
> 3. Interesting: we can't setup backing link to active disk before inserting
> fleecing-hook, otherwise, it will damage this link on insertion. This means,
> that we can't create fleecing cache node in advance with all backing to
> reference it when creating fleecing hook. And we can't prepare all the nodes
> in advance and then insert the filter.. We have to:
> 1. create all the nodes with all links in one big json, or
I think that should be possible with x-blockdev-reopen.
> 2. set backing links/create nodes automatically, as it is done in this RFC
> (it's a bad way I think, not clear, not transparent)
>
> 4. Is it a good idea to use "backing" and "file" links in such way?
I don't think so, because you're pretending it to be a COW relationship
when it isn't. Using backing for what it is is kind of OK (because
that's what the mirror and backup filters do, too), but then using
"file" additionally is a bit weird.
(Usually, "backing" refers to a filtered node with COW, and "file" then
refers to the node where the overlay driver stores its data and
metadata. But you'd store old data there (instead of new data), and no
metadata.)
> Benefits, or, what can be done:
>
> 1. We can implement special Fleecing cache filter driver, which will be a real
> cache: it will store some recently written clusters and RAM, it can have a
> backing (or file?) qcow2 child, to flush some clusters to the disk, etc. So,
> for each cluster of active disk we will have the following characteristics:
>
> - changed (changed in active disk since backup start)
> - copy (we need this cluster for fleecing user. For example, in RFC patch all
> clusters are "copy", cow_bitmap is initialized to all ones. We can use some
> existent bitmap to initialize cow_bitmap, and it will provide an "incremental"
> fleecing (for use in incremental backup push or pull)
> - cached in RAM
> - cached in disk
Would it be possible to implement such a filter driver that could just
be used as a backup target?
> On top of these characteristics we can implement the following features:
>
> 1. COR, we can cache clusters not only on writes but on reads too, if we have
> free space in ram-cache (and if not, do not cache at all, don't write to
> disk-cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESARY)
You can do the same with backup by just putting a fast overlay between
source and the backup, if your source is so slow, and then do COR, i.e.:
slow source --> fast overlay --> COR node --> backup filter
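The copy-on-read step of that chain can be sketched as follows (a toy per-cluster model with invented names, not QEMU's copy-on-read driver):

```c
#include <assert.h>

/* Toy copy-on-read: the first access pulls a cluster from the slow
 * source into the fast overlay; later reads are served from the
 * overlay and never touch the slow source again. */
static int cor_read(int *cache, unsigned char *cached, const int *slow,
                    int idx)
{
    if (!cached[idx]) {
        cache[idx] = slow[idx];  /* populate the fast overlay once */
        cached[idx] = 1;
    }
    return cache[idx];           /* served locally from now on */
}
```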
> 2. Benefit for guest: if cluster is unchanged and ram-cached, we can skip reading
> from the devise
>
> 3. If needed, we can drop unchanged ram-cached clusters from ram-cache
>
> 4. On guest write, if cluster is already cached, we just mark it "changed"
>
> 5. Lazy discards: in some setups, discards are not guaranteed to do something,
> so, we can at least defer some discards to the end of backup, if ram-cache is
> full.
>
> 6. We can implement discard operation in fleecing cache, to make cluster
> not needed (drop from cache, drop "copy" flag), so further reads of this
> cluster will return error. So, fleecing client may read cluster by cluster
> and discard them to reduce COW-load of the drive. We even can combine read
> and discard into one command, something like "read-once", or it may be a
> flag for fleecing-cache, that all reads are "read-once".
That would definitely be possible with a dedicated fleecing backup
target filter (and normal backup).
> 7. We can provide recommendations, on which clusters should fleecing-client
> copy first. Examples:
> a. copy ram-cached clusters first (obvious, to unload cache and reduce io
> overhead)
> b. copy zero-clusters last (the don't occupy place in cache, so, lets copy
> other clusters first)
> c. copy disk-cached clusters list (if we don't care about disk space,
> we can say, that for disk-cached clusters we already have a maximum
> io overhead, so let's copy other clusters first)
> d. copy disk-cached clusters with high priority (but after ram-cached) -
> if we don't have enough disk space
>
> So, there is a wide range of possible politics. How to provide these
> recommendations?
> 1. block_status
> 2. create separate interface
> 3. internal backup job may access shared fleecing object directly.
Hm, this is a completely different question now. Sure, extending backup
or mirror (or a future blockdev-copy) would make it easiest for us. But
then again, if you want to copy data off a point-in-time snapshot of a
volume, you can just use normal backup anyway, right?
So I'd say the purpose of fleecing is that you have an external tool
make use of it. Since my impression was that you'd just access the
volume externally and wouldn't actually copy all of the data off of it
(because that's what you could use the backup job for), I don't think I
can say much here, because my impression seems to have been wrong.
> About internal backup:
> Of course, we need a job which will copy clusters. But it will be simplified:
So you want to completely rebuild backup based on the fact that you
specifically have fleecing now?
I don't think that will be any simpler.
I mean, it would make blockdev-copy simpler, because we could
immediately replace backup by mirror, and then we just have mirror,
which would then automatically become blockdev-copy...
But it's not really going to be simpler, because whether you put the
copy-before-write logic into a dedicated block driver, or into the
backup filter driver, doesn't really make it simpler either way. Well,
adding a new driver always is a bit more complicated, so there's that.
> it should not care about guest writes, it copies clusters from a kind of
> snapshot which is not changing in time. This job should follow recommendations
> from fleecing scheme [7].
>
> What about the target?
>
> We can use separate node as target, and copy from fleecing cache to the target.
> If we have only ram-cache, it would be equal to current approach (data is copied
> directly to the target, even on COW). If we have both ram- and disk- caches, it's
> a cool solution for slow-target: instead of make guest wait for long write to
> backup target (when ram-cache is full) we can write to disk-cache which is local
> and fast.
Or you backup to a fast overlay over a slow target, and run a live
commit on the side.
> Another option is to combine fleecing cache and target somehow (I didn't think
> about this really).
>
> Finally, with one - two (three?) special filters we can implement all current
> fleecing/backup schemes in unique and very configurable way and do a lot more
> cool features and possibilities.
>
> What do you think?
I think adding a specific fleecing target filter makes sense because you
gave many reasons for interesting new use cases that could emerge from that.
But I think adding a new fleecing-hook driver just means moving the
implementation from backup to that new driver.
Max
> I really need help with fleecing graph creating/inserting/destroying, my code
> about it is a hack, I don't like it, it just works.
>
> About testing: to show that this work I use existing fleecing test - 222, a bit
> tuned (drop block-job and use new qmp command to remove filter).
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-17 21:50 ` Max Reitz
@ 2018-08-20 9:42 ` Vladimir Sementsov-Ogievskiy
2018-08-20 13:32 ` Max Reitz
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-20 9:42 UTC (permalink / raw)
To: Max Reitz, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
18.08.2018 00:50, Max Reitz wrote:
> On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>
>> [v2 is just a resend. I forget to add Den an me to cc, and I don't see the
>> letter in my thunderbird at all. strange. sorry for that]
>>
>> Hi all!
>>
>> Here is an idea and kind of proof-of-concept of how to unify and improve
>> push/pull backup schemes.
>>
>> Let's start from fleecing, a way of importing a point-in-time snapshot not
>> creating a real snapshot. Now we do it with help of backup(sync=none)..
>>
>> Proposal:
>>
>> For fleecing we need two nodes:
>>
>> 1. fleecing hook. It's a filter which should be inserted on top of active
>> disk. It's main purpose is handling guest writes by copy-on-write operation,
>> i.e. it's a substitution for write-notifier in backup job.
>>
>> 2. fleecing cache. It's a target node for COW operations by fleecing-hook.
>> It also represents a point-in-time snapshot of active disk for the readers.
> It's not really COW, it's copy-before-write, isn't it? It's something
> else entirely. COW is about writing data to an overlay *instead* of
> writing it to the backing file. Ideally, you don't copy anything,
> actually. It's just a side effect that you need to copy things if your
> cluster size doesn't happen to match exactly what you're overwriting.
Hmm, I'm not against that. But the COW term was already used in backup
to describe this.
>
> CBW is about copying everything to the overlay, and then leaving it
> alone, instead writing the data to the backing file.
>
> I'm not sure how important it is, I just wanted to make a note so we
> don't misunderstand what's going on, somehow.
>
>
> The fleecing hook sounds good to me, but I'm asking myself why we don't
> just add that behavior to the backup filter node. That is, re-implement
> backup without before-write notifiers by making the filter node actually
> do something (I think there was some reason, but I don't remember).
Fleecing doesn't need any block job at all, so I think it is good to
have the fleecing filter as a separate thing. It can then be reused by
internal backup.
Hm, we could call this backup-filter instead of fleecing-hook; what
would the difference be?
>
>> The simplest realization of fleecing cache is a qcow2 temporary image, backed
>> by active disk, i.e.:
>>
>> +-------+
>> | Guest |
>> +---+---+
>> |
>> v
>> +---+-----------+ file +-----------------------+
>> | Fleecing hook +---------->+ Fleecing cache(qcow2) |
>> +---+-----------+ +---+-------------------+
>> | |
>> backing | |
>> v |
>> +---+---------+ backing |
>> | Active disk +<----------------+
>> +-------------+
>>
>> Hm. No, because of permissions I can't do so, I have to do like this:
>>
>> +-------+
>> | Guest |
>> +---+---+
>> |
>> v
>> +---+-----------+ file +-----------------------+
>> | Fleecing hook +---------->+ Fleecing cache(qcow2) |
>> +---+-----------+ +-----+-----------------+
>> | |
>> backing | | backing
>> v v
>> +---+---------+ backing +-----+---------------------+
>> | Active disk +<------------+ hack children permissions |
>> +-------------+ | filter node |
>> +---------------------------+
>>
>> Ok, this works, it's an image fleecing scheme without any block jobs.
> So this is the goal? Hm. How useful is that really?
>
> I suppose technically you could allow blockdev-add'ing a backup filter
> node (though only with sync=none) and that would give you the same.
What is a backup filter node?
>
>> Problems with realization:
>>
>> 1 What to do with hack-permissions-node? What is a true way to implement
>> something like this? How to tune permissions to avoid this additional node?
> Hm, how is that different from what we currently do? Because the block
> job takes care of it?
1. As I understand it, we agreed that it is good to use a filter node
instead of a write notifier.
2. We already have the fleecing scheme, where we have to create some
subgraph between nodes.
3. If we move to a filter node instead of a write notifier, a block job
is not actually needed for fleecing, so it would be good to drop it from
the fleecing scheme, to simplify it and make it clearer and more
transparent. Finally, we would have a unified filter-node-based scheme
for backup and fleecing, modular and customisable.
>
> Well, the user would have to guarantee the permissions. And they can
> only do that by manually adding a filter node in the backing chain, I
> suppose.
>
> Or they just start a block job which guarantees the permissions work...
> So maybe it's best to just stay with a block job as it is.
>
>> 2 Inserting/removing the filter. Do we have working way or developments on
>> it?
> Berto has posted patches for an x-blockdev-reopen QMP command.
>
>> 3. Interesting: we can't setup backing link to active disk before inserting
>> fleecing-hook, otherwise, it will damage this link on insertion. This means,
>> that we can't create fleecing cache node in advance with all backing to
>> reference it when creating fleecing hook. And we can't prepare all the nodes
>> in advance and then insert the filter.. We have to:
>> 1. create all the nodes with all links in one big json, or
> I think that should be possible with x-blockdev-reopen.
>
>> 2. set backing links/create nodes automatically, as it is done in this RFC
>> (it's a bad way I think, not clear, not transparent)
>>
>> 4. Is it a good idea to use "backing" and "file" links in such way?
> I don't think so, because you're pretending it to be a COW relationship
> when it isn't. Using backing for what it is is kind of OK (because
> that's what the mirror and backup filters do, too), but then using
> "file" additionally is a bit weird.
>
> (Usually, "backing" refers to a filtered node with COW, and "file" then
> refers to the node where the overlay driver stores its data and
> metadata. But you'd store old data there (instead of new data), and no
> metadata.)
>
>> Benefits, or, what can be done:
>>
>> 1. We can implement special Fleecing cache filter driver, which will be a real
>> cache: it will store some recently written clusters and RAM, it can have a
>> backing (or file?) qcow2 child, to flush some clusters to the disk, etc. So,
>> for each cluster of active disk we will have the following characteristics:
>>
>> - changed (changed in active disk since backup start)
>> - copy (we need this cluster for fleecing user. For example, in RFC patch all
>> clusters are "copy", cow_bitmap is initialized to all ones. We can use some
>> existent bitmap to initialize cow_bitmap, and it will provide an "incremental"
>> fleecing (for use in incremental backup push or pull)
>> - cached in RAM
>> - cached in disk
> Would it be possible to implement such a filter driver that could just
> be used as a backup target?
For internal backup we need a backup job anyway, and we will be able to
create different schemes.
One of my goals is a scheme where we store old data from CBW operations
in a local cache, while the backup target is a remote, relatively slow
NBD node. In this case, the cache is the backup source, not the target.
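That scheme (guest write puts the old data into a fast local cache, and a background job later flushes it to the slow target) can be sketched like this (a toy model with invented names, not QEMU code):

```c
#include <assert.h>

enum { NCLUSTERS = 4 };

typedef struct {
    int active[NCLUSTERS];   /* disk the guest writes to */
    int cache[NCLUSTERS];    /* fast local cache holding old data */
    int target[NCLUSTERS];   /* slow remote backup target */
    int pending[NCLUSTERS];  /* old data waiting to be flushed */
} Fleecing;

/* Guest write: stash the old cluster in the local cache (fast), so the
 * guest never waits for the slow target.  Only the first write to a
 * cluster needs the copy. */
static void guest_write(Fleecing *s, int c, int val)
{
    if (!s->pending[c]) {
        s->cache[c] = s->active[c];
        s->pending[c] = 1;
    }
    s->active[c] = val;
}

/* Background job: push one cached cluster out to the slow target. */
static void flush_cluster(Fleecing *s, int c)
{
    if (s->pending[c]) {
        s->target[c] = s->cache[c];
        s->pending[c] = 0;
    }
}
```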
>
>> On top of these characteristics we can implement the following features:
>>
>> 1. COR, we can cache clusters not only on writes but on reads too, if we have
>> free space in ram-cache (and if not, do not cache at all, don't write to
>> disk-cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESARY)
> You can do the same with backup by just putting a fast overlay between
> source and the backup, if your source is so slow, and then do COR, i.e.:
>
> slow source --> fast overlay --> COR node --> backup filter
How will we check ram-cache size to make COR optional in this scheme?
>
>> 2. Benefit for guest: if cluster is unchanged and ram-cached, we can skip reading
>> from the devise
>>
>> 3. If needed, we can drop unchanged ram-cached clusters from ram-cache
>>
>> 4. On guest write, if cluster is already cached, we just mark it "changed"
>>
>> 5. Lazy discards: in some setups, discards are not guaranteed to do something,
>> so, we can at least defer some discards to the end of backup, if ram-cache is
>> full.
>>
>> 6. We can implement discard operation in fleecing cache, to make cluster
>> not needed (drop from cache, drop "copy" flag), so further reads of this
>> cluster will return error. So, fleecing client may read cluster by cluster
>> and discard them to reduce COW-load of the drive. We even can combine read
>> and discard into one command, something like "read-once", or it may be a
>> flag for fleecing-cache, that all reads are "read-once".
> That would definitely be possible with a dedicated fleecing backup
> target filter (and normal backup).
Target-filter schemes will not work for external backup.
>
>> 7. We can provide recommendations, on which clusters should fleecing-client
>> copy first. Examples:
>> a. copy ram-cached clusters first (obvious, to unload cache and reduce io
>> overhead)
>> b. copy zero-clusters last (the don't occupy place in cache, so, lets copy
>> other clusters first)
>> c. copy disk-cached clusters list (if we don't care about disk space,
>> we can say, that for disk-cached clusters we already have a maximum
>> io overhead, so let's copy other clusters first)
>> d. copy disk-cached clusters with high priority (but after ram-cached) -
>> if we don't have enough disk space
>>
>> So, there is a wide range of possible politics. How to provide these
>> recommendations?
>> 1. block_status
>> 2. create separate interface
>> 3. internal backup job may access shared fleecing object directly.
> Hm, this is a completely different question now. Sure, extending backup
> or mirror (or a future blockdev-copy) would make it easiest for us. But
> then again, if you want to copy data off a point-in-time snapshot of a
> volume, you can just use normal backup anyway, right?
Right, but how do we implement all the features I listed? I see a way
to implement them with the help of two special filters. The backup job
will still be used (without write notifiers) for internal backup, and
will not be used for external backup (fleecing).
>
> So I'd say the purpose of fleecing is that you have an external tool
> make use of it. Since my impression was that you'd just access the
> volume externally and wouldn't actually copy all of the data off of it
Not quite right. People use fleecing to implement external backup,
managed by their third-party tool, which they want to use instead of
internal backup. And they do copy all the data. I can't describe all the
reasons, but one example is custom backup storage, which the external
tool can manage and QEMU can't.
So, fleecing is used for external backups (or pull backups).
> (because that's what you could use the backup job for), I don't think I
> can say much here, because my impression seems to have been wrong.
>
>> About internal backup:
>> Of course, we need a job which will copy clusters. But it will be simplified:
> So you want to completely rebuild backup based on the fact that you
> specifically have fleecing now?
I need several features which are hard to implement using the current
scheme:
1. A scheme where we have a local cache as the CBW target and a slow
remote backup target. How would we do that now? Using two backups, one
with sync=none... I'm not sure that is the right way.
2. Support for bitmaps in backup (sync=none).
3. A possibility for backup(sync=none) to not COW clusters which are
already copied to the backup, and so on.
If we want a backup filter anyway, why not implement some cool features
on top of it?
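The bitmap-driven behaviour in points 2 and 3 can be sketched as follows (a toy model with invented names; the real copy bitmap would be an HBitmap, not an `unsigned` used as a bit mask):

```c
#include <assert.h>

/* copy_bitmap: a set bit means "this cluster is still needed by the
 * backup".  Initializing it from an existing dirty bitmap instead of
 * all-ones gives incremental behaviour. */
static void cbw_guest_write(unsigned *copy_bitmap, int *src, int *bak,
                            int c, int new_val)
{
    if (*copy_bitmap & (1u << c)) {   /* still needed: copy before write */
        bak[c] = src[c];
        *copy_bitmap &= ~(1u << c);
    }                                 /* already copied: no COW needed */
    src[c] = new_val;
}

/* The background job copies a still-needed cluster and clears its bit,
 * so a later guest write to that cluster skips the COW entirely. */
static void backup_copy_cluster(unsigned *copy_bitmap, const int *src,
                                int *bak, int c)
{
    if (*copy_bitmap & (1u << c)) {
        bak[c] = src[c];
        *copy_bitmap &= ~(1u << c);
    }
}
```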
>
> I don't think that will be any simpler.
>
> I mean, it would make blockdev-copy simpler, because we could
> immediately replace backup by mirror, and then we just have mirror,
> which would then automatically become blockdev-copy...
>
> But it's not really going to be simpler, because whether you put the
> copy-before-write logic into a dedicated block driver, or into the
> backup filter driver, doesn't really make it simpler either way. Well,
> adding a new driver always is a bit more complicated, so there's that.
What is the difference between a separate filter driver and the backup
filter driver?
>
>> it should not care about guest writes, it copies clusters from a kind of
>> snapshot which is not changing in time. This job should follow recommendations
>> from fleecing scheme [7].
>>
>> What about the target?
>>
>> We can use separate node as target, and copy from fleecing cache to the target.
>> If we have only ram-cache, it would be equal to current approach (data is copied
>> directly to the target, even on COW). If we have both ram- and disk- caches, it's
>> a cool solution for slow-target: instead of make guest wait for long write to
>> backup target (when ram-cache is full) we can write to disk-cache which is local
>> and fast.
> Or you backup to a fast overlay over a slow target, and run a live
> commit on the side.
I think that would lead to larger I/O overhead: all clusters would go
through the overlay, not only the guest-written clusters which we did
not have time to copy.
>
>> Another option is to combine fleecing cache and target somehow (I didn't think
>> about this really).
>>
>> Finally, with one - two (three?) special filters we can implement all current
>> fleecing/backup schemes in unique and very configurable way and do a lot more
>> cool features and possibilities.
>>
>> What do you think?
> I think adding a specific fleecing target filter makes sense because you
> gave many reasons for interesting new use cases that could emerge from that.
>
> But I think adding a new fleecing-hook driver just means moving the
> implementation from backup to that new driver.
But at the same time you say that it's OK to create a backup filter
(instead of a write notifier) and make it insertable via QAPI? So, if I
implement it in block/backup, it's OK? Why not do it separately?
>
> Max
>
>> I really need help with fleecing graph creating/inserting/destroying, my code
>> about it is a hack, I don't like it, it just works.
>>
>> About testing: to show that this work I use existing fleecing test - 222, a bit
>> tuned (drop block-job and use new qmp command to remove filter).
--
Best regards,
Vladimir
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-20 9:42 ` Vladimir Sementsov-Ogievskiy
@ 2018-08-20 13:32 ` Max Reitz
2018-08-20 14:49 ` Vladimir Sementsov-Ogievskiy
0 siblings, 1 reply; 15+ messages in thread
From: Max Reitz @ 2018-08-20 13:32 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
On 2018-08-20 11:42, Vladimir Sementsov-Ogievskiy wrote:
> 18.08.2018 00:50, Max Reitz wrote:
>> On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
[...]
>>> Proposal:
>>>
>>> For fleecing we need two nodes:
>>>
>>> 1. fleecing hook. It's a filter which should be inserted on top of active
>>> disk. It's main purpose is handling guest writes by copy-on-write operation,
>>> i.e. it's a substitution for write-notifier in backup job.
>>>
>>> 2. fleecing cache. It's a target node for COW operations by fleecing-hook.
>>> It also represents a point-in-time snapshot of active disk for the readers.
>> It's not really COW, it's copy-before-write, isn't it? It's something
>> else entirely. COW is about writing data to an overlay *instead* of
>> writing it to the backing file. Ideally, you don't copy anything,
>> actually. It's just a side effect that you need to copy things if your
>> cluster size doesn't happen to match exactly what you're overwriting.
>
> Hmm, I'm not against it. But the COW term was already used in backup to
> describe this.
Bad enough. :-)
>> CBW is about copying everything to the overlay, and then leaving it
>> alone, instead writing the data to the backing file.
>>
>> I'm not sure how important it is, I just wanted to make a note so we
>> don't misunderstand what's going on, somehow.
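[Editor's note: to make the COW/CBW distinction above concrete, here is a
toy model of the two write paths. This is plain Python illustrating the
semantics only; the function names are hypothetical and this is not QEMU
code.]

```python
# COW: the new data goes to the overlay *instead of* the backing file;
# ideally nothing is copied at all.
def cow_write(overlay, backing, cluster, data):
    overlay[cluster] = data          # backing file is never touched

# CBW: the *old* data is copied away first, then the write proceeds
# to the active disk as usual.
def cbw_write(active, target, cluster, data):
    if cluster not in target:        # copy each cluster at most once
        target[cluster] = active.get(cluster)
    active[cluster] = data

overlay, backing = {}, {0: "old"}
cow_write(overlay, backing, 0, "new")
assert backing[0] == "old" and overlay[0] == "new"

active, target = {0: "old"}, {}
cbw_write(active, target, 0, "new")
assert active[0] == "new" and target[0] == "old"   # snapshot preserved
```

In the COW case the snapshot lives in the backing file; in the CBW case
it lives in the target, which is exactly what a fleecing reader wants.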
>>
>>
>> The fleecing hook sounds good to me, but I'm asking myself why we don't
>> just add that behavior to the backup filter node. That is, re-implement
>> backup without before-write notifiers by making the filter node actually
>> do something (I think there was some reason, but I don't remember).
>
> fleecing doesn't need any block job at all, so I think it is good to have
> the fleecing filter be separate. And then it can be reused by internal backup.
Sure, but we have backup now. Throwing it out of the window and
rewriting it just because sounds like a lot of work for not much gain.
> Hm, we could call this backup-filter instead of fleecing-hook; what is the
> difference?
The difference would be that instead of putting it into an entirely new
block driver, you'd move the functionality inside of block/backup.c
(thus relieving backup from having to use the before-write notifiers as
I described above). That may keep the changes easier to handle.
I do think it'd be cleaner, but the question is, does it really gain you
something? Aside from not having to start a block job, but I don't
really consider this an issue (it's not really more difficult to start
a block job than to do block graph manipulation yourself).
[...]
>>> Ok, this works, it's an image fleecing scheme without any block jobs.
>> So this is the goal? Hm. How useful is that really?
>>
>> I suppose technically you could allow blockdev-add'ing a backup filter
>> node (though only with sync=none) and that would give you the same.
>
> What is a backup filter node?
Ah, right... My mistake. I thought backup had a filter node like
mirror and commit do. But it wasn't necessary so far because there was
no permission issue with backup like there was with mirror and commit.
OK, so my idea would have been that basically every block job can be
represented with a filter node that actually performs the work. We only
need the block job to make it perform in background.
(BDSs can only do work when requested to do so, usually by a parent --
you need a block job if you want them to continuously perform work.)
But that's just my idea, it's not really how things are right now.
So from that POV, having a backup-filter/fleecing-hook that actually
performs the backup work is something I would like -- but again, I don't
know whether it's actually important.
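[Editor's note: the "filter does the work, the job only drives it" idea
above can be sketched like this. A hypothetical Python model, not QEMU
code: the filter only acts when a parent issues a request; the "job" is
just a loop that keeps issuing requests so work happens in the
background even without guest I/O.]

```python
class CopyFilter:
    """A passive node: it only does work when someone reads through it."""
    def __init__(self, source, target):
        self.source, self.target = source, target

    def read(self, cluster):
        data = self.source[cluster]
        self.target[cluster] = data   # side effect: copy what passes through
        return data

def run_job(filt, clusters):
    # The "block job": actively pulls every cluster through the filter,
    # so copying makes progress independently of guest requests.
    for c in clusters:
        filt.read(c)

src = {0: "a", 1: "b"}
dst = {}
run_job(CopyFilter(src, dst), sorted(src))
assert dst == src
```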
>>> Problems with realization:
>>>
>>> 1 What to do with hack-permissions-node? What is a true way to implement
>>> something like this? How to tune permissions to avoid this additional node?
>> Hm, how is that different from what we currently do? Because the block
>> job takes care of it?
>
> 1. As I understand it, we agreed that it is good to use a filter node
> instead of write_notifier.
Ah, great.
> 2. We already have the fleecing scheme, where we have to create a subgraph
> between nodes.
Yes, but how do the permissions work right now, and why wouldn't they
work with your schema?
> 3. If we move to a filter node instead of write_notifier, the block job is
> not actually needed for fleecing, and it is good to drop it from the
> fleecing scheme, to simplify it and make it clearer and more transparent.
If that's possible, why not. But again, I'm not sure whether that's
enough of a reason for the endeavour, because whether you start a block
job or do some graph manipulation yourself is not really a difference in
complexity.
But it's mostly your call, since I suppose you'd be doing most of the work.
> And finally, we will have a unified filter-node-based scheme for backup
> and fleecing, modular and customisable.
[...]
>>> Benefits, or, what can be done:
>>>
>>> 1. We can implement a special fleecing cache filter driver, which will be a real
>>> cache: it will store some recently written clusters in RAM, it can have a
>>> backing (or file?) qcow2 child, to flush some clusters to disk, etc. So,
>>> for each cluster of the active disk we will have the following characteristics:
>>>
>>> - changed (changed in active disk since backup start)
>>> - copy (we need this cluster for the fleecing user. For example, in the RFC patch
>>> all clusters are "copy", cow_bitmap is initialized to all ones. We can use some
>>> existing bitmap to initialize cow_bitmap, and it will provide "incremental"
>>> fleecing (for use in incremental backup push or pull))
>>> - cached in RAM
>>> - cached in disk
>> Would it be possible to implement such a filter driver that could just
>> be used as a backup target?
>
> for internal backup we need the backup job anyway, and we will be able to
> create different schemes.
> One of my goals is the scheme where we store old data from CBW
> operations in a local cache, while the
> backup target is a remote, relatively slow NBD node. In this case, the
> cache is the backup source, not the target.
Sorry, my question was badly worded. My main point was whether you
could implement the filter driver in such a generic way that it wouldn't
depend on the fleecing-hook.
Judging from your answer and from the fact that you proposed calling the
filter node backup-filter and just using it for all backups, I suppose
the answer is "yes". So that's good.
(Though I didn't quite understand why in your example the cache would be
the backup source, when the target is the slow node...)
>>> On top of these characteristics we can implement the following features:
>>>
>>> 1. COR: we can cache clusters not only on writes but on reads too, if we have
>>> free space in the ram-cache (and if not, do not cache at all, don't write to
>>> disk-cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESSARY)
>> You can do the same with backup by just putting a fast overlay between
>> source and the backup, if your source is so slow, and then do COR, i.e.:
>>
>> slow source --> fast overlay --> COR node --> backup filter
>
> How will we check the ram-cache size to make COR optional in this scheme?
Yes, well, if you have a caching driver already, I suppose you can just
use that.
You could either write it a bit simpler to only cache on writes and then
put a COR node on top if desired; or you implement the read cache
functionality directly in the node, which may make it a bit more
complicated, but probably also faster.
(I guess you indeed want to go for faster when already writing a RAM
cache driver...)
(I don't really understand what BDRV_REQ_UNNECESSARY is supposed to do,
though.)
>>> 2. Benefit for the guest: if a cluster is unchanged and ram-cached, we can
>>> skip reading from the device
>>>
>>> 3. If needed, we can drop unchanged ram-cached clusters from ram-cache
>>>
>>> 4. On guest write, if cluster is already cached, we just mark it "changed"
>>>
>>> 5. Lazy discards: in some setups, discards are not guaranteed to do something,
>>> so, we can at least defer some discards to the end of backup, if ram-cache is
>>> full.
>>>
>>> 6. We can implement discard operation in fleecing cache, to make cluster
>>> not needed (drop from cache, drop "copy" flag), so further reads of this
>>> cluster will return error. So, fleecing client may read cluster by cluster
>>> and discard them to reduce COW-load of the drive. We even can combine read
>>> and discard into one command, something like "read-once", or it may be a
>>> flag for fleecing-cache, that all reads are "read-once".
>> That would definitely be possible with a dedicated fleecing backup
>> target filter (and normal backup).
>
> target-filter schemes will not work for external backup..
I thought you were talking about what you could do with the node schema
you gave above, i.e. inside of qemu itself.
>>> 7. We can provide recommendations, on which clusters should fleecing-client
>>> copy first. Examples:
>>> a. copy ram-cached clusters first (obvious, to unload cache and reduce io
>>> overhead)
>>> b. copy zero-clusters last (they don't occupy space in the cache, so let's
>>> copy other clusters first)
>>> c. copy disk-cached clusters last (if we don't care about disk space,
>>> we can say, that for disk-cached clusters we already have a maximum
>>> io overhead, so let's copy other clusters first)
>>> d. copy disk-cached clusters with high priority (but after ram-cached) -
>>> if we don't have enough disk space
>>>
>>> So, there is a wide range of possible policies. How do we provide these
>>> recommendations?
>>> 1. block_status
>>> 2. create separate interface
>>> 3. internal backup job may access shared fleecing object directly.
>> Hm, this is a completely different question now. Sure, extending backup
>> or mirror (or a future blockdev-copy) would make it easiest for us. But
>> then again, if you want to copy data off a point-in-time snapshot of a
>> volume, you can just use normal backup anyway, right?
>
> Right. But how do we implement all the features I listed? I see a way to
> implement them with the help of two special filters. And the backup job will
> be used anyway (without write-notifiers) for internal backup and will not
> be used for external backup (fleecing).
Hm. So what you want here is a special block driver or at least a
special interface that can give information to an outside tool, namely
the information you listed above.
If you want information about RAM-cached clusters, well, you can only
get that information from the RAM cache driver. It probably would be
allocation information, do we have any way of getting that out?
It seems you can get all of that (zero information and allocation
information) over NBD. Would that be enough?
>> So I'd say the purpose of fleecing is that you have an external tool
>> make use of it. Since my impression was that you'd just access the
>> volume externally and wouldn't actually copy all of the data off of it
>
> Not quite right. People use fleecing to implement external backup,
> managed by their third-party tool, which they want to use instead of
> internal backup. And they do copy all the data. I can't describe all the
> reasons, but one example is custom storage for backups, which the external
> tool can manage and QEMU can't.
> So, fleecing is used for external backups (or pull backups).
Hm, OK. I understand.
>> (because that's what you could use the backup job for), I don't think I
>> can say much here, because my impression seems to have been wrong.
>>
>>> About internal backup:
>>> Of course, we need a job which will copy clusters. But it will be simplified:
>> So you want to completely rebuild backup based on the fact that you
>> specifically have fleecing now?
>
> I need several features which are hard to implement using the current scheme.
>
> 1. The scheme where we have a local cache as COW target and a slow remote
> backup target.
> How to do it now? Using two backups, one with sync=none... Not sure that
> this is the right way.
If it works...
(I'd rather build simple building blocks that you can put together than
something complicated that works for a specific solution)
> 2. Then, we'll need support for bitmaps in backup (sync=none).
What do you mean by that? You've written about using bitmaps with
fleecing before, but actually I didn't understand that.
Do you want to expose a bitmap for the external tool so it knows what it
should copy, and then use that bitmap during fleecing, too, because you
know you don't have to save the non-dirty clusters because the backup
tool isn't going to look at them anyway?
In that case, sure, that is just impossible right now, but it doesn't
seem like it needs to be. Adding dirty bitmap support to sync=none
doesn't seem too hard. (Or adding it to your schema.)
> 3. Then, we'll need a possibility for backup(sync=none) to
> not COW clusters which are already copied to the backup, and so on.
Isn't that the same as 2?
> If we want a backup-filter anyway, why not implement some cool
> features on top of it?
Sure, but the question is whether you need to rebuild backup for that. :-)
To me, it just sounded a bit wrong to start over from the fleecing side
of things, re-implement all of backup there (effectively), and then
re-implement backup on top of it.
But maybe it is the right way to go. I can certainly see nothing
absolutely wrong with putting the CBW logic into a backup filter (be it
backup-filter or fleecing-hook), and then it makes sense to just use
that filter node in the backup job. It's just work, which I don't know
whether it's necessary. But if you're willing to do it, that's OK.
>> I don't think that will be any simpler.
>>
>> I mean, it would make blockdev-copy simpler, because we could
>> immediately replace backup by mirror, and then we just have mirror,
>> which would then automatically become blockdev-copy...
>>
>> But it's not really going to be simpler, because whether you put the
>> copy-before-write logic into a dedicated block driver, or into the
>> backup filter driver, doesn't really make it simpler either way. Well,
>> adding a new driver always is a bit more complicated, so there's that.
>
> what is the difference between separate filter driver and backup filter
> driver?
I thought we already had a backup filter node, so you wouldn't have had
to create a new driver in that case.
But we don't, so there really is no difference. Well, apart from being
able to share state more easily when the driver is in the same file as the job.
>>> it should not care about guest writes, it copies clusters from a kind of
>>> snapshot which is not changing in time. This job should follow recommendations
>>> from fleecing scheme [7].
>>>
>>> What about the target?
>>>
>>> We can use separate node as target, and copy from fleecing cache to the target.
>>> If we have only ram-cache, it would be equal to current approach (data is copied
>>> directly to the target, even on COW). If we have both ram- and disk- caches, it's
>>> a cool solution for a slow target: instead of making the guest wait for a long write to
>>> backup target (when ram-cache is full) we can write to disk-cache which is local
>>> and fast.
>> Or you backup to a fast overlay over a slow target, and run a live
>> commit on the side.
>
> I think it will lead to larger I/O overhead: all clusters will go through
> the overlay, not only the guest-written clusters which we did not have
> time to copy.
Well, and it probably makes sense to have some form of RAM-cache driver.
Then that'd be your fast overlay.
>>> Another option is to combine fleecing cache and target somehow (I didn't think
>>> about this really).
>>>
>>> Finally, with one - two (three?) special filters we can implement all current
>>> fleecing/backup schemes in unique and very configurable way and do a lot more
>>> cool features and possibilities.
>>>
>>> What do you think?
>> I think adding a specific fleecing target filter makes sense because you
>> gave many reasons for interesting new use cases that could emerge from that.
>>
>> But I think adding a new fleecing-hook driver just means moving the
>> implementation from backup to that new driver.
>
> But at the same time you say that it's OK to create a backup-filter
> (instead of write_notifier) and make it insertable via QAPI? So, if I
> implement it in block/backup, that's OK? Why not do it separately?
Because I thought we had it already. But we don't. So feel free to do
it separately. :-)
Max
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-20 13:32 ` Max Reitz
@ 2018-08-20 14:49 ` Vladimir Sementsov-Ogievskiy
2018-08-20 17:25 ` Max Reitz
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-20 14:49 UTC (permalink / raw)
To: Max Reitz, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
20.08.2018 16:32, Max Reitz wrote:
> On 2018-08-20 11:42, Vladimir Sementsov-Ogievskiy wrote:
>> 18.08.2018 00:50, Max Reitz wrote:
>>> On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
> [...]
>
>>>> Proposal:
>>>>
>>>> For fleecing we need two nodes:
>>>>
>>>> 1. fleecing hook. It's a filter which should be inserted on top of the active
>>>> disk. Its main purpose is handling guest writes with a copy-on-write operation,
>>>> i.e. it's a substitution for the write-notifier in the backup job.
>>>>
>>>> 2. fleecing cache. It's a target node for COW operations by fleecing-hook.
>>>> It also represents a point-in-time snapshot of active disk for the readers.
>>> It's not really COW, it's copy-before-write, isn't it? It's something
>>> else entirely. COW is about writing data to an overlay *instead* of
>>> writing it to the backing file. Ideally, you don't copy anything,
>>> actually. It's just a side effect that you need to copy things if your
>>> cluster size doesn't happen to match exactly what you're overwriting.
>> Hmm, I'm not against it. But the COW term was already used in backup to
>> describe this.
> Bad enough. :-)
So, have we agreed on the new "CBW" abbreviation? :)
>
>>> CBW is about copying everything to the overlay, and then leaving it
>>> alone, instead writing the data to the backing file.
>>>
>>> I'm not sure how important it is, I just wanted to make a note so we
>>> don't misunderstand what's going on, somehow.
>>>
>>>
>>> The fleecing hook sounds good to me, but I'm asking myself why we don't
>>> just add that behavior to the backup filter node. That is, re-implement
>>> backup without before-write notifiers by making the filter node actually
>>> do something (I think there was some reason, but I don't remember).
>> fleecing doesn't need any block job at all, so I think it is good to have
>> the fleecing filter be separate. And then it can be reused by internal backup.
> Sure, but we have backup now. Throwing it out of the window and
> rewriting it just because sounds like a lot of work for not much gain.
>
>> Hm, we could call this backup-filter instead of fleecing-hook; what is the
>> difference?
> The difference would be that instead of putting it into an entirely new
> block driver, you'd move the functionality inside of block/backup.c
> (thus relieving backup from having to use the before-write notifiers as
> I described above). That may keep the changes easier to handle.
>
> I do think it'd be cleaner, but the question is, does it really gain you
> something? Aside from not having to start a block job, but I don't
> really consider this an issue (it's not really more difficult to start
> a block job than to do block graph manipulation yourself).
>
> [...]
>
>>>> Ok, this works, it's an image fleecing scheme without any block jobs.
>>> So this is the goal? Hm. How useful is that really?
>>>
>>> I suppose technically you could allow blockdev-add'ing a backup filter
>>> node (though only with sync=none) and that would give you the same.
>> What is a backup filter node?
> Ah, right... My mistake. I thought backup had a filter node like
> mirror and commit do. But it wasn't necessary so far because there was
> no permission issue with backup like there was with mirror and commit.
>
> OK, so my idea would have been that basically every block job can be
> represented with a filter node that actually performs the work. We only
> need the block job to make it perform in background.
>
> (BDSs can only do work when requested to do so, usually by a parent --
> you need a block job if you want them to continuously perform work.)
>
> But that's just my idea, it's not really how things are right now.
>
> So from that POV, having a backup-filter/fleecing-hook that actually
> performs the backup work is something I would like -- but again, I don't
> know whether it's actually important.
>
>>>> Problems with realization:
>>>>
>>>> 1 What to do with hack-permissions-node? What is a true way to implement
>>>> something like this? How to tune permissions to avoid this additional node?
>>> Hm, how is that different from what we currently do? Because the block
>>> job takes care of it?
>> 1. As I understand it, we agreed that it is good to use a filter node
>> instead of write_notifier.
> Ah, great.
>
>> 2. We already have the fleecing scheme, where we have to create a subgraph
>> between nodes.
> Yes, but how do the permissions work right now, and why wouldn't they
> work with your schema?
Now it uses the backup job, with shared_perm = all for its source and target
nodes. (Ha, you can look at the picture in "[PATCH v2 0/3] block nodes
graph visualization".)
>
>> 3. If we move to a filter node instead of write_notifier, the block job is
>> not actually needed for fleecing, and it is good to drop it from the
>> fleecing scheme, to simplify it and make it clearer and more transparent.
> If that's possible, why not. But again, I'm not sure whether that's
> enough of a reason for the endeavour, because whether you start a block
> job or do some graph manipulation yourself is not really a difference in
> complexity.
Not "or" but "and": in the current fleecing scheme we do both graph
manipulation and block-job start/cancel..
Yes, I agree that there is no real benefit in difficulty. I just think
that if we have a filter node which performs "CBW" operations, block-job
backup(sync=none) becomes effectively empty; it will do nothing.
>
> But it's mostly your call, since I suppose you'd be doing most of the work.
>
>> And finally, we will have a unified filter-node-based scheme for backup
>> and fleecing, modular and customisable.
> [...]
>
>>>> Benefits, or, what can be done:
>>>>
>>>> 1. We can implement a special fleecing cache filter driver, which will be a real
>>>> cache: it will store some recently written clusters in RAM, it can have a
>>>> backing (or file?) qcow2 child, to flush some clusters to disk, etc. So,
>>>> for each cluster of the active disk we will have the following characteristics:
>>>>
>>>> - changed (changed in active disk since backup start)
>>>> - copy (we need this cluster for the fleecing user. For example, in the RFC patch
>>>> all clusters are "copy", cow_bitmap is initialized to all ones. We can use some
>>>> existing bitmap to initialize cow_bitmap, and it will provide "incremental"
>>>> fleecing (for use in incremental backup push or pull))
>>>> - cached in RAM
>>>> - cached in disk
>>> Would it be possible to implement such a filter driver that could just
>>> be used as a backup target?
>> for internal backup we need the backup job anyway, and we will be able to
>> create different schemes.
>> One of my goals is the scheme where we store old data from CBW
>> operations in a local cache, while the
>> backup target is a remote, relatively slow NBD node. In this case, the
>> cache is the backup source, not the target.
> Sorry, my question was badly worded. My main point was whether you
> could implement the filter driver in such a generic way that it wouldn't
> depend on the fleecing-hook.
Yes, I want my filter nodes to be self-sufficient entities. However, it
may be more efficient to share some data between them, for
example dirty bitmaps over the drive's clusters, to know which
clusters are cached, which are changed, etc.
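[Editor's note: the shared per-cluster state and the copy-priority
recommendations (a)-(d) from the RFC can be sketched as below. Plain
Python with sets standing in for dirty bitmaps; all names are
illustrative, not QEMU's.]

```python
class FleecingState:
    """Per-cluster state shared between the hook, the cache and a job."""
    def __init__(self, to_copy):
        self.copy = set(to_copy)   # clusters the fleecing user still needs
        self.changed = set()       # changed in the active disk since start
        self.ram_cached = set()
        self.disk_cached = set()

    def priority(self, cluster):
        # One possible policy: ram-cached clusters first (to unload the
        # cache), disk-cached next, everything else last.
        if cluster in self.ram_cached:
            return 0
        if cluster in self.disk_cached:
            return 1
        return 2

state = FleecingState(to_copy={0, 1, 2})
state.ram_cached.add(2)      # pretend cluster 2 was CBW'd into RAM
order = sorted(state.copy, key=state.priority)
assert order[0] == 2         # ram-cached cluster is recommended first
```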
>
> Judging from your answer and from the fact that you proposed calling the
> filter node backup-filter and just using it for all backups, I suppose
> the answer is "yes". So that's good.
>
> (Though I didn't quite understand why in your example the cache would be
> the backup source, when the target is the slow node...)
The cache is a point-in-time view of the active disk (the actual source) for
fleecing. So we can start a backup job to copy data from the cache to the target.
>
>>>> On top of these characteristics we can implement the following features:
>>>>
>>>> 1. COR: we can cache clusters not only on writes but on reads too, if we have
>>>> free space in the ram-cache (and if not, do not cache at all, don't write to
>>>> disk-cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESSARY)
>>> You can do the same with backup by just putting a fast overlay between
>>> source and the backup, if your source is so slow, and then do COR, i.e.:
>>>
>>> slow source --> fast overlay --> COR node --> backup filter
>> How will we check the ram-cache size to make COR optional in this scheme?
> Yes, well, if you have a caching driver already, I suppose you can just
> use that.
>
> You could either write it a bit simpler to only cache on writes and then
> put a COR node on top if desired; or you implement the read cache
> functionality directly in the node, which may make it a bit more
> complicated, but probably also faster.
>
> (I guess you indeed want to go for faster when already writing a RAM
> cache driver...)
>
> (I don't really understand what BDRV_REQ_UNNECESSARY is supposed to do,
> though.)
When we do "CBW", we _must_ save the data before the guest write, so we
write this data to the cache (or directly to the target, as in the current
approach).
When we do "COR", we _may_ save data to our ram-cache. It's safe not to
save it, as we can still read it from the active disk (the data has not
changed yet).
BDRV_REQ_UNNECESSARY is a proposed interface for writing this optional
data to the cache: if the ram-cache is full, the cache will skip such a write.
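[Editor's note: a minimal model of the semantics just described. Note
that BDRV_REQ_UNNECESSARY is only a proposal in this thread, not an
existing QEMU request flag; the class below is an illustrative sketch,
not QEMU code.]

```python
class RamCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def write(self, cluster, data, unnecessary=False):
        # A mandatory (CBW) write must land in the cache; an "unnecessary"
        # (COR) write may be dropped when the cache is full, because the
        # same data can still be read from the active disk.
        if unnecessary and len(self.data) >= self.capacity:
            return False            # cache full: silently skip COR data
        self.data[cluster] = data   # CBW data is always stored
        return True

cache = RamCache(capacity=1)
assert cache.write(0, "cbw-old-data")                    # mandatory: stored
assert not cache.write(1, "cor-data", unnecessary=True)  # full: skipped
assert 1 not in cache.data
```

(A real cache would flush mandatory data to a disk child instead of
growing without bound; that part is omitted here.)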
>
>>>> 2. Benefit for the guest: if a cluster is unchanged and ram-cached, we can
>>>> skip reading from the device
>>>>
>>>> 3. If needed, we can drop unchanged ram-cached clusters from ram-cache
>>>>
>>>> 4. On guest write, if cluster is already cached, we just mark it "changed"
>>>>
>>>> 5. Lazy discards: in some setups, discards are not guaranteed to do something,
>>>> so, we can at least defer some discards to the end of backup, if ram-cache is
>>>> full.
>>>>
>>>> 6. We can implement discard operation in fleecing cache, to make cluster
>>>> not needed (drop from cache, drop "copy" flag), so further reads of this
>>>> cluster will return error. So, fleecing client may read cluster by cluster
>>>> and discard them to reduce COW-load of the drive. We even can combine read
>>>> and discard into one command, something like "read-once", or it may be a
>>>> flag for fleecing-cache, that all reads are "read-once".
>>> That would definitely be possible with a dedicated fleecing backup
>>> target filter (and normal backup).
>> target-filter schemes will not work for external backup..
> I thought you were talking about what you could do with the node schema
> you gave above, i.e. inside of qemu itself.
>
>>>> 7. We can provide recommendations, on which clusters should fleecing-client
>>>> copy first. Examples:
>>>> a. copy ram-cached clusters first (obvious, to unload cache and reduce io
>>>> overhead)
>>>> b. copy zero-clusters last (they don't occupy space in the cache, so let's
>>>> copy other clusters first)
>>>> c. copy disk-cached clusters last (if we don't care about disk space,
>>>> we can say, that for disk-cached clusters we already have a maximum
>>>> io overhead, so let's copy other clusters first)
>>>> d. copy disk-cached clusters with high priority (but after ram-cached) -
>>>> if we don't have enough disk space
>>>>
>>>> So, there is a wide range of possible policies. How do we provide these
>>>> recommendations?
>>>> 1. block_status
>>>> 2. create separate interface
>>>> 3. internal backup job may access shared fleecing object directly.
>>> Hm, this is a completely different question now. Sure, extending backup
>>> or mirror (or a future blockdev-copy) would make it easiest for us. But
>>> then again, if you want to copy data off a point-in-time snapshot of a
>>> volume, you can just use normal backup anyway, right?
>> Right. But how do we implement all the features I listed? I see a way to
>> implement them with the help of two special filters. And the backup job will
>> be used anyway (without write-notifiers) for internal backup and will not
>> be used for external backup (fleecing).
> Hm. So what you want here is a special block driver or at least a
> special interface that can give information to an outside tool, namely
> the information you listed above.
>
> If you want information about RAM-cached clusters, well, you can only
> get that information from the RAM cache driver. It probably would be
> allocation information, do we have any way of getting that out?
>
> It seems you can get all of that (zero information and allocation
> information) over NBD. Would that be enough?
It's the most generic and clean way, but I'm not sure that it will be
efficient enough.
>
>>> So I'd say the purpose of fleecing is that you have an external tool
>>> make use of it. Since my impression was that you'd just access the
>>> volume externally and wouldn't actually copy all of the data off of it
>> Not quite right. People use fleecing to implement external backup,
>> managed by their third-party tool, which they want to use instead of
>> internal backup. And they do copy all the data. I can't describe all the
>> reasons, but one example is custom storage for backups, which the external
>> tool can manage and QEMU can't.
>> So, fleecing is used for external backups (or pull backups).
> Hm, OK. I understand.
>
>>> (because that's what you could use the backup job for), I don't think I
>>> can say much here, because my impression seems to have been wrong.
>>>
>>>> About internal backup:
>>>> Of course, we need a job which will copy clusters. But it will be simplified:
>>> So you want to completely rebuild backup based on the fact that you
>>> specifically have fleecing now?
>> I need several features which are hard to implement using the current scheme.
>>
>> 1. The scheme where we have a local cache as COW target and a slow remote
>> backup target.
>> How to do it now? Using two backups, one with sync=none... Not sure that
>> this is the right way.
> If it works...
>
> (I'd rather build simple building blocks that you can put together than
> something complicated that works for a specific solution)
Exactly: I want to implement simple building blocks (filter nodes)
instead of implementing all the features in the backup job.
>
>> 2. Then, we'll need support for bitmaps in backup (sync=none).
> What do you mean by that? You've written about using bitmaps with
> fleecing before, but actually I didn't understand that.
>
> Do you want to expose a bitmap for the external tool so it knows what it
> should copy, and then use that bitmap during fleecing, too, because you
> know you don't have to save the non-dirty clusters because the backup
> tool isn't going to look at them anyway?
yes.
>
> In that case, sure, that is just impossible right now, but it doesn't
> seem like it needs to be. Adding dirty bitmap support to sync=none
> doesn't seem too hard. (Or adding it to your schema.)
>
>> 3. Then, we'll need a possibility for backup(sync=none) to
>> not COW clusters which are already copied to the backup, and so on.
> Isn't that the same as 2?
We can use one bitmap for both 2 and 3, and clear bits from it when the
external tool has read the corresponding cluster from the NBD fleecing export..
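[Editor's note: a sketch of this "read-once" idea, where one bitmap
serves both purposes. When the external tool reads a cluster through the
export, its bit is cleared, so a later guest write to that cluster no
longer triggers copy-before-write. Hypothetical Python model; names are
illustrative, not QEMU's.]

```python
class ReadOnceExport:
    def __init__(self, snapshot, copy_bitmap):
        self.snapshot = snapshot
        self.copy_bitmap = copy_bitmap      # shared with the CBW filter

    def read(self, cluster):
        data = self.snapshot[cluster]
        self.copy_bitmap.discard(cluster)   # tool has it; no more CBW needed
        return data

def guest_write_needs_cbw(copy_bitmap, cluster):
    return cluster in copy_bitmap

bitmap = {0, 1}
export = ReadOnceExport({0: "x", 1: "y"}, bitmap)
assert guest_write_needs_cbw(bitmap, 0)
export.read(0)
assert not guest_write_needs_cbw(bitmap, 0)   # CBW skipped after the read
```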
>
>> If we want a backup-filter anyway, why not implement some cool
>> features on top of it?
> Sure, but the question is whether you need to rebuild backup for that. :-)
>
> To me, it just sounded a bit wrong to start over from the fleecing side
> of things, re-implement all of backup there (effectively), and then
> re-implement backup on top of it.
>
> But maybe it is the right way to go. I can certainly see nothing
> absolutely wrong with putting the CBW logic into a backup filter (be it
> backup-filter or fleecing-hook), and then it makes sense to just use
> that filter node in the backup job. It's just work, which I don't know
> whether it's necessary. But if you're willing to do it, that's OK.
>
>>> I don't think that will be any simpler.
>>>
>>> I mean, it would make blockdev-copy simpler, because we could
>>> immediately replace backup by mirror, and then we just have mirror,
>>> which would then automatically become blockdev-copy...
>>>
>>> But it's not really going to be simpler, because whether you put the
>>> copy-before-write logic into a dedicated block driver, or into the
>>> backup filter driver, doesn't really make it simpler either way. Well,
>>> adding a new driver always is a bit more complicated, so there's that.
>> what is the difference between a separate filter driver and the backup
>> filter driver?
> I thought we already had a backup filter node, so you wouldn't have had
> to create a new driver in that case.
>
> But we don't, so there really is no difference. Well, apart from being
> able to share state easier when the driver is in the same file as the job.
But if we make it separate, it will be a separate "building block" to
be reused in different schemes.
>
>>>> it should not care about guest writes, it copies clusters from a kind of
>>>> snapshot which is not changing in time. This job should follow recommendations
>>>> from fleecing scheme [7].
>>>>
>>>> What about the target?
>>>>
>>>> We can use a separate node as target, and copy from the fleecing cache to the target.
>>>> If we have only a ram-cache, it would be equal to the current approach (data is copied
>>>> directly to the target, even on COW). If we have both ram- and disk-caches, it's
>>>> a cool solution for a slow target: instead of making the guest wait for a long write to
>>>> the backup target (when the ram-cache is full), we can write to the disk-cache, which is
>>>> local and fast.
>>> Or you backup to a fast overlay over a slow target, and run a live
>>> commit on the side.
>> I think it will lead to larger I/O overhead: all clusters will go through the
>> overlay, not only the guest-written clusters that we did not have time
>> to copy.
> Well, and it probably makes sense to have some form of RAM-cache driver.
> Then that'd be your fast overlay.
but there is no reason to copy all the data through the cache: we need it
only for CBW.
Anyway, I think it will be good if both schemes are possible.
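The slow-target idea above, a local cache that absorbs CBW data so the guest never waits on the remote target, can be sketched as a two-tier toy model (assumed semantics, not a real QEMU driver):

```python
# Two-tier fleecing cache sketch: CBW data lands in a bounded RAM cache;
# when RAM is full it spills to a fast local disk cache instead of
# stalling the guest on the slow remote backup target.

class TieredFleecingCache:
    def __init__(self, ram_limit):
        self.ram_limit = ram_limit
        self.ram = {}    # cluster -> data
        self.disk = {}   # stands in for a local qcow2 cache file

    def store_cbw(self, cluster, data):
        # Mandatory copy-before-write: must land somewhere local and fast.
        if len(self.ram) < self.ram_limit:
            self.ram[cluster] = data
        else:
            self.disk[cluster] = data

    def read(self, cluster, read_active):
        # Point-in-time view: cached old data wins over the active disk.
        if cluster in self.ram:
            return self.ram[cluster]
        if cluster in self.disk:
            return self.disk[cluster]
        return read_active(cluster)
```

A backup job (or external reader) would then drain this cache to the slow target at its own pace.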
>
>>>> Another option is to combine fleecing cache and target somehow (I didn't think
>>>> about this really).
>>>>
>>>> Finally, with one - two (three?) special filters we can implement all current
>>>> fleecing/backup schemes in unique and very configurable way and do a lot more
>>>> cool features and possibilities.
>>>>
>>>> What do you think?
>>> I think adding a specific fleecing target filter makes sense because you
>>> gave many reasons for interesting new use cases that could emerge from that.
>>>
>>> But I think adding a new fleecing-hook driver just means moving the
>>> implementation from backup to that new driver.
>> But at the same time you say that it's OK to create a backup-filter
>> (instead of write_notifier) and make it insertable by QAPI? So, if I
>> implement it in block/backup, it's OK? Why not do it separately?
> Because I thought we had it already. But we don't. So feel free to do
> it separately. :-)
Ok, that's good :) . Then, I'll try to reuse the filter in backup
instead of write-notifiers, and figure out whether we really need the
internal state of the backup block-job or not.
>
> Max
>
PS: in the background, I have unpublished work aimed at parallelizing the
backup job into several coroutines (like it is done for mirror and the
qemu-img clone command). And it's really hard. It creates queues of requests
with different priorities, to handle CBW requests in the common pipeline;
it's mostly a rewrite of block/backup. If we split CBW out of backup into a
separate filter node, backup becomes a very simple thing (copying clusters
from constant storage) and its parallelization becomes simpler.
I don't say we should throw backup away, but I have several ideas which may
alter the current approach. They may live in parallel with the current backup
path, or replace it in the future, if they prove more effective.
--
Best regards,
Vladimir
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-20 14:49 ` Vladimir Sementsov-Ogievskiy
@ 2018-08-20 17:25 ` Max Reitz
2018-08-20 18:30 ` Vladimir Sementsov-Ogievskiy
0 siblings, 1 reply; 15+ messages in thread
From: Max Reitz @ 2018-08-20 17:25 UTC (permalink / raw)
To: Vladimir Sementsov-Ogievskiy, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
On 2018-08-20 16:49, Vladimir Sementsov-Ogievskiy wrote:
> 20.08.2018 16:32, Max Reitz wrote:
>> On 2018-08-20 11:42, Vladimir Sementsov-Ogievskiy wrote:
>>> 18.08.2018 00:50, Max Reitz wrote:
>>>> On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
>> [...]
>>
>>>>> Proposal:
>>>>>
>>>>> For fleecing we need two nodes:
>>>>>
>>>>> 1. fleecing hook. It's a filter which should be inserted on top of active
>>>>> disk. It's main purpose is handling guest writes by copy-on-write operation,
>>>>> i.e. it's a substitution for write-notifier in backup job.
>>>>>
>>>>> 2. fleecing cache. It's a target node for COW operations by fleecing-hook.
>>>>> It also represents a point-in-time snapshot of active disk for the readers.
>>>> It's not really COW, it's copy-before-write, isn't it? It's something
>>>> else entirely. COW is about writing data to an overlay *instead* of
>>>> writing it to the backing file. Ideally, you don't copy anything,
>>>> actually. It's just a side effect that you need to copy things if your
>>>> cluster size doesn't happen to match exactly what you're overwriting.
>>> Hmm. I'm not against. But COW term was already used in backup to
>>> describe this.
>> Bad enough. :-)
>
> So, we agreed about new "CBW" abbreviation? :)
It is already used for the USB mass-storage command block wrapper, but I
suppose that is sufficiently different not to cause much confusion. :-)
(Or at least that's the only other use I know of.)
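The COW/CBW distinction settled above can be made concrete with a toy model (plain Python for illustration, not QEMU internals):

```python
# COW vs CBW in miniature. `overlay`, `base`, `active` and `cache` are
# cluster -> data maps standing in for block nodes.

def cow_write(overlay, base, cluster, new_data):
    # COW: new data goes to the overlay *instead of* the backing file;
    # the backing file is never modified, and ideally nothing is copied.
    overlay[cluster] = new_data

def cbw_write(active, cache, cluster, new_data):
    # CBW: the *old* data is copied away first, then the write proceeds
    # in place on the active disk (what backup's write-notifier does).
    if cluster not in cache:
        cache[cluster] = active.get(cluster)
    active[cluster] = new_data
```

Under COW the base stays pristine because it is never written; under CBW the active disk keeps changing, and the point-in-time view lives in the cache.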
[...]
>>> 2. We already have fleecing scheme, when we should create some subgraph
>>> between nodes.
>> Yes, but how do the permissions work right now, and why wouldn't they
>> work with your schema?
>
> now it uses backup job, with shared_perm = all for its source and target
> nodes.
Uh-huh.
So the issue is... Hm, what exactly? The backup node probably doesn't
want to share WRITE for the source anymore, as there is no real point in
doing so. And for the target, the only problem may be to share
CONSISTENT_READ. It is OK to share that in the fleecing case, but in
other cases maybe it isn't. But that's easy enough to distinguish in
the driver.
The main issue I could see is that the overlay (the fleecing target)
might not share write permissions on its backing file (the fleecing
source)... But your diagram shows (and bdrv_format_default_perms() as
well) that this is not the case: when the overlay is writable, the
backing file may be written to, too.
> (ha, you can look at the picture in "[PATCH v2 0/3] block nodes
> graph visualization")
:-)
>>> 3. If we move to filter-node instead of write_notifier, block job is not
>>> actually needed for fleecing, and it is good to drop it from the
>>> fleecing scheme, to simplify it, to make it more clear and transparent.
>> If that's possible, why not. But again, I'm not sure whether that's
>> enough of a reason for the endeavour, because whether you start a block
>> job or do some graph manipulation yourself is not really a difference in
>> complexity.
>
> not "or" but "and": in the current fleecing scheme we do both graph
> manipulations and block-job start/cancel.
Hm! Interesting. I didn't know blockdev-backup didn't set the target's
backing file. It makes sense, but I didn't think about it.
Well, still, my point was whether you do a blockdev-backup +
block-job-cancel, or a blockdev-add + blockdev-reopen + blockdev-reopen
+ blockdev-del... If there is a difference, the former is going to be
simpler, probably.
(But if there are things you can't do with the current blockdev-backup,
then, well, that doesn't help you.)
> Yes, I agree that there is no real benefit in terms of complexity. I just
> think that if we have a filter node which performs "CBW" operations,
> block-job backup(sync=none) becomes actually empty, it will do nothing.
On the code side, yes, that's true.
>> But it's mostly your call, since I suppose you'd be doing most of the work.
>>
>>> And finally, we will have unified filter-node-based scheme for backup
>>> and fleecing, modular and customisable.
>> [...]
>>
>>>>> Benefits, or, what can be done:
>>>>>
>>>>> 1. We can implement special Fleecing cache filter driver, which will be a real
>>>>> cache: it will store some recently written clusters in RAM, it can have a
>>>>> backing (or file?) qcow2 child, to flush some clusters to the disk, etc. So,
>>>>> for each cluster of active disk we will have the following characteristics:
>>>>>
>>>>> - changed (changed in active disk since backup start)
>>>>> - copy (we need this cluster for fleecing user. For example, in RFC patch all
>>>>> clusters are "copy", cow_bitmap is initialized to all ones. We can use some
>>>>> existent bitmap to initialize cow_bitmap, and it will provide an "incremental"
>>>>> fleecing (for use in incremental backup push or pull)
>>>>> - cached in RAM
>>>>> - cached in disk
>>>> Would it be possible to implement such a filter driver that could just
>>>> be used as a backup target?
>>> for internal backup we need backup-job anyway, and we will be able to
>>> create different schemes.
>>> One of my goals is the scheme, when we store old data from CBW
>>> operations into local cache, when
>>> backup target is remote, relatively slow NBD node. In this case, cache
>>> is backup source, not target.
>> Sorry, my question was badly worded. My main point was whether you
>> could implement the filter driver in such a generic way that it wouldn't
>> depend on the fleecing-hook.
>
> yes, I want my filter nodes to be self-sufficient entities. However it
> may be more effective to have some shared data, between them, for
> example, dirty-bitmaps, specifying drive clusters, to know which
> clusters are cached, which are changed, etc.
I suppose having global dirty bitmaps may make sense.
>> Judging from your answer and from the fact that you proposed calling the
>> filter node backup-filter and just using it for all backups, I suppose
>> the answer is "yes". So that's good.
>>
>> (Though I didn't quite understand why in your example the cache would be
>> the backup source, when the target is the slow node...)
>
> cache is a point-in-time view of active disk (actual source) for
> fleecing. So, we can start backup job to copy data from cache to target.
But wouldn't the cache need to be the immediate fleecing target for
this? (And then you'd run another backup/mirror from it to copy the
whole disk to the real target.)
>>>>> On top of these characteristics we can implement the following features:
>>>>>
>>>>> 1. COR, we can cache clusters not only on writes but on reads too, if we have
>>>>> free space in ram-cache (and if not, do not cache at all, don't write to
>>>>> disk-cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESSARY)
>>>> You can do the same with backup by just putting a fast overlay between
>>>> source and the backup, if your source is so slow, and then do COR, i.e.:
>>>>
>>>> slow source --> fast overlay --> COR node --> backup filter
>>> How will we check ram-cache size to make COR optional in this scheme?
>> Yes, well, if you have a caching driver already, I suppose you can just
>> use that.
>>
>> You could either write it a bit simpler to only cache on writes and then
>> put a COR node on top if desired; or you implement the read cache
>> functionality directly in the node, which may make it a bit more
>> complicated, but probably also faster.
>>
>> (I guess you indeed want to go for faster when already writing a RAM
>> cache driver...)
>>
>> (I don't really understand what BDRV_REQ_UNNECESSARY is supposed to do,
>> though.)
>
> When we do "CBW", we _must_ save the data before the guest write, so we write
> this data to the cache (or directly to the target, like in the current approach).
> When we do "COR", we _may_ save data to our ram-cache. It's safe not to save
> the data, as we can read it from the active disk (the data is not changed yet).
> BDRV_REQ_UNNECESSARY is a proposed interface to write this unnecessary
> data to the cache: if the ram-cache is full, the cache will skip this write.
Hm, OK... But deciding for each request how much priority it should get
in a potential cache node seems like an awful lot of work. Well, I
don't even know what kind of requests you would deem unnecessary. If it
has something to do with the state of a dirty bitmap, then having global
dirty bitmaps might remove the need for such a request flag.
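A minimal sketch of how such a best-effort flag could behave (the flag name comes from the proposal above; the code is purely illustrative):

```python
# Mandatory CBW writes vs best-effort COR writes into a bounded RAM
# cache, mimicking the proposed BDRV_REQ_UNNECESSARY semantics.

class RamCache:
    def __init__(self, limit):
        self.limit = limit
        self.data = {}  # cluster -> data

    def write(self, cluster, data, unnecessary=False):
        if unnecessary and len(self.data) >= self.limit:
            # COR data: cache full, safe to skip, the data can still be
            # read from the active disk (it has not changed yet).
            return False
        # CBW data must always land; a real driver would spill to a
        # local disk cache here instead of growing without bound.
        self.data[cluster] = data
        return True
```

Replacing the flag with a lookup in a shared dirty bitmap, as suggested above, would make the same decision without a per-request hint.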
[...]
>> Hm. So what you want here is a special block driver or at least a
>> special interface that can give information to an outside tool, namely
>> the information you listed above.
>>
>> If you want information about RAM-cached clusters, well, you can only
>> get that information from the RAM cache driver. It probably would be
>> allocation information, do we have any way of getting that out?
>>
>> It seems you can get all of that (zero information and allocation
>> information) over NBD. Would that be enough?
>
> it's the most generic and clean way, but I'm not sure that it will be
> efficient performance-wise.
Intuitively I'd agree, but I suppose if NBD is written right, such a
request should be very fast and the response basically just consists of
the allocation information, so I don't suspect it can be much faster
than that.
(Unless you want some form of interrupts. I suppose NBD would be the
wrong interface, then.)
[...]
>>> I need several features, which are hard to implement using current scheme.
>>>
>>> 1. The scheme when we have a local cache as COW target and slow remote
>>> backup target.
>>> How to do it now? Using two backups, one with sync=none... Not sure that
>>> this is right way.
>> If it works...
>>
>> (I'd rather build simple building blocks that you can put together than
>> something complicated that works for a specific solution)
>
> exactly, I want to implement simple building blocks = filter nodes,
> instead of implementing all the features in backup job.
Good, good. :-)
>>> 3. Then,
>>> we'll need a possibility for backup(sync=none) to
>>> not COW clusters, which are already copied to backup, and so on.
>> Isn't that the same as 2?
>
> We can use one bitmap for 2 and 3, and drop bits from it when the
> external tool has read the corresponding cluster from the NBD fleecing export.
Oh, right, it needs to be modifiable from the outside. I suppose that
would be possible in NBD, too. (But I don't know exactly.)
[...]
>>>> I don't think that will be any simpler.
>>>>
>>>> I mean, it would make blockdev-copy simpler, because we could
>>>> immediately replace backup by mirror, and then we just have mirror,
>>>> which would then automatically become blockdev-copy...
>>>>
>>>> But it's not really going to be simpler, because whether you put the
>>>> copy-before-write logic into a dedicated block driver, or into the
>>>> backup filter driver, doesn't really make it simpler either way. Well,
>>>> adding a new driver always is a bit more complicated, so there's that.
>>> what is the difference between separate filter driver and backup filter
>>> driver?
>> I thought we already had a backup filter node, so you wouldn't have had
>> to create a new driver in that case.
>>
>> But we don't, so there really is no difference. Well, apart from being
>> able to share state easier when the driver is in the same file as the job.
>
> But if we make it separate - it will be a separate "building block" to
> be reused in different schemes.
Absolutely true.
>>>>> it should not care about guest writes, it copies clusters from a kind of
>>>>> snapshot which is not changing in time. This job should follow recommendations
>>>>> from fleecing scheme [7].
>>>>>
>>>>> What about the target?
>>>>>
>>>>> We can use separate node as target, and copy from fleecing cache to the target.
>>>>> If we have only ram-cache, it would be equal to current approach (data is copied
>>>>> directly to the target, even on COW). If we have both ram- and disk- caches, it's
>>>>> a cool solution for slow-target: instead of making the guest wait for a long write to
>>>>> backup target (when ram-cache is full) we can write to disk-cache which is local
>>>>> and fast.
>>>> Or you backup to a fast overlay over a slow target, and run a live
>>>> commit on the side.
>>> I think it will lead to larger I/O overhead: all clusters will go through the
>>> overlay, not only the guest-written clusters that we did not have time
>>> to copy.
>> Well, and it probably makes sense to have some form of RAM-cache driver.
>> Then that'd be your fast overlay.
>
> but there is no reason to copy all the data through the cache: we need it
> only for CBW.
Well, if there'd be a RAM-cache driver, you may use it for anything that
seems useful (I seem to remember there were some patches on the list
like three or four years ago...).
> Anyway, I think it will be good if both schemes are possible.
>
>>>>> Another option is to combine fleecing cache and target somehow (I didn't think
>>>>> about this really).
>>>>>
>>>>> Finally, with one - two (three?) special filters we can implement all current
>>>>> fleecing/backup schemes in unique and very configurable way and do a lot more
>>>>> cool features and possibilities.
>>>>>
>>>>> What do you think?
>>>> I think adding a specific fleecing target filter makes sense because you
>>>> gave many reasons for interesting new use cases that could emerge from that.
>>>>
>>>> But I think adding a new fleecing-hook driver just means moving the
>>>> implementation from backup to that new driver.
>>> But at the same time you say that it's OK to create a backup-filter
>>> (instead of write_notifier) and make it insertable by QAPI? So, if I
>>> implement it in block/backup, it's OK? Why not do it separately?
>> Because I thought we had it already. But we don't. So feel free to do
>> it separately. :-)
>
> Ok, that's good :) . Then, I'll try to reuse the filter in backup
> instead of write-notifiers, and figure out whether we really need the
> internal state of the backup block-job or not.
>
>> Max
>>
>
> PS: in the background, I have unpublished work aimed at parallelizing the
> backup job into several coroutines (like it is done for mirror and the
> qemu-img clone command). And it's really hard. It creates queues of requests
> with different priorities, to handle CBW requests in the common pipeline;
> it's mostly a rewrite of block/backup. If we split CBW out of backup into a
> separate filter node, backup becomes a very simple thing (copying clusters
> from constant storage) and its parallelization becomes simpler.
If CBW is split from backup, maybe mirror could replace backup
immediately. You'd fleece to a RAM cache target and then mirror from there.
(To be precise: The exact replacement would be an active mirror, so a
mirror with copy-mode=write-blocking, so it immediately writes the old
block to the target when it is changed in the source, and thus the RAM
cache could stay effectively empty.)
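That active-mirror setup corresponds to a QMP command along these lines (the node names are placeholders for the fleecing/cache and target nodes; `copy-mode=write-blocking` is a real `blockdev-mirror` option since QEMU 3.0):

```python
# Sketch of the QMP command for an active mirror from a hypothetical
# RAM-cache node to the real backup target. Only the command shape is
# shown; "ram-cache" and "backup-target" are assumed node names.
import json

cmd = {
    "execute": "blockdev-mirror",
    "arguments": {
        "job-id": "mirror0",
        "device": "ram-cache",          # hypothetical fleecing/cache node
        "target": "backup-target",
        "sync": "full",
        # Guest writes wait until the data has been mirrored, so the
        # cache can stay effectively empty.
        "copy-mode": "write-blocking",
    },
}
wire = json.dumps(cmd)  # what would be sent over the QMP socket
```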
> I don't say we should throw backup away, but I have several ideas which may
> alter the current approach. They may live in parallel with the current backup
> path, or replace it in the future, if they prove more effective.
Thing is, contrary to the impression I've probably given, we do want to
throw away backup sooner or later. We want a single block job
(blockdev-copy) that unifies mirror, backup, and commit.
(mirror already basically supersedes commit, with live commit just being
exactly mirror; the main problem is integrating backup. But with a
fleecing node and a RAM cache target, that would suddenly be really
simple, I assume.)
((All that's missing is sync=top, where the mirror would need to not
only check its source (which would be the RAM cache), but also its
backing file; and sync=incremental, which just isn't there with mirror
at all. OTOH, it may be possible to implement both modes simply in the
fleecing/backup node, so it only copies that respective data to the
target and the mirror simply sees nothing else.))
Max
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-20 17:25 ` Max Reitz
@ 2018-08-20 18:30 ` Vladimir Sementsov-Ogievskiy
2018-08-21 9:29 ` Vladimir Sementsov-Ogievskiy
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-20 18:30 UTC (permalink / raw)
To: Max Reitz, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
20.08.2018 20:25, Max Reitz wrote:
> On 2018-08-20 16:49, Vladimir Sementsov-Ogievskiy wrote:
>> 20.08.2018 16:32, Max Reitz wrote:
>>> On 2018-08-20 11:42, Vladimir Sementsov-Ogievskiy wrote:
>>>> 18.08.2018 00:50, Max Reitz wrote:
>>>>> On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
>>> [...]
>>>
>>>>>> Proposal:
>>>>>>
>>>>>> For fleecing we need two nodes:
>>>>>>
>>>>>> 1. fleecing hook. It's a filter which should be inserted on top of active
>>>>>> disk. It's main purpose is handling guest writes by copy-on-write operation,
>>>>>> i.e. it's a substitution for write-notifier in backup job.
>>>>>>
>>>>>> 2. fleecing cache. It's a target node for COW operations by fleecing-hook.
>>>>>> It also represents a point-in-time snapshot of active disk for the readers.
>>>>> It's not really COW, it's copy-before-write, isn't it? It's something
>>>>> else entirely. COW is about writing data to an overlay *instead* of
>>>>> writing it to the backing file. Ideally, you don't copy anything,
>>>>> actually. It's just a side effect that you need to copy things if your
>>>>> cluster size doesn't happen to match exactly what you're overwriting.
>>>> Hmm. I'm not against. But COW term was already used in backup to
>>>> describe this.
>>> Bad enough. :-)
>> So, we agreed about new "CBW" abbreviation? :)
> It is already used for the USB mass-storage command block wrapper, but I
> suppose that is sufficiently different not to cause much confusion. :-)
>
> (Or at least that's the only other use I know of.)
>
> [...]
>
>>>> 2. We already have fleecing scheme, when we should create some subgraph
>>>> between nodes.
>>> Yes, but how do the permissions work right now, and why wouldn't they
>>> work with your schema?
>> now it uses backup job, with shared_perm = all for its source and target
>> nodes.
> Uh-huh.
>
> So the issue is... Hm, what exactly? The backup node probably doesn't
> want to share WRITE for the source anymore, as there is no real point in
> doing so. And for the target, the only problem may be to share
> CONSISTENT_READ. It is OK to share that in the fleecing case, but in
> other cases maybe it isn't. But that's easy enough to distinguish in
> the driver.
>
> The main issue I could see is that the overlay (the fleecing target)
> might not share write permissions on its backing file (the fleecing
> source)... But your diagram shows (and bdrv_format_default_perms() as
> well) that this is not the case: when the overlay is writable, the
> backing file may be written to, too.
Hm, actually the overlay could share the write permission for clusters which
are already saved in the overlay, or which are not needed (if we have a dirty
bitmap for incremental backup). But we don't have such a kind of permission,
and it doesn't look easy to implement... And it may be too expensive in
per-operation overhead.
>
>> (ha, you can look at the picture in "[PATCH v2 0/3] block nodes
>> graph visualization")
> :-)
>
>>>> 3. If we move to filter-node instead of write_notifier, block job is not
>>>> actually needed for fleecing, and it is good to drop it from the
>>>> fleecing scheme, to simplify it, to make it more clear and transparent.
>>> If that's possible, why not. But again, I'm not sure whether that's
>>> enough of a reason for the endeavour, because whether you start a block
>>> job or do some graph manipulation yourself is not really a difference in
>>> complexity.
>> not "or" but "and": in the current fleecing scheme we do both graph
>> manipulations and block-job start/cancel.
> Hm! Interesting. I didn't know blockdev-backup didn't set the target's
> backing file. It makes sense, but I didn't think about it.
>
> Well, still, my point was whether you do a blockdev-backup +
> block-job-cancel, or a blockdev-add + blockdev-reopen + blockdev-reopen
> + blockdev-del... If there is a difference, the former is going to be
> simpler, probably.
>
> (But if there are things you can't do with the current blockdev-backup,
> then, well, that doesn't help you.)
>
>> Yes, I agree that there is no real benefit in terms of complexity. I just
>> think that if we have a filter node which performs "CBW" operations,
>> block-job backup(sync=none) becomes actually empty, it will do nothing.
> On the code side, yes, that's true.
>
>>> But it's mostly your call, since I suppose you'd be doing most of the work.
>>>
>>>> And finally, we will have unified filter-node-based scheme for backup
>>>> and fleecing, modular and customisable.
>>> [...]
>>>
>>>>>> Benefits, or, what can be done:
>>>>>>
>>>>>> 1. We can implement special Fleecing cache filter driver, which will be a real
>>>>>> cache: it will store some recently written clusters in RAM, it can have a
>>>>>> backing (or file?) qcow2 child, to flush some clusters to the disk, etc. So,
>>>>>> for each cluster of active disk we will have the following characteristics:
>>>>>>
>>>>>> - changed (changed in active disk since backup start)
>>>>>> - copy (we need this cluster for fleecing user. For example, in RFC patch all
>>>>>> clusters are "copy", cow_bitmap is initialized to all ones. We can use some
>>>>>> existent bitmap to initialize cow_bitmap, and it will provide an "incremental"
>>>>>> fleecing (for use in incremental backup push or pull)
>>>>>> - cached in RAM
>>>>>> - cached in disk
>>>>> Would it be possible to implement such a filter driver that could just
>>>>> be used as a backup target?
>>>> for internal backup we need backup-job anyway, and we will be able to
>>>> create different schemes.
>>>> One of my goals is the scheme, when we store old data from CBW
>>>> operations into local cache, when
>>>> backup target is remote, relatively slow NBD node. In this case, cache
>>>> is backup source, not target.
>>> Sorry, my question was badly worded. My main point was whether you
>>> could implement the filter driver in such a generic way that it wouldn't
>>> depend on the fleecing-hook.
>> yes, I want my filter nodes to be self-sufficient entities. However it
>> may be more effective to have some shared data, between them, for
>> example, dirty-bitmaps, specifying drive clusters, to know which
>> clusters are cached, which are changed, etc.
> I suppose having global dirty bitmaps may make sense.
>
>>> Judging from your answer and from the fact that you proposed calling the
>>> filter node backup-filter and just using it for all backups, I suppose
>>> the answer is "yes". So that's good.
>>>
>>> (Though I didn't quite understand why in your example the cache would be
>>> the backup source, when the target is the slow node...)
>> cache is a point-in-time view of active disk (actual source) for
>> fleecing. So, we can start backup job to copy data from cache to target.
> But wouldn't the cache need to be the immediate fleecing target for
> this? (And then you'd run another backup/mirror from it to copy the
> whole disk to the real target.)
Yes, the cache is the immediate fleecing target.
>
>>>>>> On top of these characteristics we can implement the following features:
>>>>>>
>>>>>> 1. COR, we can cache clusters not only on writes but on reads too, if we have
>>>>>> free space in ram-cache (and if not, do not cache at all, don't write to
>>>>>> disk-cache). It may be done like bdrv_write(..., BDRV_REQ_UNNECESSARY)
>>>>> You can do the same with backup by just putting a fast overlay between
>>>>> source and the backup, if your source is so slow, and then do COR, i.e.:
>>>>>
>>>>> slow source --> fast overlay --> COR node --> backup filter
>>>> How will we check ram-cache size to make COR optional in this scheme?
>>> Yes, well, if you have a caching driver already, I suppose you can just
>>> use that.
>>>
>>> You could either write it a bit simpler to only cache on writes and then
>>> put a COR node on top if desired; or you implement the read cache
>>> functionality directly in the node, which may make it a bit more
>>> complicated, but probably also faster.
>>>
>>> (I guess you indeed want to go for faster when already writing a RAM
>>> cache driver...)
>>>
>>> (I don't really understand what BDRV_REQ_UNNECESSARY is supposed to do,
>>> though.)
>> When we do "CBW", we _must_ save the data before the guest write, so we write
>> this data to the cache (or directly to the target, like in the current approach).
>> When we do "COR", we _may_ save data to our ram-cache. It's safe not to save
>> the data, as we can read it from the active disk (the data is not changed yet).
>> BDRV_REQ_UNNECESSARY is a proposed interface to write this unnecessary
>> data to the cache: if the ram-cache is full, the cache will skip this write.
> Hm, OK... But deciding for each request how much priority it should get
> in a potential cache node seems like an awful lot of work. Well, I
> don't even know what kind of requests you would deem unnecessary. If it
> has something to do with the state of a dirty bitmap, then having global
> dirty bitmaps might remove the need for such a request flag.
Yes, if we have some "shared fleecing object", accessible by the
fleecing-hook filter and the fleecing-cache filter (and the backup job, if
it is an internal backup), we don't need such a flag.
>
> [...]
>
>>> Hm. So what you want here is a special block driver or at least a
>>> special interface that can give information to an outside tool, namely
>>> the information you listed above.
>>>
>>> If you want information about RAM-cached clusters, well, you can only
>>> get that information from the RAM cache driver. It probably would be
>>> allocation information, do we have any way of getting that out?
>>>
>>> It seems you can get all of that (zero information and allocation
>>> information) over NBD. Would that be enough?
>> it's the most generic and clean way, but I'm not sure that it will be
>> efficient performance-wise.
> Intuitively I'd agree, but I suppose if NBD is written right, such a
> request should be very fast and the response basically just consists of
> the allocation information, so I don't suspect it can be much faster
> than that.
>
> (Unless you want some form of interrupts. I suppose NBD would be the
> wrong interface, then.)
Yes, for external backup through NBD it's OK to get block status, but
for internal backup it seems faster to access the shared fleecing object
(or global bitmaps, etc.).
However, if we have some shared fleecing object, it's not a problem to
export it as block-status metadata through the NBD export.
>
> [...]
>
>>>> I need several features, which are hard to implement using current scheme.
>>>>
>>>> 1. The scheme when we have a local cache as COW target and slow remote
>>>> backup target.
>>>> How to do it now? Using two backups, one with sync=none... Not sure that
>>>> this is right way.
>>> If it works...
>>>
>>> (I'd rather build simple building blocks that you can put together than
>>> something complicated that works for a specific solution)
>> exactly, I want to implement simple building blocks = filter nodes,
>> instead of implementing all the features in backup job.
> Good, good. :-)
>
>>>> 3. Then,
>>>> we'll need a possibility for backup(sync=none) to
>>>> not COW clusters, which are already copied to backup, and so on.
>>> Isn't that the same as 2?
>> We can use one bitmap for 2 and 3, and drop bits from it when the
>> external tool has read the corresponding cluster from the NBD fleecing export.
> Oh, right, it needs to be modifiable from the outside. I suppose that
> would be possible in NBD, too. (But I don't know exactly.)
I think it's natural to implement it through a discard operation on the
fleecing-cache node: if the fleecing user discards something, it will not
read it again, so we can drop it from the cache and clear the bit in the
shared bitmap.
Then we can improve it by adding a READ_ONCE flag, per READ command or
for the whole connection, to discard data after each read. Or pass this
flag to bdrv_read, to handle it in one command.
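A minimal sketch of that discard-driven scheme (hypothetical C; `FleecingState`, the function names, and the flat per-cluster flags are all illustrative, not actual QEMU APIs -- real code would use QEMU's dirty bitmaps and handle partial clusters):

```c
#include <stdbool.h>
#include <stdint.h>

#define CLUSTER_SIZE 65536ULL
#define N_CLUSTERS   1024

/* One flag per cluster: does the fleecing user still need the
 * point-in-time contents of this cluster? */
typedef struct FleecingState {
    bool copy[N_CLUSTERS];
} FleecingState;

/* Discard from the fleecing user: the range will not be read again,
 * so drop the corresponding clusters from the cache's responsibility.
 * For simplicity only whole clusters are handled here. */
static void fleecing_cache_discard(FleecingState *s,
                                   uint64_t offset, uint64_t bytes)
{
    uint64_t end = (offset + bytes) / CLUSTER_SIZE;

    for (uint64_t c = offset / CLUSTER_SIZE; c < end; c++) {
        s->copy[c] = false;
    }
}

/* Fleecing hook, on a guest write: copy the old data only if some
 * reader still needs it. */
static bool fleecing_hook_needs_cbw(const FleecingState *s, uint64_t cluster)
{
    return s->copy[cluster];
}
```

A READ_ONCE flag would then just amount to applying `fleecing_cache_discard()` to the range right after serving the read.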
>
> [...]
>
>>>>> I don't think that will be any simpler.
>>>>>
>>>>> I mean, it would make blockdev-copy simpler, because we could
>>>>> immediately replace backup by mirror, and then we just have mirror,
>>>>> which would then automatically become blockdev-copy...
>>>>>
>>>>> But it's not really going to be simpler, because whether you put the
>>>>> copy-before-write logic into a dedicated block driver, or into the
>>>>> backup filter driver, doesn't really make it simpler either way. Well,
>>>>> adding a new driver always is a bit more complicated, so there's that.
>>>> what is the difference between separate filter driver and backup filter
>>>> driver?
>>> I thought we already had a backup filter node, so you wouldn't have had
>>> to create a new driver in that case.
>>>
>>> But we don't, so there really is no difference. Well, apart from being
>>> able to share state easier when the driver is in the same file as the job.
>> But if we make it separate - it will be a separate "building block" to
>> be reused in different schemes.
> Absolutely true.
>
>>>>>> it should not care about guest writes, it copies clusters from a kind of
>>>>>> snapshot which is not changing in time. This job should follow recommendations
>>>>>> from fleecing scheme [7].
>>>>>>
>>>>>> What about the target?
>>>>>>
>>>>>> We can use separate node as target, and copy from fleecing cache to the target.
>>>>>> If we have only ram-cache, it would be equal to current approach (data is copied
>>>>>> directly to the target, even on COW). If we have both ram- and disk- caches, it's
>>>>>> a cool solution for a slow target: instead of making the guest wait for a long write to
>>>>>> backup target (when ram-cache is full) we can write to disk-cache which is local
>>>>>> and fast.
>>>>> Or you backup to a fast overlay over a slow target, and run a live
>>>>> commit on the side.
>>>> I think it will lead to larger io overhead: all clusters will go through
>>>> overlay, not only guest-written clusters, for which we did not have time
>>>> to copy them..
>>> Well, and it probably makes sense to have some form of RAM-cache driver.
>>> Then that'd be your fast overlay.
>> but there are no reasons to copy all the data through the cache: we need it
>> only for CBW.
> Well, if there'd be a RAM-cache driver, you may use it for anything that
> seems useful (I seem to remember there were some patches on the list
> like three or four years ago...).
>
>> any way, I think it will be good if both schemes will be possible.
>>
>>>>>> Another option is to combine fleecing cache and target somehow (I didn't think
>>>>>> about this really).
>>>>>>
>>>>>> Finally, with one - two (three?) special filters we can implement all current
>>>>>> fleecing/backup schemes in unique and very configurable way and do a lot more
>>>>>> cool features and possibilities.
>>>>>>
>>>>>> What do you think?
>>>>> I think adding a specific fleecing target filter makes sense because you
>>>>> gave many reasons for interesting new use cases that could emerge from that.
>>>>>
>>>>> But I think adding a new fleecing-hook driver just means moving the
>>>>> implementation from backup to that new driver.
>>>> But at the same time you say that it's ok to create a backup-filter
>>>> (instead of write_notifier) and make it insertable by QAPI? So, if I
>>>> implement it in block/backup, it's ok? Why not do it separately?
>>> Because I thought we had it already. But we don't. So feel free to do
>>> it separately. :-)
>> Ok, that's good :) . Then, I'll try to reuse the filter in backup
>> instead of write-notifiers, and figure out whether we really need the
>> internal state of the backup block-job or not.
>>
>>> Max
>>>
>> PS: in background, I have unpublished work, aimed to parallelize
>> backup-job into several coroutines (like it is done for mirror, qemu-img
>> clone cmd). And it's really hard. It creates queues of requests with
>> different priorities, to handle CBW requests in the common pipeline; it's
>> mostly a rewrite of block/backup. If we split CBW from backup into a
>> separate filter-node, backup becomes a very simple thing (copy clusters
>> from constant storage) and its parallelization becomes simpler.
> If CBW is split from backup, maybe mirror could replace backup
> immediately. You'd fleece to a RAM cache target and then mirror from there.
Hmm, good option. It would be just one mirror iteration.
But then I'll need to teach mirror to copy clusters with some
priorities, to avoid RAM-cache overloading (and guest I/O hangs).
It may be better to have a separate, simple (much simpler than mirror)
block job for it, or to use backup. Anyway, it's a separate building
block; a performance comparison will show the better candidate.
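One way such a priority scheme could look (hypothetical C sketch; the two flat flag arrays stand in for real dirty bitmaps, and none of these names exist in QEMU):

```c
#include <stdbool.h>
#include <stdint.h>

#define N_CLUSTERS 1024

typedef struct CopyState {
    bool to_copy[N_CLUSTERS];  /* still has to reach the backup target */
    bool in_cache[N_CLUSTERS]; /* old data already forced into the RAM cache */
} CopyState;

/* Pick the next cluster for the copy job: clusters sitting in the RAM
 * cache go first, so the cache drains before it fills up and guest
 * writes start to stall; untouched clusters are copied afterwards. */
static int64_t pick_next_cluster(const CopyState *s)
{
    for (uint64_t c = 0; c < N_CLUSTERS; c++) {
        if (s->to_copy[c] && s->in_cache[c]) {
            return (int64_t)c;
        }
    }
    for (uint64_t c = 0; c < N_CLUSTERS; c++) {
        if (s->to_copy[c]) {
            return (int64_t)c;
        }
    }
    return -1; /* all copied: job is done */
}
```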
>
> (To be precise: The exact replacement would be an active mirror, so a
> mirror with copy-mode=write-blocking, so it immediately writes the old
> block to the target when it is changed in the source, and thus the RAM
> cache could stay effectively empty.)
Hmm, or that way. So, for such a scheme we actually need a cache node
which does absolutely nothing; writes will be handled by the mirror job
itself. But in this case we can't control the size of the actual RAM
cache: if the target is slow, we will accumulate unfinished
bdrv_mirror_top_pwritev calls, which have allocated memory and are
waiting in a queue for a mirror coroutine to be created.
>
>> I don't say throw the backup away, but I have several ideas, which may
>> alter current approach. They may live in parallel with current backup
>> path, or replace it in future, if they will be more effective.
> Thing is, contrary to the impression I've probably given, we do want to
> throw away backup sooner or later. We want a single block job
> (blockdev-copy) that unifies mirror, backup, and commit.
>
> (mirror already basically supersedes commit, with live commit just being
> exactly mirror; the main problem is integrating backup. But with a
> fleecing node and a RAM cache target, that would suddenly be really
> simple, I assume.)
>
> ((All that's missing is sync=top, where the mirror would need to not
> only check its source (which would be the RAM cache), but also its
> backing file; and sync=incremental, which just isn't there with mirror
> at all. OTOH, it may be possible to implement both modes simply in the
> fleecing/backup node, so it only copies that respective data to the
> target and the mirror simply sees nothing else.))
Good idea. If we have the fleecing-cache node as a "view" or "export", we
can export only selected portions of the data, marking the rest as
unallocated. Or we need to share bitmaps (global bitmaps, shared
fleecing state, etc.) with the block job.
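For example, the "view" could answer block-status queries from the shared bitmap like this (hypothetical C; in real QEMU this would be a block-status callback on the fleecing-cache node, and the flag array stands in for a shared bitmap):

```c
#include <stdbool.h>
#include <stdint.h>

#define CLUSTER_SIZE 65536ULL
#define N_CLUSTERS   1024

typedef struct FleecingView {
    bool copy[N_CLUSTERS]; /* cluster belongs to the exported backup set */
} FleecingView;

/* Report the status of the run starting at @cluster: *allocated says
 * whether these clusters are part of the point-in-time image the job
 * should copy; the return value is the length (in clusters) of the run
 * with that same status, so the job can skip unallocated runs at once. */
static uint64_t fleecing_view_status(const FleecingView *v,
                                     uint64_t cluster, bool *allocated)
{
    uint64_t n = 1;

    *allocated = v->copy[cluster];
    while (cluster + n < N_CLUSTERS && v->copy[cluster + n] == *allocated) {
        n++;
    }
    return n;
}
```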
>
> Max
>
--
Best regards,
Vladimir
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup
2018-08-20 18:30 ` Vladimir Sementsov-Ogievskiy
@ 2018-08-21 9:29 ` Vladimir Sementsov-Ogievskiy
0 siblings, 0 replies; 15+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2018-08-21 9:29 UTC (permalink / raw)
To: Max Reitz, qemu-devel, qemu-block
Cc: eblake, armbru, kwolf, famz, jsnow, pbonzini, stefanha, den
20.08.2018 21:30, Vladimir Sementsov-Ogievskiy wrote:
> 20.08.2018 20:25, Max Reitz wrote:
>> On 2018-08-20 16:49, Vladimir Sementsov-Ogievskiy wrote:
>>> 20.08.2018 16:32, Max Reitz wrote:
>>>> On 2018-08-20 11:42, Vladimir Sementsov-Ogievskiy wrote:
>>>>> 18.08.2018 00:50, Max Reitz wrote:
>>>>>> On 2018-08-14 19:01, Vladimir Sementsov-Ogievskiy wrote:
>>>> [...]
>>>>
>>>>>>> Proposal:
>>>>>>>
>>>>>>> For fleecing we need two nodes:
>>>>>>>
>>>>>>> 1. fleecing hook. It's a filter which should be inserted on top
>>>>>>> of active
>>>>>>> disk. Its main purpose is handling guest writes by
>>>>>>> copy-on-write operation,
>>>>>>> i.e. it's a substitution for write-notifier in backup job.
>>>>>>>
>>>>>>> 2. fleecing cache. It's a target node for COW operations by
>>>>>>> fleecing-hook.
>>>>>>> It also represents a point-in-time snapshot of active disk for
>>>>>>> the readers.
>>>>>> It's not really COW, it's copy-before-write, isn't it? It's
>>>>>> something
>>>>>> else entirely. COW is about writing data to an overlay *instead* of
>>>>>> writing it to the backing file. Ideally, you don't copy anything,
>>>>>> actually. It's just a side effect that you need to copy things
>>>>>> if your
>>>>>> cluster size doesn't happen to match exactly what you're
>>>>>> overwriting.
>>>>> Hmm. I'm not against. But COW term was already used in backup to
>>>>> describe this.
>>>> Bad enough. :-)
>>> So, we agreed about new "CBW" abbreviation? :)
>> It is already used for the USB mass-storage command block wrapper, but I
>> suppose that is sufficiently different not to cause much confusion. :-)
>>
>> (Or at least that's the only other use I know of.)
>>
>> [...]
>>
>>>>> 2. We already have fleecing scheme, when we should create some
>>>>> subgraph
>>>>> between nodes.
>>>> Yes, but how do the permissions work right now, and why wouldn't they
>>>> work with your schema?
>>> now it uses backup job, with shared_perm = all for its source and
>>> target
>>> nodes.
>> Uh-huh.
>>
>> So the issue is... Hm, what exactly? The backup node probably doesn't
>> want to share WRITE for the source anymore, as there is no real point in
>> doing so. And for the target, the only problem may be to share
>> CONSISTENT_READ. It is OK to share that in the fleecing case, but in
>> other cases maybe it isn't. But that's easy enough to distinguish in
>> the driver.
>>
>> The main issue I could see is that the overlay (the fleecing target)
>> might not share write permissions on its backing file (the fleecing
>> source)... But your diagram shows (and bdrv_format_default_perms() as
>> well) that this is not the case: when the overlay is writable, the
>> backing file may be written to, too.
>
> Hm, actually overlay may share write permission to clusters which are
> saved in overlay, or which are not needed (if we have dirty bitmap for
> incremental backup).. But we don't have such permission kind, and it
> looks not easy to implement it... And it may be too expensive in
> operation overhead.
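The missing permission kind boils down to a per-cluster predicate like this (hypothetical sketch; QEMU permissions are per-child, not per-cluster, which is exactly why tracking this would be expensive -- all names here are made up):

```c
#include <stdbool.h>
#include <stdint.h>

#define N_CLUSTERS 1024

/* Illustrative per-cluster state for the overlay (the fleecing cache). */
typedef struct OverlayState {
    bool saved_in_overlay[N_CLUSTERS];  /* old data already copied out */
    bool needed_for_backup[N_CLUSTERS]; /* set in the incremental bitmap */
} OverlayState;

/* A write to the backing file (the fleecing source) is harmless for
 * cluster @c once its old contents are safe in the overlay, or when the
 * incremental-backup bitmap does not cover it at all. */
static bool backing_write_allowed(const OverlayState *s, uint64_t c)
{
    return s->saved_in_overlay[c] || !s->needed_for_backup[c];
}
```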
>
>>
>>> (ha, you can look at the picture in "[PATCH v2 0/3] block nodes
>>> graph visualization")
>> :-)
>>
>>>>> 3. If we move to filter-node instead of write_notifier, block job
>>>>> is not
>>>>> actually needed for fleecing, and it is good to drop it from the
>>>>> fleecing scheme, to simplify it, to make it more clear and
>>>>> transparent.
>>>> If that's possible, why not. But again, I'm not sure whether that's
>>>> enough of a reason for the endavour, because whether you start a block
>>>> job or do some graph manipulation yourself is not really a
>>>> difference in
>>>> complexity.
>>> not "or" but "and": in the current fleecing scheme we do both graph
>>> manipulations and block-job start/cancel.
>> Hm! Interesting. I didn't know blockdev-backup didn't set the target's
>> backing file. It makes sense, but I didn't think about it.
>>
>> Well, still, my point was whether you do a blockdev-backup +
>> block-job-cancel, or a blockdev-add + blockdev-reopen + blockdev-reopen
>> + blockdev-del... If there is a difference, the former is going to be
>> simpler, probably.
>>
>> (But if there are things you can't do with the current blockdev-backup,
>> then, well, that doesn't help you.)
>>
>>> Yes, I agree that there is no real benefit in difficulty. I just think
>>> that if we have a filter node which performs "CBW" operations, block-job
>>> backup(sync=none) becomes actually empty; it will do nothing.
>> On the code side, yes, that's true.
>>
>>>> But it's mostly your call, since I suppose you'd be doing most of
>>>> the work.
>>>>
>>>>> And finally, we will have unified filter-node-based scheme for backup
>>>>> and fleecing, modular and customisable.
>>>> [...]
>>>>
>>>>>>> Benefits, or, what can be done:
>>>>>>>
>>>>>>> 1. We can implement special Fleecing cache filter driver, which
>>>>>>> will be a real
>>>>>>> cache: it will store some recently written clusters and RAM, it
>>>>>>> can have a
>>>>>>> backing (or file?) qcow2 child, to flush some clusters to the
>>>>>>> disk, etc. So,
>>>>>>> for each cluster of active disk we will have the following
>>>>>>> characteristics:
>>>>>>>
>>>>>>> - changed (changed in active disk since backup start)
>>>>>>> - copy (we need this cluster for fleecing user. For example, in
>>>>>>> RFC patch all
>>>>>>> clusters are "copy", cow_bitmap is initialized to all ones. We
>>>>>>> can use some
>>>>>>> existent bitmap to initialize cow_bitmap, and it will provide an
>>>>>>> "incremental"
>>>>>>> fleecing (for use in incremental backup push or pull)
>>>>>>> - cached in RAM
>>>>>>> - cached in disk
>>>>>> Would it be possible to implement such a filter driver that could
>>>>>> just
>>>>>> be used as a backup target?
>>>>> for internal backup we need backup-job anyway, and we will be able to
>>>>> create different schemes.
>>>>> One of my goals is the scheme, when we store old data from CBW
>>>>> operations into local cache, when
>>>>> backup target is remote, relatively slow NBD node. In this case,
>>>>> cache
>>>>> is backup source, not target.
>>>> Sorry, my question was badly worded. My main point was whether you
>>>> could implement the filter driver in such a generic way that it
>>>> wouldn't
>>>> depend on the fleecing-hook.
>>> yes, I want my filter nodes to be self-sufficient entities. However, it
>>> may be more effective to have some shared data between them, for
>>> example dirty bitmaps specifying which drive clusters are cached,
>>> which are changed, etc.
>> I suppose having global dirty bitmaps may make sense.
>>
>>>> Judging from your answer and from the fact that you proposed
>>>> calling the
>>>> filter node backup-filter and just using it for all backups, I suppose
>>>> the answer is "yes". So that's good.
>>>>
>>>> (Though I didn't quite understand why in your example the cache
>>>> would be
>>>> the backup source, when the target is the slow node...)
>>> cache is a point-in-time view of active disk (actual source) for
>>> fleecing. So, we can start backup job to copy data from cache to
>>> target.
>> But wouldn't the cache need to be the immediate fleecing target for
>> this? (And then you'd run another backup/mirror from it to copy the
>> whole disk to the real target.)
>
> Yes, the cache is immediate fleecing target.
>
>>
>>>>>>> On top of these characteristics we can implement the following
>>>>>>> features:
>>>>>>>
>>>>>>> 1. COR, we can cache clusters not only on writes but on reads
>>>>>>> too, if we have
>>>>>>> free space in ram-cache (and if not, do not cache at all, don't
>>>>>>> write to
>>>>>>> disk-cache). It may be done like bdrv_write(...,
>>>>>>> BDRV_REQ_UNNECESARY)
>>>>>> You can do the same with backup by just putting a fast overlay
>>>>>> between
>>>>>> source and the backup, if your source is so slow, and then do
>>>>>> COR, i.e.:
>>>>>>
>>>>>> slow source --> fast overlay --> COR node --> backup filter
>>>>> How will we check ram-cache size to make COR optional in this scheme?
>>>> Yes, well, if you have a caching driver already, I suppose you can
>>>> just
>>>> use that.
>>>>
>>>> You could either write it a bit simpler to only cache on writes and
>>>> then
>>>> put a COR node on top if desired; or you implement the read cache
>>>> functionality directly in the node, which may make it a bit more
>>>> complicated, but probably also faster.
>>>>
>>>> (I guess you indeed want to go for faster when already writing a RAM
>>>> cache driver...)
>>>>
>>>> (I don't really understand what BDRV_REQ_UNNECESSARY is supposed to
>>>> do,
>>>> though.)
>>> When we do "CBW", we _must_ save data before guest write, so, we write
>>> this data to the cache (or directly to target, like in current
>>> approach).
>>> When we do "COR", we _may_ save data to our ram-cache. It's safe to not
>>> save data, as we can read it from active disk (data is not changed
>>> yet).
>>> BDRV_REQ_UNNECESSARY is a proposed interface to write this unnecessary
>>> data to the cache: if ram-cache is full, cache will skip this write.
>> Hm, OK... But deciding for each request how much priority it should get
>> in a potential cache node seems like an awful lot of work. Well, I
>> don't even know what kind of requests you would deem unnecessary. If it
>> has something to do with the state of a dirty bitmap, then having global
>> dirty bitmaps might remove the need for such a request flag.
>
> Yes, if we have some "shared fleecing object", accessible by
> fleecing-hook filter,
> fleecing-cache filter (and backup job, if it is an internal backup),
> we don't need
> such flag.
>
>>
>> [...]
>>
>>>> Hm. So what you want here is a special block driver or at least a
>>>> special interface that can give information to an outside tool, namely
>>>> the information you listed above.
>>>>
>>>> If you want information about RAM-cached clusters, well, you can only
>>>> get that information from the RAM cache driver. It probably would be
>>>> allocation information, do we have any way of getting that out?
>>>>
>>>> It seems you can get all of that (zero information and allocation
>>>> information) over NBD. Would that be enough?
>>> it's the most generic and clean way, but I'm not sure that it will be
>>> performance-effective.
>> Intuitively I'd agree, but I suppose if NBD is written right, such a
>> request should be very fast and the response basically just consists of
>> the allocation information, so I don't suspect it can be much faster
>> than that.
>>
>> (Unless you want some form of interrupts. I suppose NBD would be the
>> wrong interface, then.)
>
> Yes, for external backup through NBD it's OK to get block status, but
> for internal backup it seems faster to access a shared fleecing object
> (or global bitmaps, etc).
>
> However, if we have some shared fleecing object, it's not a problem to
> export it as block-status metadata through the NBD export.
>
>>
>> [...]
>>
>>>>> I need several features, which are hard to implement using current
>>>>> scheme.
>>>>>
>>>>> 1. The scheme when we have a local cache as COW target and slow
>>>>> remote
>>>>> backup target.
>>>>> How to do it now? Using two backups, one with sync=none... Not
>>>>> sure that
>>>>> this is right way.
>>>> If it works...
>>>>
>>>> (I'd rather build simple building blocks that you can put together
>>>> than
>>>> something complicated that works for a specific solution)
>>> exactly, I want to implement simple building blocks = filter nodes,
>>> instead of implementing all the features in backup job.
>> Good, good. :-)
>>
>>>>> 3. Then,
>>>>> we'll need a possibility for backup(sync=none) to
>>>>> not COW clusters, which are already copied to backup, and so on.
>>>> Isn't that the same as 2?
>>> We can use one bitmap for 2 and 3, and drop bits from it, when
>>> external-tool has read corresponding cluster from nbd-fleecing-export..
>> Oh, right, it needs to be modifiable from the outside. I suppose that
>> would be possible in NBD, too. (But I don't know exactly.)
>
> I think it's natural to implement it through a discard operation on the
> fleecing-cache node: if the fleecing user discards something, it will
> not read it again, so we can drop it from the cache and clear the bit
> in the shared bitmap.
>
> Then we can improve it by adding a READ_ONCE flag, per READ command or
> for the whole connection, to discard data after each read. Or pass
> this flag to bdrv_read, to handle it in one command.
>
>>
>> [...]
>>
>>>>>> I don't think that will be any simpler.
>>>>>>
>>>>>> I mean, it would make blockdev-copy simpler, because we could
>>>>>> immediately replace backup by mirror, and then we just have mirror,
>>>>>> which would then automatically become blockdev-copy...
>>>>>>
>>>>>> But it's not really going to be simpler, because whether you put the
>>>>>> copy-before-write logic into a dedicated block driver, or into the
>>>>>> backup filter driver, doesn't really make it simpler either way.
>>>>>> Well,
>>>>>> adding a new driver always is a bit more complicated, so there's
>>>>>> that.
>>>>> what is the difference between separate filter driver and backup
>>>>> filter
>>>>> driver?
>>>> I thought we already had a backup filter node, so you wouldn't have
>>>> had
>>>> to create a new driver in that case.
>>>>
>>>> But we don't, so there really is no difference. Well, apart from
>>>> being
>>>> able to share state easier when the driver is in the same file as
>>>> the job.
>>> But if we make it separate - it will be a separate "building block" to
>>> be reused in different schemes.
>> Absolutely true.
>>
>>>>>>> it should not care about guest writes, it copies clusters from a
>>>>>>> kind of
>>>>>>> snapshot which is not changing in time. This job should follow
>>>>>>> recommendations
>>>>>>> from fleecing scheme [7].
>>>>>>>
>>>>>>> What about the target?
>>>>>>>
>>>>>>> We can use separate node as target, and copy from fleecing cache
>>>>>>> to the target.
>>>>>>> If we have only ram-cache, it would be equal to current approach
>>>>>>> (data is copied
>>>>>>> directly to the target, even on COW). If we have both ram- and
>>>>>>> disk- caches, it's
>>>>>>> a cool solution for slow-target: instead of make guest wait for
>>>>>>> long write to
>>>>>>> backup target (when ram-cache is full) we can write to
>>>>>>> disk-cache which is local
>>>>>>> and fast.
>>>>>> Or you backup to a fast overlay over a slow target, and run a live
>>>>>> commit on the side.
>>>>> I think it will lead to larger io overhead: all clusters will go
>>>>> through
>>>>> overlay, not only guest-written clusters, for which we did not
>>>>> have time
>>>>> to copy them..
>>>> Well, and it probably makes sense to have some form of RAM-cache
>>>> driver.
>>>> Then that'd be your fast overlay.
>>> but there are no reasons to copy all the data through the cache: we need it
>>> only for CBW.
>> Well, if there'd be a RAM-cache driver, you may use it for anything that
>> seems useful (I seem to remember there were some patches on the list
>> like three or four years ago...).
>>
>>> any way, I think it will be good if both schemes will be possible.
>>>
>>>>>>> Another option is to combine fleecing cache and target somehow
>>>>>>> (I didn't think
>>>>>>> about this really).
>>>>>>>
>>>>>>> Finally, with one - two (three?) special filters we can
>>>>>>> implement all current
>>>>>>> fleecing/backup schemes in unique and very configurable way and
>>>>>>> do a lot more
>>>>>>> cool features and possibilities.
>>>>>>>
>>>>>>> What do you think?
>>>>>> I think adding a specific fleecing target filter makes sense
>>>>>> because you
>>>>>> gave many reasons for interesting new use cases that could emerge
>>>>>> from that.
>>>>>>
>>>>>> But I think adding a new fleecing-hook driver just means moving the
>>>>>> implementation from backup to that new driver.
>>>>> But at the same time you say that it's ok to create a backup-filter
>>>>> (instead of write_notifier) and make it insertable by QAPI? So, if I
>>>>> implement it in block/backup, it's ok? Why not do it separately?
>>>> Because I thought we had it already. But we don't. So feel free
>>>> to do
>>>> it separately. :-)
>>> Ok, that's good :) . Then, I'll try to reuse the filter in backup
>>> instead of write-notifiers, and figure out whether we really need the
>>> internal state of the backup block-job or not.
>>>
>>>> Max
>>>>
>>> PS: in background, I have unpublished work, aimed to parallelize
>>> backup-job into several coroutines (like it is done for mirror,
>>> qemu-img
>>> clone cmd). And it's really hard. It creates queues of requests with
>>> different priority, to handle CBW requests in common pipeline, it's
>>> mostly a rewrite of block/backup. If we split CBW from backup to
>>> separate filter-node, backup becomes very simple thing (copy clusters
>>> from constant storage) and its parallelization becomes simpler.
>> If CBW is split from backup, maybe mirror could replace backup
>> immediately. You'd fleece to a RAM cache target and then mirror from
>> there.
>
> Hmm, good option. It would be just one mirror iteration.
> But then I'll need to teach mirror to copy clusters with some
> priorities, to avoid RAM-cache overloading (and guest I/O hangs).
> It may be better to have a separate, simple (much simpler than mirror)
> block job for it, or to use backup. Anyway, it's a separate building
> block; a performance comparison will show the better candidate.
>
>>
>> (To be precise: The exact replacement would be an active mirror, so a
>> mirror with copy-mode=write-blocking, so it immediately writes the old
>> block to the target when it is changed in the source, and thus the RAM
>> cache could stay effectively empty.)
>
> Hmm, or that way. So, for such a scheme we actually need a cache node
> which does absolutely nothing; writes will be handled by the mirror
> job itself. But in this case we can't control the size of the actual
> RAM cache: if the target is slow, we will accumulate unfinished
> bdrv_mirror_top_pwritev calls, which have allocated memory and are
> waiting in a queue for a mirror coroutine to be created.
Oh, sorry, no: active mirror copies data synchronously on write, so it
really should be the same copy pattern as in backup.
>
>>
>>> I don't say throw the backup away, but I have several ideas, which may
>>> alter current approach. They may live in parallel with current backup
>>> path, or replace it in future, if they will be more effective.
>> Thing is, contrary to the impression I've probably given, we do want to
>> throw away backup sooner or later. We want a single block job
>> (blockdev-copy) that unifies mirror, backup, and commit.
>>
>> (mirror already basically supersedes commit, with live commit just being
>> exactly mirror; the main problem is integrating backup. But with a
>> fleecing node and a RAM cache target, that would suddenly be really
>> simple, I assume.)
>>
>> ((All that's missing is sync=top, where the mirror would need to not
>> only check its source (which would be the RAM cache), but also its
>> backing file; and sync=incremental, which just isn't there with mirror
>> at all. OTOH, it may be possible to implement both modes simply in the
>> fleecing/backup node, so it only copies that respective data to the
>> target and the mirror simply sees nothing else.))
>
> Good idea. If we have the fleecing-cache node as a "view" or "export", we
> can export only selected portions of the data, marking the rest as
> unallocated. Or we need to share bitmaps (global bitmaps, shared
> fleecing state, etc.) with the block job.
>
>>
>> Max
>>
>
>
--
Best regards,
Vladimir
end of thread, other threads:[~2018-08-21 9:31 UTC | newest]
Thread overview: 15+ messages
2018-08-14 17:01 [Qemu-devel] [RFC v2] new, node-graph-based fleecing and backup Vladimir Sementsov-Ogievskiy
2018-08-16 15:05 ` no-reply
2018-08-16 17:28 ` Vladimir Sementsov-Ogievskiy
2018-08-16 17:58 ` Eric Blake
2018-08-16 15:09 ` no-reply
2018-08-17 18:21 ` Vladimir Sementsov-Ogievskiy
2018-08-17 20:56 ` no-reply
2018-08-17 21:01 ` no-reply
2018-08-17 21:50 ` Max Reitz
2018-08-20 9:42 ` Vladimir Sementsov-Ogievskiy
2018-08-20 13:32 ` Max Reitz
2018-08-20 14:49 ` Vladimir Sementsov-Ogievskiy
2018-08-20 17:25 ` Max Reitz
2018-08-20 18:30 ` Vladimir Sementsov-Ogievskiy
2018-08-21 9:29 ` Vladimir Sementsov-Ogievskiy