* [PATCH v2 0/3] job: Add job_wait_unpaused() for block-job-complete
From: Max Reitz @ 2021-04-09 13:29 UTC
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel,
	Max Reitz

Hi,

v1:
https://lists.nongnu.org/archive/html/qemu-block/2021-04/msg00215.html

Alternative:
https://lists.nongnu.org/archive/html/qemu-block/2021-04/msg00261.html


Compared to v1, I’ve added an aio_wait_kick() to job_pause_point() (as
suggested by Kevin) and adjusted the error message on job->user_paused
(as suggested by John).  I’ve kept the job->pause_count > 0 block
because of the concern about nested event loops that Kevin raised, and
I’ve expanded its comment to (hopefully) explain the problem.

Also note the new paragraph in patch 1’s commit message, which explains
that we’d ideally want block-job-complete to be a coroutine QMP handler
so we could yield instead of polling.


Furthermore, I’ve added the flaky test that I’ve also appended to the
alternative series.  On master, it fails somewhere between 20 and 50
times out of 100 runs for me; it may not reproduce the problem for you
at all.


git-backport-diff against v1:

Key:
[----] : patches are identical
[####] : number of functional differences between upstream/downstream patch
[down] : patch is downstream-only
The flags [FC] indicate (F)unctional and (C)ontextual differences, respectively

001/4:[0015] [FC] 'job: Add job_wait_unpaused() for block-job-complete'
002/4:[----] [--] 'test-blockjob: Test job_wait_unpaused()'
003/4:[----] [--] 'iotests/041: block-job-complete on user-paused job'
004/4:[down] 'iotests: Test completion immediately after drain'



Max Reitz (4):
  job: Add job_wait_unpaused() for block-job-complete
  test-blockjob: Test job_wait_unpaused()
  iotests/041: block-job-complete on user-paused job
  iotests: Test completion immediately after drain

 include/qemu/job.h                            |  15 ++
 blockdev.c                                    |   3 +
 job.c                                         |  53 +++++++
 tests/unit/test-blockjob.c                    | 140 ++++++++++++++++++
 tests/qemu-iotests/041                        |  13 +-
 .../tests/mirror-complete-after-drain         |  89 +++++++++++
 .../tests/mirror-complete-after-drain.out     |  14 ++
 7 files changed, 326 insertions(+), 1 deletion(-)
 create mode 100755 tests/qemu-iotests/tests/mirror-complete-after-drain
 create mode 100644 tests/qemu-iotests/tests/mirror-complete-after-drain.out

-- 
2.29.2




* [PATCH v2 1/3] job: Add job_wait_unpaused() for block-job-complete
From: Max Reitz @ 2021-04-09 13:29 UTC
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel,
	Max Reitz

block-job-complete can only be applied when the job is READY, not when
it is on STANDBY (ready, but paused).  Draining a job technically pauses
it (which makes a READY job enter STANDBY), and ending the drained
section does not synchronously resume it, but only schedules the job,
which will then be resumed.  So attempting to complete a job immediately
after a drained section may sometimes fail.
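
For illustration, here is roughly what the failing sequence looks like
at the QMP level, sketched with the Python iotests API.  The vm object
and the job name are assumptions for the example; the empty transaction
drains all nodes, just like in the new iotest appended to this series:

    # Assume an iotests VM with a mirror job 'mirror0' that has
    # already reached READY.  The empty transaction drains all
    # nodes, which pauses the job (STANDBY); ending the drain only
    # schedules the resume.
    vm.qmp('transaction', actions=[])
    result = vm.qmp('block-job-complete', device='mirror0')
    # Without this patch, result may be an error, because the job
    # can still be on STANDBY rather than READY at this point.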

That is bad, not least because users cannot really work around it
nicely: A job may be paused and resumed at any time, so waiting for the
job to be in the READY state and then issuing block-job-complete poses
a TOCTTOU problem.  The only workaround would be to retry
block-job-complete until it no longer fails because the job is in the
STANDBY state, but that would not be nice.
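
For illustration, such a client-side retry loop would have to look
something like this hypothetical helper (a sketch using the Python
iotests API; none of this exists in the tree):

    import time

    def complete_with_retry(vm, device, retries=50):
        # Hypothetical workaround: retry until the job has left
        # STANDBY.  Inherently racy (the TOCTTOU problem above), and
        # exactly what job_wait_unpaused() is meant to avoid.
        for _ in range(retries):
            result = vm.qmp('block-job-complete', device=device)
            if 'return' in result:
                return result
            time.sleep(0.1)  # wait for the job to be resumed
        raise TimeoutError(f"Job '{device}' still on STANDBY")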

We can solve the problem by allowing block-job-complete to be invoked on
jobs that are on STANDBY, if that status is the result of a drained
section (not because the user has paused the job), and that section has
ended.  That is, if the job is on STANDBY, but scheduled to be resumed.

Perhaps we could actually just directly allow this, seeing that mirror
is the only user of ready/complete, and that mirror_complete() could
probably work under the given circumstances, but there may be many side
effects to consider.

It is simpler to add a function job_wait_unpaused() that waits for the
job to be resumed (under said circumstances), and to make
qmp_block_job_complete() use it to delay job_complete() until then.

Note that in the future, we probably want to make block-job-complete a
coroutine QMP handler, so that instead of polling, job_wait_unpaused()
would yield and have job_pause_point() wake it up.  That would get
around the problem of nested polling, which is currently the reason for
returning an error when job->pause_count > 0.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1945635
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 include/qemu/job.h | 15 +++++++++++++
 blockdev.c         |  3 +++
 job.c              | 53 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index efc6fa7544..cf3082b6d7 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -563,4 +563,19 @@ void job_dismiss(Job **job, Error **errp);
  */
 int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp);
 
+/**
+ * If the job has been paused because of a drained section, and that
+ * section has ended, wait until the job is resumed.
+ *
+ * Return 0 if the job is not paused, or if it has been successfully
+ * resumed.
+ * Return an error if the job has been paused in such a way that
+ * waiting will not resume it, i.e. if it has been paused by the user,
+ * or if it is still drained.
+ *
+ * Callers must be in the home AioContext and hold the AioContext lock
+ * of job->aio_context.
+ */
+int job_wait_unpaused(Job *job, Error **errp);
+
 #endif
diff --git a/blockdev.c b/blockdev.c
index a57590aae4..c0cc2fa364 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3414,6 +3414,9 @@ void qmp_block_job_complete(const char *device, Error **errp)
         return;
     }
 
+    if (job_wait_unpaused(&job->job, errp) < 0) {
+        return;
+    }
     trace_qmp_block_job_complete(job);
     job_complete(&job->job, errp);
     aio_context_release(aio_context);
diff --git a/job.c b/job.c
index 289edee143..fbccd4b32a 100644
--- a/job.c
+++ b/job.c
@@ -505,6 +505,7 @@ void coroutine_fn job_pause_point(Job *job)
         job_do_yield(job, -1);
         job->paused = false;
         job_state_transition(job, status);
+        aio_wait_kick();
     }
 
     if (job->driver->resume) {
@@ -1023,3 +1024,55 @@ int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp)
     job_unref(job);
     return ret;
 }
+
+int job_wait_unpaused(Job *job, Error **errp)
+{
+    /*
+     * Only run this function from the main context, because this is
+     * what we need, and this way we do not have to think about what
+     * happens if the user concurrently pauses the job from the main
+     * monitor.
+     */
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+
+    /*
+     * Quick path (e.g. so we do not get an error if pause_count > 0
+     * but the job is not even paused)
+     */
+    if (!job->paused) {
+        return 0;
+    }
+
+    /* If the user has paused the job, waiting will not help */
+    if (job->user_paused) {
+        error_setg(errp, "Job '%s' has been paused and needs to be explicitly "
+                   "resumed", job->id);
+        return -EBUSY;
+    }
+
+    /*
+     * Similarly, if the job is still drained, waiting may not help
+     * either: If the drain came from an IO thread, waiting would
+     * probably work.  However, if the drain came from the main thread
+     * (which may be possible if the QMP handler calling this function
+     * has been invoked by BDRV_POLL_WHILE() from a drain_begin), then
+     * waiting would only deadlock.
+     * Better be safe and return an error.  Drains from IO threads
+     * probably do not occur anyway.
+     */
+    if (job->pause_count > 0) {
+        error_setg(errp, "Job '%s' is blocked and cannot be unpaused", job->id);
+        return -EBUSY;
+    }
+
+    /*
+     * This function is specifically for waiting for a job to be
+     * resumed after a drained section.  Ending the drained section
+     * includes a job_enter(), which schedules the job loop to be run,
+     * and once it does, job->paused will be cleared.  Therefore, we
+     * do not need to invoke job_enter() here.
+     */
+    AIO_WAIT_WHILE(job->aio_context, job->paused);
+
+    return 0;
+}
-- 
2.29.2




* [PATCH v2 2/3] test-blockjob: Test job_wait_unpaused()
From: Max Reitz @ 2021-04-09 13:29 UTC
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel,
	Max Reitz

Create a job that remains on STANDBY after a drained section, and see
that invoking job_wait_unpaused() will get it unstuck.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/unit/test-blockjob.c | 140 +++++++++++++++++++++++++++++++++++++
 1 file changed, 140 insertions(+)

diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index 7519847912..b7736e298d 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -16,6 +16,7 @@
 #include "block/blockjob_int.h"
 #include "sysemu/block-backend.h"
 #include "qapi/qmp/qdict.h"
+#include "iothread.h"
 
 static const BlockJobDriver test_block_job_driver = {
     .job_driver = {
@@ -375,6 +376,144 @@ static void test_cancel_concluded(void)
     cancel_common(s);
 }
 
+/* (See test_yielding_driver for the job description) */
+typedef struct YieldingJob {
+    BlockJob common;
+    bool should_complete;
+} YieldingJob;
+
+static void yielding_job_complete(Job *job, Error **errp)
+{
+    YieldingJob *s = container_of(job, YieldingJob, common.job);
+    s->should_complete = true;
+    job_enter(job);
+}
+
+static int coroutine_fn yielding_job_run(Job *job, Error **errp)
+{
+    YieldingJob *s = container_of(job, YieldingJob, common.job);
+
+    job_transition_to_ready(job);
+
+    while (!s->should_complete) {
+        job_yield(job);
+    }
+
+    return 0;
+}
+
+/*
+ * This job transitions immediately to the READY state, and then
+ * yields until it is told to complete.
+ */
+static const BlockJobDriver test_yielding_driver = {
+    .job_driver = {
+        .instance_size  = sizeof(YieldingJob),
+        .free           = block_job_free,
+        .user_resume    = block_job_user_resume,
+        .run            = yielding_job_run,
+        .complete       = yielding_job_complete,
+    },
+};
+
+/*
+ * Test that job_wait_unpaused() can get jobs from a paused state to
+ * a running state so that job_complete() can be applied (assuming the
+ * pause occurred due to a drain that has already been lifted).
+ * (This is what QMP's block-job-complete does so it can be executed
+ * even immediately after some other operation started and ended a
+ * drained section.)
+ *
+ * To do this, run YieldingJob in an IO thread, get it into the READY
+ * state, then have a drained section.  Before ending the section,
+ * acquire the context so the job will not be entered and will thus
+ * remain on STANDBY.
+ *
+ * Invoking job_complete() then will fail.
+ *
+ * However, job_wait_unpaused() should see the job is to be resumed,
+ * wait for it to be resumed, and then we can invoke job_complete()
+ * without error.
+ *
+ * Note that on the QMP interface, it is impossible to lock an IO
+ * thread before a drained section ends.  In practice, the
+ * bdrv_drain_all_end() and the aio_context_acquire() will be
+ * reversed.  However, that makes for worse reproducibility here:
+ * Sometimes, the job would no longer be in STANDBY then but already
+ * be running.  We cannot prevent that, because the IO thread runs
+ * concurrently.  We can only prevent it by taking the lock before
+ * ending the drained section, so we do that.
+ *
+ * (You can reverse the order of operations and most of the time the
+ * test will pass, but sometimes the assert(status == STANDBY) will
+ * fail.)
+ */
+static void test_complete_in_standby(void)
+{
+    BlockBackend *blk;
+    IOThread *iothread;
+    AioContext *ctx;
+    Job *job;
+    BlockJob *bjob;
+    Error *local_err = NULL;
+
+    /* Create a test drive, move it to an IO thread */
+    blk = create_blk(NULL);
+    iothread = iothread_new();
+
+    ctx = iothread_get_aio_context(iothread);
+    blk_set_aio_context(blk, ctx, &error_abort);
+
+    /* Create our test job */
+    bjob = mk_job(blk, "job", &test_yielding_driver, true,
+                  JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
+    job = &bjob->job;
+    assert(job->status == JOB_STATUS_CREATED);
+
+    /* Wait for the job to become READY */
+    job_start(job);
+    aio_context_acquire(ctx);
+    AIO_WAIT_WHILE(ctx, job->status != JOB_STATUS_READY);
+    aio_context_release(ctx);
+
+    /* Begin the drained section, pausing the job */
+    bdrv_drain_all_begin();
+    assert(job->status == JOB_STATUS_STANDBY);
+    /* Lock the IO thread to prevent the job from being run */
+    aio_context_acquire(ctx);
+    /* This will schedule the job to resume it */
+    bdrv_drain_all_end();
+
+    /* But the job cannot run, so it will remain on standby */
+    assert(job->status == JOB_STATUS_STANDBY);
+
+    /* A job on standby cannot be completed */
+    job_complete(job, &local_err);
+    assert(local_err != NULL);
+    error_free(local_err);
+    local_err = NULL;
+
+    /*
+     * But waiting for it and then completing it should work.
+     * (This is what qmp_block_job_complete() does.)
+     */
+    job_wait_unpaused(job, &error_abort);
+    job_complete(job, &error_abort);
+
+    /* The test is done now, clean up. */
+    job_finish_sync(job, NULL, &error_abort);
+    assert(job->status == JOB_STATUS_PENDING);
+
+    job_finalize(job, &error_abort);
+    assert(job->status == JOB_STATUS_CONCLUDED);
+
+    job_dismiss(&job, &error_abort);
+
+    destroy_blk(blk);
+    aio_context_release(ctx);
+    iothread_join(iothread);
+}
+
 int main(int argc, char **argv)
 {
     qemu_init_main_loop(&error_abort);
@@ -389,5 +528,6 @@ int main(int argc, char **argv)
     g_test_add_func("/blockjob/cancel/standby", test_cancel_standby);
     g_test_add_func("/blockjob/cancel/pending", test_cancel_pending);
     g_test_add_func("/blockjob/cancel/concluded", test_cancel_concluded);
+    g_test_add_func("/blockjob/complete_in_standby", test_complete_in_standby);
     return g_test_run();
 }
-- 
2.29.2




* [PATCH v2 3/3] iotests/041: block-job-complete on user-paused job
From: Max Reitz @ 2021-04-09 13:29 UTC
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel,
	Max Reitz

Expand test_pause() to check what happens when issuing
block-job-complete on a job that is on STANDBY because it has been
paused by the user.  (This should be an error, and in particular not
hang job_wait_unpaused().)

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/041 | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
index 5cc02b24fc..d2c9669741 100755
--- a/tests/qemu-iotests/041
+++ b/tests/qemu-iotests/041
@@ -120,7 +120,18 @@ class TestSingleDrive(iotests.QMPTestCase):
         result = self.vm.qmp('block-job-resume', device='drive0')
         self.assert_qmp(result, 'return', {})
 
-        self.complete_and_wait()
+        self.wait_ready()
+
+        # Check that a job on STANDBY cannot be completed
+        self.pause_job('drive0')
+        result = self.vm.qmp('block-job-complete', device='drive0')
+        self.assert_qmp(result, 'error/desc',
+                        "Job 'drive0' has been paused by the user")
+
+        result = self.vm.qmp('block-job-resume', device='drive0')
+        self.assert_qmp(result, 'return', {})
+
+        self.complete_and_wait(wait_ready=False)
         self.vm.shutdown()
         self.assertTrue(iotests.compare_images(test_img, target_img),
                         'target image does not match source after mirroring')
-- 
2.29.2




* [PATCH v2 4/3] iotests: Test completion immediately after drain
From: Max Reitz @ 2021-04-09 13:29 UTC
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel,
	Max Reitz

Test what happens when you have multiple busy block jobs, drain all (via
an empty transaction), and immediately issue a block-job-complete on one
of the jobs.

Sometimes it will still be in STANDBY, in which case block-job-complete
used to fail.  It should not.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 .../tests/mirror-complete-after-drain         | 89 +++++++++++++++++++
 .../tests/mirror-complete-after-drain.out     | 14 +++
 2 files changed, 103 insertions(+)
 create mode 100755 tests/qemu-iotests/tests/mirror-complete-after-drain
 create mode 100644 tests/qemu-iotests/tests/mirror-complete-after-drain.out

diff --git a/tests/qemu-iotests/tests/mirror-complete-after-drain b/tests/qemu-iotests/tests/mirror-complete-after-drain
new file mode 100755
index 0000000000..b096ffbcb4
--- /dev/null
+++ b/tests/qemu-iotests/tests/mirror-complete-after-drain
@@ -0,0 +1,89 @@
+#!/usr/bin/env python3
+# group: rw
+#
+# Tests for block-job-complete immediately after a drain
+#
+# Copyright (c) 2021 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+import iotests
+
+iotests.script_initialize(supported_fmts=['raw'])
+
+DISK_JOBS = 4
+NULL_JOBS = 1
+
+
+# We cannot auto-generate these in a loop because the files are
+# deleted when their scope ends
+src_imgs = iotests.file_path('src0', 'src1', 'src2', 'src3')
+dst_imgs = iotests.file_path('dst0', 'dst1', 'dst2', 'dst3')
+
+assert len(src_imgs) == DISK_JOBS
+assert len(dst_imgs) == DISK_JOBS
+
+
+for i in range(DISK_JOBS):
+    ret = iotests.qemu_img('create', '-f', iotests.imgfmt, src_imgs[i], '128M')
+    assert ret == 0
+
+    ret = iotests.qemu_img('create', '-f', iotests.imgfmt, dst_imgs[i], '128M')
+    assert ret == 0
+
+with iotests.VM() as vm:
+    vm.add_object('iothread,id=iothr0')
+    vm.add_device('virtio-scsi,iothread=iothr0')
+
+    for i in range(DISK_JOBS):
+        vm.add_blockdev(f'file,node-name=source-disk-{i},'
+                        f'filename={src_imgs[i]}')
+
+        vm.add_blockdev(f'file,node-name=target-disk-{i},'
+                        f'filename={dst_imgs[i]}')
+
+        vm.add_device(f'scsi-hd,id=device-disk-{i},drive=source-disk-{i}')
+
+    for i in range(NULL_JOBS):
+        vm.add_blockdev(f'null-co,node-name=source-null-{i},read-zeroes=on')
+        vm.add_blockdev(f'null-co,node-name=target-null-{i},read-zeroes=on')
+        vm.add_device(f'scsi-hd,id=device-null-{i},drive=source-null-{i}')
+
+    vm.launch()
+
+    for i in range(DISK_JOBS):
+        vm.qmp_log('blockdev-mirror',
+                   job_id=f'mirror-disk-{i}',
+                   device=f'source-disk-{i}',
+                   target=f'target-disk-{i}',
+                   sync='full',
+                   granularity=1048576,
+                   buf_size=(16 * 1048576))
+
+    for i in range(NULL_JOBS):
+        vm.qmp_log('blockdev-mirror',
+                   job_id=f'mirror-null-{i}',
+                   device=f'source-null-{i}',
+                   target=f'target-null-{i}',
+                   sync='full')
+
+    for i in range(DISK_JOBS + NULL_JOBS):
+        vm.event_wait('BLOCK_JOB_READY')
+
+    for i in range(DISK_JOBS):
+        vm.hmp(f'qemu-io -d device-disk-{i} "write 0 128M"')
+
+    vm.qmp_log('transaction', actions=[])
+    vm.qmp_log('block-job-complete', device='mirror-null-0')
diff --git a/tests/qemu-iotests/tests/mirror-complete-after-drain.out b/tests/qemu-iotests/tests/mirror-complete-after-drain.out
new file mode 100644
index 0000000000..4d9d0529fe
--- /dev/null
+++ b/tests/qemu-iotests/tests/mirror-complete-after-drain.out
@@ -0,0 +1,14 @@
+{"execute": "blockdev-mirror", "arguments": {"buf-size": 16777216, "device": "source-disk-0", "granularity": 1048576, "job-id": "mirror-disk-0", "sync": "full", "target": "target-disk-0"}}
+{"return": {}}
+{"execute": "blockdev-mirror", "arguments": {"buf-size": 16777216, "device": "source-disk-1", "granularity": 1048576, "job-id": "mirror-disk-1", "sync": "full", "target": "target-disk-1"}}
+{"return": {}}
+{"execute": "blockdev-mirror", "arguments": {"buf-size": 16777216, "device": "source-disk-2", "granularity": 1048576, "job-id": "mirror-disk-2", "sync": "full", "target": "target-disk-2"}}
+{"return": {}}
+{"execute": "blockdev-mirror", "arguments": {"buf-size": 16777216, "device": "source-disk-3", "granularity": 1048576, "job-id": "mirror-disk-3", "sync": "full", "target": "target-disk-3"}}
+{"return": {}}
+{"execute": "blockdev-mirror", "arguments": {"device": "source-null-0", "job-id": "mirror-null-0", "sync": "full", "target": "target-null-0"}}
+{"return": {}}
+{"execute": "transaction", "arguments": {"actions": []}}
+{"return": {}}
+{"execute": "block-job-complete", "arguments": {"device": "mirror-null-0"}}
+{"return": {}}
-- 
2.29.2




* Re: [PATCH v2 3/3] iotests/041: block-job-complete on user-paused job
From: Max Reitz @ 2021-04-09 13:45 UTC
  To: qemu-block
  Cc: Kevin Wolf, Vladimir Sementsov-Ogievskiy, John Snow, qemu-devel

On 09.04.21 15:29, Max Reitz wrote:
> Expand test_pause() to check what happens when issuing
> block-job-complete on a job that is on STANDBY because it has been
> paused by the user.  (This should be an error, and in particular not
> hang job_wait_unpaused().)
> 
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
>   tests/qemu-iotests/041 | 13 ++++++++++++-
>   1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/qemu-iotests/041 b/tests/qemu-iotests/041
> index 5cc02b24fc..d2c9669741 100755
> --- a/tests/qemu-iotests/041
> +++ b/tests/qemu-iotests/041
> @@ -120,7 +120,18 @@ class TestSingleDrive(iotests.QMPTestCase):
>           result = self.vm.qmp('block-job-resume', device='drive0')
>           self.assert_qmp(result, 'return', {})
>   
> -        self.complete_and_wait()
> +        self.wait_ready()
> +
> +        # Check that a job on STANDBY cannot be completed
> +        self.pause_job('drive0')
> +        result = self.vm.qmp('block-job-complete', device='drive0')
> +        self.assert_qmp(result, 'error/desc',
> +                        "Job 'drive0' has been paused by the user")

Oops.  Should now be

"Job 'drive0' has been paused and needs to be explicitly resumed"

of course.

Max

> +
> +        result = self.vm.qmp('block-job-resume', device='drive0')
> +        self.assert_qmp(result, 'return', {})
> +
> +        self.complete_and_wait(wait_ready=False)
>           self.vm.shutdown()
>           self.assertTrue(iotests.compare_images(test_img, target_img),
>                           'target image does not match source after mirroring')
> 



