* [Qemu-devel] [RFC PATCH 00/26] replay additions
@ 2017-10-31 11:06 Pavel Dovgalyuk
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 01/26] block: implement bdrv_snapshot_goto for blkreplay Pavel Dovgalyuk
                   ` (26 more replies)
  0 siblings, 27 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This set of patches includes fixes from Alex Bennée for the BQL and
replay locks after the introduction of MTTCG. It also includes some
additional replay patches that make this set of fixes work, and it
fixes several vmstate creation (and loading) issues
in record/replay modes:
 - VM start/stop fixes in replay mode
 - overlay creation for blkreplay filter
 - fixes for vmstate save/load in record/replay mode
 - fixes for host clock vmstate
 - fixes for icount timers vmstate

There is also a set of helper scripts written by Alex Bennée
for debugging the record/replay code.
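
For reference, record/replay runs are driven through the icount
options, as described in docs/replay.txt. A typical record/replay
pair of invocations (other machine and disk options elided) is:

  qemu-system-i386 <options> -icount shift=7,rr=record,rrfile=replay.bin
  qemu-system-i386 <options> -icount shift=7,rr=replay,rrfile=replay.bin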

---

Alex Bennée (12):
      target/arm/arm-powerctl: drop BQL assertions
      cpus: push BQL lock to qemu_*_wait_io_event
      cpus: only take BQL for sleeping threads
      replay/replay.c: bump REPLAY_VERSION again
      replay/replay-internal.c: track holding of replay_lock
      replay: make locking visible outside replay code
      replay: push replay_mutex_lock up the call tree
      scripts/qemu-gdb: add simple tcg lock status helper
      util/qemu-thread-*: add qemu_lock, locked and unlock trace events
      scripts/analyse-locks-simpletrace.py: script to analyse lock times
      scripts/replay-dump.py: replay log dumper
      scripts/qemu-gdb/timers.py: new helper to dump timer state

Pavel Dovgalyuk (14):
      block: implement bdrv_snapshot_goto for blkreplay
      blkreplay: create temporary overlay for underlying devices
      replay: disable default snapshot for record/replay
      replay: fix processing async events
      replay: fixed replay_enable_events
      replay: fix save/load vm for non-empty queue
      replay: added replay log format description
      replay: make safe vmstop at record/replay
      replay: save prior value of the host clock
      icount: fixed saving/restoring of icount warp timers
      cpu-exec: don't overwrite exception_index
      cpu-exec: reset exit flag before calling cpu_exec_nocache
      replay: don't destroy mutex at exit
      replay: check return values of fwrite


 accel/kvm/kvm-all.c                  |    4 
 accel/tcg/cpu-exec.c                 |    5 -
 block/blkreplay.c                    |   73 ++++++++
 cpus-common.c                        |   13 +
 cpus.c                               |  149 +++++++++++++---
 docs/replay.txt                      |   88 ++++++++++
 include/qemu/thread.h                |   14 +-
 include/qemu/timer.h                 |   14 ++
 include/sysemu/replay.h              |   19 ++
 migration/savevm.c                   |   13 +
 replay/replay-char.c                 |   21 +-
 replay/replay-events.c               |   30 +--
 replay/replay-internal.c             |   26 +++
 replay/replay-internal.h             |    9 +
 replay/replay-snapshot.c             |    9 +
 replay/replay-time.c                 |   10 +
 replay/replay.c                      |   43 ++---
 scripts/analyse-locks-simpletrace.py |   99 +++++++++++
 scripts/qemu-gdb.py                  |    4 
 scripts/qemugdb/tcg.py               |   46 +++++
 scripts/qemugdb/timers.py            |   54 ++++++
 scripts/replay-dump.py               |  308 ++++++++++++++++++++++++++++++++++
 stubs/replay.c                       |   16 ++
 target/arm/arm-powerctl.c            |    8 -
 target/i386/hax-all.c                |    3 
 util/main-loop.c                     |   23 ++-
 util/qemu-thread-posix.c             |   21 +-
 util/qemu-timer.c                    |   12 +
 util/trace-events                    |    7 -
 vl.c                                 |   12 +
 30 files changed, 1014 insertions(+), 139 deletions(-)
 create mode 100755 scripts/analyse-locks-simpletrace.py
 create mode 100644 scripts/qemugdb/tcg.py
 create mode 100644 scripts/qemugdb/timers.py
 create mode 100755 scripts/replay-dump.py

-- 
Pavel Dovgalyuk

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 01/26] block: implement bdrv_snapshot_goto for blkreplay
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
@ 2017-10-31 11:06 ` Pavel Dovgalyuk
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 02/26] blkreplay: create temporary overlay for underlying devices Pavel Dovgalyuk
                   ` (25 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

This patch enables making snapshots of block devices that use
blkreplay.
The function is required so that bdrv_snapshot_goto works
without calling .bdrv_open, which is not implemented.
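
For context, blkreplay is stacked on top of a disk image as described
in docs/replay.txt, e.g.:

  -drive file=disk.qcow2,if=none,id=img-direct
  -drive driver=blkreplay,if=none,image=img-direct,id=img-blkreplay
  -device ide-hd,drive=img-blkreplay

so the new callback simply forwards the snapshot request to the
underlying node (bs->file->bs).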

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 block/blkreplay.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/block/blkreplay.c b/block/blkreplay.c
index 61e44a1..82a88f8 100755
--- a/block/blkreplay.c
+++ b/block/blkreplay.c
@@ -127,6 +127,12 @@ static int coroutine_fn blkreplay_co_flush(BlockDriverState *bs)
     return ret;
 }
 
+static int blkreplay_snapshot_goto(BlockDriverState *bs,
+                                   const char *snapshot_id)
+{
+    return bdrv_snapshot_goto(bs->file->bs, snapshot_id);
+}
+
 static BlockDriver bdrv_blkreplay = {
     .format_name            = "blkreplay",
     .protocol_name          = "blkreplay",
@@ -143,6 +149,8 @@ static BlockDriver bdrv_blkreplay = {
     .bdrv_co_pwrite_zeroes  = blkreplay_co_pwrite_zeroes,
     .bdrv_co_pdiscard       = blkreplay_co_pdiscard,
     .bdrv_co_flush          = blkreplay_co_flush,
+
+    .bdrv_snapshot_goto     = blkreplay_snapshot_goto,
 };
 
 static void bdrv_blkreplay_init(void)

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 02/26] blkreplay: create temporary overlay for underlying devices
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 01/26] block: implement bdrv_snapshot_goto for blkreplay Pavel Dovgalyuk
@ 2017-10-31 11:06 ` Pavel Dovgalyuk
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 03/26] replay: disable default snapshot for record/replay Pavel Dovgalyuk
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

This patch allows '-snapshot'-like behavior in record/replay mode:
the blkreplay layer creates temporary overlays on top of the
underlying disk images. This is needed because creating an overlay
on top of blkreplay breaks determinism.
This patch instead creates a similar temporary overlay (when needed)
beneath the blkreplay driver, so all block operations remain under
blkreplay's control (see the node-chain sketch below).
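
The resulting node chain, as a sketch:

  guest device -> blkreplay -> temporary qcow2 overlay -> disk image

Writes land in the temporary overlay, while blkreplay still observes
and logs every request, keeping the run deterministic.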

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 block/blkreplay.c |   65 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 stubs/replay.c    |    1 +
 vl.c              |    2 +-
 3 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/block/blkreplay.c b/block/blkreplay.c
index 82a88f8..c489d50 100755
--- a/block/blkreplay.c
+++ b/block/blkreplay.c
@@ -14,12 +14,69 @@
 #include "block/block_int.h"
 #include "sysemu/replay.h"
 #include "qapi/error.h"
+#include "qapi/qmp/qstring.h"
 
 typedef struct Request {
     Coroutine *co;
     QEMUBH *bh;
 } Request;
 
+static BlockDriverState *blkreplay_append_snapshot(BlockDriverState *bs,
+                                                   Error **errp)
+{
+    int ret;
+    BlockDriverState *bs_snapshot;
+    int64_t total_size;
+    QemuOpts *opts = NULL;
+    char tmp_filename[PATH_MAX + 1];
+    QDict *snapshot_options = qdict_new();
+
+    /* Prepare options QDict for the overlay file */
+    qdict_put(snapshot_options, "file.driver", qstring_from_str("file"));
+    qdict_put(snapshot_options, "driver", qstring_from_str("qcow2"));
+
+    /* Create temporary file */
+    ret = get_tmp_filename(tmp_filename, PATH_MAX + 1);
+    if (ret < 0) {
+        error_setg_errno(errp, -ret, "Could not get temporary filename");
+        goto out;
+    }
+    qdict_put(snapshot_options, "file.filename",
+              qstring_from_str(tmp_filename));
+
+    /* Get the required size from the image */
+    total_size = bdrv_getlength(bs);
+    if (total_size < 0) {
+        error_setg_errno(errp, -total_size, "Could not get image size");
+        goto out;
+    }
+
+    opts = qemu_opts_create(bdrv_qcow2.create_opts, NULL, 0, &error_abort);
+    qemu_opt_set_number(opts, BLOCK_OPT_SIZE, total_size, &error_abort);
+    ret = bdrv_create(&bdrv_qcow2, tmp_filename, opts, errp);
+    qemu_opts_del(opts);
+    if (ret < 0) {
+        error_prepend(errp, "Could not create temporary overlay '%s': ",
+                      tmp_filename);
+        goto out;
+    }
+
+    bs_snapshot = bdrv_open(NULL, NULL, snapshot_options,
+                            BDRV_O_RDWR | BDRV_O_TEMPORARY, errp);
+    snapshot_options = NULL;
+    if (!bs_snapshot) {
+        goto out;
+    }
+
+    bdrv_append(bs_snapshot, bs, errp);
+
+    return bs_snapshot;
+
+out:
+    QDECREF(snapshot_options);
+    return NULL;
+}
+
 static int blkreplay_open(BlockDriverState *bs, QDict *options, int flags,
                           Error **errp)
 {
@@ -35,6 +92,14 @@ static int blkreplay_open(BlockDriverState *bs, QDict *options, int flags,
         goto fail;
     }
 
+    /* Add temporary snapshot to preserve the image */
+    if (!replay_snapshot
+        && !blkreplay_append_snapshot(bs->file->bs, &local_err)) {
+        ret = -EINVAL;
+        error_propagate(errp, local_err);
+        goto fail;
+    }
+
     ret = 0;
 fail:
     return ret;
diff --git a/stubs/replay.c b/stubs/replay.c
index 9c8aa48..9991ee5 100644
--- a/stubs/replay.c
+++ b/stubs/replay.c
@@ -3,6 +3,7 @@
 #include "sysemu/sysemu.h"
 
 ReplayMode replay_mode;
+char *replay_snapshot;
 
 int64_t replay_save_clock(unsigned int kind, int64_t clock)
 {
diff --git a/vl.c b/vl.c
index ec29909..7aa59f0 100644
--- a/vl.c
+++ b/vl.c
@@ -4661,7 +4661,7 @@ int main(int argc, char **argv, char **envp)
         qapi_free_BlockdevOptions(bdo->bdo);
         g_free(bdo);
     }
-    if (snapshot || replay_mode != REPLAY_MODE_NONE) {
+    if (snapshot) {
         qemu_opts_foreach(qemu_find_opts("drive"), drive_enable_snapshot,
                           NULL, NULL);
     }

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 03/26] replay: disable default snapshot for record/replay
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 01/26] block: implement bdrv_snapshot_goto for blkreplay Pavel Dovgalyuk
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 02/26] blkreplay: create temporary overlay for underlying devices Pavel Dovgalyuk
@ 2017-10-31 11:06 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 04/26] replay: fix processing async events Pavel Dovgalyuk
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:06 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

This patch stops the '-snapshot' option from being enabled by default
in record/replay mode. This is needed for creating vmstates in record
and replay modes.
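
With this change, combining '-snapshot' with record/replay is refused
up front through the replay blocker; roughly (the exact wording comes
from QERR_REPLAY_NOT_SUPPORTED):

  $ qemu-system-i386 -snapshot -icount shift=7,rr=record,rrfile=replay.bin ...
  qemu-system-i386: Record/replay feature is not supported for '-snapshot'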

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 vl.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/vl.c b/vl.c
index 7aa59f0..a8e0d03 100644
--- a/vl.c
+++ b/vl.c
@@ -3314,7 +3314,13 @@ int main(int argc, char **argv, char **envp)
                 drive_add(IF_PFLASH, -1, optarg, PFLASH_OPTS);
                 break;
             case QEMU_OPTION_snapshot:
-                snapshot = 1;
+                {
+                    Error *blocker = NULL;
+                    snapshot = 1;
+                    error_setg(&blocker, QERR_REPLAY_NOT_SUPPORTED,
+                               "-snapshot");
+                    replay_add_blocker(blocker);
+                }
                 break;
             case QEMU_OPTION_hdachs:
                 {

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 04/26] replay: fix processing async events
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (2 preceding siblings ...)
  2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 03/26] replay: disable default snapshot for record/replay Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 05/26] replay: fixed replay_enable_events Pavel Dovgalyuk
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

Asynchronous events saved at checkpoints may invoke
callbacks when processed. These callbacks may also generate/read
new events (e.g. clock reads). Therefore the event processing flag
must be reset before the callback is invoked.
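
A sketch of the re-entrancy this guards against (hypothetical
callback, not from the patch):

  /* replay_run_event() may invoke a callback that itself touches
   * the replay log, e.g. through a clock read: */
  static void some_bh_cb(void *opaque)
  {
      /* in replay mode this reads a new clock event from the log */
      int64_t t = qemu_clock_get_ns(QEMU_CLOCK_HOST);
      (void)t;
  }

If read_event_kind were still set when the callback runs, the nested
log access would see stale event state, so the flag is cleared before
replay_run_event() is called.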

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 replay/replay-events.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/replay/replay-events.c b/replay/replay-events.c
index 94a6dcc..768b505 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -295,13 +295,13 @@ void replay_read_events(int checkpoint)
         if (!event) {
             break;
         }
+        replay_finish_event();
+        read_event_kind = -1;
         replay_mutex_unlock();
         replay_run_event(event);
         replay_mutex_lock();
 
         g_free(event);
-        replay_finish_event();
-        read_event_kind = -1;
     }
 }
 

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 05/26] replay: fixed replay_enable_events
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (3 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 04/26] replay: fix processing async events Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 06/26] replay: fix save/load vm for non-empty queue Pavel Dovgalyuk
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch fixes the assignment to the internal events_enabled
variable. Now it is set only in record/replay mode. This affects the
behavior of the external functions that check this flag.
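
For reference, the external check used elsewhere is simply:

  bool replay_events_enabled(void)
  {
      return events_enabled;
  }

(as defined in replay/replay-events.c).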

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 replay/replay-events.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/replay/replay-events.c b/replay/replay-events.c
index 768b505..e858254 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -67,7 +67,9 @@ static void replay_run_event(Event *event)
 
 void replay_enable_events(void)
 {
-    events_enabled = true;
+    if (replay_mode != REPLAY_MODE_NONE) {
+        events_enabled = true;
+    }
 }
 
 bool replay_has_events(void)
@@ -141,7 +143,7 @@ void replay_add_event(ReplayAsyncEventKind event_kind,
 
 void replay_bh_schedule_event(QEMUBH *bh)
 {
-    if (replay_mode != REPLAY_MODE_NONE && events_enabled) {
+    if (events_enabled) {
         uint64_t id = replay_get_current_step();
         replay_add_event(REPLAY_ASYNC_EVENT_BH, bh, NULL, id);
     } else {
@@ -161,7 +163,7 @@ void replay_add_input_sync_event(void)
 
 void replay_block_event(QEMUBH *bh, uint64_t id)
 {
-    if (replay_mode != REPLAY_MODE_NONE && events_enabled) {
+    if (events_enabled) {
         replay_add_event(REPLAY_ASYNC_EVENT_BLOCK, bh, NULL, id);
     } else {
         qemu_bh_schedule(bh);

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 06/26] replay: fix save/load vm for non-empty queue
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (4 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 05/26] replay: fixed replay_enable_events Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 07/26] replay: added replay log format description Pavel Dovgalyuk
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch disallows saving/loading the vmstate when the
replay events queue is not empty. There is no reliable
way to save the events queue, because it describes internal
coroutine state. Therefore saving and loading operations
should be deferred to another record/replay step.
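
In practice this shows up in the monitor: a savevm/loadvm issued while
async events are still queued is refused and can simply be retried.
An illustrative HMP exchange:

  (qemu) savevm checkpoint1
  Record/replay does not allow making snapshot right now. Try once more later.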

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 include/sysemu/replay.h  |    3 +++
 migration/savevm.c       |   13 +++++++++++++
 replay/replay-snapshot.c |    6 ++++++
 3 files changed, 22 insertions(+)

diff --git a/include/sysemu/replay.h b/include/sysemu/replay.h
index fa14d0e..b86d6bb 100644
--- a/include/sysemu/replay.h
+++ b/include/sysemu/replay.h
@@ -165,5 +165,8 @@ void replay_audio_in(int *recorded, void *samples, int *wpos, int size);
 /*! Called at the start of execution.
     Loads or saves initial vmstate depending on execution mode. */
 void replay_vmstate_init(void);
+/*! Called to ensure that replay state is consistent and VM snapshot
+    can be created */
+bool replay_can_snapshot(void);
 
 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 4a88228..20cebe1 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -52,6 +52,7 @@
 #include "qemu/cutils.h"
 #include "io/channel-buffer.h"
 #include "io/channel-file.h"
+#include "sysemu/replay.h"
 
 #ifndef ETH_P_RARP
 #define ETH_P_RARP 0x8035
@@ -2141,6 +2142,12 @@ int save_snapshot(const char *name, Error **errp)
     struct tm tm;
     AioContext *aio_context;
 
+    if (!replay_can_snapshot()) {
+        monitor_printf(mon, "Record/replay does not allow making snapshot "
+                        "right now. Try once more later.\n");
+        return ret;
+    }
+
     if (!bdrv_all_can_snapshot(&bs)) {
         error_setg(errp, "Device '%s' is writable but does not support "
                    "snapshots", bdrv_get_device_name(bs));
@@ -2310,6 +2317,12 @@ int load_snapshot(const char *name, Error **errp)
     AioContext *aio_context;
     MigrationIncomingState *mis = migration_incoming_get_current();
 
+    if (!replay_can_snapshot()) {
+        error_report("Record/replay does not allow loading snapshot "
+                     "right now. Try once more later.\n");
+        return -EINVAL;
+    }
+
     if (!bdrv_all_can_snapshot(&bs)) {
         error_setg(errp,
                    "Device '%s' is writable but does not support snapshots",
diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index b2e1076..7075986 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -83,3 +83,9 @@ void replay_vmstate_init(void)
         }
     }
 }
+
+bool replay_can_snapshot(void)
+{
+    return replay_mode == REPLAY_MODE_NONE
+        || !replay_has_events();
+}

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 07/26] replay: added replay log format description
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (5 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 06/26] replay: fix save/load vm for non-empty queue Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 08/26] replay: make safe vmstop at record/replay Pavel Dovgalyuk
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

This patch adds a description of the replay log file format
to docs/replay.txt.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 docs/replay.txt |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/docs/replay.txt b/docs/replay.txt
index 486c1e0..c52407f 100644
--- a/docs/replay.txt
+++ b/docs/replay.txt
@@ -232,3 +232,72 @@ Audio devices
 Audio data is recorded and replay automatically. The command line for recording
 and replaying must contain identical specifications of audio hardware, e.g.:
  -soundhw ac97
+
+Replay log format
+-----------------
+
+Record/replay log consists of the header and the sequence of execution
+events. The header includes 4-byte replay version id and 8-byte reserved
+field. Version is updated every time replay log format changes to prevent
+using replay log created by another build of qemu.
+
+The sequence of the events describes virtual machine state changes.
+It includes all non-deterministic inputs of VM, synchronization marks and
+instruction counts used to correctly inject inputs at replay.
+
+Synchronization marks (checkpoints) are used for synchronizing qemu threads
+that perform operations with virtual hardware. These operations may change
+system's state (e.g., change some register or generate interrupt) and
+therefore should execute synchronously with CPU thread.
+
+Every event in the log includes 1-byte event id and optional arguments.
+When argument is an array, it is stored as 4-byte array length
+and corresponding number of bytes with data.
+Here is the list of events that are written into the log:
+
+ - EVENT_INSTRUCTION. Instructions executed since last event.
+   Argument: 4-byte number of executed instructions.
+ - EVENT_INTERRUPT. Used to synchronize interrupt processing.
+ - EVENT_EXCEPTION. Used to synchronize exception handling.
+ - EVENT_ASYNC. This is a group of events. They are always processed
+   together with checkpoints. When such an event is generated, it is
+   stored in the queue and processed only when checkpoint occurs.
+   Every such event is followed by 1-byte checkpoint id and 1-byte
+   async event id from the following list:
+     - REPLAY_ASYNC_EVENT_BH. Bottom-half callback. This event synchronizes
+       callbacks that affect virtual machine state, but normally called
+       asynchronously.
+       Argument: 8-byte operation id.
+     - REPLAY_ASYNC_EVENT_INPUT. Input device event. Contains
+       parameters of keyboard and mouse input operations
+       (key press/release, mouse pointer movement).
+       Arguments: 9-16 bytes depending on the input event.
+     - REPLAY_ASYNC_EVENT_INPUT_SYNC. Internal input synchronization event.
+     - REPLAY_ASYNC_EVENT_CHAR_READ. Character (e.g., serial port) device input
+       initiated by the sender.
+       Arguments: 1-byte character device id.
+                  Array of bytes that were read.
+     - REPLAY_ASYNC_EVENT_BLOCK. Block device operation. Used to synchronize
+       operations with disk and flash drives with CPU.
+       Argument: 8-byte operation id.
+     - REPLAY_ASYNC_EVENT_NET. Incoming network packet.
+       Arguments: 1-byte network adapter id.
+                  4-byte packet flags.
+                  Array with packet bytes.
+ - EVENT_SHUTDOWN. Occurs when user sends shutdown event to qemu,
+   e.g., by closing the window.
+ - EVENT_CHAR_WRITE. Used to synchronize character output operations.
+   Arguments: 4-byte output function return value.
+              4-byte offset in the output array.
+ - EVENT_CHAR_READ_ALL. Used to synchronize character input operations,
+   initiated by qemu.
+   Argument: Array with bytes that were read.
+ - EVENT_CHAR_READ_ALL_ERROR. Unsuccessful character input operation,
+   initiated by qemu.
+   Argument: 4-byte error code.
+ - EVENT_CLOCK + clock_id. Group of events for host clock read operations.
+   Argument: 8-byte clock value.
+ - EVENT_CHECKPOINT + checkpoint_id. Checkpoint for synchronization of
+   CPU, internal threads, and asynchronous input events. May be followed
+   by one or more EVENT_ASYNC events.
+ - EVENT_END. Last event in the log.
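
As a cross-check of the format described above, a minimal sketch that
reads just the header; it assumes the byte order used by
replay_put_dword, which emits the most significant byte first:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      uint8_t v[4];
      FILE *f = fopen("replay.bin", "rb");
      if (!f || fread(v, 1, sizeof(v), f) != sizeof(v)) {
          return 1;
      }
      /* 4-byte version id, most significant byte first */
      uint32_t version = ((uint32_t)v[0] << 24) | (v[1] << 16)
                         | (v[2] << 8) | v[3];
      printf("replay version: 0x%x\n", version);
      fclose(f);
      return 0;
  }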

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 08/26] replay: make safe vmstop at record/replay
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (6 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 07/26] replay: added replay log format description Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 09/26] replay: save prior value of the host clock Pavel Dovgalyuk
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

This patch disables bdrv flush/drain in record/replay mode.
When a block request is in the replay queue, it cannot be processed
by drain/flush until it is found in the log.
Therefore the VM should just stop, leaving the unfinished operations
in the queue.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 cpus.c             |    7 ++++---
 migration/savevm.c |    4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/cpus.c b/cpus.c
index 114c29b..c728f3a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -942,9 +942,10 @@ static int do_vm_stop(RunState state)
         qapi_event_send_stop(&error_abort);
     }
 
-    bdrv_drain_all();
-    replay_disable_events();
-    ret = bdrv_flush_all();
+    if (!replay_events_enabled()) {
+        bdrv_drain_all();
+        ret = bdrv_flush_all();
+    }
 
     return ret;
 }
diff --git a/migration/savevm.c b/migration/savevm.c
index 20cebe1..41a13c0 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2143,8 +2143,8 @@ int save_snapshot(const char *name, Error **errp)
     AioContext *aio_context;
 
     if (!replay_can_snapshot()) {
-        monitor_printf(mon, "Record/replay does not allow making snapshot "
-                        "right now. Try once more later.\n");
+        error_report("Record/replay does not allow making snapshot "
+                     "right now. Try once more later.\n");
         return ret;
     }
 

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 09/26] replay: save prior value of the host clock
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (7 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 08/26] replay: make safe vmstop at record/replay Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 10/26] icount: fixed saving/restoring of icount warp timers Pavel Dovgalyuk
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch adds saving/restoring of the host clock field 'last'.
It is used in the host clock calculation, so the clock may
become incorrect when a restored vmstate is used.
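
For context, the QEMU_CLOCK_HOST path in util/qemu-timer.c of this era
looks roughly like the following (paraphrased); with a stale 'last'
after loadvm, the clock-jump check could fire spuriously:

  case QEMU_CLOCK_HOST:
      now = REPLAY_CLOCK(REPLAY_CLOCK_HOST, get_clock_realtime());
      last = clock->last;
      clock->last = now;
      if (now < last || now > (last + get_max_clock_jump())) {
          notifier_list_notify(&clock->reset_notifiers, &now);
      }
      return now;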

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 include/qemu/timer.h     |   14 ++++++++++++++
 replay/replay-internal.h |    2 ++
 replay/replay-snapshot.c |    3 +++
 util/qemu-timer.c        |   12 ++++++++++++
 4 files changed, 31 insertions(+)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 1b518bc..a610a17 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -251,6 +251,20 @@ bool qemu_clock_run_timers(QEMUClockType type);
  */
 bool qemu_clock_run_all_timers(void);
 
+/**
+ * qemu_clock_get_last:
+ *
+ * Returns last clock query time.
+ */
+uint64_t qemu_clock_get_last(QEMUClockType type);
+/**
+ * qemu_clock_set_last:
+ *
+ * Sets last clock query time.
+ */
+void qemu_clock_set_last(QEMUClockType type, uint64_t last);
+
+
 /*
  * QEMUTimerList
  */
diff --git a/replay/replay-internal.h b/replay/replay-internal.h
index 3ebb199..be96d7e 100644
--- a/replay/replay-internal.h
+++ b/replay/replay-internal.h
@@ -78,6 +78,8 @@ typedef struct ReplayState {
         This counter is global, because requests from different
         block devices should not get overlapping ids. */
     uint64_t block_request_id;
+    /*! Prior value of the host clock */
+    uint64_t host_clock_last;
 } ReplayState;
 extern ReplayState replay_state;
 
diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index 7075986..e0b2204 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -25,6 +25,7 @@ static int replay_pre_save(void *opaque)
 {
     ReplayState *state = opaque;
     state->file_offset = ftell(replay_file);
+    state->host_clock_last = qemu_clock_get_last(QEMU_CLOCK_HOST);
 
     return 0;
 }
@@ -33,6 +34,7 @@ static int replay_post_load(void *opaque, int version_id)
 {
     ReplayState *state = opaque;
     fseek(replay_file, state->file_offset, SEEK_SET);
+    qemu_clock_set_last(QEMU_CLOCK_HOST, state->host_clock_last);
     /* If this was a vmstate, saved in recording mode,
        we need to initialize replay data fields. */
     replay_fetch_data_kind();
@@ -54,6 +56,7 @@ static const VMStateDescription vmstate_replay = {
         VMSTATE_UINT32(has_unread_data, ReplayState),
         VMSTATE_UINT64(file_offset, ReplayState),
         VMSTATE_UINT64(block_request_id, ReplayState),
+        VMSTATE_UINT64(host_clock_last, ReplayState),
         VMSTATE_END_OF_LIST()
     },
 };
diff --git a/util/qemu-timer.c b/util/qemu-timer.c
index 82d5650..2ed1bf2 100644
--- a/util/qemu-timer.c
+++ b/util/qemu-timer.c
@@ -622,6 +622,18 @@ int64_t qemu_clock_get_ns(QEMUClockType type)
     }
 }
 
+uint64_t qemu_clock_get_last(QEMUClockType type)
+{
+    QEMUClock *clock = qemu_clock_ptr(type);
+    return clock->last;
+}
+
+void qemu_clock_set_last(QEMUClockType type, uint64_t last)
+{
+    QEMUClock *clock = qemu_clock_ptr(type);
+    clock->last = last;
+}
+
 void qemu_clock_register_reset_notifier(QEMUClockType type,
                                         Notifier *notifier)
 {

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 10/26] icount: fixed saving/restoring of icount warp timers
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (8 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 09/26] replay: save prior value of the host clock Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 11/26] target/arm/arm-powerctl: drop BQL assertions Pavel Dovgalyuk
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch adds saving and restoring of the icount warp
timers in the vmstate.
It is needed because these timers affect the virtual clock value.
Therefore determinism of the execution in icount record/replay mode
depends on determinism of the timers.
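
Moving the formerly-static timers into TimersState is what allows the
vmstate subsections below to reference them as structure fields, e.g.:

  VMSTATE_TIMER_PTR(icount_warp_timer, TimersState)

VMSTATE_TIMER_PTR saves and restores the timer's expiry time, so a
restored run resumes with the same pending deadlines.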

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 cpus.c |   85 ++++++++++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 66 insertions(+), 19 deletions(-)

diff --git a/cpus.c b/cpus.c
index c728f3a..2eec54f 100644
--- a/cpus.c
+++ b/cpus.c
@@ -119,16 +119,11 @@ static bool all_cpu_threads_idle(void)
 /* Protected by TimersState seqlock */
 
 static bool icount_sleep = true;
-static int64_t vm_clock_warp_start = -1;
 /* Conversion factor from emulated instructions to virtual clock ticks.  */
 static int icount_time_shift;
 /* Arbitrarily pick 1MIPS as the minimum allowable speed.  */
 #define MAX_ICOUNT_SHIFT 10
 
-static QEMUTimer *icount_rt_timer;
-static QEMUTimer *icount_vm_timer;
-static QEMUTimer *icount_warp_timer;
-
 typedef struct TimersState {
     /* Protected by BQL.  */
     int64_t cpu_ticks_prev;
@@ -146,6 +141,11 @@ typedef struct TimersState {
     int64_t qemu_icount_bias;
     /* Only written by TCG thread */
     int64_t qemu_icount;
+    /* for adjusting icount */
+    int64_t vm_clock_warp_start;
+    QEMUTimer *icount_rt_timer;
+    QEMUTimer *icount_vm_timer;
+    QEMUTimer *icount_warp_timer;
 } TimersState;
 
 static TimersState timers_state;
@@ -431,14 +431,14 @@ static void icount_adjust(void)
 
 static void icount_adjust_rt(void *opaque)
 {
-    timer_mod(icount_rt_timer,
+    timer_mod(timers_state.icount_rt_timer,
               qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_RT) + 1000);
     icount_adjust();
 }
 
 static void icount_adjust_vm(void *opaque)
 {
-    timer_mod(icount_vm_timer,
+    timer_mod(timers_state.icount_vm_timer,
                    qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
                    NANOSECONDS_PER_SECOND / 10);
     icount_adjust();
@@ -459,7 +459,7 @@ static void icount_warp_rt(void)
      */
     do {
         seq = seqlock_read_begin(&timers_state.vm_clock_seqlock);
-        warp_start = vm_clock_warp_start;
+        warp_start = timers_state.vm_clock_warp_start;
     } while (seqlock_read_retry(&timers_state.vm_clock_seqlock, seq));
 
     if (warp_start == -1) {
@@ -472,7 +472,7 @@ static void icount_warp_rt(void)
                                      cpu_get_clock_locked());
         int64_t warp_delta;
 
-        warp_delta = clock - vm_clock_warp_start;
+        warp_delta = clock - timers_state.vm_clock_warp_start;
         if (use_icount == 2) {
             /*
              * In adaptive mode, do not let QEMU_CLOCK_VIRTUAL run too
@@ -484,7 +484,7 @@ static void icount_warp_rt(void)
         }
         timers_state.qemu_icount_bias += warp_delta;
     }
-    vm_clock_warp_start = -1;
+    timers_state.vm_clock_warp_start = -1;
     seqlock_write_end(&timers_state.vm_clock_seqlock);
 
     if (qemu_clock_expired(QEMU_CLOCK_VIRTUAL)) {
@@ -593,11 +593,13 @@ void qemu_start_warp_timer(void)
              * every 100ms.
              */
             seqlock_write_begin(&timers_state.vm_clock_seqlock);
-            if (vm_clock_warp_start == -1 || vm_clock_warp_start > clock) {
-                vm_clock_warp_start = clock;
+            if (timers_state.vm_clock_warp_start == -1
+                || timers_state.vm_clock_warp_start > clock) {
+                timers_state.vm_clock_warp_start = clock;
             }
             seqlock_write_end(&timers_state.vm_clock_seqlock);
-            timer_mod_anticipate(icount_warp_timer, clock + deadline);
+            timer_mod_anticipate(timers_state.icount_warp_timer,
+                                 clock + deadline);
         }
     } else if (deadline == 0) {
         qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
@@ -622,7 +624,7 @@ static void qemu_account_warp_timer(void)
         return;
     }
 
-    timer_del(icount_warp_timer);
+    timer_del(timers_state.icount_warp_timer);
     icount_warp_rt();
 }
 
@@ -631,6 +633,44 @@ static bool icount_state_needed(void *opaque)
     return use_icount;
 }
 
+static bool warp_timer_state_needed(void *opaque)
+{
+    TimersState *s = opaque;
+    return s->icount_warp_timer != NULL;
+}
+
+static bool adjust_timers_state_needed(void *opaque)
+{
+    TimersState *s = opaque;
+    return s->icount_rt_timer != NULL;
+}
+
+/*
+ * Subsection for warp timer migration is optional, because the timer may not be created
+ */
+static const VMStateDescription icount_vmstate_warp_timer = {
+    .name = "timer/icount/warp_timer",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = warp_timer_state_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_TIMER_PTR(icount_warp_timer, TimersState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription icount_vmstate_adjust_timers = {
+    .name = "timer/icount/timers",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = adjust_timers_state_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_TIMER_PTR(icount_rt_timer, TimersState),
+        VMSTATE_TIMER_PTR(icount_vm_timer, TimersState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 /*
  * This is a subsection for icount migration.
  */
@@ -642,7 +682,13 @@ static const VMStateDescription icount_vmstate_timers = {
     .fields = (VMStateField[]) {
         VMSTATE_INT64(qemu_icount_bias, TimersState),
         VMSTATE_INT64(qemu_icount, TimersState),
+        VMSTATE_INT64(vm_clock_warp_start, TimersState),
         VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription*[]) {
+        &icount_vmstate_warp_timer,
+        &icount_vmstate_adjust_timers,
+        NULL
     }
 };
 
@@ -753,7 +799,7 @@ void configure_icount(QemuOpts *opts, Error **errp)
 
     icount_sleep = qemu_opt_get_bool(opts, "sleep", true);
     if (icount_sleep) {
-        icount_warp_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
+        timers_state.icount_warp_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL_RT,
                                          icount_timer_cb, NULL);
     }
 
@@ -787,13 +833,14 @@ void configure_icount(QemuOpts *opts, Error **errp)
        the virtual time trigger catches emulated time passing too fast.
        Realtime triggers occur even when idle, so use them less frequently
        than VM triggers.  */
-    icount_rt_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL_RT,
+    timers_state.vm_clock_warp_start = -1;
+    timers_state.icount_rt_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL_RT,
                                    icount_adjust_rt, NULL);
-    timer_mod(icount_rt_timer,
+    timer_mod(timers_state.icount_rt_timer,
                    qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL_RT) + 1000);
-    icount_vm_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+    timers_state.icount_vm_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
                                         icount_adjust_vm, NULL);
-    timer_mod(icount_vm_timer,
+    timer_mod(timers_state.icount_vm_timer,
                    qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
                    NANOSECONDS_PER_SECOND / 10);
 }

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 11/26] target/arm/arm-powerctl: drop BQL assertions
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (9 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 10/26] icount: fixed saving/restoring of icount warp timers Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 12/26] cpus: push BQL lock to qemu_*_wait_io_event Pavel Dovgalyuk
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

The powerctl code is run in the context of the vCPU changing power
state. It does not need the BQL to protect its changes.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 target/arm/arm-powerctl.c |    8 --------
 1 file changed, 8 deletions(-)

diff --git a/target/arm/arm-powerctl.c b/target/arm/arm-powerctl.c
index 25207cb..9661a59 100644
--- a/target/arm/arm-powerctl.c
+++ b/target/arm/arm-powerctl.c
@@ -124,7 +124,6 @@ static void arm_set_cpu_on_async_work(CPUState *target_cpu_state,
     g_free(info);
 
     /* Finally set the power status */
-    assert(qemu_mutex_iothread_locked());
     target_cpu->power_state = PSCI_ON;
 }
 
@@ -135,8 +134,6 @@ int arm_set_cpu_on(uint64_t cpuid, uint64_t entry, uint64_t context_id,
     ARMCPU *target_cpu;
     struct CpuOnInfo *info;
 
-    assert(qemu_mutex_iothread_locked());
-
     DPRINTF("cpu %" PRId64 " (EL %d, %s) @ 0x%" PRIx64 " with R0 = 0x%" PRIx64
             "\n", cpuid, target_el, target_aa64 ? "aarch64" : "aarch32", entry,
             context_id);
@@ -227,7 +224,6 @@ static void arm_set_cpu_off_async_work(CPUState *target_cpu_state,
 {
     ARMCPU *target_cpu = ARM_CPU(target_cpu_state);
 
-    assert(qemu_mutex_iothread_locked());
     target_cpu->power_state = PSCI_OFF;
     target_cpu_state->halted = 1;
     target_cpu_state->exception_index = EXCP_HLT;
@@ -238,8 +234,6 @@ int arm_set_cpu_off(uint64_t cpuid)
     CPUState *target_cpu_state;
     ARMCPU *target_cpu;
 
-    assert(qemu_mutex_iothread_locked());
-
     DPRINTF("cpu %" PRId64 "\n", cpuid);
 
     /* change to the cpu we are powering up */
@@ -274,8 +268,6 @@ int arm_reset_cpu(uint64_t cpuid)
     CPUState *target_cpu_state;
     ARMCPU *target_cpu;
 
-    assert(qemu_mutex_iothread_locked());
-
     DPRINTF("cpu %" PRId64 "\n", cpuid);
 
     /* change to the cpu we are resetting */

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 12/26] cpus: push BQL lock to qemu_*_wait_io_event
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (10 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 11/26] target/arm/arm-powerctl: drop BQL assertions Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 13/26] cpus: only take BQL for sleeping threads Pavel Dovgalyuk
                   ` (14 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

We only really need to grab the lock for initial setup (so we don't
race with the thread-spawning thread). After that we can drop the lock
for the whole main loop and only grab it for waiting for IO events.

There is a slight wrinkle for the round-robin TCG thread: we also
expire timers, which needs to be done under the BQL as they are in
the main-loop.

This is stage one of reducing the lock impact as we can drop the
requirement of implicit BQL for async work and only grab the lock when
we need to sleep on the cpu->halt_cond.
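
The resulting vCPU thread pattern is sketched below (illustrative
pseudo-structure, not verbatim from the patch; the real functions are
qemu_kvm_cpu_thread_fn and friends):

  /* Sketch of a vCPU thread after this change:
   *
   *     qemu_mutex_lock_iothread();       // initial setup only
   *     ... per-thread init, cpu->created = true ...
   *     qemu_mutex_unlock_iothread();
   *
   *     do {
   *         r = cpu_exec(cpu);            // no BQL held while executing
   *         qemu_*_wait_io_event(cpu);    // BQL taken only while sleeping
   *                                       // on cpu->halt_cond
   *     } while (!cpu->unplug);
   */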

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 accel/kvm/kvm-all.c   |    4 ----
 cpus.c                |   27 ++++++++++++++++++++-------
 target/i386/hax-all.c |    3 +--
 3 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index f290f48..8d1d2c4 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1857,9 +1857,7 @@ int kvm_cpu_exec(CPUState *cpu)
         return EXCP_HLT;
     }
 
-    qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
-
     do {
         MemTxAttrs attrs;
 
@@ -1989,8 +1987,6 @@ int kvm_cpu_exec(CPUState *cpu)
     } while (ret == 0);
 
     cpu_exec_end(cpu);
-    qemu_mutex_lock_iothread();
-
     if (ret < 0) {
         cpu_dump_state(cpu, stderr, fprintf, CPU_DUMP_CODE);
         vm_stop(RUN_STATE_INTERNAL_ERROR);
diff --git a/cpus.c b/cpus.c
index 2eec54f..efde5c1 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1127,6 +1127,8 @@ static bool qemu_tcg_should_sleep(CPUState *cpu)
 
 static void qemu_tcg_wait_io_event(CPUState *cpu)
 {
+    qemu_mutex_lock_iothread();
+
     while (qemu_tcg_should_sleep(cpu)) {
         stop_tcg_kick_timer();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
@@ -1135,15 +1137,21 @@ static void qemu_tcg_wait_io_event(CPUState *cpu)
     start_tcg_kick_timer();
 
     qemu_wait_io_event_common(cpu);
+
+    qemu_mutex_unlock_iothread();
 }
 
 static void qemu_kvm_wait_io_event(CPUState *cpu)
 {
+    qemu_mutex_lock_iothread();
+
     while (cpu_thread_is_idle(cpu)) {
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
     }
 
     qemu_wait_io_event_common(cpu);
+
+    qemu_mutex_unlock_iothread();
 }
 
 static void *qemu_kvm_cpu_thread_fn(void *arg)
@@ -1169,6 +1177,8 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
+
     qemu_cond_signal(&qemu_cpu_cond);
 
     do {
@@ -1211,10 +1221,10 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
 
     /* signal CPU creation */
     cpu->created = true;
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     while (1) {
-        qemu_mutex_unlock_iothread();
         do {
             int sig;
             r = sigwait(&waitset, &sig);
@@ -1225,6 +1235,7 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
         }
         qemu_mutex_lock_iothread();
         qemu_wait_io_event_common(cpu);
+        qemu_mutex_unlock_iothread();
     }
 
     return NULL;
@@ -1313,11 +1324,9 @@ static int tcg_cpu_exec(CPUState *cpu)
 #ifdef CONFIG_PROFILER
     ti = profile_getclock();
 #endif
-    qemu_mutex_unlock_iothread();
     cpu_exec_start(cpu);
     ret = cpu_exec(cpu);
     cpu_exec_end(cpu);
-    qemu_mutex_lock_iothread();
 #ifdef CONFIG_PROFILER
     tcg_time += profile_getclock() - ti;
 #endif
@@ -1377,6 +1386,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
             qemu_wait_io_event_common(cpu);
         }
     }
+    qemu_mutex_unlock_iothread();
 
     start_tcg_kick_timer();
 
@@ -1386,6 +1396,9 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
     cpu->exit_request = 1;
 
     while (1) {
+
+        qemu_mutex_lock_iothread();
+
         /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
         qemu_account_warp_timer();
 
@@ -1394,6 +1407,8 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
          */
         handle_icount_deadline();
 
+        qemu_mutex_unlock_iothread();
+
         if (!cpu) {
             cpu = first_cpu;
         }
@@ -1419,9 +1434,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
                     cpu_handle_guest_debug(cpu);
                     break;
                 } else if (r == EXCP_ATOMIC) {
-                    qemu_mutex_unlock_iothread();
                     cpu_exec_step_atomic(cpu);
-                    qemu_mutex_lock_iothread();
                     break;
                 }
             } else if (cpu->stop) {
@@ -1462,6 +1475,7 @@ static void *qemu_hax_cpu_thread_fn(void *arg)
     current_cpu = cpu;
 
     hax_init_vcpu(cpu);
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     while (1) {
@@ -1512,6 +1526,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     cpu->created = true;
     cpu->can_do_io = 1;
     current_cpu = cpu;
+    qemu_mutex_unlock_iothread();
     qemu_cond_signal(&qemu_cpu_cond);
 
     /* process any pending work */
@@ -1536,9 +1551,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 g_assert(cpu->halted);
                 break;
             case EXCP_ATOMIC:
-                qemu_mutex_unlock_iothread();
                 cpu_exec_step_atomic(cpu);
-                qemu_mutex_lock_iothread();
             default:
                 /* Ignore everything else? */
                 break;
diff --git a/target/i386/hax-all.c b/target/i386/hax-all.c
index 3ce6950..99af6bb 100644
--- a/target/i386/hax-all.c
+++ b/target/i386/hax-all.c
@@ -513,11 +513,10 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
 
         hax_vcpu_interrupt(env);
 
-        qemu_mutex_unlock_iothread();
         cpu_exec_start(cpu);
         hax_ret = hax_vcpu_run(vcpu);
+        current_cpu = cpu;
         cpu_exec_end(cpu);
-        qemu_mutex_lock_iothread();
 
         /* Simply continue the vcpu_run if system call interrupted */
         if (hax_ret == -EINTR || hax_ret == -EAGAIN) {

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 13/26] cpus: only take BQL for sleeping threads
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (11 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 12/26] cpus: push BQL lock to qemu_*_wait_io_event Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 14/26] replay/replay.c: bump REPLAY_VERSION again Pavel Dovgalyuk
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

Now the only real need to hold the BQL is when we sleep on the
cpu->halt conditional. The lock is actually dropped while the thread
sleeps, so the actual window for contention is pretty small. This also
means we can remove the special-case hack for exclusive work and
simply declare that work no longer has an implicit BQL held. This
isn't a major problem: async work is generally only changing things in
the context of its own vCPU. If it needs to work across vCPUs it
should be using the exclusive mechanism or possibly taking the lock
itself.
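
A minimal illustration of the contract (hypothetical work item;
async_safe_run_on_cpu and RUN_ON_CPU_NULL are the existing helpers):

  /* Safe work runs inside start_exclusive()/end_exclusive() and,
   * after this patch, with no implicit BQL: cross-vCPU changes are
   * serialised by the exclusive mechanism itself. */
  static void do_cross_vcpu_work(CPUState *cpu, run_on_cpu_data data)
  {
      /* hypothetical cross-vCPU mutation goes here */
  }

  /* queueing it from anywhere: */
  async_safe_run_on_cpu(first_cpu, do_cross_vcpu_work, RUN_ON_CPU_NULL);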

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 cpus-common.c |   13 +++++--------
 cpus.c        |   10 ++++------
 2 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/cpus-common.c b/cpus-common.c
index 59f751e..64661c3 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -310,6 +310,11 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     queue_work_on_cpu(cpu, wi);
 }
 
+/* Work items run outside of the BQL. This is essential for avoiding a
+ * deadlock for exclusive work but also applies to non-exclusive work.
+ * If the work requires cross-vCPU changes then it should use the
+ * exclusive mechanism.
+ */
 void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
@@ -327,17 +332,9 @@ void process_queued_cpu_work(CPUState *cpu)
         }
         qemu_mutex_unlock(&cpu->work_mutex);
         if (wi->exclusive) {
-            /* Running work items outside the BQL avoids the following deadlock:
-             * 1) start_exclusive() is called with the BQL taken while another
-             * CPU is running; 2) cpu_exec in the other CPU tries to takes the
-             * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
-             * neither CPU can proceed.
-             */
-            qemu_mutex_unlock_iothread();
             start_exclusive();
             wi->func(cpu, wi->data);
             end_exclusive();
-            qemu_mutex_lock_iothread();
         } else {
             wi->func(cpu, wi->data);
         }
diff --git a/cpus.c b/cpus.c
index efde5c1..de6dfce 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1127,31 +1127,29 @@ static bool qemu_tcg_should_sleep(CPUState *cpu)
 
 static void qemu_tcg_wait_io_event(CPUState *cpu)
 {
-    qemu_mutex_lock_iothread();
 
     while (qemu_tcg_should_sleep(cpu)) {
+        qemu_mutex_lock_iothread();
         stop_tcg_kick_timer();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_mutex_unlock_iothread();
     }
 
     start_tcg_kick_timer();
 
     qemu_wait_io_event_common(cpu);
-
-    qemu_mutex_unlock_iothread();
 }
 
 static void qemu_kvm_wait_io_event(CPUState *cpu)
 {
-    qemu_mutex_lock_iothread();
 
     while (cpu_thread_is_idle(cpu)) {
+        qemu_mutex_lock_iothread();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_mutex_unlock_iothread();
     }
 
     qemu_wait_io_event_common(cpu);
-
-    qemu_mutex_unlock_iothread();
 }
 
 static void *qemu_kvm_cpu_thread_fn(void *arg)

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 14/26] replay/replay.c: bump REPLAY_VERSION again
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (12 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 13/26] cpus: only take BQL for sleeping threads Pavel Dovgalyuk
@ 2017-10-31 11:07 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 15/26] replay/replay-internal.c: track holding of replay_lock Pavel Dovgalyuk
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:07 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

This time commit 802f045a5f61b781df55e4492d896b4d20503ba7 broke the
replay file format. Also add a comment about this to
replay-internal.h.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 replay/replay-internal.h |    2 +-
 replay/replay.c          |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/replay/replay-internal.h b/replay/replay-internal.h
index be96d7e..8e4c701 100644
--- a/replay/replay-internal.h
+++ b/replay/replay-internal.h
@@ -12,7 +12,7 @@
  *
  */
 
-
+/* Any changes to order/number of events will need to bump REPLAY_VERSION */
 enum ReplayEvents {
     /* for instruction event */
     EVENT_INSTRUCTION,
diff --git a/replay/replay.c b/replay/replay.c
index ff58a5a..4f24498 100644
--- a/replay/replay.c
+++ b/replay/replay.c
@@ -22,7 +22,7 @@
 
 /* Current version of the replay mechanism.
    Increase it when file format changes. */
-#define REPLAY_VERSION              0xe02006
+#define REPLAY_VERSION              0xe02007
 /* Size of replay log header */
 #define HEADER_SIZE                 (sizeof(uint32_t) + sizeof(uint64_t))
 

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 15/26] replay/replay-internal.c: track holding of replay_lock
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (13 preceding siblings ...)
  2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 14/26] replay/replay.c: bump REPLAY_VERSION again Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 16/26] replay: make locking visible outside replay code Pavel Dovgalyuk
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

This is modelled after the iothread mutex lock. We keep a TLS flag to
indicate when the current thread has acquired the lock and assert that
we don't double-lock or release when we shouldn't have.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 replay/replay-internal.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/replay/replay-internal.c b/replay/replay-internal.c
index fca8514..157c863 100644
--- a/replay/replay-internal.c
+++ b/replay/replay-internal.c
@@ -179,13 +179,24 @@ void replay_mutex_destroy(void)
     qemu_mutex_destroy(&lock);
 }
 
+static __thread bool replay_locked;
+
+static bool replay_mutex_locked(void)
+{
+    return replay_locked;
+}
+
 void replay_mutex_lock(void)
 {
+    g_assert(!replay_mutex_locked());
     qemu_mutex_lock(&lock);
+    replay_locked = true;
 }
 
 void replay_mutex_unlock(void)
 {
+    g_assert(replay_mutex_locked());
+    replay_locked = false;
     qemu_mutex_unlock(&lock);
 }
 

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 16/26] replay: make locking visible outside replay code
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (14 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 15/26] replay/replay-internal.c: track holding of replay_lock Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 17/26] replay: push replay_mutex_lock up the call tree Pavel Dovgalyuk
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

The replay_mutex_lock/unlock/locked functions are now going to be used
for ensuring lock-step behaviour between the two threads. Make them
public API functions and also provide stubs so that common paths still
link in builds without the replay code.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 include/sysemu/replay.h  |   14 ++++++++++++++
 replay/replay-internal.c |    5 ++---
 replay/replay-internal.h |    5 ++---
 stubs/replay.c           |   15 +++++++++++++++
 4 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/include/sysemu/replay.h b/include/sysemu/replay.h
index b86d6bb..9973849 100644
--- a/include/sysemu/replay.h
+++ b/include/sysemu/replay.h
@@ -47,6 +47,20 @@ extern ReplayMode replay_mode;
 /* Name of the initial VM snapshot */
 extern char *replay_snapshot;
 
+/* Replay locking
+ *
+ * The locks are needed to protect the shared structures and log file
+ * when doing record/replay. They also are the main sync-point between
+ * the main-loop thread and the vCPU thread. This was a role
+ * previously filled by the BQL, whose scope across the code has been
+ * steadily reduced. This ensures blocks of events stay
+ * sequential and reproducible.
+ */
+
+void replay_mutex_lock(void);
+void replay_mutex_unlock(void);
+bool replay_mutex_locked(void);
+
 /* Replay process control functions */
 
 /*! Enables recording or saving event log with specified parameters */
diff --git a/replay/replay-internal.c b/replay/replay-internal.c
index 157c863..e6b2fdb 100644
--- a/replay/replay-internal.c
+++ b/replay/replay-internal.c
@@ -181,7 +181,7 @@ void replay_mutex_destroy(void)
 
 static __thread bool replay_locked;
 
-static bool replay_mutex_locked(void)
+bool replay_mutex_locked(void)
 {
     return replay_locked;
 }
@@ -204,7 +204,7 @@ void replay_mutex_unlock(void)
 void replay_save_instructions(void)
 {
     if (replay_file && replay_mode == REPLAY_MODE_RECORD) {
-        replay_mutex_lock();
+        g_assert(replay_mutex_locked());
         int diff = (int)(replay_get_current_step() - replay_state.current_step);
 
         /* Time can only go forward */
@@ -215,6 +215,5 @@ void replay_save_instructions(void)
             replay_put_dword(diff);
             replay_state.current_step += diff;
         }
-        replay_mutex_unlock();
     }
 }
diff --git a/replay/replay-internal.h b/replay/replay-internal.h
index 8e4c701..f5f8e96 100644
--- a/replay/replay-internal.h
+++ b/replay/replay-internal.h
@@ -100,12 +100,11 @@ int64_t replay_get_qword(void);
 void replay_get_array(uint8_t *buf, size_t *size);
 void replay_get_array_alloc(uint8_t **buf, size_t *size);
 
-/* Mutex functions for protecting replay log file */
+/* Mutex functions for protecting replay log file and ensuring
+ * synchronisation between vCPU and main-loop threads. */
 
 void replay_mutex_init(void);
 void replay_mutex_destroy(void);
-void replay_mutex_lock(void);
-void replay_mutex_unlock(void);
 
 /*! Checks error status of the file. */
 void replay_check_error(void);
diff --git a/stubs/replay.c b/stubs/replay.c
index 9991ee5..cb050ef 100644
--- a/stubs/replay.c
+++ b/stubs/replay.c
@@ -73,3 +73,18 @@ uint64_t blkreplay_next_id(void)
 {
     return 0;
 }
+
+void replay_mutex_lock(void)
+{
+    abort();
+}
+
+void replay_mutex_unlock(void)
+{
+    abort();
+}
+
+bool replay_mutex_locked(void)
+{
+    return false;
+}

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 17/26] replay: push replay_mutex_lock up the call tree
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (15 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 16/26] replay: make locking visible outside replay code Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 18/26] cpu-exec: don't overwrite exception_index Pavel Dovgalyuk
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

Instead of using the replay_lock only to guard the output of the log,
we now use it to protect the whole execution section. This replaces
what the BQL used to do when it was held during TCG execution.

We also introduce some rules for locking order - mainly that you
cannot take the replay_mutex while holding the BQL. This leads to some
slight juggling during start-up and to extending the
replay_mutex_destroy function to unlock the mutex without checking
the BQL condition, so it can be cleanly dropped in the non-replay
case.
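
A sketch of the resulting ordering rule, using the function names from
this patch (illustrative only):

  #include "qemu/osdep.h"
  #include "qemu/main-loop.h"    /* qemu_mutex_lock_iothread() */
  #include "sysemu/replay.h"     /* replay_mutex_lock() */

  static void run_one_period(void)
  {
      replay_mutex_lock();           /* replay lock is taken first ... */
      qemu_mutex_lock_iothread();    /* ... and only then the BQL */

      /* ... execute and log events for this period ... */

      qemu_mutex_unlock_iothread();  /* drop the BQL first ... */
      replay_mutex_unlock();         /* ... then the replay lock */
  }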

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 cpus.c                   |   32 ++++++++++++++++++++++++++++++++
 docs/replay.txt          |   19 +++++++++++++++++++
 include/sysemu/replay.h  |    2 ++
 replay/replay-char.c     |   21 ++++++++-------------
 replay/replay-events.c   |   18 +++++-------------
 replay/replay-internal.c |   18 +++++++++++++-----
 replay/replay-time.c     |   10 +++++-----
 replay/replay.c          |   40 ++++++++++++++++++++--------------------
 util/main-loop.c         |   23 ++++++++++++++++++++---
 vl.c                     |    2 ++
 10 files changed, 126 insertions(+), 59 deletions(-)

diff --git a/cpus.c b/cpus.c
index de6dfce..110ce0a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1293,6 +1293,10 @@ static void prepare_icount_for_run(CPUState *cpu)
         insns_left = MIN(0xffff, cpu->icount_budget);
         cpu->icount_decr.u16.low = insns_left;
         cpu->icount_extra = cpu->icount_budget - insns_left;
+
+        if (replay_mode != REPLAY_MODE_NONE) {
+            replay_mutex_lock();
+        }
     }
 }
 
@@ -1308,6 +1312,10 @@ static void process_icount_data(CPUState *cpu)
         cpu->icount_budget = 0;
 
         replay_account_executed_instructions();
+
+        if (replay_mode != REPLAY_MODE_NONE) {
+            replay_mutex_unlock();
+        }
     }
 }
 
@@ -1395,6 +1403,10 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
 
     while (1) {
 
+        if (replay_mode != REPLAY_MODE_NONE) {
+            replay_mutex_lock();
+        }
+
         qemu_mutex_lock_iothread();
 
         /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
@@ -1407,6 +1419,10 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
 
         qemu_mutex_unlock_iothread();
 
+        if (replay_mode != REPLAY_MODE_NONE) {
+            replay_mutex_unlock();
+        }
+
         if (!cpu) {
             cpu = first_cpu;
         }
@@ -1677,12 +1693,28 @@ void pause_all_vcpus(void)
         cpu_stop_current();
     }
 
+    /* We need to drop the replay_lock so any vCPU threads woken up
+     * can finish their replay tasks
+     */
+    if (replay_mode != REPLAY_MODE_NONE) {
+        g_assert(replay_mutex_locked());
+        qemu_mutex_unlock_iothread();
+        replay_mutex_unlock();
+        qemu_mutex_lock_iothread();
+    }
+
     while (!all_vcpus_paused()) {
         qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
         CPU_FOREACH(cpu) {
             qemu_cpu_kick(cpu);
         }
     }
+
+    if (replay_mode != REPLAY_MODE_NONE) {
+        qemu_mutex_unlock_iothread();
+        replay_mutex_lock();
+        qemu_mutex_lock_iothread();
+    }
 }
 
 void cpu_resume(CPUState *cpu)
diff --git a/docs/replay.txt b/docs/replay.txt
index c52407f..994153e 100644
--- a/docs/replay.txt
+++ b/docs/replay.txt
@@ -49,6 +49,25 @@ Modifications of qemu include:
  * recording/replaying user input (mouse and keyboard)
  * adding internal checkpoints for cpu and io synchronization
 
+Locking and thread synchronisation
+----------------------------------
+
+Previously the synchronisation of the main thread and the vCPU thread
+was ensured by the holding of the BQL. However the trend has been to
+reduce the time the BQL was held across the system including under TCG
+system emulation. As it is important that batches of events are kept
+in sequence (e.g. expiring timers and checkpoints in the main thread
+while instruction checkpoints are written by the vCPU thread) we need
+another lock to keep things in lock-step. This role is now handled by
+the replay_mutex_lock. It used to be held only for each event being
+written but now it is held for a whole execution period. This results
+in a deterministic ping-pong between the two main threads.
+
+As deadlocks are easy to introduce, a new rule is added: the
+replay_mutex_lock is taken before any BQL locks. Conversely you cannot
+release the replay_lock while the BQL is still held.
+
+
 Non-deterministic events
 ------------------------
 
diff --git a/include/sysemu/replay.h b/include/sysemu/replay.h
index 9973849..d026b28 100644
--- a/include/sysemu/replay.h
+++ b/include/sysemu/replay.h
@@ -63,6 +63,8 @@ bool replay_mutex_locked(void);
 
 /* Replay process control functions */
 
+/*! Initializes and takes the replay locks (even if we don't use them) */
+void replay_init_locks(void);
 /*! Enables recording or saving event log with specified parameters */
 void replay_configure(struct QemuOpts *opts);
 /*! Initializes timers used for snapshotting and enables events recording */
diff --git a/replay/replay-char.c b/replay/replay-char.c
index cbf7c04..736cc8c 100755
--- a/replay/replay-char.c
+++ b/replay/replay-char.c
@@ -96,25 +96,24 @@ void *replay_event_char_read_load(void)
 
 void replay_char_write_event_save(int res, int offset)
 {
+    g_assert(replay_mutex_locked());
+
     replay_save_instructions();
-    replay_mutex_lock();
     replay_put_event(EVENT_CHAR_WRITE);
     replay_put_dword(res);
     replay_put_dword(offset);
-    replay_mutex_unlock();
 }
 
 void replay_char_write_event_load(int *res, int *offset)
 {
+    g_assert(replay_mutex_locked());
+
     replay_account_executed_instructions();
-    replay_mutex_lock();
     if (replay_next_event_is(EVENT_CHAR_WRITE)) {
         *res = replay_get_dword();
         *offset = replay_get_dword();
         replay_finish_event();
-        replay_mutex_unlock();
     } else {
-        replay_mutex_unlock();
         error_report("Missing character write event in the replay log");
         exit(1);
     }
@@ -122,23 +121,21 @@ void replay_char_write_event_load(int *res, int *offset)
 
 int replay_char_read_all_load(uint8_t *buf)
 {
-    replay_mutex_lock();
+    g_assert(replay_mutex_locked());
+
     if (replay_next_event_is(EVENT_CHAR_READ_ALL)) {
         size_t size;
         int res;
         replay_get_array(buf, &size);
         replay_finish_event();
-        replay_mutex_unlock();
         res = (int)size;
         assert(res >= 0);
         return res;
     } else if (replay_next_event_is(EVENT_CHAR_READ_ALL_ERROR)) {
         int res = replay_get_dword();
         replay_finish_event();
-        replay_mutex_unlock();
         return res;
     } else {
-        replay_mutex_unlock();
         error_report("Missing character read all event in the replay log");
         exit(1);
     }
@@ -146,19 +143,17 @@ int replay_char_read_all_load(uint8_t *buf)
 
 void replay_char_read_all_save_error(int res)
 {
+    g_assert(replay_mutex_locked());
     assert(res < 0);
     replay_save_instructions();
-    replay_mutex_lock();
     replay_put_event(EVENT_CHAR_READ_ALL_ERROR);
     replay_put_dword(res);
-    replay_mutex_unlock();
 }
 
 void replay_char_read_all_save_buf(uint8_t *buf, int offset)
 {
+    g_assert(replay_mutex_locked());
     replay_save_instructions();
-    replay_mutex_lock();
     replay_put_event(EVENT_CHAR_READ_ALL);
     replay_put_array(buf, offset);
-    replay_mutex_unlock();
 }
diff --git a/replay/replay-events.c b/replay/replay-events.c
index e858254..a941efb 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -79,16 +79,14 @@ bool replay_has_events(void)
 
 void replay_flush_events(void)
 {
-    replay_mutex_lock();
+    g_assert(replay_mutex_locked());
+
     while (!QTAILQ_EMPTY(&events_list)) {
         Event *event = QTAILQ_FIRST(&events_list);
-        replay_mutex_unlock();
         replay_run_event(event);
-        replay_mutex_lock();
         QTAILQ_REMOVE(&events_list, event, events);
         g_free(event);
     }
-    replay_mutex_unlock();
 }
 
 void replay_disable_events(void)
@@ -102,14 +100,14 @@ void replay_disable_events(void)
 
 void replay_clear_events(void)
 {
-    replay_mutex_lock();
+    g_assert(replay_mutex_locked());
+
     while (!QTAILQ_EMPTY(&events_list)) {
         Event *event = QTAILQ_FIRST(&events_list);
         QTAILQ_REMOVE(&events_list, event, events);
 
         g_free(event);
     }
-    replay_mutex_unlock();
 }
 
 /*! Adds specified async event to the queue */
@@ -136,9 +134,8 @@ void replay_add_event(ReplayAsyncEventKind event_kind,
     event->opaque2 = opaque2;
     event->id = id;
 
-    replay_mutex_lock();
+    g_assert(replay_mutex_locked());
     QTAILQ_INSERT_TAIL(&events_list, event, events);
-    replay_mutex_unlock();
 }
 
 void replay_bh_schedule_event(QEMUBH *bh)
@@ -210,10 +207,7 @@ void replay_save_events(int checkpoint)
     while (!QTAILQ_EMPTY(&events_list)) {
         Event *event = QTAILQ_FIRST(&events_list);
         replay_save_event(event, checkpoint);
-
-        replay_mutex_unlock();
         replay_run_event(event);
-        replay_mutex_lock();
         QTAILQ_REMOVE(&events_list, event, events);
         g_free(event);
     }
@@ -299,9 +293,7 @@ void replay_read_events(int checkpoint)
         }
         replay_finish_event();
         read_event_kind = -1;
-        replay_mutex_unlock();
         replay_run_event(event);
-        replay_mutex_lock();
 
         g_free(event);
     }
diff --git a/replay/replay-internal.c b/replay/replay-internal.c
index e6b2fdb..d036a02 100644
--- a/replay/replay-internal.c
+++ b/replay/replay-internal.c
@@ -174,11 +174,6 @@ void replay_mutex_init(void)
     qemu_mutex_init(&lock);
 }
 
-void replay_mutex_destroy(void)
-{
-    qemu_mutex_destroy(&lock);
-}
-
 static __thread bool replay_locked;
 
 bool replay_mutex_locked(void)
@@ -186,15 +181,28 @@ bool replay_mutex_locked(void)
     return replay_locked;
 }
 
+void replay_mutex_destroy(void)
+{
+    if (replay_mutex_locked()) {
+        qemu_mutex_unlock(&lock);
+    }
+    qemu_mutex_destroy(&lock);
+}
+
+
+/* Ordering constraints, replay_lock must be taken before BQL */
 void replay_mutex_lock(void)
 {
+    g_assert(!qemu_mutex_iothread_locked());
     g_assert(!replay_mutex_locked());
     qemu_mutex_lock(&lock);
     replay_locked = true;
 }
 
+/* BQL can't be held when releasing the replay_lock */
 void replay_mutex_unlock(void)
 {
+    g_assert(!qemu_mutex_iothread_locked());
     g_assert(replay_mutex_locked());
     replay_locked = false;
     qemu_mutex_unlock(&lock);
diff --git a/replay/replay-time.c b/replay/replay-time.c
index f70382a..6a7565e 100644
--- a/replay/replay-time.c
+++ b/replay/replay-time.c
@@ -17,13 +17,13 @@
 
 int64_t replay_save_clock(ReplayClockKind kind, int64_t clock)
 {
-    replay_save_instructions();
 
     if (replay_file) {
-        replay_mutex_lock();
+        g_assert(replay_mutex_locked());
+
+        replay_save_instructions();
         replay_put_event(EVENT_CLOCK + kind);
         replay_put_qword(clock);
-        replay_mutex_unlock();
     }
 
     return clock;
@@ -46,16 +46,16 @@ void replay_read_next_clock(ReplayClockKind kind)
 /*! Reads next clock event from the input. */
 int64_t replay_read_clock(ReplayClockKind kind)
 {
+    g_assert(replay_file && replay_mutex_locked());
+
     replay_account_executed_instructions();
 
     if (replay_file) {
         int64_t ret;
-        replay_mutex_lock();
         if (replay_next_event_is(EVENT_CLOCK + kind)) {
             replay_read_next_clock(kind);
         }
         ret = replay_state.cached_clock[kind];
-        replay_mutex_unlock();
 
         return ret;
     }
diff --git a/replay/replay.c b/replay/replay.c
index 4f24498..7fc50ea 100644
--- a/replay/replay.c
+++ b/replay/replay.c
@@ -80,8 +80,9 @@ int replay_get_instructions(void)
 
 void replay_account_executed_instructions(void)
 {
+    g_assert(replay_mutex_locked());
+
     if (replay_mode == REPLAY_MODE_PLAY) {
-        replay_mutex_lock();
         if (replay_state.instructions_count > 0) {
             int count = (int)(replay_get_current_step()
                               - replay_state.current_step);
@@ -100,24 +101,22 @@ void replay_account_executed_instructions(void)
                 qemu_notify_event();
             }
         }
-        replay_mutex_unlock();
     }
 }
 
 bool replay_exception(void)
 {
+
     if (replay_mode == REPLAY_MODE_RECORD) {
+        g_assert(replay_mutex_locked());
         replay_save_instructions();
-        replay_mutex_lock();
         replay_put_event(EVENT_EXCEPTION);
-        replay_mutex_unlock();
         return true;
     } else if (replay_mode == REPLAY_MODE_PLAY) {
+        g_assert(replay_mutex_locked());
         bool res = replay_has_exception();
         if (res) {
-            replay_mutex_lock();
             replay_finish_event();
-            replay_mutex_unlock();
         }
         return res;
     }
@@ -129,10 +128,9 @@ bool replay_has_exception(void)
 {
     bool res = false;
     if (replay_mode == REPLAY_MODE_PLAY) {
+        g_assert(replay_mutex_locked());
         replay_account_executed_instructions();
-        replay_mutex_lock();
         res = replay_next_event_is(EVENT_EXCEPTION);
-        replay_mutex_unlock();
     }
 
     return res;
@@ -141,17 +139,15 @@ bool replay_has_exception(void)
 bool replay_interrupt(void)
 {
     if (replay_mode == REPLAY_MODE_RECORD) {
+        g_assert(replay_mutex_locked());
         replay_save_instructions();
-        replay_mutex_lock();
         replay_put_event(EVENT_INTERRUPT);
-        replay_mutex_unlock();
         return true;
     } else if (replay_mode == REPLAY_MODE_PLAY) {
+        g_assert(replay_mutex_locked());
         bool res = replay_has_interrupt();
         if (res) {
-            replay_mutex_lock();
             replay_finish_event();
-            replay_mutex_unlock();
         }
         return res;
     }
@@ -163,10 +159,9 @@ bool replay_has_interrupt(void)
 {
     bool res = false;
     if (replay_mode == REPLAY_MODE_PLAY) {
+        g_assert(replay_mutex_locked());
         replay_account_executed_instructions();
-        replay_mutex_lock();
         res = replay_next_event_is(EVENT_INTERRUPT);
-        replay_mutex_unlock();
     }
     return res;
 }
@@ -174,9 +169,8 @@ bool replay_has_interrupt(void)
 void replay_shutdown_request(ShutdownCause cause)
 {
     if (replay_mode == REPLAY_MODE_RECORD) {
-        replay_mutex_lock();
+        g_assert(replay_mutex_locked());
         replay_put_event(EVENT_SHUTDOWN + cause);
-        replay_mutex_unlock();
     }
 }
 
@@ -190,9 +184,9 @@ bool replay_checkpoint(ReplayCheckpoint checkpoint)
         return true;
     }
 
-    replay_mutex_lock();
 
     if (replay_mode == REPLAY_MODE_PLAY) {
+        g_assert(replay_mutex_locked());
         if (replay_next_event_is(EVENT_CHECKPOINT + checkpoint)) {
             replay_finish_event();
         } else if (replay_state.data_kind != EVENT_ASYNC) {
@@ -205,15 +199,21 @@ bool replay_checkpoint(ReplayCheckpoint checkpoint)
            checkpoint were processed */
         res = replay_state.data_kind != EVENT_ASYNC;
     } else if (replay_mode == REPLAY_MODE_RECORD) {
+        g_assert(replay_mutex_locked());
         replay_put_event(EVENT_CHECKPOINT + checkpoint);
         replay_save_events(checkpoint);
         res = true;
     }
 out:
-    replay_mutex_unlock();
     return res;
 }
 
+void replay_init_locks(void)
+{
+    replay_mutex_init();
+    replay_mutex_lock(); /* Hold while we start-up */
+}
+
 static void replay_enable(const char *fname, int mode)
 {
     const char *fmode = NULL;
@@ -233,8 +233,6 @@ static void replay_enable(const char *fname, int mode)
 
     atexit(replay_finish);
 
-    replay_mutex_init();
-
     replay_file = fopen(fname, fmode);
     if (replay_file == NULL) {
         fprintf(stderr, "Replay: open %s: %s\n", fname, strerror(errno));
@@ -274,6 +272,8 @@ void replay_configure(QemuOpts *opts)
     Location loc;
 
     if (!opts) {
+        /* we no longer need this lock */
+        replay_mutex_destroy();
         return;
     }
 
diff --git a/util/main-loop.c b/util/main-loop.c
index 7558eb5..7c5b163 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -29,6 +29,7 @@
 #include "qemu/sockets.h"	// struct in_addr needed for libslirp.h
 #include "sysemu/qtest.h"
 #include "sysemu/cpus.h"
+#include "sysemu/replay.h"
 #include "slirp/libslirp.h"
 #include "qemu/main-loop.h"
 #include "block/aio.h"
@@ -245,19 +246,26 @@ static int os_host_main_loop_wait(int64_t timeout)
         timeout = SCALE_MS;
     }
 
+
     if (timeout) {
         spin_counter = 0;
-        qemu_mutex_unlock_iothread();
     } else {
         spin_counter++;
     }
+    qemu_mutex_unlock_iothread();
+
+    if (replay_mode != REPLAY_MODE_NONE) {
+        replay_mutex_unlock();
+    }
 
     ret = qemu_poll_ns((GPollFD *)gpollfds->data, gpollfds->len, timeout);
 
-    if (timeout) {
-        qemu_mutex_lock_iothread();
+    if (replay_mode != REPLAY_MODE_NONE) {
+        replay_mutex_lock();
     }
 
+    qemu_mutex_lock_iothread();
+
     glib_pollfds_poll();
 
     g_main_context_release(context);
@@ -463,8 +471,17 @@ static int os_host_main_loop_wait(int64_t timeout)
     poll_timeout_ns = qemu_soonest_timeout(poll_timeout_ns, timeout);
 
     qemu_mutex_unlock_iothread();
+
+    if (replay_mode != REPLAY_MODE_NONE) {
+        replay_mutex_unlock();
+    }
+
     g_poll_ret = qemu_poll_ns(poll_fds, n_poll_fds + w->num, poll_timeout_ns);
 
+    if (replay_mode != REPLAY_MODE_NONE) {
+        replay_mutex_lock();
+    }
+
     qemu_mutex_lock_iothread();
     if (g_poll_ret > 0) {
         for (i = 0; i < w->num; i++) {
diff --git a/vl.c b/vl.c
index a8e0d03..77fc1ef 100644
--- a/vl.c
+++ b/vl.c
@@ -3137,6 +3137,8 @@ int main(int argc, char **argv, char **envp)
 
     qemu_init_cpu_list();
     qemu_init_cpu_loop();
+
+    replay_init_locks();
     qemu_mutex_lock_iothread();
 
     atexit(qemu_run_exit_notifiers);

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 18/26] cpu-exec: don't overwrite exception_index
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (16 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 17/26] replay: push replay_mutex_lock up the call tree Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 19/26] cpu-exec: reset exit flag before calling cpu_exec_nocache Pavel Dovgalyuk
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch adds a condition before overwriting the exception_index field.
The check is needed when exception_index is already set to a meaningful value.

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 accel/tcg/cpu-exec.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 4318441..35d0240 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -585,7 +585,9 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
     if (unlikely(atomic_read(&cpu->exit_request)
         || (use_icount && cpu->icount_decr.u16.low + cpu->icount_extra == 0))) {
         atomic_set(&cpu->exit_request, 0);
-        cpu->exception_index = EXCP_INTERRUPT;
+        if (cpu->exception_index == -1) {
+            cpu->exception_index = EXCP_INTERRUPT;
+        }
         return true;
     }
 

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 19/26] cpu-exec: reset exit flag before calling cpu_exec_nocache
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (17 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 18/26] cpu-exec: don't overwrite exception_index Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 20/26] replay: don't destroy mutex at exit Pavel Dovgalyuk
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch resets icount_decr.u32.high before calling cpu_exec_nocache
when an exception is pending. The exception is caused by the first
instruction in the block, which cannot be executed without resetting the flag.
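
For context, the message's icount_decr.u32.high and the code's
icount_decr.u16.high name the same storage: the exit-request flag lives
in the high half of a union polled by the generated code. A sketch from
the include/qom/cpu.h of this era (the real field order is
endian-dependent and omitted here):

  struct icount_decr_u16 {
      uint16_t low;    /* instruction budget counted down by TCG */
      uint16_t high;   /* non-zero requests an exit from the loop */
  };

  union {
      uint32_t u32;
      struct icount_decr_u16 u16;
  } icount_decr;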

Signed-off-by: Maria Klimushenkova <maria.klimushenkova@ispras.ru>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 accel/tcg/cpu-exec.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 35d0240..aaa9c2d 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -500,6 +500,7 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
     } else if (replay_has_exception()
                && cpu->icount_decr.u16.low + cpu->icount_extra == 0) {
         /* try to cause an exception pending in the log */
+        atomic_set(&cpu->icount_decr.u16.high, 0);
         cpu_exec_nocache(cpu, 1, tb_find(cpu, NULL, 0, curr_cflags()), true);
         *ret = -1;
         return true;

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 20/26] replay: don't destroy mutex at exit
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (18 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 19/26] cpu-exec: reset exit flag before calling cpu_exec_nocache Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 21/26] replay: check return values of fwrite Pavel Dovgalyuk
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

The replay mutex is held by the vCPU thread, while the destroy function
is called from an atexit handler in the main thread. Therefore we cannot
destroy it safely.
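
Illustrative only: POSIX leaves destroying a locked mutex undefined, and
glibc typically reports EBUSY, so the atexit path cannot safely reach
qemu_mutex_destroy() while the vCPU thread holds the lock:

  #include <errno.h>
  #include <pthread.h>
  #include <stdio.h>

  int main(void)
  {
      pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
      pthread_mutex_lock(&m);
      /* destroying a locked mutex is undefined behaviour */
      int err = pthread_mutex_destroy(&m);
      printf("pthread_mutex_destroy: %d%s\n", err,
             err == EBUSY ? " (EBUSY)" : "");
      pthread_mutex_unlock(&m);
      return 0;
  }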

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 replay/replay.c |    1 -
 1 file changed, 1 deletion(-)

diff --git a/replay/replay.c b/replay/replay.c
index 7fc50ea..3f431ad 100644
--- a/replay/replay.c
+++ b/replay/replay.c
@@ -358,7 +358,6 @@ void replay_finish(void)
     replay_snapshot = NULL;
 
     replay_finish_events();
-    replay_mutex_destroy();
 }
 
 void replay_add_blocker(Error *reason)

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 21/26] replay: check return values of fwrite
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (19 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 20/26] replay: don't destroy mutex at exit Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 22/26] scripts/qemu-gdb: add simple tcg lock status helper Pavel Dovgalyuk
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

This patch adds error reporting when fwrite cannot completely
save the buffer to the file.
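
A minimal sketch of the pattern (illustrative, not the patch itself):
the return value of fwrite() reveals a short write, and ferror() can
then confirm an I/O error on the stream:

  #include <stdio.h>

  static int put_buffer(FILE *f, const unsigned char *buf, size_t size)
  {
      if (fwrite(buf, 1, size, f) != size) {
          if (ferror(f)) {
              /* the underlying write failed, e.g. disk full */
          }
          return -1;
      }
      return 0;
  }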

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 replay/replay-internal.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/replay/replay-internal.c b/replay/replay-internal.c
index d036a02..0f73fdc 100644
--- a/replay/replay-internal.c
+++ b/replay/replay-internal.c
@@ -62,7 +62,9 @@ void replay_put_array(const uint8_t *buf, size_t size)
 {
     if (replay_file) {
         replay_put_dword(size);
-        fwrite(buf, 1, size, replay_file);
+        if (fwrite(buf, 1, size, replay_file) != size) {
+            error_report("replay write error");
+        }
     }
 }
 

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 22/26] scripts/qemu-gdb: add simple tcg lock status helper
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (20 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 21/26] replay: check return values of fwrite Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 23/26] util/qemu-thread-*: add qemu_lock, locked and unlock trace events Pavel Dovgalyuk
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

Add a simple helper to dump lock state.
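
Assuming the helpers are loaded in the usual way, usage would look like
this (the command name is defined by the script below):

  (gdb) source scripts/qemu-gdb.py
  (gdb) qemu tcg-lock-status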

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 scripts/qemu-gdb.py    |    3 ++-
 scripts/qemugdb/tcg.py |   46 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+), 1 deletion(-)
 create mode 100644 scripts/qemugdb/tcg.py

diff --git a/scripts/qemu-gdb.py b/scripts/qemu-gdb.py
index b3f8e04..d58213e 100644
--- a/scripts/qemu-gdb.py
+++ b/scripts/qemu-gdb.py
@@ -26,7 +26,7 @@ import os, sys
 
 sys.path.append(os.path.dirname(__file__))
 
-from qemugdb import aio, mtree, coroutine
+from qemugdb import aio, mtree, coroutine, tcg
 
 class QemuCommand(gdb.Command):
     '''Prefix for QEMU debug support commands'''
@@ -38,6 +38,7 @@ QemuCommand()
 coroutine.CoroutineCommand()
 mtree.MtreeCommand()
 aio.HandlersCommand()
+tcg.TCGLockStatusCommand()
 
 coroutine.CoroutineSPFunction()
 coroutine.CoroutinePCFunction()
diff --git a/scripts/qemugdb/tcg.py b/scripts/qemugdb/tcg.py
new file mode 100644
index 0000000..8c7f1d7
--- /dev/null
+++ b/scripts/qemugdb/tcg.py
@@ -0,0 +1,46 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+#
+# GDB debugging support, TCG status
+#
+# Copyright 2016 Linaro Ltd
+#
+# Authors:
+#  Alex Bennée <alex.bennee@linaro.org>
+#
+# This work is licensed under the terms of the GNU GPL, version 2.  See
+# the COPYING file in the top-level directory.
+#
+# Contributions after 2012-01-13 are licensed under the terms of the
+# GNU GPL, version 2 or (at your option) any later version.
+
+# 'qemu tcg-lock-status' -- display the TCG lock status across threads
+
+import gdb
+
+class TCGLockStatusCommand(gdb.Command):
+    '''Display TCG Execution Status'''
+    def __init__(self):
+        gdb.Command.__init__(self, 'qemu tcg-lock-status', gdb.COMMAND_DATA,
+                             gdb.COMPLETE_NONE)
+
+    def invoke(self, arg, from_tty):
+        gdb.write("Thread, BQL (iothread_mutex), Replay, Blocked?\n")
+        for thread in gdb.inferiors()[0].threads():
+            thread.switch()
+
+            iothread = gdb.parse_and_eval("iothread_locked")
+            replay = gdb.parse_and_eval("replay_locked")
+
+            frame = gdb.selected_frame()
+            if frame.name() == "__lll_lock_wait":
+                frame.older().select()
+                mutex = gdb.parse_and_eval("mutex")
+                owner = gdb.parse_and_eval("mutex->__data.__owner")
+                blocked = ("__lll_lock_wait waiting on %s from %d" %
+                           (mutex, owner))
+            else:
+                blocked = "not blocked"
+
+            gdb.write("%d/%d, %s, %s, %s\n" % (thread.num, thread.ptid[1],
+                                               iothread, replay, blocked))

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 23/26] util/qemu-thread-*: add qemu_lock, locked and unlock trace events
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (21 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 22/26] scripts/qemu-gdb: add simple tcg lock status helper Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 24/26] scripts/analyse-locks-simpletrace.py: script to analyse lock times Pavel Dovgalyuk
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v1
  - fix merge conflicts with existing tracing
  - add trylock/cond_wait traces

---
 include/qemu/thread.h    |   14 ++++++++++----
 util/qemu-thread-posix.c |   21 ++++++++++++---------
 util/trace-events        |    7 ++++---
 3 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/include/qemu/thread.h b/include/qemu/thread.h
index 9910f49..c51a7f1 100644
--- a/include/qemu/thread.h
+++ b/include/qemu/thread.h
@@ -22,9 +22,13 @@ typedef struct QemuThread QemuThread;
 
 void qemu_mutex_init(QemuMutex *mutex);
 void qemu_mutex_destroy(QemuMutex *mutex);
-void qemu_mutex_lock(QemuMutex *mutex);
-int qemu_mutex_trylock(QemuMutex *mutex);
-void qemu_mutex_unlock(QemuMutex *mutex);
+int qemu_mutex_trylock_impl(QemuMutex *mutex, const char *file, const int line);
+void qemu_mutex_lock_impl(QemuMutex *mutex, const char *file, const int line);
+void qemu_mutex_unlock_impl(QemuMutex *mutex, const char *file, const int line);
+
+#define qemu_mutex_lock(mutex) qemu_mutex_lock_impl(mutex, __FILE__, __LINE__)
+#define qemu_mutex_trylock(mutex) qemu_mutex_trylock_impl(mutex, __FILE__, __LINE__)
+#define qemu_mutex_unlock(mutex) qemu_mutex_unlock_impl(mutex, __FILE__, __LINE__)
 
 /* Prototypes for other functions are in thread-posix.h/thread-win32.h.  */
 void qemu_rec_mutex_init(QemuRecMutex *mutex);
@@ -39,7 +43,9 @@ void qemu_cond_destroy(QemuCond *cond);
  */
 void qemu_cond_signal(QemuCond *cond);
 void qemu_cond_broadcast(QemuCond *cond);
-void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex);
+void qemu_cond_wait_impl(QemuCond *cond, QemuMutex *mutex, const char *file, const int line);
+
+#define qemu_cond_wait(cond, mutex) qemu_cond_wait_impl(cond, mutex, __FILE__, __LINE__)
 
 void qemu_sem_init(QemuSemaphore *sem, int init);
 void qemu_sem_post(QemuSemaphore *sem);
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 7306475..1a838a9 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -57,26 +57,28 @@ void qemu_mutex_destroy(QemuMutex *mutex)
         error_exit(err, __func__);
 }
 
-void qemu_mutex_lock(QemuMutex *mutex)
+void qemu_mutex_lock_impl(QemuMutex *mutex, const char *file, const int line)
 {
     int err;
 
     assert(mutex->initialized);
+    trace_qemu_mutex_lock(mutex, file, line);
+
     err = pthread_mutex_lock(&mutex->lock);
     if (err)
         error_exit(err, __func__);
 
-    trace_qemu_mutex_locked(mutex);
+    trace_qemu_mutex_locked(mutex, file, line);
 }
 
-int qemu_mutex_trylock(QemuMutex *mutex)
+int qemu_mutex_trylock_impl(QemuMutex *mutex, const char *file, const int line)
 {
     int err;
 
     assert(mutex->initialized);
     err = pthread_mutex_trylock(&mutex->lock);
     if (err == 0) {
-        trace_qemu_mutex_locked(mutex);
+        trace_qemu_mutex_locked(mutex, file, line);
         return 0;
     }
     if (err != EBUSY) {
@@ -85,15 +87,16 @@ int qemu_mutex_trylock(QemuMutex *mutex)
     return -EBUSY;
 }
 
-void qemu_mutex_unlock(QemuMutex *mutex)
+void qemu_mutex_unlock_impl(QemuMutex *mutex, const char *file, const int line)
 {
     int err;
 
     assert(mutex->initialized);
-    trace_qemu_mutex_unlocked(mutex);
     err = pthread_mutex_unlock(&mutex->lock);
     if (err)
         error_exit(err, __func__);
+
+    trace_qemu_mutex_unlock(mutex, file, line);
 }
 
 void qemu_rec_mutex_init(QemuRecMutex *mutex)
@@ -152,14 +155,14 @@ void qemu_cond_broadcast(QemuCond *cond)
         error_exit(err, __func__);
 }
 
-void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
+void qemu_cond_wait_impl(QemuCond *cond, QemuMutex *mutex, const char *file, const int line)
 {
     int err;
 
     assert(cond->initialized);
-    trace_qemu_mutex_unlocked(mutex);
+    trace_qemu_mutex_unlock(mutex, file, line);
     err = pthread_cond_wait(&cond->cond, &mutex->lock);
-    trace_qemu_mutex_locked(mutex);
+    trace_qemu_mutex_locked(mutex, file, line);
     if (err)
         error_exit(err, __func__);
 }
diff --git a/util/trace-events b/util/trace-events
index 025499f..515e625 100644
--- a/util/trace-events
+++ b/util/trace-events
@@ -56,6 +56,7 @@ lockcnt_futex_wait(const void *lockcnt, int val) "lockcnt %p waiting on %d"
 lockcnt_futex_wait_resume(const void *lockcnt, int new) "lockcnt %p after wait: %d"
 lockcnt_futex_wake(const void *lockcnt) "lockcnt %p waking up one waiter"
 
-# util/qemu-thread-posix.c
-qemu_mutex_locked(void *lock) "locked mutex %p"
-qemu_mutex_unlocked(void *lock) "unlocked mutex %p"
+# util/qemu-thread.c
+qemu_mutex_lock(void *mutex, const char *file, const int line) "waiting on mutex %p (%s:%d)"
+qemu_mutex_locked(void *mutex, const char *file, const int line) "taken mutex %p (%s:%d)"
+qemu_mutex_unlock(void *mutex, const char *file, const int line) "released mutex %p (%s:%d)"

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 24/26] scripts/analyse-locks-simpletrace.py: script to analyse lock times
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (22 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 23/26] util/qemu-thread-*: add qemu_lock, locked and unlock trace events Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 25/26] scripts/replay-dump.py: replay log dumper Pavel Dovgalyuk
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

This script allows analysis of mutex acquisition and hold times based
on a trace file. Given a trace control file of:

  qemu_mutex_lock
  qemu_mutex_locked
  qemu_mutex_unlock

And running with:

  $QEMU $QEMU_ARGS -trace events=./lock-trace

You can analyse the results with:

  ./scripts/analyse-locks-simpletrace.py trace-events-all ./trace-21812

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 scripts/analyse-locks-simpletrace.py |   99 ++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100755 scripts/analyse-locks-simpletrace.py

diff --git a/scripts/analyse-locks-simpletrace.py b/scripts/analyse-locks-simpletrace.py
new file mode 100755
index 0000000..b72c951
--- /dev/null
+++ b/scripts/analyse-locks-simpletrace.py
@@ -0,0 +1,99 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Analyse lock events and report lock acquisition/hold statistics
+#
+# Author: Alex Bennée <alex.bennee@linaro.org>
+#
+
+import os
+import simpletrace
+import argparse
+import numpy as np
+
+class MutexAnalyser(simpletrace.Analyzer):
+    "A simpletrace Analyser for checking locks."
+
+    def __init__(self):
+        self.locks = 0
+        self.locked = 0
+        self.unlocks = 0
+        self.mutex_records = {}
+
+    def _get_mutex(self, mutex):
+        if not mutex in self.mutex_records:
+            self.mutex_records[mutex] = {"locks": 0,
+                                         "lock_time": 0,
+                                         "acquire_times": [],
+                                         "locked": 0,
+                                         "locked_time": 0,
+                                         "held_times": [],
+                                         "unlocked": 0}
+
+        return self.mutex_records[mutex]
+
+    def qemu_mutex_lock(self, timestamp, mutex, filename, line):
+        self.locks += 1
+        rec = self._get_mutex(mutex)
+        rec["locks"] += 1
+        rec["lock_time"] = timestamp[0]
+        rec["lock_loc"] = (filename, line)
+
+    def qemu_mutex_locked(self, timestamp, mutex, filename, line):
+        self.locked += 1
+        rec = self._get_mutex(mutex)
+        rec["locked"] += 1
+        rec["locked_time"] = timestamp[0]
+        acquire_time = rec["locked_time"] - rec["lock_time"]
+        rec["locked_loc"] = (filename, line)
+        rec["acquire_times"].append(acquire_time)
+
+    def qemu_mutex_unlock(self, timestamp, mutex, filename, line):
+        self.unlocks += 1
+        rec = self._get_mutex(mutex)
+        rec["unlocked"] += 1
+        held_time = timestamp[0] - rec["locked_time"]
+        rec["held_times"].append(held_time)
+        rec["unlock_loc"] = (filename, line)
+
+
+def get_args():
+    "Grab options"
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--output", "-o", type=str, help="Render plot to file")
+    parser.add_argument("events", type=str, help='trace file read from')
+    parser.add_argument("tracefile", type=str, help='trace file read from')
+    return parser.parse_args()
+
+if __name__ == '__main__':
+    args = get_args()
+
+    # Gather data from the trace
+    analyser = MutexAnalyser()
+    simpletrace.process(args.events, args.tracefile, analyser)
+
+    print ("Total locks: %d, locked: %d, unlocked: %d" %
+           (analyser.locks, analyser.locked, analyser.unlocks))
+
+    # Now dump the individual lock stats
+    for key, val in sorted(analyser.mutex_records.iteritems(),
+                           key=lambda (k,v): v["locks"]):
+        print ("Lock: %#x locks: %d, locked: %d, unlocked: %d" %
+               (key, val["locks"], val["locked"], val["unlocked"]))
+
+        acquire_times = np.array(val["acquire_times"])
+        if len(acquire_times) > 0:
+            print ("  Acquire Time: min:%d median:%d avg:%.2f max:%d" %
+                   (acquire_times.min(), np.median(acquire_times),
+                    acquire_times.mean(), acquire_times.max()))
+
+        held_times = np.array(val["held_times"])
+        if len(held_times) > 0:
+            print ("  Held Time: min:%d median:%d avg:%.2f max:%d" %
+                   (held_times.min(), np.median(held_times),
+                    held_times.mean(), held_times.max()))
+
+        # Check if any locks still held
+        if val["locks"] > val["locked"]:
+            print ("  LOCK HELD (%s:%s)" % (val["locked_loc"]))
+            print ("  BLOCKED   (%s:%s)" % (val["lock_loc"]))

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 25/26] scripts/replay-dump.py: replay log dumper
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (23 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 24/26] scripts/analyse-locks-simpletrace.py: script to analyse lock times Pavel Dovgalyuk
@ 2017-10-31 11:08 ` Pavel Dovgalyuk
  2017-10-31 11:09 ` [Qemu-devel] [RFC PATCH 26/26] scripts/qemu-gdb/timers.py: new helper to dump timer state Pavel Dovgalyuk
  2017-10-31 12:48 ` [Qemu-devel] [RFC PATCH 00/26] replay additions no-reply
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

This script is a debugging tool for looking through the contents of a
replay log file. It is incomplete but should fail gracefully at events
it doesn't understand.

It currently understands several different log formats, reflecting the
changes merged since MTTCG (audio record/replay support, shutdown
causes). It was written to help debug what caused the BQL changes to
break replay support.
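
Assuming a log recorded with something like
-icount shift=7,rr=record,rrfile=replay.bin, the dump would be produced
with (the -f/--file option is defined by the script below):

  ./scripts/replay-dump.py -f replay.bin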

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v2
  - yet another update to the log format

---
 scripts/replay-dump.py |  308 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 308 insertions(+)
 create mode 100755 scripts/replay-dump.py

diff --git a/scripts/replay-dump.py b/scripts/replay-dump.py
new file mode 100755
index 0000000..203bb31
--- /dev/null
+++ b/scripts/replay-dump.py
@@ -0,0 +1,308 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# Dump the contents of a recorded execution stream
+#
+#  Copyright (c) 2017 Alex Bennée <alex.bennee@linaro.org>
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+import argparse
+import struct
+from collections import namedtuple
+
+# This mirrors some of the global replay state which some of the
+# stream loading refers to. Some decoders may read the next event so
+# we need to handle that case. Calling reuse_event will ensure the next
+# event is read from the cache rather than advancing the file.
+
+class ReplayState(object):
+    def __init__(self):
+        self.event = -1
+        self.event_count = 0
+        self.already_read = False
+        self.current_checkpoint = 0
+        self.checkpoint = 0
+
+    def set_event(self, ev):
+        self.event = ev
+        self.event_count += 1
+
+    def get_event(self):
+        self.already_read = False
+        return self.event
+
+    def reuse_event(self, ev):
+        self.event = ev
+        self.already_read = True
+
+    def set_checkpoint(self):
+        self.checkpoint = self.event - self.checkpoint_start
+
+    def get_checkpoint(self):
+        return self.checkpoint
+
+replay_state = ReplayState()
+
+# Simple read functions that mirror replay-internal.c
+# The file-stream is big-endian and manually written out a byte at a time.
+
+def read_byte(fin):
+    "Read a single byte"
+    return struct.unpack('>B', fin.read(1))[0]
+
+def read_event(fin):
+    "Read a single byte event, but save some state"
+    if replay_state.already_read:
+        return replay_state.get_event()
+    else:
+        replay_state.set_event(read_byte(fin))
+        return replay_state.event
+
+def read_word(fin):
+    "Read a 16 bit word"
+    return struct.unpack('>H', fin.read(2))[0]
+
+def read_dword(fin):
+    "Read a 32 bit word"
+    return struct.unpack('>I', fin.read(4))[0]
+
+def read_qword(fin):
+    "Read a 64 bit word"
+    return struct.unpack('>Q', fin.read(8))[0]
+
+# Generic decoder structure
+Decoder = namedtuple("Decoder", "eid name fn")
+
+def call_decode(table, index, dumpfile):
+    "Search decode table for next step"
+    decoder = next((d for d in table if d.eid == index), None)
+    if not decoder:
+        print "Could not decode index: %d" % (index)
+        print "Entry is: %s" % (decoder)
+        print "Decode Table is:\n%s" % (table)
+        return False
+    else:
+        return decoder.fn(decoder.eid, decoder.name, dumpfile)
+
+# Print event
+def print_event(eid, name, string=None, event_count=None):
+    "Print event with count"
+    if not event_count:
+        event_count = replay_state.event_count
+
+    if string:
+        print "%d:%s(%d) %s" % (event_count, name, eid, string)
+    else:
+        print "%d:%s(%d)" % (event_count, name, eid)
+
+
+# Decoders for each event type
+
+def decode_unimp(eid, name, _unused_dumpfile):
+    "Unimplimented decoder, will trigger exit"
+    print "%s not handled - will now stop" % (name)
+    return False
+
+# Checkpoint decoder
+def swallow_async_qword(eid, name, dumpfile):
+    "Swallow a qword of data without looking at it"
+    step_id = read_qword(dumpfile)
+    print "  %s(%d) @ %d" % (name, eid, step_id)
+    return True
+
+async_decode_table = [ Decoder(0, "REPLAY_ASYNC_EVENT_BH", swallow_async_qword),
+                       Decoder(1, "REPLAY_ASYNC_INPUT", decode_unimp),
+                       Decoder(2, "REPLAY_ASYNC_INPUT_SYNC", decode_unimp),
+                       Decoder(3, "REPLAY_ASYNC_CHAR_READ", decode_unimp),
+                       Decoder(4, "REPLAY_ASYNC_EVENT_BLOCK", decode_unimp),
+                       Decoder(5, "REPLAY_ASYNC_EVENT_NET", decode_unimp),
+]
+# See replay_read_events/replay_read_event
+def decode_async(eid, name, dumpfile):
+    """Decode an ASYNC event"""
+
+    print_event(eid, name)
+
+    async_event_kind = read_byte(dumpfile)
+    async_event_checkpoint = read_byte(dumpfile)
+
+    if async_event_checkpoint != replay_state.current_checkpoint:
+        print "  mismatch between checkpoint %d and async data %d" % (
+            replay_state.current_checkpoint, async_event_checkpoint)
+        return True
+
+    return call_decode(async_decode_table, async_event_kind, dumpfile)
+
+
+def decode_instruction(eid, name, dumpfile):
+    ins_diff = read_dword(dumpfile)
+    print_event(eid, name, "0x%x" % (ins_diff))
+    return True
+
+def decode_audio_out(eid, name, dumpfile):
+    audio_data = read_dword(dumpfile)
+    print_event(eid, name, "%d" % (audio_data))
+    return True
+
+def decode_checkpoint(eid, name, dumpfile):
+    """Decode a checkpoint.
+
+    Checkpoints contain a series of async events with their own specific data.
+    """
+    replay_state.set_checkpoint()
+    # save event count as we peek ahead
+    event_number = replay_state.event_count
+    next_event = read_event(dumpfile)
+
+    # if the next event is EVENT_ASYNC there are a bunch of
+    # async events to read, otherwise we are done
+    if next_event != 3:
+        print_event(eid, name, "no additional data", event_number)
+    else:
+        print_event(eid, name, "more data follows", event_number)
+
+    replay_state.reuse_event(next_event)
+    return True
+
+def decode_checkpoint_init(eid, name, dumpfile):
+    print_event(eid, name)
+    return True
+
+def decode_interrupt(eid, name, dumpfile):
+    print_event(eid, name)
+    return True
+
+def decode_clock(eid, name, dumpfile):
+    clock_data = read_qword(dumpfile)
+    print_event(eid, name, "0x%x" % (clock_data))
+    return True
+
+
+# pre-MTTCG merge
+v5_event_table = [Decoder(0, "EVENT_INSTRUCTION", decode_instruction),
+                  Decoder(1, "EVENT_INTERRUPT", decode_interrupt),
+                  Decoder(2, "EVENT_EXCEPTION", decode_unimp),
+                  Decoder(3, "EVENT_ASYNC", decode_async),
+                  Decoder(4, "EVENT_SHUTDOWN", decode_unimp),
+                  Decoder(5, "EVENT_CHAR_WRITE", decode_unimp),
+                  Decoder(6, "EVENT_CHAR_READ_ALL", decode_unimp),
+                  Decoder(7, "EVENT_CHAR_READ_ALL_ERROR", decode_unimp),
+                  Decoder(8, "EVENT_CLOCK_HOST", decode_clock),
+                  Decoder(9, "EVENT_CLOCK_VIRTUAL_RT", decode_clock),
+                  Decoder(10, "EVENT_CP_CLOCK_WARP_START", decode_checkpoint),
+                  Decoder(11, "EVENT_CP_CLOCK_WARP_ACCOUNT", decode_checkpoint),
+                  Decoder(12, "EVENT_CP_RESET_REQUESTED", decode_checkpoint),
+                  Decoder(13, "EVENT_CP_SUSPEND_REQUESTED", decode_checkpoint),
+                  Decoder(14, "EVENT_CP_CLOCK_VIRTUAL", decode_checkpoint),
+                  Decoder(15, "EVENT_CP_CLOCK_HOST", decode_checkpoint),
+                  Decoder(16, "EVENT_CP_CLOCK_VIRTUAL_RT", decode_checkpoint),
+                  Decoder(17, "EVENT_CP_INIT", decode_checkpoint_init),
+                  Decoder(18, "EVENT_CP_RESET", decode_checkpoint),
+]
+
+# post-MTTCG merge, AUDIO support added
+v6_event_table = [Decoder(0, "EVENT_INSTRUCTION", decode_instruction),
+                  Decoder(1, "EVENT_INTERRUPT", decode_interrupt),
+                  Decoder(2, "EVENT_EXCEPTION", decode_unimp),
+                  Decoder(3, "EVENT_ASYNC", decode_async),
+                  Decoder(4, "EVENT_SHUTDOWN", decode_unimp),
+                  Decoder(5, "EVENT_CHAR_WRITE", decode_unimp),
+                  Decoder(6, "EVENT_CHAR_READ_ALL", decode_unimp),
+                  Decoder(7, "EVENT_CHAR_READ_ALL_ERROR", decode_unimp),
+                  Decoder(8, "EVENT_AUDIO_OUT", decode_audio_out),
+                  Decoder(9, "EVENT_AUDIO_IN", decode_unimp),
+                  Decoder(10, "EVENT_CLOCK_HOST", decode_clock),
+                  Decoder(11, "EVENT_CLOCK_VIRTUAL_RT", decode_clock),
+                  Decoder(12, "EVENT_CP_CLOCK_WARP_START", decode_checkpoint),
+                  Decoder(13, "EVENT_CP_CLOCK_WARP_ACCOUNT", decode_checkpoint),
+                  Decoder(14, "EVENT_CP_RESET_REQUESTED", decode_checkpoint),
+                  Decoder(15, "EVENT_CP_SUSPEND_REQUESTED", decode_checkpoint),
+                  Decoder(16, "EVENT_CP_CLOCK_VIRTUAL", decode_checkpoint),
+                  Decoder(17, "EVENT_CP_CLOCK_HOST", decode_checkpoint),
+                  Decoder(18, "EVENT_CP_CLOCK_VIRTUAL_RT", decode_checkpoint),
+                  Decoder(19, "EVENT_CP_INIT", decode_checkpoint_init),
+                  Decoder(20, "EVENT_CP_RESET", decode_checkpoint),
+]
+
+# Shutdown cause added
+v7_event_table = [Decoder(0, "EVENT_INSTRUCTION", decode_instruction),
+                  Decoder(1, "EVENT_INTERRUPT", decode_interrupt),
+                  Decoder(2, "EVENT_EXCEPTION", decode_unimp),
+                  Decoder(3, "EVENT_ASYNC", decode_async),
+                  Decoder(4, "EVENT_SHUTDOWN", decode_unimp),
+                  Decoder(5, "EVENT_SHUTDOWN_HOST_ERR", decode_unimp),
+                  Decoder(6, "EVENT_SHUTDOWN_HOST_QMP", decode_unimp),
+                  Decoder(7, "EVENT_SHUTDOWN_HOST_SIGNAL", decode_unimp),
+                  Decoder(8, "EVENT_SHUTDOWN_HOST_UI", decode_unimp),
+                  Decoder(9, "EVENT_SHUTDOWN_GUEST_SHUTDOWN", decode_unimp),
+                  Decoder(10, "EVENT_SHUTDOWN_GUEST_RESET", decode_unimp),
+                  Decoder(11, "EVENT_SHUTDOWN_GUEST_PANIC", decode_unimp),
+                  Decoder(12, "EVENT_SHUTDOWN___MAX", decode_unimp),
+                  Decoder(13, "EVENT_CHAR_WRITE", decode_unimp),
+                  Decoder(14, "EVENT_CHAR_READ_ALL", decode_unimp),
+                  Decoder(15, "EVENT_CHAR_READ_ALL_ERROR", decode_unimp),
+                  Decoder(16, "EVENT_AUDIO_OUT", decode_audio_out),
+                  Decoder(17, "EVENT_AUDIO_IN", decode_unimp),
+                  Decoder(18, "EVENT_CLOCK_HOST", decode_clock),
+                  Decoder(19, "EVENT_CLOCK_VIRTUAL_RT", decode_clock),
+                  Decoder(20, "EVENT_CP_CLOCK_WARP_START", decode_checkpoint),
+                  Decoder(21, "EVENT_CP_CLOCK_WARP_ACCOUNT", decode_checkpoint),
+                  Decoder(22, "EVENT_CP_RESET_REQUESTED", decode_checkpoint),
+                  Decoder(23, "EVENT_CP_SUSPEND_REQUESTED", decode_checkpoint),
+                  Decoder(24, "EVENT_CP_CLOCK_VIRTUAL", decode_checkpoint),
+                  Decoder(25, "EVENT_CP_CLOCK_HOST", decode_checkpoint),
+                  Decoder(26, "EVENT_CP_CLOCK_VIRTUAL_RT", decode_checkpoint),
+                  Decoder(27, "EVENT_CP_INIT", decode_checkpoint_init),
+                  Decoder(28, "EVENT_CP_RESET", decode_checkpoint),
+]
+
+def parse_arguments():
+    "Grab arguments for script"
+    parser = argparse.ArgumentParser()
+    parser.add_argument("-f", "--file", help='record/replay dump to read from',
+                        required=True)
+    return parser.parse_args()
+
+def decode_file(filename):
+    "Decode a record/replay dump"
+    dumpfile = open(filename, "rb")
+
+    # read and throwaway the header
+    version = read_dword(dumpfile)
+    junk = read_qword(dumpfile)
+
+    print "HEADER: version 0x%x" % (version)
+
+    if version == 0xe02007:
+        event_decode_table = v7_event_table
+        replay_state.checkpoint_start = 12
+    elif version == 0xe02006:
+        event_decode_table = v6_event_table
+        replay_state.checkpoint_start = 12
+    else:
+        event_decode_table = v5_event_table
+        replay_state.checkpoint_start = 10
+
+    try:
+        decode_ok = True
+        while decode_ok:
+            event = read_event(dumpfile)
+            decode_ok = call_decode(event_decode_table, event, dumpfile)
+    finally:
+        dumpfile.close()
+
+if __name__ == "__main__":
+    args = parse_arguments()
+    decode_file(args.file)

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 26/26] scripts/qemu-gdb/timers.py: new helper to dump timer state
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (24 preceding siblings ...)
  2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 25/26] scripts/replay-dump.py: replay log dumper Pavel Dovgalyuk
@ 2017-10-31 11:09 ` Pavel Dovgalyuk
  2017-10-31 12:48 ` [Qemu-devel] [RFC PATCH 00/26] replay additions no-reply
  26 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:09 UTC (permalink / raw)
  To: qemu-devel; +Cc: dovgaluk

From: Alex Bennée <alex.bennee@linaro.org>

This introduces the qemu-gdb command "qemu timers" which will dump the
state of the main timers in the system.
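
Assuming the helpers are loaded in the usual way:

  (gdb) source scripts/qemu-gdb.py
  (gdb) qemu timers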

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

---
 scripts/qemu-gdb.py       |    3 ++-
 scripts/qemugdb/timers.py |   54 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+), 1 deletion(-)
 create mode 100644 scripts/qemugdb/timers.py

diff --git a/scripts/qemu-gdb.py b/scripts/qemu-gdb.py
index d58213e..690827e 100644
--- a/scripts/qemu-gdb.py
+++ b/scripts/qemu-gdb.py
@@ -26,7 +26,7 @@ import os, sys
 
 sys.path.append(os.path.dirname(__file__))
 
-from qemugdb import aio, mtree, coroutine, tcg
+from qemugdb import aio, mtree, coroutine, tcg, timers
 
 class QemuCommand(gdb.Command):
     '''Prefix for QEMU debug support commands'''
@@ -39,6 +39,7 @@ coroutine.CoroutineCommand()
 mtree.MtreeCommand()
 aio.HandlersCommand()
 tcg.TCGLockStatusCommand()
+timers.TimersCommand()
 
 coroutine.CoroutineSPFunction()
 coroutine.CoroutinePCFunction()
diff --git a/scripts/qemugdb/timers.py b/scripts/qemugdb/timers.py
new file mode 100644
index 0000000..be71a00
--- /dev/null
+++ b/scripts/qemugdb/timers.py
@@ -0,0 +1,54 @@
+#!/usr/bin/python
+# GDB debugging support
+#
+# Copyright 2017 Linaro Ltd
+#
+# Author: Alex Bennée <alex.bennee@linaro.org>
+#
+# This work is licensed under the terms of the GNU GPL, version 2.  See
+# the COPYING file in the top-level directory.
+
+# 'qemu timers' -- display the current timerlists
+
+import gdb
+
+class TimersCommand(gdb.Command):
+    '''Display the current QEMU timers'''
+
+    def __init__(self):
+        'Register the class as a gdb command'
+        gdb.Command.__init__(self, 'qemu timers', gdb.COMMAND_DATA,
+                             gdb.COMPLETE_NONE)
+
+    def dump_timers(self, timer):
+        "Follow a timer and recursively dump each one in the list."
+        # timer should be of type QemuTimer
+        gdb.write("    timer %s/%s (cb:%s,opq:%s)\n" % (
+            timer['expire_time'],
+            timer['scale'],
+            timer['cb'],
+            timer['opaque']))
+
+        if int(timer['next']) > 0:
+            self.dump_timers(timer['next'])
+
+
+    def process_timerlist(self, tlist, ttype):
+        gdb.write("Processing %s timers\n" % (ttype))
+        gdb.write("  clock %s is enabled:%s, last:%s\n" % (
+            tlist['clock']['type'],
+            tlist['clock']['enabled'],
+            tlist['clock']['last']))
+        if int(tlist['active_timers']) > 0:
+            self.dump_timers(tlist['active_timers'])
+
+
+    def invoke(self, arg, from_tty):
+        'Run the command'
+        main_timers = gdb.parse_and_eval("main_loop_tlg")
+
+        # This will break if QEMUClockType in timer.h is redefined
+        self.process_timerlist(main_timers['tl'][0], "Realtime")
+        self.process_timerlist(main_timers['tl'][1], "Virtual")
+        self.process_timerlist(main_timers['tl'][2], "Host")
+        self.process_timerlist(main_timers['tl'][3], "Virtual RT")
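
For reference, a session looks like this (a sketch: the output shape
follows the format strings above, while the actual clock types, flags
and timer values depend on the binary and guest being inspected):

    (gdb) source scripts/qemu-gdb.py
    (gdb) qemu timers
    Processing Realtime timers
      clock QEMU_CLOCK_REALTIME is enabled:true, last:0
    Processing Virtual timers
    ...

Note that the command reads main_loop_tlg directly, so it only shows
the main loop's timer lists.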

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 00/26] replay additions
  2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
                   ` (25 preceding siblings ...)
  2017-10-31 11:09 ` [Qemu-devel] [RFC PATCH 26/26] scripts/qemu-gdb/timers.py: new helper to dump timer state Pavel Dovgalyuk
@ 2017-10-31 12:48 ` no-reply
  26 siblings, 0 replies; 29+ messages in thread
From: no-reply @ 2017-10-31 12:48 UTC (permalink / raw)
  To: Pavel.Dovgaluk; +Cc: famz, qemu-devel, dovgaluk

Hi,

This series seems to have some coding style problems. See output below for
more information:

Subject: [Qemu-devel] [RFC PATCH 00/26] replay additions
Type: series
Message-id: 20171031110641.5836.43266.stgit@pasha-VirtualBox

=== TEST SCRIPT BEGIN ===
#!/bin/bash

BASE=base
n=1
total=$(git log --oneline $BASE.. | wc -l)
failed=0

git config --local diff.renamelimit 0
git config --local diff.renames True

commits="$(git log --format=%H --reverse $BASE..)"
for c in $commits; do
    echo "Checking PATCH $n/$total: $(git log -n 1 --format=%s $c)..."
    if ! git show $c --format=email | ./scripts/checkpatch.pl --mailback -; then
        failed=1
        echo
    fi
    n=$((n+1))
done

exit $failed
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
8f55c902ad scripts/qemu-gdb/timers.py: new helper to dump timer state
a8cba1b214 scripts/replay-dump.py: replay log dumper
15dcdd58a9 scripts/analyse-locks-simpletrace.py: script to analyse lock times
2b717ce080 util/qemu-thread-*: add qemu_lock, locked and unlock trace events
dce04fcfd9 scripts/qemu-gdb: add simple tcg lock status helper
a197d956c9 replay: check return values of fwrite
6a31eda883 replay: don't destroy mutex at exit
22d8103583 cpu-exec: reset exit flag before calling cpu_exec_nocache
ffd5f8380a cpu-exec: don't overwrite exception_index
383b2a323d replay: push replay_mutex_lock up the call tree
915bfa3562 replay: make locking visible outside replay code
2a47ac1a9b replay/replay-internal.c: track holding of replay_lock
bb876be027 replay/replay.c: bump REPLAY_VERSION again
617e2488f1 cpus: only take BQL for sleeping threads
714ff08a06 cpus: push BQL lock to qemu_*_wait_io_event
f96528d08a target/arm/arm-powertctl: drop BQL assertions
9fece25a19 icount: fixed saving/restoring of icount warp timers
f71c5b84e7 replay: save prior value of the host clock
695405b4c6 replay: make safe vmstop at record/replay
64e3b6ab3d replay: added replay log format description
fc18acd5c0 replay: fix save/load vm for non-empty queue
0f4d59a726 replay: fixed replay_enable_events
7b8e9c1e53 replay: fix processing async events
bba29a49ed replay: disable default snapshot for record/replay
85e4f184ac blkreplay: create temporary overlay for underlaying devices
42b8f38eb3 block: implement bdrv_snapshot_goto for blkreplay

=== OUTPUT BEGIN ===
Checking PATCH 1/26: block: implement bdrv_snapshot_goto for blkreplay...
Checking PATCH 2/26: blkreplay: create temporary overlay for underlaying devices...
Checking PATCH 3/26: replay: disable default snapshot for record/replay...
Checking PATCH 4/26: replay: fix processing async events...
Checking PATCH 5/26: replay: fixed replay_enable_events...
Checking PATCH 6/26: replay: fix save/load vm for non-empty queue...
ERROR: Error messages should not contain newlines
#60: FILE: migration/savevm.c:2322:
+                     "right now. Try once more later.\n");

total: 1 errors, 0 warnings, 48 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 7/26: replay: added replay log format description...
Checking PATCH 8/26: replay: make safe vmstop at record/replay...
ERROR: Error messages should not contain newlines
#45: FILE: migration/savevm.c:2147:
+                     "right now. Try once more later.\n");

total: 1 errors, 0 warnings, 23 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 9/26: replay: save prior value of the host clock...
Checking PATCH 10/26: icount: fixed saving/restoring of icount warp timers...
ERROR: spaces required around that '*' (ctx:VxV)
#171: FILE: cpus.c:688:
+    .subsections = (const VMStateDescription*[]) {
                                             ^

total: 1 errors, 0 warnings, 174 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 11/26: target/arm/arm-powertctl: drop BQL assertions...
Checking PATCH 12/26: cpus: push BQL lock to qemu_*_wait_io_event...
Checking PATCH 13/26: cpus: only take BQL for sleeping threads...
Checking PATCH 14/26: replay/replay.c: bump REPLAY_VERSION again...
Checking PATCH 15/26: replay/replay-internal.c: track holding of replay_lock...
Checking PATCH 16/26: replay: make locking visible outside replay code...
Checking PATCH 17/26: replay: push replay_mutex_lock up the call tree...
Checking PATCH 18/26: cpu-exec: don't overwrite exception_index...
Checking PATCH 19/26: cpu-exec: reset exit flag before calling cpu_exec_nocache...
Checking PATCH 20/26: replay: don't destroy mutex at exit...
Checking PATCH 21/26: replay: check return values of fwrite...
Checking PATCH 22/26: scripts/qemu-gdb: add simple tcg lock status helper...
Checking PATCH 23/26: util/qemu-thread-*: add qemu_lock, locked and unlock trace events...
WARNING: line over 80 characters
#30: FILE: include/qemu/thread.h:30:
+#define qemu_mutex_trylock(mutex) qemu_mutex_trylock_impl(mutex, __FILE__, __LINE__)

WARNING: line over 80 characters
#31: FILE: include/qemu/thread.h:31:
+#define qemu_mutex_unlock(mutex) qemu_mutex_unlock_impl(mutex, __FILE__, __LINE__)

ERROR: line over 90 characters
#40: FILE: include/qemu/thread.h:46:
+void qemu_cond_wait_impl(QemuCond *cond, QemuMutex *mutex, const char *file, const int line);

WARNING: line over 80 characters
#42: FILE: include/qemu/thread.h:48:
+#define qemu_cond_wait(cond, mutex) qemu_cond_wait_impl(cond, mutex, __FILE__, __LINE__)

ERROR: line over 90 characters
#107: FILE: util/qemu-thread-posix.c:158:
+void qemu_cond_wait_impl(QemuCond *cond, QemuMutex *mutex, const char *file, const int line)

total: 2 errors, 3 warnings, 103 lines checked

Your patch has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.

Checking PATCH 24/26: scripts/analyse-locks-simpletrace.py: script to analyse lock times...
Checking PATCH 25/26: scripts/replay-dump.py: replay log dumper...
Checking PATCH 26/26: scripts/qemu-gdb/timers.py: new helper to dump timer state...
=== OUTPUT END ===

Test command exited with code: 1


---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-devel@freelists.org

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [Qemu-devel] [RFC PATCH 05/26] replay: fixed replay_enable_events
  2017-10-31 11:24 Pavel Dovgalyuk
@ 2017-10-31 11:25 ` Pavel Dovgalyuk
  0 siblings, 0 replies; 29+ messages in thread
From: Pavel Dovgalyuk @ 2017-10-31 11:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, peter.maydell, boost.lists, quintela, jasowang, mst,
	zuban32s, maria.klimushenkova, dovgaluk, kraxel, pavel.dovgaluk,
	pbonzini, alex.bennee

This patch fixes the assignment to the internal events_enabled variable:
it is now set only in record/replay mode. This affects the behavior of
the external functions that check the flag, since a true events_enabled
now implies replay_mode != REPLAY_MODE_NONE and the callers changed
below can drop their redundant mode checks.
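
Sketched from a caller's point of view (illustrative only; the real
changes are in the diff below):

    /* before: every producer checked both conditions */
    if (replay_mode != REPLAY_MODE_NONE && events_enabled) { ... }

    /* after: replay_enable_events() gates the flag once, so it can
     * never become true in REPLAY_MODE_NONE and one test suffices */
    if (events_enabled) { ... }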

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>

---
 replay/replay-events.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/replay/replay-events.c b/replay/replay-events.c
index 768b505..e858254 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -67,7 +67,9 @@ static void replay_run_event(Event *event)
 
 void replay_enable_events(void)
 {
-    events_enabled = true;
+    if (replay_mode != REPLAY_MODE_NONE) {
+        events_enabled = true;
+    }
 }
 
 bool replay_has_events(void)
@@ -141,7 +143,7 @@ void replay_add_event(ReplayAsyncEventKind event_kind,
 
 void replay_bh_schedule_event(QEMUBH *bh)
 {
-    if (replay_mode != REPLAY_MODE_NONE && events_enabled) {
+    if (events_enabled) {
         uint64_t id = replay_get_current_step();
         replay_add_event(REPLAY_ASYNC_EVENT_BH, bh, NULL, id);
     } else {
@@ -161,7 +163,7 @@ void replay_add_input_sync_event(void)
 
 void replay_block_event(QEMUBH *bh, uint64_t id)
 {
-    if (replay_mode != REPLAY_MODE_NONE && events_enabled) {
+    if (events_enabled) {
         replay_add_event(REPLAY_ASYNC_EVENT_BLOCK, bh, NULL, id);
     } else {
         qemu_bh_schedule(bh);

^ permalink raw reply related	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2017-10-31 13:04 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-31 11:06 [Qemu-devel] [RFC PATCH 00/26] replay additions Pavel Dovgalyuk
2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 01/26] block: implement bdrv_snapshot_goto for blkreplay Pavel Dovgalyuk
2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 02/26] blkreplay: create temporary overlay for underlaying devices Pavel Dovgalyuk
2017-10-31 11:06 ` [Qemu-devel] [RFC PATCH 03/26] replay: disable default snapshot for record/replay Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 04/26] replay: fix processing async events Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 05/26] replay: fixed replay_enable_events Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 06/26] replay: fix save/load vm for non-empty queue Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 07/26] replay: added replay log format description Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 08/26] replay: make safe vmstop at record/replay Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 09/26] replay: save prior value of the host clock Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 10/26] icount: fixed saving/restoring of icount warp timers Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 11/26] target/arm/arm-powertctl: drop BQL assertions Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 12/26] cpus: push BQL lock to qemu_*_wait_io_event Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 13/26] cpus: only take BQL for sleeping threads Pavel Dovgalyuk
2017-10-31 11:07 ` [Qemu-devel] [RFC PATCH 14/26] replay/replay.c: bump REPLAY_VERSION again Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 15/26] replay/replay-internal.c: track holding of replay_lock Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 16/26] replay: make locking visible outside replay code Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 17/26] replay: push replay_mutex_lock up the call tree Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 18/26] cpu-exec: don't overwrite exception_index Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 19/26] cpu-exec: reset exit flag before calling cpu_exec_nocache Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 20/26] replay: don't destroy mutex at exit Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 21/26] replay: check return values of fwrite Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 22/26] scripts/qemu-gdb: add simple tcg lock status helper Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 23/26] util/qemu-thread-*: add qemu_lock, locked and unlock trace events Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 24/26] scripts/analyse-locks-simpletrace.py: script to analyse lock times Pavel Dovgalyuk
2017-10-31 11:08 ` [Qemu-devel] [RFC PATCH 25/26] scripts/replay-dump.py: replay log dumper Pavel Dovgalyuk
2017-10-31 11:09 ` [Qemu-devel] [RFC PATCH 26/26] scripts/qemu-gdb/timers.py: new helper to dump timer state Pavel Dovgalyuk
2017-10-31 12:48 ` [Qemu-devel] [RFC PATCH 00/26] replay additions no-reply
2017-10-31 11:24 Pavel Dovgalyuk
2017-10-31 11:25 ` [Qemu-devel] [RFC PATCH 05/26] replay: fixed replay_enable_events Pavel Dovgalyuk
