* [PULL 01/21] runstate: add runstate_get()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 02/21] migration: never fail in global_state_store() Juan Quintela
` (20 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel
Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela,
Vladimir Sementsov-Ogievskiy
From: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
It's necessary to restore the state after a failed or cancelled
migration in a further commit.
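For readers following along, here is a self-contained sketch of the accessor pattern this patch adds (the enum values are an invented subset, not QEMU's full RunState): a plain getter is enough to let migration capture the state before stopping the VM and restore it later on failure.

```c
#include <assert.h>

/* Invented subset of QEMU's RunState, for illustration only. */
typedef enum {
    RUN_STATE_RUNNING,
    RUN_STATE_PAUSED,
    RUN_STATE_FINISH_MIGRATE,
} RunState;

static RunState current_run_state = RUN_STATE_RUNNING;

void runstate_set(RunState new_state)
{
    current_run_state = new_state;
}

/* The new getter: callers can snapshot the state and restore it after a
 * failed or cancelled migration. */
RunState runstate_get(void)
{
    return current_run_state;
}
```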
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230517123752.21615-2-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
include/sysemu/runstate.h | 1 +
softmmu/runstate.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/include/sysemu/runstate.h b/include/sysemu/runstate.h
index f3ed52548e..85f5d9a419 100644
--- a/include/sysemu/runstate.h
+++ b/include/sysemu/runstate.h
@@ -6,6 +6,7 @@
bool runstate_check(RunState state);
void runstate_set(RunState new_state);
+RunState runstate_get(void);
bool runstate_is_running(void);
bool runstate_needs_reset(void);
bool runstate_store(char *str, size_t size);
diff --git a/softmmu/runstate.c b/softmmu/runstate.c
index 2f2396c819..1e6f0bcecc 100644
--- a/softmmu/runstate.c
+++ b/softmmu/runstate.c
@@ -221,6 +221,11 @@ void runstate_set(RunState new_state)
current_run_state = new_state;
}
+RunState runstate_get(void)
+{
+ return current_run_state;
+}
+
bool runstate_is_running(void)
{
return runstate_check(RUN_STATE_RUNNING);
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PULL 02/21] migration: never fail in global_state_store()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
2023-05-30 18:25 ` [PULL 01/21] runstate: add runstate_get() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 03/21] runstate: drop unused runstate_store() Juan Quintela
` (19 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel
Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela,
Vladimir Sementsov-Ogievskiy
From: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Actually, global_state_store() can never fail. Let's get rid of the
extra error paths.
To make things clear, use the new runstate_get() and take the same
approach for global_state_store() and global_state_store_running().
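A self-contained sketch (buffer size and names invented, mimicking strpadcpy() from QEMU's cutils) of why the store can never fail: the state name is copied into a fixed buffer and the remainder is zero-padded, so the only precondition is the length assertion and there is no error to propagate.

```c
#include <assert.h>
#include <string.h>

#define RUNSTATE_LEN 100  /* invented size for the sketch */

static char runstate_buf[RUNSTATE_LEN];

/* Copy the state name and pad the rest with '\0', like strpadcpy(). */
static void store_runstate(const char *state_str)
{
    size_t len = strlen(state_str);
    assert(len < sizeof(runstate_buf));
    memcpy(runstate_buf, state_str, len);
    memset(runstate_buf + len, 0, sizeof(runstate_buf) - len);
}
```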
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230517123752.21615-3-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
include/migration/global_state.h | 2 +-
migration/global_state.c | 29 +++++++++++-----------
migration/migration.c | 41 +++++++++++++++-----------------
migration/savevm.c | 6 +----
4 files changed, 35 insertions(+), 43 deletions(-)
diff --git a/include/migration/global_state.h b/include/migration/global_state.h
index 945eb35d5b..d7c2cd3216 100644
--- a/include/migration/global_state.h
+++ b/include/migration/global_state.h
@@ -16,7 +16,7 @@
#include "qapi/qapi-types-run-state.h"
void register_global_state(void);
-int global_state_store(void);
+void global_state_store(void);
void global_state_store_running(void);
bool global_state_received(void);
RunState global_state_get_runstate(void);
diff --git a/migration/global_state.c b/migration/global_state.c
index a33947ca32..4e2a9d8ec0 100644
--- a/migration/global_state.c
+++ b/migration/global_state.c
@@ -29,23 +29,22 @@ typedef struct {
static GlobalState global_state;
-int global_state_store(void)
+static void global_state_do_store(RunState state)
{
- if (!runstate_store((char *)global_state.runstate,
- sizeof(global_state.runstate))) {
- error_report("runstate name too big: %s", global_state.runstate);
- trace_migrate_state_too_big();
- return -EINVAL;
- }
- return 0;
-}
-
-void global_state_store_running(void)
-{
- const char *state = RunState_str(RUN_STATE_RUNNING);
- assert(strlen(state) < sizeof(global_state.runstate));
+ const char *state_str = RunState_str(state);
+ assert(strlen(state_str) < sizeof(global_state.runstate));
strpadcpy((char *)global_state.runstate, sizeof(global_state.runstate),
- state, '\0');
+ state_str, '\0');
+}
+
+void global_state_store(void)
+{
+ global_state_do_store(runstate_get());
+}
+
+void global_state_store_running(void)
+{
+ global_state_do_store(RUN_STATE_RUNNING);
}
bool global_state_received(void)
diff --git a/migration/migration.c b/migration/migration.c
index 5de7f734b9..c75d5aa479 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2288,27 +2288,26 @@ static void migration_completion(MigrationState *s)
s->downtime_start = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
s->vm_was_running = runstate_is_running();
- ret = global_state_store();
+ global_state_store();
- if (!ret) {
- ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
- trace_migration_completion_vm_stop(ret);
- if (ret >= 0) {
- ret = migration_maybe_pause(s, &current_active_state,
- MIGRATION_STATUS_DEVICE);
- }
- if (ret >= 0) {
- /*
- * Inactivate disks except in COLO, and track that we
- * have done so in order to remember to reactivate
- * them if migration fails or is cancelled.
- */
- s->block_inactive = !migrate_colo();
- migration_rate_set(RATE_LIMIT_DISABLED);
- ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
- s->block_inactive);
- }
+ ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
+ trace_migration_completion_vm_stop(ret);
+ if (ret >= 0) {
+ ret = migration_maybe_pause(s, &current_active_state,
+ MIGRATION_STATUS_DEVICE);
}
+ if (ret >= 0) {
+ /*
+ * Inactivate disks except in COLO, and track that we
+ * have done so in order to remember to reactivate
+ * them if migration fails or is cancelled.
+ */
+ s->block_inactive = !migrate_colo();
+ migration_rate_set(RATE_LIMIT_DISABLED);
+ ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
+ s->block_inactive);
+ }
+
qemu_mutex_unlock_iothread();
if (ret < 0) {
@@ -3088,9 +3087,7 @@ static void *bg_migration_thread(void *opaque)
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
s->vm_was_running = runstate_is_running();
- if (global_state_store()) {
- goto fail;
- }
+ global_state_store();
/* Forcibly stop VM before saving state of vCPUs and devices */
if (vm_stop_force_state(RUN_STATE_PAUSED)) {
goto fail;
diff --git a/migration/savevm.c b/migration/savevm.c
index 03795ce8dc..bc284087f9 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2919,11 +2919,7 @@ bool save_snapshot(const char *name, bool overwrite, const char *vmstate,
saved_vm_running = runstate_is_running();
- ret = global_state_store();
- if (ret) {
- error_setg(errp, "Error saving global state");
- return false;
- }
+ global_state_store();
vm_stop(RUN_STATE_SAVE_VM);
bdrv_drain_all_begin();
--
2.40.1
* [PULL 03/21] runstate: drop unused runstate_store()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
2023-05-30 18:25 ` [PULL 01/21] runstate: add runstate_get() Juan Quintela
2023-05-30 18:25 ` [PULL 02/21] migration: never fail in global_state_store() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 04/21] migration: switch from .vm_was_running to .vm_old_state Juan Quintela
` (18 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel
Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela,
Vladimir Sementsov-Ogievskiy
From: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
The function is unused since the previous commit. Drop it.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230517123752.21615-4-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
include/sysemu/runstate.h | 1 -
softmmu/runstate.c | 12 ------------
2 files changed, 13 deletions(-)
diff --git a/include/sysemu/runstate.h b/include/sysemu/runstate.h
index 85f5d9a419..7beb29c2e2 100644
--- a/include/sysemu/runstate.h
+++ b/include/sysemu/runstate.h
@@ -9,7 +9,6 @@ void runstate_set(RunState new_state);
RunState runstate_get(void);
bool runstate_is_running(void);
bool runstate_needs_reset(void);
-bool runstate_store(char *str, size_t size);
typedef void VMChangeStateHandler(void *opaque, bool running, RunState state);
diff --git a/softmmu/runstate.c b/softmmu/runstate.c
index 1e6f0bcecc..0370230a5e 100644
--- a/softmmu/runstate.c
+++ b/softmmu/runstate.c
@@ -175,18 +175,6 @@ bool runstate_check(RunState state)
return current_run_state == state;
}
-bool runstate_store(char *str, size_t size)
-{
- const char *state = RunState_str(current_run_state);
- size_t len = strlen(state) + 1;
-
- if (len > size) {
- return false;
- }
- memcpy(str, state, len);
- return true;
-}
-
static void runstate_init(void)
{
const RunStateTransition *p;
--
2.40.1
* [PULL 04/21] migration: switch from .vm_was_running to .vm_old_state
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (2 preceding siblings ...)
2023-05-30 18:25 ` [PULL 03/21] runstate: drop unused runstate_store() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 05/21] migration: restore vmstate on migration failure Juan Quintela
` (17 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel
Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela,
Vladimir Sementsov-Ogievskiy
From: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
No logic change here, only refactoring. This is preparation for the next
commit, where we finally restore the stopped VM state on migration
failure or cancellation.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230517123752.21615-5-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.h | 9 ++++++---
migration/migration.c | 11 ++++++-----
2 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/migration/migration.h b/migration/migration.h
index 48a46123a0..30c3e97635 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -25,6 +25,7 @@
#include "net/announce.h"
#include "qom/object.h"
#include "postcopy-ram.h"
+#include "sysemu/runstate.h"
struct PostcopyBlocktimeContext;
@@ -317,12 +318,14 @@ struct MigrationState {
int64_t expected_downtime;
bool capabilities[MIGRATION_CAPABILITY__MAX];
int64_t setup_time;
+
/*
- * Whether guest was running when we enter the completion stage.
+ * State before stopping the vm by vm_stop_force_state().
* If migration is interrupted by any reason, we need to continue
- * running the guest on source.
+ * running the guest on source if it was running or restore its stopped
+ * state.
*/
- bool vm_was_running;
+ RunState vm_old_state;
/* Flag set once the migration has been asked to enter postcopy */
bool start_postcopy;
diff --git a/migration/migration.c b/migration/migration.c
index c75d5aa479..033162cda0 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1402,7 +1402,7 @@ void migrate_init(MigrationState *s)
s->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
s->total_time = 0;
- s->vm_was_running = false;
+ s->vm_old_state = -1;
s->iteration_initial_bytes = 0;
s->threshold_size = 0;
}
@@ -2287,7 +2287,8 @@ static void migration_completion(MigrationState *s)
qemu_mutex_lock_iothread();
s->downtime_start = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
- s->vm_was_running = runstate_is_running();
+
+ s->vm_old_state = runstate_get();
global_state_store();
ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
@@ -2760,12 +2761,12 @@ static void migration_iteration_finish(MigrationState *s)
case MIGRATION_STATUS_COLO:
assert(migrate_colo());
migrate_start_colo_process(s);
- s->vm_was_running = true;
+ s->vm_old_state = RUN_STATE_RUNNING;
/* Fallthrough */
case MIGRATION_STATUS_FAILED:
case MIGRATION_STATUS_CANCELLED:
case MIGRATION_STATUS_CANCELLING:
- if (s->vm_was_running) {
+ if (s->vm_old_state == RUN_STATE_RUNNING) {
if (!runstate_check(RUN_STATE_SHUTDOWN)) {
vm_start();
}
@@ -3085,7 +3086,7 @@ static void *bg_migration_thread(void *opaque)
* transition in vm_stop_force_state() we need to wakeup it up.
*/
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
- s->vm_was_running = runstate_is_running();
+ s->vm_old_state = runstate_get();
global_state_store();
/* Forcibly stop VM before saving state of vCPUs and devices */
--
2.40.1
* [PULL 05/21] migration: restore vmstate on migration failure
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (3 preceding siblings ...)
2023-05-30 18:25 ` [PULL 04/21] migration: switch from .vm_was_running to .vm_old_state Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 06/21] migration: Don't abuse qemu_file transferred for RDMA Juan Quintela
` (16 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel
Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela,
Vladimir Sementsov-Ogievskiy
From: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
1. Otherwise a failed migration just drops the guest-panicked state, which
is not good for management software.
2. We do keep different paused states like guest-panicked during
migration, with the help of the global_state state.
3. We do restore the running state on the source when migration is
cancelled or failed.
4. "postmigrate" state is documented as "guest is paused following a
successful 'migrate'", so originally it's only for the successful path
and we never documented the current behavior.
Let's restore paused states like guest-panicked in case of cancellation
or failure too. Allow the same transitions as for the inmigrate state.
This commit changes the behavior that was introduced by commit
42da5550d6 "migration: set state to post-migrate on failure" and
provides a slightly different fix for the related
https://bugzilla.redhat.com/show_bug.cgi?id=1355683
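For illustration, the runstate transition table check boils down to a linear lookup of (from, to) pairs; this tiny invented subset (not QEMU's full table) shows what adding rows like FINISH_MIGRATE -> GUEST_PANICKED buys: the old state becomes a legal target again after a failed migration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Invented subset of run states, for the sketch only. */
typedef enum {
    S_FINISH_MIGRATE,
    S_POSTMIGRATE,
    S_GUEST_PANICKED,
    S_DEBUG,
} State;

typedef struct { State from, to; } Transition;

static const Transition allowed[] = {
    { S_FINISH_MIGRATE, S_POSTMIGRATE },
    { S_FINISH_MIGRATE, S_GUEST_PANICKED }, /* row added by this patch */
};

/* A transition is legal only if the (from, to) pair is listed. */
bool transition_allowed(State from, State to)
{
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
        if (allowed[i].from == from && allowed[i].to == to) {
            return true;
        }
    }
    return false;
}
```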
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230517123752.21615-6-vsementsov@yandex-team.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 2 +-
softmmu/runstate.c | 8 +++++++-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 033162cda0..7c3425c6fe 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2772,7 +2772,7 @@ static void migration_iteration_finish(MigrationState *s)
}
} else {
if (runstate_check(RUN_STATE_FINISH_MIGRATE)) {
- runstate_set(RUN_STATE_POSTMIGRATE);
+ runstate_set(s->vm_old_state);
}
}
break;
diff --git a/softmmu/runstate.c b/softmmu/runstate.c
index 0370230a5e..1957caf73f 100644
--- a/softmmu/runstate.c
+++ b/softmmu/runstate.c
@@ -121,7 +121,13 @@ static const RunStateTransition runstate_transitions_def[] = {
{ RUN_STATE_FINISH_MIGRATE, RUN_STATE_PAUSED },
{ RUN_STATE_FINISH_MIGRATE, RUN_STATE_POSTMIGRATE },
{ RUN_STATE_FINISH_MIGRATE, RUN_STATE_PRELAUNCH },
- { RUN_STATE_FINISH_MIGRATE, RUN_STATE_COLO},
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_COLO },
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_INTERNAL_ERROR },
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_IO_ERROR },
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_SHUTDOWN },
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_SUSPENDED },
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_WATCHDOG },
+ { RUN_STATE_FINISH_MIGRATE, RUN_STATE_GUEST_PANICKED },
{ RUN_STATE_RESTORE_VM, RUN_STATE_RUNNING },
{ RUN_STATE_RESTORE_VM, RUN_STATE_PRELAUNCH },
--
2.40.1
* [PULL 06/21] migration: Don't abuse qemu_file transferred for RDMA
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (4 preceding siblings ...)
2023-05-30 18:25 ` [PULL 05/21] migration: restore vmstate on migration failure Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 07/21] migration/RDMA: It is accounting for zero/normal pages in two places Juan Quintela
` (15 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Just create a variable for it, the same way that multifd does. This
way it is safe to use from other threads, etc.
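A sketch of why a dedicated counter is thread-safe (the names mimic QEMU's Stat64 but are reimplemented here with C11 atomics, purely for illustration): any thread can bump the counter without touching the QEMUFile, and the total is just the sum of the per-channel counters.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Minimal stand-in for QEMU's Stat64. */
typedef struct {
    _Atomic uint64_t value;
} Stat64;

static void stat64_add(Stat64 *s, uint64_t v)
{
    atomic_fetch_add_explicit(&s->value, v, memory_order_relaxed);
}

static uint64_t stat64_get(Stat64 *s)
{
    return atomic_load_explicit(&s->value, memory_order_relaxed);
}

static Stat64 rdma_bytes;
static Stat64 multifd_bytes;

/* Total transferred = qemu_file bytes plus the per-channel counters. */
static uint64_t transferred(uint64_t qemu_file_bytes)
{
    return qemu_file_bytes + stat64_get(&multifd_bytes)
                           + stat64_get(&rdma_bytes);
}
```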
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-11-quintela@redhat.com>
---
migration/migration-stats.h | 4 ++++
migration/migration-stats.c | 5 +++--
migration/rdma.c | 22 ++++++++++++++++++++--
migration/trace-events | 2 +-
4 files changed, 28 insertions(+), 5 deletions(-)
diff --git a/migration/migration-stats.h b/migration/migration-stats.h
index ac2260e987..2358caad63 100644
--- a/migration/migration-stats.h
+++ b/migration/migration-stats.h
@@ -89,6 +89,10 @@ typedef struct {
* Maximum amount of data we can send in a cycle.
*/
Stat64 rate_limit_max;
+ /*
+ * Number of bytes sent through RDMA.
+ */
+ Stat64 rdma_bytes;
/*
* Total number of bytes transferred.
*/
diff --git a/migration/migration-stats.c b/migration/migration-stats.c
index f98c8260be..79eea8d865 100644
--- a/migration/migration-stats.c
+++ b/migration/migration-stats.c
@@ -61,8 +61,9 @@ void migration_rate_reset(QEMUFile *f)
uint64_t migration_transferred_bytes(QEMUFile *f)
{
uint64_t multifd = stat64_get(&mig_stats.multifd_bytes);
+ uint64_t rdma = stat64_get(&mig_stats.rdma_bytes);
uint64_t qemu_file = qemu_file_transferred(f);
- trace_migration_transferred_bytes(qemu_file, multifd);
- return qemu_file + multifd;
+ trace_migration_transferred_bytes(qemu_file, multifd, rdma);
+ return qemu_file + multifd + rdma;
}
diff --git a/migration/rdma.c b/migration/rdma.c
index 2e4dcff1c9..074456f9df 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -2122,9 +2122,18 @@ retry:
return -EIO;
}
+ /*
+ * TODO: Here we are sending something, but we are not
+ * accounting for anything transferred. The following is wrong:
+ *
+ * stat64_add(&mig_stats.rdma_bytes, sge.length);
+ *
+ * because we are using some kind of compression. I
+ * would think that head.len would be the closest
+ * thing to a correct value.
+ */
stat64_add(&mig_stats.zero_pages,
sge.length / qemu_target_page_size());
-
return 1;
}
@@ -2232,8 +2241,17 @@ retry:
set_bit(chunk, block->transit_bitmap);
stat64_add(&mig_stats.normal_pages, sge.length / qemu_target_page_size());
+ /*
+ * We are adding to transferred the amount of data written, but no
+ * overhead at all. I will assume that RDMA is magical and doesn't
+ * need to transfer (at least) the addresses where it wants to
+ * write the pages. Here it looks like it should be something
+ * like:
+ * sizeof(send_wr) + sge.length
+ * but this being RDMA, who knows.
+ */
+ stat64_add(&mig_stats.rdma_bytes, sge.length);
ram_transferred_add(sge.length);
- qemu_file_credit_transfer(f, sge.length);
rdma->total_writes++;
return 0;
diff --git a/migration/trace-events b/migration/trace-events
index cdaef7a1ea..54ae5653fd 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -187,7 +187,7 @@ process_incoming_migration_co_postcopy_end_main(void) ""
postcopy_preempt_enabled(bool value) "%d"
# migration-stats
-migration_transferred_bytes(uint64_t qemu_file, uint64_t multifd) "qemu_file %" PRIu64 " multifd %" PRIu64
+migration_transferred_bytes(uint64_t qemu_file, uint64_t multifd, uint64_t rdma) "qemu_file %" PRIu64 " multifd %" PRIu64 " RDMA %" PRIu64
# channel.c
migration_set_incoming_channel(void *ioc, const char *ioctype) "ioc=%p ioctype=%s"
--
2.40.1
* [PULL 07/21] migration/RDMA: It is accounting for zero/normal pages in two places
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (5 preceding siblings ...)
2023-05-30 18:25 ` [PULL 06/21] migration: Don't abuse qemu_file transferred for RDMA Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 08/21] migration/rdma: Remove QEMUFile parameter when not used Juan Quintela
` (14 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Remove the one in control_save_page().
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-12-quintela@redhat.com>
---
migration/ram.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 88a6c82e63..40b8f9630d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1162,13 +1162,6 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
if (ret == RAM_SAVE_CONTROL_DELAYED) {
return true;
}
-
- if (bytes_xmit > 0) {
- stat64_add(&mig_stats.normal_pages, 1);
- } else if (bytes_xmit == 0) {
- stat64_add(&mig_stats.zero_pages, 1);
- }
-
return true;
}
--
2.40.1
* [PULL 08/21] migration/rdma: Remove QEMUFile parameter when not used
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (6 preceding siblings ...)
2023-05-30 18:25 ` [PULL 07/21] migration/RDMA: It is accounting for zero/normal pages in two places Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 09/21] migration/rdma: Don't use imaginary transfers Juan Quintela
` (13 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-13-quintela@redhat.com>
---
migration/rdma.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/migration/rdma.c b/migration/rdma.c
index 074456f9df..416dec00a2 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -2027,7 +2027,7 @@ static int qemu_rdma_exchange_recv(RDMAContext *rdma, RDMAControlHeader *head,
* If we're using dynamic registration on the dest-side, we have to
* send a registration command first.
*/
-static int qemu_rdma_write_one(QEMUFile *f, RDMAContext *rdma,
+static int qemu_rdma_write_one(RDMAContext *rdma,
int current_index, uint64_t current_addr,
uint64_t length)
{
@@ -2263,7 +2263,7 @@ retry:
* We support sending out multiple chunks at the same time.
* Not all of them need to get signaled in the completion queue.
*/
-static int qemu_rdma_write_flush(QEMUFile *f, RDMAContext *rdma)
+static int qemu_rdma_write_flush(RDMAContext *rdma)
{
int ret;
@@ -2271,7 +2271,7 @@ static int qemu_rdma_write_flush(QEMUFile *f, RDMAContext *rdma)
return 0;
}
- ret = qemu_rdma_write_one(f, rdma,
+ ret = qemu_rdma_write_one(rdma,
rdma->current_index, rdma->current_addr, rdma->current_length);
if (ret < 0) {
@@ -2344,7 +2344,7 @@ static inline int qemu_rdma_buffer_mergable(RDMAContext *rdma,
* and only require that a batch gets acknowledged in the completion
* queue instead of each individual chunk.
*/
-static int qemu_rdma_write(QEMUFile *f, RDMAContext *rdma,
+static int qemu_rdma_write(RDMAContext *rdma,
uint64_t block_offset, uint64_t offset,
uint64_t len)
{
@@ -2355,7 +2355,7 @@ static int qemu_rdma_write(QEMUFile *f, RDMAContext *rdma,
/* If we cannot merge it, we flush the current buffer first. */
if (!qemu_rdma_buffer_mergable(rdma, current_addr, len)) {
- ret = qemu_rdma_write_flush(f, rdma);
+ ret = qemu_rdma_write_flush(rdma);
if (ret) {
return ret;
}
@@ -2377,7 +2377,7 @@ static int qemu_rdma_write(QEMUFile *f, RDMAContext *rdma,
/* flush it if buffer is too large */
if (rdma->current_length >= RDMA_MERGE_MAX) {
- return qemu_rdma_write_flush(f, rdma);
+ return qemu_rdma_write_flush(rdma);
}
return 0;
@@ -2798,7 +2798,6 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
Error **errp)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
- QEMUFile *f = rioc->file;
RDMAContext *rdma;
int ret;
ssize_t done = 0;
@@ -2819,7 +2818,7 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
* Push out any writes that
* we're queued up for VM's ram.
*/
- ret = qemu_rdma_write_flush(f, rdma);
+ ret = qemu_rdma_write_flush(rdma);
if (ret < 0) {
rdma->error_state = ret;
error_setg(errp, "qemu_rdma_write_flush returned %d", ret);
@@ -2958,11 +2957,11 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc,
/*
* Block until all the outstanding chunks have been delivered by the hardware.
*/
-static int qemu_rdma_drain_cq(QEMUFile *f, RDMAContext *rdma)
+static int qemu_rdma_drain_cq(RDMAContext *rdma)
{
int ret;
- if (qemu_rdma_write_flush(f, rdma) < 0) {
+ if (qemu_rdma_write_flush(rdma) < 0) {
return -EIO;
}
@@ -3272,7 +3271,7 @@ static size_t qemu_rdma_save_page(QEMUFile *f,
* is full, or the page doesn't belong to the current chunk,
* an actual RDMA write will occur and a new chunk will be formed.
*/
- ret = qemu_rdma_write(f, rdma, block_offset, offset, size);
+ ret = qemu_rdma_write(rdma, block_offset, offset, size);
if (ret < 0) {
error_report("rdma migration: write error! %d", ret);
goto err;
@@ -3927,7 +3926,7 @@ static int qemu_rdma_registration_stop(QEMUFile *f,
CHECK_ERROR_STATE();
qemu_fflush(f);
- ret = qemu_rdma_drain_cq(f, rdma);
+ ret = qemu_rdma_drain_cq(rdma);
if (ret < 0) {
goto err;
--
2.40.1
* [PULL 09/21] migration/rdma: Don't use imaginary transfers
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (7 preceding siblings ...)
2023-05-30 18:25 ` [PULL 08/21] migration/rdma: Remove QEMUFile parameter when not used Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 10/21] migration: Remove unused qemu_file_credit_transfer() Juan Quintela
` (12 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
The RDMA protocol is completely asynchronous, so qemu_rdma_save_page()
"invents" that a byte has been transferred, and then calls
qemu_file_credit_transfer() and ram_transferred_add() with that byte.
Just remove those calls, as nothing has been sent.
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-14-quintela@redhat.com>
---
migration/qemu-file.c | 5 +----
migration/ram.c | 1 -
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index acc282654a..23a21e2331 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -346,13 +346,10 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
if (ret != RAM_SAVE_CONTROL_DELAYED &&
ret != RAM_SAVE_CONTROL_NOT_SUPP) {
- if (bytes_sent && *bytes_sent > 0) {
- qemu_file_credit_transfer(f, *bytes_sent);
- } else if (ret < 0) {
+ if (ret < 0) {
qemu_file_set_error(f, ret);
}
}
-
return ret;
}
diff --git a/migration/ram.c b/migration/ram.c
index 40b8f9630d..da0dfd7072 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1155,7 +1155,6 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
}
if (bytes_xmit) {
- ram_transferred_add(bytes_xmit);
*pages = 1;
}
--
2.40.1
* [PULL 10/21] migration: Remove unused qemu_file_credit_transfer()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (8 preceding siblings ...)
2023-05-30 18:25 ` [PULL 09/21] migration/rdma: Don't use imaginary transfers Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 11/21] migration/rdma: Simplify the function that saves a page Juan Quintela
` (11 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
After this change, nothing abuses QEMUFile to account for data
transferred during migration.
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-15-quintela@redhat.com>
---
migration/qemu-file.h | 8 --------
migration/qemu-file.c | 5 -----
2 files changed, 13 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index e649718492..37f42315c7 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -122,14 +122,6 @@ bool qemu_file_buffer_empty(QEMUFile *file);
*/
int coroutine_mixed_fn qemu_peek_byte(QEMUFile *f, int offset);
void qemu_file_skip(QEMUFile *f, int size);
-/*
- * qemu_file_credit_transfer:
- *
- * Report on a number of bytes that have been transferred
- * out of band from the main file object I/O methods. This
- * accounting information tracks the total migration traffic.
- */
-void qemu_file_credit_transfer(QEMUFile *f, size_t size);
int qemu_file_get_error_obj(QEMUFile *f, Error **errp);
int qemu_file_get_error_obj_any(QEMUFile *f1, QEMUFile *f2, Error **errp);
void qemu_file_set_error_obj(QEMUFile *f, int ret, Error *err);
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 23a21e2331..72e130631d 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -411,11 +411,6 @@ static ssize_t coroutine_mixed_fn qemu_fill_buffer(QEMUFile *f)
return len;
}
-void qemu_file_credit_transfer(QEMUFile *f, size_t size)
-{
- f->total_transferred += size;
-}
-
/** Closes the file
*
* Returns negative error value if any error happened on previous operations or
--
2.40.1
* [PULL 11/21] migration/rdma: Simplify the function that saves a page
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (9 preceding siblings ...)
2023-05-30 18:25 ` [PULL 10/21] migration: Remove unused qemu_file_credit_transfer() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 12/21] migration: Create migrate_rdma() Juan Quintela
` (10 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
When we send a page through the QEMUFile hooks (RDMA) there are three
possibilities:
- We are not using RDMA. It returns RAM_SAVE_CONTROL_NOT_SUPP and
control_save_page() returns false to let anything else proceed.
- There is an error while we are using RDMA. Then we return a negative
value, and control_save_page() needs to return true.
- Everything goes well and RDMA starts sending the page
asynchronously. It returns RAM_SAVE_CONTROL_DELAYED and we need to
return 1 for ram_save_page_legacy.
Clear?
I know, I know, the interface is as bad as it gets. I think that now
it is a bit clearer, but this needs to be done some other way.
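To make the three cases concrete, here is a condensed sketch of the caller-side logic. The RAM_SAVE_CONTROL_* values mirror the ones in qemu-file.h; control_save_page()'s real signature takes more arguments, so this reduced form is for illustration only.

```c
#include <assert.h>
#include <stdbool.h>

#define RAM_SAVE_CONTROL_NOT_SUPP -1000
#define RAM_SAVE_CONTROL_DELAYED  -2000

/* Condensed: hook_ret is what the RDMA save-page hook returned. */
static bool control_save_page(int hook_ret, int *pages)
{
    if (hook_ret == RAM_SAVE_CONTROL_NOT_SUPP) {
        return false;  /* no RDMA: let the normal path handle the page */
    }
    if (hook_ret == RAM_SAVE_CONTROL_DELAYED) {
        *pages = 1;    /* page queued asynchronously: report it as sent */
    }
    /* Any other (negative) value is an RDMA error; *pages is left
     * untouched so the caller sees the failure. */
    return true;
}
```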
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-16-quintela@redhat.com>
---
migration/qemu-file.h | 14 ++++++--------
migration/qemu-file.c | 12 ++++++------
migration/ram.c | 10 +++-------
migration/rdma.c | 19 +++----------------
4 files changed, 18 insertions(+), 37 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 37f42315c7..ed77996201 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -49,11 +49,10 @@ typedef int (QEMURamHookFunc)(QEMUFile *f, uint64_t flags, void *data);
* This function allows override of where the RAM page
* is saved (such as RDMA, for example.)
*/
-typedef size_t (QEMURamSaveFunc)(QEMUFile *f,
- ram_addr_t block_offset,
- ram_addr_t offset,
- size_t size,
- uint64_t *bytes_sent);
+typedef int (QEMURamSaveFunc)(QEMUFile *f,
+ ram_addr_t block_offset,
+ ram_addr_t offset,
+ size_t size);
typedef struct QEMUFileHooks {
QEMURamHookFunc *before_ram_iterate;
@@ -146,9 +145,8 @@ void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data);
#define RAM_SAVE_CONTROL_NOT_SUPP -1000
#define RAM_SAVE_CONTROL_DELAYED -2000
-size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
- ram_addr_t offset, size_t size,
- uint64_t *bytes_sent);
+int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size);
QIOChannel *qemu_file_get_ioc(QEMUFile *file);
#endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 72e130631d..32ef5e9651 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -336,14 +336,14 @@ void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data)
}
}
-size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
- ram_addr_t offset, size_t size,
- uint64_t *bytes_sent)
+int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size)
{
if (f->hooks && f->hooks->save_page) {
- int ret = f->hooks->save_page(f, block_offset,
- offset, size, bytes_sent);
-
+ int ret = f->hooks->save_page(f, block_offset, offset, size);
+ /*
+ * RAM_SAVE_CONTROL_* are negative values
+ */
if (ret != RAM_SAVE_CONTROL_DELAYED &&
ret != RAM_SAVE_CONTROL_NOT_SUPP) {
if (ret < 0) {
diff --git a/migration/ram.c b/migration/ram.c
index da0dfd7072..eab0cb2710 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1144,23 +1144,19 @@ static int save_zero_page(PageSearchStatus *pss, QEMUFile *f, RAMBlock *block,
static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
ram_addr_t offset, int *pages)
{
- uint64_t bytes_xmit = 0;
int ret;
- *pages = -1;
ret = ram_control_save_page(pss->pss_channel, block->offset, offset,
- TARGET_PAGE_SIZE, &bytes_xmit);
+ TARGET_PAGE_SIZE);
if (ret == RAM_SAVE_CONTROL_NOT_SUPP) {
return false;
}
- if (bytes_xmit) {
- *pages = 1;
- }
-
if (ret == RAM_SAVE_CONTROL_DELAYED) {
+ *pages = 1;
return true;
}
+ *pages = ret;
return true;
}
diff --git a/migration/rdma.c b/migration/rdma.c
index 416dec00a2..12d3c23fdc 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3239,13 +3239,12 @@ qio_channel_rdma_shutdown(QIOChannel *ioc,
*
* @size : Number of bytes to transfer
*
- * @bytes_sent : User-specificed pointer to indicate how many bytes were
+ * @pages_sent : User-specificed pointer to indicate how many pages were
* sent. Usually, this will not be more than a few bytes of
* the protocol because most transfers are sent asynchronously.
*/
-static size_t qemu_rdma_save_page(QEMUFile *f,
- ram_addr_t block_offset, ram_addr_t offset,
- size_t size, uint64_t *bytes_sent)
+static int qemu_rdma_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
RDMAContext *rdma;
@@ -3277,18 +3276,6 @@ static size_t qemu_rdma_save_page(QEMUFile *f,
goto err;
}
- /*
- * We always return 1 bytes because the RDMA
- * protocol is completely asynchronous. We do not yet know
- * whether an identified chunk is zero or not because we're
- * waiting for other pages to potentially be merged with
- * the current chunk. So, we have to call qemu_update_position()
- * later on when the actual write occurs.
- */
- if (bytes_sent) {
- *bytes_sent = 1;
- }
-
/*
* Drain the Completion Queue if possible, but do not block,
* just poll.
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PULL 12/21] migration: Create migrate_rdma()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (10 preceding siblings ...)
2023-05-30 18:25 ` [PULL 11/21] migration/rdma: Simplify the function that saves a page Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 13/21] migration/rdma: Unfold ram_control_before_iterate() Juan Quintela
` (9 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Add a helper that says whether we are doing a migration over RDMA.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-2-quintela@redhat.com>
---
migration/migration.h | 3 +++
migration/options.h | 1 +
migration/migration.c | 1 +
migration/options.c | 7 +++++++
migration/rdma.c | 4 +++-
5 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/migration/migration.h b/migration/migration.h
index 30c3e97635..7359572012 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -440,6 +440,9 @@ struct MigrationState {
/* QEMU_VM_VMDESCRIPTION content filled for all non-iterable devices. */
JSONWriter *vmdesc;
+
+ /* Is this a rdma migration */
+ bool rdma_migration;
};
void migrate_set_state(int *state, int old_state, int new_state);
diff --git a/migration/options.h b/migration/options.h
index 45991af3c2..33a6bae93c 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -54,6 +54,7 @@ bool migrate_zero_copy_send(void);
bool migrate_multifd_flush_after_each_section(void);
bool migrate_postcopy(void);
+bool migrate_rdma(void);
bool migrate_tls(void);
/* capabilities helpers */
diff --git a/migration/migration.c b/migration/migration.c
index 7c3425c6fe..31d30a2ac5 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1405,6 +1405,7 @@ void migrate_init(MigrationState *s)
s->vm_old_state = -1;
s->iteration_initial_bytes = 0;
s->threshold_size = 0;
+ s->rdma_migration = false;
}
int migrate_add_blocker_internal(Error *reason, Error **errp)
diff --git a/migration/options.c b/migration/options.c
index b62ab30cd5..4b99811197 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -350,6 +350,13 @@ bool migrate_postcopy(void)
return migrate_postcopy_ram() || migrate_dirty_bitmaps();
}
+bool migrate_rdma(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ return s->rdma_migration;
+}
+
bool migrate_tls(void)
{
MigrationState *s = migrate_get_current();
diff --git a/migration/rdma.c b/migration/rdma.c
index 12d3c23fdc..c11863e614 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4122,6 +4122,7 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp)
int ret;
RDMAContext *rdma;
Error *local_err = NULL;
+ MigrationState *s = migrate_get_current();
trace_rdma_start_incoming_migration();
@@ -4152,7 +4153,7 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp)
}
trace_rdma_start_incoming_migration_after_rdma_listen();
-
+ s->rdma_migration = true;
qemu_set_fd_handler(rdma->channel->fd, rdma_accept_incoming_migration,
NULL, (void *)(intptr_t)rdma);
return;
@@ -4228,6 +4229,7 @@ void rdma_start_outgoing_migration(void *opaque,
trace_rdma_start_outgoing_migration_after_rdma_connect();
+ s->rdma_migration = true;
s->to_dst_file = qemu_fopen_rdma(rdma, "wb");
migrate_fd_connect(s, NULL);
return;
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PULL 13/21] migration/rdma: Unfold ram_control_before_iterate()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (11 preceding siblings ...)
2023-05-30 18:25 ` [PULL 12/21] migration: Create migrate_rdma() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-06-01 8:57 ` Daniel P. Berrangé
2023-05-30 18:25 ` [PULL 14/21] migration/rdma: Unfold ram_control_after_iterate() Juan Quintela
` (8 subsequent siblings)
21 siblings, 1 reply; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
While at it:
- Remove the unused data parameter.
- Unfold it into its callers.
- Change all callers to call qemu_rdma_registration_start() directly.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-3-quintela@redhat.com>
---
migration/qemu-file.h | 2 --
migration/rdma.h | 7 +++++++
migration/qemu-file.c | 13 +------------
migration/ram.c | 16 +++++++++++++---
migration/rdma.c | 6 ++----
5 files changed, 23 insertions(+), 21 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index ed77996201..ced2202137 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -55,7 +55,6 @@ typedef int (QEMURamSaveFunc)(QEMUFile *f,
size_t size);
typedef struct QEMUFileHooks {
- QEMURamHookFunc *before_ram_iterate;
QEMURamHookFunc *after_ram_iterate;
QEMURamHookFunc *hook_ram_load;
QEMURamSaveFunc *save_page;
@@ -131,7 +130,6 @@ void qemu_fflush(QEMUFile *f);
void qemu_file_set_blocking(QEMUFile *f, bool block);
int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
-void ram_control_before_iterate(QEMUFile *f, uint64_t flags);
void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data);
diff --git a/migration/rdma.h b/migration/rdma.h
index de2ba09dc5..670c67a8cb 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -22,4 +22,11 @@ void rdma_start_outgoing_migration(void *opaque, const char *host_port,
void rdma_start_incoming_migration(const char *host_port, Error **errp);
+
+#ifdef CONFIG_RDMA
+int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
+#else
+static inline
+int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
+#endif
#endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 32ef5e9651..6243d6ac59 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -32,6 +32,7 @@
#include "trace.h"
#include "options.h"
#include "qapi/error.h"
+#include "rdma.h"
#define IO_BUF_SIZE 32768
#define MAX_IOV_SIZE MIN_CONST(IOV_MAX, 64)
@@ -302,18 +303,6 @@ void qemu_fflush(QEMUFile *f)
f->iovcnt = 0;
}
-void ram_control_before_iterate(QEMUFile *f, uint64_t flags)
-{
- int ret = 0;
-
- if (f->hooks && f->hooks->before_ram_iterate) {
- ret = f->hooks->before_ram_iterate(f, flags, NULL);
- if (ret < 0) {
- qemu_file_set_error(f, ret);
- }
- }
-}
-
void ram_control_after_iterate(QEMUFile *f, uint64_t flags)
{
int ret = 0;
diff --git a/migration/ram.c b/migration/ram.c
index eab0cb2710..b070278796 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -58,6 +58,7 @@
#include "qemu/iov.h"
#include "multifd.h"
#include "sysemu/runstate.h"
+#include "rdma.h"
#include "options.h"
#include "hw/boards.h" /* for machine_dump_guest_core() */
@@ -3012,7 +3013,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
}
}
- ram_control_before_iterate(f, RAM_CONTROL_SETUP);
+ ret = qemu_rdma_registration_start(f, RAM_CONTROL_SETUP);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
ram_control_after_iterate(f, RAM_CONTROL_SETUP);
migration_ops = g_malloc0(sizeof(MigrationOps));
@@ -3072,7 +3076,10 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
/* Read version before ram_list.blocks */
smp_rmb();
- ram_control_before_iterate(f, RAM_CONTROL_ROUND);
+ ret = qemu_rdma_registration_start(f, RAM_CONTROL_ROUND);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
i = 0;
@@ -3177,7 +3184,10 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
migration_bitmap_sync_precopy(rs, true);
}
- ram_control_before_iterate(f, RAM_CONTROL_FINISH);
+ ret = qemu_rdma_registration_start(f, RAM_CONTROL_FINISH);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
/* try transferring iterative blocks of memory */
diff --git a/migration/rdma.c b/migration/rdma.c
index c11863e614..6ca89ff090 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3863,13 +3863,12 @@ static int rdma_load_hook(QEMUFile *f, uint64_t flags, void *data)
}
}
-static int qemu_rdma_registration_start(QEMUFile *f,
- uint64_t flags, void *data)
+int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
RDMAContext *rdma;
- if (migration_in_postcopy()) {
+ if (!migrate_rdma () || migration_in_postcopy()) {
return 0;
}
@@ -4007,7 +4006,6 @@ static const QEMUFileHooks rdma_read_hooks = {
};
static const QEMUFileHooks rdma_write_hooks = {
- .before_ram_iterate = qemu_rdma_registration_start,
.after_ram_iterate = qemu_rdma_registration_stop,
.save_page = qemu_rdma_save_page,
};
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PULL 13/21] migration/rdma: Unfold ram_control_before_iterate()
2023-05-30 18:25 ` [PULL 13/21] migration/rdma: Unfold ram_control_before_iterate() Juan Quintela
@ 2023-06-01 8:57 ` Daniel P. Berrangé
0 siblings, 0 replies; 37+ messages in thread
From: Daniel P. Berrangé @ 2023-06-01 8:57 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
On Tue, May 30, 2023 at 08:25:23PM +0200, Juan Quintela wrote:
> While at it:
> - Remove the unused data parameter.
> - Unfold it into its callers.
> - Change all callers to call qemu_rdma_registration_start() directly.
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Message-Id: <20230509120700.78359-3-quintela@redhat.com>
> ---
> migration/qemu-file.h | 2 --
> migration/rdma.h | 7 +++++++
> migration/qemu-file.c | 13 +------------
> migration/ram.c | 16 +++++++++++++---
> migration/rdma.c | 6 ++----
> 5 files changed, 23 insertions(+), 21 deletions(-)
> diff --git a/migration/rdma.c b/migration/rdma.c
> index c11863e614..6ca89ff090 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -3863,13 +3863,12 @@ static int rdma_load_hook(QEMUFile *f, uint64_t flags, void *data)
> }
> }
>
> -static int qemu_rdma_registration_start(QEMUFile *f,
> - uint64_t flags, void *data)
> +int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
> {
> QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
This is cast the migration QIOChannel to a QIOChannelRDMA....
> RDMAContext *rdma;
>
> - if (migration_in_postcopy()) {
> + if (!migrate_rdma () || migration_in_postcopy()) {
....before the code checks whether this is an RDMA migration.
This looks unsafe.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PULL 14/21] migration/rdma: Unfold ram_control_after_iterate()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (12 preceding siblings ...)
2023-05-30 18:25 ` [PULL 13/21] migration/rdma: Unfold ram_control_before_iterate() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-06-01 8:58 ` Daniel P. Berrangé
2023-05-30 18:25 ` [PULL 15/21] migration/rdma: Remove all uses of RAM_CONTROL_HOOK Juan Quintela
` (7 subsequent siblings)
21 siblings, 1 reply; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
While at it:
- Remove the unused data parameter.
- Unfold it into its callers.
- Change all callers to call qemu_rdma_registration_stop() directly.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-4-quintela@redhat.com>
---
migration/qemu-file.h | 2 --
migration/rdma.h | 3 +++
migration/qemu-file.c | 12 ------------
migration/ram.c | 17 ++++++++++++++---
migration/rdma.c | 6 ++----
5 files changed, 19 insertions(+), 21 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index ced2202137..323af5682f 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -55,7 +55,6 @@ typedef int (QEMURamSaveFunc)(QEMUFile *f,
size_t size);
typedef struct QEMUFileHooks {
- QEMURamHookFunc *after_ram_iterate;
QEMURamHookFunc *hook_ram_load;
QEMURamSaveFunc *save_page;
} QEMUFileHooks;
@@ -130,7 +129,6 @@ void qemu_fflush(QEMUFile *f);
void qemu_file_set_blocking(QEMUFile *f, bool block);
int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
-void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data);
/* Whenever this is found in the data stream, the flags
diff --git a/migration/rdma.h b/migration/rdma.h
index 670c67a8cb..c13b94c782 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -25,8 +25,11 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp);
#ifdef CONFIG_RDMA
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
+int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags);
#else
static inline
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
+static inline
+int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
#endif
#endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 6243d6ac59..918df83035 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -303,18 +303,6 @@ void qemu_fflush(QEMUFile *f)
f->iovcnt = 0;
}
-void ram_control_after_iterate(QEMUFile *f, uint64_t flags)
-{
- int ret = 0;
-
- if (f->hooks && f->hooks->after_ram_iterate) {
- ret = f->hooks->after_ram_iterate(f, flags, NULL);
- if (ret < 0) {
- qemu_file_set_error(f, ret);
- }
- }
-}
-
void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data)
{
if (f->hooks && f->hooks->hook_ram_load) {
diff --git a/migration/ram.c b/migration/ram.c
index b070278796..c6fad7b0b3 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3017,7 +3017,11 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
if (ret < 0) {
qemu_file_set_error(f, ret);
}
- ram_control_after_iterate(f, RAM_CONTROL_SETUP);
+
+ ret = qemu_rdma_registration_stop(f, RAM_CONTROL_SETUP);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
migration_ops = g_malloc0(sizeof(MigrationOps));
migration_ops->ram_save_target_page = ram_save_target_page_legacy;
@@ -3136,7 +3140,10 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
* Must occur before EOS (or any QEMUFile operation)
* because of RDMA protocol.
*/
- ram_control_after_iterate(f, RAM_CONTROL_ROUND);
+ ret = qemu_rdma_registration_stop(f, RAM_CONTROL_ROUND);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
out:
if (ret >= 0
@@ -3209,7 +3216,11 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
qemu_mutex_unlock(&rs->bitmap_mutex);
ram_flush_compressed_data(rs);
- ram_control_after_iterate(f, RAM_CONTROL_FINISH);
+
+ int ret = qemu_rdma_registration_stop(f, RAM_CONTROL_FINISH);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
}
if (ret < 0) {
diff --git a/migration/rdma.c b/migration/rdma.c
index 6ca89ff090..8001dcb960 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3891,15 +3891,14 @@ int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
* Inform dest that dynamic registrations are done for now.
* First, flush writes, if any.
*/
-static int qemu_rdma_registration_stop(QEMUFile *f,
- uint64_t flags, void *data)
+int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
RDMAContext *rdma;
RDMAControlHeader head = { .len = 0, .repeat = 1 };
int ret = 0;
- if (migration_in_postcopy()) {
+ if (!migrate_rdma() || migration_in_postcopy()) {
return 0;
}
@@ -4006,7 +4005,6 @@ static const QEMUFileHooks rdma_read_hooks = {
};
static const QEMUFileHooks rdma_write_hooks = {
- .after_ram_iterate = qemu_rdma_registration_stop,
.save_page = qemu_rdma_save_page,
};
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PULL 14/21] migration/rdma: Unfold ram_control_after_iterate()
2023-05-30 18:25 ` [PULL 14/21] migration/rdma: Unfold ram_control_after_iterate() Juan Quintela
@ 2023-06-01 8:58 ` Daniel P. Berrangé
0 siblings, 0 replies; 37+ messages in thread
From: Daniel P. Berrangé @ 2023-06-01 8:58 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
On Tue, May 30, 2023 at 08:25:24PM +0200, Juan Quintela wrote:
> While at it:
> - Remove the unused data parameter.
> - Unfold it into its callers.
> - Change all callers to call qemu_rdma_registration_stop() directly.
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Message-Id: <20230509120700.78359-4-quintela@redhat.com>
> ---
> migration/qemu-file.h | 2 --
> migration/rdma.h | 3 +++
> migration/qemu-file.c | 12 ------------
> migration/ram.c | 17 ++++++++++++++---
> migration/rdma.c | 6 ++----
> 5 files changed, 19 insertions(+), 21 deletions(-)
>
> diff --git a/migration/rdma.c b/migration/rdma.c
> index 6ca89ff090..8001dcb960 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -3891,15 +3891,14 @@ int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
> * Inform dest that dynamic registrations are done for now.
> * First, flush writes, if any.
> */
> -static int qemu_rdma_registration_stop(QEMUFile *f,
> - uint64_t flags, void *data)
> +int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags)
> {
> QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
Again casting to a QIOChannelRDMA....
> RDMAContext *rdma;
> RDMAControlHeader head = { .len = 0, .repeat = 1 };
> int ret = 0;
>
> - if (migration_in_postcopy()) {
> + if (!migrate_rdma() || migration_in_postcopy()) {
...before checking if this is actually an RDMA migration
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PULL 15/21] migration/rdma: Remove all uses of RAM_CONTROL_HOOK
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (13 preceding siblings ...)
2023-05-30 18:25 ` [PULL 14/21] migration/rdma: Unfold ram_control_after_iterate() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-06-01 9:01 ` Daniel P. Berrangé
2023-05-30 18:25 ` [PULL 16/21] migration/rdma: Unfold hook_ram_load() Juan Quintela
` (6 subsequent siblings)
21 siblings, 1 reply; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Instead of going through ram_control_load_hook(), call
qemu_rdma_registration_handle() directly.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-5-quintela@redhat.com>
---
migration/qemu-file.h | 1 -
migration/rdma.h | 3 +++
migration/ram.c | 5 ++++-
migration/rdma.c | 9 +++++----
4 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 323af5682f..7cfc20825e 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -41,7 +41,6 @@ typedef int (QEMURamHookFunc)(QEMUFile *f, uint64_t flags, void *data);
*/
#define RAM_CONTROL_SETUP 0
#define RAM_CONTROL_ROUND 1
-#define RAM_CONTROL_HOOK 2
#define RAM_CONTROL_FINISH 3
#define RAM_CONTROL_BLOCK_REG 4
diff --git a/migration/rdma.h b/migration/rdma.h
index c13b94c782..8bd277efb9 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -24,10 +24,13 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp);
#ifdef CONFIG_RDMA
+int qemu_rdma_registration_handle(QEMUFile *f);
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags);
#else
static inline
+int qemu_rdma_registration_handle(QEMUFile *f) { return 0; }
+static inline
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
static inline
int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
diff --git a/migration/ram.c b/migration/ram.c
index c6fad7b0b3..6f0597814c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -4024,7 +4024,10 @@ static int ram_load_precopy(QEMUFile *f)
}
break;
case RAM_SAVE_FLAG_HOOK:
- ram_control_load_hook(f, RAM_CONTROL_HOOK, NULL);
+ ret = qemu_rdma_registration_handle(f);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
break;
default:
error_report("Unknown combination of migration flags: 0x%x", flags);
diff --git a/migration/rdma.c b/migration/rdma.c
index 8001dcb960..a477985c6d 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3530,7 +3530,7 @@ static int dest_ram_sort_func(const void *a, const void *b)
*
* Keep doing this until the source tells us to stop.
*/
-static int qemu_rdma_registration_handle(QEMUFile *f)
+int qemu_rdma_registration_handle(QEMUFile *f)
{
RDMAControlHeader reg_resp = { .len = sizeof(RDMARegisterResult),
.type = RDMA_CONTROL_REGISTER_RESULT,
@@ -3557,6 +3557,10 @@ static int qemu_rdma_registration_handle(QEMUFile *f)
int count = 0;
int i = 0;
+ if (!migrate_rdma()) {
+ return 0;
+ }
+
RCU_READ_LOCK_GUARD();
rdma = qatomic_rcu_read(&rioc->rdmain);
@@ -3854,9 +3858,6 @@ static int rdma_load_hook(QEMUFile *f, uint64_t flags, void *data)
case RAM_CONTROL_BLOCK_REG:
return rdma_block_notification_handle(f, data);
- case RAM_CONTROL_HOOK:
- return qemu_rdma_registration_handle(f);
-
default:
/* Shouldn't be called with any other values */
abort();
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PULL 15/21] migration/rdma: Remove all uses of RAM_CONTROL_HOOK
2023-05-30 18:25 ` [PULL 15/21] migration/rdma: Remove all uses of RAM_CONTROL_HOOK Juan Quintela
@ 2023-06-01 9:01 ` Daniel P. Berrangé
0 siblings, 0 replies; 37+ messages in thread
From: Daniel P. Berrangé @ 2023-06-01 9:01 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
On Tue, May 30, 2023 at 08:25:25PM +0200, Juan Quintela wrote:
> Instead of going through ram_control_load_hook(), call
> qemu_rdma_registration_handle() directly.
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Message-Id: <20230509120700.78359-5-quintela@redhat.com>
> ---
> migration/qemu-file.h | 1 -
> migration/rdma.h | 3 +++
> migration/ram.c | 5 ++++-
> migration/rdma.c | 9 +++++----
> 4 files changed, 12 insertions(+), 6 deletions(-)
>
> diff --git a/migration/qemu-file.h b/migration/qemu-file.h
> index 323af5682f..7cfc20825e 100644
> --- a/migration/qemu-file.h
> +++ b/migration/qemu-file.h
> @@ -41,7 +41,6 @@ typedef int (QEMURamHookFunc)(QEMUFile *f, uint64_t flags, void *data);
> */
> #define RAM_CONTROL_SETUP 0
> #define RAM_CONTROL_ROUND 1
> -#define RAM_CONTROL_HOOK 2
> #define RAM_CONTROL_FINISH 3
> #define RAM_CONTROL_BLOCK_REG 4
>
> diff --git a/migration/rdma.h b/migration/rdma.h
> index c13b94c782..8bd277efb9 100644
> --- a/migration/rdma.h
> +++ b/migration/rdma.h
> @@ -24,10 +24,13 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp);
>
>
> #ifdef CONFIG_RDMA
> +int qemu_rdma_registration_handle(QEMUFile *f);
> int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
> int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags);
> #else
> static inline
> +int qemu_rdma_registration_handle(QEMUFile *f) { return 0; }
> +static inline
> int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
> static inline
> int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
> diff --git a/migration/ram.c b/migration/ram.c
> index c6fad7b0b3..6f0597814c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -4024,7 +4024,10 @@ static int ram_load_precopy(QEMUFile *f)
> }
> break;
> case RAM_SAVE_FLAG_HOOK:
> - ram_control_load_hook(f, RAM_CONTROL_HOOK, NULL);
> + ret = qemu_rdma_registration_handle(f);
> + if (ret < 0) {
> + qemu_file_set_error(f, ret);
> + }
> break;
> default:
> error_report("Unknown combination of migration flags: 0x%x", flags);
> diff --git a/migration/rdma.c b/migration/rdma.c
> index 8001dcb960..a477985c6d 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -3530,7 +3530,7 @@ static int dest_ram_sort_func(const void *a, const void *b)
> *
> * Keep doing this until the source tells us to stop.
> */
> -static int qemu_rdma_registration_handle(QEMUFile *f)
> +int qemu_rdma_registration_handle(QEMUFile *f)
> {
> RDMAControlHeader reg_resp = { .len = sizeof(RDMARegisterResult),
> .type = RDMA_CONTROL_REGISTER_RESULT,
> @@ -3557,6 +3557,10 @@ static int qemu_rdma_registration_handle(QEMUFile *f)
A few lines above this diff contxt there is a call to
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
which is likely unsafe given that....
> int count = 0;
> int i = 0;
>
> + if (!migrate_rdma()) {
....we're only checking if this is RDMA here.
> + return 0;
> + }
> +
> RCU_READ_LOCK_GUARD();
> rdma = qatomic_rcu_read(&rioc->rdmain);
>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PULL 16/21] migration/rdma: Unfold hook_ram_load()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (14 preceding siblings ...)
2023-05-30 18:25 ` [PULL 15/21] migration/rdma: Remove all uses of RAM_CONTROL_HOOK Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-06-01 9:02 ` Daniel P. Berrangé
2023-05-30 18:25 ` [PULL 17/21] migration/rdma: Create rdma_control_save_page() Juan Quintela
` (5 subsequent siblings)
21 siblings, 1 reply; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
There is only one flag it is ever called with: RAM_CONTROL_BLOCK_REG.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-6-quintela@redhat.com>
---
migration/qemu-file.h | 11 -----------
migration/rdma.h | 3 +++
migration/qemu-file.c | 10 ----------
migration/ram.c | 6 ++++--
migration/rdma.c | 29 +++++++++--------------------
5 files changed, 16 insertions(+), 43 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 7cfc20825e..6791db6b08 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -29,20 +29,12 @@
#include "exec/cpu-common.h"
#include "io/channel.h"
-/*
- * This function provides hooks around different
- * stages of RAM migration.
- * 'data' is call specific data associated with the 'flags' value
- */
-typedef int (QEMURamHookFunc)(QEMUFile *f, uint64_t flags, void *data);
-
/*
* Constants used by ram_control_* hooks
*/
#define RAM_CONTROL_SETUP 0
#define RAM_CONTROL_ROUND 1
#define RAM_CONTROL_FINISH 3
-#define RAM_CONTROL_BLOCK_REG 4
/*
* This function allows override of where the RAM page
@@ -54,7 +46,6 @@ typedef int (QEMURamSaveFunc)(QEMUFile *f,
size_t size);
typedef struct QEMUFileHooks {
- QEMURamHookFunc *hook_ram_load;
QEMURamSaveFunc *save_page;
} QEMUFileHooks;
@@ -128,8 +119,6 @@ void qemu_fflush(QEMUFile *f);
void qemu_file_set_blocking(QEMUFile *f, bool block);
int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
-void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data);
-
/* Whenever this is found in the data stream, the flags
* will be passed to ram_control_load_hook in the incoming-migration
* side. This lets before_ram_iterate/after_ram_iterate add
diff --git a/migration/rdma.h b/migration/rdma.h
index 8bd277efb9..8df8b4089a 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -27,6 +27,7 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp);
int qemu_rdma_registration_handle(QEMUFile *f);
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags);
+int rdma_block_notification_handle(QEMUFile *f, const char *name);
#else
static inline
int qemu_rdma_registration_handle(QEMUFile *f) { return 0; }
@@ -34,5 +35,7 @@ static inline
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
static inline
int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
+static inline
+int rdma_block_notification_handle(QEMUFile *f, const char *name) { return 0; }
#endif
#endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 918df83035..08bbc29e64 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -303,16 +303,6 @@ void qemu_fflush(QEMUFile *f)
f->iovcnt = 0;
}
-void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data)
-{
- if (f->hooks && f->hooks->hook_ram_load) {
- int ret = f->hooks->hook_ram_load(f, flags, data);
- if (ret < 0) {
- qemu_file_set_error(f, ret);
- }
- }
-}
-
int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
ram_addr_t offset, size_t size)
{
diff --git a/migration/ram.c b/migration/ram.c
index 6f0597814c..67d0f20368 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3975,8 +3975,10 @@ static int ram_load_precopy(QEMUFile *f)
ret = -EINVAL;
}
}
- ram_control_load_hook(f, RAM_CONTROL_BLOCK_REG,
- block->idstr);
+ ret = rdma_block_notification_handle(f, block->idstr);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
} else {
error_report("Unknown ramblock \"%s\", cannot "
"accept migration", id);
diff --git a/migration/rdma.c b/migration/rdma.c
index a477985c6d..948e93256d 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3811,20 +3811,22 @@ out:
}
/* Destination:
- * Called via a ram_control_load_hook during the initial RAM load section which
- * lists the RAMBlocks by name. This lets us know the order of the RAMBlocks
- * on the source.
- * We've already built our local RAMBlock list, but not yet sent the list to
- * the source.
+ * Called during the initial RAM load section which lists the
+ * RAMBlocks by name. This lets us know the order of the RAMBlocks on
+ * the source. We've already built our local RAMBlock list, but not
+ * yet sent the list to the source.
*/
-static int
-rdma_block_notification_handle(QEMUFile *f, const char *name)
+int rdma_block_notification_handle(QEMUFile *f, const char *name)
{
RDMAContext *rdma;
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
int curr;
int found = -1;
+ if (!migrate_rdma()) {
+ return 0;
+ }
+
RCU_READ_LOCK_GUARD();
rdma = qatomic_rcu_read(&rioc->rdmain);
@@ -3852,18 +3854,6 @@ rdma_block_notification_handle(QEMUFile *f, const char *name)
return 0;
}
-static int rdma_load_hook(QEMUFile *f, uint64_t flags, void *data)
-{
- switch (flags) {
- case RAM_CONTROL_BLOCK_REG:
- return rdma_block_notification_handle(f, data);
-
- default:
- /* Shouldn't be called with any other values */
- abort();
- }
-}
-
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
@@ -4002,7 +3992,6 @@ err:
}
static const QEMUFileHooks rdma_read_hooks = {
- .hook_ram_load = rdma_load_hook,
};
static const QEMUFileHooks rdma_write_hooks = {
--
2.40.1
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PULL 16/21] migration/rdma: Unfold hook_ram_load()
2023-05-30 18:25 ` [PULL 16/21] migration/rdma: Unfold hook_ram_load() Juan Quintela
@ 2023-06-01 9:02 ` Daniel P. Berrangé
0 siblings, 0 replies; 37+ messages in thread
From: Daniel P. Berrangé @ 2023-06-01 9:02 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
On Tue, May 30, 2023 at 08:25:26PM +0200, Juan Quintela wrote:
> The hook is only ever called with a single flag value: RAM_CONTROL_BLOCK_REG.
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Message-Id: <20230509120700.78359-6-quintela@redhat.com>
> ---
> migration/qemu-file.h | 11 -----------
> migration/rdma.h | 3 +++
> migration/qemu-file.c | 10 ----------
> migration/ram.c | 6 ++++--
> migration/rdma.c | 29 +++++++++--------------------
> 5 files changed, 16 insertions(+), 43 deletions(-)
> diff --git a/migration/rdma.c b/migration/rdma.c
> index a477985c6d..948e93256d 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -3811,20 +3811,22 @@ out:
> }
>
> /* Destination:
> - * Called via a ram_control_load_hook during the initial RAM load section which
> - * lists the RAMBlocks by name. This lets us know the order of the RAMBlocks
> - * on the source.
> - * We've already built our local RAMBlock list, but not yet sent the list to
> - * the source.
> + * Called during the initial RAM load section which lists the
> + * RAMBlocks by name. This lets us know the order of the RAMBlocks on
> + * the source. We've already built our local RAMBlock list, but not
> + * yet sent the list to the source.
> */
> -static int
> -rdma_block_notification_handle(QEMUFile *f, const char *name)
> +int rdma_block_notification_handle(QEMUFile *f, const char *name)
> {
> RDMAContext *rdma;
> QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
Casting to QIOChannelRDMA ...
> int curr;
> int found = -1;
>
> + if (!migrate_rdma()) {
..before checking if this is RDMA
> + return 0;
> + }
> +
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* [PULL 17/21] migration/rdma: Create rdma_control_save_page()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (15 preceding siblings ...)
2023-05-30 18:25 ` [PULL 16/21] migration/rdma: Unfold hook_ram_load() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 18/21] qemu-file: Remove QEMUFileHooks Juan Quintela
` (4 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
The only user of ram_control_save_page() and the save_page() hook was
RDMA. Move the function to rdma.c and rename it to
rdma_control_save_page().
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-7-quintela@redhat.com>
---
migration/qemu-file.h | 12 ------------
migration/rdma.h | 10 ++++++++++
migration/qemu-file.c | 20 --------------------
migration/ram.c | 4 ++--
migration/rdma.c | 20 +++++++++++++++++++-
5 files changed, 31 insertions(+), 35 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 6791db6b08..c43c410168 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -36,17 +36,7 @@
#define RAM_CONTROL_ROUND 1
#define RAM_CONTROL_FINISH 3
-/*
- * This function allows override of where the RAM page
- * is saved (such as RDMA, for example.)
- */
-typedef int (QEMURamSaveFunc)(QEMUFile *f,
- ram_addr_t block_offset,
- ram_addr_t offset,
- size_t size);
-
typedef struct QEMUFileHooks {
- QEMURamSaveFunc *save_page;
} QEMUFileHooks;
QEMUFile *qemu_file_new_input(QIOChannel *ioc);
@@ -129,8 +119,6 @@ int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
#define RAM_SAVE_CONTROL_NOT_SUPP -1000
#define RAM_SAVE_CONTROL_DELAYED -2000
-int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
- ram_addr_t offset, size_t size);
QIOChannel *qemu_file_get_ioc(QEMUFile *file);
#endif
diff --git a/migration/rdma.h b/migration/rdma.h
index 8df8b4089a..09a16c1e3c 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -17,6 +17,8 @@
#ifndef QEMU_MIGRATION_RDMA_H
#define QEMU_MIGRATION_RDMA_H
+#include "exec/memory.h"
+
void rdma_start_outgoing_migration(void *opaque, const char *host_port,
Error **errp);
@@ -28,6 +30,8 @@ int qemu_rdma_registration_handle(QEMUFile *f);
int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags);
int rdma_block_notification_handle(QEMUFile *f, const char *name);
+int rdma_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size);
#else
static inline
int qemu_rdma_registration_handle(QEMUFile *f) { return 0; }
@@ -37,5 +41,11 @@ static inline
int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
static inline
int rdma_block_notification_handle(QEMUFile *f, const char *name) { return 0; }
+static inline
+int rdma_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size)
+{
+ return RAM_SAVE_CONTROL_NOT_SUPP;
+}
#endif
#endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 08bbc29e64..a222daeaab 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -303,26 +303,6 @@ void qemu_fflush(QEMUFile *f)
f->iovcnt = 0;
}
-int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
- ram_addr_t offset, size_t size)
-{
- if (f->hooks && f->hooks->save_page) {
- int ret = f->hooks->save_page(f, block_offset, offset, size);
- /*
- * RAM_SAVE_CONTROL_* are negative values
- */
- if (ret != RAM_SAVE_CONTROL_DELAYED &&
- ret != RAM_SAVE_CONTROL_NOT_SUPP) {
- if (ret < 0) {
- qemu_file_set_error(f, ret);
- }
- }
- return ret;
- }
-
- return RAM_SAVE_CONTROL_NOT_SUPP;
-}
-
/*
* Attempt to fill the buffer from the underlying file
* Returns the number of bytes read, or negative value for an error.
diff --git a/migration/ram.c b/migration/ram.c
index 67d0f20368..19e5e93de8 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1147,8 +1147,8 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
{
int ret;
- ret = ram_control_save_page(pss->pss_channel, block->offset, offset,
- TARGET_PAGE_SIZE);
+ ret = rdma_control_save_page(pss->pss_channel, block->offset, offset,
+ TARGET_PAGE_SIZE);
if (ret == RAM_SAVE_CONTROL_NOT_SUPP) {
return false;
}
diff --git a/migration/rdma.c b/migration/rdma.c
index 948e93256d..b506b86b47 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3319,6 +3319,25 @@ err:
return ret;
}
+int rdma_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size)
+{
+ if (!migrate_rdma()) {
+ return RAM_SAVE_CONTROL_NOT_SUPP;
+ }
+
+ int ret = qemu_rdma_save_page(f, block_offset, offset, size);
+
+ if (ret != RAM_SAVE_CONTROL_DELAYED &&
+ ret != RAM_SAVE_CONTROL_NOT_SUPP) {
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ }
+ }
+ return ret;
+}
+
+
static void rdma_accept_incoming_migration(void *opaque);
static void rdma_cm_poll_handler(void *opaque)
@@ -3995,7 +4014,6 @@ static const QEMUFileHooks rdma_read_hooks = {
};
static const QEMUFileHooks rdma_write_hooks = {
- .save_page = qemu_rdma_save_page,
};
--
2.40.1
* [PULL 18/21] qemu-file: Remove QEMUFileHooks
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (16 preceding siblings ...)
2023-05-30 18:25 ` [PULL 17/21] migration/rdma: Create rdma_control_save_page() Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 19/21] migration/rdma: Move rdma constants from qemu-file.h to rdma.h Juan Quintela
` (3 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
The only user was RDMA, and that use is now gone.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-8-quintela@redhat.com>
---
migration/qemu-file.h | 4 ----
migration/qemu-file.c | 6 ------
migration/rdma.c | 9 ---------
3 files changed, 19 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index c43c410168..c7c832d200 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -36,12 +36,8 @@
#define RAM_CONTROL_ROUND 1
#define RAM_CONTROL_FINISH 3
-typedef struct QEMUFileHooks {
-} QEMUFileHooks;
-
QEMUFile *qemu_file_new_input(QIOChannel *ioc);
QEMUFile *qemu_file_new_output(QIOChannel *ioc);
-void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks);
int qemu_fclose(QEMUFile *f);
/*
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index a222daeaab..c94b667726 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -38,7 +38,6 @@
#define MAX_IOV_SIZE MIN_CONST(IOV_MAX, 64)
struct QEMUFile {
- const QEMUFileHooks *hooks;
QIOChannel *ioc;
bool is_writable;
@@ -147,11 +146,6 @@ QEMUFile *qemu_file_new_input(QIOChannel *ioc)
return qemu_file_new_impl(ioc, false);
}
-void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
-{
- f->hooks = hooks;
-}
-
/*
* Get last error for stream f with optional Error*
*
diff --git a/migration/rdma.c b/migration/rdma.c
index b506b86b47..b85259263a 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -4010,13 +4010,6 @@ err:
return ret;
}
-static const QEMUFileHooks rdma_read_hooks = {
-};
-
-static const QEMUFileHooks rdma_write_hooks = {
-};
-
-
static void qio_channel_rdma_finalize(Object *obj)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(obj);
@@ -4075,12 +4068,10 @@ static QEMUFile *qemu_fopen_rdma(RDMAContext *rdma, const char *mode)
rioc->file = qemu_file_new_output(QIO_CHANNEL(rioc));
rioc->rdmaout = rdma;
rioc->rdmain = rdma->return_path;
- qemu_file_set_hooks(rioc->file, &rdma_write_hooks);
} else {
rioc->file = qemu_file_new_input(QIO_CHANNEL(rioc));
rioc->rdmain = rdma;
rioc->rdmaout = rdma->return_path;
- qemu_file_set_hooks(rioc->file, &rdma_read_hooks);
}
return rioc->file;
--
2.40.1
* [PULL 19/21] migration/rdma: Move rdma constants from qemu-file.h to rdma.h
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (17 preceding siblings ...)
2023-05-30 18:25 ` [PULL 18/21] qemu-file: Remove QEMUFileHooks Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 20/21] migration/rdma: Remove qemu_ prefix from exported functions Juan Quintela
` (2 subsequent siblings)
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-9-quintela@redhat.com>
---
migration/qemu-file.h | 17 -----------------
migration/rdma.h | 16 ++++++++++++++++
migration/ram.c | 2 +-
3 files changed, 17 insertions(+), 18 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index c7c832d200..83b8fb10de 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -29,13 +29,6 @@
#include "exec/cpu-common.h"
#include "io/channel.h"
-/*
- * Constants used by ram_control_* hooks
- */
-#define RAM_CONTROL_SETUP 0
-#define RAM_CONTROL_ROUND 1
-#define RAM_CONTROL_FINISH 3
-
QEMUFile *qemu_file_new_input(QIOChannel *ioc);
QEMUFile *qemu_file_new_output(QIOChannel *ioc);
int qemu_fclose(QEMUFile *f);
@@ -105,16 +98,6 @@ void qemu_fflush(QEMUFile *f);
void qemu_file_set_blocking(QEMUFile *f, bool block);
int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
-/* Whenever this is found in the data stream, the flags
- * will be passed to ram_control_load_hook in the incoming-migration
- * side. This lets before_ram_iterate/after_ram_iterate add
- * transport-specific sections to the RAM migration data.
- */
-#define RAM_SAVE_FLAG_HOOK 0x80
-
-#define RAM_SAVE_CONTROL_NOT_SUPP -1000
-#define RAM_SAVE_CONTROL_DELAYED -2000
-
QIOChannel *qemu_file_get_ioc(QEMUFile *file);
#endif
diff --git a/migration/rdma.h b/migration/rdma.h
index 09a16c1e3c..1ff3718a76 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -24,6 +24,22 @@ void rdma_start_outgoing_migration(void *opaque, const char *host_port,
void rdma_start_incoming_migration(const char *host_port, Error **errp);
+/*
+ * Constants used by rdma return codes
+ */
+#define RAM_CONTROL_SETUP 0
+#define RAM_CONTROL_ROUND 1
+#define RAM_CONTROL_FINISH 3
+
+/*
+ * Whenever this is found in the data stream, the flags
+ * will be passed to rdma functions in the incoming-migration
+ * side.
+ */
+#define RAM_SAVE_FLAG_HOOK 0x80
+
+#define RAM_SAVE_CONTROL_NOT_SUPP -1000
+#define RAM_SAVE_CONTROL_DELAYED -2000
#ifdef CONFIG_RDMA
int qemu_rdma_registration_handle(QEMUFile *f);
diff --git a/migration/ram.c b/migration/ram.c
index 19e5e93de8..83da952823 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -86,7 +86,7 @@
#define RAM_SAVE_FLAG_EOS 0x10
#define RAM_SAVE_FLAG_CONTINUE 0x20
#define RAM_SAVE_FLAG_XBZRLE 0x40
-/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
+/* 0x80 is reserved in rdma.h for RAM_SAVE_FLAG_HOOK */
#define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
#define RAM_SAVE_FLAG_MULTIFD_FLUSH 0x200
/* We can't use any flag that is bigger than 0x200 */
--
2.40.1
* [PULL 20/21] migration/rdma: Remove qemu_ prefix from exported functions
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (18 preceding siblings ...)
2023-05-30 18:25 ` [PULL 19/21] migration/rdma: Move rdma constants from qemu-file.h to rdma.h Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 18:25 ` [PULL 21/21] migration/rdma: Check sooner if we are in postcopy for save_page() Juan Quintela
2023-05-30 20:23 ` [PULL 00/21] Migration 20230530 patches Richard Henderson
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
The function names are long enough even without the qemu_ prefix.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-10-quintela@redhat.com>
---
migration/rdma.h | 12 ++++++------
migration/ram.c | 14 +++++++-------
migration/rdma.c | 40 +++++++++++++++++++---------------------
migration/trace-events | 28 ++++++++++++++--------------
4 files changed, 46 insertions(+), 48 deletions(-)
diff --git a/migration/rdma.h b/migration/rdma.h
index 1ff3718a76..30b15b4466 100644
--- a/migration/rdma.h
+++ b/migration/rdma.h
@@ -42,19 +42,19 @@ void rdma_start_incoming_migration(const char *host_port, Error **errp);
#define RAM_SAVE_CONTROL_DELAYED -2000
#ifdef CONFIG_RDMA
-int qemu_rdma_registration_handle(QEMUFile *f);
-int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags);
-int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags);
+int rdma_registration_handle(QEMUFile *f);
+int rdma_registration_start(QEMUFile *f, uint64_t flags);
+int rdma_registration_stop(QEMUFile *f, uint64_t flags);
int rdma_block_notification_handle(QEMUFile *f, const char *name);
int rdma_control_save_page(QEMUFile *f, ram_addr_t block_offset,
ram_addr_t offset, size_t size);
#else
static inline
-int qemu_rdma_registration_handle(QEMUFile *f) { return 0; }
+int rdma_registration_handle(QEMUFile *f) { return 0; }
static inline
-int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
+int rdma_registration_start(QEMUFile *f, uint64_t flags) { return 0; }
static inline
-int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
+int rdma_registration_stop(QEMUFile *f, uint64_t flags) { return 0; }
static inline
int rdma_block_notification_handle(QEMUFile *f, const char *name) { return 0; }
static inline
diff --git a/migration/ram.c b/migration/ram.c
index 83da952823..87fc504674 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3013,12 +3013,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
}
}
- ret = qemu_rdma_registration_start(f, RAM_CONTROL_SETUP);
+ ret = rdma_registration_start(f, RAM_CONTROL_SETUP);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
- ret = qemu_rdma_registration_stop(f, RAM_CONTROL_SETUP);
+ ret = rdma_registration_stop(f, RAM_CONTROL_SETUP);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
@@ -3080,7 +3080,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
/* Read version before ram_list.blocks */
smp_rmb();
- ret = qemu_rdma_registration_start(f, RAM_CONTROL_ROUND);
+ ret = rdma_registration_start(f, RAM_CONTROL_ROUND);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
@@ -3140,7 +3140,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
* Must occur before EOS (or any QEMUFile operation)
* because of RDMA protocol.
*/
- ret = qemu_rdma_registration_stop(f, RAM_CONTROL_ROUND);
+ ret = rdma_registration_stop(f, RAM_CONTROL_ROUND);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
@@ -3191,7 +3191,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
migration_bitmap_sync_precopy(rs, true);
}
- ret = qemu_rdma_registration_start(f, RAM_CONTROL_FINISH);
+ ret = rdma_registration_start(f, RAM_CONTROL_FINISH);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
@@ -3217,7 +3217,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
ram_flush_compressed_data(rs);
- int ret = qemu_rdma_registration_stop(f, RAM_CONTROL_FINISH);
+ int ret = rdma_registration_stop(f, RAM_CONTROL_FINISH);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
@@ -4026,7 +4026,7 @@ static int ram_load_precopy(QEMUFile *f)
}
break;
case RAM_SAVE_FLAG_HOOK:
- ret = qemu_rdma_registration_handle(f);
+ ret = rdma_registration_handle(f);
if (ret < 0) {
qemu_file_set_error(f, ret);
}
diff --git a/migration/rdma.c b/migration/rdma.c
index b85259263a..cf0681575d 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3549,7 +3549,7 @@ static int dest_ram_sort_func(const void *a, const void *b)
*
* Keep doing this until the source tells us to stop.
*/
-int qemu_rdma_registration_handle(QEMUFile *f)
+int rdma_registration_handle(QEMUFile *f)
{
RDMAControlHeader reg_resp = { .len = sizeof(RDMARegisterResult),
.type = RDMA_CONTROL_REGISTER_RESULT,
@@ -3591,7 +3591,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
local = &rdma->local_ram_blocks;
do {
- trace_qemu_rdma_registration_handle_wait();
+ trace_rdma_registration_handle_wait();
ret = qemu_rdma_exchange_recv(rdma, &head, RDMA_CONTROL_NONE);
@@ -3611,9 +3611,9 @@ int qemu_rdma_registration_handle(QEMUFile *f)
comp = (RDMACompress *) rdma->wr_data[idx].control_curr;
network_to_compress(comp);
- trace_qemu_rdma_registration_handle_compress(comp->length,
- comp->block_idx,
- comp->offset);
+ trace_rdma_registration_handle_compress(comp->length,
+ comp->block_idx,
+ comp->offset);
if (comp->block_idx >= rdma->local_ram_blocks.nb_blocks) {
error_report("rdma: 'compress' bad block index %u (vs %d)",
(unsigned int)comp->block_idx,
@@ -3630,11 +3630,11 @@ int qemu_rdma_registration_handle(QEMUFile *f)
break;
case RDMA_CONTROL_REGISTER_FINISHED:
- trace_qemu_rdma_registration_handle_finished();
+ trace_rdma_registration_handle_finished();
goto out;
case RDMA_CONTROL_RAM_BLOCKS_REQUEST:
- trace_qemu_rdma_registration_handle_ram_blocks();
+ trace_rdma_registration_handle_ram_blocks();
/* Sort our local RAM Block list so it's the same as the source,
* we can do this since we've filled in a src_index in the list
@@ -3674,7 +3674,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
rdma->dest_blocks[i].length = local->block[i].length;
dest_block_to_network(&rdma->dest_blocks[i]);
- trace_qemu_rdma_registration_handle_ram_blocks_loop(
+ trace_rdma_registration_handle_ram_blocks_loop(
local->block[i].block_name,
local->block[i].offset,
local->block[i].length,
@@ -3696,7 +3696,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
break;
case RDMA_CONTROL_REGISTER_REQUEST:
- trace_qemu_rdma_registration_handle_register(head.repeat);
+ trace_rdma_registration_handle_register(head.repeat);
reg_resp.repeat = head.repeat;
registers = (RDMARegister *) rdma->wr_data[idx].control_curr;
@@ -3710,7 +3710,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
reg_result = &results[count];
- trace_qemu_rdma_registration_handle_register_loop(count,
+ trace_rdma_registration_handle_register_loop(count,
reg->current_index, reg->key.current_addr, reg->chunks);
if (reg->current_index >= rdma->local_ram_blocks.nb_blocks) {
@@ -3762,8 +3762,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
reg_result->host_addr = (uintptr_t)block->local_host_addr;
- trace_qemu_rdma_registration_handle_register_rkey(
- reg_result->rkey);
+ trace_rdma_registration_handle_register_rkey(reg_result->rkey);
result_to_network(reg_result);
}
@@ -3777,7 +3776,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
}
break;
case RDMA_CONTROL_UNREGISTER_REQUEST:
- trace_qemu_rdma_registration_handle_unregister(head.repeat);
+ trace_rdma_registration_handle_unregister(head.repeat);
unreg_resp.repeat = head.repeat;
registers = (RDMARegister *) rdma->wr_data[idx].control_curr;
@@ -3785,7 +3784,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
reg = &registers[count];
network_to_register(reg);
- trace_qemu_rdma_registration_handle_unregister_loop(count,
+ trace_rdma_registration_handle_unregister_loop(count,
reg->current_index, reg->key.chunk);
block = &(rdma->local_ram_blocks.block[reg->current_index]);
@@ -3801,8 +3800,7 @@ int qemu_rdma_registration_handle(QEMUFile *f)
rdma->total_registrations--;
- trace_qemu_rdma_registration_handle_unregister_success(
- reg->key.chunk);
+ trace_rdma_registration_handle_unregister_success(reg->key.chunk);
}
ret = qemu_rdma_post_send_control(rdma, NULL, &unreg_resp);
@@ -3873,7 +3871,7 @@ int rdma_block_notification_handle(QEMUFile *f, const char *name)
return 0;
}
-int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
+int rdma_registration_start(QEMUFile *f, uint64_t flags)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
RDMAContext *rdma;
@@ -3890,7 +3888,7 @@ int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
CHECK_ERROR_STATE();
- trace_qemu_rdma_registration_start(flags);
+ trace_rdma_registration_start(flags);
qemu_put_be64(f, RAM_SAVE_FLAG_HOOK);
qemu_fflush(f);
@@ -3901,7 +3899,7 @@ int qemu_rdma_registration_start(QEMUFile *f, uint64_t flags)
* Inform dest that dynamic registrations are done for now.
* First, flush writes, if any.
*/
-int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags)
+int rdma_registration_stop(QEMUFile *f, uint64_t flags)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
RDMAContext *rdma;
@@ -3933,7 +3931,7 @@ int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags)
int reg_result_idx, i, nb_dest_blocks;
head.type = RDMA_CONTROL_RAM_BLOCKS_REQUEST;
- trace_qemu_rdma_registration_stop_ram();
+ trace_rdma_registration_stop_ram();
/*
* Make sure that we parallelize the pinning on both sides.
@@ -3995,7 +3993,7 @@ int qemu_rdma_registration_stop(QEMUFile *f, uint64_t flags)
}
}
- trace_qemu_rdma_registration_stop(flags);
+ trace_rdma_registration_stop(flags);
head.type = RDMA_CONTROL_REGISTER_FINISHED;
ret = qemu_rdma_exchange_send(rdma, &head, NULL, NULL, NULL, NULL);
diff --git a/migration/trace-events b/migration/trace-events
index 54ae5653fd..5ad9d9edbe 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -224,20 +224,6 @@ qemu_rdma_post_send_control(const char *desc) "CONTROL: sending %s.."
qemu_rdma_register_and_get_keys(uint64_t len, void *start) "Registering %" PRIu64 " bytes @ %p"
qemu_rdma_register_odp_mr(const char *name) "Try to register On-Demand Paging memory region: %s"
qemu_rdma_advise_mr(const char *name, uint32_t len, uint64_t addr, const char *res) "Try to advise block %s prefetch at %" PRIu32 "@0x%" PRIx64 ": %s"
-qemu_rdma_registration_handle_compress(int64_t length, int index, int64_t offset) "Zapping zero chunk: %" PRId64 " bytes, index %d, offset %" PRId64
-qemu_rdma_registration_handle_finished(void) ""
-qemu_rdma_registration_handle_ram_blocks(void) ""
-qemu_rdma_registration_handle_ram_blocks_loop(const char *name, uint64_t offset, uint64_t length, void *local_host_addr, unsigned int src_index) "%s: @0x%" PRIx64 "/%" PRIu64 " host:@%p src_index: %u"
-qemu_rdma_registration_handle_register(int requests) "%d requests"
-qemu_rdma_registration_handle_register_loop(int req, int index, uint64_t addr, uint64_t chunks) "Registration request (%d): index %d, current_addr %" PRIu64 " chunks: %" PRIu64
-qemu_rdma_registration_handle_register_rkey(int rkey) "0x%x"
-qemu_rdma_registration_handle_unregister(int requests) "%d requests"
-qemu_rdma_registration_handle_unregister_loop(int count, int index, uint64_t chunk) "Unregistration request (%d): index %d, chunk %" PRIu64
-qemu_rdma_registration_handle_unregister_success(uint64_t chunk) "%" PRIu64
-qemu_rdma_registration_handle_wait(void) ""
-qemu_rdma_registration_start(uint64_t flags) "%" PRIu64
-qemu_rdma_registration_stop(uint64_t flags) "%" PRIu64
-qemu_rdma_registration_stop_ram(void) ""
qemu_rdma_resolve_host_trying(const char *host, const char *ip) "Trying %s => %s"
qemu_rdma_signal_unregister_append(uint64_t chunk, int pos) "Appending unregister chunk %" PRIu64 " at position %d"
qemu_rdma_signal_unregister_already(uint64_t chunk) "Unregister chunk %" PRIu64 " already in queue"
@@ -256,6 +242,20 @@ qemu_rdma_write_one_zero(uint64_t chunk, int len, int index, int64_t offset) "En
rdma_add_block(const char *block_name, int block, uint64_t addr, uint64_t offset, uint64_t len, uint64_t end, uint64_t bits, int chunks) "Added Block: '%s':%d, addr: %" PRIu64 ", offset: %" PRIu64 " length: %" PRIu64 " end: %" PRIu64 " bits %" PRIu64 " chunks %d"
rdma_block_notification_handle(const char *name, int index) "%s at %d"
rdma_delete_block(void *block, uint64_t addr, uint64_t offset, uint64_t len, uint64_t end, uint64_t bits, int chunks) "Deleted Block: %p, addr: %" PRIu64 ", offset: %" PRIu64 " length: %" PRIu64 " end: %" PRIu64 " bits %" PRIu64 " chunks %d"
+rdma_registration_handle_compress(int64_t length, int index, int64_t offset) "Zapping zero chunk: %" PRId64 " bytes, index %d, offset %" PRId64
+rdma_registration_handle_finished(void) ""
+rdma_registration_handle_ram_blocks(void) ""
+rdma_registration_handle_ram_blocks_loop(const char *name, uint64_t offset, uint64_t length, void *local_host_addr, unsigned int src_index) "%s: @0x%" PRIx64 "/%" PRIu64 " host:@%p src_index: %u"
+rdma_registration_handle_register(int requests) "%d requests"
+rdma_registration_handle_register_loop(int req, int index, uint64_t addr, uint64_t chunks) "Registration request (%d): index %d, current_addr %" PRIu64 " chunks: %" PRIu64
+rdma_registration_handle_register_rkey(int rkey) "0x%x"
+rdma_registration_handle_unregister(int requests) "%d requests"
+rdma_registration_handle_unregister_loop(int count, int index, uint64_t chunk) "Unregistration request (%d): index %d, chunk %" PRIu64
+rdma_registration_handle_unregister_success(uint64_t chunk) "%" PRIu64
+rdma_registration_handle_wait(void) ""
+rdma_registration_start(uint64_t flags) "%" PRIu64
+rdma_registration_stop(uint64_t flags) "%" PRIu64
+rdma_registration_stop_ram(void) ""
rdma_start_incoming_migration(void) ""
rdma_start_incoming_migration_after_dest_init(void) ""
rdma_start_incoming_migration_after_rdma_listen(void) ""
--
2.40.1
* [PULL 21/21] migration/rdma: Check sooner if we are in postcopy for save_page()
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (19 preceding siblings ...)
2023-05-30 18:25 ` [PULL 20/21] migration/rdma: Remove qemu_ prefix from exported functions Juan Quintela
@ 2023-05-30 18:25 ` Juan Quintela
2023-05-30 20:23 ` [PULL 00/21] Migration 20230530 patches Richard Henderson
21 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-30 18:25 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu, Juan Quintela
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230509120700.78359-11-quintela@redhat.com>
---
migration/rdma.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/migration/rdma.c b/migration/rdma.c
index cf0681575d..f912a31b45 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3250,10 +3250,6 @@ static int qemu_rdma_save_page(QEMUFile *f, ram_addr_t block_offset,
RDMAContext *rdma;
int ret;
- if (migration_in_postcopy()) {
- return RAM_SAVE_CONTROL_NOT_SUPP;
- }
-
RCU_READ_LOCK_GUARD();
rdma = qatomic_rcu_read(&rioc->rdmaout);
@@ -3322,7 +3318,7 @@ err:
int rdma_control_save_page(QEMUFile *f, ram_addr_t block_offset,
ram_addr_t offset, size_t size)
{
- if (!migrate_rdma()) {
+ if (!migrate_rdma() || migration_in_postcopy()) {
return RAM_SAVE_CONTROL_NOT_SUPP;
}
--
2.40.1
* Re: [PULL 00/21] Migration 20230530 patches
2023-05-30 18:25 [PULL 00/21] Migration 20230530 patches Juan Quintela
` (20 preceding siblings ...)
2023-05-30 18:25 ` [PULL 21/21] migration/rdma: Check sooner if we are in postcopy for save_page() Juan Quintela
@ 2023-05-30 20:23 ` Richard Henderson
2023-05-31 7:28 ` Juan Quintela
` (2 more replies)
21 siblings, 3 replies; 37+ messages in thread
From: Richard Henderson @ 2023-05-30 20:23 UTC (permalink / raw)
To: Juan Quintela, qemu-devel; +Cc: Paolo Bonzini, Leonardo Bras, Peter Xu
On 5/30/23 11:25, Juan Quintela wrote:
> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
>
> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu into staging (2023-05-29 14:31:52 -0700)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/migration-20230530-pull-request
>
> for you to fetch changes up to c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
>
> migration/rdma: Check sooner if we are in postcopy for save_page() (2023-05-30 19:23:50 +0200)
>
> ----------------------------------------------------------------
> Migration 20230530 Pull request (take 2)
>
> Hi
>
> Resend last PULL request, this time it compiles when CONFIG_RDMA is
> not configured in.
>
> [take 1]
> On this PULL request:
>
> - Set vmstate migration failure right (vladimir)
> - Migration QEMUFileHook removal (juan)
> - Migration Atomic counters (juan)
>
> Please apply.
>
> ----------------------------------------------------------------
>
> Juan Quintela (16):
> migration: Don't abuse qemu_file transferred for RDMA
> migration/RDMA: It is accounting for zero/normal pages in two places
> migration/rdma: Remove QEMUFile parameter when not used
> migration/rdma: Don't use imaginary transfers
> migration: Remove unused qemu_file_credit_transfer()
> migration/rdma: Simplify the function that saves a page
> migration: Create migrate_rdma()
> migration/rdma: Unfold ram_control_before_iterate()
> migration/rdma: Unfold ram_control_after_iterate()
> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
> migration/rdma: Unfold hook_ram_load()
> migration/rdma: Create rdma_control_save_page()
> qemu-file: Remove QEMUFileHooks
> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
> migration/rdma: Remove qemu_ prefix from exported functions
> migration/rdma: Check sooner if we are in postcopy for save_page()
>
> Vladimir Sementsov-Ogievskiy (5):
> runstate: add runstate_get()
> migration: never fail in global_state_store()
> runstate: drop unused runstate_store()
> migration: switch from .vm_was_running to .vm_old_state
> migration: restore vmstate on migration failure
Appears to introduce multiple avocado failures:
https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
Test summary:
tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
Test summary:
tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is not an instance of
type qio-channel-rdma
qemu-system-aarch64: Not a migration stream
qemu-system-aarch64: load of migration failed: Invalid argument
Broken pipe
r~
* Re: [PULL 00/21] Migration 20230530 patches
2023-05-30 20:23 ` [PULL 00/21] Migration 20230530 patches Richard Henderson
@ 2023-05-31 7:28 ` Juan Quintela
2023-05-31 9:10 ` Juan Quintela
[not found] ` <87mt1ktdr8.fsf@secure.mitica>
2 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-31 7:28 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 5/30/23 11:25, Juan Quintela wrote:
>> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
>> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
>> into staging (2023-05-29 14:31:52 -0700)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git
>> tags/migration-20230530-pull-request
>> for you to fetch changes up to
>> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
>> migration/rdma: Check sooner if we are in postcopy for
>> save_page() (2023-05-30 19:23:50 +0200)
>> ----------------------------------------------------------------
>> Migration 20230530 Pull request (take 2)
>> Hi
>> Resend last PULL request, this time it compiles when CONFIG_RDMA is
>> not configured in.
>> [take 1]
>> On this PULL request:
>> - Set vmstate migration failure right (vladimir)
>> - Migration QEMUFileHook removal (juan)
>> - Migration Atomic counters (juan)
>> Please apply.
>> ----------------------------------------------------------------
>> Juan Quintela (16):
>> migration: Don't abuse qemu_file transferred for RDMA
>> migration/RDMA: It is accounting for zero/normal pages in two places
>> migration/rdma: Remove QEMUFile parameter when not used
>> migration/rdma: Don't use imaginary transfers
>> migration: Remove unused qemu_file_credit_transfer()
>> migration/rdma: Simplify the function that saves a page
>> migration: Create migrate_rdma()
>> migration/rdma: Unfold ram_control_before_iterate()
>> migration/rdma: Unfold ram_control_after_iterate()
>> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
>> migration/rdma: Unfold hook_ram_load()
>> migration/rdma: Create rdma_control_save_page()
>> qemu-file: Remove QEMUFileHooks
>> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
>> migration/rdma: Remove qemu_ prefix from exported functions
>> migration/rdma: Check sooner if we are in postcopy for save_page()
>> Vladimir Sementsov-Ogievskiy (5):
>> runstate: add runstate_get()
>> migration: never fail in global_state_store()
>> runstate: drop unused runstate_store()
>> migration: switch from .vm_was_running to .vm_old_state
>> migration: restore vmstate on migration failure
Self nack
> Appears to introduce multiple avocado failures:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
>
> Test summary:
> tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
No clue what is going on here.
I haven't touched exec migration (famous last words).
> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost:
> ERROR
This is tested by ./tests/migration-test.
It passes for me with qemu-system-x86_64 (kvm), qemu-system-i386 (kvm)
and qemu-system-aarch64 (tcg), which is what make check runs.
> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
Dunno again.
> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
>
> Test summary:
> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>
> Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
>
> ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
This is something that changed in this series, so I will check it with
the avocado tests. No clue why it passes with x86_64 and not with
aarch64.
> not an instance of type qio-channel-rdma
Here there was an error creating the RDMA channel somehow. "Somehow" is
the important word in the previous sentence.
> qemu-system-aarch64: Not a migration stream
> qemu-system-aarch64: load of migration failed: Invalid argument
> Broken pipe
Will isolate the non-RDMA changes and try to pass the RDMA ones through
avocado.
I am sorry for the noise.
Later, Juan.
* Re: [PULL 00/21] Migration 20230530 patches
2023-05-30 20:23 ` [PULL 00/21] Migration 20230530 patches Richard Henderson
2023-05-31 7:28 ` Juan Quintela
@ 2023-05-31 9:10 ` Juan Quintela
[not found] ` <87mt1ktdr8.fsf@secure.mitica>
2 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-05-31 9:10 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 5/30/23 11:25, Juan Quintela wrote:
>> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
>> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
>> into staging (2023-05-29 14:31:52 -0700)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git
>> tags/migration-20230530-pull-request
>> for you to fetch changes up to
>> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
>> migration/rdma: Check sooner if we are in postcopy for
>> save_page() (2023-05-30 19:23:50 +0200)
>> ----------------------------------------------------------------
>> Migration 20230530 Pull request (take 2)
>> Hi
>> Resend last PULL request, this time it compiles when CONFIG_RDMA is
>> not configured in.
>> [take 1]
>> On this PULL request:
>> - Set vmstate migration failure right (vladimir)
>> - Migration QEMUFileHook removal (juan)
>> - Migration Atomic counters (juan)
>> Please apply.
>> ----------------------------------------------------------------
>> Juan Quintela (16):
>> migration: Don't abuse qemu_file transferred for RDMA
>> migration/RDMA: It is accounting for zero/normal pages in two places
>> migration/rdma: Remove QEMUFile parameter when not used
>> migration/rdma: Don't use imaginary transfers
>> migration: Remove unused qemu_file_credit_transfer()
>> migration/rdma: Simplify the function that saves a page
>> migration: Create migrate_rdma()
>> migration/rdma: Unfold ram_control_before_iterate()
>> migration/rdma: Unfold ram_control_after_iterate()
>> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
>> migration/rdma: Unfold hook_ram_load()
>> migration/rdma: Create rdma_control_save_page()
>> qemu-file: Remove QEMUFileHooks
>> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
>> migration/rdma: Remove qemu_ prefix from exported functions
>> migration/rdma: Check sooner if we are in postcopy for save_page()
>> Vladimir Sementsov-Ogievskiy (5):
>> runstate: add runstate_get()
>> migration: never fail in global_state_store()
>> runstate: drop unused runstate_store()
>> migration: switch from .vm_was_running to .vm_old_state
>> migration: restore vmstate on migration failure
>
> Appears to introduce multiple avocado failures:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
>
> Test summary:
> tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
>
> Test summary:
> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>
> Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
>
> ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
> not an instance of type qio-channel-rdma
> qemu-system-aarch64: Not a migration stream
> qemu-system-aarch64: load of migration failed: Invalid argument
> Broken pipe
And as it couldn't be any other way, on my machine (with upstream QEMU,
i.e. none of my changes applied):
$ make check-avocado (check-acceptance fails in the same test)
...
STARTED
(063/243) tests/avocado/kvm_xen_guest.py:KVMXenGuest.test_kvm_xen_guest_nomsi: ERROR: ConnectError: Failed to establish session: EOFError\n Exit code: 1\n Command: ./qemu-system-x86_64 -display none -vga none -chardev socket,id=mon,fd=9 -mon chardev=mon,mode=control -machine q35 -chardev socket,id=console,path=/var/tmp/qemu_yv2aqehm/7f8b6cf5... (2.52 s)
Interrupting job (failfast).
RESULTS : PASS 42 | ERROR 1 | FAIL 0 | SKIP 199 | WARN 0 | INTERRUPT 0 | CANCEL 1
JOB TIME : 71.66 s
Test summary:
tests/avocado/kvm_xen_guest.py:KVMXenGuest.test_kvm_xen_guest_nomsi: ERROR
make: *** [/mnt/code/qemu/full/tests/Makefile.include:142: check-avocado] Error 9
The good news: some tests passed, sixty-something of them.
The bad news: it is failing on the KVM Xen guest test and I have no clue
why or where.
Is there some documentation that can help me here?
Thanks, Juan.
[parent not found: <87mt1ktdr8.fsf@secure.mitica>]
* Re: [PULL 00/21] Migration 20230530 patches
[not found] ` <87mt1ktdr8.fsf@secure.mitica>
@ 2023-05-31 21:28 ` Richard Henderson
2023-06-01 6:47 ` Juan Quintela
2023-06-01 8:27 ` Daniel P. Berrangé
1 sibling, 1 reply; 37+ messages in thread
From: Richard Henderson @ 2023-05-31 21:28 UTC (permalink / raw)
To: quintela, Daniel Berrange, Markus Armbruster
Cc: qemu-devel, Paolo Bonzini, Leonardo Bras, Peter Xu
On 5/31/23 14:03, Juan Quintela wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 5/30/23 11:25, Juan Quintela wrote:
>>> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
>>> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
>>> into staging (2023-05-29 14:31:52 -0700)
>>> are available in the Git repository at:
>>> https://gitlab.com/juan.quintela/qemu.git
>>> tags/migration-20230530-pull-request
>>> for you to fetch changes up to
>>> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
>>> migration/rdma: Check sooner if we are in postcopy for
>>> save_page() (2023-05-30 19:23:50 +0200)
>>> ----------------------------------------------------------------
>
> Added Markus and Daniel.
>
>>> Migration 20230530 Pull request (take 2)
>>> Hi
>>> Resend last PULL request, this time it compiles when CONFIG_RDMA is
>>> not configured in.
>>> [take 1]
>>> On this PULL request:
>>> - Set vmstate migration failure right (vladimir)
>>> - Migration QEMUFileHook removal (juan)
>>> - Migration Atomic counters (juan)
>>> Please apply.
>>> ----------------------------------------------------------------
>>> Juan Quintela (16):
>>> migration: Don't abuse qemu_file transferred for RDMA
>>> migration/RDMA: It is accounting for zero/normal pages in two places
>>> migration/rdma: Remove QEMUFile parameter when not used
>>> migration/rdma: Don't use imaginary transfers
>>> migration: Remove unused qemu_file_credit_transfer()
>>> migration/rdma: Simplify the function that saves a page
>>> migration: Create migrate_rdma()
>>> migration/rdma: Unfold ram_control_before_iterate()
>>> migration/rdma: Unfold ram_control_after_iterate()
>>> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
>>> migration/rdma: Unfold hook_ram_load()
>>> migration/rdma: Create rdma_control_save_page()
>>> qemu-file: Remove QEMUFileHooks
>>> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
>>> migration/rdma: Remove qemu_ prefix from exported functions
>>> migration/rdma: Check sooner if we are in postcopy for save_page()
>>> Vladimir Sementsov-Ogievskiy (5):
>>> runstate: add runstate_get()
>>> migration: never fail in global_state_store()
>>> runstate: drop unused runstate_store()
>>> migration: switch from .vm_was_running to .vm_old_state
>>> migration: restore vmstate on migration failure
>>
>> Appears to introduce multiple avocado failures:
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
>>
>> Test summary:
>> tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
>> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
>> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
>> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
>>
>> Test summary:
>> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
>> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
>> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>>
>> Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
>>
>> ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
>> not an instance of type qio-channel-rdma
>
> I am looking at the other errors, but this one is weird. It is failing
> here:
>
> #define TYPE_QIO_CHANNEL_RDMA "qio-channel-rdma"
> OBJECT_DECLARE_SIMPLE_TYPE(QIOChannelRDMA, QIO_CHANNEL_RDMA)
>
> In the OBJECT line.
>
> I have no clue what problem we are having here with the object system
> to decide at declaration time that a variable is not of the type that
> we are declaring.
>
> Am I missing something obvious here?
This is where the inline function is declared, but you need to look at the backtrace,
where you have applied QIO_CHANNEL_RDMA to an object that is *not* of that type.
r~
* Re: [PULL 00/21] Migration 20230530 patches
2023-05-31 21:28 ` Richard Henderson
@ 2023-06-01 6:47 ` Juan Quintela
0 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-06-01 6:47 UTC (permalink / raw)
To: Richard Henderson
Cc: Daniel Berrange, Markus Armbruster, qemu-devel, Paolo Bonzini,
Leonardo Bras, Peter Xu
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 5/31/23 14:03, Juan Quintela wrote:
>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>> On 5/30/23 11:25, Juan Quintela wrote:
>>>> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
>>>> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
>>>> into staging (2023-05-29 14:31:52 -0700)
>>>> are available in the Git repository at:
>>>> https://gitlab.com/juan.quintela/qemu.git
>>>> tags/migration-20230530-pull-request
>>>> for you to fetch changes up to
>>>> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
>>>> migration/rdma: Check sooner if we are in postcopy for
>>>> save_page() (2023-05-30 19:23:50 +0200)
>>>> ----------------------------------------------------------------
>> Added Markus and Daniel.
>>
>>>> Migration 20230530 Pull request (take 2)
>>>> Hi
>>>> Resend last PULL request, this time it compiles when CONFIG_RDMA is
>>>> not configured in.
>>>> [take 1]
>>>> On this PULL request:
>>>> - Set vmstate migration failure right (vladimir)
>>>> - Migration QEMUFileHook removal (juan)
>>>> - Migration Atomic counters (juan)
>>>> Please apply.
>>>> ----------------------------------------------------------------
>>>> Juan Quintela (16):
>>>> migration: Don't abuse qemu_file transferred for RDMA
>>>> migration/RDMA: It is accounting for zero/normal pages in two places
>>>> migration/rdma: Remove QEMUFile parameter when not used
>>>> migration/rdma: Don't use imaginary transfers
>>>> migration: Remove unused qemu_file_credit_transfer()
>>>> migration/rdma: Simplify the function that saves a page
>>>> migration: Create migrate_rdma()
>>>> migration/rdma: Unfold ram_control_before_iterate()
>>>> migration/rdma: Unfold ram_control_after_iterate()
>>>> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
>>>> migration/rdma: Unfold hook_ram_load()
>>>> migration/rdma: Create rdma_control_save_page()
>>>> qemu-file: Remove QEMUFileHooks
>>>> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
>>>> migration/rdma: Remove qemu_ prefix from exported functions
>>>> migration/rdma: Check sooner if we are in postcopy for save_page()
>>>> Vladimir Sementsov-Ogievskiy (5):
>>>> runstate: add runstate_get()
>>>> migration: never fail in global_state_store()
>>>> runstate: drop unused runstate_store()
>>>> migration: switch from .vm_was_running to .vm_old_state
>>>> migration: restore vmstate on migration failure
>>>
>>> Appears to introduce multiple avocado failures:
>>>
>>> https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
>>>
>>> Test summary:
>>> tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
>>> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
>>> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
>>> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142:
>>> check-avocado] Error 1
>>> https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
>>>
>>> Test summary:
>>> tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
>>> tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
>>> make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>>>
>>> Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
>>>
>>> ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
>>> not an instance of type qio-channel-rdma
>> I am looking at the other errors, but this one is weird. It is
>> failing
>> here:
>> #define TYPE_QIO_CHANNEL_RDMA "qio-channel-rdma"
>> OBJECT_DECLARE_SIMPLE_TYPE(QIOChannelRDMA, QIO_CHANNEL_RDMA)
>> In the OBJECT line.
>> I have no clue what problem are we having here with the object
>> system to
>> decide at declaration time that a variable is not of the type that we
>> are declaring.
>> I am missing something obvious here?
>
> This is where the inline function is declared, but you need to look at
> the backtrace, where you have applied QIO_CHANNEL_RDMA to an object
> that is *not* of that type.
Where is the stack trace?
Are you running aarch64 natively? Here, running qemu-system-aarch64 on
x86_64 works for me. Neither the avocado tests nor migration-test fails
with my changes.
Cleber found the reason why I was having trouble running avocado
locally. It appears that some change happened and several tests can no
longer be run in parallel. The (temporary) solution is to run it as:
make -j1 check-avocado
until they sort out which tests can and can't run in parallel.
Later, Juan.
* Re: [PULL 00/21] Migration 20230530 patches
[not found] ` <87mt1ktdr8.fsf@secure.mitica>
2023-05-31 21:28 ` Richard Henderson
@ 2023-06-01 8:27 ` Daniel P. Berrangé
2023-06-01 9:05 ` Daniel P. Berrangé
1 sibling, 1 reply; 37+ messages in thread
From: Daniel P. Berrangé @ 2023-06-01 8:27 UTC (permalink / raw)
To: Juan Quintela
Cc: Richard Henderson, Markus Armbruster, qemu-devel, Paolo Bonzini,
Leonardo Bras, Peter Xu
On Wed, May 31, 2023 at 11:03:23PM +0200, Juan Quintela wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
> > On 5/30/23 11:25, Juan Quintela wrote:
> >> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
> >> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
> >> into staging (2023-05-29 14:31:52 -0700)
> >> are available in the Git repository at:
> >> https://gitlab.com/juan.quintela/qemu.git
> >> tags/migration-20230530-pull-request
> >> for you to fetch changes up to
> >> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
> >> migration/rdma: Check sooner if we are in postcopy for
> >> save_page() (2023-05-30 19:23:50 +0200)
> >> ----------------------------------------------------------------
>
> Added Markus and Daniel.
>
> >> Migration 20230530 Pull request (take 2)
> >> Hi
> >> Resend last PULL request, this time it compiles when CONFIG_RDMA is
> >> not configured in.
> >> [take 1]
> >> On this PULL request:
> >> - Set vmstate migration failure right (vladimir)
> >> - Migration QEMUFileHook removal (juan)
> >> - Migration Atomic counters (juan)
> >> Please apply.
> >> ----------------------------------------------------------------
> >> Juan Quintela (16):
> >> migration: Don't abuse qemu_file transferred for RDMA
> >> migration/RDMA: It is accounting for zero/normal pages in two places
> >> migration/rdma: Remove QEMUFile parameter when not used
> >> migration/rdma: Don't use imaginary transfers
> >> migration: Remove unused qemu_file_credit_transfer()
> >> migration/rdma: Simplify the function that saves a page
> >> migration: Create migrate_rdma()
> >> migration/rdma: Unfold ram_control_before_iterate()
> >> migration/rdma: Unfold ram_control_after_iterate()
> >> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
> >> migration/rdma: Unfold hook_ram_load()
> >> migration/rdma: Create rdma_control_save_page()
> >> qemu-file: Remove QEMUFileHooks
> >> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
> >> migration/rdma: Remove qemu_ prefix from exported functions
> >> migration/rdma: Check sooner if we are in postcopy for save_page()
> >> Vladimir Sementsov-Ogievskiy (5):
> >> runstate: add runstate_get()
> >> migration: never fail in global_state_store()
> >> runstate: drop unused runstate_store()
> >> migration: switch from .vm_was_running to .vm_old_state
> >> migration: restore vmstate on migration failure
> >
> > Appears to introduce multiple avocado failures:
> >
> > https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
> >
> > Test summary:
> > tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
> > tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> > tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> > make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
> >
> > https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
> >
> > Test summary:
> > tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> > tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> > make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
> >
> > Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
> >
> > ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
> > not an instance of type qio-channel-rdma
>
> I am looking at the other errors, but this one is weird. It is failing
> here:
>
> #define TYPE_QIO_CHANNEL_RDMA "qio-channel-rdma"
> OBJECT_DECLARE_SIMPLE_TYPE(QIOChannelRDMA, QIO_CHANNEL_RDMA)
>
> In the OBJECT line.
>
> > I have no clue what problem we are having here with the object system
> > to decide at declaration time that a variable is not of the type that
> > we are declaring.
> >
> > Am I missing something obvious here?
I expect somewhere the code has either corrupted memory or is using
freed memory. Either way you'll need to get a stack trace to debug this
kind of thing.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [PULL 00/21] Migration 20230530 patches
2023-06-01 8:27 ` Daniel P. Berrangé
@ 2023-06-01 9:05 ` Daniel P. Berrangé
2023-06-01 11:46 ` Juan Quintela
0 siblings, 1 reply; 37+ messages in thread
From: Daniel P. Berrangé @ 2023-06-01 9:05 UTC (permalink / raw)
To: Juan Quintela, Richard Henderson, Markus Armbruster, qemu-devel,
Paolo Bonzini, Leonardo Bras, Peter Xu
On Thu, Jun 01, 2023 at 09:27:09AM +0100, Daniel P. Berrangé wrote:
> On Wed, May 31, 2023 at 11:03:23PM +0200, Juan Quintela wrote:
> > Richard Henderson <richard.henderson@linaro.org> wrote:
> > > On 5/30/23 11:25, Juan Quintela wrote:
> > >> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
> > >> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
> > >> into staging (2023-05-29 14:31:52 -0700)
> > >> are available in the Git repository at:
> > >> https://gitlab.com/juan.quintela/qemu.git
> > >> tags/migration-20230530-pull-request
> > >> for you to fetch changes up to
> > >> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
> > >> migration/rdma: Check sooner if we are in postcopy for
> > >> save_page() (2023-05-30 19:23:50 +0200)
> > >> ----------------------------------------------------------------
> >
> > Added Markus and Daniel.
> >
> > >> Migration 20230530 Pull request (take 2)
> > >> Hi
> > >> Resend last PULL request, this time it compiles when CONFIG_RDMA is
> > >> not configured in.
> > >> [take 1]
> > >> On this PULL request:
> > >> - Set vmstate migration failure right (vladimir)
> > >> - Migration QEMUFileHook removal (juan)
> > >> - Migration Atomic counters (juan)
> > >> Please apply.
> > >> ----------------------------------------------------------------
> > >> Juan Quintela (16):
> > >> migration: Don't abuse qemu_file transferred for RDMA
> > >> migration/RDMA: It is accounting for zero/normal pages in two places
> > >> migration/rdma: Remove QEMUFile parameter when not used
> > >> migration/rdma: Don't use imaginary transfers
> > >> migration: Remove unused qemu_file_credit_transfer()
> > >> migration/rdma: Simplify the function that saves a page
> > >> migration: Create migrate_rdma()
> > >> migration/rdma: Unfold ram_control_before_iterate()
> > >> migration/rdma: Unfold ram_control_after_iterate()
> > >> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
> > >> migration/rdma: Unfold hook_ram_load()
> > >> migration/rdma: Create rdma_control_save_page()
> > >> qemu-file: Remove QEMUFileHooks
> > >> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
> > >> migration/rdma: Remove qemu_ prefix from exported functions
> > >> migration/rdma: Check sooner if we are in postcopy for save_page()
> > >> Vladimir Sementsov-Ogievskiy (5):
> > >> runstate: add runstate_get()
> > >> migration: never fail in global_state_store()
> > >> runstate: drop unused runstate_store()
> > >> migration: switch from .vm_was_running to .vm_old_state
> > >> migration: restore vmstate on migration failure
> > >
> > > Appears to introduce multiple avocado failures:
> > >
> > > https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
> > >
> > > Test summary:
> > > tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
> > > tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> > > tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> > > make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
> > >
> > > https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
> > >
> > > Test summary:
> > > tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
> > > tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
> > > make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
> > >
> > > Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
> > >
> > > ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
> > > not an instance of type qio-channel-rdma
> >
> > I am looking at the other errors, but this one is weird. It is failing
> > here:
> >
> > #define TYPE_QIO_CHANNEL_RDMA "qio-channel-rdma"
> > OBJECT_DECLARE_SIMPLE_TYPE(QIOChannelRDMA, QIO_CHANNEL_RDMA)
> >
> > In the OBJECT line.
> >
> > I have no clue what problem we are having here with the object system,
> > for it to decide at declaration time that a variable is not of the type
> > that we are declaring.
> >
> > Am I missing something obvious here?
>
> I expect that somewhere the code has either corrupted memory or is
> using freed memory. Either way you'll need to get a stack trace
> to debug this kind of thing.
I've replied to the patches pointing out 4 places where the code
casts to QIOChannelRDMA, without first checking that this is an
RDMA migration, which look likely to be the cause of this.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PULL 00/21] Migration 20230530 patches
2023-06-01 9:05 ` Daniel P. Berrangé
@ 2023-06-01 11:46 ` Juan Quintela
0 siblings, 0 replies; 37+ messages in thread
From: Juan Quintela @ 2023-06-01 11:46 UTC (permalink / raw)
To: Daniel P. Berrangé
Cc: Richard Henderson, Markus Armbruster, qemu-devel, Paolo Bonzini,
Leonardo Bras, Peter Xu
Daniel P. Berrangé <berrange@redhat.com> wrote:
> On Thu, Jun 01, 2023 at 09:27:09AM +0100, Daniel P. Berrangé wrote:
>> On Wed, May 31, 2023 at 11:03:23PM +0200, Juan Quintela wrote:
>> > Richard Henderson <richard.henderson@linaro.org> wrote:
>> > > On 5/30/23 11:25, Juan Quintela wrote:
>> > >> The following changes since commit aa9bbd865502ed517624ab6fe7d4b5d89ca95e43:
>> > >> Merge tag 'pull-ppc-20230528' of https://gitlab.com/danielhb/qemu
>> > >> into staging (2023-05-29 14:31:52 -0700)
>> > >> are available in the Git repository at:
>> > >> https://gitlab.com/juan.quintela/qemu.git
>> > >> tags/migration-20230530-pull-request
>> > >> for you to fetch changes up to
>> > >> c63c544005e6b1375a9c038f0e0fb8dfb8b249f4:
>> > >> migration/rdma: Check sooner if we are in postcopy for
>> > >> save_page() (2023-05-30 19:23:50 +0200)
>> > >> ----------------------------------------------------------------
>> >
>> > Added Markus and Daniel.
>> >
>> > >> Migration 20230530 Pull request (take 2)
>> > >> Hi
>> > >> Resend last PULL request, this time it compiles when CONFIG_RDMA is
>> > >> not configured in.
>> > >> [take 1]
>> > >> On this PULL request:
>> > >> - Set vmstate migration failure right (vladimir)
>> > >> - Migration QEMUFileHook removal (juan)
>> > >> - Migration Atomic counters (juan)
>> > >> Please apply.
>> > >> ----------------------------------------------------------------
>> > >> Juan Quintela (16):
>> > >> migration: Don't abuse qemu_file transferred for RDMA
>> > >> migration/RDMA: It is accounting for zero/normal pages in two places
>> > >> migration/rdma: Remove QEMUFile parameter when not used
>> > >> migration/rdma: Don't use imaginary transfers
>> > >> migration: Remove unused qemu_file_credit_transfer()
>> > >> migration/rdma: Simplify the function that saves a page
>> > >> migration: Create migrate_rdma()
>> > >> migration/rdma: Unfold ram_control_before_iterate()
>> > >> migration/rdma: Unfold ram_control_after_iterate()
>> > >> migration/rdma: Remove all uses of RAM_CONTROL_HOOK
>> > >> migration/rdma: Unfold hook_ram_load()
>> > >> migration/rdma: Create rdma_control_save_page()
>> > >> qemu-file: Remove QEMUFileHooks
>> > >> migration/rdma: Move rdma constants from qemu-file.h to rdma.h
>> > >> migration/rdma: Remove qemu_ prefix from exported functions
>> > >> migration/rdma: Check sooner if we are in postcopy for save_page()
>> > >> Vladimir Sementsov-Ogievskiy (5):
>> > >> runstate: add runstate_get()
>> > >> migration: never fail in global_state_store()
>> > >> runstate: drop unused runstate_store()
>> > >> migration: switch from .vm_was_running to .vm_old_state
>> > >> migration: restore vmstate on migration failure
>> > >
>> > > Appears to introduce multiple avocado failures:
>> > >
>> > > https://gitlab.com/qemu-project/qemu/-/jobs/4378066518#L286
>> > >
>> > > Test summary:
>> > > tests/avocado/migration.py:X86_64.test_migration_with_exec: ERROR
>> > > tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
>> > > tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
>> > > make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>> > >
>> > > https://gitlab.com/qemu-project/qemu/-/jobs/4378066523#L387
>> > >
>> > > Test summary:
>> > > tests/avocado/migration.py:X86_64.test_migration_with_tcp_localhost: ERROR
>> > > tests/avocado/migration.py:X86_64.test_migration_with_unix: ERROR
>> > > make: *** [/builds/qemu-project/qemu/tests/Makefile.include:142: check-avocado] Error 1
>> > >
>> > > Also fails QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/migration-test
>> > >
>> > > ../src/migration/rdma.c:408:QIO_CHANNEL_RDMA: Object 0xaaaaf7bba680 is
>> > > not an instance of type qio-channel-rdma
>> >
>> > I am looking at the other errors, but this one is weird. It is failing
>> > here:
>> >
>> > #define TYPE_QIO_CHANNEL_RDMA "qio-channel-rdma"
>> > OBJECT_DECLARE_SIMPLE_TYPE(QIOChannelRDMA, QIO_CHANNEL_RDMA)
>> >
>> > In the OBJECT line.
>> >
>> > I have no clue what problem we are having here with the object system,
>> > for it to decide at declaration time that a variable is not of the type
>> > that we are declaring.
>> >
>> > Am I missing something obvious here?
>>
>> I expect that somewhere the code has either corrupted memory or is
>> using freed memory. Either way you'll need to get a stack trace
>> to debug this kind of thing.
>
> I've replied to the patches pointing out 4 places where the code
> casts to QIOChannelRDMA, without first checking that this is an
> RDMA migration, which look likely to be the cause of this.
Good catch.
I can only say: Ouch.
And I wonder why it did not fail for me. It passes for me:
- make check (compiled every target/device/... that can be compiled on
Fedora 38)
- I ran migration-test hundreds of times during development; it never
failed like that
- I am switching to aarch64 TCG as my main test target, because it
appears to find way more bugs in migration-tests.
Thanks again.
Later, Juan.