From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	"David Hildenbrand" <david@redhat.com>,
	"Laurent Vivier" <laurent@vivier.eu>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Fam Zheng" <fam@euphon.net>,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Thomas Huth" <thuth@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	qemu-trivial@nongnu.org, "Michael Tokarev" <mjt@tls.msk.ru>,
	"Daniel P. Berrangé" <berrange@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Peter Xu" <peterx@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>
Subject: [PATCH 21/30] migration: Use atomic ops properly for page accountings
Date: Tue, 15 Nov 2022 13:12:17 +0100
Message-ID: <20221115121226.26609-22-quintela@redhat.com>
In-Reply-To: <20221115121226.26609-1-quintela@redhat.com>

From: Peter Xu <peterx@redhat.com>

To prepare for thread-safe page accounting, at least the following
counters need to be accessed atomically:

        ram_counters.transferred
        ram_counters.duplicate
        ram_counters.normal
        ram_counters.postcopy_bytes

There are many other counters, but they are never accessed outside the
migration thread, so they remain safe to update without atomic ops.
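
For context, here is a minimal sketch of the access pattern this patch
switches to.  It uses only the existing stat64_add()/stat64_get() helpers
from "qemu/stats64.h"; the structure and function names below are
hypothetical illustrations, not part of this patch:

    #include "qemu/stats64.h"

    /* Mirrors the MigrationAtomicStats fields introduced by this patch. */
    typedef struct {
        Stat64 transferred;
        Stat64 duplicate;
        Stat64 normal;
        Stat64 postcopy_bytes;
    } ExampleAtomicStats;

    static ExampleAtomicStats example_counters;

    /* Writer side: any migration or multifd thread, no lock needed. */
    static void example_account_normal_page(uint64_t page_size)
    {
        stat64_add(&example_counters.normal, 1);
        stat64_add(&example_counters.transferred, page_size);
    }

    /* Reader side: e.g. the QMP query path takes a 64-bit snapshot. */
    static uint64_t example_transferred_bytes(void)
    {
        return stat64_get(&example_counters.transferred);
    }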

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.h       | 20 ++++++++++++++++++++
 migration/migration.c | 10 +++++-----
 migration/multifd.c   |  4 ++--
 migration/ram.c       | 40 ++++++++++++++++++++++++----------------
 4 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/migration/ram.h b/migration/ram.h
index 038d52f49f..81cbb0947c 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -32,7 +32,27 @@
 #include "qapi/qapi-types-migration.h"
 #include "exec/cpu-common.h"
 #include "io/channel.h"
+#include "qemu/stats64.h"
 
+/*
+ * These are the migration statistic counters that need to be updated
+ * atomically (they can be accessed by more than one thread).  Since we
+ * cannot change MigrationStats itself to use Stat64 (it is defined in
+ * the QAPI schema), we define an internal structure to hold them and
+ * propagate the real values when QMP queries happen.
+ *
+ * In other words, the corresponding fields in ram_counters are always
+ * zero and never used; they are just placeholders kept for QAPI
+ * compatibility.
+ */
+typedef struct {
+    Stat64 transferred;
+    Stat64 duplicate;
+    Stat64 normal;
+    Stat64 postcopy_bytes;
+} MigrationAtomicStats;
+
+extern MigrationAtomicStats ram_atomic_counters;
 extern MigrationStats ram_counters;
 extern XBZRLECacheStats xbzrle_counters;
 extern CompressionStats compression_counters;
diff --git a/migration/migration.c b/migration/migration.c
index 9fbed8819a..1f95877fb4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1069,13 +1069,13 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 
     info->has_ram = true;
     info->ram = g_malloc0(sizeof(*info->ram));
-    info->ram->transferred = ram_counters.transferred;
+    info->ram->transferred = stat64_get(&ram_atomic_counters.transferred);
     info->ram->total = ram_bytes_total();
-    info->ram->duplicate = ram_counters.duplicate;
+    info->ram->duplicate = stat64_get(&ram_atomic_counters.duplicate);
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
-    info->ram->normal = ram_counters.normal;
-    info->ram->normal_bytes = ram_counters.normal * page_size;
+    info->ram->normal = stat64_get(&ram_atomic_counters.normal);
+    info->ram->normal_bytes = info->ram->normal * page_size;
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
     info->ram->dirty_sync_missed_zero_copy =
@@ -1086,7 +1086,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->pages_per_second = s->pages_per_second;
     info->ram->precopy_bytes = ram_counters.precopy_bytes;
     info->ram->downtime_bytes = ram_counters.downtime_bytes;
-    info->ram->postcopy_bytes = ram_counters.postcopy_bytes;
+    info->ram->postcopy_bytes = stat64_get(&ram_atomic_counters.postcopy_bytes);
 
     if (migrate_use_xbzrle()) {
         info->has_xbzrle_cache = true;
diff --git a/migration/multifd.c b/migration/multifd.c
index c40d98ad5c..7d3aec9a52 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -432,7 +432,7 @@ static int multifd_send_pages(QEMUFile *f)
     transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
     qemu_file_acct_rate_limit(f, transferred);
     ram_counters.multifd_bytes += transferred;
-    ram_counters.transferred += transferred;
+    stat64_add(&ram_atomic_counters.transferred, transferred);
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
@@ -624,7 +624,7 @@ int multifd_send_sync_main(QEMUFile *f)
         p->pending_job++;
         qemu_file_acct_rate_limit(f, p->packet_len);
         ram_counters.multifd_bytes += p->packet_len;
-        ram_counters.transferred += p->packet_len;
+        stat64_add(&ram_atomic_counters.transferred, p->packet_len);
         qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
 
diff --git a/migration/ram.c b/migration/ram.c
index 6428138194..25fd3cf7dc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -453,18 +453,25 @@ uint64_t ram_bytes_remaining(void)
                        0;
 }
 
+/*
+ * NOTE: not all stats in ram_counters are used in practice.  See the
+ * comment for struct MigrationAtomicStats.  The final RAM migration
+ * counters are a merged view of ram_counters and the atomic fields in
+ * ram_atomic_counters.
+ */
 MigrationStats ram_counters;
+MigrationAtomicStats ram_atomic_counters;
 
 void ram_transferred_add(uint64_t bytes)
 {
     if (runstate_is_running()) {
         ram_counters.precopy_bytes += bytes;
     } else if (migration_in_postcopy()) {
-        ram_counters.postcopy_bytes += bytes;
+        stat64_add(&ram_atomic_counters.postcopy_bytes, bytes);
     } else {
         ram_counters.downtime_bytes += bytes;
     }
-    ram_counters.transferred += bytes;
+    stat64_add(&ram_atomic_counters.transferred, bytes);
 }
 
 void dirty_sync_missed_zero_copy(void)
@@ -753,7 +760,7 @@ void mig_throttle_counter_reset(void)
 
     rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     rs->num_dirty_pages_period = 0;
-    rs->bytes_xfer_prev = ram_counters.transferred;
+    rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
 }
 
 /**
@@ -1113,8 +1120,9 @@ uint64_t ram_pagesize_summary(void)
 
 uint64_t ram_get_total_transferred_pages(void)
 {
-    return  ram_counters.normal + ram_counters.duplicate +
-                compression_counters.pages + xbzrle_counters.pages;
+    return  stat64_get(&ram_atomic_counters.normal) +
+        stat64_get(&ram_atomic_counters.duplicate) +
+        compression_counters.pages + xbzrle_counters.pages;
 }
 
 static void migration_update_rates(RAMState *rs, int64_t end_time)
@@ -1173,8 +1181,8 @@ static void migration_trigger_throttle(RAMState *rs)
 {
     MigrationState *s = migrate_get_current();
     uint64_t threshold = s->parameters.throttle_trigger_threshold;
-
-    uint64_t bytes_xfer_period = ram_counters.transferred - rs->bytes_xfer_prev;
+    uint64_t bytes_xfer_period =
+        stat64_get(&ram_atomic_counters.transferred) - rs->bytes_xfer_prev;
     uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
     uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
 
@@ -1237,7 +1245,7 @@ static void migration_bitmap_sync(RAMState *rs)
         /* reset period counters */
         rs->time_last_bitmap_sync = end_time;
         rs->num_dirty_pages_period = 0;
-        rs->bytes_xfer_prev = ram_counters.transferred;
+        rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
     }
     if (migrate_use_events()) {
         qapi_event_send_migration_pass(ram_counters.dirty_sync_count);
@@ -1313,7 +1321,7 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
     int len = save_zero_page_to_file(rs, rs->f, block, offset);
 
     if (len) {
-        ram_counters.duplicate++;
+        stat64_add(&ram_atomic_counters.duplicate, 1);
         ram_transferred_add(len);
         return 1;
     }
@@ -1350,9 +1358,9 @@ static bool control_save_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
     }
 
     if (bytes_xmit > 0) {
-        ram_counters.normal++;
+        stat64_add(&ram_atomic_counters.normal, 1);
     } else if (bytes_xmit == 0) {
-        ram_counters.duplicate++;
+        stat64_add(&ram_atomic_counters.duplicate, 1);
     }
 
     return true;
@@ -1382,7 +1390,7 @@ static int save_normal_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
         qemu_put_buffer(rs->f, buf, TARGET_PAGE_SIZE);
     }
     ram_transferred_add(TARGET_PAGE_SIZE);
-    ram_counters.normal++;
+    stat64_add(&ram_atomic_counters.normal, 1);
     return 1;
 }
 
@@ -1438,7 +1446,7 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
     if (multifd_queue_page(rs->f, block, offset) < 0) {
         return -1;
     }
-    ram_counters.normal++;
+    stat64_add(&ram_atomic_counters.normal, 1);
 
     return 1;
 }
@@ -1476,7 +1484,7 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
     ram_transferred_add(bytes_xmit);
 
     if (param->zero_page) {
-        ram_counters.duplicate++;
+        stat64_add(&ram_atomic_counters.duplicate, 1);
         return;
     }
 
@@ -2651,9 +2659,9 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     uint64_t pages = size / TARGET_PAGE_SIZE;
 
     if (zero) {
-        ram_counters.duplicate += pages;
+        stat64_add(&ram_atomic_counters.duplicate, pages);
     } else {
-        ram_counters.normal += pages;
+        stat64_add(&ram_atomic_counters.normal, pages);
         ram_transferred_add(size);
         qemu_file_credit_transfer(f, size);
     }
-- 
2.38.1



Thread overview: 32+ messages
2022-11-15 12:11 [PATCH 00/30] Migration PULL request Juan Quintela
2022-11-15 12:11 ` [PATCH 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
2022-11-15 12:11 ` [PATCH 02/30] migration/multifd/zero-copy: Create helper function for flushing Juan Quintela
2022-11-15 12:11 ` [PATCH 03/30] migration: check magic value for deciding the mapping of channels Juan Quintela
2022-11-15 12:12 ` [PATCH 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
2022-11-15 12:12 ` [PATCH 05/30] multifd: Create page_count " Juan Quintela
2022-11-15 12:12 ` [PATCH 06/30] migration: Export ram_transferred_ram() Juan Quintela
2022-11-15 12:12 ` [PATCH 07/30] migration: Export ram_release_page() Juan Quintela
2022-11-15 12:12 ` [PATCH 08/30] Update AVX512 support for xbzrle_encode_buffer Juan Quintela
2022-11-15 12:12 ` [PATCH 09/30] Unit test code and benchmark code Juan Quintela
2022-11-15 12:12 ` [PATCH 10/30] migration: Fix possible infinite loop of ram save process Juan Quintela
2022-11-15 12:12 ` [PATCH 11/30] migration: Fix race on qemu_file_shutdown() Juan Quintela
2022-11-15 12:12 ` [PATCH 12/30] migration: Disallow postcopy preempt to be used with compress Juan Quintela
2022-11-15 12:12 ` [PATCH 13/30] migration: Use non-atomic ops for clear log bitmap Juan Quintela
2022-11-15 12:12 ` [PATCH 14/30] migration: Disable multifd explicitly with compression Juan Quintela
2022-11-15 12:12 ` [PATCH 15/30] migration: Take bitmap mutex when completing ram migration Juan Quintela
2022-11-15 12:12 ` [PATCH 16/30] migration: Add postcopy_preempt_active() Juan Quintela
2022-11-15 12:12 ` [PATCH 17/30] migration: Cleanup xbzrle zero page cache update logic Juan Quintela
2022-11-15 12:12 ` [PATCH 18/30] migration: Trivial cleanup save_page_header() on same block check Juan Quintela
2022-11-15 12:12 ` [PATCH 19/30] migration: Remove RAMState.f references in compression code Juan Quintela
2022-11-15 12:12 ` [PATCH 20/30] migration: Yield bitmap_mutex properly when sending/sleeping Juan Quintela
2022-11-15 12:12 ` Juan Quintela [this message]
2022-11-15 12:12 ` [PATCH 22/30] migration: Teach PSS about host page Juan Quintela
2022-11-15 12:12 ` [PATCH 23/30] migration: Introduce pss_channel Juan Quintela
2022-11-15 12:12 ` [PATCH 24/30] migration: Add pss_init() Juan Quintela
2022-11-15 12:12 ` [PATCH 25/30] migration: Make PageSearchStatus part of RAMState Juan Quintela
2022-11-15 12:12 ` [PATCH 26/30] migration: Move last_sent_block into PageSearchStatus Juan Quintela
2022-11-15 12:12 ` [PATCH 27/30] migration: Send requested page directly in rp-return thread Juan Quintela
2022-11-15 12:12 ` [PATCH 28/30] migration: Remove old preempt code around state maintainance Juan Quintela
2022-11-15 12:12 ` [PATCH 29/30] migration: Drop rs->f Juan Quintela
2022-11-15 12:12 ` [PATCH 30/30] migration: Block migration comment or code is wrong Juan Quintela
2022-11-15 14:55 ` [PATCH 00/30] Migration PULL request Stefan Hajnoczi
