From: Hao Xiang <hao.xiang@bytedance.com>
To: pbonzini@redhat.com, berrange@redhat.com, eduardo@habkost.net,
	peterx@redhat.com, farosas@suse.de, eblake@redhat.com,
	armbru@redhat.com, thuth@redhat.com, lvivier@redhat.com,
	jdenemar@redhat.com, marcel.apfelbaum@gmail.com,
	philmd@linaro.org, wangyanan55@huawei.com, qemu-devel@nongnu.org
Cc: Hao Xiang <hao.xiang@bytedance.com>
Subject: [PATCH v4 3/7] migration/multifd: Implement ram_save_target_page_multifd to handle multifd version of MigrationOps::ram_save_target_page.
Date: Fri,  1 Mar 2024 02:28:25 +0000	[thread overview]
Message-ID: <20240301022829.3390548-4-hao.xiang@bytedance.com> (raw)
In-Reply-To: <20240301022829.3390548-1-hao.xiang@bytedance.com>

1. Add a dedicated handler for MigrationOps::ram_save_target_page for
multifd live migration.
2. Refactor ram_save_target_page_legacy so that the legacy and multifd
handlers no longer call into each other's internals.

Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20240226195654.934709-4-hao.xiang@bytedance.com>
---
 migration/ram.c | 43 ++++++++++++++++++++++++++++++-------------
 1 file changed, 30 insertions(+), 13 deletions(-)
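
For readers not familiar with the MigrationOps hook that the commit message
refers to: the patch selects one of two per-page handlers once at setup time
through a function pointer, and every dirty target page is then sent through
that hook. The standalone sketch below models only that dispatch with stub
types; the handler and hook names mirror the patch, but the types, values,
and printed summaries are simplified stand-ins, not the actual QEMU code.

#include <stdio.h>
#include <stdbool.h>

/* Stub stand-ins for QEMU's real types; hypothetical, for illustration only. */
typedef struct RAMState RAMState;
typedef struct PageSearchStatus PageSearchStatus;

typedef struct MigrationOps {
    /* Per target-page hook, chosen once during setup. */
    int (*ram_save_target_page)(RAMState *rs, PageSearchStatus *pss);
} MigrationOps;

static int ram_save_target_page_legacy(RAMState *rs, PageSearchStatus *pss)
{
    (void)rs; (void)pss;
    printf("legacy path: zero-page check, optional compression, ram_save_page()\n");
    return 1;
}

static int ram_save_target_page_multifd(RAMState *rs, PageSearchStatus *pss)
{
    (void)rs; (void)pss;
    printf("multifd path: optional legacy zero-page check, then queue to multifd\n");
    return 1;
}

int main(void)
{
    bool multifd_enabled = true;   /* stand-in for migrate_multifd() */
    MigrationOps ops = { 0 };

    /* Mirrors the hunk in ram_save_setup(): pick the handler once. */
    ops.ram_save_target_page = multifd_enabled ? ram_save_target_page_multifd
                                               : ram_save_target_page_legacy;

    /* Per-page senders such as ram_save_host_page() then go through the hook. */
    return ops.ram_save_target_page(NULL, NULL) == 1 ? 0 : 1;
}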

diff --git a/migration/ram.c b/migration/ram.c
index e1fa229acf..f9d6ea65cc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1122,10 +1122,6 @@ static int save_zero_page(RAMState *rs, PageSearchStatus *pss,
     QEMUFile *file = pss->pss_channel;
     int len = 0;
 
-    if (migrate_zero_page_detection() == ZERO_PAGE_DETECTION_NONE) {
-        return 0;
-    }
-
     if (!buffer_is_zero(p, TARGET_PAGE_SIZE)) {
         return 0;
     }
@@ -2045,7 +2041,6 @@ static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
  */
 static int ram_save_target_page_legacy(RAMState *rs, PageSearchStatus *pss)
 {
-    RAMBlock *block = pss->block;
     ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
     int res;
 
@@ -2061,17 +2056,34 @@ static int ram_save_target_page_legacy(RAMState *rs, PageSearchStatus *pss)
         return 1;
     }
 
+    return ram_save_page(rs, pss);
+}
+
+/**
+ * ram_save_target_page_multifd: send one target page to multifd workers
+ *
+ * Returns 1 if the page was queued, -1 otherwise.
+ *
+ * @rs: current RAM state
+ * @pss: data about the page we want to send
+ */
+static int ram_save_target_page_multifd(RAMState *rs, PageSearchStatus *pss)
+{
+    RAMBlock *block = pss->block;
+    ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
+
     /*
-     * Do not use multifd in postcopy as one whole host page should be
-     * placed.  Meanwhile postcopy requires atomic update of pages, so even
-     * if host page size == guest page size the dest guest during run may
-     * still see partially copied pages which is data corruption.
+     * Backward compatibility support. While using multifd live
+     * migration, we still need to handle zero page checking on the
+     * migration main thread.
      */
-    if (migrate_multifd() && !migration_in_postcopy()) {
-        return ram_save_multifd_page(block, offset);
+    if (migrate_zero_page_detection() == ZERO_PAGE_DETECTION_LEGACY) {
+        if (save_zero_page(rs, pss, offset)) {
+            return 1;
+        }
     }
 
-    return ram_save_page(rs, pss);
+    return ram_save_multifd_page(block, offset);
 }
 
 /* Should be called before sending a host page */
@@ -2983,7 +2995,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     }
 
     migration_ops = g_malloc0(sizeof(MigrationOps));
-    migration_ops->ram_save_target_page = ram_save_target_page_legacy;
+
+    if (migrate_multifd()) {
+        migration_ops->ram_save_target_page = ram_save_target_page_multifd;
+    } else {
+        migration_ops->ram_save_target_page = ram_save_target_page_legacy;
+    }
 
     bql_unlock();
     ret = multifd_send_sync_main();
-- 
2.30.2
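As a note on the backward-compatibility branch in ram_save_target_page_multifd:
with multifd enabled, the zero-page-detection mode introduced earlier in this
series decides where the zero-page scan runs. The standalone sketch below
models only that decision; the enum names mirror the series, everything else
(the helper name and the returned strings) is a hypothetical simplification
rather than code from the patch.

#include <stdio.h>

/* Stand-in for the zero-page-detection modes added in patch 1/7 of the series. */
typedef enum {
    ZERO_PAGE_DETECTION_NONE,
    ZERO_PAGE_DETECTION_LEGACY,
    ZERO_PAGE_DETECTION_MULTIFD,
} ZeroPageDetection;

/* Where a zero page gets detected when multifd is in use, per this patch. */
static const char *zero_page_check_location(ZeroPageDetection mode)
{
    switch (mode) {
    case ZERO_PAGE_DETECTION_LEGACY:
        /* ram_save_target_page_multifd() still calls save_zero_page()
         * on the migration main thread before queueing the page. */
        return "migration main thread";
    case ZERO_PAGE_DETECTION_MULTIFD:
        /* Detection is deferred to the multifd send threads,
         * introduced elsewhere in this series. */
        return "multifd send threads";
    case ZERO_PAGE_DETECTION_NONE:
    default:
        /* No scan at all; zero pages are transmitted like normal pages. */
        return "nowhere";
    }
}

int main(void)
{
    printf("legacy:  %s\n", zero_page_check_location(ZERO_PAGE_DETECTION_LEGACY));
    printf("multifd: %s\n", zero_page_check_location(ZERO_PAGE_DETECTION_MULTIFD));
    printf("none:    %s\n", zero_page_check_location(ZERO_PAGE_DETECTION_NONE));
    return 0;
}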




Thread overview: 29+ messages
2024-03-01  2:28 [PATCH v4 0/7] Introduce multifd zero page checking Hao Xiang
2024-03-01  2:28 ` [PATCH v4 1/7] migration/multifd: Add new migration option zero-page-detection Hao Xiang
2024-03-01  7:24   ` Markus Armbruster
2024-03-04  7:01   ` Peter Xu
2024-03-01  2:28 ` [PATCH v4 2/7] migration/multifd: Implement zero page transmission on the multifd thread Hao Xiang
2024-03-01  7:28   ` Markus Armbruster
2024-03-01 22:49     ` [External] " Hao Xiang
2024-03-04  7:16   ` Peter Xu
2024-03-04 13:17     ` Fabiano Rosas
2024-03-04 14:31       ` Fabiano Rosas
2024-03-04 14:39         ` Fabiano Rosas
2024-03-04 18:24       ` Fabiano Rosas
     [not found]         ` <CAAYibXiLLztnPnKkGZKgXpD8HfSsFqdmhUGcETpzQDUoURRNwg@mail.gmail.com>
2024-03-09  8:08           ` hao.xiang
     [not found]     ` <CAAYibXi0xjpwayO1u8P4skjpeOuUteyuRmrhFHmjFwoRF2JWJg@mail.gmail.com>
2024-03-09  2:37       ` [External] " hao.xiang
2024-03-01  2:28 ` Hao Xiang [this message]
2024-03-04  7:46   ` [PATCH v4 3/7] migration/multifd: Implement ram_save_target_page_multifd to handle multifd version of MigrationOps::ram_save_target_page Peter Xu
     [not found]     ` <CAAYibXhCzozRhHxp2Dk3L9BMhFhZtqyvgbwkj+8ZGMCHURZGug@mail.gmail.com>
2024-03-09  2:06       ` hao.xiang
2024-03-11 13:20         ` Peter Xu
2024-03-11 18:02           ` hao.xiang
2024-03-01  2:28 ` [PATCH v4 4/7] migration/multifd: Enable multifd zero page checking by default Hao Xiang
2024-03-04  7:20   ` Peter Xu
2024-03-01  2:28 ` [PATCH v4 5/7] migration/multifd: Add new migration test cases for legacy zero page checking Hao Xiang
2024-03-04  7:23   ` Peter Xu
2024-03-01  2:28 ` [PATCH v4 6/7] migration/multifd: Add zero pages and zero bytes counter to migration status interface Hao Xiang
2024-03-01  7:40   ` Markus Armbruster
     [not found]     ` <CAAYibXjyMT5YJqOcDheDUB1qzi+JjFhAcv3L57zM9pCFMGbYbw@mail.gmail.com>
2024-03-09  6:56       ` [External] " hao.xiang
2024-03-01  2:28 ` [PATCH v4 7/7] Update maintainer contact for migration multifd zero page checking acceleration Hao Xiang
2024-03-04  7:34   ` Peter Xu
     [not found]     ` <CAAYibXjoji3GY7TW_USFsuT3YyVnv_kGFXpvBgK_kf9i1S1VSw@mail.gmail.com>
2024-03-09  8:13       ` hao.xiang
