From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	"Corey Minyard" <cminyard@mvista.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Peter Xu" <peterx@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"David Gibson" <david@gibson.dropbear.id.au>,
	"Laurent Vivier" <lvivier@redhat.com>,
	"Thomas Huth" <thuth@redhat.com>,
	"Eduardo Habkost" <ehabkost@redhat.com>,
	"Stefan Weil" <sw@weilnetz.de>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	qemu-arm@nongnu.org, "Richard Henderson" <rth@twiddle.net>,
	"Daniel P. Berrangé" <berrange@redhat.com>,
	qemu-ppc@nongnu.org, "Lin Ma" <LMa@suse.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Stefan Berger" <stefanb@linux.ibm.com>
Subject: [PULL 09/30] migration: Rate limit inside host pages
Date: Tue, 14 Jan 2020 13:52:33 +0100	[thread overview]
Message-ID: <20200114125254.4515-10-quintela@redhat.com> (raw)
In-Reply-To: <20200114125254.4515-1-quintela@redhat.com>

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

When using hugepages, rate limiting must happen within each huge
page: a 1G huge page can take a significant time to send, so rate
limiting only at huge-page boundaries leads to bursty behaviour.

Fixes: 4c011c37ecb3 ("postcopy: Send whole huge pages")
Reported-by: Lin Ma <LMa@suse.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c  | 57 ++++++++++++++++++++++++------------------
 migration/migration.h  |  1 +
 migration/ram.c        |  2 ++
 migration/trace-events |  4 +--
 4 files changed, 37 insertions(+), 27 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 354ad072fa..27500d09a9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3224,6 +3224,37 @@ void migration_consume_urgent_request(void)
     qemu_sem_wait(&migrate_get_current()->rate_limit_sem);
 }
 
+/* Returns true if the rate limiting was broken by an urgent request */
+bool migration_rate_limit(void)
+{
+    int64_t now = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+    MigrationState *s = migrate_get_current();
+
+    bool urgent = false;
+    migration_update_counters(s, now);
+    if (qemu_file_rate_limit(s->to_dst_file)) {
+        /*
+         * Wait for a delay to do rate limiting OR
+         * something urgent to post the semaphore.
+         */
+        int ms = s->iteration_start_time + BUFFER_DELAY - now;
+        trace_migration_rate_limit_pre(ms);
+        if (qemu_sem_timedwait(&s->rate_limit_sem, ms) == 0) {
+            /*
+             * We were woken by one or more urgent things but
+             * the timedwait will have consumed one of them.
+             * The service routine for the urgent wake will dec
+             * the semaphore itself for each item it consumes,
+             * so post back the one we just consumed.
+             */
+            qemu_sem_post(&s->rate_limit_sem);
+            urgent = true;
+        }
+        trace_migration_rate_limit_post(urgent);
+    }
+    return urgent;
+}
+
 /*
  * Master migration thread on the source VM.
  * It drives the migration and pumps the data down the outgoing channel.
@@ -3290,8 +3321,6 @@ static void *migration_thread(void *opaque)
     trace_migration_thread_setup_complete();
 
     while (migration_is_active(s)) {
-        int64_t current_time;
-
         if (urgent || !qemu_file_rate_limit(s->to_dst_file)) {
             MigIterateState iter_state = migration_iteration_run(s);
             if (iter_state == MIG_ITERATE_SKIP) {
@@ -3318,29 +3347,7 @@ static void *migration_thread(void *opaque)
             update_iteration_initial_status(s);
         }
 
-        current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-
-        migration_update_counters(s, current_time);
-
-        urgent = false;
-        if (qemu_file_rate_limit(s->to_dst_file)) {
-            /* Wait for a delay to do rate limiting OR
-             * something urgent to post the semaphore.
-             */
-            int ms = s->iteration_start_time + BUFFER_DELAY - current_time;
-            trace_migration_thread_ratelimit_pre(ms);
-            if (qemu_sem_timedwait(&s->rate_limit_sem, ms) == 0) {
-                /* We were worken by one or more urgent things but
-                 * the timedwait will have consumed one of them.
-                 * The service routine for the urgent wake will dec
-                 * the semaphore itself for each item it consumes,
-                 * so add this one we just eat back.
-                 */
-                qemu_sem_post(&s->rate_limit_sem);
-                urgent = true;
-            }
-            trace_migration_thread_ratelimit_post(urgent);
-        }
+        urgent = migration_rate_limit();
     }
 
     trace_migration_thread_after_loop();
diff --git a/migration/migration.h b/migration/migration.h
index 79b3dda146..aa9ff6f27b 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -341,5 +341,6 @@ int foreach_not_ignored_block(RAMBlockIterFunc func, void *opaque);
 
 void migration_make_urgent_request(void);
 void migration_consume_urgent_request(void);
+bool migration_rate_limit(void);
 
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 825f47f517..aa6cc7d47a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2639,6 +2639,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
 
         pages += tmppages;
         pss->page++;
+        /* Allow rate limiting to happen in the middle of huge pages */
+        migration_rate_limit();
     } while ((pss->page & (pagesize_bits - 1)) &&
              offset_in_ramblock(pss->block, pss->page << TARGET_PAGE_BITS));
 
diff --git a/migration/trace-events b/migration/trace-events
index 6dee7b5389..2f9129e213 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -138,12 +138,12 @@ migrate_send_rp_recv_bitmap(char *name, int64_t size) "block '%s' size 0x%"PRIi6
 migration_completion_file_err(void) ""
 migration_completion_postcopy_end(void) ""
 migration_completion_postcopy_end_after_complete(void) ""
+migration_rate_limit_pre(int ms) "%d ms"
+migration_rate_limit_post(int urgent) "urgent: %d"
 migration_return_path_end_before(void) ""
 migration_return_path_end_after(int rp_error) "%d"
 migration_thread_after_loop(void) ""
 migration_thread_file_err(void) ""
-migration_thread_ratelimit_pre(int ms) "%d ms"
-migration_thread_ratelimit_post(int urgent) "urgent: %d"
 migration_thread_setup_complete(void) ""
 open_return_path_on_source(void) ""
 open_return_path_on_source_continue(void) ""
-- 
2.24.1


