From: Leonardo Bras <leobras@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Peter Xu" <peterx@redhat.com>
Cc: Leonardo Bras <leobras@redhat.com>, qemu-devel@nongnu.org
Subject: [RFC PATCH 4/4] migration/multifd/zero-copy: Flush only the LRU half of the header array
Date: Tue, 25 Oct 2022 01:47:31 -0300	[thread overview]
Message-ID: <20221025044730.319941-5-leobras@redhat.com> (raw)
In-Reply-To: <20221025044730.319941-1-leobras@redhat.com>

Zero-copy multifd migration sends both the header and the memory pages in a
single syscall. Since a header must be flushed before it can be reused, a
header array was implemented, so each write call uses a different array
entry, and flushing only takes place after all headers have been used,
meaning one flush for every N writes.
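
For reference, this is roughly the per-write logic before this patch (as in
the hunk below; multifd_zero_copy_flush() is the helper introduced in patch
1/4 of this series):

    p->packet_idx = (p->packet_idx + 1) % HEADER_ARR_SZ;

    /* Wrapped around: every header slot has been handed to the kernel,
     * so wait for all pending zero-copy writes before reusing slot 0. */
    if (!p->packet_idx && (multifd_zero_copy_flush(p->c) < 0)) {
        break;
    }
    header = (void *)p->packet + p->packet_idx * p->packet_len;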

This method has a bottleneck, though: after the last write, the flush has to
wait for all N writes to finish, which means the recvmsg() syscall in
qio_channel_socket_flush() gets called many times in a row. On top of that,
it creates a window between the flush and the next write during which the
I/O queue is empty and nothing is getting sent.

To avoid that, use qio_channel_flush()'s new max_pending parameter to wait
only until at most half of the array entries are still in use (i.e. until
the LRU half of the array can be reused).
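
A rough sketch of the intended flush semantics, assuming the interface added
in patch 3/4 (pending_writes() and wait_one_completion() are purely
illustrative names, not actual QIOChannel API):

    /* qio_channel_flush(c, max_pending, errp) now returns once at most
     * max_pending zero-copy writes are still in flight; max_pending == 0
     * keeps the old behavior of a full flush. */
    while (pending_writes(c) > max_pending) {
        wait_one_completion(c);    /* e.g. one recvmsg(MSG_ERRQUEUE) */
    }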

Flushing down to the LRU half of the array is much faster, since it does not
have to wait for the most recent writes to finish, which makes up for having
to flush twice per pass over the array.
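
As an illustration, suppose HEADER_ARR_SZ is 16 (a hypothetical value): the
old scheme flushes once every 16 writes and waits for all 16 to complete,
draining the queue; the new scheme flushes every 8 writes but only waits
until 8 writes remain pending, so the newest 8 writes keep the socket busy
while the oldest 8 complete.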

As the main benefit, this approach keeps the I/O queue from going empty
while there is still data to be sent, making it easier to sustain maximum
I/O throughput while consuming less CPU time.

Signed-off-by: Leonardo Bras <leobras@redhat.com>
---
 migration/multifd.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index c5d1f911a4..fe9df460f6 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -569,12 +569,13 @@ void multifd_save_cleanup(void)
     multifd_send_state = NULL;
 }
 
-static int multifd_zero_copy_flush(QIOChannel *c)
+static int multifd_zero_copy_flush(QIOChannel *c,
+                                   int max_remaining)
 {
     int ret;
     Error *err = NULL;
 
-    ret = qio_channel_flush(c, 0, &err);
+    ret = qio_channel_flush(c, max_remaining, &err);
     if (ret < 0) {
         error_report_err(err);
         return -1;
@@ -636,7 +637,7 @@ int multifd_send_sync_main(QEMUFile *f)
         qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
 
-        if (flush_zero_copy && p->c && (multifd_zero_copy_flush(p->c) < 0)) {
+        if (flush_zero_copy && p->c && (multifd_zero_copy_flush(p->c, 0) < 0)) {
             return -1;
         }
     }
@@ -719,12 +720,17 @@ static void *multifd_send_thread(void *opaque)
 
             if (use_zero_copy_send) {
                 p->packet_idx = (p->packet_idx + 1) % HEADER_ARR_SZ;
-
-                if (!p->packet_idx && (multifd_zero_copy_flush(p->c) < 0)) {
+                /*
+                 * When half the array has been used, flush to make sure the
+                 * next half is available
+                 */
+                if (!(p->packet_idx % (HEADER_ARR_SZ / 2)) &&
+                    (multifd_zero_copy_flush(p->c, HEADER_ARR_SZ / 2) < 0)) {
                     break;
                 }
                 header = (void *)p->packet + p->packet_idx * p->packet_len;
             }
+
             qemu_mutex_lock(&p->mutex);
             p->pending_job--;
             qemu_mutex_unlock(&p->mutex);
-- 
2.38.0




Thread overview: 12+ messages
2022-10-25  4:47 [RFC PATCH 0/4] MultiFD zero-copy improvements Leonardo Bras
2022-10-25  4:47 ` [RFC PATCH 1/4] migration/multifd/zero-copy: Create helper function for flushing Leonardo Bras
2022-10-25  9:44   ` Juan Quintela
2022-10-25  4:47 ` [RFC PATCH 2/4] migration/multifd/zero-copy: Merge header & pages send in a single write Leonardo Bras
2022-10-25  9:51   ` Juan Quintela
2022-10-25 13:28     ` Leonardo Brás
2022-10-25  4:47 ` [RFC PATCH 3/4] QIOChannel: Add max_pending parameter to qio_channel_flush() Leonardo Bras
2022-10-25  4:47 ` Leonardo Bras [this message]
2022-10-25  8:35   ` [RFC PATCH 4/4] migration/multifd/zero-copy: Flush only the LRU half of the header array Daniel P. Berrangé
2022-10-25 10:07     ` Juan Quintela
2022-10-25 13:47       ` Leonardo Brás
2022-10-25 16:36 ` [RFC PATCH 0/4] MultiFD zero-copy improvements Peter Xu
