From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Leonardo Bras" <leobras@redhat.com>,
	"Peter Xu" <peterx@redhat.com>, "Eric Blake" <eblake@redhat.com>,
	"Philippe Mathieu-Daudé" <f4bug@amsat.org>,
	"Yanan Wang" <wangyanan55@huawei.com>,
	"Juan Quintela" <quintela@redhat.com>,
	"Markus Armbruster" <armbru@redhat.com>,
	"Eduardo Habkost" <eduardo@habkost.net>
Subject: [PATCH v7 04/12] multifd: Count the number of bytes sent correctly
Date: Tue,  2 Aug 2022 08:38:59 +0200	[thread overview]
Message-ID: <20220802063907.18882-5-quintela@redhat.com> (raw)
In-Reply-To: <20220802063907.18882-1-quintela@redhat.com>

Current code assumes that all pages are whole.  That is already not
true when compression is in use, for example.  Fix it by creating a
new field ->sent_bytes that accounts for the actual number of bytes
sent.

All ram_counters are used only from the migration thread, so we have
two options:
- take a mutex and update everything at the point where we send it
(not only ram_counters, but also qemu_file->xfer_bytes).
- Create a per-channel counter of how much has been sent through that
channel.  When we push another packet, we "add" the previous stats.

I chose the second option because it needs fewer changes overall.  In
the previous code we increased the transferred counters and then sent
the data.  The current code goes the other way around: it sends the
data and updates the counters after the fact.  Note that each channel
can hold at most half a megabyte of not-yet-accounted data, so the
difference is not very important.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h |  2 ++
 migration/multifd.c | 14 ++++++--------
 2 files changed, 8 insertions(+), 8 deletions(-)
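
As a standalone illustration of the accounting pattern described in the
commit message - accumulate under the channel mutex in the sender
thread, fold into the global counters when the next packet is queued -
here is a minimal sketch.  The names (Channel, channel_account,
channel_flush_stats, global_transferred) are made up for this example
and are not QEMU APIs; the real code uses MultiFDSendParams and
ram_counters as shown in the diff below.

  #include <inttypes.h>
  #include <pthread.h>
  #include <stdint.h>
  #include <stdio.h>

  typedef struct {
      pthread_mutex_t mutex;
      uint64_t sent_bytes;        /* bytes sent since the last flush */
  } Channel;

  /* Only ever written from the "migration" thread. */
  static uint64_t global_transferred;

  /* Sender thread: account for one packet after it has hit the wire. */
  static void channel_account(Channel *c, uint64_t header_len,
                              uint64_t payload_len)
  {
      pthread_mutex_lock(&c->mutex);
      c->sent_bytes += header_len + payload_len;
      pthread_mutex_unlock(&c->mutex);
  }

  /* Migration thread: fold the channel's pending stats into the global
   * counter when it hands the channel its next batch of work. */
  static void channel_flush_stats(Channel *c)
  {
      pthread_mutex_lock(&c->mutex);
      global_transferred += c->sent_bytes;
      c->sent_bytes = 0;
      pthread_mutex_unlock(&c->mutex);
  }

  int main(void)
  {
      Channel c = { .mutex = PTHREAD_MUTEX_INITIALIZER, .sent_bytes = 0 };

      channel_account(&c, 64, 4096);     /* pretend one packet was sent */
      channel_flush_stats(&c);
      printf("transferred: %" PRIu64 " bytes\n", global_transferred);
      return 0;
  }

The point of the pattern is that the global counters are still written
from a single thread; each worker only touches its own sent_bytes, so
the accounting can lag by at most one packet's worth of data per
channel.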

diff --git a/migration/multifd.h b/migration/multifd.h
index e2802a9ce2..36f899c56f 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -102,6 +102,8 @@ typedef struct {
     uint32_t flags;
     /* global number of generated multifd packets */
     uint64_t packet_num;
+    /* How many bytes have we sent on the last packet */
+    uint64_t sent_bytes;
     /* thread has work to do */
     int pending_job;
     /* array of pages to sent.
diff --git a/migration/multifd.c b/migration/multifd.c
index aa3808a6f4..e25b529235 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -394,7 +394,6 @@ static int multifd_send_pages(QEMUFile *f)
     static int next_channel;
     MultiFDSendParams *p = NULL; /* make happy gcc */
     MultiFDPages_t *pages = multifd_send_state->pages;
-    uint64_t transferred;
 
     if (qatomic_read(&multifd_send_state->exiting)) {
         return -1;
@@ -429,10 +428,10 @@ static int multifd_send_pages(QEMUFile *f)
     p->packet_num = multifd_send_state->packet_num++;
     multifd_send_state->pages = p->pages;
     p->pages = pages;
-    transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
-    qemu_file_acct_rate_limit(f, transferred);
-    ram_counters.multifd_bytes += transferred;
-    ram_counters.transferred += transferred;
+    ram_transferred_add(p->sent_bytes);
+    ram_counters.multifd_bytes += p->sent_bytes;
+    qemu_file_acct_rate_limit(f, p->sent_bytes);
+    p->sent_bytes = 0;
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
@@ -605,9 +604,6 @@ int multifd_send_sync_main(QEMUFile *f)
         p->packet_num = multifd_send_state->packet_num++;
         p->flags |= MULTIFD_FLAG_SYNC;
         p->pending_job++;
-        qemu_file_acct_rate_limit(f, p->packet_len);
-        ram_counters.multifd_bytes += p->packet_len;
-        ram_counters.transferred += p->packet_len;
         qemu_mutex_unlock(&p->mutex);
         qemu_sem_post(&p->sem);
 
@@ -714,6 +710,8 @@ static void *multifd_send_thread(void *opaque)
             }
 
             qemu_mutex_lock(&p->mutex);
+            p->sent_bytes += p->packet_len;
+            p->sent_bytes += p->next_packet_size;
             p->pending_job--;
             qemu_mutex_unlock(&p->mutex);
 
-- 
2.37.1



Thread overview: 43+ messages
2022-08-02  6:38 [PATCH v7 00/12] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
2022-08-02  6:38 ` [PATCH v7 01/12] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
2022-08-11  8:10   ` [PATCH v7 01/12] multifd: Create page_size fields into both MultiFD{Recv,Send}Params Leonardo Brás
2022-08-13 15:41     ` Juan Quintela
2022-08-02  6:38 ` [PATCH v7 02/12] multifd: Create page_count fields into both MultiFD{Recv, Send}Params Juan Quintela
2022-08-11  8:10   ` [PATCH v7 02/12] multifd: Create page_count fields into both MultiFD{Recv,Send}Params Leonardo Brás
2022-08-02  6:38 ` [PATCH v7 03/12] migration: Export ram_transferred_ram() Juan Quintela
2022-08-11  8:11   ` Leonardo Brás
2022-08-13 15:36     ` Juan Quintela
2022-08-02  6:38 ` Juan Quintela [this message]
2022-08-11  8:11   ` [PATCH v7 04/12] multifd: Count the number of bytes sent correctly Leonardo Brás
2022-08-19  9:35     ` Juan Quintela
2022-08-02  6:39 ` [PATCH v7 05/12] migration: Make ram_save_target_page() a pointer Juan Quintela
2022-08-11  8:11   ` Leonardo Brás
2022-08-19  9:51     ` Juan Quintela
2022-08-20  7:14       ` Leonardo Bras Soares Passos
2022-08-22 21:35         ` Juan Quintela
2022-08-02  6:39 ` [PATCH v7 06/12] multifd: Make flags field thread local Juan Quintela
2022-08-11  9:04   ` Leonardo Brás
2022-08-19 10:03     ` Juan Quintela
2022-08-20  7:24       ` Leonardo Bras Soares Passos
2022-08-23 13:00         ` Juan Quintela
2022-08-02  6:39 ` [PATCH v7 07/12] multifd: Prepare to send a packet without the mutex held Juan Quintela
2022-08-11  9:16   ` Leonardo Brás
2022-08-19 11:32     ` Juan Quintela
2022-08-20  7:27       ` Leonardo Bras Soares Passos
2022-08-02  6:39 ` [PATCH v7 08/12] multifd: Add capability to enable/disable zero_page Juan Quintela
2022-08-11  9:29   ` Leonardo Brás
2022-08-19 11:36     ` Juan Quintela
2022-08-02  6:39 ` [PATCH v7 09/12] migration: Export ram_release_page() Juan Quintela
2022-08-11  9:31   ` Leonardo Brás
2022-08-02  6:39 ` [PATCH v7 10/12] multifd: Support for zero pages transmission Juan Quintela
2022-09-02 13:27   ` Leonardo Brás
2022-11-14 12:09     ` Juan Quintela
2022-10-25  9:10   ` chuang xu
2022-11-14 12:10     ` Juan Quintela
2022-08-02  6:39 ` [PATCH v7 11/12] multifd: Zero " Juan Quintela
2022-09-02 13:27   ` Leonardo Brás
2022-11-14 12:20     ` Juan Quintela
2022-11-14 12:27     ` Juan Quintela
2022-08-02  6:39 ` [PATCH v7 12/12] So we use multifd to transmit zero pages Juan Quintela
2022-09-02 13:27   ` Leonardo Brás
2022-11-14 12:30     ` Juan Quintela
