* [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration
@ 2017-03-23 20:44 Juan Quintela
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 01/51] ram: Update all functions comments Juan Quintela
                   ` (51 more replies)
  0 siblings, 52 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Hi

Continuation of the previous series, with all review comments addressed. New things:
- Consolidate all function comments in the same style (yes, docs)
- Be much more careful about keeping comments correct
- Move all postcopy fields to RAMState
- Move QEMUFile to RAMState
- Rename qemu_target_page_bits() to qemu_target_page_size() to reflect its use
- Remove MigrationState from functions that don't need it
- Reorganize last_sent_block to the place where it is used/needed
- Move several places from offsets to pages
- Rename last_ram_offset() to last_ram_page() to reflect its use

Please comment.


[v1]
Currently, we have several places where we store information about
RAM for migration purposes:
- global variables in migration/ram.c
- inside the accounting_info struct in migration/ram.c
  (notice that not all the accounting vars are inside it)
- some stuff in MigrationState, although it belongs to migration/ram.c

So, this series does:
- move everything related to ram.c into the RAMState struct
- make all the statistics consistent, exporting them with accessor
  functions

Why now?

Because I am trying to do some more optimizations to how we send data
around, and that is basically impossible with the current code; we
would still need to add even more variables.  Notice problems like these:
- accounting info was only reset if xbzrle was enabled
- how/where variables are initialized is completely inconsistent



To Do:

- There are still places that access the global struct directly,
  mainly postcopy.  We could find a way to make a pointer to the
  current migration.  If people like the approach, I will search for
  where to put it.
- I haven't posted any real change here; this is just the move of
  variables into the struct, passing the struct around.  Optimizations
  will come after.

- Consolidate XBZRLE, compression params, etc. in their own structs
  (inside RAMState or not, to be able to allocate some of them,
  others, or ...)

Comments, please.


Chao Fan (1):
  Add page-size to output in 'info migrate'

Juan Quintela (50):
  ram: Update all functions comments
  ram: rename block_name to rbname
  ram: Create RAMState
  ram: Add dirty_rate_high_cnt to RAMState
  ram: Move bitmap_sync_count into RAMState
  ram: Move start time into RAMState
  ram: Move bytes_xfer_prev into RAMState
  ram: Move num_dirty_pages_period into RAMState
  ram: Move xbzrle_cache_miss_prev into RAMState
  ram: Move iterations_prev into RAMState
  ram: Move dup_pages into RAMState
  ram: Remove unused dup_mig_bytes_transferred()
  ram: Remove unused pages_skipped variable
  ram: Move norm_pages to RAMState
  ram: Remove norm_mig_bytes_transferred
  ram: Move iterations into RAMState
  ram: Move xbzrle_bytes into RAMState
  ram: Move xbzrle_pages into RAMState
  ram: Move xbzrle_cache_miss into RAMState
  ram: Move xbzrle_cache_miss_rate into RAMState
  ram: Move xbzrle_overflows into RAMState
  ram: Move migration_dirty_pages to RAMState
  ram: Everything was init to zero, so use memset
  ram: Move migration_bitmap_mutex into RAMState
  ram: Move migration_bitmap_rcu into RAMState
  ram: Move bytes_transferred into RAMState
  ram: Use the RAMState bytes_transferred parameter
  ram: Remove ram_save_remaining
  ram: Move last_req_rb to RAMState
  ram: Move src_page_req* to RAMState
  ram: Create ram_dirty_sync_count()
  ram: Remove dirty_bytes_rate
  ram: Move dirty_pages_rate to RAMState
  ram: Move postcopy_requests into RAMState
  ram: Add QEMUFile to RAMState
  ram: Move QEMUFile into RAMState
  ram: Move compression_switch to RAMState
  migration: Remove MigrationState from migration_in_postcopy
  ram: We don't need MigrationState parameter anymore
  ram: Rename qemu_target_page_bits() to qemu_target_page_size()
  ram: Pass RAMBlock to bitmap_sync
  ram: ram_discard_range() don't use the mis parameter
  ram: reorganize last_sent_block
  ram: Use page number instead of an address for the bitmap operations
  ram: Remember last_page instead of last_offset
  ram: Change offset field in PageSearchStatus to page
  ram: Use ramblock and page offset instead of absolute offset
  ram: rename last_ram_offset() last_ram_pages()
  ram: Use RAMBitmap type for coherence
  migration: Remove MigrationState parameter from migration_is_idle()

 exec.c                        |   10 +-
 hmp.c                         |    3 +
 include/exec/ram_addr.h       |    4 +-
 include/migration/migration.h |   41 +-
 include/sysemu/sysemu.h       |    2 +-
 migration/migration.c         |   44 +-
 migration/postcopy-ram.c      |   14 +-
 migration/ram.c               | 1190 ++++++++++++++++++++++-------------------
 migration/savevm.c            |   15 +-
 migration/trace-events        |    2 +-
 qapi-schema.json              |    5 +-
 11 files changed, 695 insertions(+), 635 deletions(-)

-- 
2.9.3

^ permalink raw reply	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
@ 2017-03-23 20:44 ` Juan Quintela
  2017-03-24  9:55   ` Peter Xu
  2017-03-31 15:51   ` Dr. David Alan Gilbert
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname Juan Quintela
                   ` (50 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Add doc comments for the existing functions and rewrite them in
a common style.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 348 ++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 227 insertions(+), 121 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index de1e0a3..76f1fc4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -96,11 +96,17 @@ static void XBZRLE_cache_unlock(void)
         qemu_mutex_unlock(&XBZRLE.lock);
 }
 
-/*
- * called from qmp_migrate_set_cache_size in main thread, possibly while
- * a migration is in progress.
- * A running migration maybe using the cache and might finish during this
- * call, hence changes to the cache are protected by XBZRLE.lock().
+/**
+ * xbzrle_cache_resize: resize the xbzrle cache
+ *
+ * This function is called from qmp_migrate_set_cache_size in main
+ * thread, possibly while a migration is in progress.  A running
+ * migration may be using the cache and might finish during this call,
+ * hence changes to the cache are protected by XBZRLE.lock().
+ *
+ * Returns the new_size or negative in case of error.
+ *
+ * @new_size: new cache size
  */
 int64_t xbzrle_cache_resize(int64_t new_size)
 {
@@ -323,6 +329,7 @@ static inline void terminate_compression_threads(void)
     int idx, thread_count;
 
     thread_count = migrate_compress_threads();
+
     for (idx = 0; idx < thread_count; idx++) {
         qemu_mutex_lock(&comp_param[idx].mutex);
         comp_param[idx].quit = true;
@@ -383,11 +390,11 @@ void migrate_compress_threads_create(void)
 }
 
 /**
- * save_page_header: Write page header to wire
+ * save_page_header: write page header to wire
  *
  * If this is the 1st block, it also writes the block identification
  *
- * Returns: Number of bytes written
+ * Returns the number of bytes written
  *
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -410,11 +417,14 @@ static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
     return size;
 }
 
-/* Reduce amount of guest cpu execution to hopefully slow down memory writes.
- * If guest dirty memory rate is reduced below the rate at which we can
- * transfer pages to the destination then we should be able to complete
- * migration. Some workloads dirty memory way too fast and will not effectively
- * converge, even with auto-converge.
+/**
+ * mig_throttle_guest_down: throttle down the guest
+ *
+ * Reduce amount of guest cpu execution to hopefully slow down memory
+ * writes. If guest dirty memory rate is reduced below the rate at
+ * which we can transfer pages to the destination then we should be
+ * able to complete migration. Some workloads dirty memory way too
+ * fast and will not effectively converge, even with auto-converge.
  */
 static void mig_throttle_guest_down(void)
 {
@@ -431,11 +441,16 @@ static void mig_throttle_guest_down(void)
     }
 }
 
-/* Update the xbzrle cache to reflect a page that's been sent as all 0.
+/**
+ * xbzrle_cache_zero_page: insert a zero page in the XBZRLE cache
+ *
+ * @current_addr: address for the zero page
+ *
+ * Update the xbzrle cache to reflect a page that's been sent as all 0.
  * The important thing is that a stale (not-yet-0'd) page be replaced
  * by the new data.
  * As a bonus, if the page wasn't in the cache it gets added so that
- * when a small write is made into the 0'd page it gets XBZRLE sent
+ * when a small write is made into the 0'd page it gets XBZRLE sent.
  */
 static void xbzrle_cache_zero_page(ram_addr_t current_addr)
 {
@@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
  *          -1 means that xbzrle would be longer than normal
  *
  * @f: QEMUFile where to send the data
- * @current_data:
- * @current_addr:
+ * @current_data: contents of the page
+ * @current_addr: addr of the page
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
@@ -530,13 +545,17 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
     return 1;
 }
 
-/* Called with rcu_read_lock() to protect migration_bitmap
- * rb: The RAMBlock  to search for dirty pages in
- * start: Start address (typically so we can continue from previous page)
- * ram_addr_abs: Pointer into which to store the address of the dirty page
- *               within the global ram_addr space
+/**
+ * migration_bitmap_find_dirty: find the next dirty page from start
  *
- * Returns: byte offset within memory region of the start of a dirty page
+ * Called with rcu_read_lock() to protect migration_bitmap
+ *
+ * Returns the byte offset within memory region of the start of a dirty page
+ *
+ * @rb: RAMBlock where to search for dirty pages
+ * @start: starting address (typically so we can continue from previous page)
+ * @ram_addr_abs: pointer into which to store the address of the dirty page
+ *                within the global ram_addr space
  */
 static inline
 ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
@@ -600,10 +619,14 @@ static void migration_bitmap_sync_init(void)
     iterations_prev = 0;
 }
 
-/* Returns a summary bitmap of the page sizes of all RAMBlocks;
- * for VMs with just normal pages this is equivalent to the
- * host page size.  If it's got some huge pages then it's the OR
- * of all the different page sizes.
+/**
+ * ram_pagesize_summary: calculate all the pagesizes of a VM
+ *
+ * Returns a summary bitmap of the page sizes of all RAMBlocks
+ *
+ * For VMs with just normal pages this is equivalent to the host page
+ * size. If it's got some huge pages then it's the OR of all the
+ * different page sizes.
  */
 uint64_t ram_pagesize_summary(void)
 {
@@ -693,9 +716,9 @@ static void migration_bitmap_sync(void)
 }
 
 /**
- * save_zero_page: Send the zero page to the stream
+ * save_zero_page: send the zero page to the stream
  *
- * Returns: Number of pages written.
+ * Returns the number of pages written.
  *
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -731,14 +754,14 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
 }
 
 /**
- * ram_save_page: Send the given page to the stream
+ * ram_save_page: send the given page to the stream
  *
- * Returns: Number of pages written.
+ * Returns the number of pages written.
  *          < 0 - error
  *          >=0 - Number of pages written - this might legally be 0
  *                if xbzrle noticed the page was the same.
  *
- * @ms: The current migration state.
+ * @ms: current migration state
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
@@ -921,9 +944,9 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
 /**
  * ram_save_compressed_page: compress the given page and send it to the stream
  *
- * Returns: Number of pages written.
+ * Returns the number of pages written.
  *
- * @ms: The current migration state.
+ * @ms: current migration state
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
@@ -1000,17 +1023,17 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
     return pages;
 }
 
-/*
- * Find the next dirty page and update any state associated with
- * the search process.
+/**
+ * find_dirty_block: find the next dirty page and update any state
+ * associated with the search process.
  *
- * Returns: True if a page is found
+ * Returns true if a page is found
  *
- * @f: Current migration stream.
- * @pss: Data about the state of the current dirty page scan.
- * @*again: Set to false if the search has scanned the whole of RAM
- * *ram_addr_abs: Pointer into which to store the address of the dirty page
- *               within the global ram_addr space
+ * @f: QEMUFile where to send the data
+ * @pss: data about the state of the current dirty page scan
+ * @again: set to false if the search has scanned the whole of RAM
+ * @ram_addr_abs: pointer into which to store the address of the dirty page
+ *                within the global ram_addr space
  */
 static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
                              bool *again, ram_addr_t *ram_addr_abs)
@@ -1055,13 +1078,17 @@ static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
     }
 }
 
-/*
+/**
+ * unqueue_page: gets a page of the queue
+ *
  * Helper for 'get_queued_page' - gets a page off the queue
- *      ms:      MigrationState in
- * *offset:      Used to return the offset within the RAMBlock
- * ram_addr_abs: global offset in the dirty/sent bitmaps
  *
- * Returns:      block (or NULL if none available)
+ * Returns the block of the page (or NULL if none available)
+ *
+ * @ms: current migration state
+ * @offset: used to return the offset within the RAMBlock
+ * @ram_addr_abs: pointer into which to store the address of the dirty page
+ *                within the global ram_addr space
  */
 static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
                               ram_addr_t *ram_addr_abs)
@@ -1091,15 +1118,17 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
     return block;
 }
 
-/*
- * Unqueue a page from the queue fed by postcopy page requests; skips pages
- * that are already sent (!dirty)
+/**
+ * get_queued_page: unqueue a page from the postcopy requests
  *
- *      ms:      MigrationState in
- *     pss:      PageSearchStatus structure updated with found block/offset
- * ram_addr_abs: global offset in the dirty/sent bitmaps
+ * Skips pages that are already sent (!dirty)
  *
- * Returns:      true if a queued page is found
+ * Returns true if a queued page is found
+ *
+ * @ms: current migration state
+ * @pss: data about the state of the current dirty page scan
+ * @ram_addr_abs: pointer into which to store the address of the dirty page
+ *                within the global ram_addr space
  */
 static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
                             ram_addr_t *ram_addr_abs)
@@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
 }
 
 /**
- * flush_page_queue: Flush any remaining pages in the ram request queue
- *    it should be empty at the end anyway, but in error cases there may be
- *    some left.
+ * flush_page_queue: flush any remaining pages in the ram request queue
  *
- * ms: MigrationState
+ * It should be empty at the end anyway, but in error cases there may
+ * be some left.
+ *
+ * @ms: current migration state
  */
 void flush_page_queue(MigrationState *ms)
 {
@@ -1179,12 +1209,17 @@ void flush_page_queue(MigrationState *ms)
 }
 
 /**
- * Queue the pages for transmission, e.g. a request from postcopy destination
- *   ms: MigrationStatus in which the queue is held
- *   rbname: The RAMBlock the request is for - may be NULL (to mean reuse last)
- *   start: Offset from the start of the RAMBlock
- *   len: Length (in bytes) to send
- *   Return: 0 on success
+ * ram_save_queue_pages: queue the page for transmission
+ *
+ * A request from postcopy destination for example.
+ *
+ * Returns zero on success or negative on error
+ *
+ * @ms: current migration state
+ * @rbname: name of the RAMBlock of the request. NULL means the
+ *          same as the last one.
+ * @start: starting address from the start of the RAMBlock
+ * @len: length (in bytes) to send
  */
 int ram_save_queue_pages(MigrationState *ms, const char *rbname,
                          ram_addr_t start, ram_addr_t len)
@@ -1243,17 +1278,16 @@ err:
 }
 
 /**
- * ram_save_target_page: Save one target page
+ * ram_save_target_page: save one target page
  *
+ * Returns the number of pages written
  *
+ * @ms: current migration state
  * @f: QEMUFile where to send the data
- * @block: pointer to block that contains the page we want to send
- * @offset: offset inside the block for the page;
+ * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
- * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
- *
- * Returns: Number of pages written.
+ * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
  */
 static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
                                 PageSearchStatus *pss,
@@ -1295,20 +1329,19 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
 }
 
 /**
- * ram_save_host_page: Starting at *offset send pages up to the end
- *                     of the current host page.  It's valid for the initial
- *                     offset to point into the middle of a host page
- *                     in which case the remainder of the hostpage is sent.
- *                     Only dirty target pages are sent.
- *                     Note that the host page size may be a huge page for this
- *                     block.
+ * ram_save_host_page: save a whole host page
  *
- * Returns: Number of pages written.
+ * Starting at *offset send pages up to the end of the current host
+ * page. It's valid for the initial offset to point into the middle of
+ * a host page in which case the remainder of the hostpage is sent.
+ * Only dirty target pages are sent. Note that the host page size may
+ * be a huge page for this block.
  *
+ * Returns the number of pages written or negative on error
+ *
+ * @ms: current migration state
  * @f: QEMUFile where to send the data
- * @block: pointer to block that contains the page we want to send
- * @offset: offset inside the block for the page; updated to last target page
- *          sent
+ * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
@@ -1340,12 +1373,11 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
 }
 
 /**
- * ram_find_and_save_block: Finds a dirty page and sends it to f
+ * ram_find_and_save_block: finds a dirty page and sends it to f
  *
  * Called within an RCU critical section.
  *
- * Returns:  The number of pages written
- *           0 means no dirty pages
+ * Returns the number of pages written where zero means no dirty pages
  *
  * @f: QEMUFile where to send the data
  * @last_stage: if we are at the completion stage
@@ -1580,12 +1612,19 @@ void ram_postcopy_migrated_memory_release(MigrationState *ms)
     }
 }
 
-/*
+/**
+ * postcopy_send_discard_bm_ram: discard a RAMBlock
+ *
+ * Returns zero on success
+ *
  * Callback from postcopy_each_ram_send_discard for each RAMBlock
  * Note: At this point the 'unsentmap' is the processed bitmap combined
  *       with the dirtymap; so a '1' means it's either dirty or unsent.
- * start,length: Indexes into the bitmap for the first bit
- *            representing the named block and length in target-pages
+ *
+ * @ms: current migration state
+ * @pds: state for postcopy
+ * @start: first bit in the bitmap representing this block
+ * @length: length of the block, in target pages
  */
 static int postcopy_send_discard_bm_ram(MigrationState *ms,
                                         PostcopyDiscardState *pds,
@@ -1621,13 +1660,18 @@ static int postcopy_send_discard_bm_ram(MigrationState *ms,
     return 0;
 }
 
-/*
+/**
+ * postcopy_each_ram_send_discard: discard all RAMBlocks
+ *
+ * Returns 0 for success or negative for error
+ *
  * Utility for the outgoing postcopy code.
  *   Calls postcopy_send_discard_bm_ram for each RAMBlock
  *   passing it bitmap indexes and name.
- * Returns: 0 on success
  * (qemu_ram_foreach_block ends up passing unscaled lengths
  *  which would mean postcopy code would have to deal with target page)
+ *
+ * @ms: current migration state
  */
 static int postcopy_each_ram_send_discard(MigrationState *ms)
 {
@@ -1656,17 +1700,21 @@ static int postcopy_each_ram_send_discard(MigrationState *ms)
     return 0;
 }
 
-/*
- * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
- *   the two bitmaps, that are similar, but one is inverted.
+/**
+ * postcopy_chunk_hostpages_pass: canonicalize bitmap in hostpages
  *
- * We search for runs of target-pages that don't start or end on a
- * host page boundary;
- * unsent_pass=true: Cleans up partially unsent host pages by searching
- *                 the unsentmap
- * unsent_pass=false: Cleans up partially dirty host pages by searching
- *                 the main migration bitmap
+ * Helper for postcopy_chunk_hostpages; it's called twice to
+ * canonicalize the two bitmaps, that are similar, but one is
+ * inverted.
  *
+ * Postcopy requires that all target pages in a hostpage are dirty or
+ * clean, not a mix.  This function canonicalizes the bitmaps.
+ *
+ * @ms: current migration state
+ * @unsent_pass: if true we need to canonicalize partially unsent host pages
+ *               otherwise we need to canonicalize partially dirty host pages
+ * @block: block that contains the page we want to canonicalize
+ * @pds: state for postcopy
  */
 static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                                           RAMBlock *block,
@@ -1784,14 +1832,18 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
     }
 }
 
-/*
+/**
+ * postcopy_chunk_hostpages: discard any partially sent host page
+ *
  * Utility for the outgoing postcopy code.
  *
  * Discard any partially sent host-page size chunks, mark any partially
  * dirty host-page size chunks as all dirty.  In this case the host-page
  * is the host-page for the particular RAMBlock, i.e. it might be a huge page
  *
- * Returns: 0 on success
+ * Returns zero on success
+ *
+ * @ms: current migration state
  */
 static int postcopy_chunk_hostpages(MigrationState *ms)
 {
@@ -1822,7 +1874,11 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
     return 0;
 }
 
-/*
+/**
+ * ram_postcopy_send_discard_bitmap: transmit the discard bitmap
+ *
+ * Returns zero on success
+ *
  * Transmit the set of pages to be discarded after precopy to the target
  * these are pages that:
  *     a) Have been previously transmitted but are now dirty again
@@ -1830,6 +1886,8 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
  *        any pages on the destination that have been mapped by background
  *        tasks get discarded (transparent huge pages is the specific concern)
  * Hopefully this is pretty sparse
+ *
+ * @ms: current migration state
  */
 int ram_postcopy_send_discard_bitmap(MigrationState *ms)
 {
@@ -1878,13 +1936,16 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     return ret;
 }
 
-/*
- * At the start of the postcopy phase of migration, any now-dirty
- * precopied pages are discarded.
+/**
+ * ram_discard_range: discard dirtied pages at the beginning of postcopy
  *
- * start, length describe a byte address range within the RAMBlock
+ * Returns zero on success
  *
- * Returns 0 on success.
+ * @mis: current migration incoming state
+ * @block_name: name of the RAMBlock of the request. NULL means the
+ *              same as the last one.
+ * @start: start of the byte range within the RAMBlock
+ * @length: length of the range in bytes
  */
 int ram_discard_range(MigrationIncomingState *mis,
                       const char *block_name,
@@ -1987,12 +2048,21 @@ static int ram_save_init_globals(void)
     return 0;
 }
 
-/* Each of ram_save_setup, ram_save_iterate and ram_save_complete has
+/*
+ * Each of ram_save_setup, ram_save_iterate and ram_save_complete has
  * long-running RCU critical section.  When rcu-reclaims in the code
  * start to become numerous it will be necessary to reduce the
  * granularity of these critical sections.
  */
 
+/**
+ * ram_save_setup: Setup RAM for migration
+ *
+ * Returns zero to indicate success and negative for error
+ *
+ * @f: QEMUFile where to send the data
+ * @opaque: RAMState pointer
+ */
 static int ram_save_setup(QEMUFile *f, void *opaque)
 {
     RAMBlock *block;
@@ -2027,6 +2097,14 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
     return 0;
 }
 
+/**
+ * ram_save_iterate: iterative stage for migration
+ *
+ * Returns zero to indicate success and negative for error
+ *
+ * @f: QEMUFile where to send the data
+ * @opaque: RAMState pointer
+ */
 static int ram_save_iterate(QEMUFile *f, void *opaque)
 {
     int ret;
@@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     return done;
 }
 
-/* Called with iothread lock */
+/**
+ * ram_save_complete: function called to send the remaining amount of ram
+ *
+ * Returns zero to indicate success
+ *
+ * Called with iothread lock
+ *
+ * @f: QEMUFile where to send the data
+ * @opaque: RAMState pointer
+ */
 static int ram_save_complete(QEMUFile *f, void *opaque)
 {
     rcu_read_lock();
@@ -2185,17 +2272,17 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
     return 0;
 }
 
-/* Must be called from within a rcu critical section.
+/**
+ * ram_block_from_stream: read a RAMBlock id from the migration stream
+ *
+ * Must be called from within a rcu critical section.
+ *
  * Returns a pointer from within the RCU-protected ram_list.
- */
-/*
- * Read a RAMBlock ID from the stream f.
  *
- * f: Stream to read from
- * flags: Page flags (mostly to see if it's a continuation of previous block)
+ * @f: QEMUFile where to read the data from
+ * @flags: Page flags (mostly to see if it's a continuation of previous block)
  */
-static inline RAMBlock *ram_block_from_stream(QEMUFile *f,
-                                              int flags)
+static inline RAMBlock *ram_block_from_stream(QEMUFile *f, int flags)
 {
     static RAMBlock *block = NULL;
     char id[256];
@@ -2232,9 +2319,15 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
     return block->host + offset;
 }
 
-/*
+/**
+ * ram_handle_compressed: handle the zero page case
+ *
  * If a page (or a whole RDMA chunk) has been
  * determined to be zero, then zap it.
+ *
+ * @host: host address for the zero page
+ * @ch: what the page is filled from.  We only support zero
+ * @size: size of the zero page
  */
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
 {
@@ -2373,9 +2466,16 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
     qemu_mutex_unlock(&decomp_done_lock);
 }
 
-/*
- * Allocate data structures etc needed by incoming migration with postcopy-ram
- * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
+/**
+ * ram_postcopy_incoming_init: allocate postcopy data structures
+ *
+ * Returns 0 for success or negative in case of error
+ *
+ * @mis: current migration incoming state
+ *
+ * Allocate the data structures etc needed by incoming migration with
+ * postcopy-ram.  postcopy-ram's similarly named
+ * postcopy_ram_incoming_init does the work.
  */
 int ram_postcopy_incoming_init(MigrationIncomingState *mis)
 {
@@ -2384,9 +2484,15 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis)
     return postcopy_ram_incoming_init(mis, ram_pages);
 }
 
-/*
+/**
+ * ram_load_postcopy: load a page in the postcopy case
+ *
+ * Returns 0 for success or -errno in case of error
+ *
  * Called in postcopy mode by ram_load().
  * rcu_read_lock is taken prior to this being called.
+ *
+ * @f: QEMUFile where to send the data
  */
 static int ram_load_postcopy(QEMUFile *f)
 {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 01/51] ram: Update all functions comments Juan Quintela
@ 2017-03-23 20:44 ` Juan Quintela
  2017-03-24 11:11   ` Dr. David Alan Gilbert
  2017-03-24 17:15   ` Eric Blake
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 03/51] ram: Create RAMState Juan Quintela
                   ` (49 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

So all places are consistent in the naming of the block name parameter.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 76f1fc4..21047c5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -743,14 +743,14 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
     return pages;
 }
 
-static void ram_release_pages(MigrationState *ms, const char *block_name,
+static void ram_release_pages(MigrationState *ms, const char *rbname,
                               uint64_t offset, int pages)
 {
     if (!migrate_release_ram() || !migration_in_postcopy(ms)) {
         return;
     }
 
-    ram_discard_range(NULL, block_name, offset, pages << TARGET_PAGE_BITS);
+    ram_discard_range(NULL, rbname, offset, pages << TARGET_PAGE_BITS);
 }
 
 /**
@@ -1942,25 +1942,24 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
  * Returns zero on success
  *
  * @mis: current migration incoming state
- * @block_name: name of the RAMBlock of the request. NULL means the
- *              same as the last one.
+ * @rbname: name of the RAMBlock of the request. NULL means the
+ *          same as the last one.
 * @start: start of the byte range within the RAMBlock
 * @length: length of the range in bytes
  */
 int ram_discard_range(MigrationIncomingState *mis,
-                      const char *block_name,
+                      const char *rbname,
                       uint64_t start, size_t length)
 {
     int ret = -1;
 
-    trace_ram_discard_range(block_name, start, length);
+    trace_ram_discard_range(rbname, start, length);
 
     rcu_read_lock();
-    RAMBlock *rb = qemu_ram_block_by_name(block_name);
+    RAMBlock *rb = qemu_ram_block_by_name(rbname);
 
     if (!rb) {
-        error_report("ram_discard_range: Failed to find block '%s'",
-                     block_name);
+        error_report("ram_discard_range: Failed to find block '%s'", rbname);
         goto err;
     }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 03/51] ram: Create RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 01/51] ram: Update all functions comments Juan Quintela
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname Juan Quintela
@ 2017-03-23 20:44 ` Juan Quintela
  2017-03-27  4:43   ` Peter Xu
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 04/51] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
                   ` (48 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We create a struct in which to put all of the RAM migration state.

Start with the following fields:

last_seen_block, last_sent_block, last_offset, last_version and
ram_bulk_stage are globals that really belong together.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

--

Fix typo and warnings

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 140 +++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 83 insertions(+), 57 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 21047c5..a6e90d7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -142,6 +142,23 @@ out:
     return ret;
 }
 
+/* State of RAM for migration */
+struct RAMState {
+    /* Last block that we have visited searching for dirty pages */
+    RAMBlock *last_seen_block;
+    /* Last block from where we have sent data */
+    RAMBlock *last_sent_block;
+    /* Last offset we have sent data from */
+    ram_addr_t last_offset;
+    /* last ram version we have seen */
+    uint32_t last_version;
+    /* We are in the first round */
+    bool ram_bulk_stage;
+};
+typedef struct RAMState RAMState;
+
+static RAMState ram_state;
+
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
     uint64_t dup_pages;
@@ -217,16 +234,8 @@ uint64_t xbzrle_mig_pages_overflow(void)
     return acct_info.xbzrle_overflows;
 }
 
-/* This is the last block that we have visited serching for dirty pages
- */
-static RAMBlock *last_seen_block;
-/* This is the last block from where we have sent data */
-static RAMBlock *last_sent_block;
-static ram_addr_t last_offset;
 static QemuMutex migration_bitmap_mutex;
 static uint64_t migration_dirty_pages;
-static uint32_t last_version;
-static bool ram_bulk_stage;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -444,6 +453,7 @@ static void mig_throttle_guest_down(void)
 /**
  * xbzrle_cache_zero_page: insert a zero page in the XBZRLE cache
  *
+ * @rs: current RAM state
  * @current_addr: address for the zero page
  *
  * Update the xbzrle cache to reflect a page that's been sent as all 0.
@@ -452,9 +462,9 @@ static void mig_throttle_guest_down(void)
  * As a bonus, if the page wasn't in the cache it gets added so that
  * when a small write is made into the 0'd page it gets XBZRLE sent.
  */
-static void xbzrle_cache_zero_page(ram_addr_t current_addr)
+static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
 {
-    if (ram_bulk_stage || !migrate_use_xbzrle()) {
+    if (rs->ram_bulk_stage || !migrate_use_xbzrle()) {
         return;
     }
 
@@ -552,13 +562,14 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
  *
  * Returns the byte offset within memory region of the start of a dirty page
  *
+ * @rs: current RAM state
  * @rb: RAMBlock where to search for dirty pages
  * @start: starting address (typically so we can continue from previous page)
  * @ram_addr_abs: pointer into which to store the address of the dirty page
  *                within the global ram_addr space
  */
 static inline
-ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
+ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
                                        ram_addr_t start,
                                        ram_addr_t *ram_addr_abs)
 {
@@ -571,7 +582,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
     unsigned long next;
 
     bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
-    if (ram_bulk_stage && nr > base) {
+    if (rs->ram_bulk_stage && nr > base) {
         next = nr + 1;
     } else {
         next = find_next_bit(bitmap, size, nr);
@@ -761,6 +772,7 @@ static void ram_release_pages(MigrationState *ms, const char *rbname,
  *          >=0 - Number of pages written - this might legally be 0
  *                if xbzrle noticed the page was the same.
  *
+ * @rs: current RAM state
  * @ms: current migration state
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -768,8 +780,9 @@ static void ram_release_pages(MigrationState *ms, const char *rbname,
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
-                         bool last_stage, uint64_t *bytes_transferred)
+static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
+                         PageSearchStatus *pss, bool last_stage,
+                         uint64_t *bytes_transferred)
 {
     int pages = -1;
     uint64_t bytes_xmit;
@@ -795,7 +808,7 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
 
     current_addr = block->offset + offset;
 
-    if (block == last_sent_block) {
+    if (block == rs->last_sent_block) {
         offset |= RAM_SAVE_FLAG_CONTINUE;
     }
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
@@ -812,9 +825,9 @@ static int ram_save_page(MigrationState *ms, QEMUFile *f, PageSearchStatus *pss,
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
              */
-            xbzrle_cache_zero_page(current_addr);
+            xbzrle_cache_zero_page(rs, current_addr);
             ram_release_pages(ms, block->idstr, pss->offset, pages);
-        } else if (!ram_bulk_stage &&
+        } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
             pages = save_xbzrle_page(f, &p, current_addr, block,
                                      offset, last_stage, bytes_transferred);
@@ -946,6 +959,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
  *
  * Returns the number of pages written.
  *
+ * @rs: current RAM state
  * @ms: current migration state
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -953,7 +967,8 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
+static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
+                                    QEMUFile *f,
                                     PageSearchStatus *pss, bool last_stage,
                                     uint64_t *bytes_transferred)
 {
@@ -987,7 +1002,7 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
          * out, keeping this order is important, because the 'cont' flag
          * is used to avoid resending the block name.
          */
-        if (block != last_sent_block) {
+        if (block != rs->last_sent_block) {
             flush_compressed_data(f);
             pages = save_zero_page(f, block, offset, p, bytes_transferred);
             if (pages == -1) {
@@ -1029,19 +1044,20 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
  *
  * Returns if a page is found
  *
+ * @rs: current RAM state
  * @f: QEMUFile where to send the data
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
  * @ram_addr_abs: pointer into which to store the address of the dirty page
  *                within the global ram_addr space
  */
-static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
+static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
                              bool *again, ram_addr_t *ram_addr_abs)
 {
-    pss->offset = migration_bitmap_find_dirty(pss->block, pss->offset,
+    pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
                                               ram_addr_abs);
-    if (pss->complete_round && pss->block == last_seen_block &&
-        pss->offset >= last_offset) {
+    if (pss->complete_round && pss->block == rs->last_seen_block &&
+        pss->offset >= rs->last_offset) {
         /*
          * We've been once around the RAM and haven't found anything.
          * Give up.
@@ -1058,7 +1074,7 @@ static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
             pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
             /* Flag that we've looped */
             pss->complete_round = true;
-            ram_bulk_stage = false;
+            rs->ram_bulk_stage = false;
             if (migrate_use_xbzrle()) {
                 /* If xbzrle is on, stop using the data compression at this
                  * point. In theory, xbzrle can do better than compression.
@@ -1125,12 +1141,14 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
  *
  * Returns if a queued page is found
  *
+ * @rs: current RAM state
  * @ms: current migration state
  * @pss: data about the state of the current dirty page scan
  * @ram_addr_abs: pointer into which to store the address of the dirty page
  *                within the global ram_addr space
  */
-static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
+static bool get_queued_page(RAMState *rs, MigrationState *ms,
+                            PageSearchStatus *pss,
                             ram_addr_t *ram_addr_abs)
 {
     RAMBlock  *block;
@@ -1171,7 +1189,7 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
          * in (migration_bitmap_find_and_reset_dirty) that every page is
          * dirty, that's no longer true.
          */
-        ram_bulk_stage = false;
+        rs->ram_bulk_stage = false;
 
         /*
          * We want the background search to continue from the queued page
@@ -1282,6 +1300,7 @@ err:
  *
  * Returns the umber of pages written
  *
+ * @rs: current RAM state
  * @ms: current migration state
  * @f: QEMUFile where to send the data
  * @pss: data about the page we want to send
@@ -1289,7 +1308,7 @@ err:
  * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
+static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                                 PageSearchStatus *pss,
                                 bool last_stage,
                                 uint64_t *bytes_transferred,
@@ -1301,11 +1320,11 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
     if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
-            res = ram_save_compressed_page(ms, f, pss,
+            res = ram_save_compressed_page(rs, ms, f, pss,
                                            last_stage,
                                            bytes_transferred);
         } else {
-            res = ram_save_page(ms, f, pss, last_stage,
+            res = ram_save_page(rs, ms, f, pss, last_stage,
                                 bytes_transferred);
         }
 
@@ -1321,7 +1340,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
          * to the stream.
          */
         if (res > 0) {
-            last_sent_block = pss->block;
+            rs->last_sent_block = pss->block;
         }
     }
 
@@ -1339,6 +1358,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
  *
  * Returns the number of pages written or negative on error
  *
+ * @rs: current RAM state
  * @ms: current migration state
  * @f: QEMUFile where to send the data
  * @pss: data about the page we want to send
@@ -1346,7 +1366,7 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
  * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
+static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                               PageSearchStatus *pss,
                               bool last_stage,
                               uint64_t *bytes_transferred,
@@ -1356,7 +1376,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
     size_t pagesize = qemu_ram_pagesize(pss->block);
 
     do {
-        tmppages = ram_save_target_page(ms, f, pss, last_stage,
+        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
                                         bytes_transferred, dirty_ram_abs);
         if (tmppages < 0) {
             return tmppages;
@@ -1379,6 +1399,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
  *
  * Returns the number of pages written where zero means no dirty pages
  *
+ * @rs: current RAM state
  * @f: QEMUFile where to send the data
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
@@ -1387,7 +1408,7 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
  * pages in a host page that are dirty.
  */
 
-static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
+static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
                                    uint64_t *bytes_transferred)
 {
     PageSearchStatus pss;
@@ -1402,8 +1423,8 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
         return pages;
     }
 
-    pss.block = last_seen_block;
-    pss.offset = last_offset;
+    pss.block = rs->last_seen_block;
+    pss.offset = rs->last_offset;
     pss.complete_round = false;
 
     if (!pss.block) {
@@ -1412,22 +1433,22 @@ static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
 
     do {
         again = true;
-        found = get_queued_page(ms, &pss, &dirty_ram_abs);
+        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
 
         if (!found) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(f, &pss, &again, &dirty_ram_abs);
+            found = find_dirty_block(rs, f, &pss, &again, &dirty_ram_abs);
         }
 
         if (found) {
-            pages = ram_save_host_page(ms, f, &pss,
+            pages = ram_save_host_page(rs, ms, f, &pss,
                                        last_stage, bytes_transferred,
                                        dirty_ram_abs);
         }
     } while (!pages && again);
 
-    last_seen_block = pss.block;
-    last_offset = pss.offset;
+    rs->last_seen_block = pss.block;
+    rs->last_offset = pss.offset;
 
     return pages;
 }
@@ -1509,13 +1530,13 @@ static void ram_migration_cleanup(void *opaque)
     XBZRLE_cache_unlock();
 }
 
-static void reset_ram_globals(void)
+static void ram_state_reset(RAMState *rs)
 {
-    last_seen_block = NULL;
-    last_sent_block = NULL;
-    last_offset = 0;
-    last_version = ram_list.version;
-    ram_bulk_stage = true;
+    rs->last_seen_block = NULL;
+    rs->last_sent_block = NULL;
+    rs->last_offset = 0;
+    rs->last_version = ram_list.version;
+    rs->ram_bulk_stage = true;
 }
 
 #define MAX_WAIT 50 /* ms, half buffered_file limit */
@@ -1847,12 +1868,13 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
  */
 static int postcopy_chunk_hostpages(MigrationState *ms)
 {
+    RAMState *rs = &ram_state;
     struct RAMBlock *block;
 
     /* Easiest way to make sure we don't resume in the middle of a host-page */
-    last_seen_block = NULL;
-    last_sent_block = NULL;
-    last_offset     = 0;
+    rs->last_seen_block = NULL;
+    rs->last_sent_block = NULL;
+    rs->last_offset     = 0;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         unsigned long first = block->offset >> TARGET_PAGE_BITS;
@@ -1971,7 +1993,7 @@ err:
     return ret;
 }
 
-static int ram_save_init_globals(void)
+static int ram_save_init_globals(RAMState *rs)
 {
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
@@ -2017,7 +2039,7 @@ static int ram_save_init_globals(void)
     qemu_mutex_lock_ramlist();
     rcu_read_lock();
     bytes_transferred = 0;
-    reset_ram_globals();
+    ram_state_reset(rs);
 
     migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
     /* Skip setting bitmap if there is no RAM */
@@ -2064,11 +2086,12 @@ static int ram_save_init_globals(void)
  */
 static int ram_save_setup(QEMUFile *f, void *opaque)
 {
+    RAMState *rs = opaque;
     RAMBlock *block;
 
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
-        if (ram_save_init_globals() < 0) {
+        if (ram_save_init_globals(rs) < 0) {
             return -1;
          }
     }
@@ -2106,14 +2129,15 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
  */
 static int ram_save_iterate(QEMUFile *f, void *opaque)
 {
+    RAMState *rs = opaque;
     int ret;
     int i;
     int64_t t0;
     int done = 0;
 
     rcu_read_lock();
-    if (ram_list.version != last_version) {
-        reset_ram_globals();
+    if (ram_list.version != rs->last_version) {
+        ram_state_reset(rs);
     }
 
     /* Read version before ram_list.blocks */
@@ -2126,7 +2150,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(f, false, &bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2180,6 +2204,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
  */
 static int ram_save_complete(QEMUFile *f, void *opaque)
 {
+    RAMState *rs = opaque;
+
     rcu_read_lock();
 
     if (!migration_in_postcopy(migrate_get_current())) {
@@ -2194,7 +2220,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     while (true) {
         int pages;
 
-        pages = ram_find_and_save_block(f, !migration_in_colo_state(),
+        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
                                         &bytes_transferred);
         /* no more blocks to sent */
         if (pages == 0) {
@@ -2778,5 +2804,5 @@ static SaveVMHandlers savevm_ram_handlers = {
 void ram_mig_init(void)
 {
     qemu_mutex_init(&XBZRLE.lock);
-    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, NULL);
+    register_savevm_live(NULL, "ram", 0, 4, &savevm_ram_handlers, &ram_state);
 }
-- 
2.9.3


* [Qemu-devel] [PATCH 04/51] ram: Add dirty_rate_high_cnt to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (2 preceding siblings ...)
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 03/51] ram: Create RAMState Juan Quintela
@ 2017-03-23 20:44 ` Juan Quintela
  2017-03-27  7:24   ` Peter Xu
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState Juan Quintela
                   ` (47 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We need to add a parameter to several functions to make this work.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index a6e90d7..1d5bf22 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -45,8 +45,6 @@
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
 
-static int dirty_rate_high_cnt;
-
 static uint64_t bitmap_sync_count;
 
 /***********************************************************/
@@ -154,6 +152,8 @@ struct RAMState {
     uint32_t last_version;
     /* We are in the first round */
     bool ram_bulk_stage;
+    /* How many times we have dirtied too many pages */
+    int dirty_rate_high_cnt;
 };
 typedef struct RAMState RAMState;
 
@@ -651,7 +651,7 @@ uint64_t ram_pagesize_summary(void)
     return summary;
 }
 
-static void migration_bitmap_sync(void)
+static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
     MigrationState *s = migrate_get_current();
@@ -696,9 +696,9 @@ static void migration_bitmap_sync(void)
             if (s->dirty_pages_rate &&
                (num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - bytes_xfer_prev)/2) &&
-               (dirty_rate_high_cnt++ >= 2)) {
+               (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
-                    dirty_rate_high_cnt = 0;
+                    rs->dirty_rate_high_cnt = 0;
                     mig_throttle_guest_down();
              }
              bytes_xfer_prev = bytes_xfer_now;
@@ -1919,7 +1919,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     rcu_read_lock();
 
     /* This should be our last sync, the src is now paused */
-    migration_bitmap_sync();
+    migration_bitmap_sync(&ram_state);
 
     unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
     if (!unsentmap) {
@@ -1997,7 +1997,7 @@ static int ram_save_init_globals(RAMState *rs)
 {
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
-    dirty_rate_high_cnt = 0;
+    rs->dirty_rate_high_cnt = 0;
     bitmap_sync_count = 0;
     migration_bitmap_sync_init();
     qemu_mutex_init(&migration_bitmap_mutex);
@@ -2061,7 +2061,7 @@ static int ram_save_init_globals(RAMState *rs)
     migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
 
     memory_global_dirty_log_start();
-    migration_bitmap_sync();
+    migration_bitmap_sync(rs);
     qemu_mutex_unlock_ramlist();
     qemu_mutex_unlock_iothread();
     rcu_read_unlock();
@@ -2209,7 +2209,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     rcu_read_lock();
 
     if (!migration_in_postcopy(migrate_get_current())) {
-        migration_bitmap_sync();
+        migration_bitmap_sync(rs);
     }
 
     ram_control_before_iterate(f, RAM_CONTROL_FINISH);
@@ -2242,6 +2242,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
                              uint64_t *non_postcopiable_pending,
                              uint64_t *postcopiable_pending)
 {
+    RAMState *rs = opaque;
     uint64_t remaining_size;
 
     remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
@@ -2250,7 +2251,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         remaining_size < max_size) {
         qemu_mutex_lock_iothread();
         rcu_read_lock();
-        migration_bitmap_sync();
+        migration_bitmap_sync(rs);
         rcu_read_unlock();
         qemu_mutex_unlock_iothread();
         remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
-- 
2.9.3


* [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (3 preceding siblings ...)
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 04/51] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
@ 2017-03-23 20:44 ` Juan Quintela
  2017-03-27  7:34   ` Peter Xu
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 06/51] ram: Move start time " Juan Quintela
                   ` (46 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 1d5bf22..f811e81 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -45,8 +45,6 @@
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
 
-static uint64_t bitmap_sync_count;
-
 /***********************************************************/
 /* ram save/restore */
 
@@ -154,6 +152,8 @@ struct RAMState {
     bool ram_bulk_stage;
     /* How many times we have dirtied too many pages */
     int dirty_rate_high_cnt;
+    /* How many times we have synchronized the bitmap */
+    uint64_t bitmap_sync_count;
 };
 typedef struct RAMState RAMState;
 
@@ -471,7 +471,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
     cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
-                 bitmap_sync_count);
+                 rs->bitmap_sync_count);
 }
 
 #define ENCODING_FLAG_XBZRLE 0x1
@@ -483,6 +483,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
  *          0 means that page is identical to the one already sent
  *          -1 means that xbzrle would be longer than normal
  *
+ * @rs: current RAM state
  * @f: QEMUFile where to send the data
  * @current_data: contents of the page
  * @current_addr: addr of the page
@@ -491,7 +492,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
  * @last_stage: if we are at the completion stage
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
+static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
                             ram_addr_t current_addr, RAMBlock *block,
                             ram_addr_t offset, bool last_stage,
                             uint64_t *bytes_transferred)
@@ -499,11 +500,11 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
     int encoded_len = 0, bytes_xbzrle;
     uint8_t *prev_cached_page;
 
-    if (!cache_is_cached(XBZRLE.cache, current_addr, bitmap_sync_count)) {
+    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
         acct_info.xbzrle_cache_miss++;
         if (!last_stage) {
             if (cache_insert(XBZRLE.cache, current_addr, *current_data,
-                             bitmap_sync_count) == -1) {
+                             rs->bitmap_sync_count) == -1) {
                 return -1;
             } else {
                 /* update *current_data when the page has been
@@ -658,7 +659,7 @@ static void migration_bitmap_sync(RAMState *rs)
     int64_t end_time;
     int64_t bytes_xfer_now;
 
-    bitmap_sync_count++;
+    rs->bitmap_sync_count++;
 
     if (!bytes_xfer_prev) {
         bytes_xfer_prev = ram_bytes_transferred();
@@ -720,9 +721,9 @@ static void migration_bitmap_sync(RAMState *rs)
         start_time = end_time;
         num_dirty_pages_period = 0;
     }
-    s->dirty_sync_count = bitmap_sync_count;
+    s->dirty_sync_count = rs->bitmap_sync_count;
     if (migrate_use_events()) {
-        qapi_event_send_migration_pass(bitmap_sync_count, NULL);
+        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
     }
 }
 
@@ -829,7 +830,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             ram_release_pages(ms, block->idstr, pss->offset, pages);
         } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
-            pages = save_xbzrle_page(f, &p, current_addr, block,
+            pages = save_xbzrle_page(rs, f, &p, current_addr, block,
                                      offset, last_stage, bytes_transferred);
             if (!last_stage) {
                 /* Can't send this cached data async, since the cache page
@@ -1998,7 +1999,7 @@ static int ram_save_init_globals(RAMState *rs)
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
     rs->dirty_rate_high_cnt = 0;
-    bitmap_sync_count = 0;
+    rs->bitmap_sync_count = 0;
     migration_bitmap_sync_init();
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3


* [Qemu-devel] [PATCH 06/51] ram: Move start time into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (4 preceding siblings ...)
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState Juan Quintela
@ 2017-03-23 20:44 ` Juan Quintela
  2017-03-27  7:54   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 07/51] ram: Move bytes_xfer_prev " Juan Quintela
                   ` (45 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index f811e81..5881805 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -154,6 +154,9 @@ struct RAMState {
     int dirty_rate_high_cnt;
     /* How many times we have synchronized the bitmap */
     uint64_t bitmap_sync_count;
+    /* these variables are used for bitmap sync */
+    /* last time we did a full bitmap_sync */
+    int64_t start_time;
 };
 typedef struct RAMState RAMState;
 
@@ -617,14 +620,13 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static int64_t start_time;
 static int64_t bytes_xfer_prev;
 static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
-static void migration_bitmap_sync_init(void)
+static void migration_bitmap_sync_init(RAMState *rs)
 {
-    start_time = 0;
+    rs->start_time = 0;
     bytes_xfer_prev = 0;
     num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
@@ -665,8 +667,8 @@ static void migration_bitmap_sync(RAMState *rs)
         bytes_xfer_prev = ram_bytes_transferred();
     }
 
-    if (!start_time) {
-        start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+    if (!rs->start_time) {
+        rs->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     }
 
     trace_migration_bitmap_sync_start();
@@ -685,7 +687,7 @@ static void migration_bitmap_sync(RAMState *rs)
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
     /* more than 1 second = 1000 millisecons */
-    if (end_time > start_time + 1000) {
+    if (end_time > rs->start_time + 1000) {
         if (migrate_auto_converge()) {
             /* The following detection logic can be refined later. For now:
                Check to see if the dirtied bytes is 50% more than the approx.
@@ -716,9 +718,9 @@ static void migration_bitmap_sync(RAMState *rs)
             xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = num_dirty_pages_period * 1000
-            / (end_time - start_time);
+            / (end_time - rs->start_time);
         s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
-        start_time = end_time;
+        rs->start_time = end_time;
         num_dirty_pages_period = 0;
     }
     s->dirty_sync_count = rs->bitmap_sync_count;
@@ -2000,7 +2002,7 @@ static int ram_save_init_globals(RAMState *rs)
 
     rs->dirty_rate_high_cnt = 0;
     rs->bitmap_sync_count = 0;
-    migration_bitmap_sync_init();
+    migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
     if (migrate_use_xbzrle()) {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 07/51] ram: Move bytes_xfer_prev into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (5 preceding siblings ...)
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 06/51] ram: Move start time " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27  8:04   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 08/51] ram: Move num_dirty_pages_period " Juan Quintela
                   ` (44 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 5881805..5e53b47 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -157,6 +157,8 @@ struct RAMState {
     /* this variables are used for bitmap sync */
     /* last time we did a full bitmap_sync */
     int64_t start_time;
+    /* bytes transferred at start_time */
+    int64_t bytes_xfer_prev;
 };
 typedef struct RAMState RAMState;
 
@@ -620,14 +622,13 @@ static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static int64_t bytes_xfer_prev;
 static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
 static void migration_bitmap_sync_init(RAMState *rs)
 {
     rs->start_time = 0;
-    bytes_xfer_prev = 0;
+    rs->bytes_xfer_prev = 0;
     num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
@@ -663,8 +664,8 @@ static void migration_bitmap_sync(RAMState *rs)
 
     rs->bitmap_sync_count++;
 
-    if (!bytes_xfer_prev) {
-        bytes_xfer_prev = ram_bytes_transferred();
+    if (!rs->bytes_xfer_prev) {
+        rs->bytes_xfer_prev = ram_bytes_transferred();
     }
 
     if (!rs->start_time) {
@@ -698,13 +699,13 @@ static void migration_bitmap_sync(RAMState *rs)
 
             if (s->dirty_pages_rate &&
                (num_dirty_pages_period * TARGET_PAGE_SIZE >
-                   (bytes_xfer_now - bytes_xfer_prev)/2) &&
+                   (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
                (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
                     rs->dirty_rate_high_cnt = 0;
                     mig_throttle_guest_down();
              }
-             bytes_xfer_prev = bytes_xfer_now;
+             rs->bytes_xfer_prev = bytes_xfer_now;
         }
 
         if (migrate_use_xbzrle()) {
-- 
2.9.3


* [Qemu-devel] [PATCH 08/51] ram: Move num_dirty_pages_period into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (6 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 07/51] ram: Move bytes_xfer_prev " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27  8:07   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 09/51] ram: Move xbzrle_cache_miss_prev " Juan Quintela
                   ` (43 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 5e53b47..748d047 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -159,6 +159,8 @@ struct RAMState {
     int64_t start_time;
     /* bytes transferred at start_time */
     int64_t bytes_xfer_prev;
+    /* number of dirty pages since start_time */
+    int64_t num_dirty_pages_period;
 };
 typedef struct RAMState RAMState;
 
@@ -612,13 +614,13 @@ static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
     return ret;
 }
 
-static int64_t num_dirty_pages_period;
-static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
+static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
+                                        ram_addr_t length)
 {
     unsigned long *bitmap;
     bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
     migration_dirty_pages += cpu_physical_memory_sync_dirty_bitmap(bitmap,
-                             start, length, &num_dirty_pages_period);
+                             start, length, &rs->num_dirty_pages_period);
 }
 
 /* Fix me: there are too many global variables used in migration process. */
@@ -629,7 +631,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
 {
     rs->start_time = 0;
     rs->bytes_xfer_prev = 0;
-    num_dirty_pages_period = 0;
+    rs->num_dirty_pages_period = 0;
     xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
 }
@@ -678,12 +680,12 @@ static void migration_bitmap_sync(RAMState *rs)
     qemu_mutex_lock(&migration_bitmap_mutex);
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-        migration_bitmap_sync_range(block->offset, block->used_length);
+        migration_bitmap_sync_range(rs, block->offset, block->used_length);
     }
     rcu_read_unlock();
     qemu_mutex_unlock(&migration_bitmap_mutex);
 
-    trace_migration_bitmap_sync_end(num_dirty_pages_period);
+    trace_migration_bitmap_sync_end(rs->num_dirty_pages_period);
 
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
@@ -698,7 +700,7 @@ static void migration_bitmap_sync(RAMState *rs)
             bytes_xfer_now = ram_bytes_transferred();
 
             if (s->dirty_pages_rate &&
-               (num_dirty_pages_period * TARGET_PAGE_SIZE >
+               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
                (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
@@ -718,11 +720,11 @@ static void migration_bitmap_sync(RAMState *rs)
             iterations_prev = acct_info.iterations;
             xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
-        s->dirty_pages_rate = num_dirty_pages_period * 1000
+        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
         s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
         rs->start_time = end_time;
-        num_dirty_pages_period = 0;
+        rs->num_dirty_pages_period = 0;
     }
     s->dirty_sync_count = rs->bitmap_sync_count;
     if (migrate_use_events()) {
-- 
2.9.3


* [Qemu-devel] [PATCH 09/51] ram: Move xbzrle_cache_miss_prev into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (7 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 08/51] ram: Move num_dirty_pages_period " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 10/51] ram: Move iterations_prev " Juan Quintela
                   ` (42 subsequent siblings)
  51 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 migration/ram.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 748d047..826ba6d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -161,6 +161,8 @@ struct RAMState {
     int64_t bytes_xfer_prev;
     /* number of dirty pages since start_time */
     int64_t num_dirty_pages_period;
+    /* xbzrle misses since the beginning of the period */
+    uint64_t xbzrle_cache_miss_prev;
 };
 typedef struct RAMState RAMState;
 
@@ -624,7 +626,6 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
 }
 
 /* Fix me: there are too many global variables used in migration process. */
-static uint64_t xbzrle_cache_miss_prev;
 static uint64_t iterations_prev;
 
 static void migration_bitmap_sync_init(RAMState *rs)
@@ -632,7 +633,7 @@ static void migration_bitmap_sync_init(RAMState *rs)
     rs->start_time = 0;
     rs->bytes_xfer_prev = 0;
     rs->num_dirty_pages_period = 0;
-    xbzrle_cache_miss_prev = 0;
+    rs->xbzrle_cache_miss_prev = 0;
     iterations_prev = 0;
 }
 
@@ -714,11 +715,11 @@ static void migration_bitmap_sync(RAMState *rs)
             if (iterations_prev != acct_info.iterations) {
                 acct_info.xbzrle_cache_miss_rate =
                    (double)(acct_info.xbzrle_cache_miss -
-                            xbzrle_cache_miss_prev) /
+                            rs->xbzrle_cache_miss_prev) /
                    (acct_info.iterations - iterations_prev);
             }
             iterations_prev = acct_info.iterations;
-            xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
+            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
-- 
2.9.3


* [Qemu-devel] [PATCH 10/51] ram: Move iterations_prev into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (8 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 09/51] ram: Move xbzrle_cache_miss_prev " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 11/51] ram: Move dup_pages " Juan Quintela
                   ` (41 subsequent siblings)
  51 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 migration/ram.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 826ba6d..d8428c1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -163,6 +163,8 @@ struct RAMState {
     int64_t num_dirty_pages_period;
     /* xbzrle misses since the beginning of the period */
     uint64_t xbzrle_cache_miss_prev;
+    /* number of iterations at the beginning of period */
+    uint64_t iterations_prev;
 };
 typedef struct RAMState RAMState;
 
@@ -625,16 +627,13 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
                              start, length, &rs->num_dirty_pages_period);
 }
 
-/* Fix me: there are too many global variables used in migration process. */
-static uint64_t iterations_prev;
-
 static void migration_bitmap_sync_init(RAMState *rs)
 {
     rs->start_time = 0;
     rs->bytes_xfer_prev = 0;
     rs->num_dirty_pages_period = 0;
     rs->xbzrle_cache_miss_prev = 0;
-    iterations_prev = 0;
+    rs->iterations_prev = 0;
 }
 
 /**
@@ -712,13 +711,13 @@ static void migration_bitmap_sync(RAMState *rs)
         }
 
         if (migrate_use_xbzrle()) {
-            if (iterations_prev != acct_info.iterations) {
+            if (rs->iterations_prev != acct_info.iterations) {
                 acct_info.xbzrle_cache_miss_rate =
                    (double)(acct_info.xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
-                   (acct_info.iterations - iterations_prev);
+                   (acct_info.iterations - rs->iterations_prev);
             }
-            iterations_prev = acct_info.iterations;
+            rs->iterations_prev = acct_info.iterations;
             rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
-- 
2.9.3


* [Qemu-devel] [PATCH 11/51] ram: Move dup_pages into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (9 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 10/51] ram: Move iterations_prev " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27  9:23   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 12/51] ram: Remove unused dup_mig_bytes_transferred() Juan Quintela
                   ` (40 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Once there, rename it to reflect its actual meaning: zero_pages.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index d8428c1..0da133f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -165,6 +165,9 @@ struct RAMState {
     uint64_t xbzrle_cache_miss_prev;
     /* number of iterations at the beginning of period */
     uint64_t iterations_prev;
+    /* Accounting fields */
+    /* number of zero pages.  It used to be pages filled by the same char. */
+    uint64_t zero_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -172,7 +175,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t dup_pages;
     uint64_t skipped_pages;
     uint64_t norm_pages;
     uint64_t iterations;
@@ -192,12 +194,12 @@ static void acct_clear(void)
 
 uint64_t dup_mig_bytes_transferred(void)
 {
-    return acct_info.dup_pages * TARGET_PAGE_SIZE;
+    return ram_state.zero_pages * TARGET_PAGE_SIZE;
 }
 
 uint64_t dup_mig_pages_transferred(void)
 {
-    return acct_info.dup_pages;
+    return ram_state.zero_pages;
 }
 
 uint64_t skipped_mig_bytes_transferred(void)
@@ -737,19 +739,21 @@ static void migration_bitmap_sync(RAMState *rs)
  *
  * Returns the number of pages written.
  *
+ * @rs: current RAM state
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @p: pointer to the page
  * @bytes_transferred: increase it with the number of transferred bytes
  */
-static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
+static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
+                          ram_addr_t offset,
                           uint8_t *p, uint64_t *bytes_transferred)
 {
     int pages = -1;
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
-        acct_info.dup_pages++;
+        rs->zero_pages++;
         *bytes_transferred += save_page_header(f, block,
                                                offset | RAM_SAVE_FLAG_COMPRESS);
         qemu_put_byte(f, 0);
@@ -822,11 +826,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             if (bytes_xmit > 0) {
                 acct_info.norm_pages++;
             } else if (bytes_xmit == 0) {
-                acct_info.dup_pages++;
+                rs->zero_pages++;
             }
         }
     } else {
-        pages = save_zero_page(f, block, offset, p, bytes_transferred);
+        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
         if (pages > 0) {
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
@@ -998,7 +1002,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             if (bytes_xmit > 0) {
                 acct_info.norm_pages++;
             } else if (bytes_xmit == 0) {
-                acct_info.dup_pages++;
+                rs->zero_pages++;
             }
         }
     } else {
@@ -1010,7 +1014,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          */
         if (block != rs->last_sent_block) {
             flush_compressed_data(f);
-            pages = save_zero_page(f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
                 bytes_xmit = save_page_header(f, block, offset |
@@ -1031,7 +1035,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             }
         } else {
             offset |= RAM_SAVE_FLAG_CONTINUE;
-            pages = save_zero_page(f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
                 pages = compress_page_with_multi_thread(f, block, offset,
                                                         bytes_transferred);
@@ -1462,8 +1466,10 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
 void acct_update_position(QEMUFile *f, size_t size, bool zero)
 {
     uint64_t pages = size / TARGET_PAGE_SIZE;
+    RAMState *rs = &ram_state;
+
     if (zero) {
-        acct_info.dup_pages += pages;
+        rs->zero_pages += pages;
     } else {
         acct_info.norm_pages += pages;
         bytes_transferred += size;
@@ -2005,6 +2011,7 @@ static int ram_save_init_globals(RAMState *rs)
 
     rs->dirty_rate_high_cnt = 0;
     rs->bitmap_sync_count = 0;
+    rs->zero_pages = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3


* [Qemu-devel] [PATCH 12/51] ram: Remove unused dup_mig_bytes_transferred()
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (10 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 11/51] ram: Move dup_pages " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27  9:24   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 13/51] ram: Remove unused pages_skipped variable Juan Quintela
                   ` (39 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 include/migration/migration.h | 1 -
 migration/ram.c               | 5 -----
 2 files changed, 6 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 5720c88..3e6bb68 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -276,7 +276,6 @@ void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 
-uint64_t dup_mig_bytes_transferred(void);
 uint64_t dup_mig_pages_transferred(void);
 uint64_t skipped_mig_bytes_transferred(void);
 uint64_t skipped_mig_pages_transferred(void);
diff --git a/migration/ram.c b/migration/ram.c
index 0da133f..af385c4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -192,11 +192,6 @@ static void acct_clear(void)
     memset(&acct_info, 0, sizeof(acct_info));
 }
 
-uint64_t dup_mig_bytes_transferred(void)
-{
-    return ram_state.zero_pages * TARGET_PAGE_SIZE;
-}
-
 uint64_t dup_mig_pages_transferred(void)
 {
     return ram_state.zero_pages;
-- 
2.9.3


* [Qemu-devel] [PATCH 13/51] ram: Remove unused pages_skipped variable
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (11 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 12/51] ram: Remove unused dup_mig_bytes_transferred() Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27  9:26   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 14/51] ram: Move norm_pages to RAMState Juan Quintela
                   ` (38 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

For compatibility, we still need to report a value, so hardcode it to
zero and comment the fact.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 include/migration/migration.h |  2 --
 migration/migration.c         |  3 ++-
 migration/ram.c               | 11 -----------
 3 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 3e6bb68..9c83951 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -277,8 +277,6 @@ void free_xbzrle_decoded_buf(void);
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 
 uint64_t dup_mig_pages_transferred(void);
-uint64_t skipped_mig_bytes_transferred(void);
-uint64_t skipped_mig_pages_transferred(void);
 uint64_t norm_mig_bytes_transferred(void);
 uint64_t norm_mig_pages_transferred(void);
 uint64_t xbzrle_mig_bytes_transferred(void);
diff --git a/migration/migration.c b/migration/migration.c
index 54060f7..c078157 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -643,7 +643,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->transferred = ram_bytes_transferred();
     info->ram->total = ram_bytes_total();
     info->ram->duplicate = dup_mig_pages_transferred();
-    info->ram->skipped = skipped_mig_pages_transferred();
+    /* legacy value.  It is not used anymore */
+    info->ram->skipped = 0;
     info->ram->normal = norm_mig_pages_transferred();
     info->ram->normal_bytes = norm_mig_bytes_transferred();
     info->ram->mbps = s->mbps;
diff --git a/migration/ram.c b/migration/ram.c
index af385c4..57f5858 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -175,7 +175,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t skipped_pages;
     uint64_t norm_pages;
     uint64_t iterations;
     uint64_t xbzrle_bytes;
@@ -197,16 +196,6 @@ uint64_t dup_mig_pages_transferred(void)
     return ram_state.zero_pages;
 }
 
-uint64_t skipped_mig_bytes_transferred(void)
-{
-    return acct_info.skipped_pages * TARGET_PAGE_SIZE;
-}
-
-uint64_t skipped_mig_pages_transferred(void)
-{
-    return acct_info.skipped_pages;
-}
-
 uint64_t norm_mig_bytes_transferred(void)
 {
     return acct_info.norm_pages * TARGET_PAGE_SIZE;
-- 
2.9.3


* [Qemu-devel] [PATCH 14/51] ram: Move norm_pages to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (12 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 13/51] ram: Remove unused pages_skipped variable Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27  9:43   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 15/51] ram: Remove norm_mig_bytes_transferred Juan Quintela
                   ` (37 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 57f5858..2c36729 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -168,6 +168,8 @@ struct RAMState {
     /* Accounting fields */
     /* number of zero pages.  It used to be pages filled by the same char. */
     uint64_t zero_pages;
+    /* number of normal transferred pages */
+    uint64_t norm_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -175,7 +177,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t norm_pages;
     uint64_t iterations;
     uint64_t xbzrle_bytes;
     uint64_t xbzrle_pages;
@@ -198,12 +199,12 @@ uint64_t dup_mig_pages_transferred(void)
 
 uint64_t norm_mig_bytes_transferred(void)
 {
-    return acct_info.norm_pages * TARGET_PAGE_SIZE;
+    return ram_state.norm_pages * TARGET_PAGE_SIZE;
 }
 
 uint64_t norm_mig_pages_transferred(void)
 {
-    return acct_info.norm_pages;
+    return ram_state.norm_pages;
 }
 
 uint64_t xbzrle_mig_bytes_transferred(void)
@@ -808,7 +809,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
-                acct_info.norm_pages++;
+                rs->norm_pages++;
             } else if (bytes_xmit == 0) {
                 rs->zero_pages++;
             }
@@ -847,7 +848,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         }
         *bytes_transferred += TARGET_PAGE_SIZE;
         pages = 1;
-        acct_info.norm_pages++;
+        rs->norm_pages++;
     }
 
     XBZRLE_cache_unlock();
@@ -914,8 +915,8 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
     param->offset = offset;
 }
 
-static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
-                                           ram_addr_t offset,
+static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
+                                           RAMBlock *block, ram_addr_t offset,
                                            uint64_t *bytes_transferred)
 {
     int idx, thread_count, bytes_xmit = -1, pages = -1;
@@ -932,7 +933,7 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
                 qemu_cond_signal(&comp_param[idx].cond);
                 qemu_mutex_unlock(&comp_param[idx].mutex);
                 pages = 1;
-                acct_info.norm_pages++;
+                rs->norm_pages++;
                 *bytes_transferred += bytes_xmit;
                 break;
             }
@@ -984,7 +985,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
-                acct_info.norm_pages++;
+                rs->norm_pages++;
             } else if (bytes_xmit == 0) {
                 rs->zero_pages++;
             }
@@ -1007,7 +1008,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                                                  migrate_compress_level());
                 if (blen > 0) {
                     *bytes_transferred += bytes_xmit + blen;
-                    acct_info.norm_pages++;
+                    rs->norm_pages++;
                     pages = 1;
                 } else {
                     qemu_file_set_error(f, blen);
@@ -1021,7 +1022,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             offset |= RAM_SAVE_FLAG_CONTINUE;
             pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
-                pages = compress_page_with_multi_thread(f, block, offset,
+                pages = compress_page_with_multi_thread(rs, f, block, offset,
                                                         bytes_transferred);
             } else {
                 ram_release_pages(ms, block->idstr, pss->offset, pages);
@@ -1455,7 +1456,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     if (zero) {
         rs->zero_pages += pages;
     } else {
-        acct_info.norm_pages += pages;
+        rs->norm_pages += pages;
         bytes_transferred += size;
         qemu_update_position(f, size);
     }
@@ -1996,6 +1997,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->dirty_rate_high_cnt = 0;
     rs->bitmap_sync_count = 0;
     rs->zero_pages = 0;
+    rs->norm_pages = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3


* [Qemu-devel] [PATCH 15/51] ram: Remove norm_mig_bytes_transferred
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (13 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 14/51] ram: Move norm_pages to RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 16/51] ram: Move iterations into RAMState Juan Quintela
                   ` (36 subsequent siblings)
  51 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Its value can be calculated from the other exported counters.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 include/migration/migration.h | 1 -
 migration/migration.c         | 3 ++-
 migration/ram.c               | 5 -----
 3 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 9c83951..84cef4b 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -277,7 +277,6 @@ void free_xbzrle_decoded_buf(void);
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
 
 uint64_t dup_mig_pages_transferred(void);
-uint64_t norm_mig_bytes_transferred(void);
 uint64_t norm_mig_pages_transferred(void);
 uint64_t xbzrle_mig_bytes_transferred(void);
 uint64_t xbzrle_mig_pages_transferred(void);
diff --git a/migration/migration.c b/migration/migration.c
index c078157..e532430 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -646,7 +646,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     /* legacy value.  It is not used anymore */
     info->ram->skipped = 0;
     info->ram->normal = norm_mig_pages_transferred();
-    info->ram->normal_bytes = norm_mig_bytes_transferred();
+    info->ram->normal_bytes = norm_mig_pages_transferred() *
+        (1ul << qemu_target_page_bits());
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = s->dirty_sync_count;
     info->ram->postcopy_requests = s->postcopy_requests;
diff --git a/migration/ram.c b/migration/ram.c
index 2c36729..9fa3bd7 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -197,11 +197,6 @@ uint64_t dup_mig_pages_transferred(void)
     return ram_state.zero_pages;
 }
 
-uint64_t norm_mig_bytes_transferred(void)
-{
-    return ram_state.norm_pages * TARGET_PAGE_SIZE;
-}
-
 uint64_t norm_mig_pages_transferred(void)
 {
     return ram_state.norm_pages;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 16/51] ram: Move iterations into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (14 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 15/51] ram: Remove norm_mig_bytes_transferred Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-27 10:46   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes " Juan Quintela
                   ` (35 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 9fa3bd7..690ca8f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -170,6 +170,8 @@ struct RAMState {
     uint64_t zero_pages;
     /* number of normal transferred pages */
     uint64_t norm_pages;
+    /* Iterations since start */
+    uint64_t iterations;
 };
 typedef struct RAMState RAMState;
 
@@ -177,7 +179,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t iterations;
     uint64_t xbzrle_bytes;
     uint64_t xbzrle_pages;
     uint64_t xbzrle_cache_miss;
@@ -693,13 +694,13 @@ static void migration_bitmap_sync(RAMState *rs)
         }
 
         if (migrate_use_xbzrle()) {
-            if (rs->iterations_prev != acct_info.iterations) {
+            if (rs->iterations_prev != rs->iterations) {
                 acct_info.xbzrle_cache_miss_rate =
                    (double)(acct_info.xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
-                   (acct_info.iterations - rs->iterations_prev);
+                   (rs->iterations - rs->iterations_prev);
             }
-            rs->iterations_prev = acct_info.iterations;
+            rs->iterations_prev = rs->iterations;
             rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
@@ -1993,6 +1994,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->bitmap_sync_count = 0;
     rs->zero_pages = 0;
     rs->norm_pages = 0;
+    rs->iterations = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
@@ -2150,7 +2152,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
             done = 1;
             break;
         }
-        acct_info.iterations++;
+        rs->iterations++;
 
         /* we want to check in the 1st loop, just in case it was the 1st time
            and we had to sync the dirty bitmap.
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (15 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 16/51] ram: Move iterations into RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 10:12   ` Dr. David Alan Gilbert
  2017-03-27 10:48   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages " Juan Quintela
                   ` (34 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 690ca8f..721fd66 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -172,6 +172,8 @@ struct RAMState {
     uint64_t norm_pages;
     /* Iterations since start */
     uint64_t iterations;
+    /* xbzrle transmitted bytes */
+    uint64_t xbzrle_bytes;
 };
 typedef struct RAMState RAMState;
 
@@ -179,7 +181,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t xbzrle_bytes;
     uint64_t xbzrle_pages;
     uint64_t xbzrle_cache_miss;
     double xbzrle_cache_miss_rate;
@@ -205,7 +206,7 @@ uint64_t norm_mig_pages_transferred(void)
 
 uint64_t xbzrle_mig_bytes_transferred(void)
 {
-    return acct_info.xbzrle_bytes;
+    return ram_state.xbzrle_bytes;
 }
 
 uint64_t xbzrle_mig_pages_transferred(void)
@@ -544,7 +545,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
     qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
     bytes_xbzrle += encoded_len + 1 + 2;
     acct_info.xbzrle_pages++;
-    acct_info.xbzrle_bytes += bytes_xbzrle;
+    rs->xbzrle_bytes += bytes_xbzrle;
     *bytes_transferred += bytes_xbzrle;
 
     return 1;
@@ -1995,6 +1996,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->zero_pages = 0;
     rs->norm_pages = 0;
     rs->iterations = 0;
+    rs->xbzrle_bytes = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (16 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 10:13   ` Dr. David Alan Gilbert
  2017-03-27 10:59   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss " Juan Quintela
                   ` (33 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 721fd66..b4e647a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -174,6 +174,8 @@ struct RAMState {
     uint64_t iterations;
     /* xbzrle transmitted bytes */
     uint64_t xbzrle_bytes;
+    /* xbzrle transmitted pages */
+    uint64_t xbzrle_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -181,7 +183,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t xbzrle_pages;
     uint64_t xbzrle_cache_miss;
     double xbzrle_cache_miss_rate;
     uint64_t xbzrle_overflows;
@@ -211,7 +212,7 @@ uint64_t xbzrle_mig_bytes_transferred(void)
 
 uint64_t xbzrle_mig_pages_transferred(void)
 {
-    return acct_info.xbzrle_pages;
+    return ram_state.xbzrle_pages;
 }
 
 uint64_t xbzrle_mig_pages_cache_miss(void)
@@ -544,7 +545,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
     qemu_put_be16(f, encoded_len);
     qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
     bytes_xbzrle += encoded_len + 1 + 2;
-    acct_info.xbzrle_pages++;
+    rs->xbzrle_pages++;
     rs->xbzrle_bytes += bytes_xbzrle;
     *bytes_transferred += bytes_xbzrle;
 
@@ -1997,6 +1998,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->norm_pages = 0;
     rs->iterations = 0;
     rs->xbzrle_bytes = 0;
+    rs->xbzrle_pages = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (17 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 10:15   ` Dr. David Alan Gilbert
  2017-03-27 11:00   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate " Juan Quintela
                   ` (32 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b4e647a..cc19406 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -176,6 +176,8 @@ struct RAMState {
     uint64_t xbzrle_bytes;
     /* xbzrle transmitted pages */
     uint64_t xbzrle_pages;
+    /* xbzrle number of cache misses */
+    uint64_t xbzrle_cache_miss;
 };
 typedef struct RAMState RAMState;
 
@@ -183,7 +185,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    uint64_t xbzrle_cache_miss;
     double xbzrle_cache_miss_rate;
     uint64_t xbzrle_overflows;
 } AccountingInfo;
@@ -217,7 +218,7 @@ uint64_t xbzrle_mig_pages_transferred(void)
 
 uint64_t xbzrle_mig_pages_cache_miss(void)
 {
-    return acct_info.xbzrle_cache_miss;
+    return ram_state.xbzrle_cache_miss;
 }
 
 double xbzrle_mig_cache_miss_rate(void)
@@ -497,7 +498,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
     uint8_t *prev_cached_page;
 
     if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
-        acct_info.xbzrle_cache_miss++;
+        rs->xbzrle_cache_miss++;
         if (!last_stage) {
             if (cache_insert(XBZRLE.cache, current_addr, *current_data,
                              rs->bitmap_sync_count) == -1) {
@@ -698,12 +699,12 @@ static void migration_bitmap_sync(RAMState *rs)
         if (migrate_use_xbzrle()) {
             if (rs->iterations_prev != rs->iterations) {
                 acct_info.xbzrle_cache_miss_rate =
-                   (double)(acct_info.xbzrle_cache_miss -
+                   (double)(rs->xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
                    (rs->iterations - rs->iterations_prev);
             }
             rs->iterations_prev = rs->iterations;
-            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
+            rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
@@ -1999,6 +2000,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->iterations = 0;
     rs->xbzrle_bytes = 0;
     rs->xbzrle_pages = 0;
+    rs->xbzrle_cache_miss = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (18 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 10:17   ` Dr. David Alan Gilbert
  2017-03-27 11:01   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows " Juan Quintela
                   ` (31 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index cc19406..c398ff9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -178,6 +178,8 @@ struct RAMState {
     uint64_t xbzrle_pages;
     /* xbzrle number of cache misses */
     uint64_t xbzrle_cache_miss;
+    /* xbzrle miss rate */
+    double xbzrle_cache_miss_rate;
 };
 typedef struct RAMState RAMState;
 
@@ -185,7 +187,6 @@ static RAMState ram_state;
 
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
-    double xbzrle_cache_miss_rate;
     uint64_t xbzrle_overflows;
 } AccountingInfo;
 
@@ -223,7 +224,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void)
 
 double xbzrle_mig_cache_miss_rate(void)
 {
-    return acct_info.xbzrle_cache_miss_rate;
+    return ram_state.xbzrle_cache_miss_rate;
 }
 
 uint64_t xbzrle_mig_pages_overflow(void)
@@ -698,7 +699,7 @@ static void migration_bitmap_sync(RAMState *rs)
 
         if (migrate_use_xbzrle()) {
             if (rs->iterations_prev != rs->iterations) {
-                acct_info.xbzrle_cache_miss_rate =
+                rs->xbzrle_cache_miss_rate =
                    (double)(rs->xbzrle_cache_miss -
                             rs->xbzrle_cache_miss_prev) /
                    (rs->iterations - rs->iterations_prev);
@@ -2001,6 +2002,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->xbzrle_bytes = 0;
     rs->xbzrle_pages = 0;
     rs->xbzrle_cache_miss = 0;
+    rs->xbzrle_cache_miss_rate = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (19 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 10:22   ` Dr. David Alan Gilbert
  2017-03-27 11:03   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 22/51] ram: Move migration_dirty_pages to RAMState Juan Quintela
                   ` (30 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Once there, remove the now unused AccountingInfo struct and var.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c398ff9..3292eb0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -180,23 +180,13 @@ struct RAMState {
     uint64_t xbzrle_cache_miss;
     /* xbzrle miss rate */
     double xbzrle_cache_miss_rate;
+    /* xbzrle number of overflows */
+    uint64_t xbzrle_overflows;
 };
 typedef struct RAMState RAMState;
 
 static RAMState ram_state;
 
-/* accounting for migration statistics */
-typedef struct AccountingInfo {
-    uint64_t xbzrle_overflows;
-} AccountingInfo;
-
-static AccountingInfo acct_info;
-
-static void acct_clear(void)
-{
-    memset(&acct_info, 0, sizeof(acct_info));
-}
-
 uint64_t dup_mig_pages_transferred(void)
 {
     return ram_state.zero_pages;
@@ -229,7 +219,7 @@ double xbzrle_mig_cache_miss_rate(void)
 
 uint64_t xbzrle_mig_pages_overflow(void)
 {
-    return acct_info.xbzrle_overflows;
+    return ram_state.xbzrle_overflows;
 }
 
 static QemuMutex migration_bitmap_mutex;
@@ -527,7 +517,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
         return 0;
     } else if (encoded_len == -1) {
         trace_save_xbzrle_page_overflow();
-        acct_info.xbzrle_overflows++;
+        rs->xbzrle_overflows++;
         /* update data in the cache */
         if (!last_stage) {
             memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
@@ -2003,6 +1993,7 @@ static int ram_save_init_globals(RAMState *rs)
     rs->xbzrle_pages = 0;
     rs->xbzrle_cache_miss = 0;
     rs->xbzrle_cache_miss_rate = 0;
+    rs->xbzrle_overflows = 0;
     migration_bitmap_sync_init(rs);
     qemu_mutex_init(&migration_bitmap_mutex);
 
@@ -2033,8 +2024,6 @@ static int ram_save_init_globals(RAMState *rs)
             XBZRLE.encoded_buf = NULL;
             return -1;
         }
-
-        acct_clear();
     }
 
     /* For memory_global_dirty_log_start below.  */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 22/51] ram: Move migration_dirty_pages to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (20 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  6:24   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset Juan Quintela
                   ` (29 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 3292eb0..c6ba92c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -182,6 +182,8 @@ struct RAMState {
     double xbzrle_cache_miss_rate;
     /* xbzrle number of overflows */
     uint64_t xbzrle_overflows;
+    /* number of dirty bits in the bitmap */
+    uint64_t migration_dirty_pages;
 };
 typedef struct RAMState RAMState;
 
@@ -222,8 +224,12 @@ uint64_t xbzrle_mig_pages_overflow(void)
     return ram_state.xbzrle_overflows;
 }
 
+static ram_addr_t ram_save_remaining(void)
+{
+    return ram_state.migration_dirty_pages;
+}
+
 static QemuMutex migration_bitmap_mutex;
-static uint64_t migration_dirty_pages;
 
 /* used by the search for pages to send */
 struct PageSearchStatus {
@@ -581,7 +587,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
     return (next - base) << TARGET_PAGE_BITS;
 }
 
-static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
+static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
 {
     bool ret;
     int nr = addr >> TARGET_PAGE_BITS;
@@ -590,7 +596,7 @@ static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
     ret = test_and_clear_bit(nr, bitmap);
 
     if (ret) {
-        migration_dirty_pages--;
+        rs->migration_dirty_pages--;
     }
     return ret;
 }
@@ -600,8 +606,9 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
 {
     unsigned long *bitmap;
     bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
-    migration_dirty_pages += cpu_physical_memory_sync_dirty_bitmap(bitmap,
-                             start, length, &rs->num_dirty_pages_period);
+    rs->migration_dirty_pages +=
+        cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length,
+                                              &rs->num_dirty_pages_period);
 }
 
 static void migration_bitmap_sync_init(RAMState *rs)
@@ -1302,7 +1309,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     int res = 0;
 
     /* Check the pages is dirty and if it is send it */
-    if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
+    if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
             res = ram_save_compressed_page(rs, ms, f, pss,
@@ -1452,11 +1459,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     }
 }
 
-static ram_addr_t ram_save_remaining(void)
-{
-    return migration_dirty_pages;
-}
-
 uint64_t ram_bytes_remaining(void)
 {
     return ram_save_remaining() * TARGET_PAGE_SIZE;
@@ -1530,6 +1532,7 @@ static void ram_state_reset(RAMState *rs)
 
 void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
 {
+    RAMState *rs = &ram_state;
     /* called in qemu main thread, so there is
      * no writing race against this migration_bitmap
      */
@@ -1555,7 +1558,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
 
         atomic_rcu_set(&migration_bitmap_rcu, bitmap);
         qemu_mutex_unlock(&migration_bitmap_mutex);
-        migration_dirty_pages += new - old;
+        rs->migration_dirty_pages += new - old;
         call_rcu(old_bitmap, migration_bitmap_free, rcu);
     }
 }
@@ -1728,6 +1731,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                                           RAMBlock *block,
                                           PostcopyDiscardState *pds)
 {
+    RAMState *rs = &ram_state;
     unsigned long *bitmap;
     unsigned long *unsentmap;
     unsigned int host_ratio = block->page_size / TARGET_PAGE_SIZE;
@@ -1825,7 +1829,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                  * Remark them as dirty, updating the count for any pages
                  * that weren't previously dirty.
                  */
-                migration_dirty_pages += !test_and_set_bit(page, bitmap);
+                rs->migration_dirty_pages += !test_and_set_bit(page, bitmap);
             }
         }
 
@@ -2051,7 +2055,7 @@ static int ram_save_init_globals(RAMState *rs)
      * Count the total number of pages used by ram blocks not including any
      * gaps due to alignment or unplugs.
      */
-    migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
+    rs->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
 
     memory_global_dirty_log_start();
     migration_bitmap_sync(rs);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (21 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 22/51] ram: Move migration_dirty_pages to RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-29 17:14   ` Dr. David Alan Gilbert
  2017-03-30  6:25   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState Juan Quintela
                   ` (28 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

And then init only things that are not zero by default.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 25 +++----------------------
 1 file changed, 3 insertions(+), 22 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c6ba92c..a890179 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -611,15 +611,6 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
                                               &rs->num_dirty_pages_period);
 }
 
-static void migration_bitmap_sync_init(RAMState *rs)
-{
-    rs->start_time = 0;
-    rs->bytes_xfer_prev = 0;
-    rs->num_dirty_pages_period = 0;
-    rs->xbzrle_cache_miss_prev = 0;
-    rs->iterations_prev = 0;
-}
-
 /**
  * ram_pagesize_summary: calculate all the pagesizes of a VM
  *
@@ -1984,21 +1975,11 @@ err:
     return ret;
 }
 
-static int ram_save_init_globals(RAMState *rs)
+static int ram_state_init(RAMState *rs)
 {
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
-    rs->dirty_rate_high_cnt = 0;
-    rs->bitmap_sync_count = 0;
-    rs->zero_pages = 0;
-    rs->norm_pages = 0;
-    rs->iterations = 0;
-    rs->xbzrle_bytes = 0;
-    rs->xbzrle_pages = 0;
-    rs->xbzrle_cache_miss = 0;
-    rs->xbzrle_cache_miss_rate = 0;
-    rs->xbzrle_overflows = 0;
-    migration_bitmap_sync_init(rs);
+    memset(rs, 0, sizeof(*rs));
     qemu_mutex_init(&migration_bitmap_mutex);
 
     if (migrate_use_xbzrle()) {
@@ -2088,7 +2069,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
 
     /* migration has already setup the bitmap, reuse it. */
     if (!migration_in_colo_state()) {
-        if (ram_save_init_globals(rs) < 0) {
+        if (ram_state_init(rs) < 0) {
             return -1;
          }
     }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (22 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  6:25   ` Peter Xu
  2017-03-30  8:49   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 25/51] ram: Move migration_bitmap_rcu " Juan Quintela
                   ` (27 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index a890179..ae2b89f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -184,6 +184,8 @@ struct RAMState {
     uint64_t xbzrle_overflows;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
+    /* protects modification of the bitmap */
+    QemuMutex bitmap_mutex;
 };
 typedef struct RAMState RAMState;
 
@@ -229,8 +231,6 @@ static ram_addr_t ram_save_remaining(void)
     return ram_state.migration_dirty_pages;
 }
 
-static QemuMutex migration_bitmap_mutex;
-
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -652,13 +652,13 @@ static void migration_bitmap_sync(RAMState *rs)
     trace_migration_bitmap_sync_start();
     memory_global_dirty_log_sync();
 
-    qemu_mutex_lock(&migration_bitmap_mutex);
+    qemu_mutex_lock(&rs->bitmap_mutex);
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         migration_bitmap_sync_range(rs, block->offset, block->used_length);
     }
     rcu_read_unlock();
-    qemu_mutex_unlock(&migration_bitmap_mutex);
+    qemu_mutex_unlock(&rs->bitmap_mutex);
 
     trace_migration_bitmap_sync_end(rs->num_dirty_pages_period);
 
@@ -1524,6 +1524,7 @@ static void ram_state_reset(RAMState *rs)
 void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
 {
     RAMState *rs = &ram_state;
+
     /* called in qemu main thread, so there is
      * no writing race against this migration_bitmap
      */
@@ -1537,7 +1538,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
          * it is safe to migration if migration_bitmap is cleared bit
          * at the same time.
          */
-        qemu_mutex_lock(&migration_bitmap_mutex);
+        qemu_mutex_lock(&rs->bitmap_mutex);
         bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
         bitmap_set(bitmap->bmap, old, new - old);
 
@@ -1548,7 +1549,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
         bitmap->unsentmap = NULL;
 
         atomic_rcu_set(&migration_bitmap_rcu, bitmap);
-        qemu_mutex_unlock(&migration_bitmap_mutex);
+        qemu_mutex_unlock(&rs->bitmap_mutex);
         rs->migration_dirty_pages += new - old;
         call_rcu(old_bitmap, migration_bitmap_free, rcu);
     }
@@ -1980,7 +1981,7 @@ static int ram_state_init(RAMState *rs)
     int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
 
     memset(rs, 0, sizeof(*rs));
-    qemu_mutex_init(&migration_bitmap_mutex);
+    qemu_mutex_init(&rs->bitmap_mutex);
 
     if (migrate_use_xbzrle()) {
         XBZRLE_cache_lock();
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 25/51] ram: Move migration_bitmap_rcu into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (23 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  6:25   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred " Juan Quintela
                   ` (26 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Once there, rename the type to be shorter.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/ram.c | 86 +++++++++++++++++++++++++++++++--------------------------
 1 file changed, 47 insertions(+), 39 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index ae2b89f..090084b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -138,6 +138,19 @@ out:
     return ret;
 }
 
+struct RAMBitmap {
+    struct rcu_head rcu;
+    /* Main migration bitmap */
+    unsigned long *bmap;
+    /* bitmap of pages that haven't been sent even once
+     * only maintained and used in postcopy at the moment
+     * where it's used to send the dirtymap at the start
+     * of the postcopy phase
+     */
+    unsigned long *unsentmap;
+};
+typedef struct RAMBitmap RAMBitmap;
+
 /* State of RAM for migration */
 struct RAMState {
     /* Last block that we have visited searching for dirty pages */
@@ -186,6 +199,8 @@ struct RAMState {
     uint64_t migration_dirty_pages;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
+    /* Ram Bitmap protected by RCU */
+    RAMBitmap *ram_bitmap;
 };
 typedef struct RAMState RAMState;
 
@@ -242,18 +257,6 @@ struct PageSearchStatus {
 };
 typedef struct PageSearchStatus PageSearchStatus;
 
-static struct BitmapRcu {
-    struct rcu_head rcu;
-    /* Main migration bitmap */
-    unsigned long *bmap;
-    /* bitmap of pages that haven't been sent even once
-     * only maintained and used in postcopy at the moment
-     * where it's used to send the dirtymap at the start
-     * of the postcopy phase
-     */
-    unsigned long *unsentmap;
-} *migration_bitmap_rcu;
-
 struct CompressParam {
     bool done;
     bool quit;
@@ -576,7 +579,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
 
     unsigned long next;
 
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     if (rs->ram_bulk_stage && nr > base) {
         next = nr + 1;
     } else {
@@ -591,7 +594,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
 {
     bool ret;
     int nr = addr >> TARGET_PAGE_BITS;
-    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
 
     ret = test_and_clear_bit(nr, bitmap);
 
@@ -605,7 +608,7 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
                                         ram_addr_t length)
 {
     unsigned long *bitmap;
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     rs->migration_dirty_pages +=
         cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length,
                                               &rs->num_dirty_pages_period);
@@ -1148,14 +1151,14 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
          */
         if (block) {
             unsigned long *bitmap;
-            bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+            bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
             dirty = test_bit(*ram_addr_abs >> TARGET_PAGE_BITS, bitmap);
             if (!dirty) {
                 trace_get_queued_page_not_dirty(
                     block->idstr, (uint64_t)offset,
                     (uint64_t)*ram_addr_abs,
                     test_bit(*ram_addr_abs >> TARGET_PAGE_BITS,
-                         atomic_rcu_read(&migration_bitmap_rcu)->unsentmap));
+                         atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
             } else {
                 trace_get_queued_page(block->idstr,
                                       (uint64_t)offset,
@@ -1314,7 +1317,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         if (res < 0) {
             return res;
         }
-        unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+        unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
         if (unsentmap) {
             clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
         }
@@ -1478,7 +1481,7 @@ void free_xbzrle_decoded_buf(void)
     xbzrle_decoded_buf = NULL;
 }
 
-static void migration_bitmap_free(struct BitmapRcu *bmap)
+static void migration_bitmap_free(struct RAMBitmap *bmap)
 {
     g_free(bmap->bmap);
     g_free(bmap->unsentmap);
@@ -1487,11 +1490,13 @@ static void migration_bitmap_free(struct BitmapRcu *bmap)
 
 static void ram_migration_cleanup(void *opaque)
 {
+    RAMState *rs = opaque;
+
     /* caller have hold iothread lock or is in a bh, so there is
      * no writing race against this migration_bitmap
      */
-    struct BitmapRcu *bitmap = migration_bitmap_rcu;
-    atomic_rcu_set(&migration_bitmap_rcu, NULL);
+    struct RAMBitmap *bitmap = rs->ram_bitmap;
+    atomic_rcu_set(&rs->ram_bitmap, NULL);
     if (bitmap) {
         memory_global_dirty_log_stop();
         call_rcu(bitmap, migration_bitmap_free, rcu);
@@ -1528,9 +1533,9 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
     /* called in qemu main thread, so there is
      * no writing race against this migration_bitmap
      */
-    if (migration_bitmap_rcu) {
-        struct BitmapRcu *old_bitmap = migration_bitmap_rcu, *bitmap;
-        bitmap = g_new(struct BitmapRcu, 1);
+    if (rs->ram_bitmap) {
+        struct RAMBitmap *old_bitmap = rs->ram_bitmap, *bitmap;
+        bitmap = g_new(struct RAMBitmap, 1);
         bitmap->bmap = bitmap_new(new);
 
         /* prevent migration_bitmap content from being set bit
@@ -1548,7 +1553,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
          */
         bitmap->unsentmap = NULL;
 
-        atomic_rcu_set(&migration_bitmap_rcu, bitmap);
+        atomic_rcu_set(&rs->ram_bitmap, bitmap);
         qemu_mutex_unlock(&rs->bitmap_mutex);
         rs->migration_dirty_pages += new - old;
         call_rcu(old_bitmap, migration_bitmap_free, rcu);
@@ -1563,13 +1568,13 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
 void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
 {
     int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
-
+    RAMState *rs = &ram_state;
     int64_t cur;
     int64_t linelen = 128;
     char linebuf[129];
 
     if (!todump) {
-        todump = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+        todump = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     }
 
     for (cur = 0; cur < ram_pages; cur += linelen) {
@@ -1598,8 +1603,9 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
 
 void ram_postcopy_migrated_memory_release(MigrationState *ms)
 {
+    RAMState *rs = &ram_state;
     struct RAMBlock *block;
-    unsigned long *bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         unsigned long first = block->offset >> TARGET_PAGE_BITS;
@@ -1634,11 +1640,12 @@ static int postcopy_send_discard_bm_ram(MigrationState *ms,
                                         unsigned long start,
                                         unsigned long length)
 {
+    RAMState *rs = &ram_state;
     unsigned long end = start + length; /* one after the end */
     unsigned long current;
     unsigned long *unsentmap;
 
-    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+    unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
     for (current = start; current < end; ) {
         unsigned long one = find_next_bit(unsentmap, end, current);
 
@@ -1737,8 +1744,8 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
         return;
     }
 
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
-    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
+    unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
 
     if (unsent_pass) {
         /* Find a sent page */
@@ -1896,15 +1903,16 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
  */
 int ram_postcopy_send_discard_bitmap(MigrationState *ms)
 {
+    RAMState *rs = &ram_state;
     int ret;
     unsigned long *bitmap, *unsentmap;
 
     rcu_read_lock();
 
     /* This should be our last sync, the src is now paused */
-    migration_bitmap_sync(&ram_state);
+    migration_bitmap_sync(rs);
 
-    unsentmap = atomic_rcu_read(&migration_bitmap_rcu)->unsentmap;
+    unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
     if (!unsentmap) {
         /* We don't have a safe way to resize the sentmap, so
          * if the bitmap was resized it will be NULL at this
@@ -1925,7 +1933,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
     /*
      * Update the unsentmap to be unsentmap = unsentmap | dirty
      */
-    bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
+    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     bitmap_or(unsentmap, unsentmap, bitmap,
                last_ram_offset() >> TARGET_PAGE_BITS);
 
@@ -2020,16 +2028,16 @@ static int ram_state_init(RAMState *rs)
     bytes_transferred = 0;
     ram_state_reset(rs);
 
-    migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
+    rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
         ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
-        migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
-        bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
+        rs->ram_bitmap->bmap = bitmap_new(ram_bitmap_pages);
+        bitmap_set(rs->ram_bitmap->bmap, 0, ram_bitmap_pages);
 
         if (migrate_postcopy_ram()) {
-            migration_bitmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
-            bitmap_set(migration_bitmap_rcu->unsentmap, 0, ram_bitmap_pages);
+            rs->ram_bitmap->unsentmap = bitmap_new(ram_bitmap_pages);
+            bitmap_set(rs->ram_bitmap->unsentmap, 0, ram_bitmap_pages);
         }
     }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (24 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 25/51] ram: Move migration_bitmap_rcu " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-29 17:38   ` Dr. David Alan Gilbert
  2017-03-30  6:26   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 27/51] ram: Use the RAMState bytes_transferred parameter Juan Quintela
                   ` (25 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 090084b..872ea23 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -197,6 +197,8 @@ struct RAMState {
     uint64_t xbzrle_overflows;
     /* number of dirty bits in the bitmap */
     uint64_t migration_dirty_pages;
+    /* total number of bytes transferred */
+    uint64_t bytes_transferred;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -246,6 +248,11 @@ static ram_addr_t ram_save_remaining(void)
     return ram_state.migration_dirty_pages;
 }
 
+uint64_t ram_bytes_transferred(void)
+{
+    return ram_state.bytes_transferred;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -870,9 +877,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
     return bytes_sent;
 }
 
-static uint64_t bytes_transferred;
-
-static void flush_compressed_data(QEMUFile *f)
+static void flush_compressed_data(RAMState *rs, QEMUFile *f)
 {
     int idx, len, thread_count;
 
@@ -893,7 +898,7 @@ static void flush_compressed_data(QEMUFile *f)
         qemu_mutex_lock(&comp_param[idx].mutex);
         if (!comp_param[idx].quit) {
             len = qemu_put_qemu_file(f, comp_param[idx].file);
-            bytes_transferred += len;
+            rs->bytes_transferred += len;
         }
         qemu_mutex_unlock(&comp_param[idx].mutex);
     }
@@ -989,7 +994,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          * is used to avoid resending the block name.
          */
         if (block != rs->last_sent_block) {
-            flush_compressed_data(f);
+            flush_compressed_data(rs, f);
             pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
@@ -1065,7 +1070,7 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
                 /* If xbzrle is on, stop using the data compression at this
                  * point. In theory, xbzrle can do better than compression.
                  */
-                flush_compressed_data(f);
+                flush_compressed_data(rs, f);
                 compression_switch = false;
             }
         }
@@ -1448,7 +1453,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
         rs->zero_pages += pages;
     } else {
         rs->norm_pages += pages;
-        bytes_transferred += size;
+        rs->bytes_transferred += size;
         qemu_update_position(f, size);
     }
 }
@@ -1458,11 +1463,6 @@ uint64_t ram_bytes_remaining(void)
     return ram_save_remaining() * TARGET_PAGE_SIZE;
 }
 
-uint64_t ram_bytes_transferred(void)
-{
-    return bytes_transferred;
-}
-
 uint64_t ram_bytes_total(void)
 {
     RAMBlock *block;
@@ -2025,7 +2025,6 @@ static int ram_state_init(RAMState *rs)
 
     qemu_mutex_lock_ramlist();
     rcu_read_lock();
-    bytes_transferred = 0;
     ram_state_reset(rs);
 
     rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
@@ -2137,7 +2136,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, false, &rs->bytes_transferred);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2159,7 +2158,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(f);
+    flush_compressed_data(rs, f);
     rcu_read_unlock();
 
     /*
@@ -2169,7 +2168,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     ram_control_after_iterate(f, RAM_CONTROL_ROUND);
 
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
-    bytes_transferred += 8;
+    rs->bytes_transferred += 8;
 
     ret = qemu_file_get_error(f);
     if (ret < 0) {
@@ -2208,14 +2207,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         int pages;
 
         pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
-                                        &bytes_transferred);
+                                        &rs->bytes_transferred);
         /* no more blocks to sent */
         if (pages == 0) {
             break;
         }
     }
 
-    flush_compressed_data(f);
+    flush_compressed_data(rs, f);
     ram_control_after_iterate(f, RAM_CONTROL_FINISH);
 
     rcu_read_unlock();
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 27/51] ram: Use the RAMState bytes_transferred parameter
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (25 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  6:27   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining Juan Quintela
                   ` (24 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

In several places it was passed by reference; just use it from RAMState
directly.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 75 +++++++++++++++++++++------------------------------------
 1 file changed, 27 insertions(+), 48 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 872ea23..3ae00e2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -494,12 +494,10 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
                             ram_addr_t current_addr, RAMBlock *block,
-                            ram_addr_t offset, bool last_stage,
-                            uint64_t *bytes_transferred)
+                            ram_addr_t offset, bool last_stage)
 {
     int encoded_len = 0, bytes_xbzrle;
     uint8_t *prev_cached_page;
@@ -555,7 +553,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
     bytes_xbzrle += encoded_len + 1 + 2;
     rs->xbzrle_pages++;
     rs->xbzrle_bytes += bytes_xbzrle;
-    *bytes_transferred += bytes_xbzrle;
+    rs->bytes_transferred += bytes_xbzrle;
 
     return 1;
 }
@@ -727,20 +725,18 @@ static void migration_bitmap_sync(RAMState *rs)
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @p: pointer to the page
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
-                          ram_addr_t offset,
-                          uint8_t *p, uint64_t *bytes_transferred)
+                          ram_addr_t offset, uint8_t *p)
 {
     int pages = -1;
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         rs->zero_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
+        rs->bytes_transferred +=
+            save_page_header(f, block, offset | RAM_SAVE_FLAG_COMPRESS);
         qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        rs->bytes_transferred += 1;
         pages = 1;
     }
 
@@ -771,11 +767,9 @@ static void ram_release_pages(MigrationState *ms, const char *rbname,
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
-                         PageSearchStatus *pss, bool last_stage,
-                         uint64_t *bytes_transferred)
+                         PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
     uint64_t bytes_xmit;
@@ -793,7 +787,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     ret = ram_control_save_page(f, block->offset,
                            offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
-        *bytes_transferred += bytes_xmit;
+        rs->bytes_transferred += bytes_xmit;
         pages = 1;
     }
 
@@ -813,7 +807,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             }
         }
     } else {
-        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
+        pages = save_zero_page(rs, f, block, offset, p);
         if (pages > 0) {
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
@@ -823,7 +817,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
             pages = save_xbzrle_page(rs, f, &p, current_addr, block,
-                                     offset, last_stage, bytes_transferred);
+                                     offset, last_stage);
             if (!last_stage) {
                 /* Can't send this cached data async, since the cache page
                  * might get updated before it gets to the wire
@@ -835,7 +829,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
 
     /* XBZRLE overflow or normal page */
     if (pages == -1) {
-        *bytes_transferred += save_page_header(f, block,
+        rs->bytes_transferred += save_page_header(f, block,
                                                offset | RAM_SAVE_FLAG_PAGE);
         if (send_async) {
             qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE,
@@ -844,7 +838,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
         } else {
             qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
         }
-        *bytes_transferred += TARGET_PAGE_SIZE;
+        rs->bytes_transferred += TARGET_PAGE_SIZE;
         pages = 1;
         rs->norm_pages++;
     }
@@ -912,8 +906,7 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
 }
 
 static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
-                                           RAMBlock *block, ram_addr_t offset,
-                                           uint64_t *bytes_transferred)
+                                           RAMBlock *block, ram_addr_t offset)
 {
     int idx, thread_count, bytes_xmit = -1, pages = -1;
 
@@ -930,7 +923,7 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
                 qemu_mutex_unlock(&comp_param[idx].mutex);
                 pages = 1;
                 rs->norm_pages++;
-                *bytes_transferred += bytes_xmit;
+                rs->bytes_transferred += bytes_xmit;
                 break;
             }
         }
@@ -956,12 +949,10 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  */
 static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                                     QEMUFile *f,
-                                    PageSearchStatus *pss, bool last_stage,
-                                    uint64_t *bytes_transferred)
+                                    PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
     uint64_t bytes_xmit = 0;
@@ -975,7 +966,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
     ret = ram_control_save_page(f, block->offset,
                                 offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
-        *bytes_transferred += bytes_xmit;
+        rs->bytes_transferred += bytes_xmit;
         pages = 1;
     }
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
@@ -995,7 +986,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          */
         if (block != rs->last_sent_block) {
             flush_compressed_data(rs, f);
-            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
                 bytes_xmit = save_page_header(f, block, offset |
@@ -1003,7 +994,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                 blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
                                                  migrate_compress_level());
                 if (blen > 0) {
-                    *bytes_transferred += bytes_xmit + blen;
+                    rs->bytes_transferred += bytes_xmit + blen;
                     rs->norm_pages++;
                     pages = 1;
                 } else {
@@ -1016,10 +1007,9 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             }
         } else {
             offset |= RAM_SAVE_FLAG_CONTINUE;
-            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
+            pages = save_zero_page(rs, f, block, offset, p);
             if (pages == -1) {
-                pages = compress_page_with_multi_thread(rs, f, block, offset,
-                                                        bytes_transferred);
+                pages = compress_page_with_multi_thread(rs, f, block, offset);
             } else {
                 ram_release_pages(ms, block->idstr, pss->offset, pages);
             }
@@ -1296,13 +1286,11 @@ err:
  * @f: QEMUFile where to send the data
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
  */
 static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                                 PageSearchStatus *pss,
                                 bool last_stage,
-                                uint64_t *bytes_transferred,
                                 ram_addr_t dirty_ram_abs)
 {
     int res = 0;
@@ -1311,12 +1299,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
-            res = ram_save_compressed_page(rs, ms, f, pss,
-                                           last_stage,
-                                           bytes_transferred);
+            res = ram_save_compressed_page(rs, ms, f, pss, last_stage);
         } else {
-            res = ram_save_page(rs, ms, f, pss, last_stage,
-                                bytes_transferred);
+            res = ram_save_page(rs, ms, f, pss, last_stage);
         }
 
         if (res < 0) {
@@ -1354,13 +1339,11 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
  * @f: QEMUFile where to send the data
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  */
 static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
                               PageSearchStatus *pss,
                               bool last_stage,
-                              uint64_t *bytes_transferred,
                               ram_addr_t dirty_ram_abs)
 {
     int tmppages, pages = 0;
@@ -1368,7 +1351,7 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
 
     do {
         tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
-                                        bytes_transferred, dirty_ram_abs);
+                                        dirty_ram_abs);
         if (tmppages < 0) {
             return tmppages;
         }
@@ -1393,14 +1376,12 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
  * @rs: current RAM state
  * @f: QEMUFile where to send the data
  * @last_stage: if we are at the completion stage
- * @bytes_transferred: increase it with the number of transferred bytes
  *
  * On systems where host-page-size > target-page-size it will send all the
  * pages in a host page that are dirty.
  */
 
-static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
-                                   uint64_t *bytes_transferred)
+static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
 {
     PageSearchStatus pss;
     MigrationState *ms = migrate_get_current();
@@ -1432,8 +1413,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
         }
 
         if (found) {
-            pages = ram_save_host_page(rs, ms, f, &pss,
-                                       last_stage, bytes_transferred,
+            pages = ram_save_host_page(rs, ms, f, &pss, last_stage,
                                        dirty_ram_abs);
         }
     } while (!pages && again);
@@ -2136,7 +2116,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, false, &rs->bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, false);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2206,8 +2186,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     while (true) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
-                                        &rs->bytes_transferred);
+        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state());
         /* no more blocks to sent */
         if (pages == 0) {
             break;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (26 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 27/51] ram: Use the RAMState bytes_transferred parameter Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 15:34   ` Dr. David Alan Gilbert
  2017-03-30  6:24   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState Juan Quintela
                   ` (23 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Just unfold it, and move ram_bytes_remaining() to sit with the rest of the
exported functions.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 3ae00e2..dd5a453 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -243,16 +243,16 @@ uint64_t xbzrle_mig_pages_overflow(void)
     return ram_state.xbzrle_overflows;
 }
 
-static ram_addr_t ram_save_remaining(void)
-{
-    return ram_state.migration_dirty_pages;
-}
-
 uint64_t ram_bytes_transferred(void)
 {
     return ram_state.bytes_transferred;
 }
 
+uint64_t ram_bytes_remaining(void)
+{
+    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -1438,11 +1438,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
     }
 }
 
-uint64_t ram_bytes_remaining(void)
-{
-    return ram_save_remaining() * TARGET_PAGE_SIZE;
-}
-
 uint64_t ram_bytes_total(void)
 {
     RAMBlock *block;
@@ -2210,7 +2205,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
     RAMState *rs = opaque;
     uint64_t remaining_size;
 
-    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
+    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
 
     if (!migration_in_postcopy(migrate_get_current()) &&
         remaining_size < max_size) {
@@ -2219,7 +2214,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         migration_bitmap_sync(rs);
         rcu_read_unlock();
         qemu_mutex_unlock_iothread();
-        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
+        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
     }
 
     /* We can do postcopy, and all the data is postcopiable */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (27 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  6:49   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* " Juan Quintela
                   ` (22 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

It was in MigrationState even though it is only used inside ram.c for
postcopy.  The problem is that we need to access it from places that
cannot be passed a RAMState directly.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 2 --
 migration/migration.c         | 1 -
 migration/ram.c               | 7 +++++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 84cef4b..e032fb0 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -189,8 +189,6 @@ struct MigrationState
     /* Queue of outstanding page requests from the destination */
     QemuMutex src_page_req_mutex;
     QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
-    /* The RAMBlock used in the last src_page_request */
-    RAMBlock *last_req_rb;
     /* The semaphore is used to notify COLO thread that failover is finished */
     QemuSemaphore colo_exit_sem;
 
diff --git a/migration/migration.c b/migration/migration.c
index e532430..b220941 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1118,7 +1118,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->postcopy_after_devices = false;
     s->postcopy_requests = 0;
     s->migration_thread_running = false;
-    s->last_req_rb = NULL;
     error_free(s->error);
     s->error = NULL;
 
diff --git a/migration/ram.c b/migration/ram.c
index dd5a453..325a0f3 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -203,6 +203,8 @@ struct RAMState {
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
     RAMBitmap *ram_bitmap;
+    /* The RAMBlock used in the last src_page_request */
+    RAMBlock *last_req_rb;
 };
 typedef struct RAMState RAMState;
 
@@ -1224,12 +1226,13 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
                          ram_addr_t start, ram_addr_t len)
 {
     RAMBlock *ramblock;
+    RAMState *rs = &ram_state;
 
     ms->postcopy_requests++;
     rcu_read_lock();
     if (!rbname) {
         /* Reuse last RAMBlock */
-        ramblock = ms->last_req_rb;
+        ramblock = rs->last_req_rb;
 
         if (!ramblock) {
             /*
@@ -1247,7 +1250,7 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
             error_report("ram_save_queue_pages no block '%s'", rbname);
             goto err;
         }
-        ms->last_req_rb = ramblock;
+        rs->last_req_rb = ramblock;
     }
     trace_ram_save_queue_pages(ramblock->idstr, start, len);
     if (start+len > ramblock->used_length) {
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (28 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  6:56   ` Peter Xu
  2017-03-31 16:52   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 31/51] ram: Create ram_dirty_sync_count() Juan Quintela
                   ` (21 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

These are the last postcopy fields still in MigrationState.  Once they
are moved, move MigrationSrcPageRequest to ram.c as well and remove the
MigrationState parameters where appropriate.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 17 +-----------
 migration/migration.c         |  5 +---
 migration/ram.c               | 62 ++++++++++++++++++++++++++-----------------
 3 files changed, 40 insertions(+), 44 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index e032fb0..8a6caa3 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -128,18 +128,6 @@ struct MigrationIncomingState {
 MigrationIncomingState *migration_incoming_get_current(void);
 void migration_incoming_state_destroy(void);
 
-/*
- * An outstanding page request, on the source, having been received
- * and queued
- */
-struct MigrationSrcPageRequest {
-    RAMBlock *rb;
-    hwaddr    offset;
-    hwaddr    len;
-
-    QSIMPLEQ_ENTRY(MigrationSrcPageRequest) next_req;
-};
-
 struct MigrationState
 {
     size_t bytes_xfer;
@@ -186,9 +174,6 @@ struct MigrationState
     /* Flag set once the migration thread called bdrv_inactivate_all */
     bool block_inactive;
 
-    /* Queue of outstanding page requests from the destination */
-    QemuMutex src_page_req_mutex;
-    QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
     /* The semaphore is used to notify COLO thread that failover is finished */
     QemuSemaphore colo_exit_sem;
 
@@ -371,7 +356,7 @@ void savevm_skip_configuration(void);
 int global_state_store(void);
 void global_state_store_running(void);
 
-void flush_page_queue(MigrationState *ms);
+void flush_page_queue(void);
 int ram_save_queue_pages(MigrationState *ms, const char *rbname,
                          ram_addr_t start, ram_addr_t len);
 uint64_t ram_pagesize_summary(void);
diff --git a/migration/migration.c b/migration/migration.c
index b220941..58c1587 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -109,7 +109,6 @@ MigrationState *migrate_get_current(void)
     };
 
     if (!once) {
-        qemu_mutex_init(&current_migration.src_page_req_mutex);
         current_migration.parameters.tls_creds = g_strdup("");
         current_migration.parameters.tls_hostname = g_strdup("");
         once = true;
@@ -949,7 +948,7 @@ static void migrate_fd_cleanup(void *opaque)
     qemu_bh_delete(s->cleanup_bh);
     s->cleanup_bh = NULL;
 
-    flush_page_queue(s);
+    flush_page_queue();
 
     if (s->to_dst_file) {
         trace_migrate_fd_cleanup();
@@ -1123,8 +1122,6 @@ MigrationState *migrate_init(const MigrationParams *params)
 
     migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
 
-    QSIMPLEQ_INIT(&s->src_page_requests);
-
     s->total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     return s;
 }
diff --git a/migration/ram.c b/migration/ram.c
index 325a0f3..601370c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -151,6 +151,18 @@ struct RAMBitmap {
 };
 typedef struct RAMBitmap RAMBitmap;
 
+/*
+ * An outstanding page request, on the source, having been received
+ * and queued
+ */
+struct RAMSrcPageRequest {
+    RAMBlock *rb;
+    hwaddr    offset;
+    hwaddr    len;
+
+    QSIMPLEQ_ENTRY(RAMSrcPageRequest) next_req;
+};
+
 /* State of RAM for migration */
 struct RAMState {
     /* Last block that we have visited searching for dirty pages */
@@ -205,6 +217,9 @@ struct RAMState {
     RAMBitmap *ram_bitmap;
     /* The RAMBlock used in the last src_page_request */
     RAMBlock *last_req_rb;
+    /* Queue of outstanding page requests from the destination */
+    QemuMutex src_page_req_mutex;
+    QSIMPLEQ_HEAD(src_page_requests, RAMSrcPageRequest) src_page_requests;
 };
 typedef struct RAMState RAMState;
 
@@ -1084,20 +1099,20 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
  *
  * Returns the block of the page (or NULL if none available)
  *
- * @ms: current migration state
+ * @rs: current RAM state
  * @offset: used to return the offset within the RAMBlock
  * @ram_addr_abs: pointer into which to store the address of the dirty page
  *                within the global ram_addr space
  */
-static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
+static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
                               ram_addr_t *ram_addr_abs)
 {
     RAMBlock *block = NULL;
 
-    qemu_mutex_lock(&ms->src_page_req_mutex);
-    if (!QSIMPLEQ_EMPTY(&ms->src_page_requests)) {
-        struct MigrationSrcPageRequest *entry =
-                                QSIMPLEQ_FIRST(&ms->src_page_requests);
+    qemu_mutex_lock(&rs->src_page_req_mutex);
+    if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
+        struct RAMSrcPageRequest *entry =
+                                QSIMPLEQ_FIRST(&rs->src_page_requests);
         block = entry->rb;
         *offset = entry->offset;
         *ram_addr_abs = (entry->offset + entry->rb->offset) &
@@ -1108,11 +1123,11 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
             entry->offset += TARGET_PAGE_SIZE;
         } else {
             memory_region_unref(block->mr);
-            QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
+            QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
             g_free(entry);
         }
     }
-    qemu_mutex_unlock(&ms->src_page_req_mutex);
+    qemu_mutex_unlock(&rs->src_page_req_mutex);
 
     return block;
 }
@@ -1125,13 +1140,11 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
  * Returns if a queued page is found
  *
  * @rs: current RAM state
- * @ms: current migration state
  * @pss: data about the state of the current dirty page scan
  * @ram_addr_abs: pointer into which to store the address of the dirty page
  *                within the global ram_addr space
  */
-static bool get_queued_page(RAMState *rs, MigrationState *ms,
-                            PageSearchStatus *pss,
+static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
                             ram_addr_t *ram_addr_abs)
 {
     RAMBlock  *block;
@@ -1139,7 +1152,7 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
     bool dirty;
 
     do {
-        block = unqueue_page(ms, &offset, ram_addr_abs);
+        block = unqueue_page(rs, &offset, ram_addr_abs);
         /*
          * We're sending this page, and since it's postcopy nothing else
          * will dirty it, and we must make sure it doesn't get sent again
@@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
  *
  * It should be empty at the end anyway, but in error cases there may
  * xbe some left.
- *
- * @ms: current migration state
  */
-void flush_page_queue(MigrationState *ms)
+void flush_page_queue(void)
 {
-    struct MigrationSrcPageRequest *mspr, *next_mspr;
+    struct RAMSrcPageRequest *mspr, *next_mspr;
+    RAMState *rs = &ram_state;
     /* This queue generally should be empty - but in the case of a failed
      * migration might have some droppings in.
      */
     rcu_read_lock();
-    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
+    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
         memory_region_unref(mspr->rb->mr);
-        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
+        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
         g_free(mspr);
     }
     rcu_read_unlock();
@@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
         goto err;
     }
 
-    struct MigrationSrcPageRequest *new_entry =
-        g_malloc0(sizeof(struct MigrationSrcPageRequest));
+    struct RAMSrcPageRequest *new_entry =
+        g_malloc0(sizeof(struct RAMSrcPageRequest));
     new_entry->rb = ramblock;
     new_entry->offset = start;
     new_entry->len = len;
 
     memory_region_ref(ramblock->mr);
-    qemu_mutex_lock(&ms->src_page_req_mutex);
-    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
-    qemu_mutex_unlock(&ms->src_page_req_mutex);
+    qemu_mutex_lock(&rs->src_page_req_mutex);
+    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
+    qemu_mutex_unlock(&rs->src_page_req_mutex);
     rcu_read_unlock();
 
     return 0;
@@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
 
     do {
         again = true;
-        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
+        found = get_queued_page(rs, &pss, &dirty_ram_abs);
 
         if (!found) {
             /* priority queue empty, so just search for something dirty */
@@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
 
     memset(rs, 0, sizeof(*rs));
     qemu_mutex_init(&rs->bitmap_mutex);
+    qemu_mutex_init(&rs->src_page_req_mutex);
+    QSIMPLEQ_INIT(&rs->src_page_requests);
 
     if (migrate_use_xbzrle()) {
         XBZRLE_cache_lock();
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 31/51] ram: Create ram_dirty_sync_count()
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (29 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* " Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-29  9:06   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 32/51] ram: Remove dirty_bytes_rate Juan Quintela
                   ` (20 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

This is a ram field that was inside MigrationState.  Move it to
RAMState and make it consistent with the other ram stats.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 2 +-
 migration/migration.c         | 3 +--
 migration/ram.c               | 6 +++++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 8a6caa3..768fa72 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -159,7 +159,6 @@ struct MigrationState
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
-    int64_t dirty_sync_count;
     /* Count of requests incoming from destination */
     int64_t postcopy_requests;
 
@@ -255,6 +254,7 @@ void migrate_decompress_threads_join(void);
 uint64_t ram_bytes_remaining(void);
 uint64_t ram_bytes_transferred(void);
 uint64_t ram_bytes_total(void);
+uint64_t ram_dirty_sync_count(void);
 void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
diff --git a/migration/migration.c b/migration/migration.c
index 58c1587..983c3d9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -648,7 +648,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->normal_bytes = norm_mig_pages_transferred() *
         (1ul << qemu_target_page_bits());
     info->ram->mbps = s->mbps;
-    info->ram->dirty_sync_count = s->dirty_sync_count;
+    info->ram->dirty_sync_count = ram_dirty_sync_count();
     info->ram->postcopy_requests = s->postcopy_requests;
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
@@ -1112,7 +1112,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->dirty_pages_rate = 0;
     s->dirty_bytes_rate = 0;
     s->setup_time = 0;
-    s->dirty_sync_count = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
     s->postcopy_requests = 0;
diff --git a/migration/ram.c b/migration/ram.c
index 601370c..98095ea 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -270,6 +270,11 @@ uint64_t ram_bytes_remaining(void)
     return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
 }
 
+uint64_t ram_dirty_sync_count(void)
+{
+    return ram_state.bitmap_sync_count;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -726,7 +731,6 @@ static void migration_bitmap_sync(RAMState *rs)
         rs->start_time = end_time;
         rs->num_dirty_pages_period = 0;
     }
-    s->dirty_sync_count = rs->bitmap_sync_count;
     if (migrate_use_events()) {
         qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
     }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 32/51] ram: Remove dirty_bytes_rate
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (30 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 31/51] ram: Create ram_dirty_sync_count() Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  7:00   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 33/51] ram: Move dirty_pages_rate to RAMState Juan Quintela
                   ` (19 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

It can be recalculated from dirty_pages_rate.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 1 -
 migration/migration.c         | 6 +++---
 migration/ram.c               | 1 -
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 768fa72..0d5d5fc 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -155,7 +155,6 @@ struct MigrationState
     int64_t downtime;
     int64_t expected_downtime;
     int64_t dirty_pages_rate;
-    int64_t dirty_bytes_rate;
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
diff --git a/migration/migration.c b/migration/migration.c
index 983c3d9..4af934b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1110,7 +1110,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->downtime = 0;
     s->expected_downtime = 0;
     s->dirty_pages_rate = 0;
-    s->dirty_bytes_rate = 0;
     s->setup_time = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
@@ -2000,8 +1999,9 @@ static void *migration_thread(void *opaque)
                                       bandwidth, max_size);
             /* if we haven't sent anything, we don't want to recalculate
                10000 is a small enough number for our purposes */
-            if (s->dirty_bytes_rate && transferred_bytes > 10000) {
-                s->expected_downtime = s->dirty_bytes_rate / bandwidth;
+            if (s->dirty_pages_rate && transferred_bytes > 10000) {
+                s->expected_downtime = s->dirty_pages_rate *
+                    (1ul << qemu_target_page_bits()) / bandwidth;
             }
 
             qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/ram.c b/migration/ram.c
index 98095ea..c66c308 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -727,7 +727,6 @@ static void migration_bitmap_sync(RAMState *rs)
         }
         s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
-        s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
         rs->start_time = end_time;
         rs->num_dirty_pages_period = 0;
     }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 33/51] ram: Move dirty_pages_rate to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (31 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 32/51] ram: Remove dirty_bytes_rate Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  7:04   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 34/51] ram: Move postcopy_requests into RAMState Juan Quintela
                   ` (18 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Treat it like the rest of the ram stats counters and export its value
the same way.  As an added bonus, MigrationState is no longer needed in
migration_bitmap_sync().

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  2 +-
 migration/migration.c         |  7 +++----
 migration/ram.c               | 12 +++++++++---
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 0d5d5fc..ffa7944 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -154,7 +154,6 @@ struct MigrationState
     int64_t total_time;
     int64_t downtime;
     int64_t expected_downtime;
-    int64_t dirty_pages_rate;
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
@@ -254,6 +253,7 @@ uint64_t ram_bytes_remaining(void);
 uint64_t ram_bytes_transferred(void);
 uint64_t ram_bytes_total(void);
 uint64_t ram_dirty_sync_count(void);
+uint64_t ram_dirty_pages_rate(void);
 void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
diff --git a/migration/migration.c b/migration/migration.c
index 4af934b..d2d9b91 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -653,7 +653,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
-        info->ram->dirty_pages_rate = s->dirty_pages_rate;
+        info->ram->dirty_pages_rate = ram_dirty_pages_rate();
     }
 }
 
@@ -1109,7 +1109,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->mbps = 0.0;
     s->downtime = 0;
     s->expected_downtime = 0;
-    s->dirty_pages_rate = 0;
     s->setup_time = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
@@ -1999,8 +1998,8 @@ static void *migration_thread(void *opaque)
                                       bandwidth, max_size);
             /* if we haven't sent anything, we don't want to recalculate
                10000 is a small enough number for our purposes */
-            if (s->dirty_pages_rate && transferred_bytes > 10000) {
-                s->expected_downtime = s->dirty_pages_rate *
+            if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
+                s->expected_downtime = ram_dirty_pages_rate() *
                     (1ul << qemu_target_page_bits()) / bandwidth;
             }
 
diff --git a/migration/ram.c b/migration/ram.c
index c66c308..6cb1435 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -211,6 +211,8 @@ struct RAMState {
     uint64_t migration_dirty_pages;
     /* total number of bytes transferred */
     uint64_t bytes_transferred;
+    /* number of dirtied pages in the last second */
+    uint64_t dirty_pages_rate;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -275,6 +277,11 @@ uint64_t ram_dirty_sync_count(void)
     return ram_state.bitmap_sync_count;
 }
 
+uint64_t ram_dirty_pages_rate(void)
+{
+    return ram_state.dirty_pages_rate;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -665,7 +672,6 @@ uint64_t ram_pagesize_summary(void)
 static void migration_bitmap_sync(RAMState *rs)
 {
     RAMBlock *block;
-    MigrationState *s = migrate_get_current();
     int64_t end_time;
     int64_t bytes_xfer_now;
 
@@ -704,7 +710,7 @@ static void migration_bitmap_sync(RAMState *rs)
                throttling */
             bytes_xfer_now = ram_bytes_transferred();
 
-            if (s->dirty_pages_rate &&
+            if (rs->dirty_pages_rate &&
                (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
                (rs->dirty_rate_high_cnt++ >= 2)) {
@@ -725,7 +731,7 @@ static void migration_bitmap_sync(RAMState *rs)
             rs->iterations_prev = rs->iterations;
             rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
         }
-        s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
+        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
             / (end_time - rs->start_time);
         rs->start_time = end_time;
         rs->num_dirty_pages_period = 0;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 34/51] ram: Move postcopy_requests into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (32 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 33/51] ram: Move dirty_pages_rate to RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-30  7:06   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 35/51] ram: Add QEMUFile to RAMState Juan Quintela
                   ` (17 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  6 ++----
 migration/migration.c         |  5 ++---
 migration/ram.c               | 13 +++++++++----
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index ffa7944..e88bbaf 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -157,8 +157,6 @@ struct MigrationState
     bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
     int64_t xbzrle_cache_size;
     int64_t setup_time;
-    /* Count of requests incoming from destination */
-    int64_t postcopy_requests;
 
     /* Flag set once the migration has been asked to enter postcopy */
     bool start_postcopy;
@@ -254,6 +252,7 @@ uint64_t ram_bytes_transferred(void);
 uint64_t ram_bytes_total(void);
 uint64_t ram_dirty_sync_count(void);
 uint64_t ram_dirty_pages_rate(void);
+uint64_t ram_postcopy_requests(void);
 void free_xbzrle_decoded_buf(void);
 
 void acct_update_position(QEMUFile *f, size_t size, bool zero);
@@ -356,8 +355,7 @@ int global_state_store(void);
 void global_state_store_running(void);
 
 void flush_page_queue(void);
-int ram_save_queue_pages(MigrationState *ms, const char *rbname,
-                         ram_addr_t start, ram_addr_t len);
+int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len);
 uint64_t ram_pagesize_summary(void);
 
 PostcopyState postcopy_state_get(void);
diff --git a/migration/migration.c b/migration/migration.c
index d2d9b91..ad4ea03 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -649,7 +649,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
         (1ul << qemu_target_page_bits());
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_dirty_sync_count();
-    info->ram->postcopy_requests = s->postcopy_requests;
+    info->ram->postcopy_requests = ram_postcopy_requests();
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
@@ -1112,7 +1112,6 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->setup_time = 0;
     s->start_postcopy = false;
     s->postcopy_after_devices = false;
-    s->postcopy_requests = 0;
     s->migration_thread_running = false;
     error_free(s->error);
     s->error = NULL;
@@ -1472,7 +1471,7 @@ static void migrate_handle_rp_req_pages(MigrationState *ms, const char* rbname,
         return;
     }
 
-    if (ram_save_queue_pages(ms, rbname, start, len)) {
+    if (ram_save_queue_pages(rbname, start, len)) {
         mark_source_rp_bad(ms);
     }
 }
diff --git a/migration/ram.c b/migration/ram.c
index 6cb1435..c0d6841 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -213,6 +213,8 @@ struct RAMState {
     uint64_t bytes_transferred;
     /* number of dirtied pages in the last second */
     uint64_t dirty_pages_rate;
+    /* Count of requests incoming from destination */
+    uint64_t postcopy_requests;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -282,6 +284,11 @@ uint64_t ram_dirty_pages_rate(void)
     return ram_state.dirty_pages_rate;
 }
 
+uint64_t ram_postcopy_requests(void)
+{
+    return ram_state.postcopy_requests;
+}
+
 /* used by the search for pages to send */
 struct PageSearchStatus {
     /* Current block being searched */
@@ -1237,19 +1244,17 @@ void flush_page_queue(void)
  *
  * Returns zero on success or negative on error
  *
- * @ms: current migration state
  * @rbname: Name of the RAMBLock of the request. NULL means the
  *          same that last one.
  * @start: starting address from the start of the RAMBlock
  * @len: length (in bytes) to send
  */
-int ram_save_queue_pages(MigrationState *ms, const char *rbname,
-                         ram_addr_t start, ram_addr_t len)
+int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
 {
     RAMBlock *ramblock;
     RAMState *rs = &ram_state;
 
-    ms->postcopy_requests++;
+    rs->postcopy_requests++;
     rcu_read_lock();
     if (!rbname) {
         /* Reuse last RAMBlock */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 35/51] ram: Add QEMUFile to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (33 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 34/51] ram: Move postcopy_requests into RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 10:52   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 36/51] ram: Move QEMUFile into RAMState Juan Quintela
                   ` (16 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index c0d6841..7667e73 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -165,6 +165,8 @@ struct RAMSrcPageRequest {
 
 /* State of RAM for migration */
 struct RAMState {
+    /* QEMUFile used for this migration */
+    QEMUFile *f;
     /* Last block that we have visited searching for dirty pages */
     RAMBlock *last_seen_block;
     /* Last block from where we have sent data */
@@ -524,14 +526,13 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
  *          -1 means that xbzrle would be longer than normal
  *
  * @rs: current RAM state
- * @f: QEMUFile where to send the data
  * @current_data: contents of the page
  * @current_addr: addr of the page
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
  */
-static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
+static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
                             ram_addr_t current_addr, RAMBlock *block,
                             ram_addr_t offset, bool last_stage)
 {
@@ -582,10 +583,11 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
     }
 
     /* Send XBZRLE based compressed page */
-    bytes_xbzrle = save_page_header(f, block, offset | RAM_SAVE_FLAG_XBZRLE);
-    qemu_put_byte(f, ENCODING_FLAG_XBZRLE);
-    qemu_put_be16(f, encoded_len);
-    qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
+    bytes_xbzrle = save_page_header(rs->f, block,
+                                    offset | RAM_SAVE_FLAG_XBZRLE);
+    qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
+    qemu_put_be16(rs->f, encoded_len);
+    qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
     bytes_xbzrle += encoded_len + 1 + 2;
     rs->xbzrle_pages++;
     rs->xbzrle_bytes += bytes_xbzrle;
@@ -849,7 +851,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             ram_release_pages(ms, block->idstr, pss->offset, pages);
         } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
-            pages = save_xbzrle_page(rs, f, &p, current_addr, block,
+            pages = save_xbzrle_page(rs, &p, current_addr, block,
                                      offset, last_stage);
             if (!last_stage) {
                 /* Can't send this cached data async, since the cache page
@@ -2087,6 +2089,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
             return -1;
          }
     }
+    rs->f = f;
 
     rcu_read_lock();
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 36/51] ram: Move QEMUFile into RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (34 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 35/51] ram: Add QEMUFile to RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31 14:21   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState Juan Quintela
                   ` (15 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We receive the file from the save_live operations but don't use it
until three or four levels of calls down.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 84 +++++++++++++++++++++++++--------------------------------
 1 file changed, 37 insertions(+), 47 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 7667e73..6a39704 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -756,21 +756,20 @@ static void migration_bitmap_sync(RAMState *rs)
  * Returns the number of pages written.
  *
  * @rs: current RAM state
- * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @p: pointer to the page
  */
-static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
-                          ram_addr_t offset, uint8_t *p)
+static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
+                          uint8_t *p)
 {
     int pages = -1;
 
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         rs->zero_pages++;
         rs->bytes_transferred +=
-            save_page_header(f, block, offset | RAM_SAVE_FLAG_COMPRESS);
-        qemu_put_byte(f, 0);
+            save_page_header(rs->f, block, offset | RAM_SAVE_FLAG_COMPRESS);
+        qemu_put_byte(rs->f, 0);
         rs->bytes_transferred += 1;
         pages = 1;
     }
@@ -798,12 +797,11 @@ static void ram_release_pages(MigrationState *ms, const char *rbname,
  *
  * @rs: current RAM state
  * @ms: current migration state
- * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
  */
-static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
+static int ram_save_page(RAMState *rs, MigrationState *ms,
                          PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
@@ -819,7 +817,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
 
     /* In doubt sent page as normal */
     bytes_xmit = 0;
-    ret = ram_control_save_page(f, block->offset,
+    ret = ram_control_save_page(rs->f, block->offset,
                            offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
         rs->bytes_transferred += bytes_xmit;
@@ -842,7 +840,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
             }
         }
     } else {
-        pages = save_zero_page(rs, f, block, offset, p);
+        pages = save_zero_page(rs, block, offset, p);
         if (pages > 0) {
             /* Must let xbzrle know, otherwise a previous (now 0'd) cached
              * page would be stale
@@ -864,14 +862,14 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
 
     /* XBZRLE overflow or normal page */
     if (pages == -1) {
-        rs->bytes_transferred += save_page_header(f, block,
+        rs->bytes_transferred += save_page_header(rs->f, block,
                                                offset | RAM_SAVE_FLAG_PAGE);
         if (send_async) {
-            qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE,
+            qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
                                   migrate_release_ram() &
                                   migration_in_postcopy(ms));
         } else {
-            qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
+            qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
         }
         rs->bytes_transferred += TARGET_PAGE_SIZE;
         pages = 1;
@@ -906,7 +904,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
     return bytes_sent;
 }
 
-static void flush_compressed_data(RAMState *rs, QEMUFile *f)
+static void flush_compressed_data(RAMState *rs)
 {
     int idx, len, thread_count;
 
@@ -926,7 +924,7 @@ static void flush_compressed_data(RAMState *rs, QEMUFile *f)
     for (idx = 0; idx < thread_count; idx++) {
         qemu_mutex_lock(&comp_param[idx].mutex);
         if (!comp_param[idx].quit) {
-            len = qemu_put_qemu_file(f, comp_param[idx].file);
+            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
             rs->bytes_transferred += len;
         }
         qemu_mutex_unlock(&comp_param[idx].mutex);
@@ -940,8 +938,8 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
     param->offset = offset;
 }
 
-static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
-                                           RAMBlock *block, ram_addr_t offset)
+static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
+                                           ram_addr_t offset)
 {
     int idx, thread_count, bytes_xmit = -1, pages = -1;
 
@@ -951,7 +949,7 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
         for (idx = 0; idx < thread_count; idx++) {
             if (comp_param[idx].done) {
                 comp_param[idx].done = false;
-                bytes_xmit = qemu_put_qemu_file(f, comp_param[idx].file);
+                bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
                 qemu_mutex_lock(&comp_param[idx].mutex);
                 set_compress_params(&comp_param[idx], block, offset);
                 qemu_cond_signal(&comp_param[idx].cond);
@@ -980,13 +978,11 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
  *
  * @rs: current RAM state
  * @ms: current migration state
- * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
  */
 static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
-                                    QEMUFile *f,
                                     PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
@@ -998,7 +994,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
 
     p = block->host + offset;
 
-    ret = ram_control_save_page(f, block->offset,
+    ret = ram_control_save_page(rs->f, block->offset,
                                 offset, TARGET_PAGE_SIZE, &bytes_xmit);
     if (bytes_xmit) {
         rs->bytes_transferred += bytes_xmit;
@@ -1020,20 +1016,20 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
          * is used to avoid resending the block name.
          */
         if (block != rs->last_sent_block) {
-            flush_compressed_data(rs, f);
-            pages = save_zero_page(rs, f, block, offset, p);
+            flush_compressed_data(rs);
+            pages = save_zero_page(rs, block, offset, p);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
-                bytes_xmit = save_page_header(f, block, offset |
+                bytes_xmit = save_page_header(rs->f, block, offset |
                                               RAM_SAVE_FLAG_COMPRESS_PAGE);
-                blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
+                blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
                                                  migrate_compress_level());
                 if (blen > 0) {
                     rs->bytes_transferred += bytes_xmit + blen;
                     rs->norm_pages++;
                     pages = 1;
                 } else {
-                    qemu_file_set_error(f, blen);
+                    qemu_file_set_error(rs->f, blen);
                     error_report("compressed data failed!");
                 }
             }
@@ -1042,9 +1038,9 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             }
         } else {
             offset |= RAM_SAVE_FLAG_CONTINUE;
-            pages = save_zero_page(rs, f, block, offset, p);
+            pages = save_zero_page(rs, block, offset, p);
             if (pages == -1) {
-                pages = compress_page_with_multi_thread(rs, f, block, offset);
+                pages = compress_page_with_multi_thread(rs, block, offset);
             } else {
                 ram_release_pages(ms, block->idstr, pss->offset, pages);
             }
@@ -1061,13 +1057,12 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
  * Returns if a page is found
  *
  * @rs: current RAM state
- * @f: QEMUFile where to send the data
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
  * @ram_addr_abs: pointer into which to store the address of the dirty page
  *                within the global ram_addr space
  */
-static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
+static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
                              bool *again, ram_addr_t *ram_addr_abs)
 {
     pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
@@ -1095,7 +1090,7 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
                 /* If xbzrle is on, stop using the data compression at this
                  * point. In theory, xbzrle can do better than compression.
                  */
-                flush_compressed_data(rs, f);
+                flush_compressed_data(rs);
                 compression_switch = false;
             }
         }
@@ -1314,12 +1309,11 @@ err:
  *
  * @rs: current RAM state
  * @ms: current migration state
- * @f: QEMUFile where to send the data
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
  * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
+static int ram_save_target_page(RAMState *rs, MigrationState *ms,
                                 PageSearchStatus *pss,
                                 bool last_stage,
                                 ram_addr_t dirty_ram_abs)
@@ -1330,9 +1324,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (compression_switch && migrate_use_compression()) {
-            res = ram_save_compressed_page(rs, ms, f, pss, last_stage);
+            res = ram_save_compressed_page(rs, ms, pss, last_stage);
         } else {
-            res = ram_save_page(rs, ms, f, pss, last_stage);
+            res = ram_save_page(rs, ms, pss, last_stage);
         }
 
         if (res < 0) {
@@ -1367,12 +1361,11 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
  *
  * @rs: current RAM state
  * @ms: current migration state
- * @f: QEMUFile where to send the data
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
+static int ram_save_host_page(RAMState *rs, MigrationState *ms,
                               PageSearchStatus *pss,
                               bool last_stage,
                               ram_addr_t dirty_ram_abs)
@@ -1381,8 +1374,7 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
     size_t pagesize = qemu_ram_pagesize(pss->block);
 
     do {
-        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
-                                        dirty_ram_abs);
+        tmppages = ram_save_target_page(rs, ms, pss, last_stage, dirty_ram_abs);
         if (tmppages < 0) {
             return tmppages;
         }
@@ -1405,14 +1397,13 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
  * Returns the number of pages written where zero means no dirty pages
  *
  * @rs: current RAM state
- * @f: QEMUFile where to send the data
  * @last_stage: if we are at the completion stage
  *
  * On systems where host-page-size > target-page-size it will send all the
  * pages in a host page that are dirty.
  */
 
-static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
+static int ram_find_and_save_block(RAMState *rs, bool last_stage)
 {
     PageSearchStatus pss;
     MigrationState *ms = migrate_get_current();
@@ -1440,12 +1431,11 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
 
         if (!found) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(rs, f, &pss, &again, &dirty_ram_abs);
+            found = find_dirty_block(rs, &pss, &again, &dirty_ram_abs);
         }
 
         if (found) {
-            pages = ram_save_host_page(rs, ms, f, &pss, last_stage,
-                                       dirty_ram_abs);
+            pages = ram_save_host_page(rs, ms, &pss, last_stage, dirty_ram_abs);
         }
     } while (!pages && again);
 
@@ -2145,7 +2135,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
     while ((ret = qemu_file_rate_limit(f)) == 0) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, false);
+        pages = ram_find_and_save_block(rs, false);
         /* no more pages to sent */
         if (pages == 0) {
             done = 1;
@@ -2167,7 +2157,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
-    flush_compressed_data(rs, f);
+    flush_compressed_data(rs);
     rcu_read_unlock();
 
     /*
@@ -2215,14 +2205,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     while (true) {
         int pages;
 
-        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state());
+        pages = ram_find_and_save_block(rs, !migration_in_colo_state());
         /* no more blocks to sent */
         if (pages == 0) {
             break;
         }
     }
 
-    flush_compressed_data(rs, f);
+    flush_compressed_data(rs);
     ram_control_after_iterate(f, RAM_CONTROL_FINISH);
 
     rcu_read_unlock();
-- 
2.9.3


* [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (35 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 36/51] ram: Move QEMUFile into RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-29 18:02   ` Dr. David Alan Gilbert
  2017-03-30  7:52   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy Juan Quintela
                   ` (14 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Rename it to preffer_xbzrle, which is a more descriptive name.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 6a39704..591cf89 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -217,6 +217,9 @@ struct RAMState {
     uint64_t dirty_pages_rate;
     /* Count of requests incoming from destination */
     uint64_t postcopy_requests;
+    /* Should we move to xbzrle after the 1st round
+       of compression */
+    bool preffer_xbzrle;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
     /* Ram Bitmap protected by RCU */
@@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
 /* The empty QEMUFileOps will be used by file in CompressParam */
 static const QEMUFileOps empty_ops = { };
 
-static bool compression_switch;
 static DecompressParam *decomp_param;
 static QemuThread *decompress_threads;
 static QemuMutex decomp_done_lock;
@@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
     if (!migrate_use_compression()) {
         return;
     }
-    compression_switch = true;
     thread_count = migrate_compress_threads();
     compress_threads = g_new0(QemuThread, thread_count);
     comp_param = g_new0(CompressParam, thread_count);
@@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
                  * point. In theory, xbzrle can do better than compression.
                  */
                 flush_compressed_data(rs);
-                compression_switch = false;
+                rs->preffer_xbzrle = true;
             }
         }
         /* Didn't find anything this time, but try again on the new block */
@@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
     /* Check the pages is dirty and if it is send it */
     if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
-        if (compression_switch && migrate_use_compression()) {
+        if (!rs->preffer_xbzrle && migrate_use_compression()) {
             res = ram_save_compressed_page(rs, ms, pss, last_stage);
         } else {
             res = ram_save_page(rs, ms, pss, last_stage);
-- 
2.9.3


* [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (36 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 15:27   ` Dr. David Alan Gilbert
  2017-03-30  8:06   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore Juan Quintela
                   ` (13 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We need to call migrate_get_current() in more than half of the uses,
so call it inside the function instead.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h |  2 +-
 migration/migration.c         |  6 ++++--
 migration/ram.c               | 22 ++++++++++------------
 migration/savevm.c            |  4 ++--
 4 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index e88bbaf..90849a5 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -238,7 +238,7 @@ bool migration_is_idle(MigrationState *s);
 bool migration_has_finished(MigrationState *);
 bool migration_has_failed(MigrationState *);
 /* True if outgoing migration has entered postcopy phase */
-bool migration_in_postcopy(MigrationState *);
+bool migration_in_postcopy(void);
 /* ...and after the device transmission */
 bool migration_in_postcopy_after_devices(MigrationState *);
 MigrationState *migrate_get_current(void);
diff --git a/migration/migration.c b/migration/migration.c
index ad4ea03..3f99ab3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1054,14 +1054,16 @@ bool migration_has_failed(MigrationState *s)
             s->state == MIGRATION_STATUS_FAILED);
 }
 
-bool migration_in_postcopy(MigrationState *s)
+bool migration_in_postcopy(void)
 {
+    MigrationState *s = migrate_get_current();
+
     return (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
 }
 
 bool migration_in_postcopy_after_devices(MigrationState *s)
 {
-    return migration_in_postcopy(s) && s->postcopy_after_devices;
+    return migration_in_postcopy() && s->postcopy_after_devices;
 }
 
 bool migration_is_idle(MigrationState *s)
diff --git a/migration/ram.c b/migration/ram.c
index 591cf89..cb5f06f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -778,10 +778,9 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
     return pages;
 }
 
-static void ram_release_pages(MigrationState *ms, const char *rbname,
-                              uint64_t offset, int pages)
+static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
 {
-    if (!migrate_release_ram() || !migration_in_postcopy(ms)) {
+    if (!migrate_release_ram() || !migration_in_postcopy()) {
         return;
     }
 
@@ -847,9 +846,9 @@ static int ram_save_page(RAMState *rs, MigrationState *ms,
              * page would be stale
              */
             xbzrle_cache_zero_page(rs, current_addr);
-            ram_release_pages(ms, block->idstr, pss->offset, pages);
+            ram_release_pages(block->idstr, pss->offset, pages);
         } else if (!rs->ram_bulk_stage &&
-                   !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
+                   !migration_in_postcopy() && migrate_use_xbzrle()) {
             pages = save_xbzrle_page(rs, &p, current_addr, block,
                                      offset, last_stage);
             if (!last_stage) {
@@ -868,7 +867,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms,
         if (send_async) {
             qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
                                   migrate_release_ram() &
-                                  migration_in_postcopy(ms));
+                                  migration_in_postcopy());
         } else {
             qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
         }
@@ -898,8 +897,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
         error_report("compressed data failed!");
     } else {
         bytes_sent += blen;
-        ram_release_pages(migrate_get_current(), block->idstr,
-                          offset & TARGET_PAGE_MASK, 1);
+        ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1);
     }
 
     return bytes_sent;
@@ -1035,7 +1033,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
                 }
             }
             if (pages > 0) {
-                ram_release_pages(ms, block->idstr, pss->offset, pages);
+                ram_release_pages(block->idstr, pss->offset, pages);
             }
         } else {
             offset |= RAM_SAVE_FLAG_CONTINUE;
@@ -1043,7 +1041,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
             if (pages == -1) {
                 pages = compress_page_with_multi_thread(rs, block, offset);
             } else {
-                ram_release_pages(ms, block->idstr, pss->offset, pages);
+                ram_release_pages(block->idstr, pss->offset, pages);
             }
         }
     }
@@ -2194,7 +2192,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 
     rcu_read_lock();
 
-    if (!migration_in_postcopy(migrate_get_current())) {
+    if (!migration_in_postcopy()) {
         migration_bitmap_sync(rs);
     }
 
@@ -2232,7 +2230,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
 
     remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
 
-    if (!migration_in_postcopy(migrate_get_current()) &&
+    if (!migration_in_postcopy() &&
         remaining_size < max_size) {
         qemu_mutex_lock_iothread();
         rcu_read_lock();
diff --git a/migration/savevm.c b/migration/savevm.c
index 3b19a4a..853a81a 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1062,7 +1062,7 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
 static bool should_send_vmdesc(void)
 {
     MachineState *machine = MACHINE(qdev_get_machine());
-    bool in_postcopy = migration_in_postcopy(migrate_get_current());
+    bool in_postcopy = migration_in_postcopy();
     return !machine->suppress_vmdesc && !in_postcopy;
 }
 
@@ -1111,7 +1111,7 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only)
     int vmdesc_len;
     SaveStateEntry *se;
     int ret;
-    bool in_postcopy = migration_in_postcopy(migrate_get_current());
+    bool in_postcopy = migration_in_postcopy();
 
     trace_savevm_state_complete_precopy();
 
-- 
2.9.3


* [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (37 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 15:28   ` Dr. David Alan Gilbert
  2017-03-30  8:05   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size() Juan Quintela
                   ` (12 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Remove it from callers and callees.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 27 ++++++++++-----------------
 1 file changed, 10 insertions(+), 17 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index cb5f06f..064b2c0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -796,13 +796,11 @@ static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
  *                if xbzrle noticed the page was the same.
  *
  * @rs: current RAM state
- * @ms: current migration state
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
  */
-static int ram_save_page(RAMState *rs, MigrationState *ms,
-                         PageSearchStatus *pss, bool last_stage)
+static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 {
     int pages = -1;
     uint64_t bytes_xmit;
@@ -976,13 +974,12 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
  * Returns the number of pages written.
  *
  * @rs: current RAM state
- * @ms: current migration state
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
  */
-static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
-                                    PageSearchStatus *pss, bool last_stage)
+static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
+                                    bool last_stage)
 {
     int pages = -1;
     uint64_t bytes_xmit = 0;
@@ -1312,10 +1309,8 @@ err:
  * @last_stage: if we are at the completion stage
  * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_target_page(RAMState *rs, MigrationState *ms,
-                                PageSearchStatus *pss,
-                                bool last_stage,
-                                ram_addr_t dirty_ram_abs)
+static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
+                                bool last_stage, ram_addr_t dirty_ram_abs)
 {
     int res = 0;
 
@@ -1323,9 +1318,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
     if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
         unsigned long *unsentmap;
         if (!rs->preffer_xbzrle && migrate_use_compression()) {
-            res = ram_save_compressed_page(rs, ms, pss, last_stage);
+            res = ram_save_compressed_page(rs, pss, last_stage);
         } else {
-            res = ram_save_page(rs, ms, pss, last_stage);
+            res = ram_save_page(rs, pss, last_stage);
         }
 
         if (res < 0) {
@@ -1364,8 +1359,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
  * @last_stage: if we are at the completion stage
  * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
  */
-static int ram_save_host_page(RAMState *rs, MigrationState *ms,
-                              PageSearchStatus *pss,
+static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
                               bool last_stage,
                               ram_addr_t dirty_ram_abs)
 {
@@ -1373,7 +1367,7 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms,
     size_t pagesize = qemu_ram_pagesize(pss->block);
 
     do {
-        tmppages = ram_save_target_page(rs, ms, pss, last_stage, dirty_ram_abs);
+        tmppages = ram_save_target_page(rs, pss, last_stage, dirty_ram_abs);
         if (tmppages < 0) {
             return tmppages;
         }
@@ -1405,7 +1399,6 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms,
 static int ram_find_and_save_block(RAMState *rs, bool last_stage)
 {
     PageSearchStatus pss;
-    MigrationState *ms = migrate_get_current();
     int pages = 0;
     bool again, found;
     ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
@@ -1434,7 +1427,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
         }
 
         if (found) {
-            pages = ram_save_host_page(rs, ms, &pss, last_stage, dirty_ram_abs);
+            pages = ram_save_host_page(rs, &pss, last_stage, dirty_ram_abs);
         }
     } while (!pages && again);
 
-- 
2.9.3


* [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size()
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (38 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 15:32   ` Dr. David Alan Gilbert
  2017-03-30  8:03   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 41/51] Add page-size to output in 'info migrate' Juan Quintela
                   ` (11 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

It was used as a size in all cases except one.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 exec.c                   | 4 ++--
 include/sysemu/sysemu.h  | 2 +-
 migration/migration.c    | 4 ++--
 migration/postcopy-ram.c | 8 ++++----
 migration/savevm.c       | 8 ++++----
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/exec.c b/exec.c
index e57a8a2..9a4c385 100644
--- a/exec.c
+++ b/exec.c
@@ -3349,9 +3349,9 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
  * Allows code that needs to deal with migration bitmaps etc to still be built
  * target independent.
  */
-size_t qemu_target_page_bits(void)
+size_t qemu_target_page_size(void)
 {
-    return TARGET_PAGE_BITS;
+    return TARGET_PAGE_SIZE;
 }
 
 #endif
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 576c7ce..16175f7 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -67,7 +67,7 @@ int qemu_reset_requested_get(void);
 void qemu_system_killed(int signal, pid_t pid);
 void qemu_system_reset(bool report);
 void qemu_system_guest_panicked(GuestPanicInformation *info);
-size_t qemu_target_page_bits(void);
+size_t qemu_target_page_size(void);
 
 void qemu_add_exit_notifier(Notifier *notify);
 void qemu_remove_exit_notifier(Notifier *notify);
diff --git a/migration/migration.c b/migration/migration.c
index 3f99ab3..92c3c6b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -646,7 +646,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->skipped = 0;
     info->ram->normal = norm_mig_pages_transferred();
     info->ram->normal_bytes = norm_mig_pages_transferred() *
-        (1ul << qemu_target_page_bits());
+        qemu_target_page_size();
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_dirty_sync_count();
     info->ram->postcopy_requests = ram_postcopy_requests();
@@ -2001,7 +2001,7 @@ static void *migration_thread(void *opaque)
                10000 is a small enough number for our purposes */
             if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
                 s->expected_downtime = ram_dirty_pages_rate() *
-                    (1ul << qemu_target_page_bits()) / bandwidth;
+                    qemu_target_page_size() / bandwidth;
             }
 
             qemu_file_reset_rate_limit(s->to_dst_file);
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index dc80dbb..8756364 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -123,7 +123,7 @@ bool postcopy_ram_supported_by_host(void)
     struct uffdio_range range_struct;
     uint64_t feature_mask;
 
-    if ((1ul << qemu_target_page_bits()) > pagesize) {
+    if (qemu_target_page_size() > pagesize) {
         error_report("Target page size bigger than host page size");
         goto out;
     }
@@ -745,10 +745,10 @@ PostcopyDiscardState *postcopy_discard_send_init(MigrationState *ms,
 void postcopy_discard_send_range(MigrationState *ms, PostcopyDiscardState *pds,
                                 unsigned long start, unsigned long length)
 {
-    size_t tp_bits = qemu_target_page_bits();
+    size_t tp_size = qemu_target_page_size();
     /* Convert to byte offsets within the RAM block */
-    pds->start_list[pds->cur_entry] = (start - pds->offset) << tp_bits;
-    pds->length_list[pds->cur_entry] = length << tp_bits;
+    pds->start_list[pds->cur_entry] = (start - pds->offset) * tp_size;
+    pds->length_list[pds->cur_entry] = length * tp_size;
     trace_postcopy_discard_send_range(pds->ramblock_name, start, length);
     pds->cur_entry++;
     pds->nsentwords++;
diff --git a/migration/savevm.c b/migration/savevm.c
index 853a81a..bbf055d 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -871,7 +871,7 @@ void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
     uint64_t tmp[2];
     tmp[0] = cpu_to_be64(ram_pagesize_summary());
-    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+    tmp[1] = cpu_to_be64(qemu_target_page_size());
 
     trace_qemu_savevm_send_postcopy_advise();
     qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
@@ -1390,13 +1390,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
     }
 
     remote_tps = qemu_get_be64(mis->from_src_file);
-    if (remote_tps != (1ul << qemu_target_page_bits())) {
+    if (remote_tps != qemu_target_page_size()) {
         /*
          * Again, some differences could be dealt with, but for now keep it
          * simple.
          */
-        error_report("Postcopy needs matching target page sizes (s=%d d=%d)",
-                     (int)remote_tps, 1 << qemu_target_page_bits());
+        error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
+                     (int)remote_tps, qemu_target_page_size());
         return -1;
     }
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 41/51] Add page-size to output in 'info migrate'
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (39 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size() Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 17:17   ` Eric Blake
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync Juan Quintela
                   ` (10 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert, Chao Fan, Li Zhijian

From: Chao Fan <fanc.fnst@cn.fujitsu.com>

The number of dirty pages is output as 'pages' by the command
'info migrate', so add page-size so that the number of dirty
pages can also be expressed in bytes.

Signed-off-by: Chao Fan <fanc.fnst@cn.fujitsu.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 hmp.c                 | 3 +++
 migration/migration.c | 1 +
 qapi-schema.json      | 5 ++++-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/hmp.c b/hmp.c
index edb8970..be75e71 100644
--- a/hmp.c
+++ b/hmp.c
@@ -215,6 +215,9 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->ram->normal_bytes >> 10);
         monitor_printf(mon, "dirty sync count: %" PRIu64 "\n",
                        info->ram->dirty_sync_count);
+        monitor_printf(mon, "page size: %" PRIu64 " kbytes\n",
+                       info->ram->page_size >> 10);
+
         if (info->ram->dirty_pages_rate) {
             monitor_printf(mon, "dirty pages rate: %" PRIu64 " pages\n",
                            info->ram->dirty_pages_rate);
diff --git a/migration/migration.c b/migration/migration.c
index 92c3c6b..fc19ba7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -650,6 +650,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
     info->ram->mbps = s->mbps;
     info->ram->dirty_sync_count = ram_dirty_sync_count();
     info->ram->postcopy_requests = ram_postcopy_requests();
+    info->ram->page_size = qemu_target_page_size();
 
     if (s->state != MIGRATION_STATUS_COMPLETED) {
         info->ram->remaining = ram_bytes_remaining();
diff --git a/qapi-schema.json b/qapi-schema.json
index 68a4327..c7ec62c 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -598,6 +598,9 @@
 # @postcopy-requests: The number of page requests received from the destination
 #        (since 2.7)
 #
+# @page-size: The number of bytes per page for the various page-based
+#        statistics (since 2.10)
+#
 # Since: 0.14.0
 ##
 { 'struct': 'MigrationStats',
@@ -605,7 +608,7 @@
            'duplicate': 'int', 'skipped': 'int', 'normal': 'int',
            'normal-bytes': 'int', 'dirty-pages-rate' : 'int',
            'mbps' : 'number', 'dirty-sync-count' : 'int',
-           'postcopy-requests' : 'int' } }
+           'postcopy-requests' : 'int', 'page-size' : 'int' } }
 
 ##
 # @XBZRLECacheStats:
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (40 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 41/51] Add page-size to output in 'info migrate' Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24  1:10   ` Yang Hongyang
  2017-03-30  9:07   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter Juan Quintela
                   ` (9 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We change the meaning of start to be the offset from the beginning of
the block.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/exec/ram_addr.h | 2 ++
 migration/ram.c         | 8 ++++----
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index b05dc84..d50c970 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -354,11 +354,13 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
 
 static inline
 uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
+                                               RAMBlock *rb,
                                                ram_addr_t start,
                                                ram_addr_t length,
                                                int64_t *real_dirty_pages)
 {
     ram_addr_t addr;
+    start = rb->offset + start;
     unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
     uint64_t num_dirty = 0;
 
diff --git a/migration/ram.c b/migration/ram.c
index 064b2c0..9772fd8 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -648,13 +648,13 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
     return ret;
 }
 
-static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
-                                        ram_addr_t length)
+static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
+                                        ram_addr_t start, ram_addr_t length)
 {
     unsigned long *bitmap;
     bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     rs->migration_dirty_pages +=
-        cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length,
+        cpu_physical_memory_sync_dirty_bitmap(bitmap, rb, start, length,
                                               &rs->num_dirty_pages_period);
 }
 
@@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
     qemu_mutex_lock(&rs->bitmap_mutex);
     rcu_read_lock();
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-        migration_bitmap_sync_range(rs, block->offset, block->used_length);
+        migration_bitmap_sync_range(rs, block, 0, block->used_length);
     }
     rcu_read_unlock();
     qemu_mutex_unlock(&rs->bitmap_mutex);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (41 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-29 18:43   ` Dr. David Alan Gilbert
  2017-03-30 10:28   ` Peter Xu
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block Juan Quintela
                   ` (8 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 3 +--
 migration/postcopy-ram.c      | 6 ++----
 migration/ram.c               | 9 +++------
 migration/savevm.c            | 3 +--
 4 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 90849a5..39a8e7e 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -270,8 +270,7 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected);
 /* For outgoing discard bitmap */
 int ram_postcopy_send_discard_bitmap(MigrationState *ms);
 /* For incoming postcopy discard */
-int ram_discard_range(MigrationIncomingState *mis, const char *block_name,
-                      uint64_t start, size_t length);
+int ram_discard_range(const char *block_name, uint64_t start, size_t length);
 int ram_postcopy_incoming_init(MigrationIncomingState *mis);
 void ram_postcopy_migrated_memory_release(MigrationState *ms);
 
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 8756364..85fd8d7 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -213,8 +213,6 @@ out:
 static int init_range(const char *block_name, void *host_addr,
                       ram_addr_t offset, ram_addr_t length, void *opaque)
 {
-    MigrationIncomingState *mis = opaque;
-
     trace_postcopy_init_range(block_name, host_addr, offset, length);
 
     /*
@@ -223,7 +221,7 @@ static int init_range(const char *block_name, void *host_addr,
      * - we're going to get the copy from the source anyway.
      * (Precopy will just overwrite this data, so doesn't need the discard)
      */
-    if (ram_discard_range(mis, block_name, 0, length)) {
+    if (ram_discard_range(block_name, 0, length)) {
         return -1;
     }
 
@@ -271,7 +269,7 @@ static int cleanup_range(const char *block_name, void *host_addr,
  */
 int postcopy_ram_incoming_init(MigrationIncomingState *mis, size_t ram_pages)
 {
-    if (qemu_ram_foreach_block(init_range, mis)) {
+    if (qemu_ram_foreach_block(init_range, NULL)) {
         return -1;
     }
 
diff --git a/migration/ram.c b/migration/ram.c
index 9772fd8..83c749c 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -784,7 +784,7 @@ static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
         return;
     }
 
-    ram_discard_range(NULL, rbname, offset, pages << TARGET_PAGE_BITS);
+    ram_discard_range(rbname, offset, pages << TARGET_PAGE_BITS);
 }
 
 /**
@@ -1602,7 +1602,7 @@ void ram_postcopy_migrated_memory_release(MigrationState *ms)
 
         while (run_start < range) {
             unsigned long run_end = find_next_bit(bitmap, range, run_start + 1);
-            ram_discard_range(NULL, block->idstr, run_start << TARGET_PAGE_BITS,
+            ram_discard_range(block->idstr, run_start << TARGET_PAGE_BITS,
                               (run_end - run_start) << TARGET_PAGE_BITS);
             run_start = find_next_zero_bit(bitmap, range, run_end + 1);
         }
@@ -1942,15 +1942,12 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
  *
  * Returns zero on success
  *
- * @mis: current migration incoming state
  * @rbname: name of the RAMBLock of the request. NULL means the
  *          same that last one.
  * @start: RAMBlock starting page
  * @length: RAMBlock size
  */
-int ram_discard_range(MigrationIncomingState *mis,
-                      const char *rbname,
-                      uint64_t start, size_t length)
+int ram_discard_range(const char *rbname, uint64_t start, size_t length)
 {
     int ret = -1;
 
diff --git a/migration/savevm.c b/migration/savevm.c
index bbf055d..7cf387f 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1479,8 +1479,7 @@ static int loadvm_postcopy_ram_handle_discard(MigrationIncomingState *mis,
         block_length = qemu_get_be64(mis->from_src_file);
 
         len -= 16;
-        int ret = ram_discard_range(mis, ramid, start_addr,
-                                    block_length);
+        int ret = ram_discard_range(ramid, start_addr, block_length);
         if (ret) {
             return ret;
         }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (42 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31  8:35   ` Peter Xu
  2017-03-31  8:40   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 45/51] ram: Use page number instead of an address for the bitmap operations Juan Quintela
                   ` (7 subsequent siblings)
  51 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We were setting it far away from where we changed it.  Now everything
is done inside save_page_header.  Once there, reorganize the code to
pass RAMState.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 83c749c..6cd77b5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -453,18 +453,22 @@ void migrate_compress_threads_create(void)
  * @offset: offset inside the block for the page
  *          in the lower bits, it contains flags
  */
-static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
+static size_t save_page_header(RAMState *rs, RAMBlock *block, ram_addr_t offset)
 {
     size_t size, len;
 
-    qemu_put_be64(f, offset);
+    if (block == rs->last_sent_block) {
+        offset |= RAM_SAVE_FLAG_CONTINUE;
+    }
+    qemu_put_be64(rs->f, offset);
     size = 8;
 
     if (!(offset & RAM_SAVE_FLAG_CONTINUE)) {
         len = strlen(block->idstr);
-        qemu_put_byte(f, len);
-        qemu_put_buffer(f, (uint8_t *)block->idstr, len);
+        qemu_put_byte(rs->f, len);
+        qemu_put_buffer(rs->f, (uint8_t *)block->idstr, len);
         size += 1 + len;
+        rs->last_sent_block = block;
     }
     return size;
 }
@@ -584,7 +588,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
     }
 
     /* Send XBZRLE based compressed page */
-    bytes_xbzrle = save_page_header(rs->f, block,
+    bytes_xbzrle = save_page_header(rs, block,
                                     offset | RAM_SAVE_FLAG_XBZRLE);
     qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
     qemu_put_be16(rs->f, encoded_len);
@@ -769,7 +773,7 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
     if (is_zero_range(p, TARGET_PAGE_SIZE)) {
         rs->zero_pages++;
         rs->bytes_transferred +=
-            save_page_header(rs->f, block, offset | RAM_SAVE_FLAG_COMPRESS);
+            save_page_header(rs, block, offset | RAM_SAVE_FLAG_COMPRESS);
         qemu_put_byte(rs->f, 0);
         rs->bytes_transferred += 1;
         pages = 1;
@@ -826,9 +830,6 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 
     current_addr = block->offset + offset;
 
-    if (block == rs->last_sent_block) {
-        offset |= RAM_SAVE_FLAG_CONTINUE;
-    }
     if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_xmit > 0) {
@@ -860,8 +861,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 
     /* XBZRLE overflow or normal page */
     if (pages == -1) {
-        rs->bytes_transferred += save_page_header(rs->f, block,
-                                               offset | RAM_SAVE_FLAG_PAGE);
+        rs->bytes_transferred += save_page_header(rs, block,
+                                                  offset | RAM_SAVE_FLAG_PAGE);
         if (send_async) {
             qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
                                   migrate_release_ram() &
@@ -882,10 +883,11 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
 static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
                                 ram_addr_t offset)
 {
+    RAMState *rs = &ram_state;
     int bytes_sent, blen;
     uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
 
-    bytes_sent = save_page_header(f, block, offset |
+    bytes_sent = save_page_header(rs, block, offset |
                                   RAM_SAVE_FLAG_COMPRESS_PAGE);
     blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
                                      migrate_compress_level());
@@ -1016,7 +1018,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
             pages = save_zero_page(rs, block, offset, p);
             if (pages == -1) {
                 /* Make sure the first page is sent out before other pages */
-                bytes_xmit = save_page_header(rs->f, block, offset |
+                bytes_xmit = save_page_header(rs, block, offset |
                                               RAM_SAVE_FLAG_COMPRESS_PAGE);
                 blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
                                                  migrate_compress_level());
@@ -1033,7 +1035,6 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
                 ram_release_pages(block->idstr, pss->offset, pages);
             }
         } else {
-            offset |= RAM_SAVE_FLAG_CONTINUE;
             pages = save_zero_page(rs, block, offset, p);
             if (pages == -1) {
                 pages = compress_page_with_multi_thread(rs, block, offset);
@@ -1330,13 +1331,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         if (unsentmap) {
             clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
         }
-        /* Only update last_sent_block if a block was actually sent; xbzrle
-         * might have decided the page was identical so didn't bother writing
-         * to the stream.
-         */
-        if (res > 0) {
-            rs->last_sent_block = pss->block;
-        }
     }
 
     return res;
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 45/51] ram: Use page number instead of an address for the bitmap operations
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (43 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31 12:22   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 46/51] ram: Remember last_page instead of last_offset Juan Quintela
                   ` (6 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We use an unsigned long for the page number.  Notice that our bitmaps
already use that type for the index, so we inherit that limit anyway.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 76 ++++++++++++++++++++++++++-------------------------------
 1 file changed, 34 insertions(+), 42 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 6cd77b5..b1a031e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -611,13 +611,12 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
  * @rs: current RAM state
  * @rb: RAMBlock where to search for dirty pages
  * @start: starting address (typically so we can continue from previous page)
- * @ram_addr_abs: pointer into which to store the address of the dirty page
- *                within the global ram_addr space
+ * @page: pointer into where to store the dirty page
  */
 static inline
 ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
                                        ram_addr_t start,
-                                       ram_addr_t *ram_addr_abs)
+                                       unsigned long *page)
 {
     unsigned long base = rb->offset >> TARGET_PAGE_BITS;
     unsigned long nr = base + (start >> TARGET_PAGE_BITS);
@@ -634,17 +633,17 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
         next = find_next_bit(bitmap, size, nr);
     }
 
-    *ram_addr_abs = next << TARGET_PAGE_BITS;
+    *page = next;
     return (next - base) << TARGET_PAGE_BITS;
 }
 
-static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
+static inline bool migration_bitmap_clear_dirty(RAMState *rs,
+                                                unsigned long page)
 {
     bool ret;
-    int nr = addr >> TARGET_PAGE_BITS;
     unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
 
-    ret = test_and_clear_bit(nr, bitmap);
+    ret = test_and_clear_bit(page, bitmap);
 
     if (ret) {
         rs->migration_dirty_pages--;
@@ -1056,14 +1055,13 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
  * @again: set to false if the search has scanned the whole of RAM
- * @ram_addr_abs: pointer into which to store the address of the dirty page
- *                within the global ram_addr space
+ * @page: pointer into where to store the dirty page
  */
 static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
-                             bool *again, ram_addr_t *ram_addr_abs)
+                             bool *again, unsigned long *page)
 {
     pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
-                                              ram_addr_abs);
+                                              page);
     if (pss->complete_round && pss->block == rs->last_seen_block &&
         pss->offset >= rs->last_offset) {
         /*
@@ -1111,11 +1109,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
  *
  * @rs: current RAM state
  * @offset: used to return the offset within the RAMBlock
- * @ram_addr_abs: pointer into which to store the address of the dirty page
- *                within the global ram_addr space
+ * @page: pointer into where to store the dirty page
  */
 static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
-                              ram_addr_t *ram_addr_abs)
+                              unsigned long *page)
 {
     RAMBlock *block = NULL;
 
@@ -1125,8 +1122,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
                                 QSIMPLEQ_FIRST(&rs->src_page_requests);
         block = entry->rb;
         *offset = entry->offset;
-        *ram_addr_abs = (entry->offset + entry->rb->offset) &
-                        TARGET_PAGE_MASK;
+        *page = (entry->offset + entry->rb->offset) >> TARGET_PAGE_BITS;
 
         if (entry->len > TARGET_PAGE_SIZE) {
             entry->len -= TARGET_PAGE_SIZE;
@@ -1151,18 +1147,17 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
  *
  * @rs: current RAM state
  * @pss: data about the state of the current dirty page scan
- * @ram_addr_abs: pointer into which to store the address of the dirty page
- *                within the global ram_addr space
+ * @page: pointer into where to store the dirty page
  */
 static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
-                            ram_addr_t *ram_addr_abs)
+                            unsigned long *page)
 {
     RAMBlock  *block;
     ram_addr_t offset;
     bool dirty;
 
     do {
-        block = unqueue_page(rs, &offset, ram_addr_abs);
+        block = unqueue_page(rs, &offset, page);
         /*
          * We're sending this page, and since it's postcopy nothing else
          * will dirty it, and we must make sure it doesn't get sent again
@@ -1172,17 +1167,15 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
         if (block) {
             unsigned long *bitmap;
             bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-            dirty = test_bit(*ram_addr_abs >> TARGET_PAGE_BITS, bitmap);
+            dirty = test_bit(*page, bitmap);
             if (!dirty) {
-                trace_get_queued_page_not_dirty(
-                    block->idstr, (uint64_t)offset,
-                    (uint64_t)*ram_addr_abs,
-                    test_bit(*ram_addr_abs >> TARGET_PAGE_BITS,
-                         atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
+                trace_get_queued_page_not_dirty(block->idstr, (uint64_t)offset,
+                    *page,
+                    test_bit(*page,
+                             atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
             } else {
-                trace_get_queued_page(block->idstr,
-                                      (uint64_t)offset,
-                                      (uint64_t)*ram_addr_abs);
+                trace_get_queued_page(block->idstr, (uint64_t)offset,
+                                     *page);
             }
         }
 
@@ -1308,15 +1301,15 @@ err:
  * @ms: current migration state
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
- * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
+ * @page: page number of the dirty page
  */
 static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
-                                bool last_stage, ram_addr_t dirty_ram_abs)
+                                bool last_stage, unsigned long page)
 {
     int res = 0;
 
     /* Check the pages is dirty and if it is send it */
-    if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
+    if (migration_bitmap_clear_dirty(rs, page)) {
         unsigned long *unsentmap;
         if (!rs->preffer_xbzrle && migrate_use_compression()) {
             res = ram_save_compressed_page(rs, pss, last_stage);
@@ -1329,7 +1322,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         }
         unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
         if (unsentmap) {
-            clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
+            clear_bit(page, unsentmap);
         }
     }
 
@@ -1351,24 +1344,24 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
  * @ms: current migration state
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
- * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
+ * @page: Page number of the dirty page
  */
 static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
                               bool last_stage,
-                              ram_addr_t dirty_ram_abs)
+                              unsigned long page)
 {
     int tmppages, pages = 0;
     size_t pagesize = qemu_ram_pagesize(pss->block);
 
     do {
-        tmppages = ram_save_target_page(rs, pss, last_stage, dirty_ram_abs);
+        tmppages = ram_save_target_page(rs, pss, last_stage, page);
         if (tmppages < 0) {
             return tmppages;
         }
 
         pages += tmppages;
         pss->offset += TARGET_PAGE_SIZE;
-        dirty_ram_abs += TARGET_PAGE_SIZE;
+        page++;
     } while (pss->offset & (pagesize - 1));
 
     /* The offset we leave with is the last one we looked at */
@@ -1395,8 +1388,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
     PageSearchStatus pss;
     int pages = 0;
     bool again, found;
-    ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
-                                 ram_addr_t space */
+    unsigned long page; /* Page number of the dirty page */
 
     /* No dirty page as there is zero RAM */
     if (!ram_bytes_total()) {
@@ -1413,15 +1405,15 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
 
     do {
         again = true;
-        found = get_queued_page(rs, &pss, &dirty_ram_abs);
+        found = get_queued_page(rs, &pss, &page);
 
         if (!found) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(rs, &pss, &again, &dirty_ram_abs);
+            found = find_dirty_block(rs, &pss, &again, &page);
         }
 
         if (found) {
-            pages = ram_save_host_page(rs, &pss, last_stage, dirty_ram_abs);
+            pages = ram_save_host_page(rs, &pss, last_stage, page);
         }
     } while (!pages && again);
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 167+ messages in thread

* [Qemu-devel] [PATCH 46/51] ram: Remember last_page instead of last_offset
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (44 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 45/51] ram: Use page number instead of an address for the bitmap operations Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31  9:09   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 47/51] ram: Change offset field in PageSearchStatus to page Juan Quintela
                   ` (5 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index b1a031e..57b776b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -171,8 +171,8 @@ struct RAMState {
     RAMBlock *last_seen_block;
     /* Last block from where we have sent data */
     RAMBlock *last_sent_block;
-    /* Last offset we have sent data from */
-    ram_addr_t last_offset;
+    /* Last dirty page we have sent */
+    ram_addr_t last_page;
     /* last ram version we have seen */
     uint32_t last_version;
     /* We are in the first round */
@@ -1063,7 +1063,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
     pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
                                               page);
     if (pss->complete_round && pss->block == rs->last_seen_block &&
-        pss->offset >= rs->last_offset) {
+        pss->offset >= rs->last_page) {
         /*
          * We've been once around the RAM and haven't found anything.
          * Give up.
@@ -1396,7 +1396,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
     }
 
     pss.block = rs->last_seen_block;
-    pss.offset = rs->last_offset;
+    pss.offset = rs->last_page << TARGET_PAGE_BITS;
     pss.complete_round = false;
 
     if (!pss.block) {
@@ -1418,7 +1418,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
     } while (!pages && again);
 
     rs->last_seen_block = pss.block;
-    rs->last_offset = pss.offset;
+    rs->last_page = pss.offset >> TARGET_PAGE_BITS;
 
     return pages;
 }
@@ -1493,7 +1493,7 @@ static void ram_state_reset(RAMState *rs)
 {
     rs->last_seen_block = NULL;
     rs->last_sent_block = NULL;
-    rs->last_offset = 0;
+    rs->last_page = 0;
     rs->last_version = ram_list.version;
     rs->ram_bulk_stage = true;
 }
@@ -1838,7 +1838,7 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
     /* Easiest way to make sure we don't resume in the middle of a host-page */
     rs->last_seen_block = NULL;
     rs->last_sent_block = NULL;
-    rs->last_offset     = 0;
+    rs->last_page = 0;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
         unsigned long first = block->offset >> TARGET_PAGE_BITS;
-- 
2.9.3


* [Qemu-devel] [PATCH 47/51] ram: Change offset field in PageSearchStatus to page
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (45 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 46/51] ram: Remember last_page instead of last_offset Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31 12:31   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 48/51] ram: Use ramblock and page offset instead of absolute offset Juan Quintela
                   ` (4 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We are moving everything to work on pages, not addresses.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 50 +++++++++++++++++++++++++-------------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 57b776b..ef3b428 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -298,8 +298,8 @@ uint64_t ram_postcopy_requests(void)
 struct PageSearchStatus {
     /* Current block being searched */
     RAMBlock    *block;
-    /* Current offset to search from */
-    ram_addr_t   offset;
+    /* Current page to search from */
+    unsigned long page;
     /* Set once we wrap around */
     bool         complete_round;
 };
@@ -610,16 +610,16 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
  *
  * @rs: current RAM state
  * @rb: RAMBlock where to search for dirty pages
- * @start: starting address (typically so we can continue from previous page)
+ * @start: page where we start the search
  * @page: pointer into where to store the dirty page
  */
 static inline
-ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
-                                       ram_addr_t start,
-                                       unsigned long *page)
+unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
+                                          unsigned long start,
+                                          unsigned long *page)
 {
     unsigned long base = rb->offset >> TARGET_PAGE_BITS;
-    unsigned long nr = base + (start >> TARGET_PAGE_BITS);
+    unsigned long nr = base + start;
     uint64_t rb_size = rb->used_length;
     unsigned long size = base + (rb_size >> TARGET_PAGE_BITS);
     unsigned long *bitmap;
@@ -634,7 +634,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
     }
 
     *page = next;
-    return (next - base) << TARGET_PAGE_BITS;
+    return next - base;
 }
 
 static inline bool migration_bitmap_clear_dirty(RAMState *rs,
@@ -812,7 +812,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
     int ret;
     bool send_async = true;
     RAMBlock *block = pss->block;
-    ram_addr_t offset = pss->offset;
+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
 
     p = block->host + offset;
 
@@ -844,7 +844,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
              * page would be stale
              */
             xbzrle_cache_zero_page(rs, current_addr);
-            ram_release_pages(block->idstr, pss->offset, pages);
+            ram_release_pages(block->idstr, offset, pages);
         } else if (!rs->ram_bulk_stage &&
                    !migration_in_postcopy() && migrate_use_xbzrle()) {
             pages = save_xbzrle_page(rs, &p, current_addr, block,
@@ -987,7 +987,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
     uint8_t *p;
     int ret, blen;
     RAMBlock *block = pss->block;
-    ram_addr_t offset = pss->offset;
+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
 
     p = block->host + offset;
 
@@ -1031,14 +1031,14 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
                 }
             }
             if (pages > 0) {
-                ram_release_pages(block->idstr, pss->offset, pages);
+                ram_release_pages(block->idstr, offset, pages);
             }
         } else {
             pages = save_zero_page(rs, block, offset, p);
             if (pages == -1) {
                 pages = compress_page_with_multi_thread(rs, block, offset);
             } else {
-                ram_release_pages(block->idstr, pss->offset, pages);
+                ram_release_pages(block->idstr, offset, pages);
             }
         }
     }
@@ -1060,10 +1060,9 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
 static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
                              bool *again, unsigned long *page)
 {
-    pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
-                                              page);
+    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page, page);
     if (pss->complete_round && pss->block == rs->last_seen_block &&
-        pss->offset >= rs->last_page) {
+        pss->page >= rs->last_page) {
         /*
          * We've been once around the RAM and haven't found anything.
          * Give up.
@@ -1071,9 +1070,9 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
         *again = false;
         return false;
     }
-    if (pss->offset >= pss->block->used_length) {
+    if ((pss->page << TARGET_PAGE_BITS) >= pss->block->used_length) {
         /* Didn't find anything in this RAM Block */
-        pss->offset = 0;
+        pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
             /* Hit the end of the list */
@@ -1196,7 +1195,7 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
          * it just requested.
          */
         pss->block = block;
-        pss->offset = offset;
+        pss->page = offset >> TARGET_PAGE_BITS;
     }
 
     return !!block;
@@ -1351,7 +1350,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
                               unsigned long page)
 {
     int tmppages, pages = 0;
-    size_t pagesize = qemu_ram_pagesize(pss->block);
+    size_t pagesize_bits =
+        qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
 
     do {
         tmppages = ram_save_target_page(rs, pss, last_stage, page);
@@ -1360,12 +1360,12 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
         }
 
         pages += tmppages;
-        pss->offset += TARGET_PAGE_SIZE;
+        pss->page++;
         page++;
-    } while (pss->offset & (pagesize - 1));
+    } while (pss->page & (pagesize_bits - 1));
 
     /* The offset we leave with is the last one we looked at */
-    pss->offset -= TARGET_PAGE_SIZE;
+    pss->page--;
     return pages;
 }
 
@@ -1396,7 +1396,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
     }
 
     pss.block = rs->last_seen_block;
-    pss.offset = rs->last_page << TARGET_PAGE_BITS;
+    pss.page = rs->last_page;
     pss.complete_round = false;
 
     if (!pss.block) {
@@ -1418,7 +1418,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
     } while (!pages && again);
 
     rs->last_seen_block = pss.block;
-    rs->last_page = pss.offset >> TARGET_PAGE_BITS;
+    rs->last_page = pss.page;
 
     return pages;
 }
-- 
2.9.3


* [Qemu-devel] [PATCH 48/51] ram: Use ramblock and page offset instead of absolute offset
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (46 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 47/51] ram: Change offset field in PageSearchStatus to page Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31 17:17   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 49/51] ram: rename last_ram_offset() last_ram_pages() Juan Quintela
                   ` (3 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

This removes the need to also pass the absolute offset.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c        | 56 ++++++++++++++++++++++----------------------------
 migration/trace-events |  2 +-
 2 files changed, 26 insertions(+), 32 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index ef3b428..3f283ba 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -611,12 +611,10 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
  * @rs: current RAM state
  * @rb: RAMBlock where to search for dirty pages
  * @start: page where we start the search
- * @page: pointer into where to store the dirty page
  */
 static inline
 unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
-                                          unsigned long start,
-                                          unsigned long *page)
+                                          unsigned long start)
 {
     unsigned long base = rb->offset >> TARGET_PAGE_BITS;
     unsigned long nr = base + start;
@@ -633,17 +631,18 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
         next = find_next_bit(bitmap, size, nr);
     }
 
-    *page = next;
     return next - base;
 }
 
 static inline bool migration_bitmap_clear_dirty(RAMState *rs,
+                                                RAMBlock *rb,
                                                 unsigned long page)
 {
     bool ret;
     unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
+    unsigned long nr = (rb->offset >> TARGET_PAGE_BITS) + page;
 
-    ret = test_and_clear_bit(page, bitmap);
+    ret = test_and_clear_bit(nr, bitmap);
 
     if (ret) {
         rs->migration_dirty_pages--;
@@ -1057,10 +1056,9 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
  * @again: set to false if the search has scanned the whole of RAM
  * @page: pointer into where to store the dirty page
  */
-static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
-                             bool *again, unsigned long *page)
+static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
 {
-    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page, page);
+    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
     if (pss->complete_round && pss->block == rs->last_seen_block &&
         pss->page >= rs->last_page) {
         /*
@@ -1110,8 +1108,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
  * @offset: used to return the offset within the RAMBlock
  * @page: pointer into where to store the dirty page
  */
-static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
-                              unsigned long *page)
+static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
 {
     RAMBlock *block = NULL;
 
@@ -1121,7 +1118,6 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
                                 QSIMPLEQ_FIRST(&rs->src_page_requests);
         block = entry->rb;
         *offset = entry->offset;
-        *page = (entry->offset + entry->rb->offset) >> TARGET_PAGE_BITS;
 
         if (entry->len > TARGET_PAGE_SIZE) {
             entry->len -= TARGET_PAGE_SIZE;
@@ -1148,15 +1144,14 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
  * @pss: data about the state of the current dirty page scan
  * @page: pointer into where to store the dirty page
  */
-static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
-                            unsigned long *page)
+static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
 {
     RAMBlock  *block;
     ram_addr_t offset;
     bool dirty;
 
     do {
-        block = unqueue_page(rs, &offset, page);
+        block = unqueue_page(rs, &offset);
         /*
          * We're sending this page, and since it's postcopy nothing else
          * will dirty it, and we must make sure it doesn't get sent again
@@ -1165,16 +1160,18 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
          */
         if (block) {
             unsigned long *bitmap;
+            unsigned long page;
+
             bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-            dirty = test_bit(*page, bitmap);
+            page = (block->offset + offset) >> TARGET_PAGE_BITS;
+            dirty = test_bit(page, bitmap);
             if (!dirty) {
                 trace_get_queued_page_not_dirty(block->idstr, (uint64_t)offset,
-                    *page,
-                    test_bit(*page,
+                    page,
+                    test_bit(page,
                              atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
             } else {
-                trace_get_queued_page(block->idstr, (uint64_t)offset,
-                                     *page);
+                trace_get_queued_page(block->idstr, (uint64_t)offset, page);
             }
         }
 
@@ -1300,16 +1297,17 @@ err:
  * @ms: current migration state
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
- * @page: page number of the dirty page
  */
 static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
-                                bool last_stage, unsigned long page)
+                                bool last_stage)
 {
     int res = 0;
 
     /* Check the pages is dirty and if it is send it */
-    if (migration_bitmap_clear_dirty(rs, page)) {
+    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
         unsigned long *unsentmap;
+        unsigned long page =
+            (pss->block->offset >> TARGET_PAGE_BITS) + pss->page;
         if (!rs->preffer_xbzrle && migrate_use_compression()) {
             res = ram_save_compressed_page(rs, pss, last_stage);
         } else {
@@ -1343,25 +1341,22 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
  * @ms: current migration state
  * @pss: data about the page we want to send
  * @last_stage: if we are at the completion stage
- * @page: Page number of the dirty page
  */
 static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
-                              bool last_stage,
-                              unsigned long page)
+                              bool last_stage)
 {
     int tmppages, pages = 0;
     size_t pagesize_bits =
         qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
 
     do {
-        tmppages = ram_save_target_page(rs, pss, last_stage, page);
+        tmppages = ram_save_target_page(rs, pss, last_stage);
         if (tmppages < 0) {
             return tmppages;
         }
 
         pages += tmppages;
         pss->page++;
-        page++;
     } while (pss->page & (pagesize_bits - 1));
 
     /* The offset we leave with is the last one we looked at */
@@ -1388,7 +1383,6 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
     PageSearchStatus pss;
     int pages = 0;
     bool again, found;
-    unsigned long page; /* Page number of the dirty page */
 
     /* No dirty page as there is zero RAM */
     if (!ram_bytes_total()) {
@@ -1405,15 +1399,15 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
 
     do {
         again = true;
-        found = get_queued_page(rs, &pss, &page);
+        found = get_queued_page(rs, &pss);
 
         if (!found) {
             /* priority queue empty, so just search for something dirty */
-            found = find_dirty_block(rs, &pss, &again, &page);
+            found = find_dirty_block(rs, &pss, &again);
         }
 
         if (found) {
-            pages = ram_save_host_page(rs, &pss, last_stage, page);
+            pages = ram_save_host_page(rs, &pss, last_stage);
         }
     } while (!pages && again);
 
diff --git a/migration/trace-events b/migration/trace-events
index 7372ce2..0a3f033 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -63,7 +63,7 @@ put_qtailq_end(const char *name, const char *reason) "%s %s"
 qemu_file_fclose(void) ""
 
 # migration/ram.c
-get_queued_page(const char *block_name, uint64_t tmp_offset, uint64_t ram_addr) "%s/%" PRIx64 " ram_addr=%" PRIx64
+get_queued_page(const char *block_name, uint64_t tmp_offset, unsigned long page) "%s/%" PRIx64 " page=%lu"
 get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, uint64_t ram_addr, int sent) "%s/%" PRIx64 " ram_addr=%" PRIx64 " (sent=%d)"
 migration_bitmap_sync_start(void) ""
 migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
-- 
2.9.3


* [Qemu-devel] [PATCH 49/51] ram: rename last_ram_offset() last_ram_pages()
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (47 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 48/51] ram: Use ramblock and page offset instead of absolute offset Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31 14:23   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 50/51] ram: Use RAMBitmap type for coherence Juan Quintela
                   ` (2 subsequent siblings)
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

We always use it as a page count anyway.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 exec.c                  |  6 +++---
 include/exec/ram_addr.h |  2 +-
 migration/ram.c         | 11 +++++------
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/exec.c b/exec.c
index 9a4c385..2cae288 100644
--- a/exec.c
+++ b/exec.c
@@ -1528,7 +1528,7 @@ static ram_addr_t find_ram_offset(ram_addr_t size)
     return offset;
 }
 
-ram_addr_t last_ram_offset(void)
+unsigned long last_ram_page(void)
 {
     RAMBlock *block;
     ram_addr_t last = 0;
@@ -1538,7 +1538,7 @@ ram_addr_t last_ram_offset(void)
         last = MAX(last, block->offset + block->max_length);
     }
     rcu_read_unlock();
-    return last;
+    return last >> TARGET_PAGE_BITS;
 }
 
 static void qemu_ram_setup_dump(void *addr, ram_addr_t size)
@@ -1727,7 +1727,7 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
     ram_addr_t old_ram_size, new_ram_size;
     Error *err = NULL;
 
-    old_ram_size = last_ram_offset() >> TARGET_PAGE_BITS;
+    old_ram_size = last_ram_page();
 
     qemu_mutex_lock_ramlist();
     new_block->offset = find_ram_offset(new_block->max_length);
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index d50c970..bbbfc7d 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -53,7 +53,7 @@ static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
 }
 
 long qemu_getrampagesize(void);
-ram_addr_t last_ram_offset(void);
+unsigned long last_ram_page(void);
 RAMBlock *qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
                                    bool share, const char *mem_path,
                                    Error **errp);
diff --git a/migration/ram.c b/migration/ram.c
index 3f283ba..1be9a6b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1535,7 +1535,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
  */
 void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
 {
-    int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
+    unsigned long ram_pages = last_ram_page();
     RAMState *rs = &ram_state;
     int64_t cur;
     int64_t linelen = 128;
@@ -1902,8 +1902,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
      * Update the unsentmap to be unsentmap = unsentmap | dirty
      */
     bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-    bitmap_or(unsentmap, unsentmap, bitmap,
-               last_ram_offset() >> TARGET_PAGE_BITS);
+    bitmap_or(unsentmap, unsentmap, bitmap, last_ram_page());
 
 
     trace_ram_postcopy_send_discard_bitmap();
@@ -1951,7 +1950,7 @@ err:
 
 static int ram_state_init(RAMState *rs)
 {
-    int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
+    unsigned long ram_bitmap_pages;
 
     memset(rs, 0, sizeof(*rs));
     qemu_mutex_init(&rs->bitmap_mutex);
@@ -1997,7 +1996,7 @@ static int ram_state_init(RAMState *rs)
     rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
-        ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
+        ram_bitmap_pages = last_ram_page();
         rs->ram_bitmap->bmap = bitmap_new(ram_bitmap_pages);
         bitmap_set(rs->ram_bitmap->bmap, 0, ram_bitmap_pages);
 
@@ -2458,7 +2457,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
  */
 int ram_postcopy_incoming_init(MigrationIncomingState *mis)
 {
-    size_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
+    unsigned long ram_pages = last_ram_page();
 
     return postcopy_ram_incoming_init(mis, ram_pages);
 }
-- 
2.9.3


* [Qemu-devel] [PATCH 50/51] ram: Use RAMBitmap type for coherence
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (48 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 49/51] ram: rename last_ram_offset() last_ram_pages() Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-31 14:27   ` Dr. David Alan Gilbert
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 51/51] migration: Remove MigrationState parameter from migration_is_idle() Juan Quintela
  2017-03-31 14:34 ` [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Dr. David Alan Gilbert
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 1be9a6b..4d62788 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1449,7 +1449,7 @@ void free_xbzrle_decoded_buf(void)
     xbzrle_decoded_buf = NULL;
 }
 
-static void migration_bitmap_free(struct RAMBitmap *bmap)
+static void migration_bitmap_free(RAMBitmap *bmap)
 {
     g_free(bmap->bmap);
     g_free(bmap->unsentmap);
@@ -1463,7 +1463,7 @@ static void ram_migration_cleanup(void *opaque)
     /* caller have hold iothread lock or is in a bh, so there is
      * no writing race against this migration_bitmap
      */
-    struct RAMBitmap *bitmap = rs->ram_bitmap;
+    RAMBitmap *bitmap = rs->ram_bitmap;
     atomic_rcu_set(&rs->ram_bitmap, NULL);
     if (bitmap) {
         memory_global_dirty_log_stop();
@@ -1502,8 +1502,8 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
      * no writing race against this migration_bitmap
      */
     if (rs->ram_bitmap) {
-        struct RAMBitmap *old_bitmap = rs->ram_bitmap, *bitmap;
-        bitmap = g_new(struct RAMBitmap, 1);
+        RAMBitmap *old_bitmap = rs->ram_bitmap, *bitmap;
+        bitmap = g_new(RAMBitmap, 1);
         bitmap->bmap = bitmap_new(new);
 
         /* prevent migration_bitmap content from being set bit
@@ -1993,7 +1993,7 @@ static int ram_state_init(RAMState *rs)
     rcu_read_lock();
     ram_state_reset(rs);
 
-    rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
+    rs->ram_bitmap = g_new0(RAMBitmap, 1);
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
         ram_bitmap_pages = last_ram_page();
-- 
2.9.3


* [Qemu-devel] [PATCH 51/51] migration: Remove MigrationState parameter from migration_is_idle()
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (49 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 50/51] ram: Use RAMBitmap type for coherence Juan Quintela
@ 2017-03-23 20:45 ` Juan Quintela
  2017-03-24 16:38   ` Dr. David Alan Gilbert
  2017-03-31 14:34 ` [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Dr. David Alan Gilbert
  51 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-23 20:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: dgilbert

The only user didn't have a MigrationState handy.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/migration/migration.h | 2 +-
 migration/migration.c         | 8 +++-----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 39a8e7e..6f7221f 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -234,7 +234,7 @@ void remove_migration_state_change_notifier(Notifier *notify);
 MigrationState *migrate_init(const MigrationParams *params);
 bool migration_is_blocked(Error **errp);
 bool migration_in_setup(MigrationState *);
-bool migration_is_idle(MigrationState *s);
+bool migration_is_idle(void);
 bool migration_has_finished(MigrationState *);
 bool migration_has_failed(MigrationState *);
 /* True if outgoing migration has entered postcopy phase */
diff --git a/migration/migration.c b/migration/migration.c
index fc19ba7..ba1d094 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1067,11 +1067,9 @@ bool migration_in_postcopy_after_devices(MigrationState *s)
     return migration_in_postcopy() && s->postcopy_after_devices;
 }
 
-bool migration_is_idle(MigrationState *s)
+bool migration_is_idle(void)
 {
-    if (!s) {
-        s = migrate_get_current();
-    }
+    MigrationState *s = migrate_get_current();
 
     switch (s->state) {
     case MIGRATION_STATUS_NONE:
@@ -1136,7 +1134,7 @@ int migrate_add_blocker(Error *reason, Error **errp)
         return -EACCES;
     }
 
-    if (migration_is_idle(NULL)) {
+    if (migration_is_idle()) {
         migration_blockers = g_slist_prepend(migration_blockers, reason);
         return 0;
     }
-- 
2.9.3


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync Juan Quintela
@ 2017-03-24  1:10   ` Yang Hongyang
  2017-03-24  8:29     ` Juan Quintela
  2017-03-30  9:07   ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 167+ messages in thread
From: Yang Hongyang @ 2017-03-24  1:10 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: dgilbert


On 2017/3/24 4:45, Juan Quintela wrote:
> We change the meaning of start to be the offset from the beginning of
> the block.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  include/exec/ram_addr.h | 2 ++
>  migration/ram.c         | 8 ++++----
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index b05dc84..d50c970 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -354,11 +354,13 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
>  
>  static inline
>  uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
> +                                               RAMBlock *rb,
>                                                 ram_addr_t start,
>                                                 ram_addr_t length,
>                                                 int64_t *real_dirty_pages)
>  {
>      ram_addr_t addr;
> +    start = rb->offset + start;
>      unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
>      uint64_t num_dirty = 0;
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index 064b2c0..9772fd8 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -648,13 +648,13 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
>      return ret;
>  }
>  
> -static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
> -                                        ram_addr_t length)
> +static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
> +                                        ram_addr_t start, ram_addr_t length)
>  {
>      unsigned long *bitmap;
>      bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>      rs->migration_dirty_pages +=
> -        cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length,
> +        cpu_physical_memory_sync_dirty_bitmap(bitmap, rb, start, length,
>                                                &rs->num_dirty_pages_period);
>  }
>  
> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      qemu_mutex_lock(&rs->bitmap_mutex);
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);

Since RAMBlock is now passed to bitmap_sync, could we also remove
the 'block->used_length' param?

>      }
>      rcu_read_unlock();
>      qemu_mutex_unlock(&rs->bitmap_mutex);
> 

-- 
Thanks,
Yang


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-24  1:10   ` Yang Hongyang
@ 2017-03-24  8:29     ` Juan Quintela
  2017-03-24  9:11       ` Yang Hongyang
  2017-03-28 17:12       ` Dr. David Alan Gilbert
  0 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-24  8:29 UTC (permalink / raw)
  To: Yang Hongyang; +Cc: qemu-devel, dgilbert

Yang Hongyang <yanghongyang@huawei.com> wrote:
> On 2017/3/24 4:45, Juan Quintela wrote:
>> We change the meaning of start to be the offset from the beginning of
>> the block.
>> 
>> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>      qemu_mutex_lock(&rs->bitmap_mutex);
>>      rcu_read_lock();
>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
>> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
>
> Since RAMBlock is now passed to bitmap_sync, could we also remove
> the 'block->used_length' param?

Hi

good catch.

I had that removed, and then realized that I want to synchronize parts
of the bitmap, not the whole one.  That part of the series is still not
done.

Right now we do something like (I have simplified a lot of details):

while(true) {
            foreach(block)
                bitmap_sync(block)
            foreach(page)
                if(dirty(page))
                   page_send(page)
}


If you have several terabytes of RAM that is too inefficient, because
by the time we arrive at page_send(page) it is possible that the page
is already dirty again, and we have to send it twice.  So, the idea is
to change to something like:

while(true) {
            foreach(block)
                bitmap_sync(block)
            foreach(block)
                foreach(64pages)
                    bitmap_sync(64pages)
                    foreach(page of the 64)
                       if (dirty)
                          page_send(page)
}


Where 64 is a magic number; I have to test what a good value is.
Basically it should be a multiple of sizeof(long) and a
multiple/divisor of the host page size.

The reason for changing the loop to be per block is that we can then
easily keep the bitmaps at host page size, instead of having to keep
them at target page size.
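For what it's worth, the per-chunk inner loop can be sketched like this (an editorial illustration, not QEMU code; the helper and constant names are invented):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One uint64_t word of the dirty bitmap covers a 64-page chunk, which
 * is why 64 is a natural chunk size: it is already a multiple of
 * sizeof(long) on 64-bit hosts. */
enum { CHUNK_PAGES = 64 };

/* Collect the dirty page indexes of one chunk into pages_out, clear
 * the chunk in the bitmap, and return how many pages were dirty. */
static int chunk_collect_dirty(uint64_t *bitmap, size_t chunk,
                               size_t *pages_out)
{
    uint64_t word = bitmap[chunk];
    int n = 0;

    while (word) {
        int bit = __builtin_ctzll(word);      /* lowest dirty page */
        pages_out[n++] = chunk * CHUNK_PAGES + bit;
        word &= word - 1;                     /* clear that bit */
    }
    bitmap[chunk] = 0;                        /* whole chunk now clean */
    return n;
}
```

Sending the pages immediately after their chunk is synced shortens the window in which a page can be re-dirtied before it goes on the wire.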

Thanks for the review, Juan.

Later, Juan.


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-24  8:29     ` Juan Quintela
@ 2017-03-24  9:11       ` Yang Hongyang
  2017-03-24 10:05         ` Juan Quintela
  2017-03-28 17:12       ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 167+ messages in thread
From: Yang Hongyang @ 2017-03-24  9:11 UTC (permalink / raw)
  To: quintela; +Cc: qemu-devel, dgilbert

Hi Juan,

On 2017/3/24 16:29, Juan Quintela wrote:
> Yang Hongyang <yanghongyang@huawei.com> wrote:
>> On 2017/3/24 4:45, Juan Quintela wrote:
>>> We change the meaning of start to be the offset from the beggining of
>>> the block.
>>>
>>> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>>      qemu_mutex_lock(&rs->bitmap_mutex);
>>>      rcu_read_lock();
>>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
>>> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
>>
>> Since RAMBlock been passed to bitmap_sync, could we remove
>> param 'block->used_length' either?
> 
> Hi
> 
> good catch.
> 
> I had that removed, and then realized that I want to synchronize parts
> of the bitmap, not the whole one.  That part of the series is still not
> done.
> 
> Right now we do something like (I have simplified a lot of details):
> 
> while(true) {
>             foreach(block)
>                 bitmap_sync(block)
>             foreach(page)
>                 if(dirty(page))
>                    page_send(page)
> }
> 
> 
> If you have several terabytes of RAM that is too ineficient, because
> when we arrive to the page_send(page), it is possible that it is already
> dirty again, and we have to send it twice.  So, the idea is to change to
> something like:
> 
> while(true) {
>             foreach(block)
>                 bitmap_sync(block)

Do you mean sync with KVM here?

>             foreach(block)
>                 foreach(64pages)
>                     bitmap_sync(64pages)

Then here we will sync with KVM too. For huge memory,
that will generate lots of ioctl()s...
The bitmap in KVM is per memory region IIRC. The KVM module
currently doesn't have the ability to sync parts of the bitmap;
a sync has to sync the whole mr. So if we want to do small syncs,
we might need to modify KVM as well, but that still won't solve
the problem of the increased ioctls.

>                     foreach(page of the 64)
>                        if (dirty)
>                           page_send(page)
> }
> 
> 
> Where 64 is a magic number, I have to test what is the good value.
> Basically it should be a multiple of sizeof(long) and a multiple/divisor
> of host page size.
> 
> Reason of changing the for to be for each block, is that then we can
> easily put bitmaps by hostpage size, instead of having to had it for
> target page size.
> 
> Thanks for the review, Juan.
> 
> Later, Juan.
> 

-- 
Thanks,
Yang


* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 01/51] ram: Update all functions comments Juan Quintela
@ 2017-03-24  9:55   ` Peter Xu
  2017-03-24 11:44     ` Juan Quintela
  2017-03-31 14:43     ` Dr. David Alan Gilbert
  2017-03-31 15:51   ` Dr. David Alan Gilbert
  1 sibling, 2 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-24  9:55 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

Hi, Juan,

Got several nitpicks below... (along with some questions)

On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:

[...]

>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>  {
> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>   *          -1 means that xbzrle would be longer than normal
>   *
>   * @f: QEMUFile where to send the data
> - * @current_data:
> - * @current_addr:
> + * @current_data: contents of the page

Since current_data is a double pointer, maybe "pointer to the address
of the page content"?

Btw, a question not related to this series... Why here in
save_xbzrle_page() we need to update *current_data to be the newly
created page cache? I see that we have:

    /* update *current_data when the page has been
       inserted into cache */
    *current_data = get_cached_data(XBZRLE.cache, current_addr);

What would be the difference if we just use the old pointer in
RAMBlock.host?
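My own reading of that question, as an editorial sketch (not QEMU's actual code; all names here are illustrative): once the page is inserted into the cache, the caller's pointer is redirected to the stable cached copy, so a later (possibly asynchronous) send reads data that the guest can no longer modify.

```c
#include <assert.h>
#include <string.h>

/* Sketch of why save_xbzrle_page() might take uint8_t **current_data:
 * after the page is copied into the cache, the caller's pointer is
 * redirected to the cached copy instead of the live guest page. */
enum { PAGE_LEN = 16 };
static unsigned char cache_copy[PAGE_LEN];

static void cache_insert_and_redirect(unsigned char **current_data)
{
    memcpy(cache_copy, *current_data, PAGE_LEN);  /* cache_insert() */
    *current_data = cache_copy;                   /* point caller at cache */
}
```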

[...]

> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
>  }
>  
>  /**
> - * flush_page_queue: Flush any remaining pages in the ram request queue
> - *    it should be empty at the end anyway, but in error cases there may be
> - *    some left.
> + * flush_page_queue: flush any remaining pages in the ram request queue

Here the comment says (just like the function name) that we will
"flush any remaining pages in the ram request queue"; however, the
implementation only frees everything in src_page_requests. The problem
is that "flush" makes me think of "flushing the rest of the pages to
the other side"... while it's not.

Would it be nicer to rename the function to something else, like
migration_page_queue_free()? We can tune the comments correspondingly
as well.

[...]

> -/*
> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> - *   the two bitmaps, that are similar, but one is inverted.
> +/**
> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
                  ^ should be n?     ^^^^^^^^^^ canonicalize?

>   *
> - * We search for runs of target-pages that don't start or end on a
> - * host page boundary;
> - * unsent_pass=true: Cleans up partially unsent host pages by searching
> - *                 the unsentmap
> - * unsent_pass=false: Cleans up partially dirty host pages by searching
> - *                 the main migration bitmap
> + * Helper for postcopy_chunk_hostpages; it's called twice to
> + * canonicalize the two bitmaps, that are similar, but one is
> + * inverted.
>   *
> + * Postcopy requires that all target pages in a hostpage are dirty or
> + * clean, not a mix.  This function canonicalizes the bitmaps.
> + *
> + * @ms: current migration state
> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
> + *               otherwise we need to canonicalize partially dirty host pages
> + * @block: block that contains the page we want to canonicalize
> + * @pds: state for postcopy
>   */
>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>                                            RAMBlock *block,

[...]

> +/**
> + * ram_save_setup: iterative stage for migration
      ^^^^^^^^^^^^^^ should be ram_save_iterate()?

> + *
> + * Returns zero to indicate success and negative for error
> + *
> + * @f: QEMUFile where to send the data
> + * @opaque: RAMState pointer
> + */
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
>      int ret;
> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      return done;
>  }

[...]

> -/*
> - * Allocate data structures etc needed by incoming migration with postcopy-ram
> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> +/**
> + * ram_postococpy_incoming_init: allocate postcopy data structures
> + *
> + * Returns 0 for success and negative if there was one error
> + *
> + * @mis: current migration incoming state
> + *
> + * Allocate data structures etc needed by incoming migration with
> + * postcopy-ram postcopy-ram's similarly names
> + * postcopy_ram_incoming_init does the work

This sentence is slightly hard to understand... But I think the
function name explains itself well enough. :)

Thanks,

-- peterx


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-24  9:11       ` Yang Hongyang
@ 2017-03-24 10:05         ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-24 10:05 UTC (permalink / raw)
  To: Yang Hongyang; +Cc: qemu-devel, dgilbert

Yang Hongyang <yanghongyang@huawei.com> wrote:
> Hi Juan,
>
> On 2017/3/24 16:29, Juan Quintela wrote:
>> Yang Hongyang <yanghongyang@huawei.com> wrote:
>>> On 2017/3/24 4:45, Juan Quintela wrote:
>>>> We change the meaning of start to be the offset from the beggining of
>>>> the block.
>>>>
>>>> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>>>      qemu_mutex_lock(&rs->bitmap_mutex);
>>>>      rcu_read_lock();
>>>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
>>>> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
>>>
>>> Since RAMBlock been passed to bitmap_sync, could we remove
>>> param 'block->used_length' either?
>> 
>> Hi
>> 
>> good catch.
>> 
>> I had that removed, and then realized that I want to synchronize parts
>> of the bitmap, not the whole one.  That part of the series is still not
>> done.
>> 
>> Right now we do something like (I have simplified a lot of details):
>> 
>> while(true) {
>>             foreach(block)
>>                 bitmap_sync(block)
>>             foreach(page)
>>                 if(dirty(page))
>>                    page_send(page)
>> }
>> 
>> 
>> If you have several terabytes of RAM that is too ineficient, because
>> when we arrive to the page_send(page), it is possible that it is already
>> dirty again, and we have to send it twice.  So, the idea is to change to
>> something like:
>> 
>> while(true) {
>>             foreach(block)
>>                 bitmap_sync(block)
>
> Do you mean sync with KVM here?
>
>>             foreach(block)
>>                 foreach(64pages)
>>                     bitmap_sync(64pages)
>
> Then here, we will sync with KVM too. For huge MEM,
> it will generates lots of ioctl()...
> Bitmap in KVM is per Memory region IIRC. KVM module currently
> haven't the ability to sync parts of the bitmap. A sync have
> to sync the whole mr. So if we want to do small sync, we might
> need to modify KVM also, but that still won't solve the preblem
> of increased ioctls.

Ah, so I remembered incorrectly that we could sync just part of the
bitmap.  Yes, we would have more ioctls but fewer pages written twice;
it is a tradeoff, and at some point it makes sense to change it.

The problem is that this is now going to be more difficult to test
for than I thought.
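As a back-of-the-envelope way to think about that tradeoff (all numbers below are invented placeholders, not measurements):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: total cost = syncs * ioctl_cost + pages_sent * page_cost.
 * Chunked sync raises the first term (more ioctls) but lowers the
 * second (fewer re-dirtied pages get sent twice). */
static uint64_t migration_cost(uint64_t syncs, uint64_t ioctl_cost,
                               uint64_t pages_sent, uint64_t page_cost)
{
    return syncs * ioctl_cost + pages_sent * page_cost;
}
```

With made-up unit costs, chunked sync wins only once the pages saved from being resent outweigh the extra ioctl overhead, which is exactly the break-even that needs measuring.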

Thanks, Juan.


* Re: [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes " Juan Quintela
@ 2017-03-24 10:12   ` Dr. David Alan Gilbert
  2017-03-27 10:48   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 10:12 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 690ca8f..721fd66 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -172,6 +172,8 @@ struct RAMState {
>      uint64_t norm_pages;
>      /* Iterations since start */
>      uint64_t iterations;
> +    /* xbzrle transmitted bytes */
> +    uint64_t xbzrle_bytes;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -179,7 +181,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t xbzrle_bytes;
>      uint64_t xbzrle_pages;
>      uint64_t xbzrle_cache_miss;
>      double xbzrle_cache_miss_rate;
> @@ -205,7 +206,7 @@ uint64_t norm_mig_pages_transferred(void)
>  
>  uint64_t xbzrle_mig_bytes_transferred(void)
>  {
> -    return acct_info.xbzrle_bytes;
> +    return ram_state.xbzrle_bytes;
>  }
>  
>  uint64_t xbzrle_mig_pages_transferred(void)
> @@ -544,7 +545,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>      qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
>      bytes_xbzrle += encoded_len + 1 + 2;
>      acct_info.xbzrle_pages++;
> -    acct_info.xbzrle_bytes += bytes_xbzrle;
> +    rs->xbzrle_bytes += bytes_xbzrle;
>      *bytes_transferred += bytes_xbzrle;
>  
>      return 1;
> @@ -1995,6 +1996,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->zero_pages = 0;
>      rs->norm_pages = 0;
>      rs->iterations = 0;
> +    rs->xbzrle_bytes = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages " Juan Quintela
@ 2017-03-24 10:13   ` Dr. David Alan Gilbert
  2017-03-27 10:59   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 10:13 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 721fd66..b4e647a 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -174,6 +174,8 @@ struct RAMState {
>      uint64_t iterations;
>      /* xbzrle transmitted bytes */
>      uint64_t xbzrle_bytes;
> +    /* xbzrle transmmited pages */
> +    uint64_t xbzrle_pages;

Yes, it might be useful to comment that the bytes are compressed bytes
while the pages are raw, so the two don't form a simple ratio.
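Concretely, since xbzrle_bytes counts encoded bytes while xbzrle_pages counts raw pages, the real compression ratio needs the page size in the denominator. A sketch (illustrative names; QEMU uses TARGET_PAGE_SIZE, a fixed 4 KiB is assumed here):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative page size; QEMU would use TARGET_PAGE_SIZE. */
enum { PAGE_SIZE_EXAMPLE = 4096 };

/* Ratio of encoded bytes to the raw bytes they replaced. */
static double xbzrle_compression_ratio(uint64_t xbzrle_bytes,
                                       uint64_t xbzrle_pages)
{
    if (xbzrle_pages == 0) {
        return 0.0;
    }
    return (double)xbzrle_bytes /
           ((double)xbzrle_pages * PAGE_SIZE_EXAMPLE);
}
```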

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>  };
>  typedef struct RAMState RAMState;
>  
> @@ -181,7 +183,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t xbzrle_pages;
>      uint64_t xbzrle_cache_miss;
>      double xbzrle_cache_miss_rate;
>      uint64_t xbzrle_overflows;
> @@ -211,7 +212,7 @@ uint64_t xbzrle_mig_bytes_transferred(void)
>  
>  uint64_t xbzrle_mig_pages_transferred(void)
>  {
> -    return acct_info.xbzrle_pages;
> +    return ram_state.xbzrle_pages;
>  }
>  
>  uint64_t xbzrle_mig_pages_cache_miss(void)
> @@ -544,7 +545,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>      qemu_put_be16(f, encoded_len);
>      qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
>      bytes_xbzrle += encoded_len + 1 + 2;
> -    acct_info.xbzrle_pages++;
> +    rs->xbzrle_pages++;
>      rs->xbzrle_bytes += bytes_xbzrle;
>      *bytes_transferred += bytes_xbzrle;
>  
> @@ -1997,6 +1998,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->norm_pages = 0;
>      rs->iterations = 0;
>      rs->xbzrle_bytes = 0;
> +    rs->xbzrle_pages = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss " Juan Quintela
@ 2017-03-24 10:15   ` Dr. David Alan Gilbert
  2017-03-27 11:00   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 10:15 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index b4e647a..cc19406 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -176,6 +176,8 @@ struct RAMState {
>      uint64_t xbzrle_bytes;
>      /* xbzrle transmmited pages */
>      uint64_t xbzrle_pages;
> +    /* xbzrle number of cache miss */
> +    uint64_t xbzrle_cache_miss;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -183,7 +185,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t xbzrle_cache_miss;
>      double xbzrle_cache_miss_rate;
>      uint64_t xbzrle_overflows;
>  } AccountingInfo;
> @@ -217,7 +218,7 @@ uint64_t xbzrle_mig_pages_transferred(void)
>  
>  uint64_t xbzrle_mig_pages_cache_miss(void)
>  {
> -    return acct_info.xbzrle_cache_miss;
> +    return ram_state.xbzrle_cache_miss;
>  }
>  
>  double xbzrle_mig_cache_miss_rate(void)
> @@ -497,7 +498,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>      uint8_t *prev_cached_page;
>  
>      if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
> -        acct_info.xbzrle_cache_miss++;
> +        rs->xbzrle_cache_miss++;
>          if (!last_stage) {
>              if (cache_insert(XBZRLE.cache, current_addr, *current_data,
>                               rs->bitmap_sync_count) == -1) {
> @@ -698,12 +699,12 @@ static void migration_bitmap_sync(RAMState *rs)
>          if (migrate_use_xbzrle()) {
>              if (rs->iterations_prev != rs->iterations) {
>                  acct_info.xbzrle_cache_miss_rate =
> -                   (double)(acct_info.xbzrle_cache_miss -
> +                   (double)(rs->xbzrle_cache_miss -
>                              rs->xbzrle_cache_miss_prev) /
>                     (rs->iterations - rs->iterations_prev);
>              }
>              rs->iterations_prev = rs->iterations;
> -            rs->xbzrle_cache_miss_prev = acct_info.xbzrle_cache_miss;
> +            rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
>          }
>          s->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>              / (end_time - rs->start_time);
> @@ -1999,6 +2000,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->iterations = 0;
>      rs->xbzrle_bytes = 0;
>      rs->xbzrle_pages = 0;
> +    rs->xbzrle_cache_miss = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate " Juan Quintela
@ 2017-03-24 10:17   ` Dr. David Alan Gilbert
  2017-03-27 11:01   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 10:17 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index cc19406..c398ff9 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -178,6 +178,8 @@ struct RAMState {
>      uint64_t xbzrle_pages;
>      /* xbzrle number of cache miss */
>      uint64_t xbzrle_cache_miss;
> +    /* xbzrle miss rate */
> +    double xbzrle_cache_miss_rate;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -185,7 +187,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    double xbzrle_cache_miss_rate;
>      uint64_t xbzrle_overflows;
>  } AccountingInfo;
>  
> @@ -223,7 +224,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void)
>  
>  double xbzrle_mig_cache_miss_rate(void)
>  {
> -    return acct_info.xbzrle_cache_miss_rate;
> +    return ram_state.xbzrle_cache_miss_rate;
>  }
>  
>  uint64_t xbzrle_mig_pages_overflow(void)
> @@ -698,7 +699,7 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>          if (migrate_use_xbzrle()) {
>              if (rs->iterations_prev != rs->iterations) {
> -                acct_info.xbzrle_cache_miss_rate =
> +                rs->xbzrle_cache_miss_rate =
>                     (double)(rs->xbzrle_cache_miss -
>                              rs->xbzrle_cache_miss_prev) /
>                     (rs->iterations - rs->iterations_prev);
> @@ -2001,6 +2002,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->xbzrle_bytes = 0;
>      rs->xbzrle_pages = 0;
>      rs->xbzrle_cache_miss = 0;
> +    rs->xbzrle_cache_miss_rate = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows " Juan Quintela
@ 2017-03-24 10:22   ` Dr. David Alan Gilbert
  2017-03-27 11:03   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 10:22 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Once there, remove the now unused AccountingInfo struct and var.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 21 +++++----------------
>  1 file changed, 5 insertions(+), 16 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c398ff9..3292eb0 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -180,23 +180,13 @@ struct RAMState {
>      uint64_t xbzrle_cache_miss;
>      /* xbzrle miss rate */
>      double xbzrle_cache_miss_rate;
> +    /* xbzrle number of overflows */
> +    uint64_t xbzrle_overflows;
>  };
>  typedef struct RAMState RAMState;
>  
>  static RAMState ram_state;
>  
> -/* accounting for migration statistics */
> -typedef struct AccountingInfo {
> -    uint64_t xbzrle_overflows;
> -} AccountingInfo;
> -
> -static AccountingInfo acct_info;
> -
> -static void acct_clear(void)
> -{
> -    memset(&acct_info, 0, sizeof(acct_info));
> -}
> -
>  uint64_t dup_mig_pages_transferred(void)
>  {
>      return ram_state.zero_pages;
> @@ -229,7 +219,7 @@ double xbzrle_mig_cache_miss_rate(void)
>  
>  uint64_t xbzrle_mig_pages_overflow(void)
>  {
> -    return acct_info.xbzrle_overflows;
> +    return ram_state.xbzrle_overflows;
>  }
>  
>  static QemuMutex migration_bitmap_mutex;
> @@ -527,7 +517,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>          return 0;
>      } else if (encoded_len == -1) {
>          trace_save_xbzrle_page_overflow();
> -        acct_info.xbzrle_overflows++;
> +        rs->xbzrle_overflows++;
>          /* update data in the cache */
>          if (!last_stage) {
>              memcpy(prev_cached_page, *current_data, TARGET_PAGE_SIZE);
> @@ -2003,6 +1993,7 @@ static int ram_save_init_globals(RAMState *rs)
>      rs->xbzrle_pages = 0;
>      rs->xbzrle_cache_miss = 0;
>      rs->xbzrle_cache_miss_rate = 0;
> +    rs->xbzrle_overflows = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> @@ -2033,8 +2024,6 @@ static int ram_save_init_globals(RAMState *rs)
>              XBZRLE.encoded_buf = NULL;
>              return -1;
>          }
> -
> -        acct_clear();
>      }
>  
>      /* For memory_global_dirty_log_start below.  */
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 35/51] ram: Add QEMUFile to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 35/51] ram: Add QEMUFile to RAMState Juan Quintela
@ 2017-03-24 10:52   ` Dr. David Alan Gilbert
  2017-03-24 11:14     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 10:52 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c0d6841..7667e73 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -165,6 +165,8 @@ struct RAMSrcPageRequest {
>  
>  /* State of RAM for migration */
>  struct RAMState {
> +    /* QEMUFile used for this migration */
> +    QEMUFile *f;

Yes, I guess you're hoping that becomes 'for this RAMBlock' eventually?

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

>      /* Last block that we have visited searching for dirty pages */
>      RAMBlock *last_seen_block;
>      /* Last block from where we have sent data */
> @@ -524,14 +526,13 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>   *          -1 means that xbzrle would be longer than normal
>   *
>   * @rs: current RAM state
> - * @f: QEMUFile where to send the data
>   * @current_data: contents of the page
>   * @current_addr: addr of the page
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
>   */
> -static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
> +static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>                              ram_addr_t current_addr, RAMBlock *block,
>                              ram_addr_t offset, bool last_stage)
>  {
> @@ -582,10 +583,11 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>      }
>  
>      /* Send XBZRLE based compressed page */
> -    bytes_xbzrle = save_page_header(f, block, offset | RAM_SAVE_FLAG_XBZRLE);
> -    qemu_put_byte(f, ENCODING_FLAG_XBZRLE);
> -    qemu_put_be16(f, encoded_len);
> -    qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
> +    bytes_xbzrle = save_page_header(rs->f, block,
> +                                    offset | RAM_SAVE_FLAG_XBZRLE);
> +    qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
> +    qemu_put_be16(rs->f, encoded_len);
> +    qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
>      bytes_xbzrle += encoded_len + 1 + 2;
>      rs->xbzrle_pages++;
>      rs->xbzrle_bytes += bytes_xbzrle;
> @@ -849,7 +851,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              ram_release_pages(ms, block->idstr, pss->offset, pages);
>          } else if (!rs->ram_bulk_stage &&
>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
> -            pages = save_xbzrle_page(rs, f, &p, current_addr, block,
> +            pages = save_xbzrle_page(rs, &p, current_addr, block,
>                                       offset, last_stage);
>              if (!last_stage) {
>                  /* Can't send this cached data async, since the cache page
> @@ -2087,6 +2089,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>              return -1;
>           }
>      }
> +    rs->f = f;
>  
>      rcu_read_lock();
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname Juan Quintela
@ 2017-03-24 11:11   ` Dr. David Alan Gilbert
  2017-03-24 17:15   ` Eric Blake
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 11:11 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> So all places are consistent on the naming of a block name parameter.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 76f1fc4..21047c5 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -743,14 +743,14 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>      return pages;
>  }
>  
> -static void ram_release_pages(MigrationState *ms, const char *block_name,
> +static void ram_release_pages(MigrationState *ms, const char *rbname,
>                                uint64_t offset, int pages)
>  {
>      if (!migrate_release_ram() || !migration_in_postcopy(ms)) {
>          return;
>      }
>  
> -    ram_discard_range(NULL, block_name, offset, pages << TARGET_PAGE_BITS);
> +    ram_discard_range(NULL, rbname, offset, pages << TARGET_PAGE_BITS);
>  }
>  
>  /**
> @@ -1942,25 +1942,24 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>   * Returns zero on success
>   *
>   * @mis: current migration incoming state
> - * @block_name: Name of the RAMBLock of the request. NULL means the
> - *              same that last one.
> + * @rbname: name of the RAMBLock of the request. NULL means the
> + *          same that last one.
>   * @start: RAMBlock starting page
>   * @length: RAMBlock size
>   */
>  int ram_discard_range(MigrationIncomingState *mis,
> -                      const char *block_name,
> +                      const char *rbname,
>                        uint64_t start, size_t length)
>  {
>      int ret = -1;
>  
> -    trace_ram_discard_range(block_name, start, length);
> +    trace_ram_discard_range(rbname, start, length);
>  
>      rcu_read_lock();
> -    RAMBlock *rb = qemu_ram_block_by_name(block_name);
> +    RAMBlock *rb = qemu_ram_block_by_name(rbname);
>  
>      if (!rb) {
> -        error_report("ram_discard_range: Failed to find block '%s'",
> -                     block_name);
> +        error_report("ram_discard_range: Failed to find block '%s'", rbname);
>          goto err;
>      }
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 35/51] ram: Add QEMUFile to RAMState
  2017-03-24 10:52   ` Dr. David Alan Gilbert
@ 2017-03-24 11:14     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-24 11:14 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 17 ++++++++++-------
>>  1 file changed, 10 insertions(+), 7 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index c0d6841..7667e73 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -165,6 +165,8 @@ struct RAMSrcPageRequest {
>>  
>>  /* State of RAM for migration */
>>  struct RAMState {
>> +    /* QEMUFile used for this migration */
>> +    QEMUFile *f;
>
> Yes, I guess you're hoping that becomes 'for this RAMBlock' eventually?

For this RAMBlock, or for this migration.  For some reason that I can't
yet fully understand, people keep asking about starting a new migration
before the previous one has finished.  No, I don't have a good explanation
for why that would be a good idea.

Later, Juan.

>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
>>      /* Last block that we have visited searching for dirty pages */
>>      RAMBlock *last_seen_block;
>>      /* Last block from where we have sent data */
>> @@ -524,14 +526,13 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>>   *          -1 means that xbzrle would be longer than normal
>>   *
>>   * @rs: current RAM state
>> - * @f: QEMUFile where to send the data
>>   * @current_data: contents of the page
>>   * @current_addr: addr of the page
>>   * @block: block that contains the page we want to send
>>   * @offset: offset inside the block for the page
>>   * @last_stage: if we are at the completion stage
>>   */
>> -static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>> +static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>>                              ram_addr_t current_addr, RAMBlock *block,
>>                              ram_addr_t offset, bool last_stage)
>>  {
>> @@ -582,10 +583,11 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>>      }
>>  
>>      /* Send XBZRLE based compressed page */
>> -    bytes_xbzrle = save_page_header(f, block, offset | RAM_SAVE_FLAG_XBZRLE);
>> -    qemu_put_byte(f, ENCODING_FLAG_XBZRLE);
>> -    qemu_put_be16(f, encoded_len);
>> -    qemu_put_buffer(f, XBZRLE.encoded_buf, encoded_len);
>> +    bytes_xbzrle = save_page_header(rs->f, block,
>> +                                    offset | RAM_SAVE_FLAG_XBZRLE);
>> +    qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
>> +    qemu_put_be16(rs->f, encoded_len);
>> +    qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
>>      bytes_xbzrle += encoded_len + 1 + 2;
>>      rs->xbzrle_pages++;
>>      rs->xbzrle_bytes += bytes_xbzrle;
>> @@ -849,7 +851,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>>              ram_release_pages(ms, block->idstr, pss->offset, pages);
>>          } else if (!rs->ram_bulk_stage &&
>>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
>> -            pages = save_xbzrle_page(rs, f, &p, current_addr, block,
>> +            pages = save_xbzrle_page(rs, &p, current_addr, block,
>>                                       offset, last_stage);
>>              if (!last_stage) {
>>                  /* Can't send this cached data async, since the cache page
>> @@ -2087,6 +2089,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>              return -1;
>>           }
>>      }
>> +    rs->f = f;
>>  
>>      rcu_read_lock();
>>  
>> -- 
>> 2.9.3
>> 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-24  9:55   ` Peter Xu
@ 2017-03-24 11:44     ` Juan Quintela
  2017-03-26 13:43       ` Peter Xu
  2017-03-31 14:43     ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-24 11:44 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> Hi, Juan,
>
> Got several nitpicks below... (along with some questions)
>
> On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
>
> [...]
>
>>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>>  {
>> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>>   *          -1 means that xbzrle would be longer than normal
>>   *
>>   * @f: QEMUFile where to send the data
>> - * @current_data:
>> - * @current_addr:
>> + * @current_data: contents of the page
>
> Since current_data is a double pointer, so... maybe "pointer to the
> address of page content"?

ok. changed.

> Btw, a question not related to this series... Why here in
> save_xbzrle_page() we need to update *current_data to be the newly
> created page cache? I see that we have:
>
>     /* update *current_data when the page has been
>        inserted into cache */
>     *current_data = get_cached_data(XBZRLE.cache, current_addr);
>
> What would be the difference if we just use the old pointer in
> RAMBlock.host?

Its contents could have changed since we inserted it into the cache.
Then we could end up with "memory corruption" during the transfer.


> [...]
>
>> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState
>> *ms, PageSearchStatus *pss,
>>  }
>>  
>>  /**
>> - * flush_page_queue: Flush any remaining pages in the ram request queue
>> - *    it should be empty at the end anyway, but in error cases there may be
>> - *    some left.
>> + * flush_page_queue: flush any remaining pages in the ram request queue
>
> Here the comment says (just like mentioned in function name) that we
> will "flush any remaining pages in the ram request queue", however in
> the implementation, we should be only freeing everything in
> src_page_requests. The problem is "flush" let me think about "flushing
> the rest of the pages to the other side"... while it's not.
>
> Would it be nice we just rename the function into something else, like
> migration_page_queue_free()? We can tune the comments correspondingly
> as well.

I will leave this one for Dave to answer O:-)
I agree that the previous name is not perfect, but I'm not sure the new
one is much better either.

migration_drop_page_queue()?


>
> [...]
>
>> -/*
>> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
>> - *   the two bitmaps, that are similar, but one is inverted.
>> +/**
>> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
>                   ^ should be n?     ^^^^^^^^^^ canonicalize?

Fixed.

>> - * We search for runs of target-pages that don't start or end on a
>> - * host page boundary;
>> - * unsent_pass=true: Cleans up partially unsent host pages by searching
>> - *                 the unsentmap
>> - * unsent_pass=false: Cleans up partially dirty host pages by searching
>> - *                 the main migration bitmap
>> + * Helper for postcopy_chunk_hostpages; it's called twice to
>> + * canonicalize the two bitmaps, that are similar, but one is
>> + * inverted.
>>   *
>> + * Postcopy requires that all target pages in a hostpage are dirty or
>> + * clean, not a mix.  This function canonicalizes the bitmaps.
>> + *
>> + * @ms: current migration state
>> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
>> + *               otherwise we need to canonicalize partially dirty host pages
>> + * @block: block that contains the page we want to canonicalize
>> + * @pds: state for postcopy
>>   */
>>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>>                                            RAMBlock *block,
>
> [...]
>
>> +/**
>> + * ram_save_setup: iterative stage for migration
>       ^^^^^^^^^^^^^^ should be ram_save_iterate()?

fixed.  Too much copy and paste.

>
>> + *
>> + * Returns zero to indicate success and negative for error
>> + *
>> + * @f: QEMUFile where to send the data
>> + * @opaque: RAMState pointer
>> + */
>>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>>  {
>>      int ret;
>> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>      return done;
>>  }
>
> [...]
>
>> -/*
>> - * Allocate data structures etc needed by incoming migration with postcopy-ram
>> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
>> +/**
>> + * ram_postococpy_incoming_init: allocate postcopy data structures
>> + *
>> + * Returns 0 for success and negative if there was one error
>> + *
>> + * @mis: current migration incoming state
>> + *
>> + * Allocate data structures etc needed by incoming migration with
>> + * postcopy-ram postcopy-ram's similarly names
>> + * postcopy_ram_incoming_init does the work
>
> This sentence is slightly hard to understand... But I think the
> function name explained itself enough though. :)

I didn't want to remove Dave's comments at this point, just doing the
formatting and making them consistent.  I agree that this file's comments
could be improved.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy Juan Quintela
@ 2017-03-24 15:27   ` Dr. David Alan Gilbert
  2017-03-30  8:06   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 15:27 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We need to call migrate_get_current() in more than half of the
> uses, so call it inside.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/migration.h |  2 +-
>  migration/migration.c         |  6 ++++--
>  migration/ram.c               | 22 ++++++++++------------
>  migration/savevm.c            |  4 ++--
>  4 files changed, 17 insertions(+), 17 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index e88bbaf..90849a5 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -238,7 +238,7 @@ bool migration_is_idle(MigrationState *s);
>  bool migration_has_finished(MigrationState *);
>  bool migration_has_failed(MigrationState *);
>  /* True if outgoing migration has entered postcopy phase */
> -bool migration_in_postcopy(MigrationState *);
> +bool migration_in_postcopy(void);
>  /* ...and after the device transmission */
>  bool migration_in_postcopy_after_devices(MigrationState *);
>  MigrationState *migrate_get_current(void);
> diff --git a/migration/migration.c b/migration/migration.c
> index ad4ea03..3f99ab3 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1054,14 +1054,16 @@ bool migration_has_failed(MigrationState *s)
>              s->state == MIGRATION_STATUS_FAILED);
>  }
>  
> -bool migration_in_postcopy(MigrationState *s)
> +bool migration_in_postcopy(void)
>  {
> +    MigrationState *s = migrate_get_current();
> +
>      return (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
>  }
>  
>  bool migration_in_postcopy_after_devices(MigrationState *s)
>  {
> -    return migration_in_postcopy(s) && s->postcopy_after_devices;
> +    return migration_in_postcopy() && s->postcopy_after_devices;
>  }
>  
>  bool migration_is_idle(MigrationState *s)
> diff --git a/migration/ram.c b/migration/ram.c
> index 591cf89..cb5f06f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -778,10 +778,9 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
>      return pages;
>  }
>  
> -static void ram_release_pages(MigrationState *ms, const char *rbname,
> -                              uint64_t offset, int pages)
> +static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
>  {
> -    if (!migrate_release_ram() || !migration_in_postcopy(ms)) {
> +    if (!migrate_release_ram() || !migration_in_postcopy()) {
>          return;
>      }
>  
> @@ -847,9 +846,9 @@ static int ram_save_page(RAMState *rs, MigrationState *ms,
>               * page would be stale
>               */
>              xbzrle_cache_zero_page(rs, current_addr);
> -            ram_release_pages(ms, block->idstr, pss->offset, pages);
> +            ram_release_pages(block->idstr, pss->offset, pages);
>          } else if (!rs->ram_bulk_stage &&
> -                   !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
> +                   !migration_in_postcopy() && migrate_use_xbzrle()) {
>              pages = save_xbzrle_page(rs, &p, current_addr, block,
>                                       offset, last_stage);
>              if (!last_stage) {
> @@ -868,7 +867,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms,
>          if (send_async) {
>              qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
>                                    migrate_release_ram() &
> -                                  migration_in_postcopy(ms));
> +                                  migration_in_postcopy());
>          } else {
>              qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
>          }
> @@ -898,8 +897,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>          error_report("compressed data failed!");
>      } else {
>          bytes_sent += blen;
> -        ram_release_pages(migrate_get_current(), block->idstr,
> -                          offset & TARGET_PAGE_MASK, 1);
> +        ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1);
>      }
>  
>      return bytes_sent;
> @@ -1035,7 +1033,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>                  }
>              }
>              if (pages > 0) {
> -                ram_release_pages(ms, block->idstr, pss->offset, pages);
> +                ram_release_pages(block->idstr, pss->offset, pages);
>              }
>          } else {
>              offset |= RAM_SAVE_FLAG_CONTINUE;
> @@ -1043,7 +1041,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              if (pages == -1) {
>                  pages = compress_page_with_multi_thread(rs, block, offset);
>              } else {
> -                ram_release_pages(ms, block->idstr, pss->offset, pages);
> +                ram_release_pages(block->idstr, pss->offset, pages);
>              }
>          }
>      }
> @@ -2194,7 +2192,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>  
>      rcu_read_lock();
>  
> -    if (!migration_in_postcopy(migrate_get_current())) {
> +    if (!migration_in_postcopy()) {
>          migration_bitmap_sync(rs);
>      }
>  
> @@ -2232,7 +2230,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>  
>      remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
>  
> -    if (!migration_in_postcopy(migrate_get_current()) &&
> +    if (!migration_in_postcopy() &&
>          remaining_size < max_size) {
>          qemu_mutex_lock_iothread();
>          rcu_read_lock();
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 3b19a4a..853a81a 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1062,7 +1062,7 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
>  static bool should_send_vmdesc(void)
>  {
>      MachineState *machine = MACHINE(qdev_get_machine());
> -    bool in_postcopy = migration_in_postcopy(migrate_get_current());
> +    bool in_postcopy = migration_in_postcopy();
>      return !machine->suppress_vmdesc && !in_postcopy;
>  }
>  
> @@ -1111,7 +1111,7 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only)
>      int vmdesc_len;
>      SaveStateEntry *se;
>      int ret;
> -    bool in_postcopy = migration_in_postcopy(migrate_get_current());
> +    bool in_postcopy = migration_in_postcopy();
>  
>      trace_savevm_state_complete_precopy();
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore Juan Quintela
@ 2017-03-24 15:28   ` Dr. David Alan Gilbert
  2017-03-30  8:05   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 15:28 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Remove it from callers and callees.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 27 ++++++++++-----------------
>  1 file changed, 10 insertions(+), 17 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index cb5f06f..064b2c0 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -796,13 +796,11 @@ static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
>   *                if xbzrle noticed the page was the same.
>   *
>   * @rs: current RAM state
> - * @ms: current migration state
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
>   */
> -static int ram_save_page(RAMState *rs, MigrationState *ms,
> -                         PageSearchStatus *pss, bool last_stage)
> +static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  {
>      int pages = -1;
>      uint64_t bytes_xmit;
> @@ -976,13 +974,12 @@ static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
>   * Returns the number of pages written.
>   *
>   * @rs: current RAM state
> - * @ms: current migration state
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
>   */
> -static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
> -                                    PageSearchStatus *pss, bool last_stage)
> +static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
> +                                    bool last_stage)
>  {
>      int pages = -1;
>      uint64_t bytes_xmit = 0;
> @@ -1312,10 +1309,8 @@ err:
>   * @last_stage: if we are at the completion stage
>   * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
>   */
> -static int ram_save_target_page(RAMState *rs, MigrationState *ms,
> -                                PageSearchStatus *pss,
> -                                bool last_stage,
> -                                ram_addr_t dirty_ram_abs)
> +static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
> +                                bool last_stage, ram_addr_t dirty_ram_abs)
>  {
>      int res = 0;
>  
> @@ -1323,9 +1318,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>          unsigned long *unsentmap;
>          if (!rs->preffer_xbzrle && migrate_use_compression()) {
> -            res = ram_save_compressed_page(rs, ms, pss, last_stage);
> +            res = ram_save_compressed_page(rs, pss, last_stage);
>          } else {
> -            res = ram_save_page(rs, ms, pss, last_stage);
> +            res = ram_save_page(rs, pss, last_stage);
>          }
>  
>          if (res < 0) {
> @@ -1364,8 +1359,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>   * @last_stage: if we are at the completion stage
>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
>   */
> -static int ram_save_host_page(RAMState *rs, MigrationState *ms,
> -                              PageSearchStatus *pss,
> +static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>                                bool last_stage,
>                                ram_addr_t dirty_ram_abs)
>  {
> @@ -1373,7 +1367,7 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms,
>      size_t pagesize = qemu_ram_pagesize(pss->block);
>  
>      do {
> -        tmppages = ram_save_target_page(rs, ms, pss, last_stage, dirty_ram_abs);
> +        tmppages = ram_save_target_page(rs, pss, last_stage, dirty_ram_abs);
>          if (tmppages < 0) {
>              return tmppages;
>          }
> @@ -1405,7 +1399,6 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms,
>  static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>  {
>      PageSearchStatus pss;
> -    MigrationState *ms = migrate_get_current();
>      int pages = 0;
>      bool again, found;
>      ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
> @@ -1434,7 +1427,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>          }
>  
>          if (found) {
> -            pages = ram_save_host_page(rs, ms, &pss, last_stage, dirty_ram_abs);
> +            pages = ram_save_host_page(rs, &pss, last_stage, dirty_ram_abs);
>          }
>      } while (!pages && again);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size()
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size() Juan Quintela
@ 2017-03-24 15:32   ` Dr. David Alan Gilbert
  2017-03-30  8:03   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 15:32 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> It was used as a size in all cases except one.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  exec.c                   | 4 ++--
>  include/sysemu/sysemu.h  | 2 +-
>  migration/migration.c    | 4 ++--
>  migration/postcopy-ram.c | 8 ++++----
>  migration/savevm.c       | 8 ++++----
>  5 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index e57a8a2..9a4c385 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3349,9 +3349,9 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
>   * Allows code that needs to deal with migration bitmaps etc to still be built
>   * target independent.
>   */
> -size_t qemu_target_page_bits(void)
> +size_t qemu_target_page_size(void)
>  {
> -    return TARGET_PAGE_BITS;
> +    return TARGET_PAGE_SIZE;
>  }
>  
>  #endif
> diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
> index 576c7ce..16175f7 100644
> --- a/include/sysemu/sysemu.h
> +++ b/include/sysemu/sysemu.h
> @@ -67,7 +67,7 @@ int qemu_reset_requested_get(void);
>  void qemu_system_killed(int signal, pid_t pid);
>  void qemu_system_reset(bool report);
>  void qemu_system_guest_panicked(GuestPanicInformation *info);
> -size_t qemu_target_page_bits(void);
> +size_t qemu_target_page_size(void);
>  
>  void qemu_add_exit_notifier(Notifier *notify);
>  void qemu_remove_exit_notifier(Notifier *notify);
> diff --git a/migration/migration.c b/migration/migration.c
> index 3f99ab3..92c3c6b 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -646,7 +646,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      info->ram->skipped = 0;
>      info->ram->normal = norm_mig_pages_transferred();
>      info->ram->normal_bytes = norm_mig_pages_transferred() *
> -        (1ul << qemu_target_page_bits());
> +        qemu_target_page_size();
>      info->ram->mbps = s->mbps;
>      info->ram->dirty_sync_count = ram_dirty_sync_count();
>      info->ram->postcopy_requests = ram_postcopy_requests();
> @@ -2001,7 +2001,7 @@ static void *migration_thread(void *opaque)
>                 10000 is a small enough number for our purposes */
>              if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
>                  s->expected_downtime = ram_dirty_pages_rate() *
> -                    (1ul << qemu_target_page_bits()) / bandwidth;
> +                    qemu_target_page_size() / bandwidth;
>              }
>  
>              qemu_file_reset_rate_limit(s->to_dst_file);
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index dc80dbb..8756364 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -123,7 +123,7 @@ bool postcopy_ram_supported_by_host(void)
>      struct uffdio_range range_struct;
>      uint64_t feature_mask;
>  
> -    if ((1ul << qemu_target_page_bits()) > pagesize) {
> +    if (qemu_target_page_size() > pagesize) {
>          error_report("Target page size bigger than host page size");
>          goto out;
>      }
> @@ -745,10 +745,10 @@ PostcopyDiscardState *postcopy_discard_send_init(MigrationState *ms,
>  void postcopy_discard_send_range(MigrationState *ms, PostcopyDiscardState *pds,
>                                  unsigned long start, unsigned long length)
>  {
> -    size_t tp_bits = qemu_target_page_bits();
> +    size_t tp_size = qemu_target_page_size();
>      /* Convert to byte offsets within the RAM block */
> -    pds->start_list[pds->cur_entry] = (start - pds->offset) << tp_bits;
> -    pds->length_list[pds->cur_entry] = length << tp_bits;
> +    pds->start_list[pds->cur_entry] = (start - pds->offset) * tp_size;
> +    pds->length_list[pds->cur_entry] = length * tp_size;
>      trace_postcopy_discard_send_range(pds->ramblock_name, start, length);
>      pds->cur_entry++;
>      pds->nsentwords++;
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 853a81a..bbf055d 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -871,7 +871,7 @@ void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>  {
>      uint64_t tmp[2];
>      tmp[0] = cpu_to_be64(ram_pagesize_summary());
> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> +    tmp[1] = cpu_to_be64(qemu_target_page_size());
>  
>      trace_qemu_savevm_send_postcopy_advise();
>      qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> @@ -1390,13 +1390,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
>      }
>  
>      remote_tps = qemu_get_be64(mis->from_src_file);
> -    if (remote_tps != (1ul << qemu_target_page_bits())) {
> +    if (remote_tps != qemu_target_page_size()) {
>          /*
>           * Again, some differences could be dealt with, but for now keep it
>           * simple.
>           */
> -        error_report("Postcopy needs matching target page sizes (s=%d d=%d)",
> -                     (int)remote_tps, 1 << qemu_target_page_bits());
> +        error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
> +                     (int)remote_tps, qemu_target_page_size());
>          return -1;
>      }
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining Juan Quintela
@ 2017-03-24 15:34   ` Dr. David Alan Gilbert
  2017-03-30  6:24   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 15:34 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Just unfold it.  Move ram_bytes_remaining() with the rest of exported
> functions.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 19 +++++++------------
>  1 file changed, 7 insertions(+), 12 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 3ae00e2..dd5a453 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -243,16 +243,16 @@ uint64_t xbzrle_mig_pages_overflow(void)
>      return ram_state.xbzrle_overflows;
>  }
>  
> -static ram_addr_t ram_save_remaining(void)
> -{
> -    return ram_state.migration_dirty_pages;
> -}
> -
>  uint64_t ram_bytes_transferred(void)
>  {
>      return ram_state.bytes_transferred;
>  }
>  
> +uint64_t ram_bytes_remaining(void)
> +{
> +    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
> +}
> +
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
>      /* Current block being searched */
> @@ -1438,11 +1438,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>      }
>  }
>  
> -uint64_t ram_bytes_remaining(void)
> -{
> -    return ram_save_remaining() * TARGET_PAGE_SIZE;
> -}
> -
>  uint64_t ram_bytes_total(void)
>  {
>      RAMBlock *block;
> @@ -2210,7 +2205,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>      RAMState *rs = opaque;
>      uint64_t remaining_size;
>  
> -    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> +    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
>  
>      if (!migration_in_postcopy(migrate_get_current()) &&
>          remaining_size < max_size) {
> @@ -2219,7 +2214,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>          migration_bitmap_sync(rs);
>          rcu_read_unlock();
>          qemu_mutex_unlock_iothread();
> -        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> +        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
>      }
>  
>      /* We can do postcopy, and all the data is postcopiable */
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 51/51] migration: Remove MigrationState parameter from migration_is_idle()
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 51/51] migration: Remove MigrationState parameter from migration_is_idle() Juan Quintela
@ 2017-03-24 16:38   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-24 16:38 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> The only user doesn't have a MigrationState handy.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/migration.h | 2 +-
>  migration/migration.c         | 8 +++-----
>  2 files changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 39a8e7e..6f7221f 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -234,7 +234,7 @@ void remove_migration_state_change_notifier(Notifier *notify);
>  MigrationState *migrate_init(const MigrationParams *params);
>  bool migration_is_blocked(Error **errp);
>  bool migration_in_setup(MigrationState *);
> -bool migration_is_idle(MigrationState *s);
> +bool migration_is_idle(void);
>  bool migration_has_finished(MigrationState *);
>  bool migration_has_failed(MigrationState *);
>  /* True if outgoing migration has entered postcopy phase */
> diff --git a/migration/migration.c b/migration/migration.c
> index fc19ba7..ba1d094 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1067,11 +1067,9 @@ bool migration_in_postcopy_after_devices(MigrationState *s)
>      return migration_in_postcopy() && s->postcopy_after_devices;
>  }
>  
> -bool migration_is_idle(MigrationState *s)
> +bool migration_is_idle(void)
>  {
> -    if (!s) {
> -        s = migrate_get_current();
> -    }
> +    MigrationState *s = migrate_get_current();
>  
>      switch (s->state) {
>      case MIGRATION_STATUS_NONE:
> @@ -1136,7 +1134,7 @@ int migrate_add_blocker(Error *reason, Error **errp)
>          return -EACCES;
>      }
>  
> -    if (migration_is_idle(NULL)) {
> +    if (migration_is_idle()) {
>          migration_blockers = g_slist_prepend(migration_blockers, reason);
>          return 0;
>      }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname Juan Quintela
  2017-03-24 11:11   ` Dr. David Alan Gilbert
@ 2017-03-24 17:15   ` Eric Blake
  2017-03-28 10:52     ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Eric Blake @ 2017-03-24 17:15 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: dgilbert


On 03/23/2017 03:44 PM, Juan Quintela wrote:
> So all places are consisten on the nambing of a block name parameter.

s/consisten/consistent/
s/nambing/naming/

> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
> 

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 41/51] Add page-size to output in 'info migrate'
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 41/51] Add page-size to output in 'info migrate' Juan Quintela
@ 2017-03-24 17:17   ` Eric Blake
  0 siblings, 0 replies; 167+ messages in thread
From: Eric Blake @ 2017-03-24 17:17 UTC (permalink / raw)
  To: Juan Quintela, qemu-devel; +Cc: Chao Fan, dgilbert, Li Zhijian


On 03/23/2017 03:45 PM, Juan Quintela wrote:
> From: Chao Fan <fanc.fnst@cn.fujitsu.com>
> 
> The number of dirty pages outputed in 'pages' in the command

s/outputed/is output/

> 'info migrate', so add page-size to calculate the number of dirty
> pages in bytes.
> 
> Signed-off-by: Chao Fan <fanc.fnst@cn.fujitsu.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  hmp.c                 | 3 +++
>  migration/migration.c | 1 +
>  qapi-schema.json      | 5 ++++-
>  3 files changed, 8 insertions(+), 1 deletion(-)
> 

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-24 11:44     ` Juan Quintela
@ 2017-03-26 13:43       ` Peter Xu
  2017-03-28 18:32         ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-26 13:43 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Fri, Mar 24, 2017 at 12:44:06PM +0100, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > Hi, Juan,
> >
> > Got several nitpicks below... (along with some questions)
> >
> > On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
> >
> > [...]
> >
> >>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
> >>  {
> >> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
> >>   *          -1 means that xbzrle would be longer than normal
> >>   *
> >>   * @f: QEMUFile where to send the data
> >> - * @current_data:
> >> - * @current_addr:
> >> + * @current_data: contents of the page
> >
> > Since current_data is a double pointer, so... maybe "pointer to the
> > address of page content"?
> 
> ok. changed.
> 
> > Btw, a question not related to this series... Why here in
> > save_xbzrle_page() we need to update *current_data to be the newly
> > created page cache? I see that we have:
> >
> >     /* update *current_data when the page has been
> >        inserted into cache */
> >     *current_data = get_cached_data(XBZRLE.cache, current_addr);
> >
> > What would be the difference if we just use the old pointer in
> > RAMBlock.host?
> 
> Its contents could have been changed since we inserted it into the
> cache.  Then we could end up with "memory corruption" during transfer.

Oh yes. Hmm I noticed that the content will be changed along the way
(IIUC even before we insert the page into the cache, since we are
doing everything in the migration thread, while at the same time the
vcpu thread might be doing anything), but I didn't notice that we need
to make sure the cached page is exactly the same as the one sent to the
destination side, or the "diff" may not match. Thanks for pointing
that out. :)

> 
> 
> > [...]
> >
> >> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState
> >> *ms, PageSearchStatus *pss,
> >>  }
> >>  
> >>  /**
> >> - * flush_page_queue: Flush any remaining pages in the ram request queue
> >> - *    it should be empty at the end anyway, but in error cases there may be
> >> - *    some left.
> >> + * flush_page_queue: flush any remaining pages in the ram request queue
> >
> > Here the comment says (just like mentioned in function name) that we
> > will "flush any remaining pages in the ram request queue", however in
> > the implementation, we should be only freeing everything in
> > src_page_requests. The problem is that "flush" makes me think of "flushing
> > the rest of the pages to the other side"... while it's not.
> >
> > Would it be nice we just rename the function into something else, like
> > migration_page_queue_free()? We can tune the comments correspondingly
> > as well.
> 
> I will leave this one to Dave to answer O:-)
> I agree that the previous name is not perfect, but I'm not sure that
> the new one is much better either.
> 
> migration_drop_page_queue()?

This is indeed a nitpick of mine... So please feel free to ignore it.
:)

But if we keep the function name, I would slightly prefer that at
least we mention in the comment that this is only freeing things up,
not sending anything out.

> 
> 
> >
> > [...]
> >
> >> -/*
> >> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> >> - *   the two bitmaps, that are similar, but one is inverted.
> >> +/**
> >> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
> >                   ^ should be n?     ^^^^^^^^^^ canonicalize?
> 
> Fixed.
> 
> >> - * We search for runs of target-pages that don't start or end on a
> >> - * host page boundary;
> >> - * unsent_pass=true: Cleans up partially unsent host pages by searching
> >> - *                 the unsentmap
> >> - * unsent_pass=false: Cleans up partially dirty host pages by searching
> >> - *                 the main migration bitmap
> >> + * Helper for postcopy_chunk_hostpages; it's called twice to
> >> + * canonicalize the two bitmaps, that are similar, but one is
> >> + * inverted.
> >>   *
> >> + * Postcopy requires that all target pages in a hostpage are dirty or
> >> + * clean, not a mix.  This function canonicalizes the bitmaps.
> >> + *
> >> + * @ms: current migration state
> >> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
> >> + *               otherwise we need to canonicalize partially dirty host pages
> >> + * @block: block that contains the page we want to canonicalize
> >> + * @pds: state for postcopy
> >>   */
> >>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
> >>                                            RAMBlock *block,
> >
> > [...]
> >
> >> +/**
> >> + * ram_save_setup: iterative stage for migration
> >       ^^^^^^^^^^^^^^ should be ram_save_iterate()?
> 
> fixed.  Too much copy and paste.
> 
> >
> >> + *
> >> + * Returns zero to indicate success and negative for error
> >> + *
> >> + * @f: QEMUFile where to send the data
> >> + * @opaque: RAMState pointer
> >> + */
> >>  static int ram_save_iterate(QEMUFile *f, void *opaque)
> >>  {
> >>      int ret;
> >> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
> >>      return done;
> >>  }
> >
> > [...]
> >
> >> -/*
> >> - * Allocate data structures etc needed by incoming migration with postcopy-ram
> >> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> >> +/**
> >> + * ram_postococpy_incoming_init: allocate postcopy data structures
> >> + *
> >> + * Returns 0 for success and negative if there was one error
> >> + *
> >> + * @mis: current migration incoming state
> >> + *
> >> + * Allocate data structures etc needed by incoming migration with
> >> + * postcopy-ram postcopy-ram's similarly names
> >> + * postcopy_ram_incoming_init does the work
> >
> > This sentence is slightly hard to understand... But I think the
> > function name explained itself enough though. :)
> 
> I didn't want to remove Dave's comments at this point, just doing the
> formatting and making them consistent.  I agree that this file's
> comments could be improved.

Totally fine with me.

With all the fixes above, please add:

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 03/51] ram: Create RAMState
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 03/51] ram: Create RAMState Juan Quintela
@ 2017-03-27  4:43   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  4:43 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:44:56PM +0100, Juan Quintela wrote:
> We create a struct where to put all the ram state
> 
> Start with the following fields:
> 
> last_seen_block, last_sent_block, last_offset, last_version and
> ram_bulk_stage are globals that are really related together.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 04/51] ram: Add dirty_rate_high_cnt to RAMState
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 04/51] ram: Add dirty_rate_high_cnt to RAMState Juan Quintela
@ 2017-03-27  7:24   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  7:24 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:44:57PM +0100, Juan Quintela wrote:
> We need to add a parameter to several functions to make this work.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState Juan Quintela
@ 2017-03-27  7:34   ` Peter Xu
  2017-03-28 10:56     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-27  7:34 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:44:58PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

(I see that we have MigrationStats.dirty_pages_rate which looks
 similar to this one. Maybe one day we can merge these two?)

> ---
>  migration/ram.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 1d5bf22..f811e81 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -45,8 +45,6 @@
>  #include "qemu/rcu_queue.h"
>  #include "migration/colo.h"
>  
> -static uint64_t bitmap_sync_count;
> -
>  /***********************************************************/
>  /* ram save/restore */
>  
> @@ -154,6 +152,8 @@ struct RAMState {
>      bool ram_bulk_stage;
>      /* How many times we have dirty too many pages */
>      int dirty_rate_high_cnt;
> +    /* How many times we have synchronized the bitmap */
> +    uint64_t bitmap_sync_count;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -471,7 +471,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>      /* We don't care if this fails to allocate a new cache page
>       * as long as it updated an old one */
>      cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
> -                 bitmap_sync_count);
> +                 rs->bitmap_sync_count);
>  }
>  
>  #define ENCODING_FLAG_XBZRLE 0x1
> @@ -483,6 +483,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>   *          0 means that page is identical to the one already sent
>   *          -1 means that xbzrle would be longer than normal
>   *
> + * @rs: current RAM state
>   * @f: QEMUFile where to send the data
>   * @current_data: contents of the page
>   * @current_addr: addr of the page
> @@ -491,7 +492,7 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
>   */
> -static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
> +static int save_xbzrle_page(RAMState *rs, QEMUFile *f, uint8_t **current_data,
>                              ram_addr_t current_addr, RAMBlock *block,
>                              ram_addr_t offset, bool last_stage,
>                              uint64_t *bytes_transferred)
> @@ -499,11 +500,11 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>      int encoded_len = 0, bytes_xbzrle;
>      uint8_t *prev_cached_page;
>  
> -    if (!cache_is_cached(XBZRLE.cache, current_addr, bitmap_sync_count)) {
> +    if (!cache_is_cached(XBZRLE.cache, current_addr, rs->bitmap_sync_count)) {
>          acct_info.xbzrle_cache_miss++;
>          if (!last_stage) {
>              if (cache_insert(XBZRLE.cache, current_addr, *current_data,
> -                             bitmap_sync_count) == -1) {
> +                             rs->bitmap_sync_count) == -1) {
>                  return -1;
>              } else {
>                  /* update *current_data when the page has been
> @@ -658,7 +659,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      int64_t end_time;
>      int64_t bytes_xfer_now;
>  
> -    bitmap_sync_count++;
> +    rs->bitmap_sync_count++;
>  
>      if (!bytes_xfer_prev) {
>          bytes_xfer_prev = ram_bytes_transferred();
> @@ -720,9 +721,9 @@ static void migration_bitmap_sync(RAMState *rs)
>          start_time = end_time;
>          num_dirty_pages_period = 0;
>      }
> -    s->dirty_sync_count = bitmap_sync_count;
> +    s->dirty_sync_count = rs->bitmap_sync_count;
>      if (migrate_use_events()) {
> -        qapi_event_send_migration_pass(bitmap_sync_count, NULL);
> +        qapi_event_send_migration_pass(rs->bitmap_sync_count, NULL);
>      }
>  }
>  
> @@ -829,7 +830,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              ram_release_pages(ms, block->idstr, pss->offset, pages);
>          } else if (!rs->ram_bulk_stage &&
>                     !migration_in_postcopy(ms) && migrate_use_xbzrle()) {
> -            pages = save_xbzrle_page(f, &p, current_addr, block,
> +            pages = save_xbzrle_page(rs, f, &p, current_addr, block,
>                                       offset, last_stage, bytes_transferred);
>              if (!last_stage) {
>                  /* Can't send this cached data async, since the cache page
> @@ -1998,7 +1999,7 @@ static int ram_save_init_globals(RAMState *rs)
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
>      rs->dirty_rate_high_cnt = 0;
> -    bitmap_sync_count = 0;
> +    rs->bitmap_sync_count = 0;
>      migration_bitmap_sync_init();
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
> 

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 06/51] ram: Move start time into RAMState
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 06/51] ram: Move start time " Juan Quintela
@ 2017-03-27  7:54   ` Peter Xu
  2017-03-28 11:00     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-27  7:54 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:44:59PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/ram.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index f811e81..5881805 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -154,6 +154,9 @@ struct RAMState {
>      int dirty_rate_high_cnt;
>      /* How many times we have synchronized the bitmap */
>      uint64_t bitmap_sync_count;
> +    /* this variables are used for bitmap sync */

s/this/These/?

> +    /* last time we did a full bitmap_sync */
> +    int64_t start_time;

Not sure whether it'd be a good chance to rename this variable in
this patch to something less generic, like bm_sync_start? But
again, this is nitpicking and totally optional.

With the typo fixed, please add:

Reviewed-by: Peter Xu <peterx@redhat.com>

Thanks,

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 07/51] ram: Move bytes_xfer_prev into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 07/51] ram: Move bytes_xfer_prev " Juan Quintela
@ 2017-03-27  8:04   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  8:04 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:00PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 08/51] ram: Move num_dirty_pages_period into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 08/51] ram: Move num_dirty_pages_period " Juan Quintela
@ 2017-03-27  8:07   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  8:07 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:01PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 11/51] ram: Move dup_pages into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 11/51] ram: Move dup_pages " Juan Quintela
@ 2017-03-27  9:23   ` Peter Xu
  2017-03-28 18:43     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-27  9:23 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:04PM +0100, Juan Quintela wrote:
> Once there rename it to its actual meaning, zero_pages.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

Will post a question below though (not directly related to this patch
but context-wide)...

> ---
>  migration/ram.c | 29 ++++++++++++++++++-----------
>  1 file changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index d8428c1..0da133f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -165,6 +165,9 @@ struct RAMState {
>      uint64_t xbzrle_cache_miss_prev;
>      /* number of iterations at the beginning of period */
>      uint64_t iterations_prev;
> +    /* Accounting fields */
> +    /* number of zero pages.  It used to be pages filled by the same char. */
> +    uint64_t zero_pages;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -172,7 +175,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t dup_pages;
>      uint64_t skipped_pages;
>      uint64_t norm_pages;
>      uint64_t iterations;
> @@ -192,12 +194,12 @@ static void acct_clear(void)
>  
>  uint64_t dup_mig_bytes_transferred(void)
>  {
> -    return acct_info.dup_pages * TARGET_PAGE_SIZE;
> +    return ram_state.zero_pages * TARGET_PAGE_SIZE;
>  }
>  
>  uint64_t dup_mig_pages_transferred(void)
>  {
> -    return acct_info.dup_pages;
> +    return ram_state.zero_pages;
>  }
>  
>  uint64_t skipped_mig_bytes_transferred(void)
> @@ -737,19 +739,21 @@ static void migration_bitmap_sync(RAMState *rs)
>   *
>   * Returns the number of pages written.
>   *
> + * @rs: current RAM state
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @p: pointer to the page
>   * @bytes_transferred: increase it with the number of transferred bytes
>   */
> -static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> +static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
> +                          ram_addr_t offset,
>                            uint8_t *p, uint64_t *bytes_transferred)
>  {
>      int pages = -1;
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> -        acct_info.dup_pages++;
> +        rs->zero_pages++;
>          *bytes_transferred += save_page_header(f, block,
>                                                 offset | RAM_SAVE_FLAG_COMPRESS);
>          qemu_put_byte(f, 0);
> @@ -822,11 +826,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              if (bytes_xmit > 0) {
>                  acct_info.norm_pages++;
>              } else if (bytes_xmit == 0) {
> -                acct_info.dup_pages++;
> +                rs->zero_pages++;

This code path looks suspicious... since IIUC currently it should only
be triggered by the RDMA case, and I believe here qemu_rdma_save_page()
should have met something wrong (so that it didn't return with
RAM_SAVE_CONTROL_DELAYED). Then is it correct that we increase the zero
page count unconditionally here? (hmm, the default bytes_xmit is zero
as well...)

Another thing is that I see when RDMA is enabled we are updating
accounting info with acct_update_position(), while we update it here
as well. Is this an issue of duplicated accounting?

Similar question in ram_save_compressed_page().

Thanks,

>              }
>          }
>      } else {
> -        pages = save_zero_page(f, block, offset, p, bytes_transferred);
> +        pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>          if (pages > 0) {
>              /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>               * page would be stale
> @@ -998,7 +1002,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              if (bytes_xmit > 0) {
>                  acct_info.norm_pages++;
>              } else if (bytes_xmit == 0) {
> -                acct_info.dup_pages++;
> +                rs->zero_pages++;
>              }
>          }
>      } else {
> @@ -1010,7 +1014,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>           */
>          if (block != rs->last_sent_block) {
>              flush_compressed_data(f);
> -            pages = save_zero_page(f, block, offset, p, bytes_transferred);
> +            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
>                  bytes_xmit = save_page_header(f, block, offset |
> @@ -1031,7 +1035,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              }
>          } else {
>              offset |= RAM_SAVE_FLAG_CONTINUE;
> -            pages = save_zero_page(f, block, offset, p, bytes_transferred);
> +            pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
>                  pages = compress_page_with_multi_thread(f, block, offset,
>                                                          bytes_transferred);
> @@ -1462,8 +1466,10 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage,
>  void acct_update_position(QEMUFile *f, size_t size, bool zero)
>  {
>      uint64_t pages = size / TARGET_PAGE_SIZE;
> +    RAMState *rs = &ram_state;
> +
>      if (zero) {
> -        acct_info.dup_pages += pages;
> +        rs->zero_pages += pages;
>      } else {
>          acct_info.norm_pages += pages;
>          bytes_transferred += size;
> @@ -2005,6 +2011,7 @@ static int ram_save_init_globals(RAMState *rs)
>  
>      rs->dirty_rate_high_cnt = 0;
>      rs->bitmap_sync_count = 0;
> +    rs->zero_pages = 0;
>      migration_bitmap_sync_init(rs);
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
> -- 
> 2.9.3
> 
> 

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 12/51] ram: Remove unused dup_mig_bytes_transferred()
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 12/51] ram: Remove unused dup_mig_bytes_transferred() Juan Quintela
@ 2017-03-27  9:24   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  9:24 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:05PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 13/51] ram: Remove unused pages_skipped variable
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 13/51] ram: Remove unused pages_skipped variable Juan Quintela
@ 2017-03-27  9:26   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  9:26 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:06PM +0100, Juan Quintela wrote:
> For compatibility, we need to still send a value, but just specify it
> and comment the fact.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 14/51] ram: Move norm_pages to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 14/51] ram: Move norm_pages to RAMState Juan Quintela
@ 2017-03-27  9:43   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27  9:43 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:07PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 16/51] ram: Move iterations into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 16/51] ram: Move iterations into RAMState Juan Quintela
@ 2017-03-27 10:46   ` Peter Xu
  2017-03-28 18:34     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-27 10:46 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:09PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

Another comment not directly related to this patch...

> ---
>  migration/ram.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 9fa3bd7..690ca8f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -170,6 +170,8 @@ struct RAMState {
>      uint64_t zero_pages;
>      /* number of normal transferred pages */
>      uint64_t norm_pages;
> +    /* Iterations since start */
> +    uint64_t iterations;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -177,7 +179,6 @@ static RAMState ram_state;
>  
>  /* accounting for migration statistics */
>  typedef struct AccountingInfo {
> -    uint64_t iterations;
>      uint64_t xbzrle_bytes;
>      uint64_t xbzrle_pages;
>      uint64_t xbzrle_cache_miss;
> @@ -693,13 +694,13 @@ static void migration_bitmap_sync(RAMState *rs)
>          }
>  
>          if (migrate_use_xbzrle()) {
> -            if (rs->iterations_prev != acct_info.iterations) {
> +            if (rs->iterations_prev != rs->iterations) {
>                  acct_info.xbzrle_cache_miss_rate =
>                     (double)(acct_info.xbzrle_cache_miss -
>                              rs->xbzrle_cache_miss_prev) /
> -                   (acct_info.iterations - rs->iterations_prev);
> +                   (rs->iterations - rs->iterations_prev);

Here we are calculating the cache miss rate from xbzrle_cache_miss and
iterations. However, it looks like xbzrle_cache_miss is counted per
guest page (in save_xbzrle_page()) while the iteration count is per
host page (in ram_save_iterate()). Then, what if the host page size
does not equal the guest page size? E.g., when the host uses 2M huge
pages, the host page size is 2M, while the guest page size can be 4K?

Thanks,

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 17/51] ram: Move xbzrle_bytes " Juan Quintela
  2017-03-24 10:12   ` Dr. David Alan Gilbert
@ 2017-03-27 10:48   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27 10:48 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:10PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 18/51] ram: Move xbzrle_pages " Juan Quintela
  2017-03-24 10:13   ` Dr. David Alan Gilbert
@ 2017-03-27 10:59   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27 10:59 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:11PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 19/51] ram: Move xbzrle_cache_miss " Juan Quintela
  2017-03-24 10:15   ` Dr. David Alan Gilbert
@ 2017-03-27 11:00   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27 11:00 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:12PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 20/51] ram: Move xbzrle_cache_miss_rate " Juan Quintela
  2017-03-24 10:17   ` Dr. David Alan Gilbert
@ 2017-03-27 11:01   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27 11:01 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:13PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 21/51] ram: Move xbzrle_overflows " Juan Quintela
  2017-03-24 10:22   ` Dr. David Alan Gilbert
@ 2017-03-27 11:03   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-27 11:03 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:14PM +0100, Juan Quintela wrote:
> Once there, remove the now unused AccountingInfo struct and var.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 02/51] ram: rename block_name to rbname
  2017-03-24 17:15   ` Eric Blake
@ 2017-03-28 10:52     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 10:52 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-devel, dgilbert

Eric Blake <eblake@redhat.com> wrote:
> On 03/23/2017 03:44 PM, Juan Quintela wrote:
>> So all places are consisten on the nambing of a block name parameter.
>
> s/consisten/consistent/
> s/nambing/naming/

Done, thanks.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState
  2017-03-27  7:34   ` Peter Xu
@ 2017-03-28 10:56     ` Juan Quintela
  2017-03-29  6:55       ` Peter Xu
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 10:56 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:44:58PM +0100, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> (I see that we have MigrationStats.dirty_pages_rate which looks
>  similar to this one. Maybe one day we can merge these two?)

No, this one is how many times we have synchronized the dirty bitmap
with kvm/rest of qemu.
dirty_pages_rate is the number of pages we have dirtied in some <period>.

The period is not well defined; it tries to be around one second, but that
part is not especially well done.

Later, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 06/51] ram: Move start time into RAMState
  2017-03-27  7:54   ` Peter Xu
@ 2017-03-28 11:00     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 11:00 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:44:59PM +0100, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> ---
>>  migration/ram.c | 20 +++++++++++---------
>>  1 file changed, 11 insertions(+), 9 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index f811e81..5881805 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -154,6 +154,9 @@ struct RAMState {
>>      int dirty_rate_high_cnt;
>>      /* How many times we have synchronized the bitmap */
>>      uint64_t bitmap_sync_count;
>> +    /* this variables are used for bitmap sync */
>
> s/this/These/?
>
>> +    /* last time we did a full bitmap_sync */
>> +    int64_t start_time;
>
> Not sure whether it'll be a good chance to rename this variable in
> this patch to make it a less-generic name, like: bm_sync_start? But
> again, this is nitpicking and totally optional.

I changed it to time_last_bitmap_sync.

Thanks, Juan.


>
> With the typo fixed, please add:
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> Thanks,
>
> -- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-24  8:29     ` Juan Quintela
  2017-03-24  9:11       ` Yang Hongyang
@ 2017-03-28 17:12       ` Dr. David Alan Gilbert
  2017-03-28 18:45         ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-28 17:12 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Yang Hongyang, qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Yang Hongyang <yanghongyang@huawei.com> wrote:
> > On 2017/3/24 4:45, Juan Quintela wrote:
> >> We change the meaning of start to be the offset from the beginning of
> >> the block.
> >> 
> >> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
> >>      qemu_mutex_lock(&rs->bitmap_mutex);
> >>      rcu_read_lock();
> >>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> >> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
> >> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
> >
> > Since RAMBlock been passed to bitmap_sync, could we remove
> > param 'block->used_length' either?
> 
> Hi
> 
> good catch.
> 
> I had that removed, and then realized that I want to synchronize parts
> of the bitmap, not the whole one.  That part of the series is still not
> done.
> 
> Right now we do something like (I have simplified a lot of details):
> 
> while(true) {
>             foreach(block)
>                 bitmap_sync(block)
>             foreach(page)
>                 if(dirty(page))
>                    page_send(page)
> }
> 
> 
> If you have several terabytes of RAM that is too inefficient, because
> when we arrive to the page_send(page), it is possible that it is already
> dirty again, and we have to send it twice.  So, the idea is to change to
> something like:
> 
> while(true) {
>             foreach(block)
>                 bitmap_sync(block)
>             foreach(block)
>                 foreach(64pages)
>                     bitmap_sync(64pages)
>                     foreach(page of the 64)
>                        if (dirty)
>                           page_send(page)

Yes, although it might be best to actually do the sync in a separate thread
so that the sync is always a bit ahead of the thread doing the writing.

Dave

> }
> 
> 
> Where 64 is a magic number; I have to test what a good value is.
> Basically it should be a multiple of sizeof(long) and a multiple/divisor
> of host page size.
> 
> The reason for changing the loop to be per block is that then we can
> easily keep bitmaps at host page size, instead of having to keep them
> at target page size.
> 
> Thanks for the review, Juan.
> 
> Later, Juan.
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-26 13:43       ` Peter Xu
@ 2017-03-28 18:32         ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 18:32 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Fri, Mar 24, 2017 at 12:44:06PM +0100, Juan Quintela wrote:

>> >
>> > Here the comment says (just like mentioned in function name) that we
>> > will "flush any remaining pages in the ram request queue", however in
>> > the implementation, we should be only freeing everything in
>> > src_page_requests. The problem is "flush" let me think about "flushing
>> > the rest of the pages to the other side"... while it's not.
>> >
>> > Would it be nice we just rename the function into something else, like
>> > migration_page_queue_free()? We can tune the comments correspondingly
>> > as well.
>> 
>> I will let this one to dave to answer O:-)
>> I agree that the previous name is not perfect, but not sure that the new one
>> is much better either.
>> 
>> migration_drop_page_queue()?
>
> This is indeed a nitpick of mine... So please feel free to ignore it.
> :)
>
> But if we will keep the function name, I would slightly prefer that at
> least we mention in the comment that, this is only freeing things up,
> not sending anything out.

Added that to the comment.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 16/51] ram: Move iterations into RAMState
  2017-03-27 10:46   ` Peter Xu
@ 2017-03-28 18:34     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 18:34 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:09PM +0100, Juan Quintela wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>

>> @@ -693,13 +694,13 @@ static void migration_bitmap_sync(RAMState *rs)
>>          }
>>  
>>          if (migrate_use_xbzrle()) {
>> -            if (rs->iterations_prev != acct_info.iterations) {
>> +            if (rs->iterations_prev != rs->iterations) {
>>                  acct_info.xbzrle_cache_miss_rate =
>>                     (double)(acct_info.xbzrle_cache_miss -
>>                              rs->xbzrle_cache_miss_prev) /
>> -                   (acct_info.iterations - rs->iterations_prev);
>> +                   (rs->iterations - rs->iterations_prev);
>
> Here we are calculating cache miss rate by xbzrle_cache_miss and
> iterations. However looks like xbzrle_cache_miss is counted per guest
> page (in save_xbzrle_page()) while the iteration count is per host
> page (in ram_save_iterate()). Then, what if host page size not equals
> to guest page size? E.g., when host uses 2M huge pages, host page size
> is 2M, while guest page size can be 4K?

Good catch.  Will have to think about this.  You are right.  I will
change that later.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 11/51] ram: Move dup_pages into RAMState
  2017-03-27  9:23   ` Peter Xu
@ 2017-03-28 18:43     ` Juan Quintela
  2017-03-29  7:02       ` Peter Xu
  2017-03-31 14:58       ` Dr. David Alan Gilbert
  0 siblings, 2 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 18:43 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:04PM +0100, Juan Quintela wrote:
>> Once there rename it to its actual meaning, zero_pages.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> Will post a question below though (not directly related to this patch
> but context-wide)...
>>  {
>>      int pages = -1;
>>  
>>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>> -        acct_info.dup_pages++;
>> +        rs->zero_pages++;
>>          *bytes_transferred += save_page_header(f, block,
>>                                                 offset | RAM_SAVE_FLAG_COMPRESS);
>>          qemu_put_byte(f, 0);
>> @@ -822,11 +826,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>>              if (bytes_xmit > 0) {
>>                  acct_info.norm_pages++;
>>              } else if (bytes_xmit == 0) {
>> -                acct_info.dup_pages++;
>> +                rs->zero_pages++;
>
> This code path looks suspicous... since iiuc currently it should only
> be triggered by RDMA case, and I believe here qemu_rdma_save_page()
> should have met something wrong (so that it didn't return with
> RAM_SAVE_CONTROL_DELAYED). Then is it correct we do increase zero page
> counting unconditionally here? (hmm, the default bytes_xmit is zero as
> well...)

My head hurts at this point.
ok.  bytes_xmit can only be zero if we called qemu_rdma_save_page() with
size=0 or there has been an RDMA error.  We never call the function with
size = 0.  And if there is an error, we are in very bad shape already.

> Another thing is that I see when RDMA is enabled we are updating
> accounting info with acct_update_position(), while we updated it here
> as well. Is this an issue of duplicated accounting?

I think stats and rdma are not right.  I have to check that more.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-28 17:12       ` Dr. David Alan Gilbert
@ 2017-03-28 18:45         ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-28 18:45 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Yang Hongyang, qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Yang Hongyang <yanghongyang@huawei.com> wrote:
>> > On 2017/3/24 4:45, Juan Quintela wrote:
>> >> We change the meaning of start to be the offset from the beginning of
>> >> the block.
>> >> 
>> >> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>> >>      qemu_mutex_lock(&rs->bitmap_mutex);
>> >>      rcu_read_lock();
>> >>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>> >> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
>> >> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
>> If you have several terabytes of RAM that is too inefficient, because
>> when we arrive to the page_send(page), it is possible that it is already
>> dirty again, and we have to send it twice.  So, the idea is to change to
>> something like:
>> 
>> while(true) {
>>             foreach(block)
>>                 bitmap_sync(block)
>>             foreach(block)
>>                 foreach(64pages)
>>                     bitmap_sync(64pages)
>>                     foreach(page of the 64)
>>                        if (dirty)
>>                           page_send(page)
>
> Yes, although it might be best to actually do the sync in a separate thread
> so that the sync is always a bit ahead of the thread doing the writing.

Doing it synchronously shouldn't be a problem.  But we should be able to
do it in smaller chunks.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState
  2017-03-28 10:56     ` Juan Quintela
@ 2017-03-29  6:55       ` Peter Xu
  2017-03-29  8:56         ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-29  6:55 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Tue, Mar 28, 2017 at 12:56:06PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Mar 23, 2017 at 09:44:58PM +0100, Juan Quintela wrote:
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >
> > Reviewed-by: Peter Xu <peterx@redhat.com>
> >
> > (I see that we have MigrationStats.dirty_pages_rate which looks
> >  similar to this one. Maybe one day we can merge these two?)
> 
> No, this one is how many times we have synchronized the dirty bitmap
> with kvm/rest of qemu.
> dirty_pages_rate is the number of pages we have dirtied in some <period>.
> 
> The period is not well defined; it tries to be around one second, but that
> part is not especially well done.

Oh, sorry... I was trying to mean MigrationStats.dirty_sync_count, not
MigrationStats.dirty_pages_rate. I think it was introduced in:

    commit 58570ed894631904bcdbcd1e8b34479cebe2aae9
    Author: ChenLiang <chenliang88@huawei.com>
    Date:   Fri Apr 4 17:57:55 2014 +0800

    migration: expose the bitmap_sync_count to the end

And these two variables are synchronized every time in
migration_bitmap_sync(), so they look the same. Thanks,

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 11/51] ram: Move dup_pages into RAMState
  2017-03-28 18:43     ` Juan Quintela
@ 2017-03-29  7:02       ` Peter Xu
  2017-03-29  8:26         ` Juan Quintela
  2017-03-31 14:58       ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-29  7:02 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Tue, Mar 28, 2017 at 08:43:37PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Mar 23, 2017 at 09:45:04PM +0100, Juan Quintela wrote:
> >> Once there rename it to its actual meaning, zero_pages.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >
> > Reviewed-by: Peter Xu <peterx@redhat.com>
> >
> > Will post a question below though (not directly related to this patch
> > but context-wide)...
> >>  {
> >>      int pages = -1;
> >>  
> >>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> >> -        acct_info.dup_pages++;
> >> +        rs->zero_pages++;
> >>          *bytes_transferred += save_page_header(f, block,
> >>                                                 offset | RAM_SAVE_FLAG_COMPRESS);
> >>          qemu_put_byte(f, 0);
> >> @@ -822,11 +826,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> >>              if (bytes_xmit > 0) {
> >>                  acct_info.norm_pages++;
> >>              } else if (bytes_xmit == 0) {
> >> -                acct_info.dup_pages++;
> >> +                rs->zero_pages++;
> >
> > This code path looks suspicous... since iiuc currently it should only
> > be triggered by RDMA case, and I believe here qemu_rdma_save_page()
> > should have met something wrong (so that it didn't return with
> > RAM_SAVE_CONTROL_DELAYED). Then is it correct we do increase zero page
> > counting unconditionally here? (hmm, the default bytes_xmit is zero as
> > well...)
> 
> My head hurts at this point.

Sorry about that! :(

> ok.  bytes_xmit can only be zero if we called qemu_rdma_save_page() with
> size=0 or there has been an RDMA error.  We never call the function with
> size = 0.  And if there is an error, we are in very bad shape already.
> 
> > Another thing is that I see when RDMA is enabled we are updating
> > accounting info with acct_update_position(), while we updated it here
> > as well. Is this an issue of duplicated accounting?
> 
> I think stats and rdma are not right.  I have to check that more.

Sorry to have led the discussion too far away from the topic. I guess
it'll be perfectly okay to just mark this as a TODO item, and we can
just move on with the current series (and I believe you have further
patches after this big one :).

Out of curiosity - to what extent are people using migration with
RDMA? Should that be "very rare"? Thanks,

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 11/51] ram: Move dup_pages into RAMState
  2017-03-29  7:02       ` Peter Xu
@ 2017-03-29  8:26         ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-29  8:26 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Tue, Mar 28, 2017 at 08:43:37PM +0200, Juan Quintela wrote:
>> Peter Xu <peterx@redhat.com> wrote:
>> > On Thu, Mar 23, 2017 at 09:45:04PM +0100, Juan Quintela wrote:
>> >> Once there rename it to its actual meaning, zero_pages.
>> >> 
>> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> >
>> > Reviewed-by: Peter Xu <peterx@redhat.com>
>> >
>> > Will post a question below though (not directly related to this patch
>> > but context-wide)...
>> >>  {
>> >>      int pages = -1;
>> >>  
>> >>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>> >> -        acct_info.dup_pages++;
>> >> +        rs->zero_pages++;
>> >>          *bytes_transferred += save_page_header(f, block,
>> >>                                                 offset | RAM_SAVE_FLAG_COMPRESS);
>> >>          qemu_put_byte(f, 0);
>> >> @@ -822,11 +826,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>> >>              if (bytes_xmit > 0) {
>> >>                  acct_info.norm_pages++;
>> >>              } else if (bytes_xmit == 0) {
>> >> -                acct_info.dup_pages++;
>> >> +                rs->zero_pages++;
>> >
>> > This code path looks suspicous... since iiuc currently it should only
>> > be triggered by RDMA case, and I believe here qemu_rdma_save_page()
>> > should have met something wrong (so that it didn't return with
>> > RAM_SAVE_CONTROL_DELAYED). Then is it correct we do increase zero page
>> > counting unconditionally here? (hmm, the default bytes_xmit is zero as
>> > well...)
>> 
>> My head hurts at this point.
>
> Sorry about that! :(

Hahaha, it was a "figure of speech" O:-)

>> ok.  bytes_xmit can only be zero if we called qemu_rdma_save_page() with
>> size=0 or there has been an RDMA error.  We never call the function with
>> size = 0.  And if there is an error, we are in very bad shape already.
>> 
>> > Another thing is that I see when RDMA is enabled we are updating
>> > accounting info with acct_update_position(), while we updated it here
>> > as well. Is this an issue of duplicated accounting?
>> 
>> I think stats and rdma are not right.  I have to check that more.
>
> Sorry to have led the discussion too far away from the topic. I guess
> it'll be perfectly okay to just mark this as TODO item, and we can
> just move on with current series (and I believe you have further
> patches after this big one :).

Yeap.

> Out of curiosity - to what extent are people using migration with
> RDMA? Should that be "very rare"? Thanks,

I don't really have numbers.  Some customers find it very important, but
I don't have a good feel for how widespread the usage is.

Later, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState
  2017-03-29  6:55       ` Peter Xu
@ 2017-03-29  8:56         ` Juan Quintela
  2017-03-29  9:07           ` Peter Xu
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-29  8:56 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Tue, Mar 28, 2017 at 12:56:06PM +0200, Juan Quintela wrote:
>> Peter Xu <peterx@redhat.com> wrote:
>> > On Thu, Mar 23, 2017 at 09:44:58PM +0100, Juan Quintela wrote:
>> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> >
>> > Reviewed-by: Peter Xu <peterx@redhat.com>
>> >
>> > (I see that we have MigrationStats.dirty_pages_rate which looks
>> >  similar to this one. Maybe one day we can merge these two?)
>> 
>> No, this one is how many times we have synchronized the dirty bitmap
>> with kvm/rest of qemu.
>> dirty_pages_rate is the number of pages we have dirtied in some <period>.
>> 
>> The period is not well defined; it tries to be around one second, but that
>> part is not especially well done.
>
> Oh, sorry... I was trying to mean MigrationStats.dirty_sync_count, not
> MigrationStats.dirty_pages_rate. I think it was introduced in:
>
>     commit 58570ed894631904bcdbcd1e8b34479cebe2aae9
>     Author: ChenLiang <chenliang88@huawei.com>
>     Date:   Fri Apr 4 17:57:55 2014 +0800
>
>     migration: expose the bitmap_sync_count to the end
>
> And these two variables are synchronized every time in
> migration_bitmap_sync(), so looks the same. Thanks,

Ah, now I understand you.  See this patch, it does what you suggest, no?

[PATCH 31/51] ram: Create ram_dirty_sync_count()


Later, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 31/51] ram: Create ram_dirty_sync_count()
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 31/51] ram: Create ram_dirty_sync_count() Juan Quintela
@ 2017-03-29  9:06   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-29  9:06 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:24PM +0100, Juan Quintela wrote:
> This is a ram field that was inside MigrationState.  Move it to
> RAMState and make it the same as the other ram stats.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 05/51] ram: Move bitmap_sync_count into RAMState
  2017-03-29  8:56         ` Juan Quintela
@ 2017-03-29  9:07           ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-29  9:07 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Wed, Mar 29, 2017 at 10:56:22AM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Tue, Mar 28, 2017 at 12:56:06PM +0200, Juan Quintela wrote:
> >> Peter Xu <peterx@redhat.com> wrote:
> >> > On Thu, Mar 23, 2017 at 09:44:58PM +0100, Juan Quintela wrote:
> >> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >> >
> >> > Reviewed-by: Peter Xu <peterx@redhat.com>
> >> >
> >> > (I see that we have MigrationStats.dirty_pages_rate which looks
> >> >  similar to this one. Maybe one day we can merge these two?)
> >> 
> >> No, this one is how many times we have synchronized the dirty bitmap
> >> with kvm/rest of qemu.
> >> dirty_pages_rate is the number of pages we have dirtied in some <period>.
> >> 
> >> The period is not well defined; it tries to be around one second, but that
> >> part is not especially well done.
> >
> > Oh, sorry... I was trying to mean MigrationStats.dirty_sync_count, not
> > MigrationStats.dirty_pages_rate. I think it was introduced in:
> >
> >     commit 58570ed894631904bcdbcd1e8b34479cebe2aae9
> >     Author: ChenLiang <chenliang88@huawei.com>
> >     Date:   Fri Apr 4 17:57:55 2014 +0800
> >
> >     migration: expose the bitmap_sync_count to the end
> >
> > And these two variables are synchronized every time in
> > migration_bitmap_sync(), so looks the same. Thanks,
> 
> Ah, now I understand you.  See this patch, it does what you suggest, no?
> 
> [PATCH 31/51] ram: Create ram_dirty_sync_count()

Yes, it is. :)

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset Juan Quintela
@ 2017-03-29 17:14   ` Dr. David Alan Gilbert
  2017-03-30  6:25   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-29 17:14 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> And then init only things that are not zero by default.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 25 +++----------------------
>  1 file changed, 3 insertions(+), 22 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index c6ba92c..a890179 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -611,15 +611,6 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
>                                                &rs->num_dirty_pages_period);
>  }
>  
> -static void migration_bitmap_sync_init(RAMState *rs)
> -{
> -    rs->start_time = 0;
> -    rs->bytes_xfer_prev = 0;
> -    rs->num_dirty_pages_period = 0;
> -    rs->xbzrle_cache_miss_prev = 0;
> -    rs->iterations_prev = 0;
> -}
> -
>  /**
>   * ram_pagesize_summary: calculate all the pagesizes of a VM
>   *
> @@ -1984,21 +1975,11 @@ err:
>      return ret;
>  }
>  
> -static int ram_save_init_globals(RAMState *rs)
> +static int ram_state_init(RAMState *rs)
>  {
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
> -    rs->dirty_rate_high_cnt = 0;
> -    rs->bitmap_sync_count = 0;
> -    rs->zero_pages = 0;
> -    rs->norm_pages = 0;
> -    rs->iterations = 0;
> -    rs->xbzrle_bytes = 0;
> -    rs->xbzrle_pages = 0;
> -    rs->xbzrle_cache_miss = 0;
> -    rs->xbzrle_cache_miss_rate = 0;
> -    rs->xbzrle_overflows = 0;
> -    migration_bitmap_sync_init(rs);
> +    memset(rs, 0, sizeof(*rs));
>      qemu_mutex_init(&migration_bitmap_mutex);
>  
>      if (migrate_use_xbzrle()) {
> @@ -2088,7 +2069,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>  
>      /* migration has already setup the bitmap, reuse it. */
>      if (!migration_in_colo_state()) {
> -        if (ram_save_init_globals(rs) < 0) {
> +        if (ram_state_init(rs) < 0) {
>              return -1;
>           }
>      }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred " Juan Quintela
@ 2017-03-29 17:38   ` Dr. David Alan Gilbert
  2017-03-30  6:26   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-29 17:38 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 35 +++++++++++++++++------------------
>  1 file changed, 17 insertions(+), 18 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 090084b..872ea23 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -197,6 +197,8 @@ struct RAMState {
>      uint64_t xbzrle_overflows;
>      /* number of dirty bits in the bitmap */
>      uint64_t migration_dirty_pages;
> +    /* total number of bytes transferred */
> +    uint64_t bytes_transferred;
>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
>      /* Ram Bitmap protected by RCU */
> @@ -246,6 +248,11 @@ static ram_addr_t ram_save_remaining(void)
>      return ram_state.migration_dirty_pages;
>  }
>  
> +uint64_t ram_bytes_transferred(void)
> +{
> +    return ram_state.bytes_transferred;
> +}
> +
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
>      /* Current block being searched */
> @@ -870,9 +877,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>      return bytes_sent;
>  }
>  
> -static uint64_t bytes_transferred;
> -
> -static void flush_compressed_data(QEMUFile *f)
> +static void flush_compressed_data(RAMState *rs, QEMUFile *f)
>  {
>      int idx, len, thread_count;
>  
> @@ -893,7 +898,7 @@ static void flush_compressed_data(QEMUFile *f)
>          qemu_mutex_lock(&comp_param[idx].mutex);
>          if (!comp_param[idx].quit) {
>              len = qemu_put_qemu_file(f, comp_param[idx].file);
> -            bytes_transferred += len;
> +            rs->bytes_transferred += len;
>          }
>          qemu_mutex_unlock(&comp_param[idx].mutex);
>      }
> @@ -989,7 +994,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>           * is used to avoid resending the block name.
>           */
>          if (block != rs->last_sent_block) {
> -            flush_compressed_data(f);
> +            flush_compressed_data(rs, f);
>              pages = save_zero_page(rs, f, block, offset, p, bytes_transferred);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
> @@ -1065,7 +1070,7 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
>                  /* If xbzrle is on, stop using the data compression at this
>                   * point. In theory, xbzrle can do better than compression.
>                   */
> -                flush_compressed_data(f);
> +                flush_compressed_data(rs, f);
>                  compression_switch = false;
>              }
>          }
> @@ -1448,7 +1453,7 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>          rs->zero_pages += pages;
>      } else {
>          rs->norm_pages += pages;
> -        bytes_transferred += size;
> +        rs->bytes_transferred += size;
>          qemu_update_position(f, size);
>      }
>  }
> @@ -1458,11 +1463,6 @@ uint64_t ram_bytes_remaining(void)
>      return ram_save_remaining() * TARGET_PAGE_SIZE;
>  }
>  
> -uint64_t ram_bytes_transferred(void)
> -{
> -    return bytes_transferred;
> -}
> -
>  uint64_t ram_bytes_total(void)
>  {
>      RAMBlock *block;
> @@ -2025,7 +2025,6 @@ static int ram_state_init(RAMState *rs)
>  
>      qemu_mutex_lock_ramlist();
>      rcu_read_lock();
> -    bytes_transferred = 0;
>      ram_state_reset(rs);
>  
>      rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
> @@ -2137,7 +2136,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      while ((ret = qemu_file_rate_limit(f)) == 0) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(rs, f, false, &bytes_transferred);
> +        pages = ram_find_and_save_block(rs, f, false, &rs->bytes_transferred);
>          /* no more pages to sent */
>          if (pages == 0) {
>              done = 1;
> @@ -2159,7 +2158,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>          }
>          i++;
>      }
> -    flush_compressed_data(f);
> +    flush_compressed_data(rs, f);
>      rcu_read_unlock();
>  
>      /*
> @@ -2169,7 +2168,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      ram_control_after_iterate(f, RAM_CONTROL_ROUND);
>  
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> -    bytes_transferred += 8;
> +    rs->bytes_transferred += 8;
>  
>      ret = qemu_file_get_error(f);
>      if (ret < 0) {
> @@ -2208,14 +2207,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>          int pages;
>  
>          pages = ram_find_and_save_block(rs, f, !migration_in_colo_state(),
> -                                        &bytes_transferred);
> +                                        &rs->bytes_transferred);
>          /* no more blocks to sent */
>          if (pages == 0) {
>              break;
>          }
>      }
>  
> -    flush_compressed_data(f);
> +    flush_compressed_data(rs, f);
>      ram_control_after_iterate(f, RAM_CONTROL_FINISH);
>  
>      rcu_read_unlock();
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState Juan Quintela
@ 2017-03-29 18:02   ` Dr. David Alan Gilbert
  2017-03-30 16:19     ` Juan Quintela
  2017-03-30 16:27     ` Juan Quintela
  2017-03-30  7:52   ` Peter Xu
  1 sibling, 2 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-29 18:02 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Rename it to preffer_xbzrle that is a more descriptive name.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 6a39704..591cf89 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -217,6 +217,9 @@ struct RAMState {
>      uint64_t dirty_pages_rate;
>      /* Count of requests incoming from destination */
>      uint64_t postcopy_requests;
> +    /* Should we move to xbzrle after the 1st round
> +       of compression */
> +    bool preffer_xbzrle;

That's 'prefer' - however, do we need it at all?
How about just replacing it by:
   !ram_bulk_stage && migrate_use_xbzrle()

would that work?

Dave

>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
>      /* Ram Bitmap protected by RCU */
> @@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
>  /* The empty QEMUFileOps will be used by file in CompressParam */
>  static const QEMUFileOps empty_ops = { };
>  
> -static bool compression_switch;
>  static DecompressParam *decomp_param;
>  static QemuThread *decompress_threads;
>  static QemuMutex decomp_done_lock;
> @@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
>      if (!migrate_use_compression()) {
>          return;
>      }
> -    compression_switch = true;
>      thread_count = migrate_compress_threads();
>      compress_threads = g_new0(QemuThread, thread_count);
>      comp_param = g_new0(CompressParam, thread_count);
> @@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>                   * point. In theory, xbzrle can do better than compression.
>                   */
>                  flush_compressed_data(rs);
> -                compression_switch = false;
> +                rs->preffer_xbzrle = true;
>              }
>          }
>          /* Didn't find anything this time, but try again on the new block */
> @@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>      /* Check the pages is dirty and if it is send it */
>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>          unsigned long *unsentmap;
> -        if (compression_switch && migrate_use_compression()) {
> +        if (!rs->preffer_xbzrle && migrate_use_compression()) {
>              res = ram_save_compressed_page(rs, ms, pss, last_stage);
>          } else {
>              res = ram_save_page(rs, ms, pss, last_stage);
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter Juan Quintela
@ 2017-03-29 18:43   ` Dr. David Alan Gilbert
  2017-03-30 10:28   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-29 18:43 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Oh yeh, so it doesn't.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  include/migration/migration.h | 3 +--
>  migration/postcopy-ram.c      | 6 ++----
>  migration/ram.c               | 9 +++------
>  migration/savevm.c            | 3 +--
>  4 files changed, 7 insertions(+), 14 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 90849a5..39a8e7e 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -270,8 +270,7 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected);
>  /* For outgoing discard bitmap */
>  int ram_postcopy_send_discard_bitmap(MigrationState *ms);
>  /* For incoming postcopy discard */
> -int ram_discard_range(MigrationIncomingState *mis, const char *block_name,
> -                      uint64_t start, size_t length);
> +int ram_discard_range(const char *block_name, uint64_t start, size_t length);
>  int ram_postcopy_incoming_init(MigrationIncomingState *mis);
>  void ram_postcopy_migrated_memory_release(MigrationState *ms);
>  
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index 8756364..85fd8d7 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -213,8 +213,6 @@ out:
>  static int init_range(const char *block_name, void *host_addr,
>                        ram_addr_t offset, ram_addr_t length, void *opaque)
>  {
> -    MigrationIncomingState *mis = opaque;
> -
>      trace_postcopy_init_range(block_name, host_addr, offset, length);
>  
>      /*
> @@ -223,7 +221,7 @@ static int init_range(const char *block_name, void *host_addr,
>       * - we're going to get the copy from the source anyway.
>       * (Precopy will just overwrite this data, so doesn't need the discard)
>       */
> -    if (ram_discard_range(mis, block_name, 0, length)) {
> +    if (ram_discard_range(block_name, 0, length)) {
>          return -1;
>      }
>  
> @@ -271,7 +269,7 @@ static int cleanup_range(const char *block_name, void *host_addr,
>   */
>  int postcopy_ram_incoming_init(MigrationIncomingState *mis, size_t ram_pages)
>  {
> -    if (qemu_ram_foreach_block(init_range, mis)) {
> +    if (qemu_ram_foreach_block(init_range, NULL)) {
>          return -1;
>      }
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index 9772fd8..83c749c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -784,7 +784,7 @@ static void ram_release_pages(const char *rbname, uint64_t offset, int pages)
>          return;
>      }
>  
> -    ram_discard_range(NULL, rbname, offset, pages << TARGET_PAGE_BITS);
> +    ram_discard_range(rbname, offset, pages << TARGET_PAGE_BITS);
>  }
>  
>  /**
> @@ -1602,7 +1602,7 @@ void ram_postcopy_migrated_memory_release(MigrationState *ms)
>  
>          while (run_start < range) {
>              unsigned long run_end = find_next_bit(bitmap, range, run_start + 1);
> -            ram_discard_range(NULL, block->idstr, run_start << TARGET_PAGE_BITS,
> +            ram_discard_range(block->idstr, run_start << TARGET_PAGE_BITS,
>                                (run_end - run_start) << TARGET_PAGE_BITS);
>              run_start = find_next_zero_bit(bitmap, range, run_end + 1);
>          }
> @@ -1942,15 +1942,12 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>   *
>   * Returns zero on success
>   *
> - * @mis: current migration incoming state
>   * @rbname: name of the RAMBLock of the request. NULL means the
>   *          same that last one.
>   * @start: RAMBlock starting page
>   * @length: RAMBlock size
>   */
> -int ram_discard_range(MigrationIncomingState *mis,
> -                      const char *rbname,
> -                      uint64_t start, size_t length)
> +int ram_discard_range(const char *rbname, uint64_t start, size_t length)
>  {
>      int ret = -1;
>  
> diff --git a/migration/savevm.c b/migration/savevm.c
> index bbf055d..7cf387f 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1479,8 +1479,7 @@ static int loadvm_postcopy_ram_handle_discard(MigrationIncomingState *mis,
>          block_length = qemu_get_be64(mis->from_src_file);
>  
>          len -= 16;
> -        int ret = ram_discard_range(mis, ramid, start_addr,
> -                                    block_length);
> +        int ret = ram_discard_range(ramid, start_addr, block_length);
>          if (ret) {
>              return ret;
>          }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining Juan Quintela
  2017-03-24 15:34   ` Dr. David Alan Gilbert
@ 2017-03-30  6:24   ` Peter Xu
  2017-03-30 16:07     ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:24 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:21PM +0100, Juan Quintela wrote:
> Just unfold it.  Move ram_bytes_remaining() with the rest of exported
> functions.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 19 +++++++------------
>  1 file changed, 7 insertions(+), 12 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 3ae00e2..dd5a453 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -243,16 +243,16 @@ uint64_t xbzrle_mig_pages_overflow(void)
>      return ram_state.xbzrle_overflows;
>  }
>  
> -static ram_addr_t ram_save_remaining(void)
> -{
> -    return ram_state.migration_dirty_pages;
> -}
> -
>  uint64_t ram_bytes_transferred(void)
>  {
>      return ram_state.bytes_transferred;
>  }
>  
> +uint64_t ram_bytes_remaining(void)
> +{
> +    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
> +}
> +
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
>      /* Current block being searched */
> @@ -1438,11 +1438,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>      }
>  }
>  
> -uint64_t ram_bytes_remaining(void)
> -{
> -    return ram_save_remaining() * TARGET_PAGE_SIZE;
> -}
> -
>  uint64_t ram_bytes_total(void)
>  {
>      RAMBlock *block;
> @@ -2210,7 +2205,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>      RAMState *rs = opaque;
>      uint64_t remaining_size;
>  
> -    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> +    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;

Here we can directly use ram_bytes_remaining()?

>  
>      if (!migration_in_postcopy(migrate_get_current()) &&
>          remaining_size < max_size) {
> @@ -2219,7 +2214,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>          migration_bitmap_sync(rs);
>          rcu_read_unlock();
>          qemu_mutex_unlock_iothread();
> -        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> +        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;

Same here?

Besides:

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 22/51] ram: Move migration_dirty_pages to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 22/51] ram: Move migration_dirty_pages to RAMState Juan Quintela
@ 2017-03-30  6:24   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:24 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:15PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

> ---
>  migration/ram.c | 32 ++++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 3292eb0..c6ba92c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -182,6 +182,8 @@ struct RAMState {
>      double xbzrle_cache_miss_rate;
>      /* xbzrle number of overflows */
>      uint64_t xbzrle_overflows;
> +    /* number of dirty bits in the bitmap */
> +    uint64_t migration_dirty_pages;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -222,8 +224,12 @@ uint64_t xbzrle_mig_pages_overflow(void)
>      return ram_state.xbzrle_overflows;
>  }
>  
> +static ram_addr_t ram_save_remaining(void)
> +{
> +    return ram_state.migration_dirty_pages;
> +}
> +
>  static QemuMutex migration_bitmap_mutex;
> -static uint64_t migration_dirty_pages;
>  
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
> @@ -581,7 +587,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>      return (next - base) << TARGET_PAGE_BITS;
>  }
>  
> -static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
> +static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
>  {
>      bool ret;
>      int nr = addr >> TARGET_PAGE_BITS;
> @@ -590,7 +596,7 @@ static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
>      ret = test_and_clear_bit(nr, bitmap);
>  
>      if (ret) {
> -        migration_dirty_pages--;
> +        rs->migration_dirty_pages--;
>      }
>      return ret;
>  }
> @@ -600,8 +606,9 @@ static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
>  {
>      unsigned long *bitmap;
>      bitmap = atomic_rcu_read(&migration_bitmap_rcu)->bmap;
> -    migration_dirty_pages += cpu_physical_memory_sync_dirty_bitmap(bitmap,
> -                             start, length, &rs->num_dirty_pages_period);
> +    rs->migration_dirty_pages +=
> +        cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length,
> +                                              &rs->num_dirty_pages_period);
>  }
>  
>  static void migration_bitmap_sync_init(RAMState *rs)
> @@ -1302,7 +1309,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>      int res = 0;
>  
>      /* Check the pages is dirty and if it is send it */
> -    if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
> +    if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>          unsigned long *unsentmap;
>          if (compression_switch && migrate_use_compression()) {
>              res = ram_save_compressed_page(rs, ms, f, pss,
> @@ -1452,11 +1459,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>      }
>  }
>  
> -static ram_addr_t ram_save_remaining(void)
> -{
> -    return migration_dirty_pages;
> -}
> -
>  uint64_t ram_bytes_remaining(void)
>  {
>      return ram_save_remaining() * TARGET_PAGE_SIZE;
> @@ -1530,6 +1532,7 @@ static void ram_state_reset(RAMState *rs)
>  
>  void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>  {
> +    RAMState *rs = &ram_state;
>      /* called in qemu main thread, so there is
>       * no writing race against this migration_bitmap
>       */
> @@ -1555,7 +1558,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>  
>          atomic_rcu_set(&migration_bitmap_rcu, bitmap);
>          qemu_mutex_unlock(&migration_bitmap_mutex);
> -        migration_dirty_pages += new - old;
> +        rs->migration_dirty_pages += new - old;
>          call_rcu(old_bitmap, migration_bitmap_free, rcu);
>      }
>  }
> @@ -1728,6 +1731,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>                                            RAMBlock *block,
>                                            PostcopyDiscardState *pds)
>  {
> +    RAMState *rs = &ram_state;
>      unsigned long *bitmap;
>      unsigned long *unsentmap;
>      unsigned int host_ratio = block->page_size / TARGET_PAGE_SIZE;
> @@ -1825,7 +1829,7 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>                   * Remark them as dirty, updating the count for any pages
>                   * that weren't previously dirty.
>                   */
> -                migration_dirty_pages += !test_and_set_bit(page, bitmap);
> +                rs->migration_dirty_pages += !test_and_set_bit(page, bitmap);
>              }
>          }
>  
> @@ -2051,7 +2055,7 @@ static int ram_save_init_globals(RAMState *rs)
>       * Count the total number of pages used by ram blocks not including any
>       * gaps due to alignment or unplugs.
>       */
> -    migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> +    rs->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>  
>      memory_global_dirty_log_start();
>      migration_bitmap_sync(rs);
> -- 
> 2.9.3
> 
> 

-- peterx


* Re: [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 23/51] ram: Everything was init to zero, so use memset Juan Quintela
  2017-03-29 17:14   ` Dr. David Alan Gilbert
@ 2017-03-30  6:25   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:25 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:16PM +0100, Juan Quintela wrote:
> And then init only things that are not zero by default.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState Juan Quintela
@ 2017-03-30  6:25   ` Peter Xu
  2017-03-30  8:49   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:25 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:17PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>


* Re: [Qemu-devel] [PATCH 25/51] ram: Move migration_bitmap_rcu into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 25/51] ram: Move migration_bitmap_rcu " Juan Quintela
@ 2017-03-30  6:25   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:25 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:18PM +0100, Juan Quintela wrote:
> Once there, rename the type to be shorter.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 26/51] ram: Move bytes_transferred " Juan Quintela
  2017-03-29 17:38   ` Dr. David Alan Gilbert
@ 2017-03-30  6:26   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:26 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:19PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>


* Re: [Qemu-devel] [PATCH 27/51] ram: Use the RAMState bytes_transferred parameter
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 27/51] ram: Use the RAMState bytes_transferred parameter Juan Quintela
@ 2017-03-30  6:27   ` Peter Xu
  2017-03-30 16:05     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:27 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:20PM +0100, Juan Quintela wrote:
> In one place it was passed by reference; just use it from RAMState.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Juan Quintela <quintela@redhat.com>

(Is this a self-review above? :-)

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState Juan Quintela
@ 2017-03-30  6:49   ` Peter Xu
  2017-03-30 16:08     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:49 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:22PM +0100, Juan Quintela wrote:
> It was in MigrationState although it is only used inside ram.c for
> postcopy.  The problem is that we need to access it without being able
> to pass RAMState to it directly.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  include/migration/migration.h | 2 --
>  migration/migration.c         | 1 -
>  migration/ram.c               | 7 +++++--
>  3 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 84cef4b..e032fb0 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -189,8 +189,6 @@ struct MigrationState
>      /* Queue of outstanding page requests from the destination */
>      QemuMutex src_page_req_mutex;
>      QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
> -    /* The RAMBlock used in the last src_page_request */
> -    RAMBlock *last_req_rb;
>      /* The semaphore is used to notify COLO thread that failover is finished */
>      QemuSemaphore colo_exit_sem;
>  
> diff --git a/migration/migration.c b/migration/migration.c
> index e532430..b220941 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1118,7 +1118,6 @@ MigrationState *migrate_init(const MigrationParams *params)
>      s->postcopy_after_devices = false;
>      s->postcopy_requests = 0;
>      s->migration_thread_running = false;
> -    s->last_req_rb = NULL;
>      error_free(s->error);
>      s->error = NULL;
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index dd5a453..325a0f3 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -203,6 +203,8 @@ struct RAMState {
>      QemuMutex bitmap_mutex;
>      /* Ram Bitmap protected by RCU */
>      RAMBitmap *ram_bitmap;
> +    /* The RAMBlock used in the last src_page_request */
                                                        ^ "s" missing

Besides:

Reviewed-by: Peter Xu <peterx@redhat.com>

> +    RAMBlock *last_req_rb;
>  };
>  typedef struct RAMState RAMState;

-- peterx


* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* " Juan Quintela
@ 2017-03-30  6:56   ` Peter Xu
  2017-03-30 16:09     ` Juan Quintela
  2017-03-31 15:25     ` Dr. David Alan Gilbert
  2017-03-31 16:52   ` Dr. David Alan Gilbert
  1 sibling, 2 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  6:56 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:23PM +0100, Juan Quintela wrote:
> This are the last postcopy fields still at MigrationState.  Once there

s/This/These/

> Move MigrationSrcPageRequest to ram.c and remove MigrationState
> parameters where appropriate.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

One question below though...

[...]

> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
>   *
>   * It should be empty at the end anyway, but in error cases there may
>   * xbe some left.
> - *
> - * @ms: current migration state
>   */
> -void flush_page_queue(MigrationState *ms)
> +void flush_page_queue(void)
>  {
> -    struct MigrationSrcPageRequest *mspr, *next_mspr;
> +    struct RAMSrcPageRequest *mspr, *next_mspr;
> +    RAMState *rs = &ram_state;
>      /* This queue generally should be empty - but in the case of a failed
>       * migration might have some droppings in.
>       */
>      rcu_read_lock();

Could I ask why we are taking the RCU read lock rather than the mutex
here?

> -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
> +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
>          memory_region_unref(mspr->rb->mr);
> -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
>          g_free(mspr);
>      }
>      rcu_read_unlock();

Thanks,

-- peterx


* Re: [Qemu-devel] [PATCH 32/51] ram: Remove dirty_bytes_rate
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 32/51] ram: Remove dirty_bytes_rate Juan Quintela
@ 2017-03-30  7:00   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  7:00 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:25PM +0100, Juan Quintela wrote:
> It can be recalculated from dirty_pages_rate.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Juan Quintela <quintela@redhat.com>

Another self-review? :)

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 33/51] ram: Move dirty_pages_rate to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 33/51] ram: Move dirty_pages_rate to RAMState Juan Quintela
@ 2017-03-30  7:04   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  7:04 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:26PM +0100, Juan Quintela wrote:
> Treat it like the rest of ram stats counters.  Export its value the
> same way.  As an added bonus, no more MigrationState used in
> migration_bitmap_sync();
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Juan Quintela <quintela@redhat.com>

(I strongly suspect above r-b is for Dave)

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 34/51] ram: Move postcopy_requests into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 34/51] ram: Move postcopy_requests into RAMState Juan Quintela
@ 2017-03-30  7:06   ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  7:06 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:27PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState Juan Quintela
  2017-03-29 18:02   ` Dr. David Alan Gilbert
@ 2017-03-30  7:52   ` Peter Xu
  2017-03-30 16:30     ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-03-30  7:52 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:30PM +0100, Juan Quintela wrote:
> Rename it to preffer_xbzrle that is a more descriptive name.

s/preffer/prefer/?
       
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 6a39704..591cf89 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -217,6 +217,9 @@ struct RAMState {
>      uint64_t dirty_pages_rate;
>      /* Count of requests incoming from destination */
>      uint64_t postcopy_requests;
> +    /* Should we move to xbzrle after the 1st round
> +       of compression */
> +    bool preffer_xbzrle;
>      /* protects modification of the bitmap */
>      QemuMutex bitmap_mutex;
>      /* Ram Bitmap protected by RCU */
> @@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
>  /* The empty QEMUFileOps will be used by file in CompressParam */
>  static const QEMUFileOps empty_ops = { };
>  
> -static bool compression_switch;
>  static DecompressParam *decomp_param;
>  static QemuThread *decompress_threads;
>  static QemuMutex decomp_done_lock;
> @@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
>      if (!migrate_use_compression()) {
>          return;
>      }
> -    compression_switch = true;
>      thread_count = migrate_compress_threads();
>      compress_threads = g_new0(QemuThread, thread_count);
>      comp_param = g_new0(CompressParam, thread_count);
> @@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>                   * point. In theory, xbzrle can do better than compression.
>                   */
>                  flush_compressed_data(rs);
> -                compression_switch = false;
> +                rs->preffer_xbzrle = true;
>              }
>          }
>          /* Didn't find anything this time, but try again on the new block */
> @@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>      /* Check the pages is dirty and if it is send it */
>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>          unsigned long *unsentmap;
> -        if (compression_switch && migrate_use_compression()) {
> +        if (!rs->preffer_xbzrle && migrate_use_compression()) {

IIUC this prefer_xbzrle can be dynamically calculated by existing
states:

static inline bool ram_compression_active(RAMState *rs)
{
    /*
     * If xbzrle is on, stop using the data compression after first
     * round of migration even if compression is enabled. In theory,
     * xbzrle can do better than compression.
     */
    return migrate_use_compression() &&
           (rs->ram_bulk_stage || !migrate_use_xbzrle());
}

Then this line can be written as:

    if (ram_compression_active(rs)) {

And if so, we can get rid of prefer_xbzrle, right?

Having prefer_xbzrle should be slightly faster though, since it'll at
least cache the above calculation.
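For reference, a self-contained sketch of the two variants side by side
(MiniRAMState and the helper names are made up for illustration; the
real fields live in RAMState in migration/ram.c):

```c
#include <stdbool.h>

/* Made-up, simplified state for illustration only; the real fields
 * live in RAMState in migration/ram.c. */
typedef struct {
    bool ram_bulk_stage;   /* still in the first pass over RAM? */
    bool use_compression;  /* migrate_use_compression() */
    bool use_xbzrle;       /* migrate_use_xbzrle() */
    bool prefer_xbzrle;    /* cached flag, set once the bulk stage ends */
} MiniRAMState;

/* Computed on the fly: compression stays active during the bulk stage,
 * or indefinitely when xbzrle is disabled. */
static bool ram_compression_active(const MiniRAMState *rs)
{
    return rs->use_compression &&
           (rs->ram_bulk_stage || !rs->use_xbzrle);
}

/* Cached variant, as in the patch: find_dirty_block() sets
 * prefer_xbzrle when the first round finishes. */
static bool compression_active_cached(const MiniRAMState *rs)
{
    return !rs->prefer_xbzrle && rs->use_compression;
}
```

Both return the same answer, provided prefer_xbzrle is only ever set
when leaving the bulk stage with xbzrle enabled.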

Thanks,

>              res = ram_save_compressed_page(rs, ms, pss, last_stage);
>          } else {
>              res = ram_save_page(rs, ms, pss, last_stage);
> -- 
> 2.9.3
> 
> 

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size()
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size() Juan Quintela
  2017-03-24 15:32   ` Dr. David Alan Gilbert
@ 2017-03-30  8:03   ` Peter Xu
  2017-03-30  8:55     ` Dr. David Alan Gilbert
  2017-03-30  9:11     ` Juan Quintela
  1 sibling, 2 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  8:03 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:33PM +0100, Juan Quintela wrote:
> It was used as a size in all cases except one.

Considering that:

- qemu_target_page_bits() is only used in migration code, in only
  a few places below

- migration code uses TARGET_PAGE_{BITS|SIZE} a lot as well

How about we just remove this function, and directly use
TARGET_PAGE_{BITS|SIZE}?

Thanks,

> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  exec.c                   | 4 ++--
>  include/sysemu/sysemu.h  | 2 +-
>  migration/migration.c    | 4 ++--
>  migration/postcopy-ram.c | 8 ++++----
>  migration/savevm.c       | 8 ++++----
>  5 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index e57a8a2..9a4c385 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3349,9 +3349,9 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
>   * Allows code that needs to deal with migration bitmaps etc to still be built
>   * target independent.
>   */
> -size_t qemu_target_page_bits(void)
> +size_t qemu_target_page_size(void)
>  {
> -    return TARGET_PAGE_BITS;
> +    return TARGET_PAGE_SIZE;
>  }
>  
>  #endif
> diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
> index 576c7ce..16175f7 100644
> --- a/include/sysemu/sysemu.h
> +++ b/include/sysemu/sysemu.h
> @@ -67,7 +67,7 @@ int qemu_reset_requested_get(void);
>  void qemu_system_killed(int signal, pid_t pid);
>  void qemu_system_reset(bool report);
>  void qemu_system_guest_panicked(GuestPanicInformation *info);
> -size_t qemu_target_page_bits(void);
> +size_t qemu_target_page_size(void);
>  
>  void qemu_add_exit_notifier(Notifier *notify);
>  void qemu_remove_exit_notifier(Notifier *notify);
> diff --git a/migration/migration.c b/migration/migration.c
> index 3f99ab3..92c3c6b 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -646,7 +646,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      info->ram->skipped = 0;
>      info->ram->normal = norm_mig_pages_transferred();
>      info->ram->normal_bytes = norm_mig_pages_transferred() *
> -        (1ul << qemu_target_page_bits());
> +        qemu_target_page_size();
>      info->ram->mbps = s->mbps;
>      info->ram->dirty_sync_count = ram_dirty_sync_count();
>      info->ram->postcopy_requests = ram_postcopy_requests();
> @@ -2001,7 +2001,7 @@ static void *migration_thread(void *opaque)
>                 10000 is a small enough number for our purposes */
>              if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
>                  s->expected_downtime = ram_dirty_pages_rate() *
> -                    (1ul << qemu_target_page_bits()) / bandwidth;
> +                    qemu_target_page_size() / bandwidth;
>              }
>  
>              qemu_file_reset_rate_limit(s->to_dst_file);
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index dc80dbb..8756364 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -123,7 +123,7 @@ bool postcopy_ram_supported_by_host(void)
>      struct uffdio_range range_struct;
>      uint64_t feature_mask;
>  
> -    if ((1ul << qemu_target_page_bits()) > pagesize) {
> +    if (qemu_target_page_size() > pagesize) {
>          error_report("Target page size bigger than host page size");
>          goto out;
>      }
> @@ -745,10 +745,10 @@ PostcopyDiscardState *postcopy_discard_send_init(MigrationState *ms,
>  void postcopy_discard_send_range(MigrationState *ms, PostcopyDiscardState *pds,
>                                  unsigned long start, unsigned long length)
>  {
> -    size_t tp_bits = qemu_target_page_bits();
> +    size_t tp_size = qemu_target_page_size();
>      /* Convert to byte offsets within the RAM block */
> -    pds->start_list[pds->cur_entry] = (start - pds->offset) << tp_bits;
> -    pds->length_list[pds->cur_entry] = length << tp_bits;
> +    pds->start_list[pds->cur_entry] = (start - pds->offset) * tp_size;
> +    pds->length_list[pds->cur_entry] = length * tp_size;
>      trace_postcopy_discard_send_range(pds->ramblock_name, start, length);
>      pds->cur_entry++;
>      pds->nsentwords++;
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 853a81a..bbf055d 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -871,7 +871,7 @@ void qemu_savevm_send_postcopy_advise(QEMUFile *f)
>  {
>      uint64_t tmp[2];
>      tmp[0] = cpu_to_be64(ram_pagesize_summary());
> -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> +    tmp[1] = cpu_to_be64(qemu_target_page_size());
>  
>      trace_qemu_savevm_send_postcopy_advise();
>      qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> @@ -1390,13 +1390,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
>      }
>  
>      remote_tps = qemu_get_be64(mis->from_src_file);
> -    if (remote_tps != (1ul << qemu_target_page_bits())) {
> +    if (remote_tps != qemu_target_page_size()) {
>          /*
>           * Again, some differences could be dealt with, but for now keep it
>           * simple.
>           */
> -        error_report("Postcopy needs matching target page sizes (s=%d d=%d)",
> -                     (int)remote_tps, 1 << qemu_target_page_bits());
> +        error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
> +                     (int)remote_tps, qemu_target_page_size());
>          return -1;
>      }
>  
> -- 
> 2.9.3
> 
> 

-- peterx


* Re: [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 39/51] ram: We don't need MigrationState parameter anymore Juan Quintela
  2017-03-24 15:28   ` Dr. David Alan Gilbert
@ 2017-03-30  8:05   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  8:05 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:32PM +0100, Juan Quintela wrote:
> Remove it from callers and callees.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 38/51] migration: Remove MigrationState from migration_in_postcopy Juan Quintela
  2017-03-24 15:27   ` Dr. David Alan Gilbert
@ 2017-03-30  8:06   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30  8:06 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:31PM +0100, Juan Quintela wrote:
> We need to call for the migrate_get_current() in more that half of the
> uses, so call that inside.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 24/51] ram: Move migration_bitmap_mutex into RAMState Juan Quintela
  2017-03-30  6:25   ` Peter Xu
@ 2017-03-30  8:49   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-30  8:49 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

I'm still pretty convinced that there's an existing problem where this
mutex can get init'd twice with no destroy on a second migration, but
you're not changing that here (and it hasn't actually failed as far as
I can tell):

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index a890179..ae2b89f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -184,6 +184,8 @@ struct RAMState {
>      uint64_t xbzrle_overflows;
>      /* number of dirty bits in the bitmap */
>      uint64_t migration_dirty_pages;
> +    /* protects modification of the bitmap */
> +    QemuMutex bitmap_mutex;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -229,8 +231,6 @@ static ram_addr_t ram_save_remaining(void)
>      return ram_state.migration_dirty_pages;
>  }
>  
> -static QemuMutex migration_bitmap_mutex;
> -
>  /* used by the search for pages to send */
>  struct PageSearchStatus {
>      /* Current block being searched */
> @@ -652,13 +652,13 @@ static void migration_bitmap_sync(RAMState *rs)
>      trace_migration_bitmap_sync_start();
>      memory_global_dirty_log_sync();
>  
> -    qemu_mutex_lock(&migration_bitmap_mutex);
> +    qemu_mutex_lock(&rs->bitmap_mutex);
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          migration_bitmap_sync_range(rs, block->offset, block->used_length);
>      }
>      rcu_read_unlock();
> -    qemu_mutex_unlock(&migration_bitmap_mutex);
> +    qemu_mutex_unlock(&rs->bitmap_mutex);
>  
>      trace_migration_bitmap_sync_end(rs->num_dirty_pages_period);
>  
> @@ -1524,6 +1524,7 @@ static void ram_state_reset(RAMState *rs)
>  void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>  {
>      RAMState *rs = &ram_state;
> +
>      /* called in qemu main thread, so there is
>       * no writing race against this migration_bitmap
>       */
> @@ -1537,7 +1538,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>           * it is safe to migration if migration_bitmap is cleared bit
>           * at the same time.
>           */
> -        qemu_mutex_lock(&migration_bitmap_mutex);
> +        qemu_mutex_lock(&rs->bitmap_mutex);
>          bitmap_copy(bitmap->bmap, old_bitmap->bmap, old);
>          bitmap_set(bitmap->bmap, old, new - old);
>  
> @@ -1548,7 +1549,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>          bitmap->unsentmap = NULL;
>  
>          atomic_rcu_set(&migration_bitmap_rcu, bitmap);
> -        qemu_mutex_unlock(&migration_bitmap_mutex);
> +        qemu_mutex_unlock(&rs->bitmap_mutex);
>          rs->migration_dirty_pages += new - old;
>          call_rcu(old_bitmap, migration_bitmap_free, rcu);
>      }
> @@ -1980,7 +1981,7 @@ static int ram_state_init(RAMState *rs)
>      int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
>  
>      memset(rs, 0, sizeof(*rs));
> -    qemu_mutex_init(&migration_bitmap_mutex);
> +    qemu_mutex_init(&rs->bitmap_mutex);
>  
>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size()
  2017-03-30  8:03   ` Peter Xu
@ 2017-03-30  8:55     ` Dr. David Alan Gilbert
  2017-03-30  9:11     ` Juan Quintela
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-30  8:55 UTC (permalink / raw)
  To: Peter Xu; +Cc: Juan Quintela, qemu-devel

* Peter Xu (peterx@redhat.com) wrote:
> On Thu, Mar 23, 2017 at 09:45:33PM +0100, Juan Quintela wrote:
> > It was used as a size in all cases except one.
> 
> Considering that:
> 
> - qemu_target_page_bits() is only used in migration codes, in only
>   several places below
> 
> - migration codes is using TARGET_PAGE_{BITS|SIZE} a lot as well
> 
> How about we just remove this function, and directly use
> TARGET_PAGE_{BITS|SIZE}?

We can't, because the TARGET_* macros are defined in headers
that are only allowed to be included in target-specific builds.
Most of QEMU's code (including all of migration/*) is built
target-independent, so those headers error out if we try to include
them.

That's why I added qemu_target_page_bits()
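The pattern is just a per-target translation unit exporting a run-time
accessor that target-independent code can call. A minimal sketch (in
the real tree the function lives in exec.c and the prototype in
sysemu.h; TARGET_PAGE_BITS is hard-coded to 12 here as a stand-in for
the real per-target macro):

```c
#include <stddef.h>
#include <stdint.h>

/* Compiled per target, where the macro is visible. The value 12
 * (4 KiB pages) is an assumption for this sketch. */
#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE (1UL << TARGET_PAGE_BITS)

size_t qemu_target_page_size(void)
{
    return TARGET_PAGE_SIZE;
}

/* Target-independent code (migration/*) only ever sees the prototype
 * and asks for the size at run time: */
static uint64_t pages_to_bytes(uint64_t pages)
{
    return pages * qemu_target_page_size();
}
```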

Dave

> Thanks,
> 
> > 
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> >  exec.c                   | 4 ++--
> >  include/sysemu/sysemu.h  | 2 +-
> >  migration/migration.c    | 4 ++--
> >  migration/postcopy-ram.c | 8 ++++----
> >  migration/savevm.c       | 8 ++++----
> >  5 files changed, 13 insertions(+), 13 deletions(-)
> > 
> > diff --git a/exec.c b/exec.c
> > index e57a8a2..9a4c385 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -3349,9 +3349,9 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
> >   * Allows code that needs to deal with migration bitmaps etc to still be built
> >   * target independent.
> >   */
> > -size_t qemu_target_page_bits(void)
> > +size_t qemu_target_page_size(void)
> >  {
> > -    return TARGET_PAGE_BITS;
> > +    return TARGET_PAGE_SIZE;
> >  }
> >  
> >  #endif
> > diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
> > index 576c7ce..16175f7 100644
> > --- a/include/sysemu/sysemu.h
> > +++ b/include/sysemu/sysemu.h
> > @@ -67,7 +67,7 @@ int qemu_reset_requested_get(void);
> >  void qemu_system_killed(int signal, pid_t pid);
> >  void qemu_system_reset(bool report);
> >  void qemu_system_guest_panicked(GuestPanicInformation *info);
> > -size_t qemu_target_page_bits(void);
> > +size_t qemu_target_page_size(void);
> >  
> >  void qemu_add_exit_notifier(Notifier *notify);
> >  void qemu_remove_exit_notifier(Notifier *notify);
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 3f99ab3..92c3c6b 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -646,7 +646,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
> >      info->ram->skipped = 0;
> >      info->ram->normal = norm_mig_pages_transferred();
> >      info->ram->normal_bytes = norm_mig_pages_transferred() *
> > -        (1ul << qemu_target_page_bits());
> > +        qemu_target_page_size();
> >      info->ram->mbps = s->mbps;
> >      info->ram->dirty_sync_count = ram_dirty_sync_count();
> >      info->ram->postcopy_requests = ram_postcopy_requests();
> > @@ -2001,7 +2001,7 @@ static void *migration_thread(void *opaque)
> >                 10000 is a small enough number for our purposes */
> >              if (ram_dirty_pages_rate() && transferred_bytes > 10000) {
> >                  s->expected_downtime = ram_dirty_pages_rate() *
> > -                    (1ul << qemu_target_page_bits()) / bandwidth;
> > +                    qemu_target_page_size() / bandwidth;
> >              }
> >  
> >              qemu_file_reset_rate_limit(s->to_dst_file);
> > diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> > index dc80dbb..8756364 100644
> > --- a/migration/postcopy-ram.c
> > +++ b/migration/postcopy-ram.c
> > @@ -123,7 +123,7 @@ bool postcopy_ram_supported_by_host(void)
> >      struct uffdio_range range_struct;
> >      uint64_t feature_mask;
> >  
> > -    if ((1ul << qemu_target_page_bits()) > pagesize) {
> > +    if (qemu_target_page_size() > pagesize) {
> >          error_report("Target page size bigger than host page size");
> >          goto out;
> >      }
> > @@ -745,10 +745,10 @@ PostcopyDiscardState *postcopy_discard_send_init(MigrationState *ms,
> >  void postcopy_discard_send_range(MigrationState *ms, PostcopyDiscardState *pds,
> >                                  unsigned long start, unsigned long length)
> >  {
> > -    size_t tp_bits = qemu_target_page_bits();
> > +    size_t tp_size = qemu_target_page_size();
> >      /* Convert to byte offsets within the RAM block */
> > -    pds->start_list[pds->cur_entry] = (start - pds->offset) << tp_bits;
> > -    pds->length_list[pds->cur_entry] = length << tp_bits;
> > +    pds->start_list[pds->cur_entry] = (start - pds->offset) * tp_size;
> > +    pds->length_list[pds->cur_entry] = length * tp_size;
> >      trace_postcopy_discard_send_range(pds->ramblock_name, start, length);
> >      pds->cur_entry++;
> >      pds->nsentwords++;
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index 853a81a..bbf055d 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -871,7 +871,7 @@ void qemu_savevm_send_postcopy_advise(QEMUFile *f)
> >  {
> >      uint64_t tmp[2];
> >      tmp[0] = cpu_to_be64(ram_pagesize_summary());
> > -    tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
> > +    tmp[1] = cpu_to_be64(qemu_target_page_size());
> >  
> >      trace_qemu_savevm_send_postcopy_advise();
> >      qemu_savevm_command_send(f, MIG_CMD_POSTCOPY_ADVISE, 16, (uint8_t *)tmp);
> > @@ -1390,13 +1390,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis)
> >      }
> >  
> >      remote_tps = qemu_get_be64(mis->from_src_file);
> > -    if (remote_tps != (1ul << qemu_target_page_bits())) {
> > +    if (remote_tps != qemu_target_page_size()) {
> >          /*
> >           * Again, some differences could be dealt with, but for now keep it
> >           * simple.
> >           */
> > -        error_report("Postcopy needs matching target page sizes (s=%d d=%d)",
> > -                     (int)remote_tps, 1 << qemu_target_page_bits());
> > +        error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
> > +                     (int)remote_tps, qemu_target_page_size());
> >          return -1;
> >      }
> >  
> > -- 
> > 2.9.3
> > 
> > 
> 
> -- peterx
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync Juan Quintela
  2017-03-24  1:10   ` Yang Hongyang
@ 2017-03-30  9:07   ` Dr. David Alan Gilbert
  2017-03-30 11:38     ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-30  9:07 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We change the meaning of start to be the offset from the beggining of
> the block.

s/beggining/beginning/

Why do this?
We have:
   migration_bitmap_sync (all blocks)
   migration_bitmap_sync_range - called per block
   cpu_physical_memory_sync_dirty_bitmap

Why keep migration_bitmap_sync_range having start/length as well as the block
if you could just rename it to migration_bitmap_sync_block and just give it the rb?
And since cpu_physical_memory_sync_dirty_bitmap is lower level, why give it
the rb?

Dave


> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  include/exec/ram_addr.h | 2 ++
>  migration/ram.c         | 8 ++++----
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index b05dc84..d50c970 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -354,11 +354,13 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
>  
>  static inline
>  uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
> +                                               RAMBlock *rb,
>                                                 ram_addr_t start,
>                                                 ram_addr_t length,
>                                                 int64_t *real_dirty_pages)
>  {
>      ram_addr_t addr;
> +    start = rb->offset + start;
>      unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
>      uint64_t num_dirty = 0;
>  
> diff --git a/migration/ram.c b/migration/ram.c
> index 064b2c0..9772fd8 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -648,13 +648,13 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
>      return ret;
>  }
>  
> -static void migration_bitmap_sync_range(RAMState *rs, ram_addr_t start,
> -                                        ram_addr_t length)
> +static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
> +                                        ram_addr_t start, ram_addr_t length)
>  {
>      unsigned long *bitmap;
>      bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>      rs->migration_dirty_pages +=
> -        cpu_physical_memory_sync_dirty_bitmap(bitmap, start, length,
> +        cpu_physical_memory_sync_dirty_bitmap(bitmap, rb, start, length,
>                                                &rs->num_dirty_pages_period);
>  }
>  
> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>      qemu_mutex_lock(&rs->bitmap_mutex);
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
>      }
>      rcu_read_unlock();
>      qemu_mutex_unlock(&rs->bitmap_mutex);
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 40/51] ram: Rename qemu_target_page_bits() to qemu_target_page_size()
  2017-03-30  8:03   ` Peter Xu
  2017-03-30  8:55     ` Dr. David Alan Gilbert
@ 2017-03-30  9:11     ` Juan Quintela
  1 sibling, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-30  9:11 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:33PM +0100, Juan Quintela wrote:
>> It was used as a size in all cases except one.
>
> Considering that:
>
> - qemu_target_page_bits() is only used in migration codes, in only
>   several places below
>
> - migration codes is using TARGET_PAGE_{BITS|SIZE} a lot as well

TARGET_PAGE_* is only defined for target-specific files; migration code
(in general) is not target-specific (ram.c, on the other hand, is).

Until we exported that function, there was no way to know
TARGET_PAGE_SIZE in migration.c, for instance.

>
> How about we just remove this function, and directly use
> TARGET_PAGE_{BITS|SIZE}?

We can't.  That is the reason why we exported the sizes in bytes and in
pages.  There are files in QEMU that are compiled per target, and files
that are compiled the same for all targets.

Later, Juan.


* Re: [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 43/51] ram: ram_discard_range() don't use the mis parameter Juan Quintela
  2017-03-29 18:43   ` Dr. David Alan Gilbert
@ 2017-03-30 10:28   ` Peter Xu
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-30 10:28 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:36PM +0100, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- peterx


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-30  9:07   ` Dr. David Alan Gilbert
@ 2017-03-30 11:38     ` Juan Quintela
  2017-03-30 19:10       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 11:38 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We change the meaning of start to be the offset from the beggining of
>> the block.
>
> s/beggining/beginning/
>
> Why do this?
> We have:
>    migration_bitmap_sync (all blocks)
>    migration_bitmap_sync_range - called per block
>    cpu_physical_memory_sync_dirty_bitmap
>
> Why keep migration_bitmap_sync_range having start/length as well as the block
> if you could just rename it to migration_bitmap_sync_block and just give it the rb?
> And since cpu_physical_memory_clear_dirty_range is lower level, why give it
> the rb?

I did it that way in the previous series, then I remembered that I was
not going to be able to sync only part of the range, as I will want to
in the future.

If you prefer, as an intermediate measure, to just move to blocks, I
can do that, but the change is really small and I'm not sure it makes
sense.
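For what it's worth, the two shapes are easy to see side by side; a
minimal sketch (simplified stand-in types, so the field set and
accounting variable are assumptions, not the real QEMU structs) of
keeping the start/length form while still offering a whole-block
helper:

```c
#include <stdint.h>

typedef uint64_t ram_addr_t;

/* Simplified stand-in; the real RAMBlock has many more fields. */
typedef struct RAMBlock {
    ram_addr_t offset;        /* where the block sits in ram_addr_t space */
    ram_addr_t used_length;
} RAMBlock;

static uint64_t synced_bytes; /* stands in for the dirty-page accounting */

/* Range form kept by the patch: start is relative to the block, which
 * leaves room for syncing just part of a block later on. */
static void migration_bitmap_sync_range(RAMBlock *rb, ram_addr_t start,
                                        ram_addr_t length)
{
    ram_addr_t abs_start = rb->offset + start; /* as in the patch hunk */
    (void)abs_start;          /* real code scans the bitmap from here */
    synced_bytes += length;
}

/* A whole-block helper is then a one-liner on top of the range form: */
static void migration_bitmap_sync_block(RAMBlock *rb)
{
    migration_bitmap_sync_range(rb, 0, rb->used_length);
}
```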


Later, Juan.


* Re: [Qemu-devel] [PATCH 27/51] ram: Use the RAMState bytes_transferred parameter
  2017-03-30  6:27   ` Peter Xu
@ 2017-03-30 16:05     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:05 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:20PM +0100, Juan Quintela wrote:
>> Somewhere it was passed by reference, just use it from RAMState.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> Reviewed-by: Juan Quintela <quintela@redhat.com>
>
> (Is this a self-review above? :-)

Of course O:-)

Fat fingers and macros make that to me O:-)

Thanks.

>
> Reviewed-by: Peter Xu <peterx@redhat.com>


* Re: [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining
  2017-03-30  6:24   ` Peter Xu
@ 2017-03-30 16:07     ` Juan Quintela
  2017-03-31  2:57       ` Peter Xu
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:07 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:21PM +0100, Juan Quintela wrote:
>> Just unfold it.  Move ram_bytes_remaining() with the rest of exported
>> functions.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 19 +++++++------------
>>  1 file changed, 7 insertions(+), 12 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 3ae00e2..dd5a453 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -243,16 +243,16 @@ uint64_t xbzrle_mig_pages_overflow(void)
>>      return ram_state.xbzrle_overflows;
>>  }
>>  
>> -static ram_addr_t ram_save_remaining(void)
>> -{
>> -    return ram_state.migration_dirty_pages;
>> -}
>> -
>>  uint64_t ram_bytes_transferred(void)
>>  {
>>      return ram_state.bytes_transferred;
>>  }
>>  
>> +uint64_t ram_bytes_remaining(void)
>> +{
>> +    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
>> +}
>> +
>>  /* used by the search for pages to send */
>>  struct PageSearchStatus {
>>      /* Current block being searched */
>> @@ -1438,11 +1438,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
>>      }
>>  }
>>  
>> -uint64_t ram_bytes_remaining(void)
>> -{
>> -    return ram_save_remaining() * TARGET_PAGE_SIZE;
>> -}
>> -
>>  uint64_t ram_bytes_total(void)
>>  {
>>      RAMBlock *block;
>> @@ -2210,7 +2205,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>>      RAMState *rs = opaque;
>>      uint64_t remaining_size;
>>  
>> -    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
>> +    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
>
> Here we can directly use ram_bytes_remaining()?
>
>>  
>>      if (!migration_in_postcopy(migrate_get_current()) &&
>>          remaining_size < max_size) {
>> @@ -2219,7 +2214,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
>>          migration_bitmap_sync(rs);
>>          rcu_read_unlock();
>>          qemu_mutex_unlock_iothread();
>> -        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
>> +        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
>
> Same here?

To be consistent, I tried not to use the "accessor" functions inside
this file.  If you are in ram.c, you have to know about RAMState.

Thanks, Juan.

>
> Besides:
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> -- peterx


* Re: [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState
  2017-03-30  6:49   ` Peter Xu
@ 2017-03-30 16:08     ` Juan Quintela
  2017-03-31  3:00       ` Peter Xu
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:08 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:22PM +0100, Juan Quintela wrote:
>> It was on MigrationState when it is only used inside ram.c for
>> postcopy.  Problem is that we need to access it without being able to
>> pass it RAMState directly.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  include/migration/migration.h | 2 --
>>  migration/migration.c         | 1 -
>>  migration/ram.c               | 7 +++++--
>>  3 files changed, 5 insertions(+), 5 deletions(-)
>> 
>> diff --git a/include/migration/migration.h b/include/migration/migration.h
>> index 84cef4b..e032fb0 100644
>> --- a/include/migration/migration.h
>> +++ b/include/migration/migration.h
>> @@ -189,8 +189,6 @@ struct MigrationState
>>      /* Queue of outstanding page requests from the destination */
>>      QemuMutex src_page_req_mutex;
>>      QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
>> -    /* The RAMBlock used in the last src_page_request */
>> -    RAMBlock *last_req_rb;
>>      /* The semaphore is used to notify COLO thread that failover is finished */
>>      QemuSemaphore colo_exit_sem;
>>  
>> diff --git a/migration/migration.c b/migration/migration.c
>> index e532430..b220941 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -1118,7 +1118,6 @@ MigrationState *migrate_init(const MigrationParams *params)
>>      s->postcopy_after_devices = false;
>>      s->postcopy_requests = 0;
>>      s->migration_thread_running = false;
>> -    s->last_req_rb = NULL;
>>      error_free(s->error);
>>      s->error = NULL;
>>  
>> diff --git a/migration/ram.c b/migration/ram.c
>> index dd5a453..325a0f3 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -203,6 +203,8 @@ struct RAMState {
>>      QemuMutex bitmap_mutex;
>>      /* Ram Bitmap protected by RCU */
>>      RAMBitmap *ram_bitmap;
>> +    /* The RAMBlock used in the last src_page_request */
>                                                         ^ "s" missing
>
> Besides:

Only the last one is important; we don't really care about the previous
ones here, no?
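For context on what the field buys us, a toy sketch of the caching it
enables (the block names and the lookup helper here are made up for
illustration; the real slow path is qemu_ram_block_by_name()):

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in; the real RAMBlock has many more fields. */
typedef struct RAMBlock {
    const char *idstr;
} RAMBlock;

static RAMBlock blocks[] = { { "pc.ram" }, { "vga.vram" } };
static RAMBlock *last_req_rb;  /* the field moved into RAMState */
static int lookups;            /* counts the slow path, for illustration */

/* Stand-in for the name lookup that walks the block list. */
static RAMBlock *ram_block_by_name(const char *name)
{
    lookups++;
    for (size_t i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++) {
        if (!strcmp(blocks[i].idstr, name)) {
            return &blocks[i];
        }
    }
    return NULL;
}

/* Successive page requests usually hit the same block, so remembering
 * the last one skips the walk on the common path. */
static RAMBlock *lookup_block(const char *name)
{
    if (last_req_rb && !strcmp(last_req_rb->idstr, name)) {
        return last_req_rb;    /* fast path: same block as last time */
    }
    last_req_rb = ram_block_by_name(name);
    return last_req_rb;
}
```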

>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
>> +    RAMBlock *last_req_rb;
>>  };
>>  typedef struct RAMState RAMState;
>
> -- peterx

Thanks,


* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-30  6:56   ` Peter Xu
@ 2017-03-30 16:09     ` Juan Quintela
  2017-03-31 15:25     ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:09 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:23PM +0100, Juan Quintela wrote:
>> This are the last postcopy fields still at MigrationState.  Once there
>
> s/This/These/
>
>> Move MigrationSrcPageRequest to ram.c and remove MigrationState
>> parameters where appropriate.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> One question below though...
>
> [...]
>
>> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
>>   *
>>   * It should be empty at the end anyway, but in error cases there may
>>   * be some left.
>> - *
>> - * @ms: current migration state
>>   */
>> -void flush_page_queue(MigrationState *ms)
>> +void flush_page_queue(void)
>>  {
>> -    struct MigrationSrcPageRequest *mspr, *next_mspr;
>> +    struct RAMSrcPageRequest *mspr, *next_mspr;
>> +    RAMState *rs = &ram_state;
>>      /* This queue generally should be empty - but in the case of a failed
>>       * migration might have some droppings in.
>>       */
>>      rcu_read_lock();
>
> Could I ask why we are taking the RCU read lock rather than the mutex
> here?

I will leave this one for Dave.


>
>> -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
>> +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
>>          memory_region_unref(mspr->rb->mr);
>> -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
>> +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
>>          g_free(mspr);
>>      }
>>      rcu_read_unlock();
>
> Thanks,
>
> -- peterx


* Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-29 18:02   ` Dr. David Alan Gilbert
@ 2017-03-30 16:19     ` Juan Quintela
  2017-03-30 16:27     ` Juan Quintela
  1 sibling, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:19 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Rename it to preffer_xbzrle that is a more descriptive name.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 6a39704..591cf89 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -217,6 +217,9 @@ struct RAMState {
>>      uint64_t dirty_pages_rate;
>>      /* Count of requests incoming from destination */
>>      uint64_t postcopy_requests;
>> +    /* Should we move to xbzrle after the 1st round
>> +       of compression */
>> +    bool preffer_xbzrle;
>
> That's 'prefer' - however, do we need it at all?
> How about just replacing it by:
>    !ram_bulk_stage && migrate_use_xbzrle()
>
> would that work?

Changed to that.  I am not sure whether it is simpler or more complex, but
it is clearly one variable fewer, so ...

Thanks, Juan.


* Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-29 18:02   ` Dr. David Alan Gilbert
  2017-03-30 16:19     ` Juan Quintela
@ 2017-03-30 16:27     ` Juan Quintela
  1 sibling, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:27 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Rename it to preffer_xbzrle that is a more descriptive name.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 6a39704..591cf89 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -217,6 +217,9 @@ struct RAMState {
>>      uint64_t dirty_pages_rate;
>>      /* Count of requests incoming from destination */
>>      uint64_t postcopy_requests;
>> +    /* Should we move to xbzrle after the 1st round
>> +       of compression */
>> +    bool preffer_xbzrle;
>
> That's 'prefer' - however, do we need it at all?
> How about just replacing it by:
>    !ram_bulk_stage && migrate_use_xbzrle()
>
> would that work?

        if (migrate_use_compression() &&
            (rs->ram_bulk_stage || !migrate_use_xbzrle())) {
            res = ram_save_compressed_page(rs, ms, pss, last_stage);

I changed to this to remove the double negation (!(!...)).

Later, Juan.


>
> Dave
>
>>      /* protects modification of the bitmap */
>>      QemuMutex bitmap_mutex;
>>      /* Ram Bitmap protected by RCU */
>> @@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
>>  /* The empty QEMUFileOps will be used by file in CompressParam */
>>  static const QEMUFileOps empty_ops = { };
>>  
>> -static bool compression_switch;
>>  static DecompressParam *decomp_param;
>>  static QemuThread *decompress_threads;
>>  static QemuMutex decomp_done_lock;
>> @@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
>>      if (!migrate_use_compression()) {
>>          return;
>>      }
>> -    compression_switch = true;
>>      thread_count = migrate_compress_threads();
>>      compress_threads = g_new0(QemuThread, thread_count);
>>      comp_param = g_new0(CompressParam, thread_count);
>> @@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>>                   * point. In theory, xbzrle can do better than compression.
>>                   */
>>                  flush_compressed_data(rs);
>> -                compression_switch = false;
>> +                rs->preffer_xbzrle = true;
>>              }
>>          }
>>          /* Didn't find anything this time, but try again on the new block */
>> @@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>>      /* Check the pages is dirty and if it is send it */
>>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>>          unsigned long *unsentmap;
>> -        if (compression_switch && migrate_use_compression()) {
>> +        if (!rs->preffer_xbzrle && migrate_use_compression()) {
>>              res = ram_save_compressed_page(rs, ms, pss, last_stage);
>>          } else {
>>              res = ram_save_page(rs, ms, pss, last_stage);
>> -- 
>> 2.9.3
>> 
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-30  7:52   ` Peter Xu
@ 2017-03-30 16:30     ` Juan Quintela
  2017-03-31  3:04       ` Peter Xu
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-03-30 16:30 UTC (permalink / raw)
  To: Peter Xu; +Cc: qemu-devel, dgilbert

Peter Xu <peterx@redhat.com> wrote:
> On Thu, Mar 23, 2017 at 09:45:30PM +0100, Juan Quintela wrote:
>> Rename it to preffer_xbzrle that is a more descriptive name.
>
> s/preffer/prefer/?
>        
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 6a39704..591cf89 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -217,6 +217,9 @@ struct RAMState {
>>      uint64_t dirty_pages_rate;
>>      /* Count of requests incoming from destination */
>>      uint64_t postcopy_requests;
>> +    /* Should we move to xbzrle after the 1st round
>> +       of compression */
>> +    bool preffer_xbzrle;
>>      /* protects modification of the bitmap */
>>      QemuMutex bitmap_mutex;
>>      /* Ram Bitmap protected by RCU */
>> @@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
>>  /* The empty QEMUFileOps will be used by file in CompressParam */
>>  static const QEMUFileOps empty_ops = { };
>>  
>> -static bool compression_switch;
>>  static DecompressParam *decomp_param;
>>  static QemuThread *decompress_threads;
>>  static QemuMutex decomp_done_lock;
>> @@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
>>      if (!migrate_use_compression()) {
>>          return;
>>      }
>> -    compression_switch = true;
>>      thread_count = migrate_compress_threads();
>>      compress_threads = g_new0(QemuThread, thread_count);
>>      comp_param = g_new0(CompressParam, thread_count);
>> @@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>>                   * point. In theory, xbzrle can do better than compression.
>>                   */
>>                  flush_compressed_data(rs);
>> -                compression_switch = false;
>> +                rs->preffer_xbzrle = true;
>>              }
>>          }
>>          /* Didn't find anything this time, but try again on the new block */
>> @@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>>      /* Check the pages is dirty and if it is send it */
>>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>>          unsigned long *unsentmap;
>> -        if (compression_switch && migrate_use_compression()) {
>> +        if (!rs->preffer_xbzrle && migrate_use_compression()) {
>
> IIUC this prefer_xbzrle can be dynamically calculated by existing
> states:
>
> static inline bool ram_compression_active(RAMState *rs)
> {
>     /*
>      * If xbzrle is on, stop using the data compression after first
>      * round of migration even if compression is enabled. In theory,
>      * xbzrle can do better than compression.
>      */
>     return migrate_use_compression() && \
>            (rs->ram_bulk_stage || !migrate_use_xbzrle());
> }
>
> Then this line can be written as:
>
>     if (ram_compression_active(rs)) {
>
> And if so, we can get rid of prefer_xbzrle, right?
>
> Having prefer_xbzrle should be slightly faster though, since it'll at
> least cache the above calculation.
>
> Thanks,

You arrived at the same conclusion as Dave.  As it was only used once,
I didn't create the extra function.

I stole your comment verbatim O:-)

Later, Juan.


* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-30 11:38     ` Juan Quintela
@ 2017-03-30 19:10       ` Dr. David Alan Gilbert
  2017-04-04 17:46         ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-30 19:10 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> We change the meaning of start to be the offset from the beggining of
> >> the block.
> >
> > s/beggining/beginning/
> >
> > Why do this?
> > We have:
> >    migration_bitmap_sync (all blocks)
> >    migration_bitmap_sync_range - called per block
> >    cpu_physical_memory_sync_dirty_bitmap
> >
> > Why keep migration_bitmap_sync_range having start/length as well as the block
> > if you could just rename it to migration_bitmap_sync_block and just give it the rb?
> > And since cpu_physical_memory_clear_dirty_range is lower level, why give it
> > the rb?
> 
> I did that in the previous series, then I remembered that I was not going
> to be able to sync only part of the range, as I will want in the future.
> 
> If you prefer, as an intermediate measure, to just move to blocks, I can
> do that, but the change is really small and I am not sure it makes sense.

OK then, but add a comment saying that you want to.
I'm still not sure whether cpu_physical_memory_clear_dirty_range should take the RB;
it feels like lower-level KVM stuff rather than something that knows about RAMBlocks.

Dave

> 
> Later, Juan.
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 28/51] ram: Remove ram_save_remaining
  2017-03-30 16:07     ` Juan Quintela
@ 2017-03-31  2:57       ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-31  2:57 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 30, 2017 at 06:07:11PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Mar 23, 2017 at 09:45:21PM +0100, Juan Quintela wrote:
> >> Just unfold it.  Move ram_bytes_remaining() with the rest of exported
> >> functions.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/ram.c | 19 +++++++------------
> >>  1 file changed, 7 insertions(+), 12 deletions(-)
> >> 
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index 3ae00e2..dd5a453 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -243,16 +243,16 @@ uint64_t xbzrle_mig_pages_overflow(void)
> >>      return ram_state.xbzrle_overflows;
> >>  }
> >>  
> >> -static ram_addr_t ram_save_remaining(void)
> >> -{
> >> -    return ram_state.migration_dirty_pages;
> >> -}
> >> -
> >>  uint64_t ram_bytes_transferred(void)
> >>  {
> >>      return ram_state.bytes_transferred;
> >>  }
> >>  
> >> +uint64_t ram_bytes_remaining(void)
> >> +{
> >> +    return ram_state.migration_dirty_pages * TARGET_PAGE_SIZE;
> >> +}
> >> +
> >>  /* used by the search for pages to send */
> >>  struct PageSearchStatus {
> >>      /* Current block being searched */
> >> @@ -1438,11 +1438,6 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
> >>      }
> >>  }
> >>  
> >> -uint64_t ram_bytes_remaining(void)
> >> -{
> >> -    return ram_save_remaining() * TARGET_PAGE_SIZE;
> >> -}
> >> -
> >>  uint64_t ram_bytes_total(void)
> >>  {
> >>      RAMBlock *block;
> >> @@ -2210,7 +2205,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
> >>      RAMState *rs = opaque;
> >>      uint64_t remaining_size;
> >>  
> >> -    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> >> +    remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> >
> > Here we can directly use ram_bytes_remaining()?
> >
> >>  
> >>      if (!migration_in_postcopy(migrate_get_current()) &&
> >>          remaining_size < max_size) {
> >> @@ -2219,7 +2214,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
> >>          migration_bitmap_sync(rs);
> >>          rcu_read_unlock();
> >>          qemu_mutex_unlock_iothread();
> >> -        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
> >> +        remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> >
> > Same here?
> 
> To be consistent, I tried not to use the "accessor" functions inside
> this file.  If you are in ram.c, you have to know about RAMState.

Then I'm okay with it. Thanks!

-- peterx


* Re: [Qemu-devel] [PATCH 29/51] ram: Move last_req_rb to RAMState
  2017-03-30 16:08     ` Juan Quintela
@ 2017-03-31  3:00       ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-31  3:00 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 30, 2017 at 06:08:45PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Mar 23, 2017 at 09:45:22PM +0100, Juan Quintela wrote:
> >> It was on MigrationState when it is only used inside ram.c for
> >> postcopy.  Problem is that we need to access it without being able to
> >> pass it RAMState directly.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  include/migration/migration.h | 2 --
> >>  migration/migration.c         | 1 -
> >>  migration/ram.c               | 7 +++++--
> >>  3 files changed, 5 insertions(+), 5 deletions(-)
> >> 
> >> diff --git a/include/migration/migration.h b/include/migration/migration.h
> >> index 84cef4b..e032fb0 100644
> >> --- a/include/migration/migration.h
> >> +++ b/include/migration/migration.h
> >> @@ -189,8 +189,6 @@ struct MigrationState
> >>      /* Queue of outstanding page requests from the destination */
> >>      QemuMutex src_page_req_mutex;
> >>      QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
> >> -    /* The RAMBlock used in the last src_page_request */
> >> -    RAMBlock *last_req_rb;
> >>      /* The semaphore is used to notify COLO thread that failover is finished */
> >>      QemuSemaphore colo_exit_sem;
> >>  
> >> diff --git a/migration/migration.c b/migration/migration.c
> >> index e532430..b220941 100644
> >> --- a/migration/migration.c
> >> +++ b/migration/migration.c
> >> @@ -1118,7 +1118,6 @@ MigrationState *migrate_init(const MigrationParams *params)
> >>      s->postcopy_after_devices = false;
> >>      s->postcopy_requests = 0;
> >>      s->migration_thread_running = false;
> >> -    s->last_req_rb = NULL;
> >>      error_free(s->error);
> >>      s->error = NULL;
> >>  
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index dd5a453..325a0f3 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -203,6 +203,8 @@ struct RAMState {
> >>      QemuMutex bitmap_mutex;
> >>      /* Ram Bitmap protected by RCU */
> >>      RAMBitmap *ram_bitmap;
> >> +    /* The RAMBlock used in the last src_page_request */
> >                                                         ^ "s" missing
> >
> > Besides:
> 
> Only the last one matters here; we don't really care about the
> previous ones, no?

I preferred "src_page_requests" since that's the variable name (so
then people can do symbol search on that). Anyway, that's trivial, so
please feel free to ignore it. :-)

-- peterx


* Re: [Qemu-devel] [PATCH 37/51] ram: Move compression_switch to RAMState
  2017-03-30 16:30     ` Juan Quintela
@ 2017-03-31  3:04       ` Peter Xu
  0 siblings, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-31  3:04 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 30, 2017 at 06:30:36PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Mar 23, 2017 at 09:45:30PM +0100, Juan Quintela wrote:
> >> Rename it to preffer_xbzrle that is a more descriptive name.
> >
> > s/preffer/prefer/?
> >        
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> ---
> >>  migration/ram.c | 9 +++++----
> >>  1 file changed, 5 insertions(+), 4 deletions(-)
> >> 
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index 6a39704..591cf89 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -217,6 +217,9 @@ struct RAMState {
> >>      uint64_t dirty_pages_rate;
> >>      /* Count of requests incoming from destination */
> >>      uint64_t postcopy_requests;
> >> +    /* Should we move to xbzrle after the 1st round
> >> +       of compression */
> >> +    bool preffer_xbzrle;
> >>      /* protects modification of the bitmap */
> >>      QemuMutex bitmap_mutex;
> >>      /* Ram Bitmap protected by RCU */
> >> @@ -335,7 +338,6 @@ static QemuCond comp_done_cond;
> >>  /* The empty QEMUFileOps will be used by file in CompressParam */
> >>  static const QEMUFileOps empty_ops = { };
> >>  
> >> -static bool compression_switch;
> >>  static DecompressParam *decomp_param;
> >>  static QemuThread *decompress_threads;
> >>  static QemuMutex decomp_done_lock;
> >> @@ -419,7 +421,6 @@ void migrate_compress_threads_create(void)
> >>      if (!migrate_use_compression()) {
> >>          return;
> >>      }
> >> -    compression_switch = true;
> >>      thread_count = migrate_compress_threads();
> >>      compress_threads = g_new0(QemuThread, thread_count);
> >>      comp_param = g_new0(CompressParam, thread_count);
> >> @@ -1091,7 +1092,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
> >>                   * point. In theory, xbzrle can do better than compression.
> >>                   */
> >>                  flush_compressed_data(rs);
> >> -                compression_switch = false;
> >> +                rs->preffer_xbzrle = true;
> >>              }
> >>          }
> >>          /* Didn't find anything this time, but try again on the new block */
> >> @@ -1323,7 +1324,7 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms,
> >>      /* Check the pages is dirty and if it is send it */
> >>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
> >>          unsigned long *unsentmap;
> >> -        if (compression_switch && migrate_use_compression()) {
> >> +        if (!rs->preffer_xbzrle && migrate_use_compression()) {
> >
> > IIUC this prefer_xbzrle can be dynamically calculated by existing
> > states:
> >
> > static inline bool ram_compression_active(RAMState *rs)
> > {
> >     /*
> >      * If xbzrle is on, stop using the data compression after first
> >      * round of migration even if compression is enabled. In theory,
> >      * xbzrle can do better than compression.
> >      */
> >     return migrate_use_compression() && \
> >            (rs->ram_bulk_stage || !migrate_use_xbzrle());
> > }
> >
> > Then this line can be written as:
> >
> >     if (ram_compression_active(rs)) {
> >
> > And if so, we can get rid of prefer_xbzrle, right?
> >
> > Having prefer_xbzrle should be slightly faster though, since it'll at
> > least cache the above calculation.
> >
> > Thanks,
> 
> You arrived at the same conclusion as Dave.  As it was only used once,
> I didn't create the extra function.

I would still slightly prefer an extra function (which I think will
definitely be inlined) for readability, and in case we use it somewhere
else in the future; then there is no need to think it through twice.

But I'm okay without it as well. :)

> 
> I stole your comment verbatim O:-)

My pleasure!

-- peterx


* Re: [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block Juan Quintela
@ 2017-03-31  8:35   ` Peter Xu
  2017-03-31  8:40   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 167+ messages in thread
From: Peter Xu @ 2017-03-31  8:35 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel, dgilbert

On Thu, Mar 23, 2017 at 09:45:37PM +0100, Juan Quintela wrote:
> We were setting it far away of when we changed it.  Now everything is
> done inside save_page_header.  Once there, reorganize code to pass
> RAMState.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Nit: would it be worth also mentioning the unified setup of the CONTINUE
flag in the commit message? Either way:

Reviewed-by: Peter Xu <peterx@redhat.com>

> ---
>  migration/ram.c | 36 +++++++++++++++---------------------
>  1 file changed, 15 insertions(+), 21 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 83c749c..6cd77b5 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -453,18 +453,22 @@ void migrate_compress_threads_create(void)
>   * @offset: offset inside the block for the page
>   *          in the lower bits, it contains flags
>   */
> -static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
> +static size_t save_page_header(RAMState *rs, RAMBlock *block, ram_addr_t offset)
>  {
>      size_t size, len;
>  
> -    qemu_put_be64(f, offset);
> +    if (block == rs->last_sent_block) {
> +        offset |= RAM_SAVE_FLAG_CONTINUE;
> +    }
> +    qemu_put_be64(rs->f, offset);
>      size = 8;
>  
>      if (!(offset & RAM_SAVE_FLAG_CONTINUE)) {
>          len = strlen(block->idstr);
> -        qemu_put_byte(f, len);
> -        qemu_put_buffer(f, (uint8_t *)block->idstr, len);
> +        qemu_put_byte(rs->f, len);
> +        qemu_put_buffer(rs->f, (uint8_t *)block->idstr, len);
>          size += 1 + len;
> +        rs->last_sent_block = block;
>      }
>      return size;
>  }
> @@ -584,7 +588,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>      }
>  
>      /* Send XBZRLE based compressed page */
> -    bytes_xbzrle = save_page_header(rs->f, block,
> +    bytes_xbzrle = save_page_header(rs, block,
>                                      offset | RAM_SAVE_FLAG_XBZRLE);
>      qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
>      qemu_put_be16(rs->f, encoded_len);
> @@ -769,7 +773,7 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          rs->zero_pages++;
>          rs->bytes_transferred +=
> -            save_page_header(rs->f, block, offset | RAM_SAVE_FLAG_COMPRESS);
> +            save_page_header(rs, block, offset | RAM_SAVE_FLAG_COMPRESS);
>          qemu_put_byte(rs->f, 0);
>          rs->bytes_transferred += 1;
>          pages = 1;
> @@ -826,9 +830,6 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  
>      current_addr = block->offset + offset;
>  
> -    if (block == rs->last_sent_block) {
> -        offset |= RAM_SAVE_FLAG_CONTINUE;
> -    }
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_xmit > 0) {
> @@ -860,8 +861,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  
>      /* XBZRLE overflow or normal page */
>      if (pages == -1) {
> -        rs->bytes_transferred += save_page_header(rs->f, block,
> -                                               offset | RAM_SAVE_FLAG_PAGE);
> +        rs->bytes_transferred += save_page_header(rs, block,
> +                                                  offset | RAM_SAVE_FLAG_PAGE);
>          if (send_async) {
>              qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
>                                    migrate_release_ram() &
> @@ -882,10 +883,11 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>                                  ram_addr_t offset)
>  {
> +    RAMState *rs = &ram_state;
>      int bytes_sent, blen;
>      uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
>  
> -    bytes_sent = save_page_header(f, block, offset |
> +    bytes_sent = save_page_header(rs, block, offset |
>                                    RAM_SAVE_FLAG_COMPRESS_PAGE);
>      blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
>                                       migrate_compress_level());
> @@ -1016,7 +1018,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>              pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
> -                bytes_xmit = save_page_header(rs->f, block, offset |
> +                bytes_xmit = save_page_header(rs, block, offset |
>                                                RAM_SAVE_FLAG_COMPRESS_PAGE);
>                  blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
>                                                   migrate_compress_level());
> @@ -1033,7 +1035,6 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>                  ram_release_pages(block->idstr, pss->offset, pages);
>              }
>          } else {
> -            offset |= RAM_SAVE_FLAG_CONTINUE;
>              pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
>                  pages = compress_page_with_multi_thread(rs, block, offset);
> @@ -1330,13 +1331,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>          if (unsentmap) {
>              clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
>          }
> -        /* Only update last_sent_block if a block was actually sent; xbzrle
> -         * might have decided the page was identical so didn't bother writing
> -         * to the stream.
> -         */
> -        if (res > 0) {
> -            rs->last_sent_block = pss->block;
> -        }
>      }
>  
>      return res;
> -- 
> 2.9.3
> 
> 

-- peterx


* Re: [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 44/51] ram: reorganize last_sent_block Juan Quintela
  2017-03-31  8:35   ` Peter Xu
@ 2017-03-31  8:40   ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31  8:40 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We were setting it far away of when we changed it.  Now everything is
> done inside save_page_header.  Once there, reorganize code to pass
> RAMState.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 36 +++++++++++++++---------------------
>  1 file changed, 15 insertions(+), 21 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 83c749c..6cd77b5 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -453,18 +453,22 @@ void migrate_compress_threads_create(void)
>   * @offset: offset inside the block for the page
>   *          in the lower bits, it contains flags
>   */
> -static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
> +static size_t save_page_header(RAMState *rs, RAMBlock *block, ram_addr_t offset)
>  {
>      size_t size, len;
>  
> -    qemu_put_be64(f, offset);
> +    if (block == rs->last_sent_block) {
> +        offset |= RAM_SAVE_FLAG_CONTINUE;
> +    }
> +    qemu_put_be64(rs->f, offset);
>      size = 8;
>  
>      if (!(offset & RAM_SAVE_FLAG_CONTINUE)) {
>          len = strlen(block->idstr);
> -        qemu_put_byte(f, len);
> -        qemu_put_buffer(f, (uint8_t *)block->idstr, len);
> +        qemu_put_byte(rs->f, len);
> +        qemu_put_buffer(rs->f, (uint8_t *)block->idstr, len);
>          size += 1 + len;
> +        rs->last_sent_block = block;
>      }
>      return size;
>  }
> @@ -584,7 +588,7 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>      }
>  
>      /* Send XBZRLE based compressed page */
> -    bytes_xbzrle = save_page_header(rs->f, block,
> +    bytes_xbzrle = save_page_header(rs, block,
>                                      offset | RAM_SAVE_FLAG_XBZRLE);
>      qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
>      qemu_put_be16(rs->f, encoded_len);
> @@ -769,7 +773,7 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          rs->zero_pages++;
>          rs->bytes_transferred +=
> -            save_page_header(rs->f, block, offset | RAM_SAVE_FLAG_COMPRESS);
> +            save_page_header(rs, block, offset | RAM_SAVE_FLAG_COMPRESS);
>          qemu_put_byte(rs->f, 0);
>          rs->bytes_transferred += 1;
>          pages = 1;
> @@ -826,9 +830,6 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  
>      current_addr = block->offset + offset;
>  
> -    if (block == rs->last_sent_block) {
> -        offset |= RAM_SAVE_FLAG_CONTINUE;
> -    }
>      if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_xmit > 0) {
> @@ -860,8 +861,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  
>      /* XBZRLE overflow or normal page */
>      if (pages == -1) {
> -        rs->bytes_transferred += save_page_header(rs->f, block,
> -                                               offset | RAM_SAVE_FLAG_PAGE);
> +        rs->bytes_transferred += save_page_header(rs, block,
> +                                                  offset | RAM_SAVE_FLAG_PAGE);
>          if (send_async) {
>              qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
>                                    migrate_release_ram() &
> @@ -882,10 +883,11 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>  static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>                                  ram_addr_t offset)
>  {
> +    RAMState *rs = &ram_state;
>      int bytes_sent, blen;
>      uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
>  
> -    bytes_sent = save_page_header(f, block, offset |
> +    bytes_sent = save_page_header(rs, block, offset |
>                                    RAM_SAVE_FLAG_COMPRESS_PAGE);
>      blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
>                                       migrate_compress_level());
> @@ -1016,7 +1018,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>              pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
> -                bytes_xmit = save_page_header(rs->f, block, offset |
> +                bytes_xmit = save_page_header(rs, block, offset |
>                                                RAM_SAVE_FLAG_COMPRESS_PAGE);
>                  blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
>                                                   migrate_compress_level());
> @@ -1033,7 +1035,6 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>                  ram_release_pages(block->idstr, pss->offset, pages);
>              }
>          } else {
> -            offset |= RAM_SAVE_FLAG_CONTINUE;
>              pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
>                  pages = compress_page_with_multi_thread(rs, block, offset);
> @@ -1330,13 +1331,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>          if (unsentmap) {
>              clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
>          }
> -        /* Only update last_sent_block if a block was actually sent; xbzrle
> -         * might have decided the page was identical so didn't bother writing
> -         * to the stream.
> -         */
> -        if (res > 0) {
> -            rs->last_sent_block = pss->block;
> -        }
>      }
>  
>      return res;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 46/51] ram: Remember last_page instead of last_offset
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 46/51] ram: Remember last_page instead of last_offset Juan Quintela
@ 2017-03-31  9:09   ` Dr. David Alan Gilbert
  2017-04-04 18:24     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31  9:09 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index b1a031e..57b776b 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -171,8 +171,8 @@ struct RAMState {
>      RAMBlock *last_seen_block;
>      /* Last block from where we have sent data */
>      RAMBlock *last_sent_block;
> -    /* Last offset we have sent data from */
> -    ram_addr_t last_offset;
> +    /* Last dirty page we have sent */

Can you make that 'Last dirty target page we have sent',
just so we know which shape of page we're dealing with?

> +    ram_addr_t last_page;
>      /* last ram version we have seen */
>      uint32_t last_version;
>      /* We are in the first round */
> @@ -1063,7 +1063,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>      pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
>                                                page);
>      if (pss->complete_round && pss->block == rs->last_seen_block &&
> -        pss->offset >= rs->last_offset) {
> +        pss->offset >= rs->last_page) {

That's odd; isn't pss->offset still in bytes?

Dave

>          /*
>           * We've been once around the RAM and haven't found anything.
>           * Give up.
> @@ -1396,7 +1396,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>      }
>  
>      pss.block = rs->last_seen_block;
> -    pss.offset = rs->last_offset;
> +    pss.offset = rs->last_page << TARGET_PAGE_BITS;
>      pss.complete_round = false;
>  
>      if (!pss.block) {
> @@ -1418,7 +1418,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>      } while (!pages && again);
>  
>      rs->last_seen_block = pss.block;
> -    rs->last_offset = pss.offset;
> +    rs->last_page = pss.offset >> TARGET_PAGE_BITS;
>  
>      return pages;
>  }
> @@ -1493,7 +1493,7 @@ static void ram_state_reset(RAMState *rs)
>  {
>      rs->last_seen_block = NULL;
>      rs->last_sent_block = NULL;
> -    rs->last_offset = 0;
> +    rs->last_page = 0;
>      rs->last_version = ram_list.version;
>      rs->ram_bulk_stage = true;
>  }
> @@ -1838,7 +1838,7 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
>      /* Easiest way to make sure we don't resume in the middle of a host-page */
>      rs->last_seen_block = NULL;
>      rs->last_sent_block = NULL;
> -    rs->last_offset     = 0;
> +    rs->last_page = 0;
>  
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          unsigned long first = block->offset >> TARGET_PAGE_BITS;
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 45/51] ram: Use page number instead of an address for the bitmap operations
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 45/51] ram: Use page number instead of an address for the bitmap operations Juan Quintela
@ 2017-03-31 12:22   ` Dr. David Alan Gilbert
  2017-04-04 18:21     ` Juan Quintela
  0 siblings, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 12:22 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We use an unsigned long for the page number.  Notice that our bitmaps
> already got that for the index, so we have that limit.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 76 ++++++++++++++++++++++++++-------------------------------
>  1 file changed, 34 insertions(+), 42 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 6cd77b5..b1a031e 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -611,13 +611,12 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>   * @rs: current RAM state
>   * @rb: RAMBlock where to search for dirty pages
>   * @start: starting address (typically so we can continue from previous page)
> - * @ram_addr_abs: pointer into which to store the address of the dirty page
> - *                within the global ram_addr space
> + * @page: pointer into where to store the dirty page

I'd prefer it if you could call it 'page_abs'; it often gets tricky to know
whether we're talking about a page offset within a RAMBlock or an offset within
the whole bitmap.
(I wish we had different index types.)

>   */
>  static inline
>  ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>                                         ram_addr_t start,
> -                                       ram_addr_t *ram_addr_abs)
> +                                       unsigned long *page)
>  {
>      unsigned long base = rb->offset >> TARGET_PAGE_BITS;
>      unsigned long nr = base + (start >> TARGET_PAGE_BITS);
> @@ -634,17 +633,17 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>          next = find_next_bit(bitmap, size, nr);
>      }
>  
> -    *ram_addr_abs = next << TARGET_PAGE_BITS;
> +    *page = next;
>      return (next - base) << TARGET_PAGE_BITS;
>  }
>  
> -static inline bool migration_bitmap_clear_dirty(RAMState *rs, ram_addr_t addr)
> +static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> +                                                unsigned long page)
>  {
>      bool ret;
> -    int nr = addr >> TARGET_PAGE_BITS;
>      unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
>  
> -    ret = test_and_clear_bit(nr, bitmap);
> +    ret = test_and_clear_bit(page, bitmap);
>  
>      if (ret) {
>          rs->migration_dirty_pages--;
> @@ -1056,14 +1055,13 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>   * @rs: current RAM state
>   * @pss: data about the state of the current dirty page scan
>   * @again: set to false if the search has scanned the whole of RAM
> - * @ram_addr_abs: pointer into which to store the address of the dirty page
> - *                within the global ram_addr space
> + * @page: pointer into where to store the dirty page
>   */
>  static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
> -                             bool *again, ram_addr_t *ram_addr_abs)
> +                             bool *again, unsigned long *page)
>  {
>      pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
> -                                              ram_addr_abs);
> +                                              page);
>      if (pss->complete_round && pss->block == rs->last_seen_block &&
>          pss->offset >= rs->last_offset) {
>          /*
> @@ -1111,11 +1109,10 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>   *
>   * @rs: current RAM state
>   * @offset: used to return the offset within the RAMBlock
> - * @ram_addr_abs: pointer into which to store the address of the dirty page
> - *                within the global ram_addr space
> + * @page: pointer into where to store the dirty page
>   */
>  static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
> -                              ram_addr_t *ram_addr_abs)
> +                              unsigned long *page)
>  {
>      RAMBlock *block = NULL;
>  
> @@ -1125,8 +1122,7 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
>                                  QSIMPLEQ_FIRST(&rs->src_page_requests);
>          block = entry->rb;
>          *offset = entry->offset;
> -        *ram_addr_abs = (entry->offset + entry->rb->offset) &
> -                        TARGET_PAGE_MASK;
> +        *page = (entry->offset + entry->rb->offset) >> TARGET_PAGE_BITS;
>  
>          if (entry->len > TARGET_PAGE_SIZE) {
>              entry->len -= TARGET_PAGE_SIZE;
> @@ -1151,18 +1147,17 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
>   *
>   * @rs: current RAM state
>   * @pss: data about the state of the current dirty page scan
> - * @ram_addr_abs: pointer into which to store the address of the dirty page
> - *                within the global ram_addr space
> + * @page: pointer into where to store the dirty page
>   */
>  static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
> -                            ram_addr_t *ram_addr_abs)
> +                            unsigned long *page)
>  {
>      RAMBlock  *block;
>      ram_addr_t offset;
>      bool dirty;
>  
>      do {
> -        block = unqueue_page(rs, &offset, ram_addr_abs);
> +        block = unqueue_page(rs, &offset, page);
>          /*
>           * We're sending this page, and since it's postcopy nothing else
>           * will dirty it, and we must make sure it doesn't get sent again
> @@ -1172,17 +1167,15 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
>          if (block) {
>              unsigned long *bitmap;
>              bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
> -            dirty = test_bit(*ram_addr_abs >> TARGET_PAGE_BITS, bitmap);
> +            dirty = test_bit(*page, bitmap);
>              if (!dirty) {
> -                trace_get_queued_page_not_dirty(
> -                    block->idstr, (uint64_t)offset,
> -                    (uint64_t)*ram_addr_abs,
> -                    test_bit(*ram_addr_abs >> TARGET_PAGE_BITS,
> -                         atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
> +                trace_get_queued_page_not_dirty(block->idstr, (uint64_t)offset,
> +                    *page,
> +                    test_bit(*page,
> +                             atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
>              } else {
> -                trace_get_queued_page(block->idstr,
> -                                      (uint64_t)offset,
> -                                      (uint64_t)*ram_addr_abs);
> +                trace_get_queued_page(block->idstr, (uint64_t)offset,
> +                                     *page);

I think you need to fix the trace_ definitions for get_queued_page
and get_queued_page_not_dirty; they're currently taking uint64_t's for
ram_addr, and they now need to be unsigned long's (with the
corresponding format-string changes).


Dave

>              }
>          }
>  
> @@ -1308,15 +1301,15 @@ err:
>   * @ms: current migration state
>   * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
> - * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
> + * @page: page number of the dirty page
>   */
>  static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
> -                                bool last_stage, ram_addr_t dirty_ram_abs)
> +                                bool last_stage, unsigned long page)
>  {
>      int res = 0;
>  
>      /* Check the pages is dirty and if it is send it */
> -    if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
> +    if (migration_bitmap_clear_dirty(rs, page)) {
>          unsigned long *unsentmap;
>          if (!rs->preffer_xbzrle && migrate_use_compression()) {
>              res = ram_save_compressed_page(rs, pss, last_stage);
> @@ -1329,7 +1322,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>          }
>          unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
>          if (unsentmap) {
> -            clear_bit(dirty_ram_abs >> TARGET_PAGE_BITS, unsentmap);
> +            clear_bit(page, unsentmap);
>          }
>      }
>  
> @@ -1351,24 +1344,24 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>   * @ms: current migration state
>   * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
> - * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
> + * @page: Page number of the dirty page
>   */
>  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>                                bool last_stage,
> -                              ram_addr_t dirty_ram_abs)
> +                              unsigned long page)
>  {
>      int tmppages, pages = 0;
>      size_t pagesize = qemu_ram_pagesize(pss->block);
>  
>      do {
> -        tmppages = ram_save_target_page(rs, pss, last_stage, dirty_ram_abs);
> +        tmppages = ram_save_target_page(rs, pss, last_stage, page);
>          if (tmppages < 0) {
>              return tmppages;
>          }
>  
>          pages += tmppages;
>          pss->offset += TARGET_PAGE_SIZE;
> -        dirty_ram_abs += TARGET_PAGE_SIZE;
> +        page++;
>      } while (pss->offset & (pagesize - 1));
>  
>      /* The offset we leave with is the last one we looked at */
> @@ -1395,8 +1388,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>      PageSearchStatus pss;
>      int pages = 0;
>      bool again, found;
> -    ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
> -                                 ram_addr_t space */
> +    unsigned long page; /* Page number of the dirty page */
>  
>      /* No dirty page as there is zero RAM */
>      if (!ram_bytes_total()) {
> @@ -1413,15 +1405,15 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>  
>      do {
>          again = true;
> -        found = get_queued_page(rs, &pss, &dirty_ram_abs);
> +        found = get_queued_page(rs, &pss, &page);
>  
>          if (!found) {
>              /* priority queue empty, so just search for something dirty */
> -            found = find_dirty_block(rs, &pss, &again, &dirty_ram_abs);
> +            found = find_dirty_block(rs, &pss, &again, &page);
>          }
>  
>          if (found) {
> -            pages = ram_save_host_page(rs, &pss, last_stage, dirty_ram_abs);
> +            pages = ram_save_host_page(rs, &pss, last_stage, page);
>          }
>      } while (!pages && again);
>  
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 47/51] ram: Change offset field in PageSearchStatus to page
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 47/51] ram: Change offset field in PageSearchStatus to page Juan Quintela
@ 2017-03-31 12:31   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 12:31 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We are moving everything to work on pages, not addresses.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 50 +++++++++++++++++++++++++-------------------------
>  1 file changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 57b776b..ef3b428 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -298,8 +298,8 @@ uint64_t ram_postcopy_requests(void)
>  struct PageSearchStatus {
>      /* Current block being searched */
>      RAMBlock    *block;
> -    /* Current offset to search from */
> -    ram_addr_t   offset;
> +    /* Current page to search from */
> +    unsigned long page;
>      /* Set once we wrap around */
>      bool         complete_round;
>  };
> @@ -610,16 +610,16 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>   *
>   * @rs: current RAM state
>   * @rb: RAMBlock where to search for dirty pages
> - * @start: starting address (typically so we can continue from previous page)
> + * @start: page where we start the search
>   * @page: pointer into where to store the dirty page
>   */
>  static inline
> -ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> -                                       ram_addr_t start,
> -                                       unsigned long *page)
> +unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> +                                          unsigned long start,
> +                                          unsigned long *page)
>  {
>      unsigned long base = rb->offset >> TARGET_PAGE_BITS;
> -    unsigned long nr = base + (start >> TARGET_PAGE_BITS);
> +    unsigned long nr = base + start;
>      uint64_t rb_size = rb->used_length;
>      unsigned long size = base + (rb_size >> TARGET_PAGE_BITS);
>      unsigned long *bitmap;
> @@ -634,7 +634,7 @@ ram_addr_t migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>      }
>  
>      *page = next;
> -    return (next - base) << TARGET_PAGE_BITS;
> +    return next - base;
>  }
>  
>  static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> @@ -812,7 +812,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>      int ret;
>      bool send_async = true;
>      RAMBlock *block = pss->block;
> -    ram_addr_t offset = pss->offset;
> +    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
>  
>      p = block->host + offset;
>  
> @@ -844,7 +844,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)
>               * page would be stale
>               */
>              xbzrle_cache_zero_page(rs, current_addr);
> -            ram_release_pages(block->idstr, pss->offset, pages);
> +            ram_release_pages(block->idstr, offset, pages);
>          } else if (!rs->ram_bulk_stage &&
>                     !migration_in_postcopy() && migrate_use_xbzrle()) {
>              pages = save_xbzrle_page(rs, &p, current_addr, block,
> @@ -987,7 +987,7 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>      uint8_t *p;
>      int ret, blen;
>      RAMBlock *block = pss->block;
> -    ram_addr_t offset = pss->offset;
> +    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;
>  
>      p = block->host + offset;
>  
> @@ -1031,14 +1031,14 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>                  }
>              }
>              if (pages > 0) {
> -                ram_release_pages(block->idstr, pss->offset, pages);
> +                ram_release_pages(block->idstr, offset, pages);
>              }
>          } else {
>              pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
>                  pages = compress_page_with_multi_thread(rs, block, offset);
>              } else {
> -                ram_release_pages(block->idstr, pss->offset, pages);
> +                ram_release_pages(block->idstr, offset, pages);
>              }
>          }
>      }
> @@ -1060,10 +1060,9 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>  static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>                               bool *again, unsigned long *page)
>  {
> -    pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
> -                                              page);
> +    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page, page);
>      if (pss->complete_round && pss->block == rs->last_seen_block &&
> -        pss->offset >= rs->last_page) {
> +        pss->page >= rs->last_page) {
>          /*
>           * We've been once around the RAM and haven't found anything.
>           * Give up.
> @@ -1071,9 +1070,9 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>          *again = false;
>          return false;
>      }
> -    if (pss->offset >= pss->block->used_length) {
> +    if ((pss->page << TARGET_PAGE_BITS) >= pss->block->used_length) {
>          /* Didn't find anything in this RAM Block */
> -        pss->offset = 0;
> +        pss->page = 0;
>          pss->block = QLIST_NEXT_RCU(pss->block, next);
>          if (!pss->block) {
>              /* Hit the end of the list */
> @@ -1196,7 +1195,7 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
>           * it just requested.
>           */
>          pss->block = block;
> -        pss->offset = offset;
> +        pss->page = offset >> TARGET_PAGE_BITS;
>      }
>  
>      return !!block;
> @@ -1351,7 +1350,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>                                unsigned long page)
>  {
>      int tmppages, pages = 0;
> -    size_t pagesize = qemu_ram_pagesize(pss->block);
> +    size_t pagesize_bits =
> +        qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
>  
>      do {
>          tmppages = ram_save_target_page(rs, pss, last_stage, page);
> @@ -1360,12 +1360,12 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>          }
>  
>          pages += tmppages;
> -        pss->offset += TARGET_PAGE_SIZE;
> +        pss->page++;
>          page++;
> -    } while (pss->offset & (pagesize - 1));
> +    } while (pss->page & (pagesize_bits - 1));
>  
>      /* The offset we leave with is the last one we looked at */
> -    pss->offset -= TARGET_PAGE_SIZE;
> +    pss->page--;
>      return pages;
>  }
>  
> @@ -1396,7 +1396,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>      }
>  
>      pss.block = rs->last_seen_block;
> -    pss.offset = rs->last_page << TARGET_PAGE_BITS;
> +    pss.page = rs->last_page;
>      pss.complete_round = false;
>  
>      if (!pss.block) {
> @@ -1418,7 +1418,7 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>      } while (!pages && again);
>  
>      rs->last_seen_block = pss.block;
> -    rs->last_page = pss.offset >> TARGET_PAGE_BITS;
> +    rs->last_page = pss.page;
>  
>      return pages;
>  }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 36/51] ram: Move QEMUFile into RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 36/51] ram: Move QEMUFile into RAMState Juan Quintela
@ 2017-03-31 14:21   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 14:21 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We receive the file from save_live operations and we don't use it
> until 3 or 4 levels of calls down.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 84 +++++++++++++++++++++++++--------------------------------
>  1 file changed, 37 insertions(+), 47 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 7667e73..6a39704 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -756,21 +756,20 @@ static void migration_bitmap_sync(RAMState *rs)
>   * Returns the number of pages written.
>   *
>   * @rs: current RAM state
> - * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @p: pointer to the page
>   */
> -static int save_zero_page(RAMState *rs, QEMUFile *f, RAMBlock *block,
> -                          ram_addr_t offset, uint8_t *p)
> +static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
> +                          uint8_t *p)
>  {
>      int pages = -1;
>  
>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>          rs->zero_pages++;
>          rs->bytes_transferred +=
> -            save_page_header(f, block, offset | RAM_SAVE_FLAG_COMPRESS);
> -        qemu_put_byte(f, 0);
> +            save_page_header(rs->f, block, offset | RAM_SAVE_FLAG_COMPRESS);
> +        qemu_put_byte(rs->f, 0);
>          rs->bytes_transferred += 1;
>          pages = 1;
>      }
> @@ -798,12 +797,11 @@ static void ram_release_pages(MigrationState *ms, const char *rbname,
>   *
>   * @rs: current RAM state
>   * @ms: current migration state
> - * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
>   */
> -static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> +static int ram_save_page(RAMState *rs, MigrationState *ms,
>                           PageSearchStatus *pss, bool last_stage)
>  {
>      int pages = -1;
> @@ -819,7 +817,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>  
>      /* In doubt sent page as normal */
>      bytes_xmit = 0;
> -    ret = ram_control_save_page(f, block->offset,
> +    ret = ram_control_save_page(rs->f, block->offset,
>                             offset, TARGET_PAGE_SIZE, &bytes_xmit);
>      if (bytes_xmit) {
>          rs->bytes_transferred += bytes_xmit;
> @@ -842,7 +840,7 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>              }
>          }
>      } else {
> -        pages = save_zero_page(rs, f, block, offset, p);
> +        pages = save_zero_page(rs, block, offset, p);
>          if (pages > 0) {
>              /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>               * page would be stale
> @@ -864,14 +862,14 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>  
>      /* XBZRLE overflow or normal page */
>      if (pages == -1) {
> -        rs->bytes_transferred += save_page_header(f, block,
> +        rs->bytes_transferred += save_page_header(rs->f, block,
>                                                 offset | RAM_SAVE_FLAG_PAGE);
>          if (send_async) {
> -            qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE,
> +            qemu_put_buffer_async(rs->f, p, TARGET_PAGE_SIZE,
>                                    migrate_release_ram() &
>                                    migration_in_postcopy(ms));
>          } else {
> -            qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
> +            qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);
>          }
>          rs->bytes_transferred += TARGET_PAGE_SIZE;
>          pages = 1;
> @@ -906,7 +904,7 @@ static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>      return bytes_sent;
>  }
>  
> -static void flush_compressed_data(RAMState *rs, QEMUFile *f)
> +static void flush_compressed_data(RAMState *rs)
>  {
>      int idx, len, thread_count;
>  
> @@ -926,7 +924,7 @@ static void flush_compressed_data(RAMState *rs, QEMUFile *f)
>      for (idx = 0; idx < thread_count; idx++) {
>          qemu_mutex_lock(&comp_param[idx].mutex);
>          if (!comp_param[idx].quit) {
> -            len = qemu_put_qemu_file(f, comp_param[idx].file);
> +            len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
>              rs->bytes_transferred += len;
>          }
>          qemu_mutex_unlock(&comp_param[idx].mutex);
> @@ -940,8 +938,8 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
>      param->offset = offset;
>  }
>  
> -static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
> -                                           RAMBlock *block, ram_addr_t offset)
> +static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
> +                                           ram_addr_t offset)
>  {
>      int idx, thread_count, bytes_xmit = -1, pages = -1;
>  
> @@ -951,7 +949,7 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
>          for (idx = 0; idx < thread_count; idx++) {
>              if (comp_param[idx].done) {
>                  comp_param[idx].done = false;
> -                bytes_xmit = qemu_put_qemu_file(f, comp_param[idx].file);
> +                bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
>                  qemu_mutex_lock(&comp_param[idx].mutex);
>                  set_compress_params(&comp_param[idx], block, offset);
>                  qemu_cond_signal(&comp_param[idx].cond);
> @@ -980,13 +978,11 @@ static int compress_page_with_multi_thread(RAMState *rs, QEMUFile *f,
>   *
>   * @rs: current RAM state
>   * @ms: current migration state
> - * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
>   */
>  static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
> -                                    QEMUFile *f,
>                                      PageSearchStatus *pss, bool last_stage)
>  {
>      int pages = -1;
> @@ -998,7 +994,7 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>  
>      p = block->host + offset;
>  
> -    ret = ram_control_save_page(f, block->offset,
> +    ret = ram_control_save_page(rs->f, block->offset,
>                                  offset, TARGET_PAGE_SIZE, &bytes_xmit);
>      if (bytes_xmit) {
>          rs->bytes_transferred += bytes_xmit;
> @@ -1020,20 +1016,20 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>           * is used to avoid resending the block name.
>           */
>          if (block != rs->last_sent_block) {
> -            flush_compressed_data(rs, f);
> -            pages = save_zero_page(rs, f, block, offset, p);
> +            flush_compressed_data(rs);
> +            pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
>                  /* Make sure the first page is sent out before other pages */
> -                bytes_xmit = save_page_header(f, block, offset |
> +                bytes_xmit = save_page_header(rs->f, block, offset |
>                                                RAM_SAVE_FLAG_COMPRESS_PAGE);
> -                blen = qemu_put_compression_data(f, p, TARGET_PAGE_SIZE,
> +                blen = qemu_put_compression_data(rs->f, p, TARGET_PAGE_SIZE,
>                                                   migrate_compress_level());
>                  if (blen > 0) {
>                      rs->bytes_transferred += bytes_xmit + blen;
>                      rs->norm_pages++;
>                      pages = 1;
>                  } else {
> -                    qemu_file_set_error(f, blen);
> +                    qemu_file_set_error(rs->f, blen);
>                      error_report("compressed data failed!");
>                  }
>              }
> @@ -1042,9 +1038,9 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>              }
>          } else {
>              offset |= RAM_SAVE_FLAG_CONTINUE;
> -            pages = save_zero_page(rs, f, block, offset, p);
> +            pages = save_zero_page(rs, block, offset, p);
>              if (pages == -1) {
> -                pages = compress_page_with_multi_thread(rs, f, block, offset);
> +                pages = compress_page_with_multi_thread(rs, block, offset);
>              } else {
>                  ram_release_pages(ms, block->idstr, pss->offset, pages);
>              }
> @@ -1061,13 +1057,12 @@ static int ram_save_compressed_page(RAMState *rs, MigrationState *ms,
>   * Returns if a page is found
>   *
>   * @rs: current RAM state
> - * @f: QEMUFile where to send the data
>   * @pss: data about the state of the current dirty page scan
>   * @again: set to false if the search has scanned the whole of RAM
>   * @ram_addr_abs: pointer into which to store the address of the dirty page
>   *                within the global ram_addr space
>   */
> -static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
> +static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>                               bool *again, ram_addr_t *ram_addr_abs)
>  {
>      pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
> @@ -1095,7 +1090,7 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
>                  /* If xbzrle is on, stop using the data compression at this
>                   * point. In theory, xbzrle can do better than compression.
>                   */
> -                flush_compressed_data(rs, f);
> +                flush_compressed_data(rs);
>                  compression_switch = false;
>              }
>          }
> @@ -1314,12 +1309,11 @@ err:
>   *
>   * @rs: current RAM state
>   * @ms: current migration state
> - * @f: QEMUFile where to send the data
>   * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
>   * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
>   */
> -static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> +static int ram_save_target_page(RAMState *rs, MigrationState *ms,
>                                  PageSearchStatus *pss,
>                                  bool last_stage,
>                                  ram_addr_t dirty_ram_abs)
> @@ -1330,9 +1324,9 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>      if (migration_bitmap_clear_dirty(rs, dirty_ram_abs)) {
>          unsigned long *unsentmap;
>          if (compression_switch && migrate_use_compression()) {
> -            res = ram_save_compressed_page(rs, ms, f, pss, last_stage);
> +            res = ram_save_compressed_page(rs, ms, pss, last_stage);
>          } else {
> -            res = ram_save_page(rs, ms, f, pss, last_stage);
> +            res = ram_save_page(rs, ms, pss, last_stage);
>          }
>  
>          if (res < 0) {
> @@ -1367,12 +1361,11 @@ static int ram_save_target_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>   *
>   * @rs: current RAM state
>   * @ms: current migration state
> - * @f: QEMUFile where to send the data
>   * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
>   */
> -static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> +static int ram_save_host_page(RAMState *rs, MigrationState *ms,
>                                PageSearchStatus *pss,
>                                bool last_stage,
>                                ram_addr_t dirty_ram_abs)
> @@ -1381,8 +1374,7 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>      size_t pagesize = qemu_ram_pagesize(pss->block);
>  
>      do {
> -        tmppages = ram_save_target_page(rs, ms, f, pss, last_stage,
> -                                        dirty_ram_abs);
> +        tmppages = ram_save_target_page(rs, ms, pss, last_stage, dirty_ram_abs);
>          if (tmppages < 0) {
>              return tmppages;
>          }
> @@ -1405,14 +1397,13 @@ static int ram_save_host_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
>   * Returns the number of pages written where zero means no dirty pages
>   *
>   * @rs: current RAM state
> - * @f: QEMUFile where to send the data
>   * @last_stage: if we are at the completion stage
>   *
>   * On systems where host-page-size > target-page-size it will send all the
>   * pages in a host page that are dirty.
>   */
>  
> -static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
> +static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>  {
>      PageSearchStatus pss;
>      MigrationState *ms = migrate_get_current();
> @@ -1440,12 +1431,11 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
>  
>          if (!found) {
>              /* priority queue empty, so just search for something dirty */
> -            found = find_dirty_block(rs, f, &pss, &again, &dirty_ram_abs);
> +            found = find_dirty_block(rs, &pss, &again, &dirty_ram_abs);
>          }
>  
>          if (found) {
> -            pages = ram_save_host_page(rs, ms, f, &pss, last_stage,
> -                                       dirty_ram_abs);
> +            pages = ram_save_host_page(rs, ms, &pss, last_stage, dirty_ram_abs);
>          }
>      } while (!pages && again);
>  
> @@ -2145,7 +2135,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      while ((ret = qemu_file_rate_limit(f)) == 0) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(rs, f, false);
> +        pages = ram_find_and_save_block(rs, false);
>          /* no more pages to sent */
>          if (pages == 0) {
>              done = 1;
> @@ -2167,7 +2157,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>          }
>          i++;
>      }
> -    flush_compressed_data(rs, f);
> +    flush_compressed_data(rs);
>      rcu_read_unlock();
>  
>      /*
> @@ -2215,14 +2205,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
>      while (true) {
>          int pages;
>  
> -        pages = ram_find_and_save_block(rs, f, !migration_in_colo_state());
> +        pages = ram_find_and_save_block(rs, !migration_in_colo_state());
>          /* no more blocks to sent */
>          if (pages == 0) {
>              break;
>          }
>      }
>  
> -    flush_compressed_data(rs, f);
> +    flush_compressed_data(rs);
>      ram_control_after_iterate(f, RAM_CONTROL_FINISH);
>  
>      rcu_read_unlock();
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 49/51] ram: rename last_ram_offset() last_ram_pages()
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 49/51] ram: rename last_ram_offset() last_ram_pages() Juan Quintela
@ 2017-03-31 14:23   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 14:23 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> We always use it as pages anyways.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  exec.c                  |  6 +++---
>  include/exec/ram_addr.h |  2 +-
>  migration/ram.c         | 11 +++++------
>  3 files changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index 9a4c385..2cae288 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1528,7 +1528,7 @@ static ram_addr_t find_ram_offset(ram_addr_t size)
>      return offset;
>  }
>  
> -ram_addr_t last_ram_offset(void)
> +unsigned long last_ram_page(void)
>  {
>      RAMBlock *block;
>      ram_addr_t last = 0;
> @@ -1538,7 +1538,7 @@ ram_addr_t last_ram_offset(void)
>          last = MAX(last, block->offset + block->max_length);
>      }
>      rcu_read_unlock();
> -    return last;
> +    return last >> TARGET_PAGE_BITS;
>  }
>  
>  static void qemu_ram_setup_dump(void *addr, ram_addr_t size)
> @@ -1727,7 +1727,7 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
>      ram_addr_t old_ram_size, new_ram_size;
>      Error *err = NULL;
>  
> -    old_ram_size = last_ram_offset() >> TARGET_PAGE_BITS;
> +    old_ram_size = last_ram_page();
>  
>      qemu_mutex_lock_ramlist();
>      new_block->offset = find_ram_offset(new_block->max_length);
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index d50c970..bbbfc7d 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -53,7 +53,7 @@ static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
>  }
>  
>  long qemu_getrampagesize(void);
> -ram_addr_t last_ram_offset(void);
> +unsigned long last_ram_page(void);
>  RAMBlock *qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
>                                     bool share, const char *mem_path,
>                                     Error **errp);
> diff --git a/migration/ram.c b/migration/ram.c
> index 3f283ba..1be9a6b 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1535,7 +1535,7 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>   */
>  void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
>  {
> -    int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +    unsigned long ram_pages = last_ram_page();
>      RAMState *rs = &ram_state;
>      int64_t cur;
>      int64_t linelen = 128;
> @@ -1902,8 +1902,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>       * Update the unsentmap to be unsentmap = unsentmap | dirty
>       */
>      bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
> -    bitmap_or(unsentmap, unsentmap, bitmap,
> -               last_ram_offset() >> TARGET_PAGE_BITS);
> +    bitmap_or(unsentmap, unsentmap, bitmap, last_ram_page());
>  
>  
>      trace_ram_postcopy_send_discard_bitmap();
> @@ -1951,7 +1950,7 @@ err:
>  
>  static int ram_state_init(RAMState *rs)
>  {
> -    int64_t ram_bitmap_pages; /* Size of bitmap in pages, including gaps */
> +    unsigned long ram_bitmap_pages;
>  
>      memset(rs, 0, sizeof(*rs));
>      qemu_mutex_init(&rs->bitmap_mutex);
> @@ -1997,7 +1996,7 @@ static int ram_state_init(RAMState *rs)
>      rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
>      /* Skip setting bitmap if there is no RAM */
>      if (ram_bytes_total()) {
> -        ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +        ram_bitmap_pages = last_ram_page();
>          rs->ram_bitmap->bmap = bitmap_new(ram_bitmap_pages);
>          bitmap_set(rs->ram_bitmap->bmap, 0, ram_bitmap_pages);
>  
> @@ -2458,7 +2457,7 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
>   */
>  int ram_postcopy_incoming_init(MigrationIncomingState *mis)
>  {
> -    size_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +    unsigned long ram_pages = last_ram_page();
>  
>      return postcopy_ram_incoming_init(mis, ram_pages);
>  }
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 50/51] ram: Use RAMBitmap type for coherence
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 50/51] ram: Use RAMBitmap type for coherence Juan Quintela
@ 2017-03-31 14:27   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 14:27 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 1be9a6b..4d62788 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1449,7 +1449,7 @@ void free_xbzrle_decoded_buf(void)
>      xbzrle_decoded_buf = NULL;
>  }
>  
> -static void migration_bitmap_free(struct RAMBitmap *bmap)
> +static void migration_bitmap_free(RAMBitmap *bmap)
>  {
>      g_free(bmap->bmap);
>      g_free(bmap->unsentmap);
> @@ -1463,7 +1463,7 @@ static void ram_migration_cleanup(void *opaque)
>      /* caller have hold iothread lock or is in a bh, so there is
>       * no writing race against this migration_bitmap
>       */
> -    struct RAMBitmap *bitmap = rs->ram_bitmap;
> +    RAMBitmap *bitmap = rs->ram_bitmap;
>      atomic_rcu_set(&rs->ram_bitmap, NULL);
>      if (bitmap) {
>          memory_global_dirty_log_stop();
> @@ -1502,8 +1502,8 @@ void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
>       * no writing race against this migration_bitmap
>       */
>      if (rs->ram_bitmap) {
> -        struct RAMBitmap *old_bitmap = rs->ram_bitmap, *bitmap;
> -        bitmap = g_new(struct RAMBitmap, 1);
> +        RAMBitmap *old_bitmap = rs->ram_bitmap, *bitmap;
> +        bitmap = g_new(RAMBitmap, 1);
>          bitmap->bmap = bitmap_new(new);
>  
>          /* prevent migration_bitmap content from being set bit
> @@ -1993,7 +1993,7 @@ static int ram_state_init(RAMState *rs)
>      rcu_read_lock();
>      ram_state_reset(rs);
>  
> -    rs->ram_bitmap = g_new0(struct RAMBitmap, 1);
> +    rs->ram_bitmap = g_new0(RAMBitmap, 1);
>      /* Skip setting bitmap if there is no RAM */
>      if (ram_bytes_total()) {
>          ram_bitmap_pages = last_ram_page();
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration
  2017-03-23 20:44 [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Juan Quintela
                   ` (50 preceding siblings ...)
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 51/51] migration: Remove MigrationState parameter from migration_is_idle() Juan Quintela
@ 2017-03-31 14:34 ` Dr. David Alan Gilbert
  2017-04-04 17:22   ` Juan Quintela
  51 siblings, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 14:34 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Hi

Some high level points:

> Continuation of previous series, all review comments addressed. New things:
> - Consolidate all function comments in the same style (yes, docs)
> - Be much more careful with maintaining comments correct
> - Move all postcopy fields to RAMState

> - Move QEMUFile to RAMState
> - rename qemu_target_page_bits() to qemu_target_page_size() to reflect use
> - Remove MigrationState from functions that don't need it
> - reorganize last_sent_block to the place where it is used/needed
> - Move several places from offsets to pages
> - Rename last_ram_offset() to last_ram_page() to reflect use

An interesting question is what happens if we ever have multiple threads
working on RAM at once; I assume you're thinking there will be multiple
RAMStates?  It'll be interesting to see whether everything we now have
in RAMState is stuff that wants to be replicated that way.

Dave

> 
> Please comment.
> 
> 
> [v1]
> Currently, we have several places where we store information about
> ram for migration purposes:
> - global variables on migration/ram.c
> - inside the accounting_info struct in migration/ram.c
>   notice that not all the accounting vars are inside there
> - some stuff is in MigrationState, although it belongs to migrate/ram.c
> 
> So, this series does:
> - move everything related to ram.c to RAMState struct
> - make all the statistics consistent, exporting them with an accessor
>   function
> 
> Why now?
> 
> Because I am trying to do some more optimizations about how we send
> data around and it is basically impossible to do with current code, we
> still need to add more variables.  Notice that there are things like that:
> - accounting info was only reset if we had xbzrle enabled
> - How/where to initialize variables are completely inconsistent.
> 
> 
> 
> To Do:
> 
> - There are still places that access directly the global struct.
>   Mainly postcopy.  We could find a way to make a pointer to the
>   current migration.  If people like the approach, I will search where
>   to put it.
> - I haven't posted any real change here, this is just the move of
>   variables to the struct and pass the struct around.  Optimizations
>   will came after.
> 
> - Consolidate XBZRLE, Compression params, etc in its own structs
>   (inside or not RAMState, to be able to allocate ones, others, or
>   ...)
> 
> Comments, please.
> 
> 
> Chao Fan (1):
>   Add page-size to output in 'info migrate'
> 
> Juan Quintela (50):
>   ram: Update all functions comments
>   ram: rename block_name to rbname
>   ram: Create RAMState
>   ram: Add dirty_rate_high_cnt to RAMState
>   ram: Move bitmap_sync_count into RAMState
>   ram: Move start time into RAMState
>   ram: Move bytes_xfer_prev into RAMState
>   ram: Move num_dirty_pages_period into RAMState
>   ram: Move xbzrle_cache_miss_prev into RAMState
>   ram: Move iterations_prev into RAMState
>   ram: Move dup_pages into RAMState
>   ram: Remove unused dup_mig_bytes_transferred()
>   ram: Remove unused pages_skipped variable
>   ram: Move norm_pages to RAMState
>   ram: Remove norm_mig_bytes_transferred
>   ram: Move iterations into RAMState
>   ram: Move xbzrle_bytes into RAMState
>   ram: Move xbzrle_pages into RAMState
>   ram: Move xbzrle_cache_miss into RAMState
>   ram: Move xbzrle_cache_miss_rate into RAMState
>   ram: Move xbzrle_overflows into RAMState
>   ram: Move migration_dirty_pages to RAMState
>   ram: Everything was init to zero, so use memset
>   ram: Move migration_bitmap_mutex into RAMState
>   ram: Move migration_bitmap_rcu into RAMState
>   ram: Move bytes_transferred into RAMState
>   ram: Use the RAMState bytes_transferred parameter
>   ram: Remove ram_save_remaining
>   ram: Move last_req_rb to RAMState
>   ram: Move src_page_req* to RAMState
>   ram: Create ram_dirty_sync_count()
>   ram: Remove dirty_bytes_rate
>   ram: Move dirty_pages_rate to RAMState
>   ram: Move postcopy_requests into RAMState
>   ram: Add QEMUFile to RAMState
>   ram: Move QEMUFile into RAMState
>   ram: Move compression_switch to RAMState
>   migration: Remove MigrationState from migration_in_postcopy
>   ram: We don't need MigrationState parameter anymore
>   ram: Rename qemu_target_page_bits() to qemu_target_page_size()
>   ram: Pass RAMBlock to bitmap_sync
>   ram: ram_discard_range() don't use the mis parameter
>   ram: reorganize last_sent_block
>   ram: Use page number instead of an address for the bitmap operations
>   ram: Remember last_page instead of last_offset
>   ram: Change offset field in PageSearchStatus to page
>   ram: Use ramblock and page offset instead of absolute offset
>   ram: rename last_ram_offset() last_ram_pages()
>   ram: Use RAMBitmap type for coherence
>   migration: Remove MigrationState parameter from migration_is_idle()
> 
>  exec.c                        |   10 +-
>  hmp.c                         |    3 +
>  include/exec/ram_addr.h       |    4 +-
>  include/migration/migration.h |   41 +-
>  include/sysemu/sysemu.h       |    2 +-
>  migration/migration.c         |   44 +-
>  migration/postcopy-ram.c      |   14 +-
>  migration/ram.c               | 1190 ++++++++++++++++++++++-------------------
>  migration/savevm.c            |   15 +-
>  migration/trace-events        |    2 +-
>  qapi-schema.json              |    5 +-
>  11 files changed, 695 insertions(+), 635 deletions(-)
> 
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-24  9:55   ` Peter Xu
  2017-03-24 11:44     ` Juan Quintela
@ 2017-03-31 14:43     ` Dr. David Alan Gilbert
  2017-04-03 20:40       ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 14:43 UTC (permalink / raw)
  To: Peter Xu; +Cc: Juan Quintela, qemu-devel

* Peter Xu (peterx@redhat.com) wrote:
> Hi, Juan,
> 
> Got several nitpicks below... (along with some questions)
> 
> On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
> 
> [...]

> > @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
> >  }
> >  
> >  /**
> > - * flush_page_queue: Flush any remaining pages in the ram request queue
> > - *    it should be empty at the end anyway, but in error cases there may be
> > - *    some left.
> > + * flush_page_queue: flush any remaining pages in the ram request queue
> 
> Here the comment says (just like mentioned in function name) that we
> will "flush any remaining pages in the ram request queue", however in
> the implementation, we should be only freeing everything in
> src_page_requests. The problem is "flush" let me think about "flushing
> the rest of the pages to the other side"... while it's not.
> 
> Would it be nice we just rename the function into something else, like
> migration_page_queue_free()? We can tune the comments correspondingly
> as well.

Yes that probably would be a better name.

> [...]
> 
> > -/*
> > - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> > - *   the two bitmaps, that are similar, but one is inverted.
> > +/**
> > + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
>                   ^ should be n?     ^^^^^^^^^^ canonicalize?
> 
> >   *
> > - * We search for runs of target-pages that don't start or end on a
> > - * host page boundary;
> > - * unsent_pass=true: Cleans up partially unsent host pages by searching
> > - *                 the unsentmap
> > - * unsent_pass=false: Cleans up partially dirty host pages by searching
> > - *                 the main migration bitmap
> > + * Helper for postcopy_chunk_hostpages; it's called twice to
> > + * canonicalize the two bitmaps, that are similar, but one is
> > + * inverted.
> >   *
> > + * Postcopy requires that all target pages in a hostpage are dirty or
> > + * clean, not a mix.  This function canonicalizes the bitmaps.
> > + *
> > + * @ms: current migration state
> > + * @unsent_pass: if true we need to canonicalize partially unsent host pages
> > + *               otherwise we need to canonicalize partially dirty host pages
> > + * @block: block that contains the page we want to canonicalize
> > + * @pds: state for postcopy
> >   */
> >  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
> >                                            RAMBlock *block,
> 
> [...]
> 
> > +/**
> > + * ram_save_setup: iterative stage for migration
>       ^^^^^^^^^^^^^^ should be ram_save_iterate()?
> 
> > + *
> > + * Returns zero to indicate success and negative for error
> > + *
> > + * @f: QEMUFile where to send the data
> > + * @opaque: RAMState pointer
> > + */
> >  static int ram_save_iterate(QEMUFile *f, void *opaque)
> >  {
> >      int ret;
> > @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
> >      return done;
> >  }
> 
> [...]
> 
> > -/*
> > - * Allocate data structures etc needed by incoming migration with postcopy-ram
> > - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> > +/**
> > + * ram_postococpy_incoming_init: allocate postcopy data structures
> > + *
> > + * Returns 0 for success and negative if there was one error
> > + *
> > + * @mis: current migration incoming state
> > + *
> > + * Allocate data structures etc needed by incoming migration with
> > + * postcopy-ram postcopy-ram's similarly names
> > + * postcopy_ram_incoming_init does the work
> 
> This sentence is slightly hard to understand... But I think the
> function name explained itself enough though. :)

A '.' after the first 'postcopy-ram' would make it more readable.

Dave

> Thanks,
> 
> -- peterx
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 11/51] ram: Move dup_pages into RAMState
  2017-03-28 18:43     ` Juan Quintela
  2017-03-29  7:02       ` Peter Xu
@ 2017-03-31 14:58       ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 14:58 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Peter Xu, qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > On Thu, Mar 23, 2017 at 09:45:04PM +0100, Juan Quintela wrote:
> >> Once there rename it to its actual meaning, zero_pages.
> >> 
> >> Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >
> > Reviewed-by: Peter Xu <peterx@redhat.com>
> >
> > Will post a question below though (not directly related to this patch
> > but context-wide)...
> >>  {
> >>      int pages = -1;
> >>  
> >>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> >> -        acct_info.dup_pages++;
> >> +        rs->zero_pages++;
> >>          *bytes_transferred += save_page_header(f, block,
> >>                                                 offset | RAM_SAVE_FLAG_COMPRESS);
> >>          qemu_put_byte(f, 0);
> >> @@ -822,11 +826,11 @@ static int ram_save_page(RAMState *rs, MigrationState *ms, QEMUFile *f,
> >>              if (bytes_xmit > 0) {
> >>                  acct_info.norm_pages++;
> >>              } else if (bytes_xmit == 0) {
> >> -                acct_info.dup_pages++;
> >> +                rs->zero_pages++;
> >
> > This code path looks suspicous... since iiuc currently it should only
> > be triggered by RDMA case, and I believe here qemu_rdma_save_page()
> > should have met something wrong (so that it didn't return with
> > RAM_SAVE_CONTROL_DELAYED). Then is it correct we do increase zero page
> > counting unconditionally here? (hmm, the default bytes_xmit is zero as
> > well...)
> 
> My head hurts at this point.
> ok.  bytes_xmit can only be zero if we called qemu_rdma_save_page() with
> size=0 or there has been an RDMA error.  We never call the function with
> size = 0.  And if there is an error, we are in very bad shape already.
> 
> > Another thing is that I see when RDMA is enabled we are updating
> > accounting info with acct_update_position(), while we updated it here
> > as well. Is this an issue of duplicated accounting?
> 
> I think stats and rdma are not right.  I have to check more that.

It should be vaguely right; the rdma code calls back into acct_update_position
to update them; but I agree it looks odd;  that line almost looks like it's the error
case - so why is it incrementing dup_pages?

Dave

> Thanks, Juan.
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-30  6:56   ` Peter Xu
  2017-03-30 16:09     ` Juan Quintela
@ 2017-03-31 15:25     ` Dr. David Alan Gilbert
  2017-04-01  7:15       ` Peter Xu
  1 sibling, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 15:25 UTC (permalink / raw)
  To: Peter Xu; +Cc: Juan Quintela, qemu-devel

* Peter Xu (peterx@redhat.com) wrote:
> On Thu, Mar 23, 2017 at 09:45:23PM +0100, Juan Quintela wrote:
> > This are the last postcopy fields still at MigrationState.  Once there
> 
> s/This/These/
> 
> > Move MigrationSrcPageRequest to ram.c and remove MigrationState
> > parameters where appropriate.
> > 
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> 
> Reviewed-by: Peter Xu <peterx@redhat.com>
> 
> One question below though...
> 
> [...]
> 
> > @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
> >   *
> >   * It should be empty at the end anyway, but in error cases there may
> >   * xbe some left.
> > - *
> > - * @ms: current migration state
> >   */
> > -void flush_page_queue(MigrationState *ms)
> > +void flush_page_queue(void)
> >  {
> > -    struct MigrationSrcPageRequest *mspr, *next_mspr;
> > +    struct RAMSrcPageRequest *mspr, *next_mspr;
> > +    RAMState *rs = &ram_state;
> >      /* This queue generally should be empty - but in the case of a failed
> >       * migration might have some droppings in.
> >       */
> >      rcu_read_lock();
> 
> Could I ask why we are taking the RCU read lock rather than the mutex
> here?

It's a good question whether we need anything at all.
flush_page_queue is called only from migrate_fd_cleanup.
migrate_fd_cleanup is called either from a backhalf, which I think has the bql,
or from a failure path in migrate_fd_connect.
migrate_fd_connect is called from migration_channel_connect and rdma_start_outgoing_migration
which I think both end up at monitor commands so also in the bql.

So I think we can probably just lose the rcu_read_lock/unlock.

Dave

> 
> > -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
> > +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
> >          memory_region_unref(mspr->rb->mr);
> > -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> > +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
> >          g_free(mspr);
> >      }
> >      rcu_read_unlock();
> 
> Thanks,
> 
> -- peterx
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-23 20:44 ` [Qemu-devel] [PATCH 01/51] ram: Update all functions comments Juan Quintela
  2017-03-24  9:55   ` Peter Xu
@ 2017-03-31 15:51   ` Dr. David Alan Gilbert
  2017-04-04 17:12     ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 15:51 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> Added doc comments for existing functions comment and rewrite them in
> a common style.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c | 348 ++++++++++++++++++++++++++++++++++++--------------------
>  1 file changed, 227 insertions(+), 121 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index de1e0a3..76f1fc4 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -96,11 +96,17 @@ static void XBZRLE_cache_unlock(void)
>          qemu_mutex_unlock(&XBZRLE.lock);
>  }
>  
> -/*
> - * called from qmp_migrate_set_cache_size in main thread, possibly while
> - * a migration is in progress.
> - * A running migration maybe using the cache and might finish during this
> - * call, hence changes to the cache are protected by XBZRLE.lock().
> +/**
> + * xbzrle_cache_resize: resize the xbzrle cache
> + *
> + * This function is called from qmp_migrate_set_cache_size in main
> + * thread, possibly while a migration is in progress.  A running
> + * migration may be using the cache and might finish during this call,
> + * hence changes to the cache are protected by XBZRLE.lock().
> + *
> + * Returns the new_size or negative in case of error.
> + *
> + * @new_size: new cache size
>   */
>  int64_t xbzrle_cache_resize(int64_t new_size)
>  {
> @@ -323,6 +329,7 @@ static inline void terminate_compression_threads(void)
>      int idx, thread_count;
>  
>      thread_count = migrate_compress_threads();
> +
>      for (idx = 0; idx < thread_count; idx++) {
>          qemu_mutex_lock(&comp_param[idx].mutex);
>          comp_param[idx].quit = true;
> @@ -383,11 +390,11 @@ void migrate_compress_threads_create(void)
>  }
>  
>  /**
> - * save_page_header: Write page header to wire
> + * save_page_header: write page header to wire
>   *
>   * If this is the 1st block, it also writes the block identification
>   *
> - * Returns: Number of bytes written
> + * Returns the number of bytes written

Do the doc tools recognise that to pick up the explanation
for the return value?
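For reference, kernel-doc (which this comment style imitates) keys on a section header with a trailing colon, so if kernel-doc-style processing is the goal, the conventional form would be something like the following sketch; whether QEMU's tooling accepts the colon-less "Returns the ..." wording is exactly the open question:

```c
/**
 * save_page_header: write page header to wire
 *
 * If this is the 1st block, it also writes the block identification
 *
 * Return: Number of bytes written
 *
 * @f: QEMUFile where to send the data
 */
```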

>   *
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
> @@ -410,11 +417,14 @@ static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
>      return size;
>  }
>  
> -/* Reduce amount of guest cpu execution to hopefully slow down memory writes.
> - * If guest dirty memory rate is reduced below the rate at which we can
> - * transfer pages to the destination then we should be able to complete
> - * migration. Some workloads dirty memory way too fast and will not effectively
> - * converge, even with auto-converge.
> +/**
> + * mig_throotle_guest_down: throotle down the guest

one 'o'

> + *
> + * Reduce amount of guest cpu execution to hopefully slow down memory
> + * writes. If guest dirty memory rate is reduced below the rate at
> + * which we can transfer pages to the destination then we should be
> + * able to complete migration. Some workloads dirty memory way too
> + * fast and will not effectively converge, even with auto-converge.
>   */
>  static void mig_throttle_guest_down(void)
>  {
> @@ -431,11 +441,16 @@ static void mig_throttle_guest_down(void)
>      }
>  }
>  
> -/* Update the xbzrle cache to reflect a page that's been sent as all 0.
> +/**
> + * xbzrle_cache_zero_page: insert a zero page in the XBZRLE cache
> + *
> + * @current_addr: address for the zero page
> + *
> + * Update the xbzrle cache to reflect a page that's been sent as all 0.
>   * The important thing is that a stale (not-yet-0'd) page be replaced
>   * by the new data.
>   * As a bonus, if the page wasn't in the cache it gets added so that
> - * when a small write is made into the 0'd page it gets XBZRLE sent
> + * when a small write is made into the 0'd page it gets XBZRLE sent.
>   */
>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>  {
> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>   *          -1 means that xbzrle would be longer than normal
>   *
>   * @f: QEMUFile where to send the data
> - * @current_data:
> - * @current_addr:
> + * @current_data: contents of the page

That's wrong.  The point of current_data is that it gets updated by this
function to point to the cache page whenever the data ends up in the cache.
It's important then that the caller uses that pointer to save the data to
disk/network rather than the original pointer, since the data that's saved
must exactly match the cache contents even if the guest is still writing to it.

> + * @current_addr: addr of the page
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
> @@ -530,13 +545,17 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t **current_data,
>      return 1;
>  }
>  
> -/* Called with rcu_read_lock() to protect migration_bitmap
> - * rb: The RAMBlock  to search for dirty pages in
> - * start: Start address (typically so we can continue from previous page)
> - * ram_addr_abs: Pointer into which to store the address of the dirty page
> - *               within the global ram_addr space
> +/**
> + * migration_bitmap_find_dirty: find the next drity page from start

Typo 'drity'

>   *
> - * Returns: byte offset within memory region of the start of a dirty page
> + * Called with rcu_read_lock() to protect migration_bitmap
> + *
> + * Returns the byte offset within memory region of the start of a dirty page
> + *
> + * @rb: RAMBlock where to search for dirty pages
> + * @start: starting address (typically so we can continue from previous page)
> + * @ram_addr_abs: pointer into which to store the address of the dirty page
> + *                within the global ram_addr space
>   */
>  static inline
>  ram_addr_t migration_bitmap_find_dirty(RAMBlock *rb,
> @@ -600,10 +619,14 @@ static void migration_bitmap_sync_init(void)
>      iterations_prev = 0;
>  }
>  
> -/* Returns a summary bitmap of the page sizes of all RAMBlocks;
> - * for VMs with just normal pages this is equivalent to the
> - * host page size.  If it's got some huge pages then it's the OR
> - * of all the different page sizes.
> +/**
> + * ram_pagesize_summary: calculate all the pagesizes of a VM
> + *
> + * Returns a summary bitmap of the page sizes of all RAMBlocks
> + *
> + * For VMs with just normal pages this is equivalent to the host page
> + * size. If it's got some huge pages then it's the OR of all the
> + * different page sizes.
>   */
>  uint64_t ram_pagesize_summary(void)
>  {
> @@ -693,9 +716,9 @@ static void migration_bitmap_sync(void)
>  }
>  
>  /**
> - * save_zero_page: Send the zero page to the stream
> + * save_zero_page: send the zero page to the stream
>   *
> - * Returns: Number of pages written.
> + * Returns the number of pages written.
>   *
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
> @@ -731,14 +754,14 @@ static void ram_release_pages(MigrationState *ms, const char *block_name,
>  }
>  
>  /**
> - * ram_save_page: Send the given page to the stream
> + * ram_save_page: send the given page to the stream
>   *
> - * Returns: Number of pages written.
> + * Returns the number of pages written.
>   *          < 0 - error
>   *          >=0 - Number of pages written - this might legally be 0
>   *                if xbzrle noticed the page was the same.
>   *
> - * @ms: The current migration state.
> + * @ms: current migration state
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
> @@ -921,9 +944,9 @@ static int compress_page_with_multi_thread(QEMUFile *f, RAMBlock *block,
>  /**
>   * ram_save_compressed_page: compress the given page and send it to the stream
>   *
> - * Returns: Number of pages written.
> + * Returns the number of pages written.
>   *
> - * @ms: The current migration state.
> + * @ms: current migration state
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
> @@ -1000,17 +1023,17 @@ static int ram_save_compressed_page(MigrationState *ms, QEMUFile *f,
>      return pages;
>  }
>  
> -/*
> - * Find the next dirty page and update any state associated with
> - * the search process.
> +/**
> + * find_dirty_block: find the next dirty page and update any state
> + * associated with the search process.
>   *
> - * Returns: True if a page is found
> + * Returns true if a page is found
>   *
> - * @f: Current migration stream.
> - * @pss: Data about the state of the current dirty page scan.
> - * @*again: Set to false if the search has scanned the whole of RAM
> - * *ram_addr_abs: Pointer into which to store the address of the dirty page
> - *               within the global ram_addr space
> + * @f: QEMUFile where to send the data
> + * @pss: data about the state of the current dirty page scan
> + * @again: set to false if the search has scanned the whole of RAM
> + * @ram_addr_abs: pointer into which to store the address of the dirty page
> + *                within the global ram_addr space
>   */
>  static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
>                               bool *again, ram_addr_t *ram_addr_abs)
> @@ -1055,13 +1078,17 @@ static bool find_dirty_block(QEMUFile *f, PageSearchStatus *pss,
>      }
>  }
>  
> -/*
> +/**
> + * unqueue_page: gets a page of the queue
> + *
>   * Helper for 'get_queued_page' - gets a page off the queue
> - *      ms:      MigrationState in
> - * *offset:      Used to return the offset within the RAMBlock
> - * ram_addr_abs: global offset in the dirty/sent bitmaps
>   *
> - * Returns:      block (or NULL if none available)
> + * Returns the block of the page (or NULL if none available)
> + *
> + * @ms: current migration state
> + * @offset: used to return the offset within the RAMBlock
> + * @ram_addr_abs: pointer into which to store the address of the dirty page
> + *                within the global ram_addr space
>   */
>  static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
>                                ram_addr_t *ram_addr_abs)
> @@ -1091,15 +1118,17 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
>      return block;
>  }
>  
> -/*
> - * Unqueue a page from the queue fed by postcopy page requests; skips pages
> - * that are already sent (!dirty)
> +/**
> + * get_queued_page: unqueue a page from the postcopy requests
>   *
> - *      ms:      MigrationState in
> - *     pss:      PageSearchStatus structure updated with found block/offset
> - * ram_addr_abs: global offset in the dirty/sent bitmaps
> + * Skips pages that are already sent (!dirty)
>   *
> - * Returns:      true if a queued page is found
> + * Returns true if a queued page is found
> + *
> + * @ms: current migration state
> + * @pss: data about the state of the current dirty page scan
> + * @ram_addr_abs: pointer into which to store the address of the dirty page
> + *                within the global ram_addr space
>   */
>  static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
>                              ram_addr_t *ram_addr_abs)
> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, PageSearchStatus *pss,
>  }
>  
>  /**
> - * flush_page_queue: Flush any remaining pages in the ram request queue
> - *    it should be empty at the end anyway, but in error cases there may be
> - *    some left.
> + * flush_page_queue: flush any remaining pages in the ram request queue
>   *
> - * ms: MigrationState
> + * It should be empty at the end anyway, but in error cases there may
> + * be some left.
> + *
> + * @ms: current migration state
>   */
>  void flush_page_queue(MigrationState *ms)
>  {
> @@ -1179,12 +1209,17 @@ void flush_page_queue(MigrationState *ms)
>  }
>  
>  /**
> - * Queue the pages for transmission, e.g. a request from postcopy destination
> - *   ms: MigrationStatus in which the queue is held
> - *   rbname: The RAMBlock the request is for - may be NULL (to mean reuse last)
> - *   start: Offset from the start of the RAMBlock
> - *   len: Length (in bytes) to send
> - *   Return: 0 on success
> + * ram_save_queue_pages: queue the page for transmission
> + *
> + * A request from the postcopy destination, for example.
> + *
> + * Returns zero on success or negative on error
> + *
> + * @ms: current migration state
> + * @rbname: Name of the RAMBlock of the request. NULL means the
> + *          same as the last one.
> + * @start: starting address from the start of the RAMBlock
> + * @len: length (in bytes) to send
>   */
>  int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>                           ram_addr_t start, ram_addr_t len)
> @@ -1243,17 +1278,16 @@ err:
>  }
>  
>  /**
> - * ram_save_target_page: Save one target page
> + * ram_save_target_page: save one target page
>   *
> + * Returns the umber of pages written

Missing ''n'

>   *
> + * @ms: current migration state
>   * @f: QEMUFile where to send the data
> - * @block: pointer to block that contains the page we want to send
> - * @offset: offset inside the block for the page;
> + * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
> - * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
> - *
> - * Returns: Number of pages written.
> + * @dirty_ram_abs: address of the start of the dirty page in ram_addr_t space
>   */
>  static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>                                  PageSearchStatus *pss,
> @@ -1295,20 +1329,19 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>  }
>  
>  /**
> - * ram_save_host_page: Starting at *offset send pages up to the end
> - *                     of the current host page.  It's valid for the initial
> - *                     offset to point into the middle of a host page
> - *                     in which case the remainder of the hostpage is sent.
> - *                     Only dirty target pages are sent.
> - *                     Note that the host page size may be a huge page for this
> - *                     block.
> + * ram_save_host_page: save a whole host page
>   *
> - * Returns: Number of pages written.
> + * Starting at *offset send pages up to the end of the current host
> + * page. It's valid for the initial offset to point into the middle of
> + * a host page in which case the remainder of the hostpage is sent.
> + * Only dirty target pages are sent. Note that the host page size may
> + * be a huge page for this block.
>   *
> + * Returns the number of pages written or negative on error
> + *
> + * @ms: current migration state
>   * @f: QEMUFile where to send the data
> - * @block: pointer to block that contains the page we want to send
> - * @offset: offset inside the block for the page; updated to last target page
> - *          sent
> + * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
>   * @bytes_transferred: increase it with the number of transferred bytes
>   * @dirty_ram_abs: Address of the start of the dirty page in ram_addr_t space
> @@ -1340,12 +1373,11 @@ static int ram_save_host_page(MigrationState *ms, QEMUFile *f,
>  }
>  
>  /**
> - * ram_find_and_save_block: Finds a dirty page and sends it to f
> + * ram_find_and_save_block: finds a dirty page and sends it to f
>   *
>   * Called within an RCU critical section.
>   *
> - * Returns:  The number of pages written
> - *           0 means no dirty pages
> + * Returns the number of pages written where zero means no dirty pages
>   *
>   * @f: QEMUFile where to send the data
>   * @last_stage: if we are at the completion stage
> @@ -1580,12 +1612,19 @@ void ram_postcopy_migrated_memory_release(MigrationState *ms)
>      }
>  }
>  
> -/*
> +/**
> + * postcopy_send_discard_bm_ram: discard a RAMBlock
> + *
> + * Returns zero on success
> + *
>   * Callback from postcopy_each_ram_send_discard for each RAMBlock
>   * Note: At this point the 'unsentmap' is the processed bitmap combined
>   *       with the dirtymap; so a '1' means it's either dirty or unsent.
> - * start,length: Indexes into the bitmap for the first bit
> - *            representing the named block and length in target-pages
> + *
> + * @ms: current migration state
> + * @pds: state for postcopy
> + * @start: RAMBlock starting page
> + * @length: RAMBlock size
>   */
>  static int postcopy_send_discard_bm_ram(MigrationState *ms,
>                                          PostcopyDiscardState *pds,
> @@ -1621,13 +1660,18 @@ static int postcopy_send_discard_bm_ram(MigrationState *ms,
>      return 0;
>  }
>  
> -/*
> +/**
> + * postcopy_each_ram_send_discard: discard all RAMBlocks
> + *
> + * Returns 0 for success or negative for error
> + *
>   * Utility for the outgoing postcopy code.
>   *   Calls postcopy_send_discard_bm_ram for each RAMBlock
>   *   passing it bitmap indexes and name.
> - * Returns: 0 on success
>   * (qemu_ram_foreach_block ends up passing unscaled lengths
>   *  which would mean postcopy code would have to deal with target page)
> + *
> + * @ms: current migration state
>   */
>  static int postcopy_each_ram_send_discard(MigrationState *ms)
>  {
> @@ -1656,17 +1700,21 @@ static int postcopy_each_ram_send_discard(MigrationState *ms)
>      return 0;
>  }
>  
> -/*
> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> - *   the two bitmaps, that are similar, but one is inverted.
> +/**
> + * postcopy_chunk_hostpages_pass: canonicalize bitmap in hostpages
>   *
> - * We search for runs of target-pages that don't start or end on a
> - * host page boundary;
> - * unsent_pass=true: Cleans up partially unsent host pages by searching
> - *                 the unsentmap
> - * unsent_pass=false: Cleans up partially dirty host pages by searching
> - *                 the main migration bitmap
> + * Helper for postcopy_chunk_hostpages; it's called twice to
> + * canonicalize the two bitmaps, that are similar, but one is
> + * inverted.
>   *
> + * Postcopy requires that all target pages in a hostpage are dirty or
> + * clean, not a mix.  This function canonicalizes the bitmaps.
> + *
> + * @ms: current migration state
> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
> + *               otherwise we need to canonicalize partially dirty host pages
> + * @block: block that contains the page we want to canonicalize
> + * @pds: state for postcopy
>   */
>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>                                            RAMBlock *block,
> @@ -1784,14 +1832,18 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
>      }
>  }
>  
> -/*
> +/**
> + * postcopy_chunk_hostpages: discard any partially sent host page
> + *
>   * Utility for the outgoing postcopy code.
>   *
>   * Discard any partially sent host-page size chunks, mark any partially
>   * dirty host-page size chunks as all dirty.  In this case the host-page
>   * is the host-page for the particular RAMBlock, i.e. it might be a huge page
>   *
> - * Returns: 0 on success
> + * Returns zero on success
> + *
> + * @ms: current migration state
>   */
>  static int postcopy_chunk_hostpages(MigrationState *ms)
>  {
> @@ -1822,7 +1874,11 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
>      return 0;
>  }
>  
> -/*
> +/**
> + * ram_postcopy_send_discard_bitmap: transmit the discard bitmap
> + *
> + * Returns zero on success
> + *
>   * Transmit the set of pages to be discarded after precopy to the target
>   * these are pages that:
>   *     a) Have been previously transmitted but are now dirty again
> @@ -1830,6 +1886,8 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
>   *        any pages on the destination that have been mapped by background
>   *        tasks get discarded (transparent huge pages is the specific concern)
>   * Hopefully this is pretty sparse
> + *
> + * @ms: current migration state
>   */
>  int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>  {
> @@ -1878,13 +1936,16 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>      return ret;
>  }
>  
> -/*
> - * At the start of the postcopy phase of migration, any now-dirty
> - * precopied pages are discarded.
> +/**
> + * ram_discard_range: discard dirtied pages at the beginning of postcopy
>   *
> - * start, length describe a byte address range within the RAMBlock
> + * Returns zero on success
>   *
> - * Returns 0 on success.
> + * @mis: current migration incoming state
> + * @block_name: Name of the RAMBLock of the request. NULL means the
 'BL'->'Bl'

> + *              same that last one.
> + * @start: RAMBlock starting page
> + * @length: RAMBlock size
>   */
>  int ram_discard_range(MigrationIncomingState *mis,
>                        const char *block_name,
> @@ -1987,12 +2048,21 @@ static int ram_save_init_globals(void)
>      return 0;
>  }
>  
> -/* Each of ram_save_setup, ram_save_iterate and ram_save_complete has
> +/*
> + * Each of ram_save_setup, ram_save_iterate and ram_save_complete has
>   * long-running RCU critical section.  When rcu-reclaims in the code
>   * start to become numerous it will be necessary to reduce the
>   * granularity of these critical sections.
>   */
>  
> +/**
> + * ram_save_setup: Setup RAM for migration
> + *
> + * Returns zero to indicate success and negative for error
> + *
> + * @f: QEMUFile where to send the data
> + * @opaque: RAMState pointer
> + */
>  static int ram_save_setup(QEMUFile *f, void *opaque)
>  {
>      RAMBlock *block;
> @@ -2027,6 +2097,14 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      return 0;
>  }
>  
> +/**
> + * ram_save_iterate: iterative stage for migration
> + *
> + * Returns zero to indicate success and negative for error
> + *
> + * @f: QEMUFile where to send the data
> + * @opaque: RAMState pointer
> + */
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
>      int ret;
> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>      return done;
>  }
>  
> -/* Called with iothread lock */
> +/**
> + * ram_save_complete: function called to send the remaining amount of ram
> + *
> + * Returns zero to indicate success
> + *
> + * Called with iothread lock
> + *
> + * @f: QEMUFile where to send the data
> + * @opaque: RAMState pointer
> + */
>  static int ram_save_complete(QEMUFile *f, void *opaque)
>  {
>      rcu_read_lock();
> @@ -2185,17 +2272,17 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
>      return 0;
>  }
>  
> -/* Must be called from within a rcu critical section.
> +/**
> + * ram_block_from_stream: read a RAMBlock id from the migration stream
> + *
> + * Must be called from within a rcu critical section.
> + *
>   * Returns a pointer from within the RCU-protected ram_list.
> - */
> -/*
> - * Read a RAMBlock ID from the stream f.
>   *
> - * f: Stream to read from
> - * flags: Page flags (mostly to see if it's a continuation of previous block)
> + * @f: QEMUFile where to read the data from
> + * @flags: Page flags (mostly to see if it's a continuation of previous block)
>   */
> -static inline RAMBlock *ram_block_from_stream(QEMUFile *f,
> -                                              int flags)
> +static inline RAMBlock *ram_block_from_stream(QEMUFile *f, int flags)
>  {
>      static RAMBlock *block = NULL;
>      char id[256];
> @@ -2232,9 +2319,15 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
>      return block->host + offset;
>  }
>  
> -/*
> +/**
> + * ram_handle_compressed: handle the zero page case
> + *
>   * If a page (or a whole RDMA chunk) has been
>   * determined to be zero, then zap it.
> + *
> + * @host: host address for the zero page
> + * @ch: what the page is filled from.  We only support zero
> + * @size: size of the zero page
>   */
>  void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
>  {
> @@ -2373,9 +2466,16 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
>      qemu_mutex_unlock(&decomp_done_lock);
>  }
>  
> -/*
> - * Allocate data structures etc needed by incoming migration with postcopy-ram
> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> +/**
> + * ram_postococpy_incoming_init: allocate postcopy data structures
          ^^^^^^^^^^
 typo

> + *
> + * Returns 0 for success and negative if there was an error
> + *
> + * @mis: current migration incoming state
> + *
> + * Allocate data structures etc needed by incoming migration with
> + * postcopy-ram; postcopy-ram's similarly named
> + * postcopy_ram_incoming_init does the work
>   */
>  int ram_postcopy_incoming_init(MigrationIncomingState *mis)
>  {
> @@ -2384,9 +2484,15 @@ int ram_postcopy_incoming_init(MigrationIncomingState *mis)
>      return postcopy_ram_incoming_init(mis, ram_pages);
>  }
>  
> -/*
> +/**
> + * ram_load_postocpy: load a page in postcopy case
               ^^^^^^^^

typo

Dave

> + *
> + * Returns 0 for success or -errno in case of error
> + *
>   * Called in postcopy mode by ram_load().
>   * rcu_read_lock is taken prior to this being called.
> + *
> + * @f: QEMUFile where to send the data
>   */
>  static int ram_load_postcopy(QEMUFile *f)
>  {
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* " Juan Quintela
  2017-03-30  6:56   ` Peter Xu
@ 2017-03-31 16:52   ` Dr. David Alan Gilbert
  2017-04-04 17:42     ` Juan Quintela
  1 sibling, 1 reply; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 16:52 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> These are the last postcopy fields still in MigrationState.  Once there,
> move MigrationSrcPageRequest to ram.c and remove MigrationState
> parameters where appropriate.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  include/migration/migration.h | 17 +-----------
>  migration/migration.c         |  5 +---
>  migration/ram.c               | 62 ++++++++++++++++++++++++++-----------------
>  3 files changed, 40 insertions(+), 44 deletions(-)
> 
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index e032fb0..8a6caa3 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -128,18 +128,6 @@ struct MigrationIncomingState {
>  MigrationIncomingState *migration_incoming_get_current(void);
>  void migration_incoming_state_destroy(void);
>  
> -/*
> - * An outstanding page request, on the source, having been received
> - * and queued
> - */
> -struct MigrationSrcPageRequest {
> -    RAMBlock *rb;
> -    hwaddr    offset;
> -    hwaddr    len;
> -
> -    QSIMPLEQ_ENTRY(MigrationSrcPageRequest) next_req;
> -};
> -
>  struct MigrationState
>  {
>      size_t bytes_xfer;
> @@ -186,9 +174,6 @@ struct MigrationState
>      /* Flag set once the migration thread called bdrv_inactivate_all */
>      bool block_inactive;
>  
> -    /* Queue of outstanding page requests from the destination */
> -    QemuMutex src_page_req_mutex;
> -    QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) src_page_requests;
>      /* The semaphore is used to notify COLO thread that failover is finished */
>      QemuSemaphore colo_exit_sem;
>  
> @@ -371,7 +356,7 @@ void savevm_skip_configuration(void);
>  int global_state_store(void);
>  void global_state_store_running(void);
>  
> -void flush_page_queue(MigrationState *ms);
> +void flush_page_queue(void);
>  int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>                           ram_addr_t start, ram_addr_t len);
>  uint64_t ram_pagesize_summary(void);
> diff --git a/migration/migration.c b/migration/migration.c
> index b220941..58c1587 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -109,7 +109,6 @@ MigrationState *migrate_get_current(void)
>      };
>  
>      if (!once) {
> -        qemu_mutex_init(&current_migration.src_page_req_mutex);
>          current_migration.parameters.tls_creds = g_strdup("");
>          current_migration.parameters.tls_hostname = g_strdup("");
>          once = true;
> @@ -949,7 +948,7 @@ static void migrate_fd_cleanup(void *opaque)
>      qemu_bh_delete(s->cleanup_bh);
>      s->cleanup_bh = NULL;
>  
> -    flush_page_queue(s);
> +    flush_page_queue();
>  
>      if (s->to_dst_file) {
>          trace_migrate_fd_cleanup();
> @@ -1123,8 +1122,6 @@ MigrationState *migrate_init(const MigrationParams *params)
>  
>      migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
>  
> -    QSIMPLEQ_INIT(&s->src_page_requests);
> -
>      s->total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
>      return s;
>  }
> diff --git a/migration/ram.c b/migration/ram.c
> index 325a0f3..601370c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -151,6 +151,18 @@ struct RAMBitmap {
>  };
>  typedef struct RAMBitmap RAMBitmap;
>  
> +/*
> + * An outstanding page request, on the source, having been received
> + * and queued
> + */
> +struct RAMSrcPageRequest {
> +    RAMBlock *rb;
> +    hwaddr    offset;
> +    hwaddr    len;
> +
> +    QSIMPLEQ_ENTRY(RAMSrcPageRequest) next_req;
> +};
> +
>  /* State of RAM for migration */
>  struct RAMState {
>      /* Last block that we have visited searching for dirty pages */
> @@ -205,6 +217,9 @@ struct RAMState {
>      RAMBitmap *ram_bitmap;
>      /* The RAMBlock used in the last src_page_request */
>      RAMBlock *last_req_rb;
> +    /* Queue of outstanding page requests from the destination */
> +    QemuMutex src_page_req_mutex;
> +    QSIMPLEQ_HEAD(src_page_requests, RAMSrcPageRequest) src_page_requests;
>  };
>  typedef struct RAMState RAMState;
>  
> @@ -1084,20 +1099,20 @@ static bool find_dirty_block(RAMState *rs, QEMUFile *f, PageSearchStatus *pss,
>   *
>   * Returns the block of the page (or NULL if none available)
>   *
> - * @ms: current migration state
> + * @rs: current RAM state
>   * @offset: used to return the offset within the RAMBlock
>   * @ram_addr_abs: pointer into which to store the address of the dirty page
>   *                within the global ram_addr space
>   */
> -static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
> +static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
>                                ram_addr_t *ram_addr_abs)
>  {
>      RAMBlock *block = NULL;
>  
> -    qemu_mutex_lock(&ms->src_page_req_mutex);
> -    if (!QSIMPLEQ_EMPTY(&ms->src_page_requests)) {
> -        struct MigrationSrcPageRequest *entry =
> -                                QSIMPLEQ_FIRST(&ms->src_page_requests);
> +    qemu_mutex_lock(&rs->src_page_req_mutex);
> +    if (!QSIMPLEQ_EMPTY(&rs->src_page_requests)) {
> +        struct RAMSrcPageRequest *entry =
> +                                QSIMPLEQ_FIRST(&rs->src_page_requests);
>          block = entry->rb;
>          *offset = entry->offset;
>          *ram_addr_abs = (entry->offset + entry->rb->offset) &
> @@ -1108,11 +1123,11 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
>              entry->offset += TARGET_PAGE_SIZE;
>          } else {
>              memory_region_unref(block->mr);
> -            QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> +            QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
>              g_free(entry);
>          }
>      }
> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
> +    qemu_mutex_unlock(&rs->src_page_req_mutex);
>  
>      return block;
>  }
> @@ -1125,13 +1140,11 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
>   * Returns true if a queued page is found
>   *
>   * @rs: current RAM state
> - * @ms: current migration state
>   * @pss: data about the state of the current dirty page scan
>   * @ram_addr_abs: pointer into which to store the address of the dirty page
>   *                within the global ram_addr space
>   */
> -static bool get_queued_page(RAMState *rs, MigrationState *ms,
> -                            PageSearchStatus *pss,
> +static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
>                              ram_addr_t *ram_addr_abs)
>  {
>      RAMBlock  *block;
> @@ -1139,7 +1152,7 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
>      bool dirty;
>  
>      do {
> -        block = unqueue_page(ms, &offset, ram_addr_abs);
> +        block = unqueue_page(rs, &offset, ram_addr_abs);
>          /*
>           * We're sending this page, and since it's postcopy nothing else
>           * will dirty it, and we must make sure it doesn't get sent again
> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
>   *
>   * It should be empty at the end anyway, but in error cases there may
>   * xbe some left.
> - *
> - * @ms: current migration state
>   */
> -void flush_page_queue(MigrationState *ms)
> +void flush_page_queue(void)

I'm not sure this is safe; it's called from migrate_fd_cleanup right at
the end.  If you do any finalisation/cleanup of the RAMState in
ram_save_complete, then when is it safe to run this?

>  {
> -    struct MigrationSrcPageRequest *mspr, *next_mspr;
> +    struct RAMSrcPageRequest *mspr, *next_mspr;
> +    RAMState *rs = &ram_state;
>      /* This queue generally should be empty - but in the case of a failed
>       * migration might have some droppings in.
>       */
>      rcu_read_lock();
> -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
> +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
>          memory_region_unref(mspr->rb->mr);
> -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
>          g_free(mspr);
>      }
>      rcu_read_unlock();
> @@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>          goto err;
>      }
>  
> -    struct MigrationSrcPageRequest *new_entry =
> -        g_malloc0(sizeof(struct MigrationSrcPageRequest));
> +    struct RAMSrcPageRequest *new_entry =
> +        g_malloc0(sizeof(struct RAMSrcPageRequest));
>      new_entry->rb = ramblock;
>      new_entry->offset = start;
>      new_entry->len = len;
>  
>      memory_region_ref(ramblock->mr);
> -    qemu_mutex_lock(&ms->src_page_req_mutex);
> -    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
> +    qemu_mutex_lock(&rs->src_page_req_mutex);
> +    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
> +    qemu_mutex_unlock(&rs->src_page_req_mutex);

Hmm, OK, where did it get its rs from?
Anyway, the thing I needed to convince myself of was whether there was
any guarantee that RAMState would exist by the time the first request
came in, something that we now need to be careful of.
I think we're mostly OK; we call qemu_savevm_state_begin() at the top
of migration_thread so the ram_save_setup should be done and allocate
the RAMState before we get into the main loop and thus before we ever
look at the 'start_postcopy' flag and thus before we ever ask the destination
to send us stuff.

>      rcu_read_unlock();
>  
>      return 0;
> @@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
>  
>      do {
>          again = true;
> -        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
> +        found = get_queued_page(rs, &pss, &dirty_ram_abs);
>  
>          if (!found) {
>              /* priority queue empty, so just search for something dirty */
> @@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
>  
>      memset(rs, 0, sizeof(*rs));
>      qemu_mutex_init(&rs->bitmap_mutex);
> +    qemu_mutex_init(&rs->src_page_req_mutex);
> +    QSIMPLEQ_INIT(&rs->src_page_requests);

Similar question to above; that mutex is going to get reinit'd
on a new migration and it shouldn't be without it being destroyed.
Maybe make it a once.
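Outside QEMU, the "make it a once" idea looks roughly like this — a sketch using plain pthreads, where the names are stand-ins for the fields under discussion, not QEMU's actual API:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-ins for the RAMState fields being discussed. */
static pthread_mutex_t src_page_req_mutex;
static pthread_once_t src_page_req_once = PTHREAD_ONCE_INIT;
static int init_runs;   /* counts how often the init body really runs */

static void src_page_req_mutex_init(void)
{
    pthread_mutex_init(&src_page_req_mutex, NULL);
    init_runs++;
}

/* Called on every migration attempt; the init body runs only the first
 * time, so a second migration never re-inits a mutex that was not
 * destroyed. */
static void ram_state_init_sketch(void)
{
    pthread_once(&src_page_req_once, src_page_req_mutex_init);
}
```

The trade-off is that a once-guarded mutex lives for the whole process rather than per migration.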

Dave

>  
>      if (migrate_use_xbzrle()) {
>          XBZRLE_cache_lock();
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 48/51] ram: Use ramblock and page offset instead of absolute offset
  2017-03-23 20:45 ` [Qemu-devel] [PATCH 48/51] ram: Use ramblock and page offset instead of absolute offset Juan Quintela
@ 2017-03-31 17:17   ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-03-31 17:17 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> This removes the need to also pass the absolute offset.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>  migration/ram.c        | 56 ++++++++++++++++++++++----------------------------
>  migration/trace-events |  2 +-
>  2 files changed, 26 insertions(+), 32 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index ef3b428..3f283ba 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -611,12 +611,10 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>   * @rs: current RAM state
>   * @rb: RAMBlock where to search for dirty pages
>   * @start: page where we start the search
> - * @page: pointer into where to store the dirty page
>   */
>  static inline
>  unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> -                                          unsigned long start,
> -                                          unsigned long *page)
> +                                          unsigned long start)
>  {
>      unsigned long base = rb->offset >> TARGET_PAGE_BITS;
>      unsigned long nr = base + start;
> @@ -633,17 +631,18 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
>          next = find_next_bit(bitmap, size, nr);
>      }
>  
> -    *page = next;
>      return next - base;
>  }
>  
>  static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> +                                                RAMBlock *rb,
>                                                  unsigned long page)
>  {
>      bool ret;
>      unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
> +    unsigned long nr = (rb->offset >> TARGET_PAGE_BITS) + page;
>  
> -    ret = test_and_clear_bit(page, bitmap);
> +    ret = test_and_clear_bit(nr, bitmap);
>  
>      if (ret) {
>          rs->migration_dirty_pages--;
> @@ -1057,10 +1056,9 @@ static int ram_save_compressed_page(RAMState *rs, PageSearchStatus *pss,
>   * @again: set to false if the search has scanned the whole of RAM
>   * @page: pointer into where to store the dirty page
>   */
> -static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
> -                             bool *again, unsigned long *page)
> +static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
>  {
> -    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page, page);
> +    pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>      if (pss->complete_round && pss->block == rs->last_seen_block &&
>          pss->page >= rs->last_page) {
>          /*
> @@ -1110,8 +1108,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>   * @offset: used to return the offset within the RAMBlock
>   * @page: pointer into where to store the dirty page
>   */
> -static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
> -                              unsigned long *page)
> +static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
>  {
>      RAMBlock *block = NULL;
>  
> @@ -1121,7 +1118,6 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
>                                  QSIMPLEQ_FIRST(&rs->src_page_requests);
>          block = entry->rb;
>          *offset = entry->offset;
> -        *page = (entry->offset + entry->rb->offset) >> TARGET_PAGE_BITS;
>  
>          if (entry->len > TARGET_PAGE_SIZE) {
>              entry->len -= TARGET_PAGE_SIZE;
> @@ -1148,15 +1144,14 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset,
>   * @pss: data about the state of the current dirty page scan
>   * @page: pointer into where to store the dirty page
>   */
> -static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
> -                            unsigned long *page)
> +static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
>  {
>      RAMBlock  *block;
>      ram_addr_t offset;
>      bool dirty;
>  
>      do {
> -        block = unqueue_page(rs, &offset, page);
> +        block = unqueue_page(rs, &offset);
>          /*
>           * We're sending this page, and since it's postcopy nothing else
>           * will dirty it, and we must make sure it doesn't get sent again
> @@ -1165,16 +1160,18 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss,
>           */
>          if (block) {
>              unsigned long *bitmap;
> +            unsigned long page;
> +
>              bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
> -            dirty = test_bit(*page, bitmap);
> +            page = (block->offset + offset) >> TARGET_PAGE_BITS;
> +            dirty = test_bit(page, bitmap);
>              if (!dirty) {
>                  trace_get_queued_page_not_dirty(block->idstr, (uint64_t)offset,
> -                    *page,
> -                    test_bit(*page,
> +                    page,
> +                    test_bit(page,
>                               atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
>              } else {
> -                trace_get_queued_page(block->idstr, (uint64_t)offset,
> -                                     *page);
> +                trace_get_queued_page(block->idstr, (uint64_t)offset, page);
>              }
>          }
>  
> @@ -1300,16 +1297,17 @@ err:
>   * @ms: current migration state
>   * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
> - * @page: page number of the dirty page
>   */
>  static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
> -                                bool last_stage, unsigned long page)
> +                                bool last_stage)
>  {
>      int res = 0;
>  
>      /* Check the pages is dirty and if it is send it */
> -    if (migration_bitmap_clear_dirty(rs, page)) {
> +    if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>          unsigned long *unsentmap;
> +        unsigned long page =
> +            (pss->block->offset >> TARGET_PAGE_BITS) + pss->page;
>          if (!rs->preffer_xbzrle && migrate_use_compression()) {
>              res = ram_save_compressed_page(rs, pss, last_stage);
>          } else {
> @@ -1343,25 +1341,22 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>   * @ms: current migration state
>   * @pss: data about the page we want to send
>   * @last_stage: if we are at the completion stage
> - * @page: Page number of the dirty page
>   */
>  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
> -                              bool last_stage,
> -                              unsigned long page)
> +                              bool last_stage)
>  {
>      int tmppages, pages = 0;
>      size_t pagesize_bits =
>          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
>  
>      do {
> -        tmppages = ram_save_target_page(rs, pss, last_stage, page);
> +        tmppages = ram_save_target_page(rs, pss, last_stage);
>          if (tmppages < 0) {
>              return tmppages;
>          }
>  
>          pages += tmppages;
>          pss->page++;
> -        page++;
>      } while (pss->page & (pagesize_bits - 1));
>  
>      /* The offset we leave with is the last one we looked at */
> @@ -1388,7 +1383,6 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>      PageSearchStatus pss;
>      int pages = 0;
>      bool again, found;
> -    unsigned long page; /* Page number of the dirty page */
>  
>      /* No dirty page as there is zero RAM */
>      if (!ram_bytes_total()) {
> @@ -1405,15 +1399,15 @@ static int ram_find_and_save_block(RAMState *rs, bool last_stage)
>  
>      do {
>          again = true;
> -        found = get_queued_page(rs, &pss, &page);
> +        found = get_queued_page(rs, &pss);
>  
>          if (!found) {
>              /* priority queue empty, so just search for something dirty */
> -            found = find_dirty_block(rs, &pss, &again, &page);
> +            found = find_dirty_block(rs, &pss, &again);
>          }
>  
>          if (found) {
> -            pages = ram_save_host_page(rs, &pss, last_stage, page);
> +            pages = ram_save_host_page(rs, &pss, last_stage);
>          }
>      } while (!pages && again);
>  
> diff --git a/migration/trace-events b/migration/trace-events
> index 7372ce2..0a3f033 100644
> --- a/migration/trace-events
> +++ b/migration/trace-events
> @@ -63,7 +63,7 @@ put_qtailq_end(const char *name, const char *reason) "%s %s"
>  qemu_file_fclose(void) ""
>  
>  # migration/ram.c
> -get_queued_page(const char *block_name, uint64_t tmp_offset, uint64_t ram_addr) "%s/%" PRIx64 " ram_addr=%" PRIx64
> +get_queued_page(const char *block_name, uint64_t tmp_offset, unsigned long page) "%s/%" PRIx64 " page=%lu"
>  get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, uint64_t ram_addr, int sent) "%s/%" PRIx64 " ram_addr=%" PRIx64 " (sent=%d)"
>  migration_bitmap_sync_start(void) ""
>  migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64
> -- 
> 2.9.3
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-31 15:25     ` Dr. David Alan Gilbert
@ 2017-04-01  7:15       ` Peter Xu
  2017-04-05 10:27         ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 167+ messages in thread
From: Peter Xu @ 2017-04-01  7:15 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Juan Quintela, qemu-devel

On Fri, Mar 31, 2017 at 04:25:56PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Thu, Mar 23, 2017 at 09:45:23PM +0100, Juan Quintela wrote:
> > > This are the last postcopy fields still at MigrationState.  Once there
> > 
> > s/This/These/
> > 
> > > Move MigrationSrcPageRequest to ram.c and remove MigrationState
> > > parameters where appropriate.
> > > 
> > > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > 
> > Reviewed-by: Peter Xu <peterx@redhat.com>
> > 
> > One question below though...
> > 
> > [...]
> > 
> > > @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
> > >   *
> > >   * It should be empty at the end anyway, but in error cases there may
> > >   * xbe some left.
> > > - *
> > > - * @ms: current migration state
> > >   */
> > > -void flush_page_queue(MigrationState *ms)
> > > +void flush_page_queue(void)
> > >  {
> > > -    struct MigrationSrcPageRequest *mspr, *next_mspr;
> > > +    struct RAMSrcPageRequest *mspr, *next_mspr;
> > > +    RAMState *rs = &ram_state;
> > >      /* This queue generally should be empty - but in the case of a failed
> > >       * migration might have some droppings in.
> > >       */
> > >      rcu_read_lock();
> > 
> > Could I ask why we are taking the RCU read lock rather than the mutex
> > here?
> 
> It's a good question whether we need anything at all.
> flush_page_queue is called only from migrate_fd_cleanup.
> migrate_fd_cleanup is called either from a bottom half, which I think
> has the bql, or from a failure path in migrate_fd_connect.
> migrate_fd_connect is called from migration_channel_connect and
> rdma_start_outgoing_migration, which I think both end up at monitor
> commands, so also in the bql.
> 
> So I think we can probably just lose the rcu_read_lock/unlock.

Thanks for the confirmation.

(ps: even if we are not holding the BQL, we should not need this
 rcu_read_lock, right? My understanding is: if we want to protect
 src_page_requests, we would need the mutex, not the rcu lock; while for
 the memory_region_unref(), since we already hold the reference, it
 looks like we don't need any kind of locking either)
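Peter's point can be illustrated with a minimal model: draining the request queue is made safe by the list mutex, not by an RCU read-side section.  A plain-pthreads sketch with illustrative names (not QEMU's types):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Toy stand-in for RAMSrcPageRequest on a QSIMPLEQ. */
struct req {
    struct req *next;
};

static pthread_mutex_t req_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct req *req_head;

static void queue_req(void)
{
    struct req *r = calloc(1, sizeof(*r));
    pthread_mutex_lock(&req_mutex);
    r->next = req_head;
    req_head = r;
    pthread_mutex_unlock(&req_mutex);
}

/* Walk and free every entry with the list mutex held; it is the mutex,
 * not an rcu read lock, that makes this safe against a concurrent
 * queue_req(). */
static int flush_reqs(void)
{
    int freed = 0;
    pthread_mutex_lock(&req_mutex);
    while (req_head) {
        struct req *r = req_head;
        req_head = r->next;
        free(r);
        freed++;
    }
    pthread_mutex_unlock(&req_mutex);
    return freed;
}
```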

> 
> Dave
> 
> > 
> > > -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
> > > +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
> > >          memory_region_unref(mspr->rb->mr);
> > > -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> > > +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
> > >          g_free(mspr);
> > >      }
> > >      rcu_read_unlock();
> > 
> > Thanks,
> > 
> > -- peterx
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

-- peterx

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-31 14:43     ` Dr. David Alan Gilbert
@ 2017-04-03 20:40       ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-04-03 20:40 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: Peter Xu, qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Peter Xu (peterx@redhat.com) wrote:
>> Hi, Juan,
>> 
>> Got several nitpicks below... (along with some questions)
>> 
>> On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
>> 
>> [...]
>
>> > @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState
>> > *ms, PageSearchStatus *pss,
>> >  }
>> >  
>> >  /**
>> > - * flush_page_queue: Flush any remaining pages in the ram request queue
>> > - *    it should be empty at the end anyway, but in error cases there may be
>> > - *    some left.
>> > + * flush_page_queue: flush any remaining pages in the ram request queue
>> 
>> Here the comment says (just like mentioned in function name) that we
>> will "flush any remaining pages in the ram request queue", however in
>> the implementation, we should be only freeing everything in
>> src_page_requests. The problem is "flush" let me think about "flushing
>> the rest of the pages to the other side"... while it's not.
>> 
>> Would it be nice we just rename the function into something else, like
>> migration_page_queue_free()? We can tune the comments correspondingly
>> as well.
>
> Yes that probably would be a better name.

done
>> > - * Allocate data structures etc needed by incoming migration with postcopy-ram
>> > - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
>> > +/**
>> > + * ram_postococpy_incoming_init: allocate postcopy data structures
>> > + *
>> > + * Returns 0 for success and negative if there was one error
>> > + *
>> > + * @mis: current migration incoming state
>> > + *
>> > + * Allocate data structures etc needed by incoming migration with
>> > + * postcopy-ram postcopy-ram's similarly names
>> > + * postcopy_ram_incoming_init does the work
>> 
>> This sentence is slightly hard to understand... But I think the
>> function name explained itself enough though. :)
>
> A '.' after the first 'postcopy-ram' would make it more readable.
>
> Dave

Done.  Once there, I spelled postcopy correctly O:-)

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments
  2017-03-31 15:51   ` Dr. David Alan Gilbert
@ 2017-04-04 17:12     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-04-04 17:12 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Added doc comments for existing functions and rewrote them in
>> a common style.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 348 ++++++++++++++++++++++++++++++++++++--------------------
>>  1 file changed, 227 insertions(+), 121 deletions(-)
>> 

>>   *
>>   * If this is the 1st block, it also writes the block identification
>>   *
>> - * Returns: Number of bytes written
>> + * Returns the number of bytes written
>
> Do the doc tools recognise that to pick up the explanation
> for the return value?

No clue.  Following qemu/include/exec/memory.h

>> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>>   *          -1 means that xbzrle would be longer than normal
>>   *
>>   * @f: QEMUFile where to send the data
>> - * @current_data:
>> - * @current_addr:
>> + * @current_data: contents of the page
>
> That's wrong.  The point of current_data is that it gets updated by this
> function to point to the cache page whenever the data ends up in the cache.
> It's important then that the caller uses that pointer to save the data to
> disk/network rather than the original pointer, since the data that's saved
> must exactly match the cache contents even if the guest is still writing to it.

this is the current text:

* @current_data: pointer to the address of the page contents

This was Peter's suggestion.

Rest of suggestions included. 

Thanks, Juan.
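The @current_data contract Dave describes can be sketched like this (illustrative names and page size, not the actual xbzrle code): the save routine may repoint *current_data at its internal cache copy, and the caller must then write out that pointer rather than the original guest page.

```c
#include <assert.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 16

static unsigned char cache_page[SKETCH_PAGE_SIZE];

/* Snapshot the page into the cache and hand the stable cache copy back
 * through *current_data; the guest may keep writing to the original. */
static void save_page_sketch(unsigned char **current_data)
{
    memcpy(cache_page, *current_data, SKETCH_PAGE_SIZE);
    *current_data = cache_page;
}
```

The caller then saves from the updated pointer, so what goes on the wire matches the cache contents even if the guest dirties the page immediately afterwards.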

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration
  2017-03-31 14:34 ` [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration Dr. David Alan Gilbert
@ 2017-04-04 17:22   ` Juan Quintela
  2017-04-04 17:36     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-04-04 17:22 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Hi
>
> Some high level points:
>
>> Continuation of previous series, all review comments addressed. New things:
>> - Consolidate all function comments in the same style (yes, docs)
>> - Be much more careful with maintaining comments correct
>> - Move all postcopy fields to RAMState
>
>> - Move QEMUFile to RAMState
>> - rename qemu_target_page_bits() to qemu_target_page_size() to reflect use
>> - Remove MigrationState from functions that don't need it
>> - reorganize last_sent_block to the place where it is used/needed
>> - Move several places from offsets to pages
>> - Rename last_ram_offset() to last_ram_page() to reflect use
>
> An interesting question is what happens if we ever have multiple threads
> working on RAM at once, I assume you're thinking there will be multiple
> RAMStates?  It'll be interesting to see whether everything we have now got
> in RAMState is stuff that wants to be replicated that way.

Working on Paolo's suggestion of sending everything through multiple fds.
That requires multiple FDs.

Thanks, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH v2 00/51] Creating RAMState for migration
  2017-04-04 17:22   ` Juan Quintela
@ 2017-04-04 17:36     ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-04-04 17:36 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> >> Hi
> >
> > Some high level points:
> >
> >> Continuation of previous series, all review comments addressed. New things:
> >> - Consolidate all function comments in the same style (yes, docs)
> >> - Be much more careful with maintaining comments correct
> >> - Move all postcopy fields to RAMState
> >
> >> - Move QEMUFile to RAMState
> >> - rename qemu_target_page_bits() to qemu_target_page_size() to reflect use
> >> - Remove MigrationState from functions that don't need it
> >> - reorganize last_sent_block to the place where it is used/needed
> >> - Move several places from offsets to pages
> >> - Rename last_ram_offset() to last_ram_page() to reflect use
> >
> > An interesting question is what happens if we ever have multiple threads
> > working on RAM at once, I assume you're thinking there will be multiple
> > RAMStates?  It'll be interesting to see whether everything we have now got
> > in RAMState is stuff that wants to be replicated that way.
> 
> Working on Paolo's suggestion of sending everything through multiple fds.
> That requires multiple FDs.

Yes, but for example do we want multiple postcopy request queues?
Do we have one reverse stream for requests or multiple?

Dave

> Thanks, Juan.
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-03-31 16:52   ` Dr. David Alan Gilbert
@ 2017-04-04 17:42     ` Juan Quintela
  2017-04-05 10:34       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 167+ messages in thread
From: Juan Quintela @ 2017-04-04 17:42 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:

Hi

>> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
>>   *
>>   * It should be empty at the end anyway, but in error cases there may
>>   * xbe some left.
>> - *
>> - * @ms: current migration state
>>   */
>> -void flush_page_queue(MigrationState *ms)
>> +void flush_page_queue(void)
>
> I'm not sure this is safe; it's called from migrate_fd_cleanup right at
> the end.  If you do any finalisation/cleanup of the RAMState in
> ram_save_complete, then when is it safe to run this?

But, looking into it, I think that we should be able to move this into
ram_save_cleanup(), no?

We don't need it after that?
As an added bonus, we can remove the export of it.

>> @@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>>          goto err;
>>      }
>>  
>> -    struct MigrationSrcPageRequest *new_entry =
>> -        g_malloc0(sizeof(struct MigrationSrcPageRequest));
>> +    struct RAMSrcPageRequest *new_entry =
>> +        g_malloc0(sizeof(struct RAMSrcPageRequest));
>>      new_entry->rb = ramblock;
>>      new_entry->offset = start;
>>      new_entry->len = len;
>>  
>>      memory_region_ref(ramblock->mr);
>> -    qemu_mutex_lock(&ms->src_page_req_mutex);
>> -    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
>> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
>> +    qemu_mutex_lock(&rs->src_page_req_mutex);
>> +    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
>> +    qemu_mutex_unlock(&rs->src_page_req_mutex);
>
> Hmm, OK, where did it get its rs from?
> Anyway, the thing I needed to convince myself of was whether there was
> any guarantee that RAMState would exist by the time the first request
> came in, something that we now need to be careful of.
> I think we're mostly OK; we call qemu_savevm_state_begin() at the top
> of migration_thread so the ram_save_setup should be done and allocate
> the RAMState before we get into the main loop and thus before we ever
> look at the 'start_postcopy' flag and thus before we ever ask the destination
> to send us stuff.
>
>>      rcu_read_unlock();
>>  
>>      return 0;
>> @@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
>>  
>>      do {
>>          again = true;
>> -        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
>> +        found = get_queued_page(rs, &pss, &dirty_ram_abs);
>>  
>>          if (!found) {
>>              /* priority queue empty, so just search for something dirty */
>> @@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
>>  
>>      memset(rs, 0, sizeof(*rs));
>>      qemu_mutex_init(&rs->bitmap_mutex);
>> +    qemu_mutex_init(&rs->src_page_req_mutex);
>> +    QSIMPLEQ_INIT(&rs->src_page_requests);
>
> Similar question to above; that mutex is going to get reinit'd
> on a new migration and it shouldn't be without it being destroyed.
> Maybe make it a once.

Good catch.  I think the easiest way is to just move it into RAMState
proper, init it here, and destroy it at cleanup?
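That alternative — pairing the init with a destroy so each migration gets a fresh mutex — would look roughly like this (plain-pthreads sketch, illustrative names):

```c
#include <assert.h>
#include <pthread.h>

/* Toy stand-in for the RAMState struct under discussion. */
struct ram_state_sketch {
    pthread_mutex_t src_page_req_mutex;
};

static void ram_state_setup(struct ram_state_sketch *rs)
{
    pthread_mutex_init(&rs->src_page_req_mutex, NULL);
}

/* Destroying in cleanup is what makes the next setup's init legal. */
static void ram_state_cleanup(struct ram_state_sketch *rs)
{
    pthread_mutex_destroy(&rs->src_page_req_mutex);
}
```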

Later, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 42/51] ram: Pass RAMBlock to bitmap_sync
  2017-03-30 19:10       ` Dr. David Alan Gilbert
@ 2017-04-04 17:46         ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-04-04 17:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
>> > * Juan Quintela (quintela@redhat.com) wrote:
>> >> We change the meaning of start to be the offset from the beggining of
>> >> the block.
>> >
>> > s/beggining/beginning/
>> >
>> > Why do this?
>> > We have:
>> >    migration_bitmap_sync (all blocks)
>> >    migration_bitmap_sync_range - called per block
>> >    cpu_physical_memory_sync_dirty_bitmap
>> >
>> > Why keep migration_bitmap_sync_range having start/length as well as
>> > the block if you could just rename it to migration_bitmap_sync_block
>> > and just give it the rb?
>> > And since cpu_physical_memory_clear_dirty_range is lower level, why
>> > give it the rb?
>> 
>> I did it on the previous series, then I remembered that I was not going
>> to be able to sync only part of the range, as I will want in the future.
>> 
>> If you prefer, as an intermediate measure, to just move to blocks, I
>> can do that, but the change is really small and I'm not sure it makes
>> sense.
>
> OK then, but just comment it to say you want to.
> I'm still not sure if cpu_physical_memory_clear_dirty_range should have
> the RB; it feels like it's lower-level, kvm stuff rather than things
> that know about RAMBlocks.

The bitmap is going to be there in the following patch.  Not a lot that
can be done about that, no?

Right now we have:

- absolute address
- RAMblock
- byte offset inside block
- byte offset of ramblock
- Whole bitmaps (Migration, code and vga)
- migration bitmaps

This series moves the migration bitmap inside the RAMBlock.  And we have
the RAMBlock in the caller.  We could search for it there, but that looks
very inefficient.

I am trying to change all the code to use:

RAMblock pointer + target page offset inside ramblock

So we need to do far fewer calculations.
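The arithmetic being saved is the conversion between the two indexings.  A sketch, where TARGET_PAGE_BITS and the struct are illustrative stand-ins for QEMU's:

```c
#include <assert.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12   /* 4 KiB target pages, as an example */

struct ram_block_sketch {
    uint64_t offset;   /* byte offset of the block in ram_addr space */
};

/* (block, page-within-block) -> absolute page index into the global
 * migration bitmap; carrying a (RAMBlock, page) pair everywhere means
 * this computation happens only at the edges. */
static unsigned long page_abs(const struct ram_block_sketch *rb,
                              unsigned long page)
{
    return (unsigned long)(rb->offset >> TARGET_PAGE_BITS) + page;
}
```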

Later, Juan.

^ permalink raw reply	[flat|nested] 167+ messages in thread

* Re: [Qemu-devel] [PATCH 45/51] ram: Use page number instead of an address for the bitmap operations
  2017-03-31 12:22   ` Dr. David Alan Gilbert
@ 2017-04-04 18:21     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-04-04 18:21 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> We use an unsigned long for the page number.  Notice that our bitmaps
>> already got that for the index, so we have that limit.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 76 ++++++++++++++++++++++++++-------------------------------
>>  1 file changed, 34 insertions(+), 42 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 6cd77b5..b1a031e 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -611,13 +611,12 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
>>   * @rs: current RAM state
>>   * @rb: RAMBlock where to search for dirty pages
>>   * @start: starting address (typically so we can continue from previous page)
>> - * @ram_addr_abs: pointer into which to store the address of the dirty page
>> - *                within the global ram_addr space
>> + * @page: pointer into where to store the dirty page
>
> I'd prefer if you could call it 'page_abs' - it often gets tricky to know
> whether we're talking about a page offset within a RAMBlock or an
> offset within the whole bitmap.

I don't really care.  Changed.

> (I wish we had different index types)

This is C man!!
>> -                trace_get_queued_page(block->idstr,
>> -                                      (uint64_t)offset,
>> -                                      (uint64_t)*ram_addr_abs);
>> +                trace_get_queued_page(block->idstr, (uint64_t)offset,
>> +                                     *page);
>
> I think you need to fix the trace_ definitions for get_queued_page
> and get_queued_page_not_dirty they're currently taking uint64_t's for
> ram_addr and they now need to be long's (with the format changes).

Done.

Thanks, Juan.


* Re: [Qemu-devel] [PATCH 46/51] ram: Remember last_page instead of last_offset
  2017-03-31  9:09   ` Dr. David Alan Gilbert
@ 2017-04-04 18:24     ` Juan Quintela
  0 siblings, 0 replies; 167+ messages in thread
From: Juan Quintela @ 2017-04-04 18:24 UTC (permalink / raw)
  To: Dr. David Alan Gilbert; +Cc: qemu-devel

"Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/ram.c | 14 +++++++-------
>>  1 file changed, 7 insertions(+), 7 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index b1a031e..57b776b 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -171,8 +171,8 @@ struct RAMState {
>>      RAMBlock *last_seen_block;
>>      /* Last block from where we have sent data */
>>      RAMBlock *last_sent_block;
>> -    /* Last offset we have sent data from */
>> -    ram_addr_t last_offset;
>> +    /* Last dirty page we have sent */
>
> Can you make that 'Last dirty target page we have sent' 
> just so we know which shape page we're dealing with.

Done.

>> +    ram_addr_t last_page;
>>      /* last ram version we have seen */
>>      uint32_t last_version;
>>      /* We are in the first round */
>> @@ -1063,7 +1063,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss,
>>      pss->offset = migration_bitmap_find_dirty(rs, pss->block, pss->offset,
>>                                                page);
>>      if (pss->complete_round && pss->block == rs->last_seen_block &&
>> -        pss->offset >= rs->last_offset) {
>> +        pss->offset >= rs->last_page) {
>
> That's odd; isn't pss->offset still in bytes?

It is not odd, it is wrong.

Fixed.

Thanks, Juan.


* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-04-01  7:15       ` Peter Xu
@ 2017-04-05 10:27         ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-04-05 10:27 UTC (permalink / raw)
  To: Peter Xu; +Cc: Juan Quintela, qemu-devel

* Peter Xu (peterx@redhat.com) wrote:
> On Fri, Mar 31, 2017 at 04:25:56PM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > On Thu, Mar 23, 2017 at 09:45:23PM +0100, Juan Quintela wrote:
> > > > This are the last postcopy fields still at MigrationState.  Once there
> > > 
> > > s/This/These/
> > > 
> > > > Move MigrationSrcPageRequest to ram.c and remove MigrationState
> > > > parameters where appropiate.
> > > > 
> > > > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > > 
> > > Reviewed-by: Peter Xu <peterx@redhat.com>
> > > 
> > > One question below though...
> > > 
> > > [...]
> > > 
> > > > @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
> > > >   *
> > > >   * It should be empty at the end anyway, but in error cases there may
> > > >   * xbe some left.
> > > > - *
> > > > - * @ms: current migration state
> > > >   */
> > > > -void flush_page_queue(MigrationState *ms)
> > > > +void flush_page_queue(void)
> > > >  {
> > > > -    struct MigrationSrcPageRequest *mspr, *next_mspr;
> > > > +    struct RAMSrcPageRequest *mspr, *next_mspr;
> > > > +    RAMState *rs = &ram_state;
> > > >      /* This queue generally should be empty - but in the case of a failed
> > > >       * migration might have some droppings in.
> > > >       */
> > > >      rcu_read_lock();
> > > 
> > > Could I ask why we are taking the RCU read lock rather than the mutex
> > > here?
> > 
> > It's a good question whether we need anything at all.
> > flush_page_queue is called only from migrate_fd_cleanup.
> > migrate_fd_cleanup is called either from a backhalf, which I think has the bql,
> > or from a failure path in migrate_fd_connect.
> > migrate_fd_connect is called from migration_channel_connect and rdma_start_outgoing_migration
> > which I think both end up at monitor commands so also in the bql.
> > 
> > So I think we can probably just lose the rcu_read_lock/unlock.
> 
> Thanks for the confirmation.
> 
> (ps: even if we are not with bql, we should not need this
>  rcu_read_lock, right? My understanding is: if we want to protect
>  src_page_requests, we should need the mutex, not rcu lock; while for
>  the memory_region_unref() since we have had the reference, looks like
>  we don't need any kind of locking either)

Right; I guess the memory_region_unref might cause the memory region
to be cleaned up in that loop without the rcu locks, but I don't think
it's a problem even if they are cleaned up.

Dave

> > 
> > Dave
> > 
> > > 
> > > > -    QSIMPLEQ_FOREACH_SAFE(mspr, &ms->src_page_requests, next_req, next_mspr) {
> > > > +    QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
> > > >          memory_region_unref(mspr->rb->mr);
> > > > -        QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
> > > > +        QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
> > > >          g_free(mspr);
> > > >      }
> > > >      rcu_read_unlock();
> > > 
> > > Thanks,
> > > 
> > > -- peterx
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> 
> -- peterx
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
  2017-04-04 17:42     ` Juan Quintela
@ 2017-04-05 10:34       ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 167+ messages in thread
From: Dr. David Alan Gilbert @ 2017-04-05 10:34 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote:
> > * Juan Quintela (quintela@redhat.com) wrote:
> 
> Hi
> 
> >> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
> >>   *
> >>   * It should be empty at the end anyway, but in error cases there may
> >>   * xbe some left.
> >> - *
> >> - * @ms: current migration state
> >>   */
> >> -void flush_page_queue(MigrationState *ms)
> >> +void flush_page_queue(void)
> >
> > I'm not sure this is safe;  it's called from migrate_fd_cleanup right at
> > the end, if you do any finalisation/cleanup of the RAMState in
> > ram_save_complete
> > then when is it safe to run this?
> 
> But, looking into it, I think that we should be able to move this into
> ram_save_cleanup() no?
> 
> We don't need it after that?
> As an added bonus, we can remove the export of it.

As discussed on IRC, the thing I'm cautious about is getting the order
of cleanup right.
If you look at migration_completion you see we call
qemu_savevm_state_complete_postcopy() (which calls ram_save_complete)
before we call await_return_path_close_on_source  which ensures that the
thread that's handling requests from the destination and queuing them
has finished.

It seems right to make sure that thread has finished (and thus nothing
is trying to add anything to that queue) before trying to clean it up.

Dave

> >> @@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
> >>          goto err;
> >>      }
> >>  
> >> -    struct MigrationSrcPageRequest *new_entry =
> >> -        g_malloc0(sizeof(struct MigrationSrcPageRequest));
> >> +    struct RAMSrcPageRequest *new_entry =
> >> +        g_malloc0(sizeof(struct RAMSrcPageRequest));
> >>      new_entry->rb = ramblock;
> >>      new_entry->offset = start;
> >>      new_entry->len = len;
> >>  
> >>      memory_region_ref(ramblock->mr);
> >> -    qemu_mutex_lock(&ms->src_page_req_mutex);
> >> -    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
> >> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
> >> +    qemu_mutex_lock(&rs->src_page_req_mutex);
> >> +    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
> >> +    qemu_mutex_unlock(&rs->src_page_req_mutex);
> >
> > Hmm ok where did it get it's rs from?
> > Anyway, the thing I needed to convince myself of was that there was
> > any guarantee that
> > RAMState would exist by the time the first request came in, something
> > that we now need
> > to be careful of.
> > I think we're mostly OK; we call qemu_savevm_state_begin() at the top
> > of migration_thread so the ram_save_setup should be done and allocate
> > the RAMState before we get into the main loop and thus before we ever
> > look at the 'start_postcopy' flag and thus before we ever ask the destination
> > to send us stuff.
> >
> >>      rcu_read_unlock();
> >>  
> >>      return 0;
> >> @@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
> >>  
> >>      do {
> >>          again = true;
> >> -        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
> >> +        found = get_queued_page(rs, &pss, &dirty_ram_abs);
> >>  
> >>          if (!found) {
> >>              /* priority queue empty, so just search for something dirty */
> >> @@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
> >>  
> >>      memset(rs, 0, sizeof(*rs));
> >>      qemu_mutex_init(&rs->bitmap_mutex);
> >> +    qemu_mutex_init(&rs->src_page_req_mutex);
> >> +    QSIMPLEQ_INIT(&rs->src_page_requests);
> >
> > Similar question to above; that mutex is going to get reinit'd
> > on a new migration and it shouldn't be without it being destroyed.
> > Maybe make it a once.
> 
> good catch.  I think that the easiest way is to just move it into proper
> RAMState, init it here, and destroy it at cleanup?
> 
> Later, Juan.
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


