From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	teawater <teawaterz@linux.alibaba.com>,
	qemu-devel@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrey Gruzdev <andrey.gruzdev@virtuozzo.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: Re: [PATCH v3 6/7] migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination
Date: Wed, 4 Aug 2021 20:04:49 -0400	[thread overview]
Message-ID: <YQsrIQ4gvP6M+/rS@t490s> (raw)
In-Reply-To: <20210730085249.8246-7-david@redhat.com>

On Fri, Jul 30, 2021 at 10:52:48AM +0200, David Hildenbrand wrote:
> Currently, when someone (i.e., the VM) accesses discarded parts inside a
> RAMBlock with a RamDiscardManager managing the corresponding mapped memory
> region, postcopy will request migration of the corresponding page from the
> source. The source, however, will never answer, because it refuses to
> migrate such pages with undefined content ("logically unplugged"): the
> pages are never dirty, and get_queued_page() will consequently skip
> processing these postcopy requests.
> 
> In particular, reading discarded ("logically unplugged") ranges is supposed
> to work in some setups (for example, with current virtio-mem), although it
> rarely happens in practice: still, not placing a page would currently stall
> the VM, as it cannot make forward progress.
> 
> Let's check the state via the RamDiscardManager (the state, e.g., of
> virtio-mem, is migrated during precopy) and avoid sending a request
> that will never get answered. Place a fresh zero page instead to keep
> the VM working. This is the same behavior that would happen
> automatically without userfaultfd being active, when accessing virtual
> memory regions without populated pages -- "populate on demand".
> 
> For now, there are valid cases (as documented in the virtio-mem spec) where
> a VM might read discarded memory; in the future, we will disallow that.
> Then, we might want to handle that case differently, e.g., by warning the
> user that the VM seems to be misbehaving.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  migration/postcopy-ram.c | 31 +++++++++++++++++++++++++++----
>  migration/ram.c          | 21 +++++++++++++++++++++
>  migration/ram.h          |  1 +
>  3 files changed, 49 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index 2e9697bdd2..38cdfc09c3 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -671,6 +671,29 @@ int postcopy_wake_shared(struct PostCopyFD *pcfd,
>      return ret;
>  }
>  
> +static int postcopy_request_page(MigrationIncomingState *mis, RAMBlock *rb,
> +                                 ram_addr_t start, uint64_t haddr)
> +{
> +    void *aligned = (void *)(uintptr_t)(haddr & -qemu_ram_pagesize(rb));
> +
> +    /*
> +     * Discarded pages (via RamDiscardManager) are never migrated. On unlikely
> +     * access, place a zeropage, which will also set the relevant bits in the
> +     * recv_bitmap accordingly, so we won't try placing a zeropage twice.
> +     *
> +     * Checking a single bit is sufficient to handle pagesize > TPS as either
> +     * all relevant bits are set or not.
> +     */
> +    assert(QEMU_IS_ALIGNED(start, qemu_ram_pagesize(rb)));

Is this check for ramblock_page_is_discarded()?  If so, shall we move it into
that function, e.g., after memory_region_has_ram_discard_manager() returns true?
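
Something like this is what I had in mind (a rough, untested sketch; the body
is otherwise identical to what this patch adds):

bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start)
{
    if (rb->mr && memory_region_has_ram_discard_manager(rb->mr)) {
        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);
        MemoryRegionSection section = {
            .mr = rb->mr,
            .offset_within_region = start,
            .size = int128_get64(qemu_ram_pagesize(rb)),
        };

        /*
         * The alignment only matters when we're about to consult the
         * RamDiscardManager, so assert it here rather than in the
         * postcopy fault path.
         */
        assert(QEMU_IS_ALIGNED(start, qemu_ram_pagesize(rb)));

        return !ram_discard_manager_is_populated(rdm, &section);
    }
    return false;
}

That would also keep postcopy_request_page() a bit leaner for RAMBlocks that
don't have a RamDiscardManager at all.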

Other than that it looks good to me, thanks.

> +    if (ramblock_page_is_discarded(rb, start)) {
> +        bool received = ramblock_recv_bitmap_test_byte_offset(rb, start);
> +
> +        return received ? 0 : postcopy_place_page_zero(mis, aligned, rb);
> +    }
> +
> +    return migrate_send_rp_req_pages(mis, rb, start, haddr);
> +}
> +
>  /*
>   * Callback from shared fault handlers to ask for a page,
>   * the page must be specified by a RAMBlock and an offset in that rb
> @@ -690,7 +713,7 @@ int postcopy_request_shared_page(struct PostCopyFD *pcfd, RAMBlock *rb,
>                                          qemu_ram_get_idstr(rb), rb_offset);
>          return postcopy_wake_shared(pcfd, client_addr, rb);
>      }
> -    migrate_send_rp_req_pages(mis, rb, aligned_rbo, client_addr);
> +    postcopy_request_page(mis, rb, aligned_rbo, client_addr);
>      return 0;
>  }
>  
> @@ -984,8 +1007,8 @@ retry:
>               * Send the request to the source - we want to request one
>               * of our host page sizes (which is >= TPS)
>               */
> -            ret = migrate_send_rp_req_pages(mis, rb, rb_offset,
> -                                            msg.arg.pagefault.address);
> +            ret = postcopy_request_page(mis, rb, rb_offset,
> +                                        msg.arg.pagefault.address);
>              if (ret) {
>                  /* May be network failure, try to wait for recovery */
>                  if (ret == -EIO && postcopy_pause_fault_thread(mis)) {
> @@ -993,7 +1016,7 @@ retry:
>                      goto retry;
>                  } else {
>                      /* This is a unavoidable fault */
> -                    error_report("%s: migrate_send_rp_req_pages() get %d",
> +                    error_report("%s: postcopy_request_page() get %d",
>                                   __func__, ret);
>                      break;
>                  }
> diff --git a/migration/ram.c b/migration/ram.c
> index 9776919faa..01cea01774 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -912,6 +912,27 @@ static uint64_t ramblock_dirty_bitmap_clear_discarded_pages(RAMBlock *rb)
>      return cleared_bits;
>  }
>  
> +/*
> + * Check if a host-page aligned page falls into a discarded range as managed by
> + * a RamDiscardManager responsible for the mapped memory region of the RAMBlock.
> + *
> + * Note: The result is only stable while migration (precopy/postcopy).
> + */
> +bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start)
> +{
> +    if (rb->mr && memory_region_has_ram_discard_manager(rb->mr)) {
> +        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);
> +        MemoryRegionSection section = {
> +            .mr = rb->mr,
> +            .offset_within_region = start,
> +            .size = int128_get64(qemu_ram_pagesize(rb)),
> +        };
> +
> +        return !ram_discard_manager_is_populated(rdm, &section);
> +    }
> +    return false;
> +}
> +
>  /* Called with RCU critical section */
>  static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
>  {
> diff --git a/migration/ram.h b/migration/ram.h
> index 4833e9fd5b..dda1988f3d 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -72,6 +72,7 @@ void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
>  int64_t ramblock_recv_bitmap_send(QEMUFile *file,
>                                    const char *block_name);
>  int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
> +bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start);
>  
>  /* ram cache */
>  int colo_init_ram_cache(void);
> -- 
> 2.31.1
> 

-- 
Peter Xu

Thread overview: 22+ messages
2021-07-30  8:52 [PATCH v3 0/7] migration/ram: Optimize for virtio-mem via RamDiscardManager David Hildenbrand
2021-07-30  8:52 ` [PATCH v3 1/7] memory: Introduce replay_discarded callback for RamDiscardManager David Hildenbrand
2021-07-30  8:52 ` [PATCH v3 2/7] virtio-mem: Implement replay_discarded RamDiscardManager callback David Hildenbrand
2021-07-30  8:52 ` [PATCH v3 3/7] migration/ram: Don't passs RAMState to migration_clear_memory_region_dirty_bitmap_*() David Hildenbrand
2021-08-05  0:05   ` Peter Xu
2021-08-05  7:41   ` Philippe Mathieu-Daudé
2021-07-30  8:52 ` [PATCH v3 4/7] migration/ram: Handle RAMBlocks with a RamDiscardManager on the migration source David Hildenbrand
2021-08-05  0:06   ` Peter Xu
2021-07-30  8:52 ` [PATCH v3 5/7] virtio-mem: Drop precopy notifier David Hildenbrand
2021-07-30  8:52 ` [PATCH v3 6/7] migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination David Hildenbrand
2021-08-05  0:04   ` Peter Xu [this message]
2021-08-05  8:10     ` David Hildenbrand
2021-08-05 12:52       ` Peter Xu
2021-08-05  7:48   ` Philippe Mathieu-Daudé
2021-08-05  8:07     ` David Hildenbrand
2021-08-05  8:17       ` Philippe Mathieu-Daudé
2021-08-05  8:20         ` David Hildenbrand
2021-07-30  8:52 ` [PATCH v3 7/7] migration/ram: Handle RAMBlocks with a RamDiscardManager on background snapshots David Hildenbrand
2021-08-05  8:04   ` Philippe Mathieu-Daudé
2021-08-05  8:11     ` David Hildenbrand
2021-08-05  8:21       ` Philippe Mathieu-Daudé
2021-08-05  8:27         ` David Hildenbrand
