From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Wei Liu" <wl@xen.org>, "Roger Pau Monné" <roger.pau@citrix.com>,
	"Juergen Gross" <jgross@suse.com>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Ian Jackson" <iwj@xenproject.org>
Subject: [PATCH 03/12] libxenguest: short-circuit "all-dirty" handling
Date: Fri, 25 Jun 2021 15:18:53 +0200
Message-ID: <55875a26-7f1d-a6d9-9384-b03b3b2cb86d@suse.com>
In-Reply-To: <912fa390-f9e9-198a-9aee-39fdb9a28fcc@suse.com>

For one, it is unnecessary to fill a perhaps large chunk of memory with
all ones merely to mark every page as dirty. Add a new parameter to
send_dirty_pages() allowing callers to indicate that all pages are to be
sent, without the bitmap being consulted.

Furthermore, it is unnecessary to allocate the dirty bitmap at all when
all that's ever going to happen is a single all-dirty run, i.e. for a
non-live save to a plain stream.
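The only call site where the new flag's value isn't constant is the
pre-copy loop of send_memory_live(): on the first iteration everything is
sent anyway, so passing "x == 1" there replaces the former up-front
bitmap_set(). In condensed form (a sketch only; the policy callback,
progress reporting, and error paths of the real loop are elided):

    for ( x = 1; ; ++x )
    {
        /* Iteration 1 sends every page, so the bitmap isn't consulted. */
        rc = send_dirty_pages(ctx, stats.dirty_count, /* all_dirty */ x == 1);
        if ( rc )
            goto out;

        /* Later iterations: fetch-and-clear the hypervisor's log-dirty
           bitmap into dirty_bitmap, updating stats.dirty_count, then
           decide whether another pass is worthwhile. */
    }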

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -368,7 +368,7 @@ static int suspend_domain(struct xc_sr_c
  * Bitmap is bounded by p2m_size.
  */
 static int send_dirty_pages(struct xc_sr_context *ctx,
-                            unsigned long entries)
+                            unsigned long entries, bool all_dirty)
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t p;
@@ -379,7 +379,7 @@ static int send_dirty_pages(struct xc_sr
 
     for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
     {
-        if ( !test_bit(p, dirty_bitmap) )
+        if ( !all_dirty && !test_bit(p, dirty_bitmap) )
             continue;
 
         rc = add_to_batch(ctx, p);
@@ -411,12 +411,7 @@ static int send_dirty_pages(struct xc_sr
  */
 static int send_all_pages(struct xc_sr_context *ctx)
 {
-    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
-                                    &ctx->save.dirty_bitmap_hbuf);
-
-    bitmap_set(dirty_bitmap, ctx->save.p2m_size);
-
-    return send_dirty_pages(ctx, ctx->save.p2m_size);
+    return send_dirty_pages(ctx, ctx->save.p2m_size, true /* all_dirty */);
 }
 
 static int enable_logdirty(struct xc_sr_context *ctx)
@@ -508,9 +503,6 @@ static int send_memory_live(struct xc_sr
     int rc;
     int policy_decision;
 
-    DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
-                                    &ctx->save.dirty_bitmap_hbuf);
-
     precopy_policy_t precopy_policy = ctx->save.callbacks->precopy_policy;
     void *data = ctx->save.callbacks->data;
 
@@ -528,8 +520,6 @@ static int send_memory_live(struct xc_sr
     if ( precopy_policy == NULL )
         precopy_policy = simple_precopy_policy;
 
-    bitmap_set(dirty_bitmap, ctx->save.p2m_size);
-
     for ( ; ; )
     {
         policy_decision = precopy_policy(*policy_stats, data);
@@ -541,7 +531,7 @@ static int send_memory_live(struct xc_sr
             if ( rc )
                 goto out;
 
-            rc = send_dirty_pages(ctx, stats.dirty_count);
+            rc = send_dirty_pages(ctx, stats.dirty_count, x == 1);
             if ( rc )
                 goto out;
         }
@@ -687,7 +677,8 @@ static int suspend_and_send_dirty(struct
         }
     }
 
-    rc = send_dirty_pages(ctx, stats.dirty_count + ctx->save.nr_deferred_pages);
+    rc = send_dirty_pages(ctx, stats.dirty_count + ctx->save.nr_deferred_pages,
+                          false /* all_dirty */);
     if ( rc )
         goto out;
 
@@ -807,8 +798,11 @@ static int setup(struct xc_sr_context *c
     if ( rc )
         goto err;
 
-    dirty_bitmap = xc_hypercall_buffer_alloc_pages(
-        xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
+    dirty_bitmap = ctx->save.live || ctx->stream_type != XC_STREAM_PLAIN
+        ? xc_hypercall_buffer_alloc_pages(
+              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
+        : (void *)-1L;
+
     ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
                                   sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);

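For the final hunk, the allocation in setup() becomes conditional: the
bitmap is only needed when guest memory may be scanned more than once,
i.e. for live migration or for non-plain (checkpointing) streams; a
non-live plain-stream save performs exactly one all-dirty pass and never
reads it. Restated as a sketch (need_bitmap is an illustrative local, not
part of the patch; the (void *)-1L sentinel presumably serves to keep the
subsequent allocation-failure check from firing while remaining an
obviously invalid pointer):

    bool need_bitmap = ctx->save.live ||                    /* repeated pre-copy passes */
                       ctx->stream_type != XC_STREAM_PLAIN; /* checkpointed streams     */

    dirty_bitmap = need_bitmap
        ? xc_hypercall_buffer_alloc_pages(
              xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)))
        : (void *)-1L; /* never dereferenced on this path */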

