* [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
@ 2013-07-16 10:52 Andrew Cooper
  2013-07-18 13:01 ` Andrew Cooper
  2013-07-18 16:20 ` Shriram Rajagopalan
  0 siblings, 2 replies; 10+ messages in thread
From: Andrew Cooper @ 2013-07-16 10:52 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 Sep 7 2010
  "libxc: provide notification of final checkpoint to restore end"
broke migration from any version of Xen using tools from before that commit.

Older tools have no idea about XC_SAVE_ID_LAST_CHECKPOINT, causing the newer
tools' xc_domain_restore() to start reading the qemu save record, as
ctx->last_checkpoint is 0.

The failure looks like:
  xc: error: Max batch size exceeded (1970103633). Giving up.
where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"
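
A minimal sketch of that decoding, assuming a little-endian host (the value
is just the first four bytes of the qemu signature read as a uint32_t):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* First four bytes of the qemu save record signature. */
      const char sig[4] = { 'Q', 'e', 'm', 'u' };
      uint32_t batch;

      memcpy(&batch, sig, sizeof(batch));    /* avoid strict-aliasing issues */
      printf("%u = 0x%08x\n", batch, batch); /* 1970103633 = 0x756d6551 */
      return 0;
  }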

Sadly, the simple fix of just setting ctx->last_checkpoint = 1 will cause an
opposite functional regression for anyone using the current behaviour of
save_callbacks->checkpoint().

The only safe fix is to rely on the toolstack to provide this information.

Passing 0 results in unchanged behaviour, while passing nonzero means "the
other end of the migration stream does not know about
XC_SAVE_ID_LAST_CHECKPOINT but is performing a normal migrate"

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

---
Changes since v1:
 * Rename "last_checkpoint_unaware" to "checkpointed_stream" to be more
   generic yet still accurate as to the meaning and fix for the issue
---
 tools/libxc/xc_domain_restore.c |    3 ++-
 tools/libxc/xc_nomigrate.c      |    2 +-
 tools/libxc/xenguest.h          |    3 ++-
 tools/libxl/libxl_save_helper.c |    2 +-
 tools/xcutils/xc_restore.c      |    2 +-
 5 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 63d36cd..e50b185 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -1401,7 +1401,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       domid_t store_domid, unsigned int console_evtchn,
                       unsigned long *console_mfn, domid_t console_domid,
                       unsigned int hvm, unsigned int pae, int superpages,
-                      int no_incr_generationid,
+                      int no_incr_generationid, int checkpointed_stream,
                       unsigned long *vm_generationid_addr,
                       struct restore_callbacks *callbacks)
 {
@@ -1472,6 +1472,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
 
     ctx->superpages = superpages;
     ctx->hvm = hvm;
+    ctx->last_checkpoint = !!checkpointed_stream;
 
     ctxt = xc_hypercall_buffer_alloc(xch, ctxt, sizeof(*ctxt));
 
diff --git a/tools/libxc/xc_nomigrate.c b/tools/libxc/xc_nomigrate.c
index 73e7566..fb6d53e 100644
--- a/tools/libxc/xc_nomigrate.c
+++ b/tools/libxc/xc_nomigrate.c
@@ -35,7 +35,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       domid_t store_domid, unsigned int console_evtchn,
                       unsigned long *console_mfn, domid_t console_domid,
                       unsigned int hvm, unsigned int pae, int superpages,
-                      int no_incr_generationid,
+                      int no_incr_generationid, int checkpointed_stream,
                       unsigned long *vm_generationid_addr,
                       struct restore_callbacks *callbacks)
 {
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index 4714bd2..2101810 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -114,6 +114,7 @@ struct restore_callbacks {
  * @parm pae non-zero if this HVM domain has PAE support enabled
  * @parm superpages non-zero to allocate guest memory with superpages
  * @parm no_incr_generationid non-zero if generation id is NOT to be incremented
+ * @parm checkpointed_stream non-zero if the far end of the stream is using checkpointing
  * @parm vm_generationid_addr returned with the address of the generation id buffer
  * @parm callbacks non-NULL to receive a callback to restore toolstack
  *       specific data
@@ -124,7 +125,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       domid_t store_domid, unsigned int console_evtchn,
                       unsigned long *console_mfn, domid_t console_domid,
                       unsigned int hvm, unsigned int pae, int superpages,
-                      int no_incr_generationid,
+                      int no_incr_generationid, int checkpointed_stream,
                       unsigned long *vm_generationid_addr,
                       struct restore_callbacks *callbacks);
 /**
diff --git a/tools/libxl/libxl_save_helper.c b/tools/libxl/libxl_save_helper.c
index 772251a..8f14b8b 100644
--- a/tools/libxl/libxl_save_helper.c
+++ b/tools/libxl/libxl_save_helper.c
@@ -264,7 +264,7 @@ int main(int argc, char **argv)
         r = xc_domain_restore(xch, io_fd, dom, store_evtchn, &store_mfn,
                               store_domid, console_evtchn, &console_mfn,
                               console_domid, hvm, pae, superpages,
-                              no_incr_genidad, &genidad,
+                              no_incr_genidad, 0, &genidad,
                               &helper_restore_callbacks);
         helper_stub_restore_results(store_mfn,console_mfn,genidad,0);
         complete(r);
diff --git a/tools/xcutils/xc_restore.c b/tools/xcutils/xc_restore.c
index 35d725c..e657c7f 100644
--- a/tools/xcutils/xc_restore.c
+++ b/tools/xcutils/xc_restore.c
@@ -52,7 +52,7 @@ main(int argc, char **argv)
 
     ret = xc_domain_restore(xch, io_fd, domid, store_evtchn, &store_mfn, 0,
                             console_evtchn, &console_mfn, 0, hvm, pae, superpages,
-                            0, NULL, NULL);
+                            0, 0, NULL, NULL);
 
     if ( ret == 0 )
     {
-- 
1.7.10.4


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-16 10:52 [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen Andrew Cooper
@ 2013-07-18 13:01 ` Andrew Cooper
  2013-07-18 13:14   ` Ian Campbell
  2013-07-18 16:20 ` Shriram Rajagopalan
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Cooper @ 2013-07-18 13:01 UTC (permalink / raw)
  To: Xen-devel List; +Cc: Ian Jackson, Ian Campbell

On 16/07/13 11:52, Andrew Cooper wrote:
> Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 Sep 7 2010
>   "libxc: provide notification of final checkpoint to restore end"
> broke migration from any version of Xen using tools from prior to that commit
>
> Older tools have no idea about an XC_SAVE_ID_LAST_CHECKPOINT, causing newer
> tools xc_domain_restore() to start reading the qemu save record, as
> ctx->last_checkpoint is 0.
>
> The failure looks like:
>   xc: error: Max batch size exceeded (1970103633). Giving up.
> where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"
>
> Sadly, the simple fix of just setting ctx->last_checkpoint = 1 will cause an
> opposite functional regression for anyone using the current behaviour of
> save_callbacks->checkpoint().
>
> The only safe fix is to rely on the toolstack to provide this information.
>
> Passing 0 results in unchanged behaviour, while passing nonzero means "the
> other end of the migration stream does not know about
> XC_SAVE_ID_LAST_CHECKPOINT but is performing a normal migrate"
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Apologies - missed the CC's when sending.

Ping?

>
> ---
> Changes since v1:
>  * Rename "last_checkpoint_unaware" to "checkpointed_stream" to be more
>    generic yet still accurate as to the meaning and fix for the issue
> ---
>  tools/libxc/xc_domain_restore.c |    3 ++-
>  tools/libxc/xc_nomigrate.c      |    2 +-
>  tools/libxc/xenguest.h          |    3 ++-
>  tools/libxl/libxl_save_helper.c |    2 +-
>  tools/xcutils/xc_restore.c      |    2 +-
>  5 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 63d36cd..e50b185 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -1401,7 +1401,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>                        domid_t store_domid, unsigned int console_evtchn,
>                        unsigned long *console_mfn, domid_t console_domid,
>                        unsigned int hvm, unsigned int pae, int superpages,
> -                      int no_incr_generationid,
> +                      int no_incr_generationid, int checkpointed_stream,
>                        unsigned long *vm_generationid_addr,
>                        struct restore_callbacks *callbacks)
>  {
> @@ -1472,6 +1472,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>  
>      ctx->superpages = superpages;
>      ctx->hvm = hvm;
> +    ctx->last_checkpoint = !!checkpointed_stream;
>  
>      ctxt = xc_hypercall_buffer_alloc(xch, ctxt, sizeof(*ctxt));
>  
> diff --git a/tools/libxc/xc_nomigrate.c b/tools/libxc/xc_nomigrate.c
> index 73e7566..fb6d53e 100644
> --- a/tools/libxc/xc_nomigrate.c
> +++ b/tools/libxc/xc_nomigrate.c
> @@ -35,7 +35,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>                        domid_t store_domid, unsigned int console_evtchn,
>                        unsigned long *console_mfn, domid_t console_domid,
>                        unsigned int hvm, unsigned int pae, int superpages,
> -                      int no_incr_generationid,
> +                      int no_incr_generationid, int checkpointed_stream,
>                        unsigned long *vm_generationid_addr,
>                        struct restore_callbacks *callbacks)
>  {
> diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
> index 4714bd2..2101810 100644
> --- a/tools/libxc/xenguest.h
> +++ b/tools/libxc/xenguest.h
> @@ -114,6 +114,7 @@ struct restore_callbacks {
>   * @parm pae non-zero if this HVM domain has PAE support enabled
>   * @parm superpages non-zero to allocate guest memory with superpages
>   * @parm no_incr_generationid non-zero if generation id is NOT to be incremented
> + * @parm checkpointed_stream non-zero if the far end of the stream is using checkpointing
>   * @parm vm_generationid_addr returned with the address of the generation id buffer
>   * @parm callbacks non-NULL to receive a callback to restore toolstack
>   *       specific data
> @@ -124,7 +125,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>                        domid_t store_domid, unsigned int console_evtchn,
>                        unsigned long *console_mfn, domid_t console_domid,
>                        unsigned int hvm, unsigned int pae, int superpages,
> -                      int no_incr_generationid,
> +                      int no_incr_generationid, int checkpointed_stream,
>                        unsigned long *vm_generationid_addr,
>                        struct restore_callbacks *callbacks);
>  /**
> diff --git a/tools/libxl/libxl_save_helper.c b/tools/libxl/libxl_save_helper.c
> index 772251a..8f14b8b 100644
> --- a/tools/libxl/libxl_save_helper.c
> +++ b/tools/libxl/libxl_save_helper.c
> @@ -264,7 +264,7 @@ int main(int argc, char **argv)
>          r = xc_domain_restore(xch, io_fd, dom, store_evtchn, &store_mfn,
>                                store_domid, console_evtchn, &console_mfn,
>                                console_domid, hvm, pae, superpages,
> -                              no_incr_genidad, &genidad,
> +                              no_incr_genidad, 0, &genidad,
>                                &helper_restore_callbacks);
>          helper_stub_restore_results(store_mfn,console_mfn,genidad,0);
>          complete(r);
> diff --git a/tools/xcutils/xc_restore.c b/tools/xcutils/xc_restore.c
> index 35d725c..e657c7f 100644
> --- a/tools/xcutils/xc_restore.c
> +++ b/tools/xcutils/xc_restore.c
> @@ -52,7 +52,7 @@ main(int argc, char **argv)
>  
>      ret = xc_domain_restore(xch, io_fd, domid, store_evtchn, &store_mfn, 0,
>                              console_evtchn, &console_mfn, 0, hvm, pae, superpages,
> -                            0, NULL, NULL);
> +                            0, 0, NULL, NULL);
>  
>      if ( ret == 0 )
>      {


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-18 13:01 ` Andrew Cooper
@ 2013-07-18 13:14   ` Ian Campbell
  0 siblings, 0 replies; 10+ messages in thread
From: Ian Campbell @ 2013-07-18 13:14 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Ian Jackson, Xen-devel List

On Thu, 2013-07-18 at 14:01 +0100, Andrew Cooper wrote:
> On 16/07/13 11:52, Andrew Cooper wrote:
> > Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 Sep 7 2010
> >   "libxc: provide notification of final checkpoint to restore end"
> > broke migration from any version of Xen using tools from prior to that commit
> >
> > Older tools have no idea about an XC_SAVE_ID_LAST_CHECKPOINT, causing newer
> > tools xc_domain_restore() to start reading the qemu save record, as
> > ctx->last_checkpoint is 0.
> >
> > The failure looks like:
> >   xc: error: Max batch size exceeded (1970103633). Giving up.
> > where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"
> >
> > Sadly, the simple fix of just setting ctx->last_checkpoint = 1 will cause an
> > opposite function regresson for anyone using the current behaviour of
> > save_callbacks->checkpoint().
> >
> > The only safe fix is to rely on the toolstack to provide this information.
> >
> > Passing 0 results in unchanged behaviour, while passing nonzero means "the
> > other end of the migration stream does not know about
> > XC_SAVE_ID_LAST_CHECKPOINT but is performing a normal migrate"

Can we just say "the other end is performing a normal migrate"? That's
what I was originally trying to suggest, I think it is as you have
implemented it anyway.

> > diff --git a/tools/libxl/libxl_save_helper.c b/tools/libxl/libxl_save_helper.c
> > index 772251a..8f14b8b 100644
> > --- a/tools/libxl/libxl_save_helper.c
> > +++ b/tools/libxl/libxl_save_helper.c
> > @@ -264,7 +264,7 @@ int main(int argc, char **argv)
> >          r = xc_domain_restore(xch, io_fd, dom, store_evtchn, &store_mfn,
> >                                store_domid, console_evtchn, &console_mfn,
> >                                console_domid, hvm, pae, superpages,
> > -                              no_incr_genidad, &genidad,
> > +                              no_incr_genidad, 0, &genidad,

Does the "xl remus" or "xl save -c" stuff not result in a call chain
which ends up here wanting to pass 1?

> >                                &helper_restore_callbacks);
> >          helper_stub_restore_results(store_mfn,console_mfn,genidad,0);
> >          complete(r);
> > diff --git a/tools/xcutils/xc_restore.c b/tools/xcutils/xc_restore.c
> > index 35d725c..e657c7f 100644
> > --- a/tools/xcutils/xc_restore.c
> > +++ b/tools/xcutils/xc_restore.c
> > @@ -52,7 +52,7 @@ main(int argc, char **argv)
> >  
> >      ret = xc_domain_restore(xch, io_fd, domid, store_evtchn, &store_mfn, 0,
> >                              console_evtchn, &console_mfn, 0, hvm, pae, superpages,
> > -                            0, NULL, NULL);
> > +                            0, 0, NULL, NULL);
> >  
> >      if ( ret == 0 )
> >      {
> 


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-16 10:52 [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen Andrew Cooper
  2013-07-18 13:01 ` Andrew Cooper
@ 2013-07-18 16:20 ` Shriram Rajagopalan
  2013-07-18 16:27   ` Andrew Cooper
  1 sibling, 1 reply; 10+ messages in thread
From: Shriram Rajagopalan @ 2013-07-18 16:20 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Ian Campbell, Xen-devel


On Tue, Jul 16, 2013 at 6:52 AM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 Sep 7 2010
>   "libxc: provide notification of final checkpoint to restore end"
> broke migration from any version of Xen using tools from prior to that
> commit
>
> Older tools have no idea about an XC_SAVE_ID_LAST_CHECKPOINT, causing newer
> tools xc_domain_restore() to start reading the qemu save record, as
> ctx->last_checkpoint is 0.
>
> The failure looks like:
>   xc: error: Max batch size exceeded (1970103633). Giving up.
> where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"
>
> Sadly, the simple fix of just setting ctx->last_checkpoint = 1 will cause
> an
> opposite functional regression for anyone using the current behaviour of
> save_callbacks->checkpoint().
>
> The only safe fix is to rely on the toolstack to provide this information.
>
> Passing 0 results in unchanged behaviour, while passing nonzero means "the
> other end of the migration stream does not know about
> XC_SAVE_ID_LAST_CHECKPOINT but is performing a normal migrate"
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> ---
> Changes since v1:
>  * Rename "last_checkpoint_unaware" to "checkpointed_stream" to be more
>    generic yet still accurate as to the meaning and fix for the issue
> ---
>  tools/libxc/xc_domain_restore.c |    3 ++-
>  tools/libxc/xc_nomigrate.c      |    2 +-
>  tools/libxc/xenguest.h          |    3 ++-
>  tools/libxl/libxl_save_helper.c |    2 +-
>  tools/xcutils/xc_restore.c      |    2 +-
>  5 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/tools/libxc/xc_domain_restore.c
> b/tools/libxc/xc_domain_restore.c
> index 63d36cd..e50b185 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -1401,7 +1401,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd,
> uint32_t dom,
>                        domid_t store_domid, unsigned int console_evtchn,
>                        unsigned long *console_mfn, domid_t console_domid,
>                        unsigned int hvm, unsigned int pae, int superpages,
> -                      int no_incr_generationid,
> +                      int no_incr_generationid, int checkpointed_stream,
>                        unsigned long *vm_generationid_addr,
>                        struct restore_callbacks *callbacks)
>  {
> @@ -1472,6 +1472,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd,
> uint32_t dom,
>
>      ctx->superpages = superpages;
>      ctx->hvm = hvm;
> +    ctx->last_checkpoint = !!checkpointed_stream;
>
>
shouldn't it be a single unary NOT?
i.e., ctx->last_checkpoint = !checkpointed_stream;

Otherwise, I am a bit confused. Basically, there is a memset(ctx, 0) right
before this.
And you seem to be passing checkpointed_stream = 0 via both xc_restore and
libxl which basically sets it back to ctx->last_checkpoint = 0.

Also, after getting pages from the last iteration, the code goes like this:
 if (ctx->last_checkpoint)
    goto finish;
 get pagebuf or goto finish on error
 get tailbuf or goto finish on error
 goto loadpages;

finish:
   ...

and you setting ctx->last_checkpoint = 0 basically means that you are
banking on the far end (with an older version of the tools) to close the socket,
causing
"get pagebuf" to fail and subsequently land on finish.
IIRC, this was the behavior before XC_SAVE_ID_LAST_CHECKPOINT was introduced
by Ian Campbell, to get rid of this benign error message.

"This allows the restore end to finish up without waiting for a
spurious timeout on the receive fd and thereby avoids unnecessary
error logging in the case of a successful migration or restore."

Further,
"In the normal migration or restore case the first checkpoint is always
the last. For a rolling checkpoint (such as Remus) the notification is
currently unused "


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-18 16:20 ` Shriram Rajagopalan
@ 2013-07-18 16:27   ` Andrew Cooper
  2013-07-18 18:47     ` Shriram Rajagopalan
  2013-07-19  9:36     ` Ian Campbell
  0 siblings, 2 replies; 10+ messages in thread
From: Andrew Cooper @ 2013-07-18 16:27 UTC (permalink / raw)
  To: rshriram; +Cc: Ian Campbell, Xen-devel


On 18/07/13 17:20, Shriram Rajagopalan wrote:
> On Tue, Jul 16, 2013 at 6:52 AM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>
>     Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 Sep 7 2010
>       "libxc: provide notification of final checkpoint to restore end"
>     broke migration from any version of Xen using tools from prior to
>     that commit
>
>     Older tools have no idea about an XC_SAVE_ID_LAST_CHECKPOINT,
>     causing newer
>     tools xc_domain_restore() to start reading the qemu save record, as
>     ctx->last_checkpoint is 0.
>
>     The failure looks like:
>       xc: error: Max batch size exceeded (1970103633). Giving up.
>     where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"
>
>     Sadly, the simple fix of just setting ctx->last_checkpoint = 1
>     will cause an
>     opposite functional regression for anyone using the current behaviour of
>     save_callbacks->checkpoint().
>
>     The only safe fix is to rely on the toolstack to provide this
>     information.
>
>     Passing 0 results in unchanged behaviour, while passing nonzero
>     means "the
>     other end of the migration stream does not know about
>     XC_SAVE_ID_LAST_CHECKPOINT but is performing a normal migrate"
>
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
>     ---
>     Changes since v1:
>      * Rename "last_checkpoint_unaware" to "checkpointed_stream" to be
>     more
>        generic yet still accurate as to the meaning and fix for the issue
>     ---
>      tools/libxc/xc_domain_restore.c |    3 ++-
>      tools/libxc/xc_nomigrate.c      |    2 +-
>      tools/libxc/xenguest.h          |    3 ++-
>      tools/libxl/libxl_save_helper.c |    2 +-
>      tools/xcutils/xc_restore.c      |    2 +-
>      5 files changed, 7 insertions(+), 5 deletions(-)
>
>     diff --git a/tools/libxc/xc_domain_restore.c
>     b/tools/libxc/xc_domain_restore.c
>     index 63d36cd..e50b185 100644
>     --- a/tools/libxc/xc_domain_restore.c
>     +++ b/tools/libxc/xc_domain_restore.c
>     @@ -1401,7 +1401,7 @@ int xc_domain_restore(xc_interface *xch, int
>     io_fd, uint32_t dom,
>                            domid_t store_domid, unsigned int
>     console_evtchn,
>                            unsigned long *console_mfn, domid_t
>     console_domid,
>                            unsigned int hvm, unsigned int pae, int
>     superpages,
>     -                      int no_incr_generationid,
>     +                      int no_incr_generationid, int
>     checkpointed_stream,
>                            unsigned long *vm_generationid_addr,
>                            struct restore_callbacks *callbacks)
>      {
>     @@ -1472,6 +1472,7 @@ int xc_domain_restore(xc_interface *xch, int
>     io_fd, uint32_t dom,
>
>          ctx->superpages = superpages;
>          ctx->hvm = hvm;
>     +    ctx->last_checkpoint = !!checkpointed_stream;
>
>
> shouldnt it be a single unary NOT ?
> i.e., ctx->last_checkpoint = !checkpointed_stream;

It should, which reminds me why I had it named the way it was before.  I
didn't think enough before running sed to change the name as per Ian's
suggestion.

With the current naming, you have to set checkpointed_stream=1 to get the
current behaviour, which means "possibly checkpointed or possibly not
but new enough that a non-checkpointed stream sends a LAST_CHECKPOINT
anyway"

>
> Otherwise, I am a bit confused. Basically, there is a memset(ctx, 0)
> right before this.
> And you seem to be passing checkpointed_stream = 0 via both xc_restore and
> libxl which basically sets it back to ctx->last_checkpoint = 0.
>
> Also, after getting pages from the last iteration, the code goes like
> this:
>  if (ctx->last_checkpoint)
>     goto finish;
>  get pagebuf or goto finish on error
>  get tailbuf or goto finish on error
>  goto loadpages;
>
> finish:
>    ...
>
> and you setting ctx->last_checkpoint = 0 basically means that you are
> banking on the far end (with older version of tools) to close the
> socket, causing
> "get pagebuf" to fail and subsequently land on finish.
> IIRC, this was the behavior before XC_SAVE_ID_LAST_CHECKPOINT was
> introduced
> by Ian Campbell, to get rid of this benign error message.

That might have been 'fine' for PV guests, but it causes active breakage
for HVM domains where the qemu save record immediately follows in the
migration stream.

~Andrew

>
> "This allows the restore end to finish up without waiting for a
> spurious timeout on the receive fd and thereby avoids unnecessary
> error logging in the case of a successful migration or restore."
>
> Further,
> "In the normal migration or restore case the first checkpoint is always
> the last. For a rolling checkpoint (such as Remus) the notification is
> currently unused "
>
>



* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-18 16:27   ` Andrew Cooper
@ 2013-07-18 18:47     ` Shriram Rajagopalan
  2013-07-22 17:52       ` Andrew Cooper
  2013-07-19  9:36     ` Ian Campbell
  1 sibling, 1 reply; 10+ messages in thread
From: Shriram Rajagopalan @ 2013-07-18 18:47 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Ian Campbell, Xen-devel


On Thu, Jul 18, 2013 at 12:27 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:

>  On 18/07/13 17:20, Shriram Rajagopalan wrote:
>
> On Tue, Jul 16, 2013 at 6:52 AM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
>> Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 Sep 7 2010
>>   "libxc: provide notification of final checkpoint to restore end"
>> broke migration from any version of Xen using tools from prior to that
>> commit
>>
>> Older tools have no idea about an XC_SAVE_ID_LAST_CHECKPOINT, causing
>> newer
>> tools xc_domain_restore() to start reading the qemu save record, as
>> ctx->last_checkpoint is 0.
>>
>>  The failure looks like:
>>   xc: error: Max batch size exceeded (1970103633). Giving up.
>> where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"
>>
>> Sadly, the simple fix of just setting ctx->last_checkpoint = 1 will cause
>> an
>> opposite functional regression for anyone using the current behaviour of
>> save_callbacks->checkpoint().
>>
>> The only safe fix is to rely on the toolstack to provide this information.
>>
>> Passing 0 results in unchanged behaviour, while passing nonzero means "the
>> other end of the migration stream does not know about
>> XC_SAVE_ID_LAST_CHECKPOINT but is performing a normal migrate"
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> ---
>>  Changes since v1:
>>  * Rename "last_checkpoint_unaware" to "checkpointed_stream" to be more
>>    generic yet still accurate as to the meaning and fix for the issue
>> ---
>>  tools/libxc/xc_domain_restore.c |    3 ++-
>>  tools/libxc/xc_nomigrate.c      |    2 +-
>>   tools/libxc/xenguest.h          |    3 ++-
>>  tools/libxl/libxl_save_helper.c |    2 +-
>>  tools/xcutils/xc_restore.c      |    2 +-
>>  5 files changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/tools/libxc/xc_domain_restore.c
>> b/tools/libxc/xc_domain_restore.c
>>  index 63d36cd..e50b185 100644
>> --- a/tools/libxc/xc_domain_restore.c
>> +++ b/tools/libxc/xc_domain_restore.c
>> @@ -1401,7 +1401,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd,
>> uint32_t dom,
>>                        domid_t store_domid, unsigned int console_evtchn,
>>                        unsigned long *console_mfn, domid_t console_domid,
>>                        unsigned int hvm, unsigned int pae, int superpages,
>> -                      int no_incr_generationid,
>>  +                      int no_incr_generationid, int checkpointed_stream,
>>                        unsigned long *vm_generationid_addr,
>>                        struct restore_callbacks *callbacks)
>>  {
>> @@ -1472,6 +1472,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd,
>> uint32_t dom,
>>
>>      ctx->superpages = superpages;
>>      ctx->hvm = hvm;
>>  +    ctx->last_checkpoint = !!checkpointed_stream;
>>
>>
>  shouldnt it be a single unary NOT ?
> i.e., ctx->last_checkpoint = !checkpointed_stream;
>
>
> It should, which reminds me why I had it named the way it was before.  I
> didn't think enough before running sed to change the name as per Ians
> suggestion.
>
>
So the !! is a bug, then:
ctx->last_checkpoint = !checkpointed_stream;

I know xend is deprecated, but xend+remus is currently the only working
version of remus, and this patch, especially setting the value to 0 in
xc_restore.c, breaks xend-remus.
We could at least mandate that migrations/remus in Xend will work only
between the same version of the toolstack.

 With the current naming, you have to set checpointed_stream=1 to get the
> current behaviour, which means "possibly checkpointed or possibly not but
> new enough that a non-checkpointed stream sends a LAST_CHECKPOINT anyway"
>
>
>
>  Otherwise, I am a bit confused. Basically, there is a memset(ctx, 0)
> right before this.
> And you seem to be passing checkpointed_stream = 0 via both xc_restore and
> libxl which basically sets it back to ctx->last_checkpoint = 0.
>
>  Also, after getting pages from the last iteration, the code goes like
> this:
>  if (ctx->last_checkpoint)
>     goto finish;
>  get pagebuf or goto finish on error
>  get tailbuf or goto finish on error
>  goto loadpages;
>
>  finish:
>    ...
>
>  and you setting ctx->last_checkpoint = 0 basically means that you are
> banking on the far end (with older version of tools) to close the socket,
> causing
> "get pagebuf" to fail and subsequently land on finish.
> IIRC, this was the behavior before XC_SAVE_ID_LAST_CHECKPOINT was
> introduced
> by Ian Campbell, to get rid of this benign error message.
>
>
> That might have been 'fine' for PV guests, but it cause active breakage
> for HVM domains where the qemu save record immediately follows in the
> migration stream.
>
>
Just to clarify, the code flows like this, IIRC.

loadpages:
 while (1)
     if !completed
        get pagebufs
        if 0 pages sent, break
     endif
     apply batch (pagebufs)
 endwhile

 if !completed
   get tailbuf [[this is where the QEMU record would be obtained]]
   completed = 1
 endif

 if last_checkpoint
   goto finish
 endif

 get pagebuf or goto finish on error ---> this is where old code used to
exit
 get tailbuf
goto loadpages
finish:
   apply tailbuf [tailbuf obtained inside the 'if !completed' block]
   do the rest of the restore

So are you sure that the max batch size error is a result of that
XC_SAVE_ID_LAST_CHECKPOINT?
After all, it's an "option" that's received from the remote end, not a
mandatory parameter.
If we revert to the old style of control flow, like the patch below, do you
still get the error?


diff -r a90bf2537141 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c   Mon Jun 10 21:06:47 2013 -0400
+++ b/tools/libxc/xc_domain_restore.c   Thu Jul 18 14:44:04 2013 -0400
@@ -1628,7 +1628,6 @@
          * If more checkpoints are expected then shift into
          * nonblocking mode for the remainder.
          */
-        if ( !ctx->last_checkpoint )
             fcntl(io_fd, F_SETFL, orig_io_fd_flags | O_NONBLOCK);

         /*


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-18 16:27   ` Andrew Cooper
  2013-07-18 18:47     ` Shriram Rajagopalan
@ 2013-07-19  9:36     ` Ian Campbell
  1 sibling, 0 replies; 10+ messages in thread
From: Ian Campbell @ 2013-07-19  9:36 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: rshriram, Xen-devel

On Thu, 2013-07-18 at 17:27 +0100, Andrew Cooper wrote:

> > 
> > and you setting ctx->last_checkpoint = 0 basically means that you
> > are
> > banking on the far end (with older version of tools) to close the
> > socket, causing
> > "get pagebuf" to fail and subsequently land on finish.
> > IIRC, this was the behavior before XC_SAVE_ID_LAST_CHECKPOINT was
> > introduced
> > by Ian Campbell, to get rid of this benign error message.
> 
> That might have been 'fine' for PV guests, but it cause active
> breakage for HVM domains where the qemu save record immediately
> follows in the migration stream.

There is a "tail" section on a PV migrate as well, it's not the qemu
save record, but how come this isn't impacted?

Ian.


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-18 18:47     ` Shriram Rajagopalan
@ 2013-07-22 17:52       ` Andrew Cooper
  2013-07-22 17:59         ` Ian Campbell
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Cooper @ 2013-07-22 17:52 UTC (permalink / raw)
  To: rshriram; +Cc: Ian Campbell, Xen-devel


On 18/07/13 19:47, Shriram Rajagopalan wrote:
> On Thu, Jul 18, 2013 at 12:27 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>
>     On 18/07/13 17:20, Shriram Rajagopalan wrote:
>>     On Tue, Jul 16, 2013 at 6:52 AM, Andrew Cooper
>>     <andrew.cooper3@citrix.com>
>>     wrote:
>>     and you setting ctx->last_checkpoint = 0 basically means that you are
>>     banking on the far end (with older version of tools) to close the
>>     socket, causing
>>     "get pagebuf" to fail and subsequently land on finish.
>>     IIRC, this was the behavior before XC_SAVE_ID_LAST_CHECKPOINT was
>>     introduced
>>     by Ian Campbell, to get rid of this benign error message.
>
>
>     That might have been 'fine' for PV guests, but it causes active
>     breakage for HVM domains where the qemu save record immediately
>     follows in the migration stream.
>
>
> Just to clarify.. the code flows like this, iirc.
>
> loadpages:
>  while (1)
>      if !completed
>         get pagebufs
>         if 0 pages sent, break
>      endif
>      apply batch (pagebufs)
>  endwhile
>
>  if !completed
>    get tailbuf [[this is where the QEMU record would be obtained]]
>    completed = 1
>  endif
>
>  if last_checkpoint
>    goto finish
>  endif
>
>  get pagebuf or goto finish on error ---> this is where old code used
> to exit
>  get tailbuf
> goto loadpages
> finish:
>    apply tailbuf [tailbuf obtained inside the 'if !completed' block]
>    do the rest of the restore

(Logically joining the two divergent threads, as this is the answer to both)

This has nothing to do with the buffering mode, and that is not where
the Qemu record would be obtained from.

As the code currently stands, if an XC_SAVE_ID_LAST_CHECKPOINT chunk is
not seen in the stream, we complete the loadpages: section, including
the magic pages (TSS, console info etc).

We then read the buffer_tail() on line 171, set ctx->completed on line
1725, but fail the ctx->last_checkpoint check on line 1758.

What we should do is pass the last_checkpoint test, and goto finish
which then calls dump_qemu().  What actually happens is a call to
pagebuf_get() on line 1766 which raises an error because of finding a
Qemu save record rather than more pages.

So this is very much a bug directly introduced by 00a4b65f85, and can
only be fixed with knowledge from the higher levels of the toolstack.
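
A toy model of that failure mode (the names, the stream layout and the qemu
signature here are illustrative only, not the real libxc code):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define MAX_BATCH 1024u   /* stand-in for libxc's real batch limit */

  /* If the receiver does not already know this was the last checkpoint,
   * it reads the next four bytes of the stream as a batch count and
   * trips the sanity check; if it does know, it goes straight on to
   * the qemu record. */
  static int read_next_batch(const unsigned char *stream, int last_checkpoint)
  {
      uint32_t count;

      if (last_checkpoint)
          return 0;                       /* go and read the qemu record */

      memcpy(&count, stream, sizeof(count));
      if (count > MAX_BATCH) {
          fprintf(stderr, "xc: error: Max batch size exceeded (%u). Giving up.\n",
                  (unsigned)count);
          return -1;
      }
      return (int)count;
  }

  int main(void)
  {
      /* What an old sender puts on the wire right after the tail. */
      const unsigned char old_sender[] = "QemuDeviceModelRecord";

      read_next_batch(old_sender, 0);   /* receiver unaware: misreads "Qemu" */
      read_next_batch(old_sender, 1);   /* receiver told it is a plain migrate */
      return 0;
  }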

As for the wording of the parameter, I still prefer the original
"last_checkpoint_unaware" over "checkpointed_stream" as it is more accurate.

Any migration stream started from a version of the tools after c/s
00a4b65f85 will work, whether it is checkpointed or not (as the
XC_SAVE_ID_LAST_CHECKPOINT chunk will be sent in the correct place). 
Any migration started from a version of the tools before c/s 00a4b65f85
will fail because it has no idea that the receiving end is expecting it
to insert an XC_SAVE_ID_LAST_CHECKPOINT chunk.  The fault here is with the
receiving end expecting to find an XC_SAVE_ID_LAST_CHECKPOINT chunk.

The only fix is for the newer toolstack to be aware that the migration
stream is from an older toolstack, and to set last_checkpoint_unaware=1,
which will set ctx->last_checkpoint to 1, allowing the receiving side of
the migration to correctly read the migration stream.

~Andrew


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-22 17:52       ` Andrew Cooper
@ 2013-07-22 17:59         ` Ian Campbell
  2013-07-22 18:06           ` Andrew Cooper
  0 siblings, 1 reply; 10+ messages in thread
From: Ian Campbell @ 2013-07-22 17:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: rshriram, Xen-devel

On Mon, 2013-07-22 at 18:52 +0100, Andrew Cooper wrote:
> On 18/07/13 19:47, Shriram Rajagopalan wrote:
> 
> > On Thu, Jul 18, 2013 at 12:27 PM, Andrew Cooper
> > <andrew.cooper3@citrix.com> wrote:
> >         On 18/07/13 17:20, Shriram Rajagopalan wrote:
> >         
> >         > On Tue, Jul 16, 2013 at 6:52 AM, Andrew Cooper
> >         > <andrew.cooper3@citrix.com> wrote: 
> >         > and you setting ctx->last_checkpoint = 0 basically means
> >         > that you are
> >         > banking on the far end (with older version of tools) to
> >         > close the socket, causing
> >         > "get pagebuf" to fail and subsequently land on finish.
> >         > IIRC, this was the behavior before
> >         > XC_SAVE_ID_LAST_CHECKPOINT was introduced
> >         > by Ian Campbell, to get rid of this benign error message.
> >         > 
> >         
> >         
> >         That might have been 'fine' for PV guests, but it causes
> >         active breakage for HVM domains where the qemu save record
> >         immediately follows in the migration stream.
> >         
> >         
> > 
> > 
> > Just to clarify.. the code flows like this, iirc.
> > 
> > 
> > loadpages:
> >  while (1)
> >      if !completed
> >         get pagebufs
> >         if 0 pages sent, break
> >      endif
> >      apply batch (pagebufs)
> >  endwhile
> > 
> > 
> >  if !completed
> >    get tailbuf [[this is where the QEMU record would be obtained]]
> >    completed = 1
> >  endif
> > 
> > 
> >  if last_checkpoint
> >    goto finish
> >  endif
> > 
> > 
> >  get pagebuf or goto finish on error ---> this is where old code
> > used to exit
> >  get tailbuf
> > goto loadpages
> > finish:
> >    apply tailbuf [tailbuf obtained inside the 'if !completed' block]
> >    do the rest of the restore
> 
> (Logically joining the two divergent threads, as this is the answer to
> both)
> 
> This has nothing to do with the buffering mode, and that is not where
> the Qemu record would be obtained from.
> 
> As the code currently stands, if a XC_SAVE_ID_LAST_CHECKPOINT chunk is
> not seen in the stream, we complete the loadpages: section, including
> the magic pages (TSS, console info etc).
> 
> We then read the buffer_tail() on line 171, set ctx->completed on line
> 1725, but fail the ctx->last_checkpoint check on line 1758.
> 
> What we should do is pass the last_checkpoint test, and goto finish
> which then calls dump_qemu().  What actually happens is a call to
> pagebuf_get() on line 1766 which raises an error because of finding a
> Qemu save record rather than more pages.
> 
> So this is very much a bug directly introduced by 00a4b65f85, and can
> only be fixed with knowledge from the higher levels of the toolstack.
> 
> As for the wording of the parameter, I still prefer the original
> "last_checkpoint_unaware" over "checkpointed_stream" as it is more
> accurate.
> 
> Any migration stream started from a version of the tools after c/s
> 00a4b65f85 will work, whether it is checkpointed or not (as the
> XC_SAVE_ID_LAST_CHECKPOINT chunk will be sent in the correct place).
> Any migration started from a version of the tools before c/s
> 00a4b65f85 will fail because it has no idea that the receiving end is
> expecting it to insert XC_SAVE_ID_LAST_CHECKPOINT chunk.  The fault
> here is with the receiving end expecting to find a
> XC_SAVE_ID_LAST_CHECKPOINT chunk.
> 
> The only fix is for newer toolstack to be aware that the migration
> stream is from an older toolstack, and to set
> last_checkpoint_unaware=1, which will set ctx->last_checkpoint to 1,
> allowing the receiving side of the migration to correctly read the
> migration stream.

The toolstack cannot in general know that (i.e. how does xl know?)

It does know if it is doing checkpointing or not though.

There are four cases:

Regular migrate from pre-00a4b65f85 to post-00a4b65f85. Receiver sets
last_checkpoint = 1 at start of day. Doesn't care that sender never
sends XC_SAVE_ID_LAST_CHECKPOINT.

Regular migrate from post-00a4b65f85 to post-00a4b65f85. Receiver sets
last_checkpoint = 1 at start of day. Doesn't care that it sees
XC_SAVE_ID_LAST_CHECKPOINT again, just ends up (pointlessly) setting
last_checkpoint = 1.

Checkpointed migrate from post-00a4b65f85 to post-00a4b65f85. Receiver
doesn't set last_checkpoint = 1 at start of day.
XC_SAVE_ID_LAST_CHECKPOINT is sent at the right time and both sides
agree etc. Things work.

Checkpointed migrate from pre-00a4b65f85 to post-00a4b65f85. We don't
support this, because checkpoints should only be supported between like
versions of Xen (N->N+1 case doesn't apply).
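
A minimal sketch of the receive-side rule those four cases boil down to (the
structure and names are illustrative, not the actual libxc context):

  #include <stdio.h>

  struct restore_ctx { int last_checkpoint; };

  /* A plain migrate/restore can finish as soon as the tail has been read,
   * even if the sender is too old to send XC_SAVE_ID_LAST_CHECKPOINT; a
   * checkpointed (Remus-style) stream must keep waiting for the marker. */
  static void init_restore_ctx(struct restore_ctx *ctx, int checkpointed_stream)
  {
      ctx->last_checkpoint = !checkpointed_stream;
  }

  int main(void)
  {
      struct restore_ctx plain, remus;

      init_restore_ctx(&plain, 0);   /* regular migrate or restore */
      init_restore_ctx(&remus, 1);   /* checkpointed (Remus) stream */
      printf("plain: %d, remus: %d\n",
             plain.last_checkpoint, remus.last_checkpoint);
      return 0;
  }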

Ian.


* Re: [Patch v2] tools/migrate: Fix regression when migrating from older version of Xen
  2013-07-22 17:59         ` Ian Campbell
@ 2013-07-22 18:06           ` Andrew Cooper
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Cooper @ 2013-07-22 18:06 UTC (permalink / raw)
  To: Ian Campbell; +Cc: rshriram, Xen-devel

On 22/07/13 18:59, Ian Campbell wrote:
> On Mon, 2013-07-22 at 18:52 +0100, Andrew Cooper wrote:
>> On 18/07/13 19:47, Shriram Rajagopalan wrote:
>>
>>> On Thu, Jul 18, 2013 at 12:27 PM, Andrew Cooper
>>> <andrew.cooper3@citrix.com> wrote:
>>>         On 18/07/13 17:20, Shriram Rajagopalan wrote:
>>>         
>>>         > On Tue, Jul 16, 2013 at 6:52 AM, Andrew Cooper
>>>         > <andrew.cooper3@citrix.com> wrote: 
>>>         > and you setting ctx->last_checkpoint = 0 basically means
>>>         > that you are
>>>         > banking on the far end (with older version of tools) to
>>>         > close the socket, causing
>>>         > "get pagebuf" to fail and subsequently land on finish.
>>>         > IIRC, this was the behavior before
>>>         > XC_SAVE_ID_LAST_CHECKPOINT was introduced
>>>         > by Ian Campbell, to get rid of this benign error message.
>>>         > 
>>>         
>>>         
>>>         That might have been 'fine' for PV guests, but it causes
>>>         active breakage for HVM domains where the qemu save record
>>>         immediately follows in the migration stream.
>>>         
>>>         
>>>
>>>
>>> Just to clarify.. the code flows like this, iirc.
>>>
>>>
>>> loadpages:
>>>  while (1)
>>>      if !completed
>>>         get pagebufs
>>>         if 0 pages sent, break
>>>      endif
>>>      apply batch (pagebufs)
>>>  endwhile
>>>
>>>
>>>  if !completed
>>>    get tailbuf [[this is where the QEMU record would be obtained]]
>>>    completed = 1
>>>  endif
>>>
>>>
>>>  if last_checkpoint
>>>    goto finish
>>>  endif
>>>
>>>
>>>  get pagebuf or goto finish on error ---> this is where old code
>>> used to exit
>>>  get tailbuf
>>> goto loadpages
>>> finish:
>>>    apply tailbuf [tailbuf obtained inside the 'if !completed' block]
>>>    do the rest of the restore
>> (Logically joining the two divergent threads, as this is the answer to
>> both)
>>
>> This has nothing to do with the buffering mode, and that is not where
>> the Qemu record would be obtained from.
>>
>> As the code currently stands, if a XC_SAVE_ID_LAST_CHECKPOINT chunk is
>> not seen in the stream, we complete the loadpages: section, including
>> the magic pages (TSS, console info etc).
>>
>> We then read the buffer_tail() on line 171, set ctx->completed on line
>> 1725, but fail the ctx->last_checkpoint check on line 1758.
>>
>> What we should do is pass the last_checkpoint test, and goto finish
>> which then calls dump_qemu().  What actually happens is a call to
>> pagebuf_get() on line 1766 which raises an error because of finding a
>> Qemu save record rather than more pages.
>>
>> So this is very much a bug directly introduced by 00a4b65f85, and can
>> only be fixed with knowledge from the higher levels of the toolstack.
>>
>> As for the wording of the parameter, I still prefer the original
>> "last_checkpoint_unaware" over "checkpointed_stream" as it is more
>> accurate.
>>
>> Any migration stream started from a version of the tools after c/s
>> 00a4b65f85 will work, whether it is checkpointed or not (as the
>> XC_SAVE_ID_LAST_CHECKPOINT chunk will be sent in the correct place).
>> Any migration started from a version of the tools before c/s
>> 00a4b65f85 will fail because it has no idea that the receiving end is
>> expecting it to insert XC_SAVE_ID_LAST_CHECKPOINT chunk.  The fault
>> here is with the receiving end expecting to find a
>> XC_SAVE_ID_LAST_CHECKPOINT chunk.
>>
>> The only fix is for newer toolstack to be aware that the migration
>> stream is from an older toolstack, and to set
>> last_checkpoint_unaware=1, which will set ctx->last_checkpoint to 1,
>> allowing the receiving side of the migration to correctly read the
>> migration stream.
> The toolstack cannot in general know that (i.e. how does xl know?)
>
> It does know if it is doing checkpointing or not though.
>
> There are four cases:
>
> Regular migrate from pre-00a4b65f85 to post-00a4b65f85. Receiver sets
> last_checkpoint = 1 at start of day. Doesn't care that sender never
> sends XC_SAVE_ID_LAST_CHECKPOINT.
>
> Regular migrate from post-00a4b65f85 to post-00a4b65f85. Receiver sets
> last_checkpoint = 1 at start of day. Doesn't care that it sees
> XC_SAVE_ID_LAST_CHECKPOINT again, just ends up (pointlessly) setting
> last_checkpoint = 1.
>
> Checkpointed migrate from post-00a4b65f85 to post-00a4b65f85. Receiver
> doesn't set last_checkpoint = 1 at start of day.
> XC_SAVE_ID_LAST_CHECKPOINT is sent at the right time and both sides
> agree etc. Things work.
>
> Checkpointed migrate from pre-00a4b65f85 to post-00a4b65f85. We don't
> support this, because checkpoints should only be supported between like
> versions of Xen (N->N+1 case doesn't apply).
>
> Ian.
>
>

Ok - that seems to make it easier.  I will see about plumbing the
correct information through from xend/remus.

~Andrew
