From: Stefano Stabellini <sstabellini@kernel.org>
To: Jan Beulich <JBeulich@suse.com>
Cc: anthony.perard@citrix.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: "xl vcpu-set" not persistent across reboot?
Date: Mon, 6 Jun 2016 14:07:46 +0100 (BST)
Message-ID: <alpine.DEB.2.10.1606061404080.6721@sstabellini-ThinkPad-X260>
In-Reply-To: <575558C302000078000F1E6F@prv-mh.provo.novell.com>

On Mon, 6 Jun 2016, Jan Beulich wrote:
> >>> On 03.06.16 at 18:35, <wei.liu2@citrix.com> wrote:
> > I got a patch ready.  But QEMU upstream refuses to start on the receiving end
> > with the following error message:
> > 
> > qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
> > qemu-system-i386: load of migration failed: Invalid argument
> > 
> > With a QEMU traditional HVM guest and with a PV guest, the guest works
> > fine -- up and running with all hot-plugged cpus available.
> > 
> > So I think the relevant libxl information is transmitted, but we also
> > need to fix QEMU upstream. But that's a separate issue.

To clarify: you applied the patch below, started a VM, hotplugged a
vcpu, rebooted the guest, then migrated the VM, and at that point you
got the error above?
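
To be concrete, the sequence I have in mind is something like the
following ("guest" and "desthost" are just placeholder names):

    xl create guest.cfg          # start the VM
    xl vcpu-set guest 4          # hotplug vcpus, up to 4 online
    xl reboot guest              # reboot the guest
    xl migrate guest desthost    # migration then fails on the receiver?

Is that the order of operations?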

What are the QEMU command line arguments on the receiving side? Are you
sure that the increased vcpu count is passed to the receiving end by
libxl? It looks like QEMU was started on the receiving end with the old
vcpu count on its command line (-smp etc).
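
One way to check (assuming the device model process on the receiving
host is qemu-system-i386, as in the error above) is to inspect the
arguments QEMU was actually started with there:

    ps -ef | grep qemu-system-i386
    # or, given its pid:
    tr '\0' ' ' < /proc/<pid>/cmdline; echo

If -smp still carries the original vcpu count, then libxl is not
passing the updated value to the receiving end.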


> Stefano, Anthony,
> 
> any thoughts here?
> 
> Thanks, Jan
> 
> > ---8<---
> > From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
> > From: Wei Liu <wei.liu2@citrix.com>
> > Date: Fri, 3 Jun 2016 16:38:32 +0100
> > Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config
> > 
> > ... because the available vcpu bitmap can change during the domain's
> > lifetime due to cpu hotplug and unplug.
> > 
> > Reported-by: Jan Beulich <jbeulich@suse.com>
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  tools/libxl/libxl.c | 31 +++++++++++++++++++++++++++++++
> >  1 file changed, 31 insertions(+)
> > 
> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > index 006b83f..99617f3 100644
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -7270,6 +7270,37 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
> >          libxl_dominfo_dispose(&info);
> >      }
> >  
> > +    /* VCPUs */
> > +    {
> > +        libxl_vcpuinfo *vcpus;
> > +        libxl_bitmap *map;
> > +        int nr_vcpus, nr_cpus;
> > +        unsigned int i;
> > +
> > +        vcpus = libxl_list_vcpu(ctx, domid, &nr_vcpus, &nr_cpus);
> > +        if (!vcpus) {
> > +            LOG(ERROR, "failed to get vcpu list for domain %d", domid);
> > +            rc = ERROR_FAIL;
> > +            goto out;
> > +        }
> > +
> > +        /* Update the avail_vcpus bitmap accordingly */
> > +        map = &d_config->b_info.avail_vcpus;
> > +
> > +        libxl_bitmap_dispose(map);
> > +
> > +        libxl_bitmap_init(map);
> > +
> > +        libxl_bitmap_alloc(ctx, map, nr_vcpus);
> > +
> > +        for (i = 0; i < nr_vcpus; i++) {
> > +            if (vcpus[i].online)
> > +                libxl_bitmap_set(map, i);
> > +        }
> > +
> > +        libxl_vcpuinfo_list_free(vcpus, nr_vcpus);
> > +    }
> > +
> >      /* Memory limits:
> >       *
> >       * Currently there are three memory limits:
> > -- 
> > 2.1.4
> 
> 
> 
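
One note on the bitmap handling in the patch above: libxl_bitmap_init()
zeroes the structure, so it has to run before libxl_bitmap_alloc(),
otherwise the freshly allocated map is leaked and the subsequent
libxl_bitmap_set() calls operate on a zero-sized bitmap. A minimal
sketch of the pattern (standalone, with a made-up size of 8 bits and an
already initialized libxl_ctx *ctx):

    libxl_bitmap map;

    libxl_bitmap_init(&map);                /* zero the struct first */
    if (libxl_bitmap_alloc(ctx, &map, 8))   /* then allocate the bits */
        return ERROR_FAIL;
    libxl_bitmap_set(&map, 0);              /* now bits can be set */
    libxl_bitmap_dispose(&map);             /* free when done */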
