From: George Dunlap <george.dunlap@citrix.com>
To: Philipp Hahn <hahn@univention.de>
Cc: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Keir Fraser" <keir.fraser@citrix.com>,
	"Nico Stöckigt" <Stoeckigt@univention.de>,
	"xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [xen-4.1.6.1] SIGSEGV libxc/xc_save_domain.c: p2m_size >> configured_ram_size
Date: Mon, 13 Jun 2016 11:15:26 +0100
Message-ID: <CAFLBxZbfhrZY=RSZ0NEwLu2vvAfKNtvS0=BgHFWuYAEmnA+O4g@mail.gmail.com>
In-Reply-To: <575ADB31.7000303@univention.de>

On Fri, Jun 10, 2016 at 4:22 PM, Philipp Hahn <hahn@univention.de> wrote:
> Hi,
>
> while trying to live-migrate some VMs from a xen-4.1.6.1 host,
> "xc_save" crashes with a segmentation fault in
> tools/libxc/xc_domain_save.c:1141:
>>         /*
>>          * Quick belt and braces sanity check.
>>          */
>>         for ( i = 0; i < dinfo->p2m_size; i++ )
>>         {
>>             mfn = pfn_to_mfn(i);
>>             if( (mfn != INVALID_P2M_ENTRY) && (mfn_to_pfn(mfn) != i) )
>                                                  ^^^^^^^^^^^^^^^
> due to a de-reference through
>> #define pfn_to_mfn(_pfn)                                            \
>>   ((xen_pfn_t) ((dinfo->guest_width==8)                               \
>>                 ? (((uint64_t *)ctx->live_p2m)[(_pfn)])                  \
>>                 : ((((uint32_t *)ctx->live_p2m)[(_pfn)]) == 0xffffffffU  \
>>                    ? (-1UL) : (((uint32_t *)ctx->live_p2m)[(_pfn)]))))
>                                                             ^^^^^^^^
> The VM is a 32-bit Linux PV domain with maxmem=1997 MB.
>> (gdb) print _ctx
>> $1 = {hvirt_start = 4118806528, pt_levels = 3, max_mfn = 25690112, live_p2m = 0x7f421cc2e000, live_m2p = 0x7f421d02e000, m2p_mfn0 = 8649728, dinfo = {guest_width = 4, p2m_size = 1048576}}
>
> Note that p2m_size = 0x100000 = 4 GiB RAM / 4 KiB page size, which is
> far larger than the maxmem of the domU, so the loop doesn't end around
> the allocated ~2 GB but tries to go all the way up to the 32-bit
> maximum of 4 GB and fails.
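>
> For the arithmetic: 0x100000 pages * 4 KiB/page = 4 GiB, whereas
> maxmem = 1997 MB is only 1997 * 256 = 511232 = 0x7cd00 pages, so more
> than half of the loop's index range lies beyond the domain's memory;
> the first fault in the trace below hits at i = 0x7d000, i.e. the
> 2000 MB mark, just past that.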
>
> I've added more debugging to verify that:
>> xc: detail: p2m_size=0x100000
> ...
>> xc: detail: i=0x7cfff mfn=183ecf2 live_m2p=7cfff
>> segfault
>
> I can reproduce that easily doing
>  /usr/lib/xen/bin/xc_save 28 $domid 0 0 1 28>/dev/null
>
> Doing a non-live-migration (28 $domid 0 0 0) also fails, so it doesn't
> depend on being live.
>
> Increasing the configured memory size of the domU to 2001 MB doesn't
> change the problem:
>> xc: detail: p2m_size=0x100000
>> xc: detail: i=0x7cfff mfn=183ecf2 live_m2p=7cfff
>> xc: detail: i=0x7d000 mfn=183ecf1 live_m2p=7d000
> ...
>> xc: detail: i=0x7d0fd mfn=10aebce live_m2p=7d0fd
>> xc: detail: i=0x7d0fe mfn=10aebcd live_m2p=7d0fe
>> xc: detail: i=0x7d0ff mfn=10aebcc live_m2p=7d0ff
>> Segmentation fault (core dumped)
>
> Rebooting the domU also doesn't fix the problem.
>
> There was a similar report about live migration on this mailing list
> 8 years ago:
> <http://lists.xenproject.org/archives/html/xen-devel/2008-10/msg00107.html>
>
> The host is Linux-3.10.71 amd64, while the guest is Linux-4.1.16 i386
> with PAE. The guest kernel reports
>> [    0.000000] e820: last_pfn = 0x7d900 max_arch_pfn = 0x1000000
>
>> # cat iomem
>> 00000000-00000fff : reserved
>> 00001000-0009ffff : System RAM
>> 000a0000-000fffff : reserved
>>   000f0000-000fffff : System ROM
>> 00100000-7d8fffff : System RAM
>>   01000000-014e51d9 : Kernel code
>>   014e51da-016efd7f : Kernel data
>>   017a9000-0181cfff : Kernel bss
>> fee00000-fee00fff : Local APIC
>
>
> To me that looks like a bad interaction between
> 1. xc_domain_maximum_gpfn() not returning the right maximum, and
> 2. xc_map_foreign_pages() mapping only the really allocated pages
>    (see the sketch below).
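>
> As a simplified sketch of that combination (pseudo-code from memory,
> not the literal 4.1 source; the "+ 1" and the exact mapping call may
> differ):
>> /* the loop bound comes from the hypervisor's idea of the largest gpfn */
>> dinfo->p2m_size = xc_domain_maximum_gpfn(xch, dom) + 1;  /* 0x100000 here */
>>
>> /* ...but the p2m itself is mapped from the frames the guest really
>>  * owns, so ctx->live_p2m covers only ~0x7d000 entries */
>> ctx->live_p2m = xc_map_foreign_pages(xch, dom, PROT_READ,
>>                                      p2m_frame_list, P2M_FL_ENTRIES);
>>
>> /* any pfn_to_mfn(i) with i beyond those entries then dereferences
>>  * unmapped memory and faults */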
>
> Any idea?
> If you need more info, I can provide that on request.
>
> I know that the old migration implementation has been removed in
> xen-4.6, but I would still like to know how to fix this, since we will
> not have 4.6 for the next few months and I need working live migration.
>
> Thank you in advance and have a nice weekend.

Philipp,

Given that 4.1 is long out of support, we won't be making a proper fix
in-tree (since it will never be released).  So what kind of resolution
would be the most help to you?  A patch you can apply locally to allow
the save/restore to work?
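For instance, something along these lines might do it (a completely
untested sketch, not a real patch; `mapped_pfns` is a placeholder name
for the number of p2m entries the mapping actually covers, which you
would derive from the guest's own max_pfn rather than from
xc_domain_maximum_gpfn()):

    /* Hypothetical local workaround (sketch): bound the sanity check
     * by the number of p2m entries that are actually mapped, instead
     * of trusting xc_domain_maximum_gpfn(). */
    unsigned long mapped_pfns = guest_max_pfn + 1;  /* placeholder */

    for ( i = 0; i < dinfo->p2m_size && i < mapped_pfns; i++ )
    {
        mfn = pfn_to_mfn(i);
        if ( (mfn != INVALID_P2M_ENTRY) && (mfn_to_pfn(mfn) != i) )
        {
            /* ... existing mismatch handling stays as-is ... */
        }
    }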

 -George


Thread overview: 5 messages
2016-06-10 15:22 [xen-4.1.6.1] SIGSEGV libxc/xc_save_domain.c: p2m_size >> configured_ram_size Philipp Hahn
2016-06-13 10:15 ` George Dunlap [this message]
2016-06-13 11:14   ` Philipp Hahn
2016-06-17  9:31     ` Xen 4.1 maintenance? (Was Re: [xen-4.1.6.1] SIGSEGV libxc/xc_save_domain.c: p2m_size >> configured_ram_size) George Dunlap
2016-06-29  9:46       ` Stefan Bader
