From: Julien Grall <julien.grall@arm.com>
To: Aaron Cornelius <aaron.cornelius@dornerworks.com>,
	Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: Xen 4.7 crash
Date: Mon, 6 Jun 2016 15:05:47 +0100
Message-ID: <5755833B.4090105@arm.com>
In-Reply-To: <f7cb027e-d5cd-4189-196f-7730181eccf7@dornerworks.com>

(CC Ian, Stefano and Wei)

Hello Aaron,

On 06/06/16 14:58, Aaron Cornelius wrote:
> On 6/2/2016 5:07 AM, Julien Grall wrote:
>> Hello Aaron,
>>
>> On 02/06/2016 02:32, Aaron Cornelius wrote:
>>> This is with a custom application, we use the libxl APIs to interact
>>> with Xen.  Domains are created using the libxl_domain_create_new()
>>> function, and domains are destroyed using the libxl_domain_destroy()
>>> function.
>>>
>>> The test in this case creates a domain, waits a minute, then deletes it
>>> and creates the next domain, waits a minute, and so on.  So I wouldn't
>>> be surprised to see the VMIDs occasionally indicate there are 2 active
>>> domains, since there could be one being created and one being destroyed
>>> within a very short window.  However, I wouldn't expect to ever have
>>> 256 domains.
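
A minimal sketch of that create/wait/destroy cycle using the libxl calls
named above (not the actual DornerWorks application; configuration setup
and error handling are elided, and cycle_one_domain is just an
illustrative name):

#include <unistd.h>
#include <libxl.h>

/* Hedged sketch of the test loop described above.  cfg is assumed to
 * already describe one of the 32MB Mirage guests. */
static int cycle_one_domain(libxl_ctx *ctx, libxl_domain_config *cfg)
{
    uint32_t domid;
    int rc;

    rc = libxl_domain_create_new(ctx, cfg, &domid, NULL, NULL);
    if (rc)
        return rc;

    sleep(60);                                   /* "waits a minute" */

    /* Any per-domain cleanup the application does (vchans, xenstore
     * permissions) would happen before this synchronous destroy. */
    return libxl_domain_destroy(ctx, domid, NULL);
}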
>>
>> Your log has:
>>
>> (XEN) grant_table.c:3288:d0v1 Grant release (0) ref:(9) flags:(2) dom:(0)
>> (XEN) grant_table.c:3288:d0v1 Grant release (1) ref:(11) flags:(2) dom:(0)
>>
>> Which suggests that some grants are still mapped in DOM0.
>>
>>>
>>> The CubieTruck only has 2GB of RAM, and I allocate 512MB for dom0, which
>>> means that only 48 of the Mirage domains (with 32MB of RAM each) could
>>> run at the same time anyway, and that doesn't account for the various
>>> inter-domain resources or the RAM used by Xen itself.
>>
>> All the pages belonging to the domain could have been freed except the
>> ones still referenced by DOM0, so the footprint of this domain should be
>> limited for the time being.
>>
>> I would recommend checking how many domains are running at this point
>> and whether DOM0 has effectively released all the resources.
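
A quick way to perform that check from the toolstack side, sketched with
the public libxl listing call (context setup and error handling omitted;
this is illustrative rather than code from the thread):

#include <stdio.h>
#include <libxl.h>

/* Rough sketch: ask Xen which domains still exist and whether any of
 * them are stuck half-destroyed (dying). */
static void dump_running_domains(libxl_ctx *ctx)
{
    int nb = 0;
    libxl_dominfo *info = libxl_list_domain(ctx, &nb);

    if (!info)
        return;

    for (int i = 0; i < nb; i++)
        printf("domid %u dying=%d shutdown=%d\n",
               info[i].domid, info[i].dying, info[i].shutdown);

    libxl_dominfo_list_free(info, nb);
}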
>>
>>> If the p2m_teardown() function checked for NULL it would prevent the
>>> crash, but I suspect Xen would be just as broken, since all of my
>>> resources have leaked away.  More broken, in fact: if the board reboots,
>>> at least the applications will restart and domains can be recreated.
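
For illustration only, the kind of guard being suggested might look
roughly like this; it is a sketch against an assumed p2m layout (a root
page pointer that is still NULL when domain construction fails early),
not the actual Xen 4.7 code:

/* Hypothetical guard, not the real xen/arch/arm/p2m.c: bail out of
 * teardown early if the p2m was never fully initialised, e.g. because
 * domain creation failed part-way through (such as on VMID exhaustion). */
void p2m_teardown(struct domain *d)
{
    struct p2m_domain *p2m = &d->arch.p2m;

    if ( p2m->root == NULL )
        return;              /* nothing was allocated, nothing to free */

    /* ... free the p2m pages and release the VMID as usual ... */
}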
>>>
>>> It certainly appears that some resources are leaking when domains are
>>> deleted (possibly only on the ARM or ARM32 platforms).  We will try to
>>> add some debug prints and see if we can discover exactly what is
>>> going on.
>>
>> The leak could also come from DOM0. FWIW, I have been able to cycle
>> 2000 guests overnight on an ARM platform.
>>
>
> We've done some more testing on this issue.  It doesn't matter whether
> we delete the vchans before the domains are deleted; those appear to be
> cleaned up correctly when the domain is destroyed.
>
> What does stop the issue from happening (on the same version of Xen
> where it was detected) is removing any non-standard xenstore references
> before deleting the domain.  In this case our application grants created
> domains permissions on non-standard xenstore paths, and making sure to
> remove those per-domain permissions before deleting the domain prevents
> the issue.
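
For illustration, a minimal sketch of that grant-then-revoke pattern
using libxenstore directly; the path and permission layout here are
invented for the example and are not the application's actual ones:

#include <stdbool.h>
#include <xenstore.h>

/* Hypothetical example: give a freshly created domid read access to an
 * application-specific path, then drop that permission again before the
 * domain is destroyed so nothing referencing the domid is left behind. */
static bool grant_app_path(struct xs_handle *xs, unsigned int domid)
{
    struct xs_permissions perms[] = {
        { .id = 0,     .perms = XS_PERM_NONE },  /* owner dom0, others none */
        { .id = domid, .perms = XS_PERM_READ },  /* the new guest may read  */
    };
    return xs_set_permissions(xs, XBT_NULL, "/data/myapp/channel", perms, 2);
}

static bool revoke_app_path(struct xs_handle *xs)
{
    struct xs_permissions perms[] = {
        { .id = 0, .perms = XS_PERM_NONE },      /* back to dom0-only */
    };
    return xs_set_permissions(xs, XBT_NULL, "/data/myapp/channel", perms, 1);
}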

I am not sure I understand what you mean here. Could you give a quick
example?

>
> It does not appear to matter if we delete the standard domain xenstore
> path (/local/domain/<id>) since libxl handles removing this path when
> the domain is destroyed.
>
> Based on this I would guess that the xenstore is hanging onto the VMID.

Regards,

-- 
Julien Grall

