From: wj zhou <zhouwjenter@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: daniel.kiper@oracle.com, xen-devel@lists.xensource.com,
	anderson@redhat.com
Subject: Re: Question of xl dump-core
Date: Wed, 15 Jun 2016 09:46:45 +0800	[thread overview]
Message-ID: <CAFSuLs8dPzdE1TV5ZRLafGDfNryt1TvCBDCCV3Nxno2MCS+PGQ@mail.gmail.com> (raw)
In-Reply-To: <20160614150223.GB9456@char.us.oracle.com>

Hi,

Thanks a lot for your reply!

On Tue, Jun 14, 2016 at 11:02 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Jun 14, 2016 at 08:21:16AM +0800, wj zhou wrote:
>> Hello all,
>
> Hey,
>
> CC-ing Daniel, and Dave.
>>
>> Sorry to disturb you, but I really want to figure this out.
>> A xen core dump of a redhat 6 HVM guest with PoD cannot be analyzed with crash.
>>
>> I installed a redhat 6 HVM guest on xen 4.7.0-rc2,
>> with the memory configured as below:
>> memory=1024
>> maxmem=4096
>>
>> "xl dump-core" is executed, and the core is produced successfully.
>> I got the following message:
>> xc: info: exceeded nr_pages (261111) losing pages
>>
>> Unfortunately, I got some errors when running crash against it.
>> Below is the crash log.
>>
>> <cut>
>> crash 7.0.9-4.el7
>
> http://people.redhat.com/anderson says that the latest is 7.1.5.
> Can you try that version?
>

I have just tried the latest crash version and got the same error messages.
Since the following message was printed when the core was dumped, I think
something is wrong on the xen side:
xc: info: exceeded nr_pages (261111) losing pages
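
For what it's worth, a rough back-of-the-envelope calculation (assuming
4 KiB pages; the numbers below are my own, not taken from the guest)
suggests nr_pages corresponds to memory=1024 rather than maxmem=4096:

  echo $((1024 * 1024 / 4))   # memory=1024 MB  -> 262144 pages
  echo $((4096 * 1024 / 4))   # maxmem=4096 MB  -> 1048576 pages
  # 261111 is just below 262144; the small gap would be consistent
  # with holes/firmware regions that are not counted in nr_pages.

So my reading of the message is that the dump only budgets for roughly
the populated page count and drops ("loses") whatever it finds beyond
that when PoD has entries up to maxmem. That is only a guess, though.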

>> ...
>>
>> please wait... (gathering kmem slab cache data)
>> crash: read error: kernel virtual address: ffff88010b532e00  type:
>> "kmem_cache buffer"
>>
>> crash: unable to initialize kmem slab cache subsystem
>>
>> please wait... (gathering module symbol data)
>> crash: read error: physical address: 1058a1000  type: "page table"
>> <cut>
>>
>> I know the balloon driver is not supported on redhat 6, so PoD is not supported either.
>
> ?
>

In a redhat 6 HVM guest, there is no balloon module.
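
(For comparison between the two guests, one way to check is roughly the
following; the module name and sysfs path are assumptions on my part and
may differ depending on how the kernel was built:

  lsmod | grep -i balloon                        # balloon driver loaded as a module?
  ls /sys/devices/system/xen_memory 2>/dev/null  # sysfs node the Xen balloon driver registers

Note the driver can also be built into the kernel, in which case lsmod
alone is not conclusive.)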

>> But I wonder why the above error happens.
>> Balloon and PoD are not supported by redhat 7 either, yet the error
>> does not occur with a redhat 7 hvm.
>> I would really appreciate it if someone could help me.
>
> Well, does it work if you set maxmem != memory ?
>

Yes, it works in a redhat 7 hvm when maxmem > memory.

But I found something strange:
"free -m" behaves quite differently between a redhat 6 and a redhat 7 hvm.
In redhat 6, the total reported by "free -m" equals maxmem (4096).
In redhat 7, however, it equals memory (1024).
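
To narrow down which side is off, it may help to compare the guest's
view with what the toolstack reports for the same domain; something like
the sketch below (the domain name is just a placeholder):

  # inside the guest
  grep MemTotal /proc/meminfo    # total memory as seen by the guest kernel
  # in dom0
  xl list rhel6-hvm              # "Mem" column: memory currently allocated to the domain

If the redhat 6 guest reports the maxmem value while xl shows only the
memory= amount actually allocated, that mismatch might be related to the
pages the dump claims to be losing.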

-- 
Regards
Zhou

Thread overview: 4+ messages
2016-06-14  0:21 Question of xl dump-core wj zhou
2016-06-14 15:02 ` Konrad Rzeszutek Wilk
2016-06-15  1:46   ` wj zhou [this message]
2016-06-17 10:39     ` Daniel Kiper
