* Xen4.2-rc3 test result
@ 2012-08-31  7:27 Ren, Yongjie
  2012-08-31 15:18 ` Ian Campbell
  2012-08-31 17:24 ` Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 15+ messages in thread
From: Ren, Yongjie @ 2012-08-31  7:27 UTC (permalink / raw)
  To: 'xen-devel'
  Cc: 'Keir Fraser', 'Ian Campbell',
	'Jan Beulich', 'Konrad Rzeszutek Wilk'

Hi All,
We did a round of testing for Xen 4.2 RC3 (CS# 25784) with Linux 3.5.2 as dom0.
We found no new issues and verified 1 fixed bug.

Fixed bug (1):
1. long stop during the guest boot process with qcow image
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
  -- Fixed by reverting a bad commit about "O_DIRECT to open IDE block device".

The following are some of the old issues which we think are important:
1. Fail to probe NIC driver to HVM domU (with 3.5/3.6 Linux as Dom0)
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
  -- We have already identified the offending commit in the Linux tree. Konrad will try to fix it.
2. Poor performance when doing guest save/restore and migration with Linux 3.x dom0
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
4. After detaching a VF from a guest, shutting down the guest is very slow
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
5. Dom0 cannot be shut down before PCI detachment from a guest, and when PCI assignment conflicts
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
6. Guest hang after resuming from S3
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828
7. Dom0 S3 resume fails
  http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707

Best Regards,
     Yongjie (Jay)

* Re: Xen4.2-rc3 test result
  2012-08-31  7:27 Xen4.2-rc3 test result Ren, Yongjie
@ 2012-08-31 15:18 ` Ian Campbell
  2012-09-06  5:59   ` Ren, Yongjie
  2012-08-31 17:24 ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 15+ messages in thread
From: Ian Campbell @ 2012-08-31 15:18 UTC (permalink / raw)
  To: Ren, Yongjie
  Cc: 'Konrad Rzeszutek Wilk', Keir (Xen.org),
	'Jan Beulich', 'xen-devel'

On Fri, 2012-08-31 at 08:27 +0100, Ren, Yongjie wrote:
> 3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822

I've only just noticed that this has changed from the previous
description which was "vcpu-set doesn't take effect on guest".

Have we ever supported HVM guest CPU remove? I thought not.

http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822#c3 seems to
describe the behaviour I would expect.

If this is supposed to be an existing feature then is this a regression
with xl vs xm or from 4.1 to 4.2?

Ian.

* Re: Xen4.2-rc3 test result
  2012-08-31  7:27 Xen4.2-rc3 test result Ren, Yongjie
  2012-08-31 15:18 ` Ian Campbell
@ 2012-08-31 17:24 ` Konrad Rzeszutek Wilk
  2012-09-03  7:45   ` Jan Beulich
  2012-09-06  8:18   ` Ren, Yongjie
  1 sibling, 2 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-08-31 17:24 UTC (permalink / raw)
  To: Ren, Yongjie
  Cc: 'Konrad Rzeszutek Wilk', 'Keir Fraser',
	'Ian Campbell', 'Jan Beulich',
	'xen-devel'

On Fri, Aug 31, 2012 at 07:27:51AM +0000, Ren, Yongjie wrote:
> Hi All,
> We did a round testing for Xen 4.2 RC3 (CS# 25784) with Linux 3.5.2 dom0.
> We found no new issue, and verified 1 fixed bug.
> 
> Fixed bug (1):
> 1. long stop during the guest boot process with qcow image
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821
>   -- Fixed by reverting a bad commit about "O_DIRECT to open IDE block device".
> 
> The following are some of the old issues which we guess are something important.
> Some of the old issues:
> 1. Fail to probe NIC driver to HVM domU (with 3.5/3.6 Linux as Dom0)
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1824
>   -- We already know the corrupt commit in Linux tree. Konrad will try to fix it.
> 2. Poor performance when do guest save/restore and migration with linux 3.x dom0
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1784
> 3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
> 4. after detaching a VF from a guest, shutdown the guest is very slow
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1812
> 5. Dom0 cannot be shutdown before PCI detachment from guest and when pci assignment conflicts
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826

Um, so you are assigning the same VF to two guests. I am surprised that
the tools even allowed you to do that. Was 'xm' allowing you to do that?

> 6. Guest hang after resuming from S3
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828

Jan posted some patches to fix that. Can you test with an up-to-date
guest? (so not RHEL6U1 which does not have the fix).

> 7. Dom0 S3 resume fails
>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707

Yeah, that one is mine. Have some patches for that I will post soonish.
> 
> Best Regards,
>      Yongjie (Jay)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

* Re: Xen4.2-rc3 test result
  2012-08-31 17:24 ` Konrad Rzeszutek Wilk
@ 2012-09-03  7:45   ` Jan Beulich
  2012-09-03 10:18     ` Konrad Rzeszutek Wilk
  2012-09-06  8:18   ` Ren, Yongjie
  1 sibling, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2012-09-03  7:45 UTC (permalink / raw)
  To: Yongjie Ren, Konrad Rzeszutek Wilk
  Cc: 'Konrad Rzeszutek Wilk', 'Keir Fraser',
	'Ian Campbell', 'xen-devel'

>>> On 31.08.12 at 19:24, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> On Fri, Aug 31, 2012 at 07:27:51AM +0000, Ren, Yongjie wrote:
>> 6. Guest hang after resuming from S3
>>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828 
> 
> Jan posted some patches to fix that. Can you test with an up-to-date
> guest? (so not RHEL6U1 which does not have the fix).

Did I? I don't recall...

Jan

* Re: Xen4.2-rc3 test result
  2012-09-03  7:45   ` Jan Beulich
@ 2012-09-03 10:18     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-03 10:18 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Konrad Rzeszutek Wilk, Yongjie Ren, 'Keir Fraser',
	'Ian Campbell', 'xen-devel'

On Mon, Sep 03, 2012 at 08:45:22AM +0100, Jan Beulich wrote:
> >>> On 31.08.12 at 19:24, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> > On Fri, Aug 31, 2012 at 07:27:51AM +0000, Ren, Yongjie wrote:
> >> 6. Guest hang after resuming from S3
> >>   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828 
> > 
> > Jan posted some patches to fix that. Can you test with an up-to-date
> > guest? (so not RHEL6U1 which does not have the fix).
> 
> Did I? I don't recall..

8605067 xen-blkfront: module exit handling adjustments
e77c78c xen-blkfront: properly name all devices
569ca5b xen/gnttab: add deferred freeing logic

> 
> Jan
> 

* Re: Xen4.2-rc3 test result
  2012-08-31 15:18 ` Ian Campbell
@ 2012-09-06  5:59   ` Ren, Yongjie
  2012-09-06  7:31     ` Ian Campbell
  0 siblings, 1 reply; 15+ messages in thread
From: Ren, Yongjie @ 2012-09-06  5:59 UTC (permalink / raw)
  To: Ian Campbell
  Cc: 'Konrad Rzeszutek Wilk', Keir (Xen.org),
	'Jan Beulich', 'xen-devel'

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Friday, August 31, 2012 11:19 PM
> To: Ren, Yongjie
> Cc: 'xen-devel'; Keir (Xen.org); 'Jan Beulich'; 'Konrad Rzeszutek Wilk'
> Subject: Re: Xen4.2-rc3 test result
> 
> On Fri, 2012-08-31 at 08:27 +0100, Ren, Yongjie wrote:
> > 3. 'xl vcpu-set' can't decrease the vCPU number of a HVM guest
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822
> 
> I've only just noticed that this has changed from the previous
> description which was "vcpu-set doesn't take effect on guest".
> 
Sorry, it was me who changed its description. :-)
Increasing the vCPU number works fine, while decreasing it doesn't.
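
For reference, a rough sketch of what we test (the domain name 'hvm1' and the
numbers are just examples; the guest is an HVM guest booted with vcpus=4 and
maxvcpus=8):

  xl vcpu-set hvm1 8    # increase: the guest brings the extra vCPUs online (OK)
  xl vcpu-set hvm1 4    # decrease: the guest keeps all 8 vCPUs online (this bug)
  xl vcpu-list hvm1     # compare with what the guest itself reports in /proc/cpuinfo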

> Have we ever supported HVM guest CPU remove? I thought not.
> 
> http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822#c3 seems to
> describe the behaviour I would expect.
> 
If we don't want to support HVM guest CPU removal in the near future, I'd like to close this bug.

> If this is supposed to be an existing feature then is this a regression
> with xl vs xm or from 4.1 to 4.2?
> 
No, it's not a regression from 4.1 to 4.2.
Neither Xen 4.1 nor 4.2 supports HVM guest CPU removal, with either xm or xl.

* Re: Xen4.2-rc3 test result
  2012-09-06  5:59   ` Ren, Yongjie
@ 2012-09-06  7:31     ` Ian Campbell
  2012-09-06 11:08       ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 15+ messages in thread
From: Ian Campbell @ 2012-09-06  7:31 UTC (permalink / raw)
  To: Ren, Yongjie
  Cc: 'Konrad Rzeszutek Wilk', Keir (Xen.org),
	'Jan Beulich', 'xen-devel'

On Thu, 2012-09-06 at 06:59 +0100, Ren, Yongjie wrote:
> > Have we ever supported HVM guest CPU remove? I thought not.
> > 
> > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822#c3 seems to
> > describe the behaviour I would expect.
> > 
> If we don't want to support HVM guest CPU remove in near future, I want to close this bug.

Is this something Intel is considering working on? If so then someone
should mention it to George in the 4.3 planning thread.

> > If this is supposed to be an existing feature then is this a regression
> > with xl vs xm or from 4.1 to 4.2?
> > 
> No, it's not a regression from 4.1 to 4.2.
> Neither Xen 4.1 nor 4.2 supports HVM guest CPU remove with xm or xl.

OK, then I can remove it from the TODO list for 4.2, since it certainly
isn't happening for 4.2.0 at this stage.

Thanks for letting me know,
Ian.

* Re: Xen4.2-rc3 test result
  2012-08-31 17:24 ` Konrad Rzeszutek Wilk
  2012-09-03  7:45   ` Jan Beulich
@ 2012-09-06  8:18   ` Ren, Yongjie
  2012-09-06  8:28     ` Ian Campbell
  2012-09-06 11:11     ` Konrad Rzeszutek Wilk
  1 sibling, 2 replies; 15+ messages in thread
From: Ren, Yongjie @ 2012-09-06  8:18 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: 'Konrad Rzeszutek Wilk', 'Keir Fraser',
	'Ian Campbell', 'Jan Beulich',
	'xen-devel'

> -----Original Message-----
> From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> Konrad Rzeszutek Wilk
> Sent: Saturday, September 01, 2012 1:24 AM
> To: Ren, Yongjie
> Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> Rzeszutek Wilk'
> Subject: Re: [Xen-devel] Xen4.2-rc3 test result

> > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> when pci assignment conflicts
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> 
> Um, so you are assigning the same VF to two guests. I am surprised that
> the tools even allowed you to do that. Was 'xm' allowing you to do that?
> 
No, 'xl' doesn't allow me to do that. We can't assign a device to two different guests.
Sorry, the description of this bug was not accurate. I changed its title to "Dom0 cannot be shut down before PCI device detachment from a guest".
If a guest (with a PCI device assigned) is running, Dom0 will panic when shutting down.
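
A rough sketch of the reproduction (the BDF 04:10.0 and the config file name are
just examples):

  xl pci-assignable-add 04:10.0   # hand the VF over to pciback
  xl create hvm.cfg               # guest config contains: pci = [ '04:10.0' ]
  # leave the guest running with the device still attached, then in dom0:
  shutdown -h now                 # dom0 panics while shutting down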

> > 6. Guest hang after resuming from S3
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828
> 
> Jan posted some patches to fix that. Can you test with an up-to-date
> guest? (so not RHEL6U1 which does not have the fix).
> 
I tested a RHEL guest with Linux kernel 3.5.3, which already includes the patches from Jan that you mentioned.
They do not fix this bug.
With kernel 3.5.3 in the guest, the guest can't resume at all after running 'xl trigger $dom_ID s3resume'.
There's some info (as follows) in 'xl dmesg' when trying to resume the guest.
(XEN) HVM10: S3 resume called 00fe 0x00099180
(XEN) HVM10: S3 resume jump to 9918:0000
But the guest can't resume.
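
The test flow is roughly the following (a sketch; the domain ID comes from 'xl list'):

  # inside the guest: enter S3
  echo mem > /sys/power/state
  # in dom0: wake the guest up again
  xl trigger $dom_ID s3resume
  # watch the firmware messages in the hypervisor log
  xl dmesg | grep 'S3 resume'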

> > 7. Dom0 S3 resume fails
> >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> 
> Yeah, that one is mine. Have some patches for that I will post soonish.
> >
> > Best Regards,
> >      Yongjie (Jay)
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >

* Re: Xen4.2-rc3 test result
  2012-09-06  8:18   ` Ren, Yongjie
@ 2012-09-06  8:28     ` Ian Campbell
  2012-09-07  1:58       ` Ren, Yongjie
  2012-09-06 11:11     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 15+ messages in thread
From: Ian Campbell @ 2012-09-06  8:28 UTC (permalink / raw)
  To: Ren, Yongjie
  Cc: Konrad Rzeszutek Wilk, 'Konrad Rzeszutek Wilk',
	Keir (Xen.org), 'Jan Beulich', 'xen-devel'

On Thu, 2012-09-06 at 09:18 +0100, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> > Konrad Rzeszutek Wilk
> > Sent: Saturday, September 01, 2012 1:24 AM
> > To: Ren, Yongjie
> > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > Rzeszutek Wilk'
> > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> 
> > > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> > when pci assignment conflicts
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > 
> > Um, so you are assigning the same VF to two guests. I am surprised that
> > the tools even allowed you to do that. Was 'xm' allowing you to do that?
> > 
> No, 'xl' doesn't allow me to do that. We can't assignment a device to different guests. 
> Sorry, the description of this bug is not accurate. I changed its title to "Dom0 cannot be shut down before PCI device detachment from a guest".
> If a guest (with a PCI device assigned) is running, Dom0 will panic when shutting down.

So this is a dom0 kernel issue and not something relating to 4.2? In
which case I shall remove it from the 4.2 TODO list.

Ian.

* Re: Xen4.2-rc3 test result
  2012-09-06  7:31     ` Ian Campbell
@ 2012-09-06 11:08       ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-06 11:08 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ren, Yongjie, Keir (Xen.org), 'Jan Beulich', 'xen-devel'

On Thu, Sep 06, 2012 at 08:31:12AM +0100, Ian Campbell wrote:
> On Thu, 2012-09-06 at 06:59 +0100, Ren, Yongjie wrote:
> > > Have we ever supported HVM guest CPU remove? I thought not.
> > > 
> > > http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1822#c3 seems to
> > > describe the behaviour I would expect.
> > > 
> > If we don't want to support HVM guest CPU remove in near future, I want to close this bug.
> 
> Is this something Intel is considering working on? If so then someone
> should mention it to George in the 4.3 planning thread.
> 
> > > If this is supposed to be an existing feature then is this a regression
> > > with xl vs xm or from 4.1 to 4.2?
> > > 
> > No, it's not a regression from 4.1 to 4.2.
> > Neither Xen 4.1 nor 4.2 supports HVM guest CPU remove with xm or xl.

But I think the bug is not about 'remove' but about 'offline'.

That functionality (from a Xen toolstack perspective) works - if you
do 'xl vcpu-set' it properly tells the guest (either PV or HVM) to decrease
the count.

The problem is with the Linux kernel - and with the generic code:

https://lkml.org/lkml/2012/4/30/198

(and no, I have not had a chance to actually fix it. Looking for volunteers.)
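
For reference, the guest-side path this goes through is the generic CPU hotplug
interface - roughly, the manual equivalent is ('cpu3' just as an example):

  # inside the guest: offline / online a CPU by hand
  echo 0 > /sys/devices/system/cpu/cpu3/online
  echo 1 > /sys/devices/system/cpu/cpu3/online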

> 
> OK, then I can remove it from the TODO list for 4.2, since it certainly
> isn't happening for 4.2.0 at this stage.

Right. It's a Linux kernel issue.
> 
> Thanks for letting me know,
> Ian.
> 
> 

* Re: Xen4.2-rc3 test result
  2012-09-06  8:18   ` Ren, Yongjie
  2012-09-06  8:28     ` Ian Campbell
@ 2012-09-06 11:11     ` Konrad Rzeszutek Wilk
  2012-09-07  8:01       ` Ren, Yongjie
  1 sibling, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-06 11:11 UTC (permalink / raw)
  To: Ren, Yongjie
  Cc: Konrad Rzeszutek Wilk, 'Keir Fraser',
	'Ian Campbell', 'Jan Beulich',
	'xen-devel'

On Thu, Sep 06, 2012 at 08:18:16AM +0000, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> > Konrad Rzeszutek Wilk
> > Sent: Saturday, September 01, 2012 1:24 AM
> > To: Ren, Yongjie
> > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > Rzeszutek Wilk'
> > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> 
> > > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> > when pci assignment conflicts
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > 
> > Um, so you are assigning the same VF to two guests. I am surprised that
> > the tools even allowed you to do that. Was 'xm' allowing you to do that?
> > 
> No, 'xl' doesn't allow me to do that. We can't assignment a device to different guests. 
> Sorry, the description of this bug is not accurate. I changed its title to "Dom0 cannot be shut down before PCI device detachment from a guest".
> If a guest (with a PCI device assigned) is running, Dom0 will panic when shutting down.

And does it panic if you use the 'irqpoll' option it asks for?
Xen-pciback has no involvement, as:


[  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised != Connected, skipping

and the guest still keeps on getting interrupts.

What is the stack at the hang? 
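
To be clear, 'irqpoll' is a Linux dom0 kernel command-line option, not a Xen one.
A quick sanity check after rebooting with it appended to the dom0 kernel line in
grub (just a sketch):

  grep -o irqpoll /proc/cmdline    # run in dom0; should print 'irqpoll'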
> 
> > > 6. Guest hang after resuming from S3
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1828
> > 
> > Jan posted some patches to fix that. Can you test with an up-to-date
> > guest? (so not RHEL6U1 which does not have the fix).
> > 
> I tested a RHEL guest with Linux kernel 3.5.3 which already includes Jan's patches you mentioned.
> It will not fix this bug.
> If using kernel 3.5.3 in guest, the guest totally can't resume after running ' xl trigger $dom_ID s3resume'.
> There's some info (as following) in 'xl dmesg' when trying to resume the guest.
> (XEN) HVM10: S3 resume called 00fe 0x00099180
> (XEN) HVM10: S3 resume jump to 9918:0000
> But the guest can't resume.
> 
> > > 7. Dom0 S3 resume fails
> > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1707
> > 
> > Yeah, that one is mine. Have some patches for that I will post soonish.
> > >
> > > Best Regards,
> > >      Yongjie (Jay)
> > >
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > >

* Re: Xen4.2-rc3 test result
  2012-09-06  8:28     ` Ian Campbell
@ 2012-09-07  1:58       ` Ren, Yongjie
  0 siblings, 0 replies; 15+ messages in thread
From: Ren, Yongjie @ 2012-09-07  1:58 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Konrad Rzeszutek Wilk, 'Konrad Rzeszutek Wilk',
	Keir (Xen.org), 'Jan Beulich', 'xen-devel'

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Thursday, September 06, 2012 4:29 PM
> To: Ren, Yongjie
> Cc: Konrad Rzeszutek Wilk; 'xen-devel'; Keir (Xen.org); 'Jan Beulich';
> 'Konrad Rzeszutek Wilk'
> Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> 
> On Thu, 2012-09-06 at 09:18 +0100, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> > > Konrad Rzeszutek Wilk
> > > Sent: Saturday, September 01, 2012 1:24 AM
> > > To: Ren, Yongjie
> > > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > > Rzeszutek Wilk'
> > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> >
> > > > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> > > when pci assignment conflicts
> > > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > >
> > > Um, so you are assigning the same VF to two guests. I am surprised
> that
> > > the tools even allowed you to do that. Was 'xm' allowing you to do
> that?
> > >
> > No, 'xl' doesn't allow me to do that. We can't assignment a device to
> different guests.
> > Sorry, the description of this bug is not accurate. I changed its title to
> "Dom0 cannot be shut down before PCI device detachment from a guest".
> > If a guest (with a PCI device assigned) is running, Dom0 will panic when
> shutting down.
> 
> So this is a dom0 kernel issue and not something relating to 4.2? In
> which case I shall remove it from the 4.2 TODO list.
> 
I don't think so.
In my testing, it should be a regression from Xen 4.1 to 4.2:
Xen 4.1 (CS# 22972) + Dom0 (kernel 3.5.3) = good
Xen 4.2 (CS# 25791) + Dom0 (kernel 3.5.3) = bad (it has this issue)

* Re: Xen4.2-rc3 test result
  2012-09-06 11:11     ` Konrad Rzeszutek Wilk
@ 2012-09-07  8:01       ` Ren, Yongjie
  2012-09-07 13:55         ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 15+ messages in thread
From: Ren, Yongjie @ 2012-09-07  8:01 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Konrad Rzeszutek Wilk, 'Keir Fraser',
	'Ian Campbell', 'Jan Beulich',
	'xen-devel'

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Thursday, September 06, 2012 7:12 PM
> To: Ren, Yongjie
> Cc: Konrad Rzeszutek Wilk; 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan
> Beulich'
> Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> 
> On Thu, Sep 06, 2012 at 08:18:16AM +0000, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> > > Konrad Rzeszutek Wilk
> > > Sent: Saturday, September 01, 2012 1:24 AM
> > > To: Ren, Yongjie
> > > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > > Rzeszutek Wilk'
> > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> >
> > > > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> > > when pci assignment conflicts
> > > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > >
> > > Um, so you are assigning the same VF to two guests. I am surprised
> that
> > > the tools even allowed you to do that. Was 'xm' allowing you to do
> that?
> > >
> > No, 'xl' doesn't allow me to do that. We can't assignment a device to
> different guests.
> > Sorry, the description of this bug is not accurate. I changed its title to
> "Dom0 cannot be shut down before PCI device detachment from a guest".
> > If a guest (with a PCI device assigned) is running, Dom0 will panic when
> shutting down.
> 
> And does it panic if you use the 'irqpoll' option it asks for?
>
Adding 'irqpoll' makes no difference.

It should be a regression in Xen from 4.1 to 4.2.
I didn't hit this issue with 4.1 Xen and 3.5.3 Dom0.

> Xen-pciback has no involvment as:
> 
> 
> [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised !=
> Connected, skipping
> 
> and the guest still keeps on getting interrupts.
> 
> What is the stack at the hang?
>
[  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised != Connected,
skipping
[  283.747505] xenbus_dev_shutdown: backend/vkbd/2/0: Initialising !=
Connected, skipping
[  283.747515] xenbus_dev_shutdown: backend/console/2/0: Initialising !=
Connected, skipping
[  380.236571] irq 16: nobody cared (try booting with the "irqpoll" option)
[  380.236588] Pid: 0, comm: swapper/0 Not tainted 3.4.4 #1
[  380.236596] Call Trace:
[  380.236601]  <IRQ>  [<ffffffff8110b538>] __report_bad_irq+0x38/0xd0
[  380.236626]  [<ffffffff8110b72c>] note_interrupt+0x15c/0x210
[  380.236637]  [<ffffffff81108ffa>] handle_irq_event_percpu+0xca/0x230
[  380.236648]  [<ffffffff811091b6>] handle_irq_event+0x56/0x90
[  380.236658]  [<ffffffff8110bde3>] handle_fasteoi_irq+0x63/0x120
[  380.236671]  [<ffffffff8131c8f1>] __xen_evtchn_do_upcall+0x1b1/0x280
[  380.236703]  [<ffffffff8131d51a>] xen_evtchn_do_upcall+0x2a/0x40
[  380.236716]  [<ffffffff816bc36e>] xen_do_hypervisor_callback+0x1e/0x30
[  380.236723]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[  380.236742]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
[  380.236754]  [<ffffffff810538d0>] ? xen_safe_halt+0x10/0x20
[  380.236768]  [<ffffffff81067c9a>] ? default_idle+0x6a/0x1d0
[  380.236778]  [<ffffffff81067256>] ? cpu_idle+0x96/0xf0
[  380.236789]  [<ffffffff81689a18>] ? rest_init+0x68/0x70
[  380.236800]  [<ffffffff81ca9e33>] ? start_kernel+0x407/0x414
[  380.236810]  [<ffffffff81ca984a>] ? kernel_init+0x1e1/0x1e1
[  380.236821]  [<ffffffff81ca9346>] ? x86_64_start_reservations+0x131/0x136
[  380.236833]  [<ffffffff81cade7a>] ? xen_start_kernel+0x621/0x628
[  380.236841] handlers:
[  380.236850] [<ffffffff81428d80>] usb_hcd_irq
[  380.236860] Disabling IRQ #16

* Re: Xen4.2-rc3 test result
  2012-09-07  8:01       ` Ren, Yongjie
@ 2012-09-07 13:55         ` Konrad Rzeszutek Wilk
  2012-09-25  5:57           ` Ren, Yongjie
  0 siblings, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-07 13:55 UTC (permalink / raw)
  To: Ren, Yongjie
  Cc: Konrad Rzeszutek Wilk, 'Keir Fraser',
	'Ian Campbell', 'Jan Beulich',
	'xen-devel'

On Fri, Sep 07, 2012 at 08:01:37AM +0000, Ren, Yongjie wrote:
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Thursday, September 06, 2012 7:12 PM
> > To: Ren, Yongjie
> > Cc: Konrad Rzeszutek Wilk; 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan
> > Beulich'
> > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> > 
> > On Thu, Sep 06, 2012 at 08:18:16AM +0000, Ren, Yongjie wrote:
> > > > -----Original Message-----
> > > > From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf Of
> > > > Konrad Rzeszutek Wilk
> > > > Sent: Saturday, September 01, 2012 1:24 AM
> > > > To: Ren, Yongjie
> > > > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > > > Rzeszutek Wilk'
> > > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> > >
> > > > > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> > > > when pci assignment conflicts
> > > > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > > >
> > > > Um, so you are assigning the same VF to two guests. I am surprised
> > that
> > > > the tools even allowed you to do that. Was 'xm' allowing you to do
> > that?
> > > >
> > > No, 'xl' doesn't allow me to do that. We can't assignment a device to
> > different guests.
> > > Sorry, the description of this bug is not accurate. I changed its title to
> > "Dom0 cannot be shut down before PCI device detachment from a guest".
> > > If a guest (with a PCI device assigned) is running, Dom0 will panic when
> > shutting down.
> > 
> > And does it panic if you use the 'irqpoll' option it asks for?
> >
> Adding 'irqpoll' makes no change.
> 
> It should be a regression for Xen from 4.1 to 4.2.
> I didn't meet this issue with 4.2 Xen and 3.5.3 Dom0.

Aaah. That was not clear from the bugzilla.

> 
> > Xen-pciback has no involvment as:
> > 
> > 
> > [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised !=
> > Connected, skipping
> > 
> > and the guest still keeps on getting interrupts.
> > 
> > What is the stack at the hang?
> >
> [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised != Connected,
> skipping
> [  283.747505] xenbus_dev_shutdown: backend/vkbd/2/0: Initialising !=
> Connected, skipping
> [  283.747515] xenbus_dev_shutdown: backend/console/2/0: Initialising !=
> Connected, skipping
> [  380.236571] irq 16: nobody cared (try booting with the "irqpoll" option)
> [  380.236588] Pid: 0, comm: swapper/0 Not tainted 3.4.4 #1
> [  380.236596] Call Trace:
> [  380.236601]  <IRQ>  [<ffffffff8110b538>] __report_bad_irq+0x38/0xd0
> [  380.236626]  [<ffffffff8110b72c>] note_interrupt+0x15c/0x210
> [  380.236637]  [<ffffffff81108ffa>] handle_irq_event_percpu+0xca/0x230
> [  380.236648]  [<ffffffff811091b6>] handle_irq_event+0x56/0x90
> [  380.236658]  [<ffffffff8110bde3>] handle_fasteoi_irq+0x63/0x120
> [  380.236671]  [<ffffffff8131c8f1>] __xen_evtchn_do_upcall+0x1b1/0x280
> [  380.236703]  [<ffffffff8131d51a>] xen_evtchn_do_upcall+0x2a/0x40
> [  380.236716]  [<ffffffff816bc36e>] xen_do_hypervisor_callback+0x1e/0x30
> [  380.236723]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
> [  380.236742]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
> [  380.236754]  [<ffffffff810538d0>] ? xen_safe_halt+0x10/0x20
> [  380.236768]  [<ffffffff81067c9a>] ? default_idle+0x6a/0x1d0
> [  380.236778]  [<ffffffff81067256>] ? cpu_idle+0x96/0xf0
> [  380.236789]  [<ffffffff81689a18>] ? rest_init+0x68/0x70
> [  380.236800]  [<ffffffff81ca9e33>] ? start_kernel+0x407/0x414
> [  380.236810]  [<ffffffff81ca984a>] ? kernel_init+0x1e1/0x1e1
> [  380.236821]  [<ffffffff81ca9346>] ? x86_64_start_reservations+0x131/0x136
> [  380.236833]  [<ffffffff81cade7a>] ? xen_start_kernel+0x621/0x628
> [  380.236841] handlers:
> [  380.236850] [<ffffffff81428d80>] usb_hcd_irq
> [  380.236860] Disabling IRQ #16

That is not the hang stack. That is the kernel telling you that something
has gone astray with an interrupt. But it's unclear what happened _after_ that.

It might be that the system called the proper shutdown hypercall and it's
waiting for the hypervisor to do its stuff. Can you try using the 'q' key to get
a stack dump of dom0 and see where it's spinning/sitting, please?
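
If the serial console is not handy, the same dump can be requested from dom0 -
a sketch:

  xl debug-keys q    # ask Xen to dump domain and vCPU state
  xl dmesg           # the output ends up in the hypervisor console ring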
> 

* Re: Xen4.2-rc3 test result
  2012-09-07 13:55         ` Konrad Rzeszutek Wilk
@ 2012-09-25  5:57           ` Ren, Yongjie
  0 siblings, 0 replies; 15+ messages in thread
From: Ren, Yongjie @ 2012-09-25  5:57 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Konrad Rzeszutek Wilk, 'Keir Fraser',
	'Ian Campbell', 'Jan Beulich',
	'xen-devel'

> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, September 07, 2012 9:56 PM
> To: Ren, Yongjie
> Cc: Konrad Rzeszutek Wilk; 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan
> Beulich'
> Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> 
> On Fri, Sep 07, 2012 at 08:01:37AM +0000, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > Sent: Thursday, September 06, 2012 7:12 PM
> > > To: Ren, Yongjie
> > > Cc: Konrad Rzeszutek Wilk; 'xen-devel'; 'Keir Fraser'; 'Ian Campbell';
> 'Jan
> > > Beulich'
> > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> > >
> > > On Thu, Sep 06, 2012 at 08:18:16AM +0000, Ren, Yongjie wrote:
> > > > > -----Original Message-----
> > > > > From: Konrad Rzeszutek [mailto:ketuzsezr@gmail.com] On Behalf
> Of
> > > > > Konrad Rzeszutek Wilk
> > > > > Sent: Saturday, September 01, 2012 1:24 AM
> > > > > To: Ren, Yongjie
> > > > > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > > > > Rzeszutek Wilk'
> > > > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> > > >
> > > > > > 5. Dom0 cannot be shutdown before PCI detachment from guest
> and
> > > > > when pci assignment conflicts
> > > > > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > > > >
> > > > > Um, so you are assigning the same VF to two guests. I am surprised
> > > that
> > > > > the tools even allowed you to do that. Was 'xm' allowing you to do
> > > that?
> > > > >
> > > > No, 'xl' doesn't allow me to do that. We can't assignment a device to
> > > different guests.
> > > > Sorry, the description of this bug is not accurate. I changed its title to
> > > "Dom0 cannot be shut down before PCI device detachment from a
> guest".
> > > > If a guest (with a PCI device assigned) is running, Dom0 will panic
> when
> > > shutting down.
> > >
> > > And does it panic if you use the 'irqpoll' option it asks for?
> > >
> > Adding 'irqpoll' makes no change.
> >
> > It should be a regression for Xen from 4.1 to 4.2.
> > I didn't meet this issue with 4.1 Xen and 3.5.3 Dom0.
> 
> Aaah. That was not clear from the bugzilla.
> 
> >
> > > Xen-pciback has no involvment as:
> > >
> > >
> > > [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised !=
> > > Connected, skipping
> > >
> > > and the guest still keeps on getting interrupts.
> > >
> > > What is the stack at the hang?
> > >
> > [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised !=
> Connected,
> > skipping
> > [  283.747505] xenbus_dev_shutdown: backend/vkbd/2/0: Initialising !=
> > Connected, skipping
> > [  283.747515] xenbus_dev_shutdown: backend/console/2/0:
> Initialising !=
> > Connected, skipping
> > [  380.236571] irq 16: nobody cared (try booting with the "irqpoll"
> option)
> > [  380.236588] Pid: 0, comm: swapper/0 Not tainted 3.4.4 #1
> > [  380.236596] Call Trace:
> > [  380.236601]  <IRQ>  [<ffffffff8110b538>]
> __report_bad_irq+0x38/0xd0
> > [  380.236626]  [<ffffffff8110b72c>] note_interrupt+0x15c/0x210
> > [  380.236637]  [<ffffffff81108ffa>]
> handle_irq_event_percpu+0xca/0x230
> > [  380.236648]  [<ffffffff811091b6>] handle_irq_event+0x56/0x90
> > [  380.236658]  [<ffffffff8110bde3>] handle_fasteoi_irq+0x63/0x120
> > [  380.236671]  [<ffffffff8131c8f1>]
> __xen_evtchn_do_upcall+0x1b1/0x280
> > [  380.236703]  [<ffffffff8131d51a>] xen_evtchn_do_upcall+0x2a/0x40
> > [  380.236716]  [<ffffffff816bc36e>]
> xen_do_hypervisor_callback+0x1e/0x30
> > [  380.236723]  <EOI>  [<ffffffff810013aa>] ?
> hypercall_page+0x3aa/0x1000
> > [  380.236742]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
> > [  380.236754]  [<ffffffff810538d0>] ? xen_safe_halt+0x10/0x20
> > [  380.236768]  [<ffffffff81067c9a>] ? default_idle+0x6a/0x1d0
> > [  380.236778]  [<ffffffff81067256>] ? cpu_idle+0x96/0xf0
> > [  380.236789]  [<ffffffff81689a18>] ? rest_init+0x68/0x70
> > [  380.236800]  [<ffffffff81ca9e33>] ? start_kernel+0x407/0x414
> > [  380.236810]  [<ffffffff81ca984a>] ? kernel_init+0x1e1/0x1e1
> > [  380.236821]  [<ffffffff81ca9346>] ?
> x86_64_start_reservations+0x131/0x136
> > [  380.236833]  [<ffffffff81cade7a>] ? xen_start_kernel+0x621/0x628
> > [  380.236841] handlers:
> > [  380.236850] [<ffffffff81428d80>] usb_hcd_irq
> > [  380.236860] Disabling IRQ #16
> 
> That is not the hang stack. That is the kernel telling you that something
> has gone astray with an interrupt. But its unclear what happend _after_
> that.
> 
> It might be that the system called the proper shutdown hypercall and its
> waiting for the hypervisor to do its stuff. Can you try using the 'q' to get
> a stack dump of the dom0 and see where its spinning/sitting please?
> 
(XEN) 'q' pressed -> dumping domain info (now=0x43F:E9B06F08)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=1048576 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={1-24,26-30} max_pages=4294967295
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-407, 40c-cfb, d00-ffff }
(XEN)     Interrupts { 0-315, 317-327 }
(XEN)     I/O Memory { 0-febff, fec01-fec3e, fec40-fec7e, fec80-fedff, fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 000000000043c8ad: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000043c8ac: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043c8ab: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043c8aa: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bb48e: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 0000000000418d84: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU14 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={14} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU1: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={18} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU15 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={15} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU24 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={24} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU4: CPU27 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={27} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU5: CPU30 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={30} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU6: CPU24 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU7: CPU11 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={11} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU8: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={19} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU9: CPU23 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU10: CPU28 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={28} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU11: CPU29 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={29} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU12: CPU10 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={10} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU13: CPU17 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={17} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU14: CPU8 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={8} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU15: CPU22 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={22} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU16: CPU7 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={7} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU17: CPU16 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={16} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU18: CPU26 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={26} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU19: CPU13 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={13} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU20: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU21: CPU12 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={12} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU22: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU23: CPU23 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={23} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU24: CPU20 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={20} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU25: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU26: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU27: CPU9 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={9} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU28: CPU5 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={5} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU29: CPU4 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={4} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU30: CPU21 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={21} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU31: CPU6 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={6} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) General information for domain 1:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=1047549 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=1048832
(XEN)     handle=dd1ee134-05ef-4a65-a2fe-163bce610687 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external 
(XEN) Rangesets belonging to domain 1:
(XEN)     I/O Ports  { }
(XEN)     Interrupts { 101-103 }
(XEN)     I/O Memory { ebb41-ebb43, ebb60-ebb63 }
(XEN) Memory pages belonging to domain 1:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 000000000044000e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000044000d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000044000c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004a04f7: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bd4f8: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004a0400: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 1:
(XEN)     VCPU0: CPU1 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=4
(XEN)     paging assistance: hap, 4 levels
(XEN)     No periodic timer
(XEN)     VCPU1: CPU10 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU2: CPU11 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU3: CPU12 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU4: CPU13 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU5: CPU14 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU6: CPU15 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU7: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU8: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU9: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU10: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU11: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU12: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU13: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU14: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU15: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN) General information for domain 2:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=523261 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={31} max_pages=524544
(XEN)     handle=b10a1595-a5b0-46e2-a67a-3c77c4755761 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external 
(XEN) Rangesets belonging to domain 2:
(XEN)     I/O Ports  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN) Memory pages belonging to domain 2:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 00000000004b44f3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b44f2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b44f1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b44f0: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bb08f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b27a7: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 2:
(XEN)     VCPU0: CPU31 [has=T] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={31} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=0
(XEN)     paging assistance: hap, 4 levels
(XEN)     No periodic timer
(XEN)     VCPU1: CPU16 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=4
(XEN)     paging assistance: hap, 4 levels
(XEN)     No periodic timer
(XEN)     VCPU2: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU3: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU4: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU5: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU6: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU7: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU8: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU9: CPU16 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU10: CPU17 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU11: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU12: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU13: CPU20 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU14: CPU21 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU15: CPU22 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/0)
(XEN) Notifying guest 0:1 (virq 1, port 12, stat 0/0/0)
[ 4672.954495] (XEN) Notifying guest 0:2 (virq 1, port 19, stat 0/0/0)

vcpu 0
  (XEN) Notifying guest 0:3 (virq 1, port 26, stat 0/0/0)
0: masked=0 pend(XEN) Notifying guest 0:4 (virq 1, port 33, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:5 (virq 1, port 40, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:6 (virq 1, port 47, stat 0/0/0)

  (XEN) Notifying guest 0:7 (virq 1, port 54, stat 0/0/0)
1: masked=1 pend(XEN) Notifying guest 0:8 (virq 1, port 61, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:9 (virq 1, port 68, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:10 (virq 1, port 75, stat 0/0/0)

  (XEN) Notifying guest 0:11 (virq 1, port 82, stat 0/0/0)
2: masked=1 pend(XEN) Notifying guest 0:12 (virq 1, port 89, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:13 (virq 1, port 96, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:14 (virq 1, port 103, stat 0/0/0)

  (XEN) Notifying guest 0:15 (virq 1, port 110, stat 0/0/0)
3: masked=1 pend(XEN) Notifying guest 0:16 (virq 1, port 117, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:17 (virq 1, port 124, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:18 (virq 1, port 131, stat 0/0/0)

  (XEN) Notifying guest 0:19 (virq 1, port 138, stat 0/0/0)
4: masked=1 pend(XEN) Notifying guest 0:20 (virq 1, port 145, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:21 (virq 1, port 152, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:22 (virq 1, port 159, stat 0/0/0)

  (XEN) Notifying guest 0:23 (virq 1, port 166, stat 0/0/0)
5: masked=1 pend(XEN) Notifying guest 0:24 (virq 1, port 173, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:25 (virq 1, port 180, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:26 (virq 1, port 187, stat 0/0/0)

  (XEN) Notifying guest 0:27 (virq 1, port 194, stat 0/0/0)
6: masked=1 pend(XEN) Notifying guest 0:28 (virq 1, port 201, stat 0/0/0)
ing=0 event_sel (XEN) Notifying guest 0:29 (virq 1, port 208, stat 0/0/0)
0000000000000000(XEN) Notifying guest 0:30 (virq 1, port 215, stat 0/0/0)

  (XEN) Notifying guest 0:31 (virq 1, port 222, stat 0/0/0)
7: masked=1 pend(XEN) Notifying guest 1:0 (virq 1, port 0, stat 0/-1/-1)
ing=0 event_sel (XEN) Notifying guest 1:1 (virq 1, port 0, stat 0/-1/0)
0000000000000000(XEN) Notifying guest 1:2 (virq 1, port 0, stat 0/-1/0)

  (XEN) Notifying guest 1:3 (virq 1, port 0, stat 0/-1/0)
8: masked=1 pend(XEN) Notifying guest 1:4 (virq 1, port 0, stat 0/-1/0)
ing=0 event_sel (XEN) Notifying guest 1:5 (virq 1, port 0, stat 0/-1/0)
0000000000000000(XEN) Notifying guest 1:6 (virq 1, port 0, stat 0/-1/0)

  (XEN) Notifying guest 1:7 (virq 1, port 0, stat 0/-1/0)
9: masked=1 pend(XEN) Notifying guest 1:8 (virq 1, port 0, stat 0/-1/0)
ing=1 event_sel (XEN) Notifying guest 1:9 (virq 1, port 0, stat 0/-1/0)
0000000000000002(XEN) Notifying guest 1:10 (virq 1, port 0, stat 0/-1/0)

  (XEN) Notifying guest 1:11 (virq 1, port 0, stat 0/-1/0)
10: masked=1 pen(XEN) Notifying guest 1:12 (virq 1, port 0, stat 0/-1/0)
ding=0 event_sel(XEN) Notifying guest 1:13 (virq 1, port 0, stat 0/-1/0)
 000000000000000(XEN) Notifying guest 1:14 (virq 1, port 0, stat 0/-1/0)
0
  (XEN) Notifying guest 1:15 (virq 1, port 0, stat 0/-1/0)
11: masked=1 pen(XEN) Notifying guest 2:0 (virq 1, port 0, stat 0/-1/0)
ding=0 event_sel(XEN) Notifying guest 2:1 (virq 1, port 0, stat 0/-1/0)
 000000000000000(XEN) Notifying guest 2:2 (virq 1, port 0, stat 0/-1/0)
0
  (XEN) Notifying guest 2:3 (virq 1, port 0, stat 0/-1/0)
12: masked=1 pen(XEN) Notifying guest 2:4 (virq 1, port 0, stat 0/-1/0)
ding=0 event_sel(XEN) Notifying guest 2:5 (virq 1, port 0, stat 0/-1/0)
 000000000000000(XEN) Notifying guest 2:6 (virq 1, port 0, stat 0/-1/0)
0
  (XEN) Notifying guest 2:7 (virq 1, port 0, stat 0/-1/0)
13: masked=1 pen(XEN) Notifying guest 2:8 (virq 1, port 0, stat 0/-1/0)
ding=0 event_sel(XEN) Notifying guest 2:9 (virq 1, port 0, stat 0/-1/0)
 000000000000000(XEN) Notifying guest 2:10 (virq 1, port 0, stat 0/-1/0)
0
  (XEN) Notifying guest 2:11 (virq 1, port 0, stat 0/-1/0)
14: masked=1 pen(XEN) Notifying guest 2:12 (virq 1, port 0, stat 0/-1/0)
ding=0 event_sel(XEN) Notifying guest 2:13 (virq 1, port 0, stat 0/-1/0)
 000000000000000(XEN) Notifying guest 2:14 (virq 1, port 0, stat 0/-1/0)
0
  (XEN) Notifying guest 2:15 (virq 1, port 0, stat 0/-1/0)
15: masked=1 pen(XEN) Shared frames 0 -- Saved frames 0
ding=(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
