* Xen4.2 S3 regression?
@ 2012-08-07 15:04 Ben Guthro
  2012-08-07 16:21 ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-07 15:04 UTC (permalink / raw)
  To: xen-devel

I have been doing some experiments in upgrading the Xen version in a
future version of XenClient Enterprise, and I've been running into a
regression that I'm wondering if anyone else has seen.

dom0 suspend/resume (S3) does not seem to be working for me.

In swapping out components of the system, the common factor in the failure seems
to be Xen-4.2 (upgraded from Xen-4.0.3).

The first suspend seems to mostly work...but subsequent ones always
resume improperly.
By "improperly" - I see I/O failures, and stalls of many processes.
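A soak loop makes this kind of "first resume OK, later ones fail" pattern easier to reproduce. The sketch below is an assumption-laden helper, not part of the original report: it assumes rtcwake(8) is available in dom0, and the failure signatures it greps for (`revalidation failed`, `end_request: I/O error`, `failed to resume`) are taken from the log excerpt below; the function names are hypothetical.

```shell
#!/bin/sh
# Hypothetical S3 soak helpers for reproducing the resume failure.

# Return 0 if the given log text contains any of the post-resume
# failure signatures seen in this thread's dmesg excerpt.
has_s3_failure() {
    printf '%s\n' "$1" |
        grep -E -q 'revalidation failed|end_request: I/O error|failed to resume'
}

# One suspend/resume cycle: sleep 20s via the RTC alarm, then scan
# the tail of dmesg for failure signatures. Run this in a loop to
# catch the second-and-later resume failures described above.
s3_cycle() {
    rtcwake -m mem -s 20 || return 1
    sleep 5
    if has_s3_failure "$(dmesg | tail -n 200)"; then
        echo "S3 resume failure detected"
        return 1
    fi
    return 0
}
```

Something like `while s3_cycle; do :; done` then stops on the first bad resume, leaving the offending dmesg tail on screen.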

Below is a log excerpt of 2 S3 attempts.


Has anyone else seen these failures?

- Ben


(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Breaking vcpu affinity for domain 0 vcpu 1
(XEN) Breaking vcpu affinity for domain 0 vcpu 2
(XEN) Breaking vcpu affinity for domain 0 vcpu 3
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
[   66.508829] ata3.00: revalidation failed (errno=-5)
[   66.508861] ata1.00: revalidation failed (errno=-5)
[   76.858815] ata3.00: revalidation failed (errno=-5)
[   76.898807] ata1.00: revalidation failed (errno=-5)
[  107.208817] ata3.00: revalidation failed (errno=-5)
[  107.288807] ata1.00: revalidation failed (errno=-5)
[  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
[  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
[  107.718913] end_request: I/O error, dev sda, sector 35193296
[  107.718919] Buffer I/O error on device dm-5, logical block 7690
[  107.718947] end_request: I/O error, dev sda, sector 35657184
[  107.718965] end_request: I/O error, dev sda, sector 246202760
[  107.718968] Buffer I/O error on device dm-6, logical block 26252801
[  107.718995] end_request: I/O error, dev sda, sector 254548368
[  107.719009] Aborting journal on device dm-6-8.
[  107.719021] end_request: I/O error, dev sda, sector 35164192
[  107.719023] Buffer I/O error on device dm-5, logical block 4052
[  107.719063] Aborting journal on device dm-5-8.
[  107.719085] end_request: I/O error, dev sda, sector 254546304
[  107.719097] Buffer I/O error on device dm-6, logical block 27295744
[  107.719129] JBD2: I/O error detected when updating journal
superblock for dm-6-8.
[  107.719141] end_request: I/O error, dev sda, sector 35656064
[  107.719146] Buffer I/O error on device dm-5, logical block 65536
[  107.719168] JBD2: I/O error detected when updating journal
superblock for dm-5-8.
[  107.870082] end_request: I/O error, dev sda, sector 35131776
[  107.875825] Buffer I/O error on device dm-5, logical block 0
[  107.881805] end_request: I/O error, dev sda, sector 35131776
[  107.887637] Buffer I/O error on device dm-5, logical block 0
[  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
[  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
[  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
Detected aborted journal
[  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
[  107.893617] end_request: I/O error, dev sda, sector 35131776
[  107.893620] Buffer I/O error on device dm-5, logical block 0
[  107.893749] end_request: I/O error, dev sda, sector 36180352
[  107.893752] Buffer I/O error on device dm-6, logical block 0
[  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
Detected aborted journal
[  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
[  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
[  107.893784] end_request: I/O error, dev sda, sector 36180352
[  107.893787] Buffer I/O error on device dm-6, logical block 0
[  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
Detected aborted journal
[  108.669763] end_request: I/O error, dev sda, sector 25957784
[  108.675555] Aborting journal on device dm-3-8.
[  108.680246] end_request: I/O error, dev sda, sector 25956736
[  108.686099] JBD2: I/O error detected when updating journal
superblock for dm-3-8.
[  108.693908] journal commit I/O error
[  108.755829] end_request: I/O error, dev sda, sector 17305984
[  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
Detected aborted journal
[  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
[  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
[  108.782904] end_request: I/O error, dev sda, sector 17305984
[  109.660011] end_request: I/O error, dev sda, sector 358788
[  109.665572] Buffer I/O error on device dm-1, logical block 46082
[  109.682479] end_request: I/O error, dev sda, sector 18832256
[  109.688246] end_request: I/O error, dev sda, sector 18832256
[  109.709559] end_request: I/O error, dev sda, sector 357762
[  109.715120] Buffer I/O error on device dm-1, logical block 45569
[  109.721506] end_request: I/O error, dev sda, sector 358790
[  109.727114] Buffer I/O error on device dm-1, logical block 46083
[  109.743714] end_request: I/O error, dev sda, sector 18832256
[  109.755555] end_request: I/O error, dev sda, sector 18832256
[  109.886187] end_request: I/O error, dev sda, sector 357764
[  109.891756] Buffer I/O error on device dm-1, logical block 45570
[  109.908344] end_request: I/O error, dev sda, sector 18832256
[  109.928369] end_request: I/O error, dev sda, sector 349574
[  109.933938] Buffer I/O error on device dm-1, logical block 41475
[  109.950336] end_request: I/O error, dev sda, sector 18832256
[  115.378875] end_request: I/O error, dev sda, sector 365000
[  115.384445] Aborting journal on device dm-1-8.
[  115.389120] end_request: I/O error, dev sda, sector 364930
[  115.394798] Buffer I/O error on device dm-1, logical block 49153
[  115.401101] JBD2: I/O error detected when updating journal
superblock for dm-1-8.
[  207.207426] end_request: I/O error, dev sda, sector 246192376
[  207.213313] end_request: I/O error, dev sda, sector 246192376
[  207.903181] end_request: I/O error, dev sda, sector 246192376
[  209.234399] end_request: I/O error, dev sda, sector 18518400
[  209.240221] end_request: I/O error, dev sda, sector 18518400

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-08-07 15:04 Xen4.2 S3 regression? Ben Guthro
@ 2012-08-07 16:21 ` Ben Guthro
  2012-08-07 16:33   ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-07 16:21 UTC (permalink / raw)
  To: xen-devel

It looks like this regression may be related to MSI handling.

"pci=nomsi" on the kernel command line seems to bypass the issue.

Clearly, legacy interrupts are not ideal.
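For anyone wanting to pin the workaround while the root cause is investigated, a minimal sketch of persisting `pci=nomsi` on a GRUB 2 system follows. The file path, variable name, and the `update-grub` step are distro-dependent assumptions (and on a Xen system the dom0 kernel line may live in the Xen menu entry instead); `append_nomsi` is a hypothetical helper.

```shell
#!/bin/sh
# Sketch: append pci=nomsi to the dom0 kernel command line in a
# GRUB 2 defaults file read on stdin. Does not check whether the
# option is already present.
append_nomsi() {
    sed -E 's/^(GRUB_CMDLINE_LINUX=")([^"]*)"/\1\2 pci=nomsi"/'
}

# Typical (assumed) usage:
#   append_nomsi < /etc/default/grub > /tmp/grub.new \
#       && cp /tmp/grub.new /etc/default/grub && update-grub
```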


On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
> I have been doing some experiments in upgrading the Xen version in a
> future version of XenClient Enterprise, and I've been running into a
> regression that I'm wondering if anyone else has seen.
>
> dom0 suspend/resume (S3) does not seem to be working for me.
>
> In swapping out components of the system, the common failure seems to
> be when I use Xen-4.2 (upgraded from Xen-4.0.3)
>
> The first suspend seems to mostly work...but subsequent ones always
> resume improperly.
> By "improperly" - I see I/O failures, and stalls of many processes.
>
> Below is a log excerpt of 2 S3 attempts.
>
>
> Has anyone else seen these failures?
>
> - Ben
>
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Breaking vcpu affinity for domain 0 vcpu 1
> (XEN) Breaking vcpu affinity for domain 0 vcpu 2
> (XEN) Breaking vcpu affinity for domain 0 vcpu 3
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> 0 extended MCE MSR 0
> (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) Enabling non-boot CPUs  ...
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> 0 extended MCE MSR 0
> (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) Enabling non-boot CPUs  ...
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> [   66.508829] ata3.00: revalidation failed (errno=-5)
> [   66.508861] ata1.00: revalidation failed (errno=-5)
> [   76.858815] ata3.00: revalidation failed (errno=-5)
> [   76.898807] ata1.00: revalidation failed (errno=-5)
> [  107.208817] ata3.00: revalidation failed (errno=-5)
> [  107.288807] ata1.00: revalidation failed (errno=-5)
> [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
> [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
> [  107.718913] end_request: I/O error, dev sda, sector 35193296
> [  107.718919] Buffer I/O error on device dm-5, logical block 7690
> [  107.718947] end_request: I/O error, dev sda, sector 35657184
> [  107.718965] end_request: I/O error, dev sda, sector 246202760
> [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
> [  107.718995] end_request: I/O error, dev sda, sector 254548368
> [  107.719009] Aborting journal on device dm-6-8.
> [  107.719021] end_request: I/O error, dev sda, sector 35164192
> [  107.719023] Buffer I/O error on device dm-5, logical block 4052
> [  107.719063] Aborting journal on device dm-5-8.
> [  107.719085] end_request: I/O error, dev sda, sector 254546304
> [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
> [  107.719129] JBD2: I/O error detected when updating journal
> superblock for dm-6-8.
> [  107.719141] end_request: I/O error, dev sda, sector 35656064
> [  107.719146] Buffer I/O error on device dm-5, logical block 65536
> [  107.719168] JBD2: I/O error detected when updating journal
> superblock for dm-5-8.
> [  107.870082] end_request: I/O error, dev sda, sector 35131776
> [  107.875825] Buffer I/O error on device dm-5, logical block 0
> [  107.881805] end_request: I/O error, dev sda, sector 35131776
> [  107.887637] Buffer I/O error on device dm-5, logical block 0
> [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
> [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> Detected aborted journal
> [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
> [  107.893617] end_request: I/O error, dev sda, sector 35131776
> [  107.893620] Buffer I/O error on device dm-5, logical block 0
> [  107.893749] end_request: I/O error, dev sda, sector 36180352
> [  107.893752] Buffer I/O error on device dm-6, logical block 0
> [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
> Detected aborted journal
> [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
> [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
> [  107.893784] end_request: I/O error, dev sda, sector 36180352
> [  107.893787] Buffer I/O error on device dm-6, logical block 0
> [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> Detected aborted journal
> [  108.669763] end_request: I/O error, dev sda, sector 25957784
> [  108.675555] Aborting journal on device dm-3-8.
> [  108.680246] end_request: I/O error, dev sda, sector 25956736
> [  108.686099] JBD2: I/O error detected when updating journal
> superblock for dm-3-8.
> [  108.693908] journal commit I/O error
> [  108.755829] end_request: I/O error, dev sda, sector 17305984
> [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
> Detected aborted journal
> [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
> [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
> [  108.782904] end_request: I/O error, dev sda, sector 17305984
> [  109.660011] end_request: I/O error, dev sda, sector 358788
> [  109.665572] Buffer I/O error on device dm-1, logical block 46082
> [  109.682479] end_request: I/O error, dev sda, sector 18832256
> [  109.688246] end_request: I/O error, dev sda, sector 18832256
> [  109.709559] end_request: I/O error, dev sda, sector 357762
> [  109.715120] Buffer I/O error on device dm-1, logical block 45569
> [  109.721506] end_request: I/O error, dev sda, sector 358790
> [  109.727114] Buffer I/O error on device dm-1, logical block 46083
> [  109.743714] end_request: I/O error, dev sda, sector 18832256
> [  109.755555] end_request: I/O error, dev sda, sector 18832256
> [  109.886187] end_request: I/O error, dev sda, sector 357764
> [  109.891756] Buffer I/O error on device dm-1, logical block 45570
> [  109.908344] end_request: I/O error, dev sda, sector 18832256
> [  109.928369] end_request: I/O error, dev sda, sector 349574
> [  109.933938] Buffer I/O error on device dm-1, logical block 41475
> [  109.950336] end_request: I/O error, dev sda, sector 18832256
> [  115.378875] end_request: I/O error, dev sda, sector 365000
> [  115.384445] Aborting journal on device dm-1-8.
> [  115.389120] end_request: I/O error, dev sda, sector 364930
> [  115.394798] Buffer I/O error on device dm-1, logical block 49153
> [  115.401101] JBD2: I/O error detected when updating journal
> superblock for dm-1-8.
> [  207.207426] end_request: I/O error, dev sda, sector 246192376
> [  207.213313] end_request: I/O error, dev sda, sector 246192376
> [  207.903181] end_request: I/O error, dev sda, sector 246192376
> [  209.234399] end_request: I/O error, dev sda, sector 18518400
> [  209.240221] end_request: I/O error, dev sda, sector 18518400


* Re: Xen4.2 S3 regression?
  2012-08-07 16:21 ` Ben Guthro
@ 2012-08-07 16:33   ` Konrad Rzeszutek Wilk
  2012-08-07 16:48     ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-08-07 16:33 UTC (permalink / raw)
  To: Ben Guthro; +Cc: xen-devel

On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
> It looks like this regression may be related to MSI handling.
> 
> "pci=nomsi" on the kernel command line seems to bypass the issue.
> 
> Clearly, legacy interrupts are not ideal.

This is with v3.5 kernel right? With the earlier one you did not have
this issue?
> 
> 
> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
> > I have been doing some experiments in upgrading the Xen version in a
> > future version of XenClient Enterprise, and I've been running into a
> > regression that I'm wondering if anyone else has seen.
> >
> > dom0 suspend/resume (S3) does not seem to be working for me.
> >
> > In swapping out components of the system, the common failure seems to
> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
> >
> > The first suspend seems to mostly work...but subsequent ones always
> > resume improperly.
> > By "improperly" - I see I/O failures, and stalls of many processes.
> >
> > Below is a log excerpt of 2 S3 attempts.
> >
> >
> > Has anyone else seen these failures?
> >
> > - Ben
> >
> >
> > (XEN) Preparing system for ACPI S3 state.
> > (XEN) Disabling non-boot CPUs ...
> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
> > (XEN) Entering ACPI S3 state.
> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> > 0 extended MCE MSR 0
> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> > (XEN) Finishing wakeup from ACPI S3 state.
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) Enabling non-boot CPUs  ...
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> > (XEN) Preparing system for ACPI S3 state.
> > (XEN) Disabling non-boot CPUs ...
> > (XEN) Entering ACPI S3 state.
> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
> > 0 extended MCE MSR 0
> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> > (XEN) Finishing wakeup from ACPI S3 state.
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) Enabling non-boot CPUs  ...
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
> > [   66.508829] ata3.00: revalidation failed (errno=-5)
> > [   66.508861] ata1.00: revalidation failed (errno=-5)
> > [   76.858815] ata3.00: revalidation failed (errno=-5)
> > [   76.898807] ata1.00: revalidation failed (errno=-5)
> > [  107.208817] ata3.00: revalidation failed (errno=-5)
> > [  107.288807] ata1.00: revalidation failed (errno=-5)
> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
> > [  107.719009] Aborting journal on device dm-6-8.
> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
> > [  107.719063] Aborting journal on device dm-5-8.
> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
> > [  107.719129] JBD2: I/O error detected when updating journal
> > superblock for dm-6-8.
> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
> > [  107.719168] JBD2: I/O error detected when updating journal
> > superblock for dm-5-8.
> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
> > [  108.675555] Aborting journal on device dm-3-8.
> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
> > [  108.686099] JBD2: I/O error detected when updating journal
> > superblock for dm-3-8.
> > [  108.693908] journal commit I/O error
> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
> > Detected aborted journal
> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
> > [  109.660011] end_request: I/O error, dev sda, sector 358788
> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
> > [  109.709559] end_request: I/O error, dev sda, sector 357762
> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
> > [  109.721506] end_request: I/O error, dev sda, sector 358790
> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
> > [  109.886187] end_request: I/O error, dev sda, sector 357764
> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
> > [  109.928369] end_request: I/O error, dev sda, sector 349574
> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
> > [  115.378875] end_request: I/O error, dev sda, sector 365000
> > [  115.384445] Aborting journal on device dm-1-8.
> > [  115.389120] end_request: I/O error, dev sda, sector 364930
> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
> > [  115.401101] JBD2: I/O error detected when updating journal
> > superblock for dm-1-8.
> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Xen4.2 S3 regression?
  2012-08-07 16:33   ` Konrad Rzeszutek Wilk
@ 2012-08-07 16:48     ` Ben Guthro
  2012-08-07 20:14       ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-07 16:48 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel

No - the issue seems to follow xen-4.2

My test matrix looks like this:

Xen     Linux                 S3 result
4.0.3   3.2.23                OK
4.0.3   3.5                   OK
4.2     3.2.23                FAIL
4.2     3.5                   FAIL
4.2     3.2.23 pci=nomsi      OK
4.2     3.5 pci=nomsi         (untested)




On Tue, Aug 7, 2012 at 12:33 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
>> It looks like this regression may be related to MSI handling.
>>
>> "pci=nomsi" on the kernel command line seems to bypass the issue.
>>
>> Clearly, legacy interrupts are not ideal.
>
> This is with v3.5 kernel right? With the earlier one you did not have
> this issue?
>>
>>
>> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
>> > I have been doing some experiments in upgrading the Xen version in a
>> > future version of XenClient Enterprise, and I've been running into a
>> > regression that I'm wondering if anyone else has seen.
>> >
>> > dom0 suspend/resume (S3) does not seem to be working for me.
>> >
>> > In swapping out components of the system, the common failure seems to
>> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
>> >
>> > The first suspend seems to mostly work...but subsequent ones always
>> > resume improperly.
>> > By "improperly" - I see I/O failures, and stalls of many processes.
>> >
>> > Below is a log excerpt of 2 S3 attempts.
>> >
>> >
>> > Has anyone else seen these failures?
>> >
>> > - Ben
>> >
>> >
>> > (XEN) Preparing system for ACPI S3 state.
>> > (XEN) Disabling non-boot CPUs ...
>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
>> > (XEN) Entering ACPI S3 state.
>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>> > 0 extended MCE MSR 0
>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>> > (XEN) Finishing wakeup from ACPI S3 state.
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) Enabling non-boot CPUs  ...
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>> > (XEN) Preparing system for ACPI S3 state.
>> > (XEN) Disabling non-boot CPUs ...
>> > (XEN) Entering ACPI S3 state.
>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>> > 0 extended MCE MSR 0
>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>> > (XEN) Finishing wakeup from ACPI S3 state.
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) Enabling non-boot CPUs  ...
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>> > [   66.508829] ata3.00: revalidation failed (errno=-5)
>> > [   66.508861] ata1.00: revalidation failed (errno=-5)
>> > [   76.858815] ata3.00: revalidation failed (errno=-5)
>> > [   76.898807] ata1.00: revalidation failed (errno=-5)
>> > [  107.208817] ata3.00: revalidation failed (errno=-5)
>> > [  107.288807] ata1.00: revalidation failed (errno=-5)
>> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
>> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
>> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
>> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
>> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
>> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
>> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
>> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
>> > [  107.719009] Aborting journal on device dm-6-8.
>> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
>> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
>> > [  107.719063] Aborting journal on device dm-5-8.
>> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
>> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
>> > [  107.719129] JBD2: I/O error detected when updating journal
>> > superblock for dm-6-8.
>> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
>> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
>> > [  107.719168] JBD2: I/O error detected when updating journal
>> > superblock for dm-5-8.
>> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
>> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
>> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
>> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
>> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
>> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
>> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
>> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
>> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
>> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
>> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
>> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
>> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
>> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
>> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
>> > [  108.675555] Aborting journal on device dm-3-8.
>> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
>> > [  108.686099] JBD2: I/O error detected when updating journal
>> > superblock for dm-3-8.
>> > [  108.693908] journal commit I/O error
>> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
>> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
>> > Detected aborted journal
>> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
>> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
>> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
>> > [  109.660011] end_request: I/O error, dev sda, sector 358788
>> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
>> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
>> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
>> > [  109.709559] end_request: I/O error, dev sda, sector 357762
>> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
>> > [  109.721506] end_request: I/O error, dev sda, sector 358790
>> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
>> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
>> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
>> > [  109.886187] end_request: I/O error, dev sda, sector 357764
>> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
>> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
>> > [  109.928369] end_request: I/O error, dev sda, sector 349574
>> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
>> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
>> > [  115.378875] end_request: I/O error, dev sda, sector 365000
>> > [  115.384445] Aborting journal on device dm-1-8.
>> > [  115.389120] end_request: I/O error, dev sda, sector 364930
>> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
>> > [  115.401101] JBD2: I/O error detected when updating journal
>> > superblock for dm-1-8.
>> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
>> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
>> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
>> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
>> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


* Re: Xen4.2 S3 regression?
  2012-08-07 16:48     ` Ben Guthro
@ 2012-08-07 20:14       ` Ben Guthro
  2012-08-08  8:35         ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-07 20:14 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel

[-- Attachment #1: Type: text/plain, Size: 10098 bytes --]

Any suggestions on how best to chase this down?

The first S3 suspend/resume cycle works, but the second does not.

On the second try, no interrupts are ever delivered to ahci (at least
according to /proc/interrupts).
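
For reference, the check boils down to summing the per-CPU columns of
the ahci line in /proc/interrupts before and after each cycle. A minimal
sketch (the sum_ahci_irqs helper and the sample line are illustrative,
and the field arithmetic assumes the usual "IRQ: counts... chip device"
layout, which may differ on other kernels):

```shell
# Sum the per-CPU interrupt counts on the ahci line of /proc/interrupts.
# If the total does not change across a resume, no interrupt was delivered.
sum_ahci_irqs() {
    # Fields: "NN:", one count per CPU, chip name, device name -- so the
    # per-CPU counts are fields 2 through NF-2 (a layout assumption).
    awk '/ahci/ { s = 0; for (i = 2; i <= NF - 2; i++) s += $i; print s }' "$1"
}

# Illustrative sample line (not copied from the machine in question):
printf ' 27:  123  0  0  0  PCI-MSI-edge  ahci\n' > /tmp/interrupts.sample
sum_ahci_irqs /tmp/interrupts.sample    # prints: 123
```

An unchanged total between the snapshot taken after the first resume and
the one taken after the second is what I mean by "no interrupts
delivered" above.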


syslog traces from the first (good) and the second (bad) cycles are
attached, as well as the output of the "*" debug keyhandler (via Ctrl-a
on the serial console) in both cases.

On Tue, Aug 7, 2012 at 12:48 PM, Ben Guthro <ben@guthro.net> wrote:
> No - the issue seems to follow xen-4.2
>
> my test matrix looks as such:
>
> Xen     Linux               S3 result
> 4.0.3   3.2.23              OK
> 4.0.3   3.5                 OK
> 4.2     3.2.23              FAIL
> 4.2     3.5                 FAIL
> 4.2     3.2.23 pci=nomsi    OK
> 4.2     3.5 pci=nomsi       (untested)
>
>
>
>
> On Tue, Aug 7, 2012 at 12:33 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Tue, Aug 07, 2012 at 12:21:22PM -0400, Ben Guthro wrote:
>>> It looks like this regression may be related to MSI handling.
>>>
>>> "pci=nomsi" on the kernel command line seems to bypass the issue.
>>>
>>> Clearly, legacy interrupts are not ideal.
>>
>> This is with v3.5 kernel right? With the earlier one you did not have
>> this issue?
>>>
>>>
>>> On Tue, Aug 7, 2012 at 11:04 AM, Ben Guthro <ben@guthro.net> wrote:
>>> > I have been doing some experiments in upgrading the Xen version in a
>>> > future version of XenClient Enterprise, and I've been running into a
>>> > regression that I'm wondering if anyone else has seen.
>>> >
>>> > dom0 suspend/resume (S3) does not seem to be working for me.
>>> >
>>> > In swapping out components of the system, the common failure seems to
>>> > be when I use Xen-4.2 (upgraded from Xen-4.0.3)
>>> >
>>> > The first suspend seems to mostly work...but subsequent ones always
>>> > resume improperly.
>>> > By "improperly" - I see I/O failures, and stalls of many processes.
>>> >
>>> > Below is a log excerpt of 2 S3 attempts.
>>> >
>>> >
>>> > Has anyone else seen these failures?
>>> >
>>> > - Ben
>>> >
>>> >
>>> > (XEN) Preparing system for ACPI S3 state.
>>> > (XEN) Disabling non-boot CPUs ...
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 1
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 2
>>> > (XEN) Breaking vcpu affinity for domain 0 vcpu 3
>>> > (XEN) Entering ACPI S3 state.
>>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>>> > 0 extended MCE MSR 0
>>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>>> > (XEN) Finishing wakeup from ACPI S3 state.
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) Enabling non-boot CPUs  ...
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > [   36.440696] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>>> > (XEN) Preparing system for ACPI S3 state.
>>> > (XEN) Disabling non-boot CPUs ...
>>> > (XEN) Entering ACPI S3 state.
>>> > (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank
>>> > 0 extended MCE MSR 0
>>> > (XEN) CPU0 CMCI LVT vector (0xf1) already installed
>>> > (XEN) Finishing wakeup from ACPI S3 state.
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) Enabling non-boot CPUs  ...
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > (XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
>>> > [   65.893235] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
>>> > [   66.508829] ata3.00: revalidation failed (errno=-5)
>>> > [   66.508861] ata1.00: revalidation failed (errno=-5)
>>> > [   76.858815] ata3.00: revalidation failed (errno=-5)
>>> > [   76.898807] ata1.00: revalidation failed (errno=-5)
>>> > [  107.208817] ata3.00: revalidation failed (errno=-5)
>>> > [  107.288807] ata1.00: revalidation failed (errno=-5)
>>> > [  107.718866] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
>>> > [  107.718877] PM: Device 0:0:0:0 failed to resume async: error 262144
>>> > [  107.718913] end_request: I/O error, dev sda, sector 35193296
>>> > [  107.718919] Buffer I/O error on device dm-5, logical block 7690
>>> > [  107.718947] end_request: I/O error, dev sda, sector 35657184
>>> > [  107.718965] end_request: I/O error, dev sda, sector 246202760
>>> > [  107.718968] Buffer I/O error on device dm-6, logical block 26252801
>>> > [  107.718995] end_request: I/O error, dev sda, sector 254548368
>>> > [  107.719009] Aborting journal on device dm-6-8.
>>> > [  107.719021] end_request: I/O error, dev sda, sector 35164192
>>> > [  107.719023] Buffer I/O error on device dm-5, logical block 4052
>>> > [  107.719063] Aborting journal on device dm-5-8.
>>> > [  107.719085] end_request: I/O error, dev sda, sector 254546304
>>> > [  107.719097] Buffer I/O error on device dm-6, logical block 27295744
>>> > [  107.719129] JBD2: I/O error detected when updating journal
>>> > superblock for dm-6-8.
>>> > [  107.719141] end_request: I/O error, dev sda, sector 35656064
>>> > [  107.719146] Buffer I/O error on device dm-5, logical block 65536
>>> > [  107.719168] JBD2: I/O error detected when updating journal
>>> > superblock for dm-5-8.
>>> > [  107.870082] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.875825] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.881805] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.887637] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.893573] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > [  107.893579] EXT4-fs (dm-5): I/O error while writing superblock
>>> > [  107.893582] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  107.893584] EXT4-fs (dm-5): Remounting filesystem read-only
>>> > [  107.893617] end_request: I/O error, dev sda, sector 35131776
>>> > [  107.893620] Buffer I/O error on device dm-5, logical block 0
>>> > [  107.893749] end_request: I/O error, dev sda, sector 36180352
>>> > [  107.893752] Buffer I/O error on device dm-6, logical block 0
>>> > [  107.893762] EXT4-fs error (device dm-6): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  107.893765] EXT4-fs (dm-6): Remounting filesystem read-only
>>> > [  107.893766] EXT4-fs (dm-6): previous I/O error to superblock detected
>>> > [  107.893784] end_request: I/O error, dev sda, sector 36180352
>>> > [  107.893787] Buffer I/O error on device dm-6, logical block 0
>>> > [  107.894467] EXT4-fs error (device dm-5): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  108.669763] end_request: I/O error, dev sda, sector 25957784
>>> > [  108.675555] Aborting journal on device dm-3-8.
>>> > [  108.680246] end_request: I/O error, dev sda, sector 25956736
>>> > [  108.686099] JBD2: I/O error detected when updating journal
>>> > superblock for dm-3-8.
>>> > [  108.693908] journal commit I/O error
>>> > [  108.755829] end_request: I/O error, dev sda, sector 17305984
>>> > [  108.761600] EXT4-fs error (device dm-3): ext4_journal_start_sb:327:
>>> > Detected aborted journal
>>> > [  108.770340] EXT4-fs (dm-3): Remounting filesystem read-only
>>> > [  108.776159] EXT4-fs (dm-3): previous I/O error to superblock detected
>>> > [  108.782904] end_request: I/O error, dev sda, sector 17305984
>>> > [  109.660011] end_request: I/O error, dev sda, sector 358788
>>> > [  109.665572] Buffer I/O error on device dm-1, logical block 46082
>>> > [  109.682479] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.688246] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.709559] end_request: I/O error, dev sda, sector 357762
>>> > [  109.715120] Buffer I/O error on device dm-1, logical block 45569
>>> > [  109.721506] end_request: I/O error, dev sda, sector 358790
>>> > [  109.727114] Buffer I/O error on device dm-1, logical block 46083
>>> > [  109.743714] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.755555] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.886187] end_request: I/O error, dev sda, sector 357764
>>> > [  109.891756] Buffer I/O error on device dm-1, logical block 45570
>>> > [  109.908344] end_request: I/O error, dev sda, sector 18832256
>>> > [  109.928369] end_request: I/O error, dev sda, sector 349574
>>> > [  109.933938] Buffer I/O error on device dm-1, logical block 41475
>>> > [  109.950336] end_request: I/O error, dev sda, sector 18832256
>>> > [  115.378875] end_request: I/O error, dev sda, sector 365000
>>> > [  115.384445] Aborting journal on device dm-1-8.
>>> > [  115.389120] end_request: I/O error, dev sda, sector 364930
>>> > [  115.394798] Buffer I/O error on device dm-1, logical block 49153
>>> > [  115.401101] JBD2: I/O error detected when updating journal
>>> > superblock for dm-1-8.
>>> > [  207.207426] end_request: I/O error, dev sda, sector 246192376
>>> > [  207.213313] end_request: I/O error, dev sda, sector 246192376
>>> > [  207.903181] end_request: I/O error, dev sda, sector 246192376
>>> > [  209.234399] end_request: I/O error, dev sda, sector 18518400
>>> > [  209.240221] end_request: I/O error, dev sda, sector 18518400
>>>

[-- Attachment #2: xen-dump-bad.txt --]
[-- Type: text/plain, Size: 66956 bytes --]

(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
(XEN) '*' pressed -> firing all diagnostic keyhandlers
(XEN) [d: dump registers]
(XEN) 'd' pressed -> dumping registers
(XEN) 
(XEN) *** Dumping CPU0 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: ffff82c4803025a0   rbx: ffff82c480302480   rcx: 0000000000000003
(XEN) rdx: 0000000000000000   rsi: ffff82c4802e25c8   rdi: ffff82c480271800
(XEN) rbp: ffff82c4802b7e30   rsp: ffff82c4802b7e30   r8:  0000000000000001
(XEN) r9:  ffff83014899aea8   r10: 0000004bf7d41783   r11: 0000000000000246
(XEN) r12: ffff82c480271800   r13: ffff82c48013d757   r14: 0000004bf7c6894c
(XEN) r15: ffff82c480302308   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 00000000aa2c5000   cr2: ffff880026baa108
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802b7e30:
(XEN)    ffff82c4802b7e60 ffff82c48012817f 0000000000000002 ffff82c4802e25c8
(XEN)    ffff82c480302480 ffff830148992d40 ffff82c4802b7eb0 ffff82c480128281
(XEN)    ffff82c4802b7f18 0000000000000246 0000004bf7d41783 ffff82c4802d8880
(XEN)    ffff82c4802d8880 ffff82c4802b7f18 ffffffffffffffff ffff82c480302308
(XEN)    ffff82c4802b7ee0 ffff82c480125405 ffff82c4802b7f18 ffff82c4802b7f18
(XEN)    00000000ffffffff 0000000000000002 ffff82c4802b7ef0 ffff82c480125484
(XEN)    ffff82c4802b7f10 ffff82c480158c05 ffff8300aa584000 ffff8300aa0fc000
(XEN)    ffff82c4802b7da8 0000000000000000 ffffffffffffffff 0000000000000000
(XEN)    ffffffff81aafda0 ffffffff81a01ee8 ffffffff81a01fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffffffff81a01ed0 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff8300aa584000
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN)    [<ffff82c48012817f>] execute_timer+0x4e/0x6c
(XEN)    [<ffff82c480128281>] timer_softirq_action+0xe4/0x21a
(XEN)    [<ffff82c480125405>] __do_softirq+0x95/0xa0
(XEN)    [<ffff82c480125484>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480158c05>] idle_loop+0x6f/0x71
(XEN)    
(XEN) *** Dumping CPU1 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83013e67ff18   rcx: 0000000000000001
(XEN) rdx: 0000003cbd368d80   rsi: 000000008ce4ca7a   rdi: 0000000000000001
(XEN) rbp: ffff83013e67fef0   rsp: ffff83013e67fef0   r8:  00000050ef7ef570
(XEN) r9:  ffff8300a83fd060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83013e67ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83013d66b088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013d96c000   cr2: ffff880026b8e1e8
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83013e67fef0:
(XEN)    ffff83013e67ff10 ffff82c480158bf8 ffff8300aa0fe000 ffff8300a83fd000
(XEN)    ffff83013e67fda8 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffffffff81aafda0 ffff88002786dee0 ffff88002786dfd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786dec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000001 ffff8300aa0fe000
(XEN)    0000003cbd368d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU2 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014893ff18   rcx: 0000000000000002
(XEN) rdx: 0000003cccfb5d80   rsi: 000000008daa6bb0   rdi: 0000000000000002
(XEN) rbp: ffff83014893fef0   rsp: ffff83014893fef0   r8:  0000005111da1694
(XEN) r9:  ffff8300a83fc060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014893ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83014d2b8088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014cdab000   cr2: ffff880002e63768
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014893fef0:
(XEN)    ffff83014893ff10 ffff82c480158bf8 ffff8300a85c7000 ffff8300a83fc000
(XEN)    ffff83014893fda8 0000000000000000 0000000000000000 0000000000000002
(XEN)    ffffffff81aafda0 ffff88002786fee0 ffff88002786ffd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786fec8 000000000000e02b 3ec2e3268092e167 b97cec5b9e68dc93
(XEN)    cc98079249fb73b5 e1a35de4fd161a6f e1e35d6400000002 ffff8300a85c7000
(XEN)    0000003cccfb5d80 e24d5a38f2ae051e
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU3 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014892ff18   rcx: 0000000000000003
(XEN) rdx: 0000003cccc52d80   rsi: 000000008e703250   rdi: 0000000000000003
(XEN) rbp: ffff83014892fef0   rsp: ffff83014892fef0   r8:  00000051348f1744
(XEN) r9:  ffff8300aa583060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014892ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83014cf55088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014caf4000   cr2: ffff880003213108
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014892fef0:
(XEN)    ffff83014892ff10 ffff82c480158bf8 ffff8300a83fe000 ffff8300aa583000
(XEN)    ffff83014892fda8 0000000000000000 0000000000000000 0000000000000003
(XEN)    ffffffff81aafda0 ffff880027881ee0 ffff880027881fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff880027881ec8 000000000000e02b 3ec2e3268092e167 b97cec5b9e68dc93
(XEN)    cc98079249fb73b5 e1a35de4fd161a6f e1e35d6400000003 ffff8300a83fe000
(XEN)    0000003cccc52d80 e24d5a38f2ae051e
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) [0: dump Dom0 registers]
(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffffffff81a01fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffffffff81a01ee8   rsp: ffffffff81a01ed0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000000   r14: ffffffffffffffff
(XEN) r15: 0000000000000000   cr0: 0000000000000008   cr4: 0000000000002660
(XEN) cr3: 0000000141a05000   cr2: 00007fc969621c62
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffff81a01ed0:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffffffff81a01f18
(XEN)    ffffffff8101c663 ffffffff81a01fd8 ffffffff81aafda0 ffff88002dee1a00
(XEN)    ffffffffffffffff ffffffff81a01f48 ffffffff81013236 ffffffffffffffff
(XEN)    8e6a1de960e75dc3 0000000000000000 ffffffff81b15160 ffffffff81a01f58
(XEN)    ffffffff81554f5e ffffffff81a01f98 ffffffff81accbf5 ffffffff81b15160
(XEN)    d0a0f0752ad9a008 0000000000cdf000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81a01fb8 ffffffff81acc34b ffffffff7fffffff
(XEN)    ffffffff84b25000 ffffffff81a01ff8 ffffffff81acfecc 0000000000000000
(XEN)    0000000100000000 00100800000306a4 1fc98b75e3b82283 0000000000000000
(XEN)    0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786dfd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786dee0   rsp: ffff88002786dec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000001   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013d96c000   cr2: 00007fcb6643d039
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786dec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786df10
(XEN)    ffffffff8101c663 ffff88002786dfd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786df40 ffffffff81013236 ffffffff8100ade9
(XEN)    a1021ef739fb27d3 0000000000000000 0000000000000000 ffff88002786df50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786df58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786ffd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786fee0   rsp: ffff88002786fec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000002   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000014cdab000   cr2: 00000000014ed188
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786fec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786ff10
(XEN)    ffffffff8101c663 ffff88002786ffd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786ff40 ffffffff81013236 ffffffff8100ade9
(XEN)    57902e18ee3212f9 0000000000000000 0000000000000000 ffff88002786ff50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786ff58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff880027881fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff880027881ee0   rsp: ffff880027881ec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000003   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013e421000   cr2: 00007fc969621c62
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880027881ec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff880027881f10
(XEN)    ffffffff8101c663 ffff880027881fd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff880027881f40 ffffffff81013236 ffffffff8100ade9
(XEN)    67b1044cc4b7c391 0000000000000000 0000000000000000 ffff880027881f50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff880027881f58 0000000000000000
(XEN) [H: dump heap info]
(XEN) 'H' pressed -> dumping heap info (now-0x4C:41923B6B)
(XEN) heap[node=0][zone=0] -> 0 pages
(XEN) heap[node=0][zone=1] -> 0 pages
(XEN) heap[node=0][zone=2] -> 0 pages
(XEN) heap[node=0][zone=3] -> 0 pages
(XEN) heap[node=0][zone=4] -> 0 pages
(XEN) heap[node=0][zone=5] -> 0 pages
(XEN) heap[node=0][zone=6] -> 0 pages
(XEN) heap[node=0][zone=7] -> 0 pages
(XEN) heap[node=0][zone=8] -> 0 pages
(XEN) heap[node=0][zone=9] -> 0 pages
(XEN) heap[node=0][zone=10] -> 0 pages
(XEN) heap[node=0][zone=11] -> 0 pages
(XEN) heap[node=0][zone=12] -> 0 pages
(XEN) heap[node=0][zone=13] -> 0 pages
(XEN) heap[node=0][zone=14] -> 16128 pages
(XEN) heap[node=0][zone=15] -> 32768 pages
(XEN) heap[node=0][zone=16] -> 65536 pages
(XEN) heap[node=0][zone=17] -> 130559 pages
(XEN) heap[node=0][zone=18] -> 262143 pages
(XEN) heap[node=0][zone=19] -> 172772 pages
(XEN) heap[node=0][zone=20] -> 134290 pages
(XEN) heap[node=0][zone=21] -> 0 pages
(XEN) heap[node=0][zone=22] -> 0 pages
(XEN) heap[node=0][zone=23] -> 0 pages
(XEN) heap[node=0][zone=24] -> 0 pages
(XEN) heap[node=0][zone=25] -> 0 pages
(XEN) heap[node=0][zone=26] -> 0 pages
(XEN) heap[node=0][zone=27] -> 0 pages
(XEN) heap[node=0][zone=28] -> 0 pages
(XEN) heap[node=0][zone=29] -> 0 pages
(XEN) heap[node=0][zone=30] -> 0 pages
(XEN) heap[node=0][zone=31] -> 0 pages
(XEN) heap[node=0][zone=32] -> 0 pages
(XEN) heap[node=0][zone=33] -> 0 pages
(XEN) heap[node=0][zone=34] -> 0 pages
(XEN) heap[node=0][zone=35] -> 0 pages
(XEN) heap[node=0][zone=36] -> 0 pages
(XEN) heap[node=0][zone=37] -> 0 pages
(XEN) heap[node=0][zone=38] -> 0 pages
(XEN) heap[node=0][zone=39] -> 0 pages
(XEN) [I: dump HVM irq info]
(XEN) 'I' pressed -> dumping HVM irq info
(XEN) [M: dump MSI state]
(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=a1 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    29 vec=a9 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    30 vec=b1 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    31 vec=c9 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN) [Q: dump PCI devices]
(XEN) ==== PCI devices ====
(XEN) ==== segment 0000 ====
(XEN) 0000:05:01.0 - dom 0   - MSIs < >
(XEN) 0000:04:00.0 - dom 0   - MSIs < >
(XEN) 0000:03:00.0 - dom 0   - MSIs < >
(XEN) 0000:02:00.0 - dom 0   - MSIs < 30 >
(XEN) 0000:00:1f.3 - dom 0   - MSIs < >
(XEN) 0000:00:1f.2 - dom 0   - MSIs < 27 >
(XEN) 0000:00:1f.0 - dom 0   - MSIs < >
(XEN) 0000:00:1e.0 - dom 0   - MSIs < >
(XEN) 0000:00:1d.0 - dom 0   - MSIs < >
(XEN) 0000:00:1c.7 - dom 0   - MSIs < >
(XEN) 0000:00:1c.6 - dom 0   - MSIs < >
(XEN) 0000:00:1c.0 - dom 0   - MSIs < >
(XEN) 0000:00:1b.0 - dom 0   - MSIs < 26 >
(XEN) 0000:00:1a.0 - dom 0   - MSIs < >
(XEN) 0000:00:19.0 - dom 0   - MSIs < 31 >
(XEN) 0000:00:16.3 - dom 0   - MSIs < >
(XEN) 0000:00:16.0 - dom 0   - MSIs < >
(XEN) 0000:00:14.0 - dom 0   - MSIs < 29 >
(XEN) 0000:00:02.0 - dom 0   - MSIs < 28 >
(XEN) 0000:00:00.0 - dom 0   - MSIs < >
(XEN) [V: dump iommu info]
(XEN) 
(XEN) iommu 0: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  0010 00000001 29    0   1  0  1  1   0 1
(XEN) 
(XEN) iommu 1: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
(XEN)   0001:  1   0  f0f8 00000001 f0    0   1  0  1  1   0 1
(XEN)   0002:  1   0  f0f8 00000001 40    0   1  0  1  1   0 1
(XEN)   0003:  1   0  f0f8 00000001 48    0   1  0  1  1   0 1
(XEN)   0004:  1   0  f0f8 00000001 50    0   1  0  1  1   0 1
(XEN)   0005:  1   0  f0f8 00000001 58    0   1  0  1  1   0 1
(XEN)   0006:  1   0  f0f8 00000001 60    0   1  0  1  1   0 1
(XEN)   0007:  1   0  f0f8 00000001 68    0   1  0  1  1   0 1
(XEN)   0008:  1   0  f0f8 00000001 70    0   1  1  1  1   0 1
(XEN)   0009:  1   0  f0f8 00000001 78    0   1  0  1  1   0 1
(XEN)   000a:  1   0  f0f8 00000001 88    0   1  0  1  1   0 1
(XEN)   000b:  1   0  f0f8 00000001 90    0   1  0  1  1   0 1
(XEN)   000c:  1   0  f0f8 00000001 98    0   1  0  1  1   0 1
(XEN)   000d:  1   0  f0f8 00000001 a0    0   1  0  1  1   0 1
(XEN)   000e:  1   0  f0f8 00000001 a8    0   1  0  1  1   0 1
(XEN)   000f:  1   0  f0f8 00000001 b0    0   1  1  1  1   0 1
(XEN)   0010:  1   0  f0f8 00000001 b8    0   1  1  1  1   0 1
(XEN)   0011:  1   0  f0f8 00000001 c0    0   1  1  1  1   0 1
(XEN)   0012:  1   0  f0f8 00000001 c8    0   1  1  1  1   0 1
(XEN)   0013:  1   0  f0f8 00000001 d0    0   1  1  1  1   0 1
(XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
(XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
(XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
(XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
(XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
(XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
(XEN) 
(XEN) Redirection table of IOAPIC 0:
(XEN)   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN)    01:  0000   1    0   0   0   0    0    0     38
(XEN)    02:  0001   1    0   0   0   0    0    0     f0
(XEN)    03:  0002   1    0   0   0   0    0    0     40
(XEN)    04:  0003   1    0   0   0   0    0    0     48
(XEN)    05:  0004   1    0   0   0   0    0    0     50
(XEN)    06:  0005   1    0   0   0   0    0    0     58
(XEN)    07:  0006   1    0   0   0   0    0    0     60
(XEN)    08:  0007   1    0   0   0   0    0    0     68
(XEN)    09:  0008   1    0   1   0   0    0    0     70
(XEN)    0a:  0009   1    0   0   0   0    0    0     78
(XEN)    0b:  000a   1    0   0   0   0    0    0     88
(XEN)    0c:  000b   1    0   0   0   0    0    0     90
(XEN)    0d:  000c   1    1   0   0   0    0    0     98
(XEN)    0e:  000d   1    0   0   0   0    0    0     a0
(XEN)    0f:  000e   1    0   0   0   0    0    0     a8
(XEN)    10:  000f   1    0   1   0   1    0    0     b0
(XEN)    12:  0010   1    1   1   0   1    0    0     b8
(XEN)    13:  0011   1    1   1   0   1    0    0     c0
(XEN)    14:  0016   1    1   1   0   1    0    0     31
(XEN)    16:  0013   1    1   1   0   1    0    0     d0
(XEN)    17:  0012   1    0   1   0   1    0    0     c8
(XEN) [a: dump timer queues]
(XEN) Dumping timer queues:
(XEN) CPU00:
(XEN)   ex=   -1681us timer=ffff82c4802e25c8 cb=ffff82c48013d757(ffff82c480271800) ns16550_poll+0x0/0x33
(XEN)   ex=    7318us timer=ffff83014899a1b8 cb=ffff82c480119d72(ffff83014899a190) csched_acct+0x0/0x42a
(XEN)   ex=  457001us timer=ffff82c480300580 cb=ffff82c4801a8850(0000000000000000) mce_work_fn+0x0/0xa9
(XEN)   ex=122315759us timer=ffff82c4802fe280 cb=ffff82c4801807c2(0000000000000000) plt_overflow+0x0/0x131
(XEN)   ex=    7318us timer=ffff83014899aea8 cb=ffff82c48011aaf0(0000000000000000) csched_tick+0x0/0x314
(XEN) CPU01:
(XEN)   ex=   62611us timer=ffff83014c9665f8 cb=ffff82c48011aaf0(0000000000000001) csched_tick+0x0/0x314
(XEN)   ex=  262093us timer=ffff8300a83fd060 cb=ffff82c480121c6b(ffff8300a83fd000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU02:
(XEN)   ex=   82953us timer=ffff830136a49a28 cb=ffff82c48011aaf0(0000000000000002) csched_tick+0x0/0x314
(XEN)   ex=  238099us timer=ffff8300a83fc060 cb=ffff82c480121c6b(ffff8300a83fc000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU03:
(XEN)   ex=  103267us timer=ffff83011c9e9948 cb=ffff82c48011aaf0(0000000000000003) csched_tick+0x0/0x314
(XEN)   ex= 3920019us timer=ffff8300aa583060 cb=ffff82c480121c6b(ffff8300aa583000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) [c: dump ACPI Cx structures]
(XEN) 'c' pressed -> printing ACPI Cx structures
(XEN) ==cpu0==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[328278054162]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu1==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[328302844429]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu2==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[328327634823]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu3==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[328352424522]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) [e: dump evtchn info]
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Event channel information for domain 0:
(XEN) Polling vCPUs: {}
(XEN)     port [p/m]
(XEN)        1 [1/0]: s=5 n=0 x=0 v=0
(XEN)        2 [1/1]: s=6 n=0 x=0
(XEN)        3 [1/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0
(XEN)        5 [0/0]: s=5 n=0 x=0 v=1
(XEN)        6 [0/0]: s=6 n=0 x=0
(XEN)        7 [0/0]: s=5 n=1 x=0 v=0
(XEN)        8 [1/1]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=6 n=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=1 x=0 v=1
(XEN)       12 [0/0]: s=6 n=1 x=0
(XEN)       13 [0/0]: s=5 n=2 x=0 v=0
(XEN)       14 [1/1]: s=6 n=2 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=6 n=2 x=0
(XEN)       17 [0/0]: s=5 n=2 x=0 v=1
(XEN)       18 [0/0]: s=6 n=2 x=0
(XEN)       19 [0/0]: s=5 n=3 x=0 v=0
(XEN)       20 [1/1]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=6 n=3 x=0
(XEN)       22 [0/0]: s=6 n=3 x=0
(XEN)       23 [0/0]: s=5 n=3 x=0 v=1
(XEN)       24 [0/0]: s=6 n=3 x=0
(XEN)       25 [0/0]: s=3 n=0 x=0 d=0 p=36
(XEN)       26 [0/0]: s=4 n=0 x=0 p=9 i=9
(XEN)       27 [1/1]: s=5 n=0 x=0 v=2
(XEN)       28 [0/0]: s=4 n=0 x=0 p=8 i=8
(XEN)       29 [0/0]: s=4 n=0 x=0 p=279 i=26
(XEN)       30 [0/0]: s=4 n=0 x=0 p=277 i=28
(XEN)       31 [0/0]: s=4 n=0 x=0 p=16 i=16
(XEN)       32 [0/0]: s=4 n=0 x=0 p=278 i=27
(XEN)       33 [0/0]: s=4 n=0 x=0 p=23 i=23
(XEN)       34 [0/0]: s=4 n=0 x=0 p=276 i=29
(XEN)       35 [0/0]: s=4 n=0 x=0 p=275 i=30
(XEN)       36 [0/0]: s=3 n=0 x=0 d=0 p=25
(XEN)       37 [0/0]: s=5 n=0 x=0 v=3
(XEN)       38 [1/0]: s=4 n=0 x=0 p=274 i=31
(XEN) [g: print grant table usage]
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN) gnttab_usage_print_all ] done
(XEN) [i: dump interrupt bindings]
(XEN) Guest interrupt information:
(XEN)    IRQ:   0 affinity:0001 vec:f0 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   1 affinity:0001 vec:38 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   2 affinity:ffff vec:e2 type=XT-PIC          status=00000000 mapped, unbound
(XEN)    IRQ:   3 affinity:0001 vec:40 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   4 affinity:0001 vec:48 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   5 affinity:0001 vec:50 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   6 affinity:0001 vec:58 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   7 affinity:0001 vec:60 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   8 affinity:0001 vec:68 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  8(-S--),
(XEN)    IRQ:   9 affinity:0001 vec:70 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0:  9(-S--),
(XEN)    IRQ:  10 affinity:0001 vec:78 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  11 affinity:0001 vec:88 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  12 affinity:0001 vec:90 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  13 affinity:000f vec:98 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  14 affinity:0001 vec:a0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  15 affinity:0001 vec:a8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  16 affinity:0001 vec:b0 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 16(-S--),
(XEN)    IRQ:  18 affinity:000f vec:b8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  19 affinity:0001 vec:c0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  20 affinity:000f vec:31 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  22 affinity:0001 vec:d0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  23 affinity:0001 vec:c8 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 23(-S--),
(XEN)    IRQ:  24 affinity:0001 vec:28 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  25 affinity:0001 vec:30 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  26 affinity:0001 vec:a1 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
(XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
(XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN)    IRQ:  29 affinity:0001 vec:a9 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
(XEN)    IRQ:  30 affinity:0001 vec:b1 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(-S--),
(XEN)    IRQ:  31 affinity:0001 vec:c9 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vec=f0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  1 Vec 56:
(XEN)       Apic 0x00, Pin  1: vec=38 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  3 Vec 64:
(XEN)       Apic 0x00, Pin  3: vec=40 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  4 Vec 72:
(XEN)       Apic 0x00, Pin  4: vec=48 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  5 Vec 80:
(XEN)       Apic 0x00, Pin  5: vec=50 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  6 Vec 88:
(XEN)       Apic 0x00, Pin  6: vec=58 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  7 Vec 96:
(XEN)       Apic 0x00, Pin  7: vec=60 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  8 Vec104:
(XEN)       Apic 0x00, Pin  8: vec=68 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  9 Vec112:
(XEN)       Apic 0x00, Pin  9: vec=70 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 10 Vec120:
(XEN)       Apic 0x00, Pin 10: vec=78 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 11 Vec136:
(XEN)       Apic 0x00, Pin 11: vec=88 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 12 Vec144:
(XEN)       Apic 0x00, Pin 12: vec=90 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 13 Vec152:
(XEN)       Apic 0x00, Pin 13: vec=98 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 dest_id:0
(XEN)     IRQ 14 Vec160:
(XEN)       Apic 0x00, Pin 14: vec=a0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 15 Vec168:
(XEN)       Apic 0x00, Pin 15: vec=a8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 16 Vec176:
(XEN)       Apic 0x00, Pin 16: vec=b0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 18 Vec184:
(XEN)       Apic 0x00, Pin 18: vec=b8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 19 Vec192:
(XEN)       Apic 0x00, Pin 19: vec=c0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 20 Vec 49:
(XEN)       Apic 0x00, Pin 20: vec=31 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 22 Vec208:
(XEN)       Apic 0x00, Pin 22: vec=d0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 23 Vec200:
(XEN)       Apic 0x00, Pin 23: vec=c8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN) [m: memory info]
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64512kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 522236kB free
(XEN)     heap[18]: 1048572kB free
(XEN)     heap[19]: 691088kB free
(XEN)     heap[20]: 537160kB free
(XEN)     Dom heap: 3256784kB free
(XEN) [n: NMI statistics]
(XEN) CPU	NMI
(XEN)   0	  0
(XEN)   1	  0
(XEN)   2	  0
(XEN)   3	  0
(XEN) dom0 vcpu0: NMI neither pending nor masked
(XEN) [q: dump domain (and guest debug) info]
(XEN) 'q' pressed -> dumping domain info (now=0x4C:A187B5FD)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=187539 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={1-3} max_pages=188147
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-407, 40c-cfb, d00-204f, 2058-ffff }
(XEN)     Interrupts { 0-279 }
(XEN)     I/O Memory { 0-febff, fec01-fedff, fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 00000000001476f6: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 00000000001476f5: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000001476f4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000001476f3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000aa0fd: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000011c9ea: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/-1)
(XEN) Notifying guest 0:1 (virq 1, port 11, stat 0/0/0)
(XEN) Notifying guest 0:2 (virq 1, port 17, stat 0/0/0)
(XEN) Notifying guest 0:3 (virq 1, port 23, stat 0/0/0)

(XEN) Shared frames 0 -- Saved frames 0
(XEN) [r: dump run queues]
[  329.305747] vcpu 1
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=0x0000004CAD4EEB4F
(XEN) Idle cpupool:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
[  329.305748]   0: masked=0 pending=1 event_sel 0000000000000001
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = 47
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 2924
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
(XEN) idlers: 000c
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-1 flags=0 cpu=1 credit=-540 [w=256]
(XEN) Cpupool 0:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = 47
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 2924
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
(XEN) idlers: 000c
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-1 flags=0 cpu=1 credit=-1096 [w=256]
(XEN) CPU[00]  sort=2924, sibling=0001, core=000f
[  329.376264]   1: masked=0 pending=1 event_sel 0000000000000001
(XEN) 	run: [32767.0] pri=0 flags=0 cpu=0
(XEN) 	  1: [0.0] pri=0 flags=0 cpu=0 credit=74 [w=256]
(XEN) CPU[01]  sort=2924, sibling=0002, core=000f
(XEN) 	run: [0.1] pri=-1 flags=0 cpu=1 credit=-1355 [w=256]
(XEN) 	  1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02]  sort=2924, sibling=0004, core=000f
(XEN) 	run: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03]  sort=2924, sibling=0008, core=000f
[  329.446337]   2: masked=1 pending=1 event_sel 0000000000000001
(XEN) 	run: [32767.3] pri=-64 flags=0 cpu=3
(XEN) [s: dump softtsc stats]
(XEN) TSC marked as reliable, warp = 0 (count=3)
(XEN) No domains have emulated TSC
(XEN) [t: display multi-cpu clock info]
(XEN) Synced stime skew: max=15761ns avg=12126ns samples=2 current=15761ns
[  329.488579]   3: masked=1 pending=0 event_sel 0000000000000000
(XEN) Synced cycles skew: max=170 avg=165 samples=2 current=160
(XEN) [u: dump numa info]
(XEN) 'u' pressed -> dumping numa info (now=0x4C:BAA19C57)
(XEN) idx0 -> NODE0 start->0 size->1369600 free->814196
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE0
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE0
(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 187539):
[  329.546269]   
(XEN)     Node 0: 187539
(XEN) [v: dump Intel's VMCS]
(XEN) *********** VMCS Areas **************
(XEN) **************************************
(XEN) [z: print ioapic info]
[  329.586362] pending:
(XEN) number of MP IRQ sources: 15.
[  329.586363]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) number of IO-APIC #2 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #2......
(XEN) .... register #00: 02000000
(XEN) .......    : physical APIC id: 02
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:
(XEN)  00 000 00  1    0    0   0   0    0    0    00
(XEN)  01 000 00  0    0    0   0   0    1    1    38
(XEN)  02 000 00  0    0    0   0   0    1    1    F0
(XEN)  03 000 00  0    0    0   0   0    1    1    40
(XEN)  04 000 00  0    0    0   0   0    1    1    48
(XEN)  05 000 00  0    0    0   0   0    1    1    50
(XEN)  06 000 00  0    0    0   0   0    1    1    58
(XEN)  07 000 00  0    0    0   0   0    1    1    60
(XEN)  08 000 00  0    0    0   0   0    1    1    68
(XEN)  09 000 00  0    1    0   0   0    1    1    70
(XEN)  0a 000 00  0    0    0   0   0    1    1    78
(XEN)  0b 000 00  0    0    0   0   0    1    1    88
(XEN)  0c 000 00  0    0    0   0   0    1    1    90
(XEN)  0d 000 00  1    0    0   0   0    1    1    98
(XEN)  0e 000 00  0    0    0   0   0    1    1    A0
[  329.724721]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)  0f 000 00  0    0    0   0   0    1    1    A8
(XEN)  10 000 00  0    1    0   1   0    1    1    B0
(XEN)  11 000 00  1    0    0   0   0    0    0    00
(XEN)  12 000 00  1    1    0   1   0    1    1    B8
(XEN)  13 000 00  1    1    0   1   0    1    1    C0
(XEN)  14 000 00  1    1    0   1   0    1    1    31
(XEN)  15 000 00  1    0    0   0   0    0    0    00
(XEN)  16 000 00  1    1    0   1   0    1    1    D0
(XEN)  17 000 00  0    1    0   1   0    1    1    C8
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ56 -> 0:1
(XEN) IRQ64 -> 0:3
(XEN) IRQ72 -> 0:4
(XEN) IRQ80 -> 0:5
(XEN) IRQ88 -> 0:6
(XEN) IRQ96 -> 0:7
(XEN) IRQ104 -> 0:8
(XEN) IRQ112 -> 0:9
(XEN) IRQ120 -> 0:10
(XEN) IRQ136 -> 0:11
(XEN) IRQ144 -> 0:12
(XEN) IRQ152 -> 0:13
(XEN) IRQ160 -> 0:14
(XEN) IRQ168 -> 0:15
(XEN) IRQ176 -> 0:16
(XEN) IRQ184 -> 0:18
(XEN) IRQ192 -> 0:19
(XEN) IRQ49 -> 0:20
(XEN) IRQ208 -> 0:22
(XEN) IRQ200 -> 0:23
(XEN) .................................... done.
[  329.847775]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  329.861736]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  329.875698]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  329.889659]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  329.903620]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  329.917582]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004008002182
[  329.931543]    
[  329.934854] global mask:
[  329.934854]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  329.950069]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  329.964030]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  329.977991]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  329.991952]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.005913]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.019874]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.033835]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  330.047796]    
[  330.051108] globally unmasked:
[  330.051108]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.066859]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.080821]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.094781]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.108743]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.122704]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.136665]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.150627]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004000002082
[  330.164587]    
[  330.167898] local cpu1 mask:
[  330.167899]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.183470]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.197432]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.211392]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.225354]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.239316]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.253277]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.267238]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000001f80
[  330.281199]    
[  330.284510] locally unmasked:
[  330.284511]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.300172]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.314133]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.328094]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.342056]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.356017]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.369978]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.383942]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000080
[  330.397900]    
[  330.401212] pending list:
[  330.404255]   0: event 1 -> irq 272 locally-masked
[  330.409266]   1: event 7 -> irq 278
[  330.412937]   1: event 8 -> irq 279 globally-masked
[  330.418037]   2: event 13 -> irq 284 locally-masked
[  330.423138]   0: event 27 -> irq 297 globally-masked locally-masked
[  330.429671]   0: event 38 -> irq 303 locally-masked
[  330.434794] 
[  330.434796] vcpu 0
[  330.434796]   0: masked=0 pending=0 event_sel 0000000000000000
[  330.440038]   1: masked=0 pending=0 event_sel 0000000000000000
[  330.446123]   2: masked=1 pending=1 event_sel 0000000000000001
[  330.452209]   3: masked=1 pending=1 event_sel 0000000000000001
[  330.458295]   
[  330.464379] pending:
[  330.464380]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.479236]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.493197]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.507158]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.521119]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.535080]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.549041]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.563003]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004008202106
[  330.576964]    
[  330.580275] global mask:
[  330.580275]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.595489]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.609450]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.623412]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.637372]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.651334]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.665295]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  330.679257]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  330.693218]    
[  330.696529] globally unmasked:
[  330.696530]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.712280]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.726241]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.740203]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.754163]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.768125]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.782086]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.796048]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004000202002
[  330.810008]    
[  330.813319] local cpu0 mask:
[  330.813320]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.828891]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.842854]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.856814]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.870776]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.884736]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.898698]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.912659]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 fffffffffe00007f
[  330.926620]    
[  330.929931] locally unmasked:
[  330.929932]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.945593]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.959555]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.973516]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  330.987476]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.001438]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.015400]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.029360]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004000000002
[  331.043321]    
[  331.046633] pending list:
[  331.049681]   0: event 1 -> irq 272 l2-clear
[  331.054151]   0: event 2 -> irq 273 l2-clear globally-masked
[  331.060058]   1: event 8 -> irq 279 l2-clear globally-masked locally-masked
[  331.067312]   2: event 13 -> irq 284 l2-clear locally-masked
[  331.073214]   3: event 21 -> irq 292 l2-clear locally-masked
[  331.079121]   0: event 27 -> irq 297 l2-clear globally-masked
[  331.085116]   0: event 38 -> irq 303 l2-clear
[  331.089716] 
[  331.089717] vcpu 2
[  331.089718]   0: masked=0 pending=0 event_sel 0000000000000000
[  331.094979]   1: masked=1 pending=1 event_sel 0000000000000001
[  331.101064]   2: masked=0 pending=1 event_sel 0000000000000001
[  331.107149]   3: masked=1 pending=1 event_sel 0000000000000001
[  331.113234]   
[  331.119319] pending:
[  331.119319]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.134175]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.148136]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.162097]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.176058]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.190020]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.203981]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.217944]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000820e106
[  331.231904]    
[  331.235214] global mask:
[  331.235214]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.250428]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.264389]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.278351]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.292312]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.306274]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.320234]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.334196]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  331.348157]    
[  331.351468] globally unmasked:
[  331.351468]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.367220]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.381181]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.395141]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.409103]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.423064]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.437025]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.450986]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000020a000
[  331.464948]    
[  331.468258] local cpu2 mask:
[  331.468259]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.483831]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.497793]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.511753]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.525715]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.539676]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.553640]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.567598]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000007e000
[  331.581560]    
[  331.584870] locally unmasked:
[  331.584871]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.600532]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.614494]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.628454]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.642416]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.656377]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.670339]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.684299]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000a000
[  331.698260]    
[  331.701571] pending list:
[  331.704616]   0: event 2 -> irq 273 globally-masked locally-masked
[  331.711058]   1: event 8 -> irq 279 globally-masked locally-masked
[  331.717502]   2: event 13 -> irq 284
[  331.721261]   2: event 14 -> irq 285 globally-masked
[  331.726452]   2: event 15 -> irq 286
[  331.730211]   3: event 21 -> irq 292 locally-masked
[  331.735312]   0: event 27 -> irq 297 globally-masked locally-masked
[  331.741868] 
[  331.741869] vcpu 3
[  331.741869]   0: masked=1 pending=1 event_sel 0000000000000001
[  331.747127]   1: masked=0 pending=0 event_sel 0000000000000000
[  331.753213]   2: masked=0 pending=0 event_sel 0000000000000000
[  331.759299]   3: masked=0 pending=1 event_sel 0000000000000001
[  331.765384]   
[  331.771468] pending:
[  331.771468]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.786324]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.800285]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.814245]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.828206]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.842168]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.856129]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  331.870091]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000008304104
[  331.884053]    
[  331.887364] global mask:
[  331.887365]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.902577]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.916538]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.930500]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.944461]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.958422]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.972384]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  331.986345]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  332.000305]    
[  332.003616] globally unmasked:
[  332.003617]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.019368]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.033329]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.047291]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.061252]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.075213]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.089174]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.103135]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  332.117097]    
[  332.120407] local cpu3 mask:
[  332.120408]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.135980]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.149941]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.163902]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.177864]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.191825]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.205786]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.219747]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000001f80000
[  332.233709]    
[  332.237019] locally unmasked:
[  332.237020]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.252681]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.266643]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.280604]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.294565]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.308525]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.322487]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  332.336448]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  332.350409]    
[  332.353721] pending list:
[  332.356764]   0: event 2 -> irq 273 globally-masked locally-masked
[  332.363208]   1: event 8 -> irq 279 globally-masked locally-masked
[  332.369652]   2: event 14 -> irq 285 globally-masked locally-masked
[  332.376185]   3: event 19 -> irq 290
[  332.379942]   3: event 20 -> irq 291 globally-masked
[  332.385133]   3: event 21 -> irq 292
[  332.388893]   0: event 27 -> irq 297 globally-masked locally-masked


[-- Attachment #3: syslog-bad.txt --]
[-- Type: text/plain, Size: 33252 bytes --]

[  232.557709] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S
[  232.569712] iwlwifi 0000:02:00.0: Radio type=0x1-0x2-0x0
[  235.015885] iwlwifi 0000:02:00.0: PCI INT A disabled
[  235.759635] br-eth0: port 1(eth0) entering forwarding state
[  235.773599] br-eth0: port 1(eth0) entering forwarding state
[  235.779265] br-eth0: port 1(eth0) entering forwarding state
[  235.785529] br-eth0: port 1(eth0) entering forwarding state
[  236.025497] e1000e 0000:00:19.0: PCI INT A disabled
[  236.335400] ehci_hcd 0000:00:1d.0: remove, state 1
[  236.340249] usb usb2: USB disconnect, device number 1
[  236.345538] usb 2-1: USB disconnect, device number 2
[  236.350716] usb 2-1.5: USB disconnect, device number 3
[  236.495570] usb 2-1.6: USB disconnect, device number 4
[  236.710269] ehci_hcd 0000:00:1d.0: USB bus 2 deregistered
[  236.715856] ehci_hcd 0000:00:1d.0: PCI INT A disabled
[  236.721027] ehci_hcd 0000:00:1a.0: remove, state 4
[  236.726048] usb usb1: USB disconnect, device number 1
[  236.731311] usb 1-1: USB disconnect, device number 2
[  236.740658] ehci_hcd 0000:00:1a.0: USB bus 1 deregistered
[  236.746224] ehci_hcd 0000:00:1a.0: PCI INT A disabled
[  236.795418] xhci_hcd 0000:00:14.0: remove, state 4
[  236.800265] usb usb4: USB disconnect, device number 1
[  236.805589] xHCI xhci_drop_endpoint called for root hub
[  236.811004] xHCI xhci_check_bandwidth called for root hub
[  236.816729] xhci_hcd 0000:00:14.0: USB bus 4 deregistered
[  236.822330] xhci_hcd 0000:00:14.0: remove, state 4
[  236.827299] usb usb3: USB disconnect, device number 1
[  236.832598] xHCI xhci_drop_endpoint called for root hub
[  236.838033] xHCI xhci_check_bandwidth called for root hub
[  236.843807] xhci_hcd 0000:00:14.0: USB bus 3 deregistered
[  236.895480] xhci_hcd 0000:00:14.0: PCI INT A disabled
[  237.207195] PM: Syncing filesystems ... done.
[  237.211831] PM: Preparing system for mem sleep
[  237.445419] Freezing user space processes ... (elapsed 0.01 seconds) done.
[  237.467989] Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done.
[  237.487986] PM: Entering mem sleep
[  237.491904] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[  237.492275] snd_hda_intel 0000:00:1b.0: PCI INT A disabled
[  237.492367] ACPI handle has no context!
[  237.506945] sd 0:0:0:0: [sda] Stopping disk
[  237.605378] PM: suspend of drv:ahci dev:0000:00:1f.2 complete after 113.253 msecs
[  237.613008] PM: suspend of drv: dev:pci0000:00 complete after 107.641 msecs
[  237.620262] PM: suspend of devices complete after 128.451 msecs
[  237.626428] PM: suspend devices took 0.140 seconds
[  237.632623] PM: late suspend of devices complete after 1.191 msecs
[  237.639309] ACPI: Preparing to enter system sleep state S3
[  237.645122] PM: Saving platform NVS memory
[  237.842610] Disabling non-boot CPUs ...
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[  239.622549] ACPI: Low-level resume complete
[  239.626867] PM: Restoring platform NVS memory
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[  239.779884] Enabling non-boot CPUs ...
[  239.783766] installing Xen timer for CPU 1
[  239.788000] cpu 1 spinlock event irq 279
[  239.793240] CPU1 is up
[  239.795785] installing Xen timer for CPU 2
[  239.799949] cpu 2 spinlock event irq 285
[  239.805232] CPU2 is up
[  239.807725] installing Xen timer for CPU 3
[  239.811889] cpu 3 spinlock event irq 291
[  239.817245] CPU3 is up
[  239.821657] ACPI: Waking up from system sleep state S3
[  239.827255] i915 0000:00:02.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  239.836071] i915 0000:00:02.0: restoring config space at offset 0x1 (was 0x900007, writing 0x900407)
[  239.845590] pci 0000:00:14.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  239.854418] pci 0000:00:14.0: restoring config space at offset 0x4 (was 0x4, writing 0xb02b0004)
[  239.863527] pci 0000:00:14.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
[  239.873139] pci 0000:00:16.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  239.881986] pci 0000:00:16.0: restoring config space at offset 0x4 (was 0xfedb0004, writing 0xb02a0004)
[  239.891760] serial 0000:00:16.3: restoring config space at offset 0xf (was 0x200, writing 0x20a)
[  239.900866] serial 0000:00:16.3: restoring config space at offset 0x5 (was 0x0, writing 0xb0290000)
[  239.910237] serial 0000:00:16.3: restoring config space at offset 0x4 (was 0x1, writing 0x30e1)
[  239.919282] serial 0000:00:16.3: restoring config space at offset 0x1 (was 0xb00000, writing 0xb00007)
[  239.928987] pci 0000:00:19.0: restoring config space at offset 0xf (was 0x100, writing 0x105)
[  239.937840] pci 0000:00:19.0: restoring config space at offset 0x1 (was 0x100002, writing 0x100003)
[  239.947249] pci 0000:00:1a.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  239.956085] pci 0000:00:1a.0: restoring config space at offset 0x4 (was 0x0, writing 0xb0270000)
[  239.965193] pci 0000:00:1a.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
[  239.974815] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0xf (was 0x100, writing 0x103)
[  239.984545] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x4 (was 0x4, writing 0xb0260004)
[  239.994541] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
[  240.004030] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100002)
[  240.014381] pcieport 0000:00:1c.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  240.023659] pcieport 0000:00:1c.0: restoring config space at offset 0x3 (was 0x810000, writing 0x810010)
[  240.033474] pcieport 0000:00:1c.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100007)
[  240.043387] pcieport 0000:00:1c.6: restoring config space at offset 0xf (was 0x300, writing 0x304)
[  240.052653] pcieport 0000:00:1c.6: restoring config space at offset 0x3 (was 0x810000, writing 0x810010)
[  240.062471] pcieport 0000:00:1c.6: restoring config space at offset 0x1 (was 0x100000, writing 0x100007)
[  240.072385] pcieport 0000:00:1c.7: restoring config space at offset 0xf (was 0x400, writing 0x40a)
[  240.081652] pcieport 0000:00:1c.7: restoring config space at offset 0x3 (was 0x810000, writing 0x810010)
[  240.091533] pci 0000:00:1d.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  240.100352] pci 0000:00:1d.0: restoring config space at offset 0x4 (was 0x0, writing 0xb0250000)
[  240.109458] pci 0000:00:1d.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
[  240.119260] ahci 0000:00:1f.2: restoring config space at offset 0x1 (was 0x2b00007, writing 0x2b00407)
[  240.128880] pci 0000:00:1f.3: restoring config space at offset 0x1 (was 0x2800001, writing 0x2800003)
[  240.138401] pci 0000:02:00.0: restoring config space at offset 0xf (was 0x100, writing 0x104)
[  240.147250] pci 0000:02:00.0: restoring config space at offset 0x4 (was 0x4, writing 0xb0100004)
[  240.156333] pci 0000:02:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
[  240.164926] pci 0000:02:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100002)
[  240.174426] pci 0000:03:00.0: restoring config space at offset 0x9 (was 0x10001, writing 0x1fff1)
[  240.183541] pci 0000:03:00.0: restoring config space at offset 0x7 (was 0x22a00101, writing 0x2a001f1)
[  240.193214] pci 0000:03:00.0: restoring config space at offset 0x3 (was 0x10000, writing 0x10010)
[  240.202485] pci 0000:04:00.0: restoring config space at offset 0xf (was 0x4020100, writing 0x402010a)
[  240.212030] pci 0000:04:00.0: restoring config space at offset 0x5 (was 0x0, writing 0xb0000000)
[  240.221129] pci 0000:04:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x2010)
[  240.229925] serial 0000:05:01.0: restoring config space at offset 0xf (was 0x1ff, writing 0x103)
[  240.239036] serial 0000:05:01.0: restoring config space at offset 0x9 (was 0x1, writing 0x2001)
[  240.248062] serial 0000:05:01.0: restoring config space at offset 0x8 (was 0x1, writing 0x2011)
[  240.257100] serial 0000:05:01.0: restoring config space at offset 0x7 (was 0x1, writing 0x2021)
[  240.266140] serial 0000:05:01.0: restoring config space at offset 0x6 (was 0x1, writing 0x2031)
[  240.275179] serial 0000:05:01.0: restoring config space at offset 0x5 (was 0x1, writing 0x2041)
[  240.284217] serial 0000:05:01.0: restoring config space at offset 0x3 (was 0x8, writing 0x2010)
[  240.293334] PM: early resume of devices complete after 466.166 msecs
[  240.299936] i915 0000:00:02.0: setting latency timer to 64
[  240.299943] xen: registering gsi 22 triggering 0 polarity 1
[  240.299943] Already setup the GSI :22
[  240.299943] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
[  240.299958] pci 0000:00:1e.0: setting latency timer to 64
[  240.299968] snd_hda_intel 0000:00:1b.0: setting latency timer to 64
[  240.299981] ahci 0000:00:1f.2: setting latency timer to 64
[  240.300001] pci 0000:03:00.0: setting latency timer to 64
[  240.300089] sd 0:0:0:0: [sda] Starting disk
[  240.645200] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  240.705202] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[  240.873380] PM: resume of drv:i915 dev:0000:00:02.0 complete after 573.456 msecs
[  245.099672] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
[  245.645194] ata1.00: qc timeout (cmd 0xec)
[  245.665194] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  245.671377] ata1.00: revalidation failed (errno=-5)
[  245.705197] ata3.00: qc timeout (cmd 0xa1)
[  245.709323] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  245.715678] ata3.00: revalidation failed (errno=-5)
[  246.045204] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  246.065200] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[  256.045195] ata1.00: qc timeout (cmd 0xec)
[  256.065170] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  256.071359] ata1.00: revalidation failed (errno=-5)
[  256.071364] ata1: limiting SATA link speed to 3.0 Gbps
[  256.071373] ata3.00: qc timeout (cmd 0xa1)
[  256.071385] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  256.071388] ata3.00: revalidation failed (errno=-5)
[  256.071391] ata3: limiting SATA link speed to 1.5 Gbps
[  256.415200] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[  256.435202] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  286.415192] ata3.00: qc timeout (cmd 0xa1)
[  286.419324] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  286.425683] ata3.00: revalidation failed (errno=-5)
[  286.430775] ata3.00: disabled
[  286.435190] ata1.00: qc timeout (cmd 0xec)
[  286.445197] ata3: hard resetting link
[  286.455191] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  286.461378] ata1.00: revalidation failed (errno=-5)
[  286.466485] ata1.00: disabled
[  286.505196] ata1: hard resetting link
[  286.795199] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[  286.815193] ata3: EH complete
[  286.855201] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  286.895193] ata1: EH complete
[  286.898189] sd 0:0:0:0: [sda] START_STOP FAILED
[  286.902900] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  286.910888] pm_op(): scsi_bus_resume_common+0x0/0x60 returns 262144
[  286.917408] sd 0:0:0:0: [sda] Unhandled error code
[  286.922408] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  286.930374] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 19 0c b0 00 00 08 00
[  286.937623] end_request: I/O error, dev sda, sector 35196080
[  286.943533] Buffer I/O error on device dm-5, logical block 8038
[  286.949709] lost page write due to I/O error on dm-5
[  286.954904] sd 0:0:0:0: [sda] Unhandled error code
[  286.959911] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  286.967876] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 20 18 30 00 00 10 00
[  286.975121] end_request: I/O error, dev sda, sector 35657776
[  286.981035] sd 0:0:0:0: [sda] Unhandled error code
[  286.986044] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  286.994004] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 24 3c 20 00 00 08 00
[  287.001253] end_request: I/O error, dev sda, sector 35929120
[  287.007163] Buffer I/O error on device dm-5, logical block 99668
[  287.013425] lost page write due to I/O error on dm-5
[  287.018623] sd 0:0:0:0: [sda] Unhandled error code
[  287.018623] JBD2: Detected IO errors while flushing file data on dm-5-8
[  287.018623] Aborting journal on device dm-5-8.
[  287.018640] sd 0:0:0:0: [sda] Unhandled error code
[  287.018641] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.018644] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 20 11 80 00 00 08 00
[  287.018648] end_request: I/O error, dev sda, sector 35656064
[  287.018651] Buffer I/O error on device dm-5, logical block 65536
[  287.018652] lost page write due to I/O error on dm-5
[  287.018680] JBD2: I/O error detected when updating journal superblock for dm-5-8.
[  287.080551] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.088513] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 0e ac b2 98 00 00 08 00
[  287.095759] end_request: I/O error, dev sda, sector 246198936
[  287.101758] Buffer I/O error on device dm-6, logical block 26252323
[  287.108292] lost page write due to I/O error on dm-6
[  287.113485] sd 0:0:0:0: [sda] Unhandled error code
[  287.113491] JBD2: Detected IO errors while flushing file data on dm-6-8
[  287.125386] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.133348] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 0f 2c 19 f8 00 00 10 00
[  287.140597] end_request: I/O error, dev sda, sector 254548472
[  287.146604] PM: resume of drv:sd dev:0:0:0:0 complete after 46846.516 msecs
[  287.146608] Aborting journal on device dm-6-8.
[  287.146624] sd 0:0:0:0: [sda] Unhandled error code
[  287.146625] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.146627] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 0f 2c 11 80 00 00 08 00
[  287.146632] end_request: I/O error, dev sda, sector 254546304
[  287.146635] Buffer I/O error on device dm-6, logical block 27295744
[  287.146636] lost page write due to I/O error on dm-6
[  287.146648] JBD2: I/O error detected when updating journal superblock for dm-6-8.
[  287.204243] PM: resume of drv:scsi_device dev:0:0:0:0 complete after 46904.139 msecs
[  287.204251] PM: resume of drv:scsi_disk dev:0:0:0:0 complete after 46323.300 msecs
[  287.204257] PM: Device 0:0:0:0 failed to resume async: error 262144
[  287.226731] PM: resume of devices complete after 46926.837 msecs
[  287.233023] PM: resume devices took 46.930 seconds
[  287.237965] ------------[ cut here ]------------
[  287.242791] WARNING: at /data/home/bguthro/dev/orc-newdev.git/linux-3.2/kernel/power/suspend_test.c:53 suspend_test_finish+0x86/0x90()
[  287.255327] Hardware name: 2012 Client Platform
[  287.260061] Component: resume devices, time: 46930
[  287.265069] Modules linked in: ipt_MASQUERADE iscsi_scst(O) scst_vdisk(O) crc32c xt_tcpudp libcrc32c xt_state xt_multiport scst_cdrom(O) iptable_filter iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables scst(O) x_tables bridge stp llc microcode iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi arc4 snd_hda_codec_realtek snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_timer snd shpchp soundcore snd_page_alloc zram(C) usbhid hid i915 drm_kms_helper drm ahci libahci i2c_algo_bit video intel_agp intel_gtt [last unloaded: tpm_bios]
[  287.315818] Pid: 3914, comm: pm-suspend Tainted: G         C O 3.2.23-orc #1
[  287.323151] Call Trace:
[  287.325772]  [<ffffffff810642af>] warn_slowpath_common+0x7f/0xc0
[  287.332013]  [<ffffffff810643a6>] warn_slowpath_fmt+0x46/0x50
[  287.338013]  [<ffffffff810a64d6>] suspend_test_finish+0x86/0x90
[  287.344184]  [<ffffffff810a608e>] suspend_devices_and_enter+0x15e/0x310
[  287.351099]  [<ffffffff810a63a7>] enter_state+0x167/0x190
[  287.351102]  [<ffffffff810a4d27>] state_store+0xb7/0x130
[  287.351104]  [<ffffffff812b54df>] kobj_attr_store+0xf/0x30
[  287.351104]  [<ffffffff811d382f>] sysfs_write_file+0xef/0x170
[  287.351104]  [<ffffffff811668d3>] vfs_write+0xb3/0x180
[  287.351104]  [<ffffffff81166bfa>] sys_write+0x4a/0x90
[  287.351105]  [<ffffffff81582142>] system_call_fastpath+0x16/0x1b
[  287.351107] ---[ end trace 028d291ad63c1121 ]---
[  287.351125] PM: Finishing wakeup.
[  287.351126] Restarting tasks ... done.
[  287.351550] video LNXVIDEO:00: Restoring backlight state
[  287.351738] sd 0:0:0:0: [sda] Unhandled error code
[  287.351740] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.351742] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 18 11 80 00 00 08 00
[  287.351747] end_request: I/O error, dev sda, sector 35131776
[  287.351750] Buffer I/O error on device dm-5, logical block 0
[  287.351751] lost page write due to I/O error on dm-5
[  287.351866] EXT4-fs error (device dm-5): ext4_journal_start_sb:327: Detected aborted journal
[  287.351869] EXT4-fs (dm-5): Remounting filesystem read-only
[  287.351871] EXT4-fs (dm-5): previous I/O error to superblock detected
[  287.351886] sd 0:0:0:0: [sda] Unhandled error code
[  287.351887] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.351889] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 18 11 80 00 00 08 00
[  287.351893] end_request: I/O error, dev sda, sector 35131776
[  287.351895] Buffer I/O error on device dm-5, logical block 0
[  287.351897] lost page write due to I/O error on dm-5
[  287.353344] sd 0:0:0:0: [sda] Unhandled error code
[  287.353346] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.353348] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 28 11 80 00 00 08 00
[  287.353353] end_request: I/O error, dev sda, sector 36180352
[  287.353355] Buffer I/O error on device dm-6, logical block 0
[  287.353356] lost page write due to I/O error on dm-6
[  287.353365] EXT4-fs error (device dm-6): ext4_journal_start_sb:327: Detected aborted journal
[  287.353366] EXT4-fs (dm-6): Remounting filesystem read-only
[  287.353368] EXT4-fs (dm-6): previous I/O error to superblock detected
[  287.353378] sd 0:0:0:0: [sda] Unhandled error code
[  287.353379] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.353381] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 28 11 80 00 00 08 00
[  287.353385] end_request: I/O error, dev sda, sector 36180352
[  287.353386] Buffer I/O error on device dm-6, logical block 0
[  287.353387] lost page write due to I/O error on dm-6
[  287.373201] sd 0:0:0:0: [sda] Unhandled error code
[  287.373201] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.373201] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 09 24 a0 00 00 08 00
[  287.373204] end_request: I/O error, dev sda, sector 599200
[  287.373759] sd 0:0:0:0: [sda] Unhandled error code
[  287.373759] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.373759] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 08 11 80 00 00 08 00
[  287.373763] end_request: I/O error, dev sda, sector 528768
[  287.373766] Buffer I/O error on device dm-2, logical block 0
[  287.373767] lost page write due to I/O error on dm-2
[  287.373879] EXT4-fs error (device dm-2): ext4_find_entry:935: inode #373: comm cron: reading directory lblock 0
[  287.373879] Aborting journal on device dm-2-8.
[  287.373986] sd 0:0:0:0: [sda] Unhandled error code
[  287.373986] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.373986] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 8c 11 80 00 00 08 00
[  287.373986] end_request: I/O error, dev sda, sector 9179520
[  287.374018] JBD2: I/O error detected when updating journal superblock for dm-2-8.
[  287.374018] EXT4-fs (dm-2): Remounting filesystem read-only
[  287.374324] sd 0:0:0:0: [sda] Unhandled error code
[  287.374324] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.374324] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 09 24 a0 00 00 08 00
[  287.374324] end_request: I/O error, dev sda, sector 599200
[  287.374361] EXT4-fs (dm-2): previous I/O error to superblock detected
[  287.374425] sd 0:0:0:0: [sda] Unhandled error code
[  287.374425] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  287.374425] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 08 11 80 00 00 08 00
[  287.374425] end_request: I/O error, dev sda, sector 528768
[  287.374457] EXT4-fs error (device dm-2): ext4_find_entry:935: inode #373: comm cron: reading directory lblock 0
[  288.233586] init: anacron main process (4116) terminated with status 1
[  288.255807] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[  288.262496] xen: registering gsi 16 triggering 0 polarity 1
[  288.268297] Already setup the GSI :16
[  288.272112] ehci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[  288.279675] ehci_hcd 0000:00:1a.0: setting latency timer to 64
[  288.285647] ehci_hcd 0000:00:1a.0: EHCI Host Controller
[  288.291179] ehci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 1
[  288.298854] ehci_hcd 0000:00:1a.0: debug port 2
[  288.307407] ehci_hcd 0000:00:1a.0: cache line size of 64 is not supported
[  288.314356] ehci_hcd 0000:00:1a.0: irq 16, io mem 0xb0270000
[  288.335191] ehci_hcd 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[  288.341208] hub 1-0:1.0: USB hub found
[  288.345070] hub 1-0:1.0: 3 ports detected
[  288.385233] xen: registering gsi 23 triggering 0 polarity 1
[  288.390887] Already setup the GSI :23
[  288.394732] ehci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
[  288.402213] ehci_hcd 0000:00:1d.0: setting latency timer to 64
[  288.408278] ehci_hcd 0000:00:1d.0: EHCI Host Controller
[  288.413790] ehci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
[  288.421470] ehci_hcd 0000:00:1d.0: debug port 2
[  288.430050] ehci_hcd 0000:00:1d.0: cache line size of 64 is not supported
[  288.437013] ehci_hcd 0000:00:1d.0: irq 23, io mem 0xb0250000
[  288.465192] ehci_hcd 0000:00:1d.0: USB 2.0 started, EHCI 1.00
[  288.471185] hub 2-0:1.0: USB hub found
[  288.475025] hub 2-0:1.0: 3 ports detected
[  288.535229] xen: registering gsi 16 triggering 0 polarity 1
[  288.540884] Already setup the GSI :16
[  288.544730] xhci_hcd 0000:00:14.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[  288.552269] xhci_hcd 0000:00:14.0: setting latency timer to 64
[  288.558256] xhci_hcd 0000:00:14.0: xHCI Host Controller
[  288.563790] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 3
[  288.571564] xhci_hcd 0000:00:14.0: cache line size of 64 is not supported
[  288.578479] xhci_hcd 0000:00:14.0: irq 16, io mem 0xb02b0000
[  288.584643] xHCI xhci_add_endpoint called for root hub
[  288.589856] xHCI xhci_check_bandwidth called for root hub
[  288.595513] hub 3-0:1.0: USB hub found
[  288.599424] hub 3-0:1.0: 4 ports detected
[  288.635211] xhci_hcd 0000:00:14.0: xHCI Host Controller
[  288.640583] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 4
[  288.648343] xHCI xhci_add_endpoint called for root hub
[  288.653569] xHCI xhci_check_bandwidth called for root hub
[  288.659253] hub 4-0:1.0: USB hub found
[  288.663150] hub 4-0:1.0: 4 ports detected
[  288.665198] usb 1-1: new high-speed USB device number 2 using ehci_hcd
[  288.802355] cfg80211: Calling CRDA to update world regulatory domain
[  288.816212] hub 1-1:1.0: USB hub found
[  288.819802] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
[  288.819805] Copyright(c) 2003-2011 Intel Corporation
[  288.819857] xen: registering gsi 18 triggering 0 polarity 1
[  288.819862] Already setup the GSI :18
[  288.819866] iwlwifi 0000:02:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
[  288.819887] iwlwifi 0000:02:00.0: setting latency timer to 64
[  288.819955] iwlwifi 0000:02:00.0: pci_resource_len = 0x00002000
[  288.819957] iwlwifi 0000:02:00.0: pci_resource_base = ffffc9001633c000
[  288.819960] iwlwifi 0000:02:00.0: HW Revision ID = 0x34
[  288.820254] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEBUG disabled
[  288.820256] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[  288.820257] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[  288.820258] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE disabled
[  288.820260] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_P2P disabled
[  288.820265] iwlwifi 0000:02:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[  288.820394] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S
[  288.840147] iwlwifi 0000:02:00.0: device EEPROM VER=0x715, CALIB=0x6
[  288.840149] iwlwifi 0000:02:00.0: Device SKU: 0x1F0
[  288.840150] iwlwifi 0000:02:00.0: Valid Tx ant: 0x3, Valid Rx ant: 0x3
[  288.840162] iwlwifi 0000:02:00.0: Tunable channels: 13 802.11bg, 24 802.11a channels
[  288.947397] hub 1-1:1.0: 6 ports detected
[  288.953700] iwlwifi 0000:02:00.0: loaded firmware version 17.168.5.3 build 42301
[  288.954317] e1000e: Intel(R) PRO/1000 Network Driver - 1.5.1-k
[  288.954319] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
[  288.954343] xen: registering gsi 20 triggering 0 polarity 1
[  288.954346] Already setup the GSI :20
[  288.954349] e1000e 0000:00:19.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20
[  288.954363] e1000e 0000:00:19.0: setting latency timer to 64
[  288.996474] Registered led device: phy0-led
[  289.000719] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
[  289.011713] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs'
[  289.065203] usb 2-1: new high-speed USB device number 2 using ehci_hcd
[  289.215933] hub 2-1:1.0: USB hub found
[  289.219737] hub 2-1:1.0: 8 ports detected
[  289.221001] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:13:20:f9:de:24
[  289.221003] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
[  289.221082] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: FFFFFF-0FF
[  289.239638] device eth0 entered promiscuous mode
[  289.545382] usb 2-1.5: new low-speed USB device number 3 using ehci_hcd
[  289.661039] input: USB Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.5/2-1.5:1.0/input/input13
[  289.671792] generic-usb 0003:04B3:310C.0005: input,hidraw0: USB HID v1.11 Mouse [USB Optical Mouse] on usb-0000:00:1d.0-1.5/input0
[  289.697535] sd 0:0:0:0: [sda] Unhandled error code
[  289.702385] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  289.710344] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 00 75 88 00 00 02 00
[  289.717594] end_request: I/O error, dev sda, sector 30088
[  289.723242] Buffer I/O error on device dm-0, logical block 12804
[  289.729541] EXT4-fs warning (device dm-0): ext4_end_bio:251: I/O error writing to inode 22 (offset 0 size 1024 starting block 12804)
[  289.750709] sd 0:0:0:0: [sda] Unhandled error code
[  289.755230] usb 2-1.6: new low-speed USB device number 4 using ehci_hcd
[  289.762453] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  289.770412] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 1f 07 80 00 00 20 00
[  289.777568] end_request: I/O error, dev sda, sector 2033536
[  289.783403] sd 0:0:0:0: [sda] Unhandled error code
[  289.788404] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  289.796352] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 1f 07 80 00 00 08 00
[  289.803506] end_request: I/O error, dev sda, sector 2033536
[  289.809907] init: Failed to write to log file /var/log/upstart/xserver.log
[  289.831226] sd 0:0:0:0: [sda] Unhandled error code
[  289.836079] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  289.844036] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 00 81 82 00 00 02 00
[  289.851284] end_request: I/O error, dev sda, sector 33154
[  289.856930] Buffer I/O error on device dm-0, logical block 14337
[  289.863194] EXT4-fs warning (device dm-0): ext4_end_bio:251: I/O error writing to inode 22 (offset 0 size 1024 starting block 14337)
[  289.863303] sd 0:0:0:0: [sda] Unhandled error code
[  289.863305] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  289.863307] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 00 85 84 00 00 02 00
[  289.863312] end_request: I/O error, dev sda, sector 34180
[  289.863315] Buffer I/O error on device dm-0, logical block 14850
[  289.863318] EXT4-fs warning (device dm-0): ext4_end_bio:251: I/O error writing to inode 22 (offset 0 size 1024 starting block 14850)
[  289.923266] sd 0:0:0:0: [sda] Unhandled error code
[  289.925419] input: LITE-ON Technology USB NetVista Full Width Keyboard. as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.6/2-1.6:1.0/input/input14
[  289.925510] generic-usb 0003:04B3:3025.0006: input,hidraw1: USB HID v1.10 Keyboard [LITE-ON Technology USB NetVista Full Width Keyboard.] on usb-0000:00:1d.0-1.6/input0
[  289.957299] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  289.965258] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 1f 07 80 00 00 08 00
[  289.972412] end_request: I/O error, dev sda, sector 2033536
[  289.987771] sd 0:0:0:0: [sda] Unhandled error code
[  289.992619] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  290.000582] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 1f 07 80 00 00 08 00
[  290.007741] end_request: I/O error, dev sda, sector 2033536
[  290.015599] sd 0:0:0:0: [sda] Unhandled error code
[  290.020443] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  290.028418] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 00 85 86 00 00 02 00
[  290.035660] end_request: I/O error, dev sda, sector 34182
[  290.041300] Buffer I/O error on device dm-0, logical block 14851
[  290.047576] EXT4-fs warning (device dm-0): ext4_end_bio:251: I/O error writing to inode 22 (offset 0 size 1024 starting block 14851)
[  290.068958] sd 0:0:0:0: [sda] Unhandled error code
[  290.073802] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  290.081767] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 1f 07 80 00 00 08 00
[  290.088926] end_request: I/O error, dev sda, sector 2033536
[  290.109131] sd 0:0:0:0: [sda] Unhandled error code
[  290.113977] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  290.121941] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 00 81 84 00 00 02 00
[  290.129190] end_request: I/O error, dev sda, sector 33156
[  290.134830] Buffer I/O error on device dm-0, logical block 14338
[  290.141105] EXT4-fs warning (device dm-0): ext4_end_bio:251: I/O error writing to inode 22 (offset 0 size 1024 starting block 14338)
[  290.156254] sd 0:0:0:0: [sda] Unhandled error code
[  290.161098] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  290.169070] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 00 1f 07 80 00 00 08 00
[  290.176222] end_request: I/O error, dev sda, sector 2033536
[  292.500975] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  292.508797] br-eth0: port 1(eth0) entering forwarding state
[  292.514513] br-eth0: port 1(eth0) entering forwarding state
[  294.985224] JBD2: Detected IO errors while flushing file data on dm-0-8
[  294.991966] sd 0:0:0:0: [sda] Unhandled error code
[  294.996982] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  295.004925] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 01 91 be 00 00 0a 00
[  295.012176] end_request: I/O error, dev sda, sector 102846
[  295.017953] Aborting journal on device dm-0-8.
[  295.022594] sd 0:0:0:0: [sda] Unhandled error code
[  295.027594] sd 0:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  295.035556] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 00 01 91 82 00 00 02 00
[  295.042796] end_request: I/O error, dev sda, sector 102786
[  295.048532] quiet_error: 2 callbacks suppressed
[  295.053268] Buffer I/O error on device dm-0, logical block 49153
[  295.059535] lost page write due to I/O error on dm-0
[  295.064730] JBD2: I/O error detected when updating journal superblock for dm-0-8.


[-- Attachment #4: xen-dump-good.txt --]
[-- Type: text/plain, Size: 66714 bytes --]

(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
(XEN) '*' pressed -> firing all diagnostic keyhandlers
(XEN) [d: dump registers]
(XEN) 'd' pressed -> dumping registers
(XEN) 
(XEN) *** Dumping CPU0 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: ffff82c4803025a0   rbx: ffff82c480302480   rcx: 0000000000000003
(XEN) rdx: 0000000000000000   rsi: ffff82c4802e25c8   rdi: ffff82c480271800
(XEN) rbp: ffff82c4802b7e30   rsp: ffff82c4802b7e30   r8:  0000000000000001
(XEN) r9:  ffff83014899aea8   r10: 0000002864230fdc   r11: 0000000000000246
(XEN) r12: ffff82c480271800   r13: ffff82c48013d757   r14: 0000002863f5d07a
(XEN) r15: ffff82c480302308   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013d96c000   cr2: ffff88002568eb98
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802b7e30:
(XEN)    ffff82c4802b7e60 ffff82c48012817f 0000000000000002 ffff82c4802e25c8
(XEN)    ffff82c480302480 ffff830148992d40 ffff82c4802b7eb0 ffff82c480128281
(XEN)    ffff82c4802b7f18 0000000000000246 0000002864230fdc ffff82c4802d8880
(XEN)    ffff82c4802d8880 ffff82c4802b7f18 ffffffffffffffff ffff82c480302308
(XEN)    ffff82c4802b7ee0 ffff82c480125405 ffff82c4802b7f18 ffff82c4802b7f18
(XEN)    00000000ffffffff 0000000000000002 ffff82c4802b7ef0 ffff82c480125484
(XEN)    ffff82c4802b7f10 ffff82c480158c05 ffff8300aa584000 ffff8300aa0fc000
(XEN)    ffff82c4802b7da8 0000000000000000 ffffffffffffffff 0000000000000000
(XEN)    ffffffff81aafda0 ffffffff81a01ee8 ffffffff81a01fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffffffff81a01ed0 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff8300aa584000
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN)    [<ffff82c48012817f>] execute_timer+0x4e/0x6c
(XEN)    [<ffff82c480128281>] timer_softirq_action+0xe4/0x21a
(XEN)    [<ffff82c480125405>] __do_softirq+0x95/0xa0
(XEN)    [<ffff82c480125484>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480158c05>] idle_loop+0x6f/0x71
(XEN)    
(XEN) *** Dumping CPU1 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83013e67ff18   rcx: 0000000000000001
(XEN) rdx: 0000003ccced1d80   rsi: 000000005b340560   rdi: 0000000000000001
(XEN) rbp: ffff83013e67fef0   rsp: ffff83013e67fef0   r8:  0000002e70ba3024
(XEN) r9:  ffff8300a83fc060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83013e67ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83014d1d4088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014cdab000   cr2: ffff880003198c80
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83013e67fef0:
(XEN)    ffff83013e67ff10 ffff82c480158bf8 ffff8300aa0fe000 ffff8300a83fc000
(XEN)    ffff83013e67fda8 0000000000000000 0000000000000000 0000000000000002
(XEN)    ffffffff81aafda0 ffff88002786fee0 ffff88002786ffd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786fec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000001 ffff8300aa0fe000
(XEN)    0000003ccced1d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU2 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014893ff18   rcx: 0000000000000002
(XEN) rdx: 0000003ccc6bbd80   rsi: 000000005bf98210   rdi: 0000000000000002
(XEN) rbp: ffff83014893fef0   rsp: ffff83014893fef0   r8:  0000002e93b88874
(XEN) r9:  ffff8300aa583060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014893ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83014c9be088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013dd46000   cr2: ffff880025e94050
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014893fef0:
(XEN)    ffff83014893ff10 ffff82c480158bf8 ffff8300a85c7000 ffff8300aa583000
(XEN)    ffff83014893fda8 0000000000000000 0000000000000000 0000000000000003
(XEN)    ffffffff81aafda0 ffff880027881ee0 ffff880027881fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff880027881ec8 000000000000e02b 3ec2e3268092e167 b97cec5b9e68dc93
(XEN)    cc98079249fb73b5 e1a35de4fd161a6f e1e35d6400000002 ffff8300a85c7000
(XEN)    0000003ccc6bbd80 e24d5a38f2ae051e
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU3 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014892ff18   rcx: 0000000000000003
(XEN) rdx: 0000003cc8685d80   rsi: 000000005cbf46a8   rdi: 0000000000000003
(XEN) rbp: ffff83014892fef0   rsp: ffff83014892fef0   r8:  0000002eb694f1c8
(XEN) r9:  ffff8300a83fd060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014892ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff830148988088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014cdb2000   cr2: ffff880002bd1150
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014892fef0:
(XEN)    ffff83014892ff10 ffff82c480158bf8 ffff8300a83fe000 ffff8300a83fd000
(XEN)    ffff83014892fda8 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffffffff81aafda0 ffff88002786dee0 ffff88002786dfd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786dec8 000000000000e02b 3ec2e3268092e167 b97cec5b9e68dc93
(XEN)    cc98079249fb73b5 e1a35de4fd161a6f e1e35d6400000003 ffff8300a83fe000
(XEN)    0000003cc8685d80 e24d5a38f2ae051e
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) [0: dump Dom0 registers]
(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffffffff81a01fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffffffff81a01ee8   rsp: ffffffff81a01ed0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000000   r14: ffffffffffffffff
(XEN) r15: 0000000000000000   cr0: 0000000000000008   cr4: 0000000000002660
(XEN) cr3: 000000013d96c000   cr2: 00007fd21e973000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffff81a01ed0:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffffffff81a01f18
(XEN)    ffffffff8101c663 ffffffff81a01fd8 ffffffff81aafda0 ffff88002dee1a00
(XEN)    ffffffffffffffff ffffffff81a01f48 ffffffff81013236 ffffffffffffffff
(XEN)    8e6a1de960e75dc3 0000000000000000 ffffffff81b15160 ffffffff81a01f58
(XEN)    ffffffff81554f5e ffffffff81a01f98 ffffffff81accbf5 ffffffff81b15160
(XEN)    d0a0f0752ad9a008 0000000000cdf000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81a01fb8 ffffffff81acc34b ffffffff7fffffff
(XEN)    ffffffff84b25000 ffffffff81a01ff8 ffffffff81acfecc 0000000000000000
(XEN)    0000000100000000 00100800000306a4 1fc98b75e3b82283 0000000000000000
(XEN)    0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786dfd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786dee0   rsp: ffff88002786dec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000001   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000014cdb2000   cr2: 00007fd9aa42a000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786dec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786df10
(XEN)    ffffffff8101c663 ffff88002786dfd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786df40 ffffffff81013236 ffffffff8100ade9
(XEN)    a1021ef739fb27d3 0000000000000000 0000000000000000 ffff88002786df50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786df58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786ffd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786fee0   rsp: ffff88002786fec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000002   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000014cdab000   cr2: 0000000001b90048
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786fec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786ff10
(XEN)    ffffffff8101c663 ffff88002786ffd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786ff40 ffffffff81013236 ffffffff8100ade9
(XEN)    57902e18ee3212f9 0000000000000000 0000000000000000 ffff88002786ff50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786ff58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff880027881fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff880027881ee0   rsp: ffff880027881ec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000003   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013dd46000   cr2: 00007fd20400a5a0
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880027881ec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff880027881f10
(XEN)    ffffffff8101c663 ffff880027881fd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff880027881f40 ffffffff81013236 ffffffff8100ade9
(XEN)    67b1044cc4b7c391 0000000000000000 0000000000000000 ffff880027881f50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff880027881f58 0000000000000000
(XEN) [H: dump heap info]
(XEN) 'H' pressed -> dumping heap info (now-0x28:ADC181D2)
(XEN) heap[node=0][zone=0] -> 0 pages
(XEN) heap[node=0][zone=1] -> 0 pages
(XEN) heap[node=0][zone=2] -> 0 pages
(XEN) heap[node=0][zone=3] -> 0 pages
(XEN) heap[node=0][zone=4] -> 0 pages
(XEN) heap[node=0][zone=5] -> 0 pages
(XEN) heap[node=0][zone=6] -> 0 pages
(XEN) heap[node=0][zone=7] -> 0 pages
(XEN) heap[node=0][zone=8] -> 0 pages
(XEN) heap[node=0][zone=9] -> 0 pages
(XEN) heap[node=0][zone=10] -> 0 pages
(XEN) heap[node=0][zone=11] -> 0 pages
(XEN) heap[node=0][zone=12] -> 0 pages
(XEN) heap[node=0][zone=13] -> 0 pages
(XEN) heap[node=0][zone=14] -> 16128 pages
(XEN) heap[node=0][zone=15] -> 32768 pages
(XEN) heap[node=0][zone=16] -> 65536 pages
(XEN) heap[node=0][zone=17] -> 130559 pages
(XEN) heap[node=0][zone=18] -> 262143 pages
(XEN) heap[node=0][zone=19] -> 172797 pages
(XEN) heap[node=0][zone=20] -> 134265 pages
(XEN) heap[node=0][zone=21] -> 0 pages
(XEN) heap[node=0][zone=22] -> 0 pages
(XEN) heap[node=0][zone=23] -> 0 pages
(XEN) heap[node=0][zone=24] -> 0 pages
(XEN) heap[node=0][zone=25] -> 0 pages
(XEN) heap[node=0][zone=26] -> 0 pages
(XEN) heap[node=0][zone=27] -> 0 pages
(XEN) heap[node=0][zone=28] -> 0 pages
(XEN) heap[node=0][zone=29] -> 0 pages
(XEN) heap[node=0][zone=30] -> 0 pages
(XEN) heap[node=0][zone=31] -> 0 pages
(XEN) heap[node=0][zone=32] -> 0 pages
(XEN) heap[node=0][zone=33] -> 0 pages
(XEN) heap[node=0][zone=34] -> 0 pages
(XEN) heap[node=0][zone=35] -> 0 pages
(XEN) heap[node=0][zone=36] -> 0 pages
(XEN) heap[node=0][zone=37] -> 0 pages
(XEN) heap[node=0][zone=38] -> 0 pages
(XEN) heap[node=0][zone=39] -> 0 pages
(XEN) [I: dump HVM irq info]
(XEN) 'I' pressed -> dumping HVM irq info
(XEN) [M: dump MSI state]
(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN) [Q: dump PCI devices]
(XEN) ==== PCI devices ====
(XEN) ==== segment 0000 ====
(XEN) 0000:05:01.0 - dom 0   - MSIs < >
(XEN) 0000:04:00.0 - dom 0   - MSIs < >
(XEN) 0000:03:00.0 - dom 0   - MSIs < >
(XEN) 0000:02:00.0 - dom 0   - MSIs < 30 >
(XEN) 0000:00:1f.3 - dom 0   - MSIs < >
(XEN) 0000:00:1f.2 - dom 0   - MSIs < 27 >
(XEN) 0000:00:1f.0 - dom 0   - MSIs < >
(XEN) 0000:00:1e.0 - dom 0   - MSIs < >
(XEN) 0000:00:1d.0 - dom 0   - MSIs < >
(XEN) 0000:00:1c.7 - dom 0   - MSIs < >
(XEN) 0000:00:1c.6 - dom 0   - MSIs < >
(XEN) 0000:00:1c.0 - dom 0   - MSIs < >
(XEN) 0000:00:1b.0 - dom 0   - MSIs < 26 >
(XEN) 0000:00:1a.0 - dom 0   - MSIs < >
(XEN) 0000:00:19.0 - dom 0   - MSIs < 31 >
(XEN) 0000:00:16.3 - dom 0   - MSIs < >
(XEN) 0000:00:16.0 - dom 0   - MSIs < >
(XEN) 0000:00:14.0 - dom 0   - MSIs < 29 >
(XEN) 0000:00:02.0 - dom 0   - MSIs < 28 >
(XEN) 0000:00:00.0 - dom 0   - MSIs < >
(XEN) [V: dump iommu info]
(XEN) 
(XEN) iommu 0: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  0010 00000001 29    0   1  0  1  1   0 1
(XEN) 
(XEN) iommu 1: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
(XEN)   0001:  1   0  f0f8 00000001 f0    0   1  0  1  1   0 1
(XEN)   0002:  1   0  f0f8 00000001 40    0   1  0  1  1   0 1
(XEN)   0003:  1   0  f0f8 00000001 48    0   1  0  1  1   0 1
(XEN)   0004:  1   0  f0f8 00000001 50    0   1  0  1  1   0 1
(XEN)   0005:  1   0  f0f8 00000001 58    0   1  0  1  1   0 1
(XEN)   0006:  1   0  f0f8 00000001 60    0   1  0  1  1   0 1
(XEN)   0007:  1   0  f0f8 00000001 68    0   1  0  1  1   0 1
(XEN)   0008:  1   0  f0f8 00000001 70    0   1  1  1  1   0 1
(XEN)   0009:  1   0  f0f8 00000001 78    0   1  0  1  1   0 1
(XEN)   000a:  1   0  f0f8 00000001 88    0   1  0  1  1   0 1
(XEN)   000b:  1   0  f0f8 00000001 90    0   1  0  1  1   0 1
(XEN)   000c:  1   0  f0f8 00000001 98    0   1  0  1  1   0 1
(XEN)   000d:  1   0  f0f8 00000001 a0    0   1  0  1  1   0 1
(XEN)   000e:  1   0  f0f8 00000001 a8    0   1  0  1  1   0 1
(XEN)   000f:  1   0  f0f8 00000001 b0    0   1  1  1  1   0 1
(XEN)   0010:  1   0  f0f8 00000001 b8    0   1  1  1  1   0 1
(XEN)   0011:  1   0  f0f8 00000001 c0    0   1  1  1  1   0 1
(XEN)   0012:  1   0  f0f8 00000001 c8    0   1  1  1  1   0 1
(XEN)   0013:  1   0  f0f8 00000001 d0    0   1  1  1  1   0 1
(XEN)   0014:  1   0  00d8 00000001 71    0   1  0  1  1   0 1
(XEN)   0015:  1   0  00fa 00000001 21    0   1  0  1  1   0 1
(XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
(XEN)   0017:  1   0  00a0 00000001 79    0   1  0  1  1   0 1
(XEN)   0018:  1   0  0200 00000001 81    0   1  0  1  1   0 1
(XEN)   0019:  1   0  00c8 00000001 99    0   1  0  1  1   0 1
(XEN) 
(XEN) Redirection table of IOAPIC 0:
(XEN)   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN)    01:  0000   1    0   0   0   0    0    0     38
(XEN)    02:  0001   1    0   0   0   0    0    0     f0
(XEN)    03:  0002   1    0   0   0   0    0    0     40
(XEN)    04:  0003   1    0   0   0   0    0    0     48
(XEN)    05:  0004   1    0   0   0   0    0    0     50
(XEN)    06:  0005   1    0   0   0   0    0    0     58
(XEN)    07:  0006   1    0   0   0   0    0    0     60
(XEN)    08:  0007   1    0   0   0   0    0    0     68
(XEN)    09:  0008   1    0   1   0   0    0    0     70
(XEN)    0a:  0009   1    0   0   0   0    0    0     78
(XEN)    0b:  000a   1    0   0   0   0    0    0     88
(XEN)    0c:  000b   1    0   0   0   0    0    0     90
(XEN)    0d:  000c   1    1   0   0   0    0    0     98
(XEN)    0e:  000d   1    0   0   0   0    0    0     a0
(XEN)    0f:  000e   1    0   0   0   0    0    0     a8
(XEN)    10:  000f   1    0   1   0   1    0    0     b0
(XEN)    12:  0010   1    1   1   0   1    0    0     b8
(XEN)    13:  0011   1    1   1   0   1    0    0     c0
(XEN)    14:  0016   1    1   1   0   1    0    0     31
(XEN)    16:  0013   1    1   1   0   1    0    0     d0
(XEN)    17:  0012   1    0   1   0   1    0    0     c8
(XEN) [a: dump timer queues]
(XEN) Dumping timer queues:
(XEN) CPU00:
(XEN)   ex=   -1680us timer=ffff82c4802e25c8 cb=ffff82c48013d757(ffff82c480271800) ns16550_poll+0x0/0x33
(XEN)   ex=    7319us timer=ffff83014899a1b8 cb=ffff82c480119d72(ffff83014899a190) csched_acct+0x0/0x42a
(XEN)   ex=125136477us timer=ffff82c4802fe280 cb=ffff82c4801807c2(0000000000000000) plt_overflow+0x0/0x131
(XEN)   ex= 9260293us timer=ffff82c480300580 cb=ffff82c4801a8850(0000000000000000) mce_work_fn+0x0/0xa9
(XEN)   ex=    7319us timer=ffff83014899aea8 cb=ffff82c48011aaf0(0000000000000000) csched_tick+0x0/0x314
(XEN) CPU01:
(XEN)   ex=   61380us timer=ffff830136a49a28 cb=ffff82c48011aaf0(0000000000000001) csched_tick+0x0/0x314
(XEN)   ex=  301640us timer=ffff8300a83fc060 cb=ffff82c480121c6b(ffff8300a83fc000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU02:
(XEN)   ex=   82945us timer=ffff83011c9e9f48 cb=ffff82c48011aaf0(0000000000000002) csched_tick+0x0/0x314
(XEN)   ex=  666002us timer=ffff8300aa583060 cb=ffff82c480121c6b(ffff8300aa583000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU03:
(XEN)   ex=  103262us timer=ffff83011c9e9a38 cb=ffff82c48011aaf0(0000000000000003) csched_tick+0x0/0x314
(XEN)   ex=  178930us timer=ffff8300a83fd060 cb=ffff82c480121c6b(ffff8300a83fd000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) [c: dump ACPI Cx structures]
(XEN) 'c' pressed -> printing ACPI Cx structures
(XEN) ==cpu0==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[175474269964]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu1==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[175499060650]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu2==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[175523849933]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu3==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[175548639964]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) [e: dump evtchn info]
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Event channel information for domain 0:
(XEN) Polling vCPUs: {}
(XEN)     port [p/m]
(XEN)        1 [1/0]: s=5 n=0 x=0 v=0
(XEN)        2 [1/1]: s=6 n=0 x=0
(XEN)        3 [1/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0
(XEN)        5 [0/0]: s=5 n=0 x=0 v=1
(XEN)        6 [0/0]: s=6 n=0 x=0
(XEN)        7 [0/0]: s=5 n=1 x=0 v=0
(XEN)        8 [0/1]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=6 n=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=1 x=0 v=1
(XEN)       12 [0/0]: s=6 n=1 x=0
(XEN)       13 [0/0]: s=5 n=2 x=0 v=0
(XEN)       14 [1/1]: s=6 n=2 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=6 n=2 x=0
(XEN)       17 [0/0]: s=5 n=2 x=0 v=1
(XEN)       18 [0/0]: s=6 n=2 x=0
(XEN)       19 [0/0]: s=5 n=3 x=0 v=0
(XEN)       20 [1/1]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=6 n=3 x=0
(XEN)       22 [0/0]: s=6 n=3 x=0
(XEN)       23 [0/0]: s=5 n=3 x=0 v=1
(XEN)       24 [0/0]: s=6 n=3 x=0
(XEN)       25 [0/0]: s=3 n=0 x=0 d=0 p=36
(XEN)       26 [0/0]: s=4 n=0 x=0 p=9 i=9
(XEN)       27 [0/1]: s=5 n=0 x=0 v=2
(XEN)       28 [0/0]: s=4 n=0 x=0 p=8 i=8
(XEN)       29 [0/0]: s=4 n=0 x=0 p=279 i=26
(XEN)       30 [0/0]: s=4 n=0 x=0 p=277 i=28
(XEN)       31 [0/0]: s=4 n=0 x=0 p=16 i=16
(XEN)       32 [0/0]: s=4 n=0 x=0 p=278 i=27
(XEN)       33 [0/0]: s=4 n=0 x=0 p=23 i=23
(XEN)       34 [0/0]: s=4 n=0 x=0 p=276 i=29
(XEN)       35 [1/0]: s=4 n=0 x=0 p=275 i=30
(XEN)       36 [0/0]: s=3 n=0 x=0 d=0 p=25
(XEN)       37 [0/0]: s=5 n=0 x=0 v=3
(XEN)       38 [1/0]: s=4 n=0 x=0 p=274 i=31
(XEN) [g: print grant table usage]
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN) gnttab_usage_print_all ] done
(XEN) [i: dump interrupt bindings]
(XEN) Guest interrupt information:
(XEN)    IRQ:   0 affinity:0001 vec:f0 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   1 affinity:0001 vec:38 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   2 affinity:ffff vec:e2 type=XT-PIC          status=00000000 mapped, unbound
(XEN)    IRQ:   3 affinity:0001 vec:40 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   4 affinity:0001 vec:48 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   5 affinity:0001 vec:50 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   6 affinity:0001 vec:58 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   7 affinity:0001 vec:60 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   8 affinity:0001 vec:68 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  8(-S--),
(XEN)    IRQ:   9 affinity:0001 vec:70 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0:  9(-S--),
(XEN)    IRQ:  10 affinity:0001 vec:78 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  11 affinity:0001 vec:88 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  12 affinity:0001 vec:90 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  13 affinity:000f vec:98 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  14 affinity:0001 vec:a0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  15 affinity:0001 vec:a8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  16 affinity:0001 vec:b0 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 16(-S--),
(XEN)    IRQ:  18 affinity:000f vec:b8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  19 affinity:0001 vec:c0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  20 affinity:000f vec:31 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  22 affinity:0001 vec:d0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  23 affinity:0001 vec:c8 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 23(-S--),
(XEN)    IRQ:  24 affinity:0001 vec:28 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  25 affinity:0001 vec:30 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
(XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
(XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
(XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
(XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vec=f0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  1 Vec 56:
(XEN)       Apic 0x00, Pin  1: vec=38 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  3 Vec 64:
(XEN)       Apic 0x00, Pin  3: vec=40 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  4 Vec 72:
(XEN)       Apic 0x00, Pin  4: vec=48 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  5 Vec 80:
(XEN)       Apic 0x00, Pin  5: vec=50 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  6 Vec 88:
(XEN)       Apic 0x00, Pin  6: vec=58 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  7 Vec 96:
(XEN)       Apic 0x00, Pin  7: vec=60 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  8 Vec104:
(XEN)       Apic 0x00, Pin  8: vec=68 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  9 Vec112:
(XEN)       Apic 0x00, Pin  9: vec=70 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 10 Vec120:
(XEN)       Apic 0x00, Pin 10: vec=78 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 11 Vec136:
(XEN)       Apic 0x00, Pin 11: vec=88 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 12 Vec144:
(XEN)       Apic 0x00, Pin 12: vec=90 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 13 Vec152:
(XEN)       Apic 0x00, Pin 13: vec=98 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 dest_id:0
(XEN)     IRQ 14 Vec160:
(XEN)       Apic 0x00, Pin 14: vec=a0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 15 Vec168:
(XEN)       Apic 0x00, Pin 15: vec=a8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 16 Vec176:
(XEN)       Apic 0x00, Pin 16: vec=b0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 18 Vec184:
(XEN)       Apic 0x00, Pin 18: vec=b8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 19 Vec192:
(XEN)       Apic 0x00, Pin 19: vec=c0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 20 Vec 49:
(XEN)       Apic 0x00, Pin 20: vec=31 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 22 Vec208:
(XEN)       Apic 0x00, Pin 22: vec=d0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 23 Vec200:
(XEN)       Apic 0x00, Pin 23: vec=c8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN) [m: memory info]
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64512kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 522236kB free
(XEN)     heap[18]: 1048572kB free
(XEN)     heap[19]: 691188kB free
(XEN)     heap[20]: 537060kB free
(XEN)     Dom heap: 3256784kB free
(XEN) [n: NMI statistics]
(XEN) CPU	NMI
(XEN)   0	  0
(XEN)   1	  0
(XEN)   2	  0
(XEN)   3	  0
(XEN) dom0 vcpu0: NMI neither pending nor masked
(XEN) [q: dump domain (and guest debug) info]
(XEN) 'q' pressed -> dumping domain info (now=0x29:0DB700B1)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=187539 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={1-3} max_pages=188147
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-407, 40c-cfb, d00-204f, 2058-ffff }
(XEN)     Interrupts { 0-279 }
(XEN)     I/O Memory { 0-febff, fec01-fedff, fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 00000000001476f6: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 00000000001476f5: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000001476f4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000001476f3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000aa0fd: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000011c9ea: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/-1)
(XEN) Notifying guest 0:1 (virq 1, port 11, stat 0/0/0)
(XEN) Notifying guest 0:2 (virq 1, port 17, stat 0/0/0)
(XEN) Notifying guest 0:3 (virq 1, port 23, stat 0/0/0)

(XEN) Shared frames 0 -- Saved frames 0
(XEN) [r: dump run queues]
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=0x00000029197E3502
(XEN) Idle cpupool:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = -100
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 1945
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
(XEN) idlers: 0006
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-2 flags=0 cpu=3 credit=-703 [w=256]
(XEN) Cpupool 0:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = -100
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 1945
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
(XEN) idlers: 0006
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-2 flags=0 cpu=3 credit=-1264 [w=256]
(XEN) CPU[00]  sort=1945, sibling=0001, core=000f
(XEN) 	run: [32767.0] pri=0 flags=0 cpu=0
(XEN) 	  1: [0.0] pri=0 flags=0 cpu=0 credit=62 [w=256]
(XEN) CPU[01]  sort=1945, sibling=0002, core=000f
(XEN) 	run: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02]  sort=1945, sibling=0004, core=000f
(XEN) 	run: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03]  sort=1945, sibling=0008, core=000f
(XEN) 	run: [0.1] pri=-2 flags=0 cpu=3 credit=-1863 [w=256]
(XEN) 	  1: [32767.3] pri=-64 flags=0 cpu=3
(XEN) [s: dump softtsc stats]
(XEN) TSC marked as reliable, warp = 0 (count=2)
(XEN) No domains have emulated TSC
(XEN) [t: display multi-cpu clock info]
(XEN) Synced stime skew: max=8492ns avg=8492ns samples=1 current=8492ns
(XEN) Synced cycles skew: max=170 avg=170 samples=1 current=170
(XEN) [u: dump numa info]
(XEN) 'u' pressed -> dumping numa info (now-0x29:2756B63B)
(XEN) idx0 -> NODE0 start->0 size->1369600 free->814196
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE0
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE0
(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 187539):
(XEN)     Node 0: 187539
(XEN) [v: dump Intel's VMCS]
(XEN) *********** VMCS Areas **************
(XEN) **************************************
(XEN) [z: print ioapic info]
(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #2 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #2......
(XEN) .... register #00: 02000000
(XEN) .......    : physical APIC id: 02
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:
(XEN)  00 000 00  1    0    0   0   0    0    0    00
(XEN)  01 000 00  0    0    0   0   0    1    1    38
(XEN)  02 000 00  0    0    0   0   0    1    1    F0
(XEN)  03 000 00  0    0    0   0   0    1    1    40
(XEN)  04 000 00  0    0    0   0   0    1    1    48
(XEN)  05 000 00  0    0    0   0   0    1    1    50
(XEN)  06 000 00  0    0    0   0   0    1    1    58
(XEN)  07 000 00  0    0    0   0   0    1    1    60
(XEN)  08 000 00  0    0    0   0   0    1    1    68
(XEN)  09 000 00  0    1    0   0   0    1    1    70
(XEN)  0a 000 00  0    0    0   0   0    1    1    78
(XEN)  0b 000 00  0    0    0   0   0    1    1    88
(XEN)  0c 000 00  0    0    0   0   0    1    1    90
(XEN)  0d 000 00  1    0    0   0   0    1    1    98
(XEN)  0e 000 00  0    0    0   0   0    1    1    A0
(XEN)  0f 000 00  0    0    0   0   0    1    1    A8
(XEN)  10 000 00  0    1    0   1   0    1    1    B0
(XEN)  11 000 00  1    0    0   0   0    0    0    00
(XEN)  12 000 00  1    1    0   1   0    1    1    B8
(XEN)  13 000 00  1    1    0   1   0    1    1    C0
(XEN)  14 000 00  1    1    0   1   0    1    1    31
(XEN)  15 000 00  1    0    0   0   0    0    0    00
(XEN)  16 000 00  1    1    0   1   0    1    1    D0
(XEN)  17 000 00  0    1    0   1   0    1    1    C8
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ56 -> 0:1
(XEN) IRQ64 -> 0:3
(XEN) IRQ72 -> 0:4
(XEN) IRQ80 -> 0:5
(XEN) IRQ88 -> 0:6
(XEN) IRQ96 -> 0:7
(XEN) IRQ104 -> 0:8
(XEN) IRQ112 -> 0:9
(XEN) IRQ120 -> 0:10
(XEN) IRQ136 -> 0:11
(XEN) IRQ144 -> 0:12
(XEN) IRQ152 -> 0:13
(XEN) IRQ160 -> 0:14
(XEN) IRQ168 -> 0:15
(XEN) IRQ176 -> 0:16
(XEN) IRQ184 -> 0:18
(XEN) IRQ192 -> 0:19
(XEN) IRQ49 -> 0:20
(XEN) IRQ208 -> 0:22
(XEN) IRQ200 -> 0:23
(XEN) .................................... done.
[  176.501955] vcpu 1
[  176.501955]   0: masked=0 pending=1 event_sel 0000000000000001
[  176.536137]   1: masked=0 pending=1 event_sel 0000000000000001
[  176.630195]   2: masked=1 pending=1 event_sel 0000000000000001
[  176.660175]   3: masked=1 pending=0 event_sel 0000000000000000
[  176.679237]   
[  176.729407] pending:
[  176.729408]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  176.866872]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  176.969433]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.058032]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.071994]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.085955]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.099916]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.113878]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004800082282
[  177.127839]    
[  177.131150] global mask:
[  177.131150]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.146364]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.160326]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.174286]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.188248]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.202209]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.216170]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.230132]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  177.244092]    
[  177.247404] globally unmasked:
[  177.247404]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.263155]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.277116]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.291078]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.305039]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.319000]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.332961]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.346922]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004800082282
[  177.360883]    
[  177.364195] local cpu1 mask:
[  177.364196]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.379767]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.393729]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.407689]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.421651]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.435611]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.449573]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.463534]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000001f80
[  177.477495]    
[  177.480806] locally unmasked:
[  177.480806]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.496469]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.510429]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.524390]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.538352]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.552313]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.566274]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.580236]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000280
[  177.594196]    
[  177.597508] pending list:
[  177.600551]   0: event 1 -> irq 272 locally-masked
[  177.605562]   1: event 7 -> irq 278
[  177.609232]   1: event 9 -> irq 280
[  177.612902]   2: event 13 -> irq 284 locally-masked
[  177.618002]   3: event 19 -> irq 290 locally-masked
[  177.623104]   0: event 35 -> irq 302 locally-masked
[  177.628205]   0: event 38 -> irq 303 locally-masked
[  177.633326] 
[  177.633326] vcpu 0
[  177.633327]   0: masked=0 pending=1 event_sel 0000000000000001
[  177.638579]   1: masked=0 pending=0 event_sel 0000000000000000
[  177.644665]   2: masked=1 pending=1 event_sel 0000000000000001
[  177.650749]   3: masked=1 pending=1 event_sel 0000000000000001
[  177.656835]   
[  177.662920] pending:
[  177.662921]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.677776]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.691738]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.705700]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.719660]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.733622]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.747582]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.761543]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000480028a006
[  177.775505]    
[  177.778816] global mask:
[  177.778816]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.794030]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.807992]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.821952]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.835914]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.849875]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.863836]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  177.877798]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  177.891759]    
[  177.895069] globally unmasked:
[  177.895070]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.910821]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.924783]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.938743]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.952704]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.966666]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.980627]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  177.994589]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000480028a002
[  178.008550]    
[  178.011861] local cpu0 mask:
[  178.011861]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.027433]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.041394]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.055355]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.069317]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.083278]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.097239]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.111200]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 fffffffffe00007f
[  178.125161]    
[  178.128472] locally unmasked:
[  178.128473]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.144134]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.158096]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.172056]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.186018]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.199979]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.213940]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.227902]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000004800000002
[  178.241863]    
[  178.245174] pending list:
[  178.248221]   0: event 1 -> irq 272
[  178.251888]   0: event 2 -> irq 273 globally-masked
[  178.256988]   2: event 13 -> irq 284 locally-masked
[  178.262090]   2: event 15 -> irq 286 locally-masked
[  178.267190]   3: event 19 -> irq 290 locally-masked
[  178.272292]   3: event 21 -> irq 292 locally-masked
[  178.277392]   0: event 35 -> irq 302
[  178.281151]   0: event 38 -> irq 303
[  178.284938] 
[  178.284939] vcpu 2
[  178.284940]   0: masked=0 pending=0 event_sel 0000000000000000
[  178.290200]   1: masked=0 pending=0 event_sel 0000000000000000
[  178.296285]   2: masked=0 pending=1 event_sel 0000000000000001
[  178.302371]   3: masked=1 pending=1 event_sel 0000000000000001
[  178.308455]   
[  178.314541] pending:
[  178.314541]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.329397]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.343358]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.357320]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.371281]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.385242]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.399203]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.413165]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000028e004
[  178.427126]    
[  178.430437] global mask:
[  178.430437]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.445651]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.459612]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.473573]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.487535]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.501496]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.515457]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  178.529418]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  178.543379]    
[  178.546690] globally unmasked:
[  178.546691]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.562441]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.576403]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.590365]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.604325]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.618286]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.632247]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.646209]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000028a000
[  178.660170]    
[  178.663481] local cpu2 mask:
[  178.663482]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.679053]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.693015]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.706976]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.720937]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.734898]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.748859]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.762820]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000007e000
[  178.776784]    
[  178.780093] locally unmasked:
[  178.780093]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.795755]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.809716]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.823677]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.837638]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.851599]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.865561]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.879522]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000a000
[  178.893484]    
[  178.896794] pending list:
[  178.899838]   0: event 2 -> irq 273 globally-masked locally-masked
[  178.906281]   2: event 13 -> irq 284
[  178.910039]   2: event 14 -> irq 285 globally-masked
[  178.915231]   2: event 15 -> irq 286
[  178.918990]   3: event 19 -> irq 290 locally-masked
[  178.924091]   3: event 21 -> irq 292 locally-masked
[  178.929214] 
[  178.929215] vcpu 3
[  178.929216]   0: masked=0 pending=0 event_sel 0000000000000000
[  178.934473]   1: masked=0 pending=1 event_sel 0000000000000001
[  178.940558]   2: masked=1 pending=0 event_sel 0000000000000000
[  178.946644]   3: masked=0 pending=1 event_sel 0000000000000001
[  178.952729]   
[  178.958815] pending:
[  178.958816]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.973671]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  178.987632]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.001594]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.015554]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.029515]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.043477]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.057438]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000384004
[  179.071399]    
[  179.074710] global mask:
[  179.074711]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.089925]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.103885]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.117847]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.131808]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.145770]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.159730]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  179.173692]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffff8008104105
[  179.187652]    
[  179.190963] globally unmasked:
[  179.190964]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.206715]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.220676]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.234638]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.248598]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.262560]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.276522]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.290482]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  179.304446]    
[  179.307755] local cpu3 mask:
[  179.307756]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.323327]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.337288]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.351249]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.365211]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.379172]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.393133]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.407094]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000001f80000
[  179.421055]    
[  179.424367] locally unmasked:
[  179.424367]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.440029]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.453990]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.467951]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.481912]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.495873]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.509835]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  179.523795]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  179.537757]    
[  179.541068] pending list:
[  179.544111]   0: event 2 -> irq 273 globally-masked locally-masked
[  179.550555]   2: event 14 -> irq 285 globally-masked locally-masked
[  179.557088]   3: event 19 -> irq 290
[  179.560847]   3: event 20 -> irq 291 globally-masked
[  179.566038]   3: event 21 -> irq 292


[-- Attachment #5: syslog-good.txt --]
[-- Type: text/plain, Size: 19112 bytes --]

[  116.970130] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S
[  116.982125] iwlwifi 0000:02:00.0: Radio type=0x1-0x2-0x0
[  119.680495] iwlwifi 0000:02:00.0: PCI INT A disabled
[  120.354671] br-eth0: port 1(eth0) entering forwarding state
[  120.368694] br-eth0: port 1(eth0) entering forwarding state
[  120.374359] br-eth0: port 1(eth0) entering forwarding state
[  120.380786] br-eth0: port 1(eth0) entering forwarding state
[  120.750101] e1000e 0000:00:19.0: PCI INT A disabled
[  121.080008] ehci_hcd 0000:00:1d.0: remove, state 1
[  121.084857] usb usb4: USB disconnect, device number 1
[  121.090145] usb 4-1: USB disconnect, device number 2
[  121.095440] usb 4-1.5: USB disconnect, device number 3
[  121.221750] usb 4-1.6: USB disconnect, device number 4
[  121.556011] ehci_hcd 0000:00:1d.0: USB bus 4 deregistered
[  121.562678] ehci_hcd 0000:00:1d.0: PCI INT A disabled
[  121.567800] ehci_hcd 0000:00:1a.0: remove, state 4
[  121.573294] usb usb3: USB disconnect, device number 1
[  121.578405] usb 3-1: USB disconnect, device number 2
[  121.588397] ehci_hcd 0000:00:1a.0: USB bus 3 deregistered
[  121.594202] ehci_hcd 0000:00:1a.0: PCI INT A disabled
[  121.670021] xhci_hcd 0000:00:14.0: remove, state 4
[  121.674867] usb usb2: USB disconnect, device number 1
[  121.680867] xHCI xhci_drop_endpoint called for root hub
[  121.686158] xHCI xhci_check_bandwidth called for root hub
[  121.692709] xhci_hcd 0000:00:14.0: USB bus 2 deregistered
[  121.698460] xhci_hcd 0000:00:14.0: remove, state 4
[  121.703822] usb usb1: USB disconnect, device number 1
[  121.709156] xHCI xhci_drop_endpoint called for root hub
[  121.714455] xHCI xhci_check_bandwidth called for root hub
[  121.720560] xhci_hcd 0000:00:14.0: USB bus 1 deregistered
[  121.770334] xhci_hcd 0000:00:14.0: PCI INT A disabled
[  122.117143] PM: Syncing filesystems ... done.
[  122.122000] PM: Preparing system for mem sleep
[  122.360029] Freezing user space processes ... (elapsed 0.01 seconds) done.
[  122.382595] Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done.
[  122.402595] PM: Entering mem sleep
[  122.406514] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[  122.411778] sd 0:0:0:0: [sda] Stopping disk
[  122.416441] ACPI handle has no context!
[  122.520068] snd_hda_intel 0000:00:1b.0: PCI INT A disabled
[  122.539986] PM: suspend of drv:snd_hda_intel dev:0000:00:1b.0 complete after 123.619 msecs
[  122.548425] PM: suspend of drv: dev:pci0000:00 complete after 131.890 msecs
[  122.555675] PM: suspend of devices complete after 149.250 msecs
[  122.561838] PM: suspend devices took 0.160 seconds
[  122.567754] PM: late suspend of devices complete after 0.909 msecs
[  122.574445] ACPI: Preparing to enter system sleep state S3
[  122.580277] PM: Saving platform NVS memory
[  122.778492] Disabling non-boot CPUs ...
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Breaking vcpu affinity for domain 0 vcpu 1
(XEN) Breaking vcpu affinity for domain 0 vcpu 2
(XEN) Breaking vcpu affinity for domain 0 vcpu 3
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) Enabling non-boot CPUs  ...
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[  123.862739] ACPI: Low-level resume complete
[  123.867056] PM: Restoring platform NVS memory
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
(XEN) microcode: collect_cpu_info : sig=0x306a4, pf=0x2, rev=0x7
[  124.019715] Enabling non-boot CPUs ...
[  124.023593] installing Xen timer for CPU 1
[  124.027833] cpu 1 spinlock event irq 279
[  124.034080] CPU1 is up
[  124.036625] installing Xen timer for CPU 2
[  124.040793] cpu 2 spinlock event irq 285
[  124.045903] CPU2 is up
[  124.048398] installing Xen timer for CPU 3
[  124.052561] cpu 3 spinlock event irq 291
[  124.057782] CPU3 is up
[  124.061959] ACPI: Waking up from system sleep state S3
[  124.067567] i915 0000:00:02.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  124.076384] i915 0000:00:02.0: restoring config space at offset 0x1 (was 0x900007, writing 0x900407)
[  124.085899] pci 0000:00:14.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  124.094727] pci 0000:00:14.0: restoring config space at offset 0x4 (was 0x4, writing 0xb02b0004)
[  124.103839] pci 0000:00:14.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
[  124.113444] pci 0000:00:16.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  124.122293] pci 0000:00:16.0: restoring config space at offset 0x4 (was 0xfedb0004, writing 0xb02a0004)
[  124.132025] pci 0000:00:16.0: restoring config space at offset 0x1 (was 0x180006, writing 0x100006)
[  124.141459] serial 0000:00:16.3: restoring config space at offset 0xf (was 0x200, writing 0x20a)
[  124.150572] serial 0000:00:16.3: restoring config space at offset 0x5 (was 0x0, writing 0xb0290000)
[  124.159944] serial 0000:00:16.3: restoring config space at offset 0x4 (was 0x1, writing 0x30e1)
[  124.168988] serial 0000:00:16.3: restoring config space at offset 0x1 (was 0xb00000, writing 0xb00007)
[  124.178697] pci 0000:00:19.0: restoring config space at offset 0xf (was 0x100, writing 0x105)
[  124.187546] pci 0000:00:19.0: restoring config space at offset 0x1 (was 0x100002, writing 0x100003)
[  124.196955] pci 0000:00:1a.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  124.205793] pci 0000:00:1a.0: restoring config space at offset 0x4 (was 0x0, writing 0xb0270000)
[  124.214896] pci 0000:00:1a.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
[  124.224524] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0xf (was 0x100, writing 0x103)
[  124.234254] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x4 (was 0x4, writing 0xb0260004)
[  124.244248] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
[  124.253739] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100002)
[  124.264089] pcieport 0000:00:1c.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  124.273356] pcieport 0000:00:1c.0: restoring config space at offset 0x7 (was 0xf0, writing 0x200000f0)
[  124.283008] pcieport 0000:00:1c.0: restoring config space at offset 0x3 (was 0x810000, writing 0x810010)
[  124.292848] pcieport 0000:00:1c.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100007)
[  124.302760] pcieport 0000:00:1c.6: restoring config space at offset 0xf (was 0x300, writing 0x304)
[  124.312018] pcieport 0000:00:1c.6: restoring config space at offset 0x7 (was 0xf0, writing 0x200000f0)
[  124.321670] pcieport 0000:00:1c.6: restoring config space at offset 0x3 (was 0x810000, writing 0x810010)
[  124.331509] pcieport 0000:00:1c.6: restoring config space at offset 0x1 (was 0x100000, writing 0x100007)
[  124.341422] pcieport 0000:00:1c.7: restoring config space at offset 0xf (was 0x400, writing 0x40a)
[  124.350677] pcieport 0000:00:1c.7: restoring config space at offset 0x7 (was 0xf0, writing 0x200000f0)
[  124.360333] pcieport 0000:00:1c.7: restoring config space at offset 0x3 (was 0x810000, writing 0x810010)
[  124.370237] pci 0000:00:1d.0: restoring config space at offset 0xf (was 0x100, writing 0x10b)
[  124.379055] pci 0000:00:1d.0: restoring config space at offset 0x4 (was 0x0, writing 0xb0250000)
[  124.388162] pci 0000:00:1d.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
[  124.397963] ahci 0000:00:1f.2: restoring config space at offset 0x1 (was 0x2b00007, writing 0x2b00407)
[  124.407581] pci 0000:00:1f.3: restoring config space at offset 0x1 (was 0x2800001, writing 0x2800003)
[  124.417105] pci 0000:02:00.0: restoring config space at offset 0xf (was 0x100, writing 0x104)
[  124.425954] pci 0000:02:00.0: restoring config space at offset 0x4 (was 0x4, writing 0xb0100004)
[  124.435032] pci 0000:02:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x10)
[  124.443633] pci 0000:02:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100002)
[  124.453128] pci 0000:03:00.0: restoring config space at offset 0x9 (was 0x10001, writing 0x1fff1)
[  124.462244] pci 0000:03:00.0: restoring config space at offset 0x7 (was 0x22a00101, writing 0x22a001f1)
[  124.472006] pci 0000:03:00.0: restoring config space at offset 0x3 (was 0x10000, writing 0x10010)
[  124.481279] pci 0000:04:00.0: restoring config space at offset 0xf (was 0x4020100, writing 0x402010a)
[  124.490822] pci 0000:04:00.0: restoring config space at offset 0x5 (was 0x0, writing 0xb0000000)
[  124.499921] pci 0000:04:00.0: restoring config space at offset 0x3 (was 0x0, writing 0x2010)
[  124.508718] serial 0000:05:01.0: restoring config space at offset 0xf (was 0x1ff, writing 0x103)
[  124.517829] serial 0000:05:01.0: restoring config space at offset 0x9 (was 0x1, writing 0x2001)
[  124.526853] serial 0000:05:01.0: restoring config space at offset 0x8 (was 0x1, writing 0x2011)
[  124.535893] serial 0000:05:01.0: restoring config space at offset 0x7 (was 0x1, writing 0x2021)
[  124.544929] serial 0000:05:01.0: restoring config space at offset 0x6 (was 0x1, writing 0x2031)
[  124.553971] serial 0000:05:01.0: restoring config space at offset 0x5 (was 0x1, writing 0x2041)
[  124.563012] serial 0000:05:01.0: restoring config space at offset 0x3 (was 0x8, writing 0x2010)
[  124.572128] PM: early resume of devices complete after 504.645 msecs
[  124.578730] i915 0000:00:02.0: setting latency timer to 64
[  124.578743] xen: registering gsi 22 triggering 0 polarity 1
[  124.578747] Already setup the GSI :22
[  124.578749] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
[  124.578753] pci 0000:00:1e.0: setting latency timer to 64
[  124.578761] snd_hda_intel 0000:00:1b.0: setting latency timer to 64
[  124.578797] ahci 0000:00:1f.2: setting latency timer to 64
[  124.578963] pci 0000:03:00.0: setting latency timer to 64
[  124.578998] sd 0:0:0:0: [sda] Starting disk
[  124.925374] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[  124.945383] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  124.974351] ata3.00: configured for UDMA/100
[  125.153549] PM: resume of drv:i915 dev:0000:00:02.0 complete after 574.833 msecs
[  126.853968] ata1.00: configured for UDMA/133
[  126.875428] PM: resume of drv:sd dev:0:0:0:0 complete after 2296.428 msecs
[  126.882437] PM: resume of drv:scsi_device dev:0:0:0:0 complete after 2303.413 msecs
[  126.882448] PM: resume of drv:scsi_disk dev:0:0:0:0 complete after 1721.320 msecs
[  126.898194] PM: resume of devices complete after 2319.506 msecs
[  126.904422] PM: resume devices took 2.320 seconds
[  126.909276] PM: Finishing wakeup.
[  126.912750] Restarting tasks ... done.
[  126.917317] video LNXVIDEO:00: Restoring backlight state
[  126.993907] [drm:pch_irq_handler] *ERROR* PCH poison interrupt
[  127.531007] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[  127.538005] xen: registering gsi 16 triggering 0 polarity 1
[  127.543658] Already setup the GSI :16
[  127.547515] ehci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[  127.555000] ehci_hcd 0000:00:1a.0: setting latency timer to 64
[  127.561031] ehci_hcd 0000:00:1a.0: EHCI Host Controller
[  127.566561] ehci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 1
[  127.574214] ehci_hcd 0000:00:1a.0: debug port 2
[  127.582823] ehci_hcd 0000:00:1a.0: cache line size of 64 is not supported
[  127.589783] ehci_hcd 0000:00:1a.0: irq 16, io mem 0xb0270000
[  127.615361] ehci_hcd 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[  127.621379] hub 1-0:1.0: USB hub found
[  127.625403] hub 1-0:1.0: 3 ports detected
[  127.725432] xen: registering gsi 23 triggering 0 polarity 1
[  127.731085] Already setup the GSI :23
[  127.734932] ehci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
[  127.742410] ehci_hcd 0000:00:1d.0: setting latency timer to 64
[  127.748459] ehci_hcd 0000:00:1d.0: EHCI Host Controller
[  127.753995] ehci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
[  127.761655] ehci_hcd 0000:00:1d.0: debug port 2
[  127.770227] ehci_hcd 0000:00:1d.0: cache line size of 64 is not supported
[  127.777172] ehci_hcd 0000:00:1d.0: irq 23, io mem 0xb0250000
[  127.805366] ehci_hcd 0000:00:1d.0: USB 2.0 started, EHCI 1.00
[  127.811350] hub 2-0:1.0: USB hub found
[  127.815134] hub 2-0:1.0: 3 ports detected
[  127.876181] xen: registering gsi 16 triggering 0 polarity 1
[  127.881836] Already setup the GSI :16
[  127.885704] xhci_hcd 0000:00:14.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[  127.893190] xhci_hcd 0000:00:14.0: setting latency timer to 64
[  127.899212] xhci_hcd 0000:00:14.0: xHCI Host Controller
[  127.904716] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 3
[  127.912532] xhci_hcd 0000:00:14.0: cache line size of 64 is not supported
[  127.919450] xhci_hcd 0000:00:14.0: irq 16, io mem 0xb02b0000
[  127.925612] xHCI xhci_add_endpoint called for root hub
[  127.930812] xHCI xhci_check_bandwidth called for root hub
[  127.936489] hub 3-0:1.0: USB hub found
[  127.940392] hub 3-0:1.0: 4 ports detected
[  127.945380] usb 1-1: new high-speed USB device number 2 using ehci_hcd
[  127.985384] xhci_hcd 0000:00:14.0: xHCI Host Controller
[  127.990760] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 4
[  127.998518] xHCI xhci_add_endpoint called for root hub
[  128.003747] xHCI xhci_check_bandwidth called for root hub
[  128.009439] hub 4-0:1.0: USB hub found
[  128.013329] hub 4-0:1.0: 4 ports detected
[  128.096221] hub 1-1:1.0: USB hub found
[  128.100047] hub 1-1:1.0: 6 ports detected
[  128.190325] cfg80211: Calling CRDA to update world regulatory domain
[  128.205282] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
[  128.211949] Copyright(c) 2003-2011 Intel Corporation
[  128.217188] xen: registering gsi 18 triggering 0 polarity 1
[  128.222927] Already setup the GSI :18
[  128.225372] usb 2-1: new high-speed USB device number 2 using ehci_hcd
[  128.233611] iwlwifi 0000:02:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
[  128.240935] iwlwifi 0000:02:00.0: setting latency timer to 64
[  128.246958] iwlwifi 0000:02:00.0: pci_resource_len = 0x00002000
[  128.253084] iwlwifi 0000:02:00.0: pci_resource_base = ffffc900159f8000
[  128.259894] iwlwifi 0000:02:00.0: HW Revision ID = 0x34
[  128.265595] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEBUG disabled
[  128.271600] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[  128.277963] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[  128.284846] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_DEVICE_TESTMODE disabled
[  128.291932] iwlwifi 0000:02:00.0: CONFIG_IWLWIFI_P2P disabled
[  128.297923] iwlwifi 0000:02:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[  128.306885] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S
[  128.329749] iwlwifi 0000:02:00.0: device EEPROM VER=0x715, CALIB=0x6
[  128.336207] iwlwifi 0000:02:00.0: Device SKU: 0x1F0
[  128.341303] iwlwifi 0000:02:00.0: Valid Tx ant: 0x3, Valid Rx ant: 0x3
[  128.348120] iwlwifi 0000:02:00.0: Tunable channels: 13 802.11bg, 24 802.11a channels
[  128.363417] iwlwifi 0000:02:00.0: loaded firmware version 17.168.5.3 build 42301
[  128.371490] Registered led device: phy0-led
[  128.375755] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
[  128.386809] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs'
[  128.396074] hub 2-1:1.0: USB hub found
[  128.399918] hub 2-1:1.0: 8 ports detected
[  128.400953] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S
[  128.407276] iwlwifi 0000:02:00.0: Radio type=0x1-0x2-0x0
[  128.420150] e1000e: Intel(R) PRO/1000 Network Driver - 1.5.1-k
[  128.426114] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
[  128.432275] xen: registering gsi 20 triggering 0 polarity 1
[  128.438084] Already setup the GSI :20
[  128.441918] e1000e 0000:00:19.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20
[  128.449201] e1000e 0000:00:19.0: setting latency timer to 64
[  128.705531] usb 2-1.5: new low-speed USB device number 3 using ehci_hcd
[  128.719604] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S
[  128.731596] iwlwifi 0000:02:00.0: Radio type=0x1-0x2-0x0
[  128.831572] input: USB Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.5/2-1.5:1.0/input/input11
[  128.842399] generic-usb 0003:04B3:310C.0003: input,hidraw0: USB HID v1.11 Mouse [USB Optical Mouse] on usb-0000:00:1d.0-1.5/input0
[  128.859111] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:13:20:f9:de:24
[  128.867288] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
[  128.874578] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: FFFFFF-0FF
[  128.882417] device eth0 entered promiscuous mode
[  128.945534] usb 2-1.6: new low-speed USB device number 4 using ehci_hcd
[  129.087545] input: LITE-ON Technology USB NetVista Full Width Keyboard. as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.6/2-1.6:1.0/input/input12
[  129.102031] generic-usb 0003:04B3:3025.0004: input,hidraw1: USB HID v1.10 Keyboard [LITE-ON Technology USB NetVista Full Width Keyboard.] on usb-0000:00:1d.0-1.6/input0
[  129.885052] cfg80211: Found new beacon on frequency: 5180 MHz (Ch 36) on phy0
[  129.892336] cfg80211: Found new beacon on frequency: 5180 MHz (Ch 36) on phy0
[  129.899760] cfg80211: Pending regulatory request, waiting for it to be processed...
[  130.000627] cfg80211: Found new beacon on frequency: 5200 MHz (Ch 40) on phy0
[  130.007917] cfg80211: Found new beacon on frequency: 5200 MHz (Ch 40) on phy0
[  130.015326] cfg80211: Pending regulatory request, waiting for it to be processed...
[  130.092810] cfg80211: Found new beacon on frequency: 5220 MHz (Ch 44) on phy0
[  130.100100] cfg80211: Found new beacon on frequency: 5220 MHz (Ch 44) on phy0
[  130.107527] cfg80211: Pending regulatory request, waiting for it to be processed...
[  130.162467] cfg80211: Found new beacon on frequency: 5240 MHz (Ch 48) on phy0
[  130.169745] cfg80211: Pending regulatory request, waiting for it to be processed...
[  132.041144] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  132.048960] br-eth0: port 1(eth0) entering forwarding state
[  132.054698] br-eth0: port 1(eth0) entering forwarding state
[  132.221474] cfg80211: Found new beacon on frequency: 5785 MHz (Ch 157) on phy0
[  132.229209] cfg80211: Pending regulatory request, waiting for it to be processed...


[-- Attachment #6: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-08-07 20:14       ` Ben Guthro
@ 2012-08-08  8:35         ` Jan Beulich
  2012-08-08 10:39           ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-08  8:35 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, xen-devel

>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
> Any suggestions on how best to chase this down?
> 
> The first S3 suspend/resume cycle works, but the second does not.
> 
> On the second try, I never get any interrupts delivered to ahci.
> (at least according to /proc/interrupts)
> 
> 
> syslog traces from the first (good) and the second (bad) are attached,
> as well as the output from the "*" debug Ctrl+a handler in both cases.

You should have provided this also for the state before the
first suspend. The state after the first resume already looks
corrupted (presumably just not as badly):

(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
                     ^^
(XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1

so this is likely the reason for things falling apart on the second
iteration:

(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
...
(XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
(XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
                                              ^     ^  ^
(XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
(XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
(XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
(XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1

Surprisingly in both cases we get (with the other vector fields varying
accordingly)

(XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
(XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
                                    ^^
(XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
(XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
(XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),

The interrupt in question belongs to 0000:00:1f.2, i.e. the
AHCI controller.

Unfortunately I can't make sense of the kernel side config space
restore messages - an offset of 1 gets reported for the device in
question (and various other odd offsets exist), yet 3.5's
drivers/pci/pci.c:pci_restore_config_space_range() calls
pci_restore_config_dword() with an offset that's always divisible
by 4. Could you clarify which kernel version you were using here?
We first need to determine whether the kernel corrupts something
(after all, config space isn't protected from Dom0 modifications) -
if that's the case, we may need to understand why older Xen was
immune against that. If that's not the case, adding some extra
logging to Xen's pci_restore_msi_state() would seem the best
first step, plus (maybe) logging of Dom0 post-resume config space
accesses to the device in question.

The most likely thing happening (though unclear where) is that
the corresponding struct msi_msg instance gets cleared in the
course of the first resume (but after the corresponding interrupt
remapping entry already got restored).

Jan


* Re: Xen4.2 S3 regression?
  2012-08-08  8:35         ` Jan Beulich
@ 2012-08-08 10:39           ` Ben Guthro
  2012-08-09 15:21             ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-08 10:39 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, Thomas Goetz, xen-devel

Thanks for taking the time to reply.

I'm out of the office today, so don't have direct access to the
machine in question until tomorrow... but I'll do my best to answer
(inline below) and I'll follow up tomorrow with concrete answers.

On Wed, Aug 8, 2012 at 4:35 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
>> Any suggestions on how best to chase this down?
>>
>> The first S3 suspend/resume cycle works, but the second does not.
>>
>> On the second try, I never get any interrupts delivered to ahci.
>> (at least according to /proc/interrupts)
>>
>>
>> syslog traces from the first (good) and the second (bad) are attached,
>> as well as the output from the "*" debug Ctrl+a handler in both cases.
>
> You should have provided this also for the state before the
> first suspend. The state after the first resume already looks
> corrupted (presumably just not as badly):

I'll be able to send this tomorrow.

>
> (XEN) PCI-MSI interrupt information:
> (XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
>                      ^^
> (XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
> (XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>
> so this is likely the reason for things falling apart on the second
> iteration:
>
> (XEN)   Interrupt Remapping: supported and enabled.
> (XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
> (XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
> (XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
> ...
> (XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
> (XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
>                                               ^     ^  ^
> (XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
> (XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
> (XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
> (XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
>
> Surprisingly in both cases we get (with the other vector fields varying
> accordingly)
>
> (XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
> (XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
>                                     ^^
> (XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
> (XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
> (XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
> (XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
>
> The interrupt in question belongs to 0000:00:1f.2, i.e. the
> AHCI controller.

This would be consistent with what I've observed.

>
> Unfortunately I can't make sense of the kernel side config space
> restore messages - an offset of 1 gets reported for the device in
> question (and various other odd offsets exist), yet 3.5's
> drivers/pci/pci.c:pci_restore_config_space_range() calls
> pci_restore_config_dword() with an offset that's always divisible
> by 4. Could you clarify which kernel version you were using here?
> We first need to determine whether the kernel corrupts something
> (after all, config space isn't protected from Dom0 modifications) -
> if that's the case, we may need to understand why older Xen was
> immune against that. If that's not the case, adding some extra
> logging to Xen's pci_restore_msi_state() would seem the best
> first step, plus (maybe) logging of Dom0 post-resume config space
> accesses to the device in question.

This particular failure is using linux-3.2.23 + some of Konrad's
branches that haven't been merged into mainline (the S3 branches are
probably the most relevant here)

>
> The most likely thing happening (though unclear where) is that
> the corresponding struct msi_msg instance gets cleared in the
> course of the first resume (but after the corresponding interrupt
> remapping entry already got restored).
>
> Jan
>


* Re: Xen4.2 S3 regression?
  2012-08-08 10:39           ` Ben Guthro
@ 2012-08-09 15:21             ` Ben Guthro
  2012-08-09 15:37               ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-09 15:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, Thomas Goetz, xen-devel

[-- Attachment #1: Type: text/plain, Size: 5522 bytes --]

Attached is a new run for:
 - new boot (pre-s3)
 - first suspend/resume cycle (s3-first)
 - second (failing) suspend/resume cycle (s3-second)



To go into greater detail on the kernel used -

It is a 3.2.23 kernel based on the Ubuntu 12.04 git tree here
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-precise.git;a=summary

To that, I also have some of Konrad's branches - specifically
/devel/ioperm
/devel/acpi-s3.v7
/stable/misc  (mostly for the microcode fixes)
/stable/for-linus-fixes-3.3
/stable/for-linus-3.3
/devel/ttm.dma_pool.v2.9
/stable/for-x86

On top of that are some more patches specific to our operations; they're
not terribly interesting here, but I can provide them if necessary.


The 3.5 tree I tested with has a similar makeup - with fewer of
Konrad's branches.


On Wed, Aug 8, 2012 at 6:39 AM, Ben Guthro <ben@guthro.net> wrote:
> Thanks for taking the time to reply.
>
> I'm out of the office today, so don't have direct access to the
> machine in question until tomorrow... but I'll do my best to answer
> (inline below) and I'll follow up tomorrow with concrete answers.
>
> On Wed, Aug 8, 2012 at 4:35 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 07.08.12 at 22:14, Ben Guthro <ben@guthro.net> wrote:
>>> Any suggestions on how best to chase this down?
>>>
>>> The first S3 suspend/resume cycle works, but the second does not.
>>>
>>> On the second try, I never get any interrupts delivered to ahci.
>>> (at least according to /proc/interrupts)
>>>
>>>
>>> syslog traces from the first (good) and the second (bad) are attached,
>>> as well as the output from the "*" debug Ctrl+a handler in both cases.
>>
>> You should have provided this also for the state before the
>> first suspend. The state after the first resume already looks
>> corrupted (presumably just not as badly):
>
> I'll be able to send this tomorrow.
>
>>
>> (XEN) PCI-MSI interrupt information:
>> (XEN)  MSI    26 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
>>                      ^^
>> (XEN)  MSI    28 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    29 vec=79 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    30 vec=81 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>> (XEN)  MSI    31 vec=99 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
>>
>> so this is likely the reason for things falling apart on the second
>> iteration:
>>
>> (XEN)   Interrupt Remapping: supported and enabled.
>> (XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
>> (XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
>> (XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
>> ...
>> (XEN)   0014:  1   0  00d8 00000001 a1    0   1  0  1  1   0 1
>> (XEN)   0015:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
>>                                               ^     ^  ^
>> (XEN)   0016:  1   0  f0f8 00000001 31    0   1  1  1  1   0 1
>> (XEN)   0017:  1   0  00a0 00000001 a9    0   1  0  1  1   0 1
>> (XEN)   0018:  1   0  0200 00000001 b1    0   1  0  1  1   0 1
>> (XEN)   0019:  1   0  00c8 00000001 c9    0   1  0  1  1   0 1
>>
>> Surprisingly in both cases we get (with the other vector fields varying
>> accordingly)
>>
>> (XEN)    IRQ:  26 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
>> (XEN)    IRQ:  27 affinity:0001 vec:21 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
>>                                     ^^
>> (XEN)    IRQ:  28 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
>> (XEN)    IRQ:  29 affinity:0001 vec:79 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
>> (XEN)    IRQ:  30 affinity:0001 vec:81 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
>> (XEN)    IRQ:  31 affinity:0001 vec:99 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:274(PS--),
>>
>> The interrupt in question belongs to 0000:00:1f.2, i.e. the
>> AHCI controller.
>
> This would be consistent with what I've observed.
>
>>
>> Unfortunately I can't make sense of the kernel side config space
>> restore messages - an offset of 1 gets reported for the device in
>> question (and various other odd offsets exist), yet 3.5's
>> drivers/pci/pci.c:pci_restore_config_space_range() calls
>> pci_restore_config_dword() with an offset that's always divisible
>> by 4. Could you clarify which kernel version you were using here?
>> We first need to determine whether the kernel corrupts something
>> (after all, config space isn't protected from Dom0 modifications) -
>> if that's the case, we may need to understand why older Xen was
>> immune against that. If that's not the case, adding some extra
>> logging to Xen's pci_restore_msi_state() would seem the best
>> first step, plus (maybe) logging of Dom0 post-resume config space
>> accesses to the device in question.
>
> This particular failure is using linux-3.2.23 + some of Konrad's
> branches that haven't been merged into mainline (the S3 branches are
> probably the most relevant here)
>
>>
>> The most likely thing happening (though unclear where) is that
>> the corresponding struct msi_msg instance gets cleared in the
>> course of the first resume (but after the corresponding interrupt
>> remapping entry already got restored).
>>
>> Jan
>>

[-- Attachment #2: xen-dump-s3-second.txt --]
[-- Type: text/plain, Size: 65814 bytes --]

(XEN) '*' pressed -> firing all diagnostic keyhandlers
(XEN) [d: dump registers]
(XEN) 'd' pressed -> dumping registers
(XEN) 
(XEN) *** Dumping CPU0 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: ffff82c4803025a0   rbx: ffff82c480302480   rcx: 0000000000000003
(XEN) rdx: 0000000000000000   rsi: ffff82c4802e25c8   rdi: ffff82c480271800
(XEN) rbp: ffff82c4802b7e30   rsp: ffff82c4802b7e30   r8:  0000005c513ddd00
(XEN) r9:  ffff82c480302600   r10: 0000005c506d20f8   r11: 0000000000000246
(XEN) r12: ffff82c480271800   r13: ffff82c48013d757   r14: 0000005c501cbd32
(XEN) r15: ffff82c480302308   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014d00f000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802b7e30:
(XEN)    ffff82c4802b7e60 ffff82c48012817f 0000000000000002 ffff82c4802e25c8
(XEN)    ffff82c480302480 ffff8301489b3d40 ffff82c4802b7eb0 ffff82c480128281
(XEN)    ffff82c4802b7f18 0000000000000246 0000005c501bf4ee ffff82c4802d8880
(XEN)    ffff82c4802d8880 ffff82c4802b7f18 ffffffffffffffff ffff82c480302308
(XEN)    ffff82c4802b7ee0 ffff82c480125405 ffff82c4802b7f18 ffff82c4802b7f18
(XEN)    00000000ffffffff 0000000000000002 ffff82c4802b7ef0 ffff82c480125484
(XEN)    ffff82c4802b7f10 ffff82c480158c05 ffff8300aa584000 ffff8300aa0fc000
(XEN)    ffff82c4802b7da8 0000000000000000 ffffffffffffffff 0000000000000000
(XEN)    ffffffff81aafda0 ffffffff81a01ee8 ffffffff81a01fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffffffff81a01ed0 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff8300aa584000
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN)    [<ffff82c48012817f>] execute_timer+0x4e/0x6c
(XEN)    [<ffff82c480128281>] timer_softirq_action+0xe4/0x21a
(XEN)    [<ffff82c480125405>] __do_softirq+0x95/0xa0
(XEN)    [<ffff82c480125484>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480158c05>] idle_loop+0x6f/0x71
(XEN)    
(XEN) *** Dumping CPU1 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83013e6e7f18   rcx: 0000000000000001
(XEN) rdx: 0000003cbd1b5d80   rsi: 00000000356cd386   rdi: 0000000000000001
(XEN) rbp: ffff83013e6e7ef0   rsp: ffff83013e6e7ef0   r8:  0000001763e180ac
(XEN) r9:  000000000000003e   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83013e6e7f18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83013d4b8088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014d00f000   cr2: ffff880025fc6b98
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83013e6e7ef0:
(XEN)    ffff83013e6e7f10 ffff82c480158bf8 ffff8300aa0fe000 ffff8300a83fd000
(XEN)    ffff83013e6e7da8 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffffffff81aafda0 ffff88002786dee0 ffff88002786dfd8 0000000000000246
(XEN)    0000000000000001 0000000000000040 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786dec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000001 ffff8300aa0fe000
(XEN)    0000003cbd1b5d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU2 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014899ff18   rcx: 0000000000000002
(XEN) rdx: 0000003cbe3eed80   rsi: 0000000036327a52   rdi: 0000000000000002
(XEN) rbp: ffff83014899fef0   rsp: ffff83014899fef0   r8:  000000178683f4ec
(XEN) r9:  ffff8300a83fc060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014899ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83013e6f1088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 0000000141a05000   cr2: ffff8800278d00f8
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014899fef0:
(XEN)    ffff83014899ff10 ffff82c480158bf8 ffff8300a85c7000 ffff8300a83fc000
(XEN)    ffff83014899fda8 0000000000000000 0000000000000000 0000000000000002
(XEN)    ffffffff81aafda0 ffff88002786fee0 ffff88002786ffd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786fec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000002 ffff8300a85c7000
(XEN)    0000003cbe3eed80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU3 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014898ff18   rcx: 0000000000000003
(XEN) rdx: 0000003cc8692d80   rsi: 0000000036f832fe   rdi: 0000000000000003
(XEN) rbp: ffff83014898fef0   rsp: ffff83014898fef0   r8:  00000017a8dbf394
(XEN) r9:  000000000000003c   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014898ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff830148995088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 0000000141a05000   cr2: 0000000000000000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014898fef0:
(XEN)    ffff83014898ff10 ffff82c480158bf8 ffff8300a83fe000 ffff8300aa583000
(XEN)    ffff83014898fda8 0000000000000000 0000000000000000 0000000000000003
(XEN)    ffffffff81aafda0 ffff880027881ee0 ffff880027881fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff880027881ec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000003 ffff8300a83fe000
(XEN)    0000003cc8692d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) [0: dump Dom0 registers]
(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffffffff81a01fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffffffff81a01ee8   rsp: ffffffff81a01ed0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000000   r14: ffffffffffffffff
(XEN) r15: 0000000000000000   cr0: 0000000000000008   cr4: 0000000000002660
(XEN) cr3: 000000014d00f000   cr2: 00007f8d9e7d33d0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffff81a01ed0:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffffffff81a01f18
(XEN)    ffffffff8101c663 ffffffff81a01fd8 ffffffff81aafda0 ffff88002dee1a00
(XEN)    ffffffffffffffff ffffffff81a01f48 ffffffff81013236 ffffffffffffffff
(XEN)    a3fc987339ed013b 0000000000000000 ffffffff81b15160 ffffffff81a01f58
(XEN)    ffffffff81554f5e ffffffff81a01f98 ffffffff81accbf5 ffffffff81b15160
(XEN)    e4b159ba3eea094c 0000000000cdf000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81a01fb8 ffffffff81acc34b ffffffff7fffffff
(XEN)    ffffffff84b04000 ffffffff81a01ff8 ffffffff81acfecc 0000000000000000
(XEN)    0000000100000000 00100800000306a4 1fc98b75e3b82283 0000000000000000
(XEN)    0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786dfd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786dee0   rsp: ffff88002786dec8   r8:  0000000000000000
(XEN) r9:  0000000000000040   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000001   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000014d00f000   cr2: 0000000001fe93f0
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786dec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786df10
(XEN)    ffffffff8101c663 ffff88002786dfd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786df40 ffffffff81013236 ffffffff8100ade9
(XEN)    adcf45807c2d04fb 0000000000000000 0000000000000000 ffff88002786df50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786df58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786ffd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786fee0   rsp: ffff88002786fec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000002   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 0000000141a05000   cr2: 00007f818bffecd6
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786fec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786ff10
(XEN)    ffffffff8101c663 ffff88002786ffd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786ff40 ffffffff81013236 ffffffff8100ade9
(XEN)    1fe7b5a822150499 0000000000000000 0000000000000000 ffff88002786ff50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786ff58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff880027881fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff880027881ee0   rsp: ffff880027881ec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000003   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 0000000141a05000   cr2: 00007f8d9e800a00
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880027881ec8:
(XEN)    0000000000000040 00000000ffffffff ffffffff8100a5c0 ffff880027881f10
(XEN)    ffffffff8101c663 ffff880027881fd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff880027881f40 ffffffff81013236 ffffffff8100ade9
(XEN)    49de1833d13f2a26 0000000000000000 0000000000000000 ffff880027881f50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff880027881f58 0000000000000000
(XEN) [H: dump heap info]
(XEN) 'H' pressed -> dumping heap info (now-0x5C:99E82155)
(XEN) heap[node=0][zone=0] -> 0 pages
(XEN) heap[node=0][zone=1] -> 0 pages
(XEN) heap[node=0][zone=2] -> 0 pages
(XEN) heap[node=0][zone=3] -> 0 pages
(XEN) heap[node=0][zone=4] -> 0 pages
(XEN) heap[node=0][zone=5] -> 0 pages
(XEN) heap[node=0][zone=6] -> 0 pages
(XEN) heap[node=0][zone=7] -> 0 pages
(XEN) heap[node=0][zone=8] -> 0 pages
(XEN) heap[node=0][zone=9] -> 0 pages
(XEN) heap[node=0][zone=10] -> 0 pages
(XEN) heap[node=0][zone=11] -> 0 pages
(XEN) heap[node=0][zone=12] -> 0 pages
(XEN) heap[node=0][zone=13] -> 0 pages
(XEN) heap[node=0][zone=14] -> 16128 pages
(XEN) heap[node=0][zone=15] -> 32768 pages
(XEN) heap[node=0][zone=16] -> 65536 pages
(XEN) heap[node=0][zone=17] -> 130559 pages
(XEN) heap[node=0][zone=18] -> 262143 pages
(XEN) heap[node=0][zone=19] -> 172837 pages
(XEN) heap[node=0][zone=20] -> 134225 pages
(XEN) heap[node=0][zone=21] -> 0 pages
(XEN) heap[node=0][zone=22] -> 0 pages
(XEN) heap[node=0][zone=23] -> 0 pages
(XEN) heap[node=0][zone=24] -> 0 pages
(XEN) heap[node=0][zone=25] -> 0 pages
(XEN) heap[node=0][zone=26] -> 0 pages
(XEN) heap[node=0][zone=27] -> 0 pages
(XEN) heap[node=0][zone=28] -> 0 pages
(XEN) heap[node=0][zone=29] -> 0 pages
(XEN) heap[node=0][zone=30] -> 0 pages
(XEN) heap[node=0][zone=31] -> 0 pages
(XEN) heap[node=0][zone=32] -> 0 pages
(XEN) heap[node=0][zone=33] -> 0 pages
(XEN) heap[node=0][zone=34] -> 0 pages
(XEN) heap[node=0][zone=35] -> 0 pages
(XEN) heap[node=0][zone=36] -> 0 pages
(XEN) heap[node=0][zone=37] -> 0 pages
(XEN) heap[node=0][zone=38] -> 0 pages
(XEN) heap[node=0][zone=39] -> 0 pages
(XEN) [I: dump HVM irq info]
(XEN) 'I' pressed -> dumping HVM irq info
(XEN) [M: dump MSI state]
(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=91 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    28 vec=31 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN) [Q: dump PCI devices]
(XEN) ==== PCI devices ====
(XEN) ==== segment 0000 ====
(XEN) 0000:05:01.0 - dom 0   - MSIs < >
(XEN) 0000:04:00.0 - dom 0   - MSIs < >
(XEN) 0000:03:00.0 - dom 0   - MSIs < >
(XEN) 0000:02:00.0 - dom 0   - MSIs < >
(XEN) 0000:00:1f.3 - dom 0   - MSIs < >
(XEN) 0000:00:1f.2 - dom 0   - MSIs < 27 >
(XEN) 0000:00:1f.0 - dom 0   - MSIs < >
(XEN) 0000:00:1e.0 - dom 0   - MSIs < >
(XEN) 0000:00:1d.0 - dom 0   - MSIs < >
(XEN) 0000:00:1c.7 - dom 0   - MSIs < >
(XEN) 0000:00:1c.6 - dom 0   - MSIs < >
(XEN) 0000:00:1c.0 - dom 0   - MSIs < >
(XEN) 0000:00:1b.0 - dom 0   - MSIs < 26 >
(XEN) 0000:00:1a.0 - dom 0   - MSIs < >
(XEN) 0000:00:19.0 - dom 0   - MSIs < >
(XEN) 0000:00:16.3 - dom 0   - MSIs < >
(XEN) 0000:00:16.0 - dom 0   - MSIs < >
(XEN) 0000:00:14.0 - dom 0   - MSIs < >
(XEN) 0000:00:02.0 - dom 0   - MSIs < 28 >
(XEN) 0000:00:00.0 - dom 0   - MSIs < >
(XEN) [V: dump iommu info]
(XEN) 
(XEN) iommu 0: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  0010 00000001 31    0   1  0  1  1   0 1
(XEN) 
(XEN) iommu 1: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
(XEN)   0001:  1   0  f0f8 00000001 f0    0   1  0  1  1   0 1
(XEN)   0002:  1   0  f0f8 00000001 40    0   1  0  1  1   0 1
(XEN)   0003:  1   0  f0f8 00000001 48    0   1  0  1  1   0 1
(XEN)   0004:  1   0  f0f8 00000001 50    0   1  0  1  1   0 1
(XEN)   0005:  1   0  f0f8 00000001 58    0   1  0  1  1   0 1
(XEN)   0006:  1   0  f0f8 00000001 60    0   1  0  1  1   0 1
(XEN)   0007:  1   0  f0f8 00000001 68    0   1  0  1  1   0 1
(XEN)   0008:  1   0  f0f8 00000001 70    0   1  1  1  1   0 1
(XEN)   0009:  1   0  f0f8 00000001 78    0   1  0  1  1   0 1
(XEN)   000a:  1   0  f0f8 00000001 88    0   1  0  1  1   0 1
(XEN)   000b:  1   0  f0f8 00000001 90    0   1  0  1  1   0 1
(XEN)   000c:  1   0  f0f8 00000001 98    0   1  0  1  1   0 1
(XEN)   000d:  1   0  f0f8 00000001 a0    0   1  0  1  1   0 1
(XEN)   000e:  1   0  f0f8 00000001 a8    0   1  0  1  1   0 1
(XEN)   000f:  1   0  f0f8 00000001 b0    0   1  1  1  1   0 1
(XEN)   0010:  1   0  f0f8 00000001 b8    0   1  1  1  1   0 1
(XEN)   0011:  1   0  f0f8 00000001 c0    0   1  1  1  1   0 1
(XEN)   0012:  1   0  f0f8 00000001 c8    0   1  1  1  1   0 1
(XEN)   0013:  1   0  f0f8 00000001 d0    0   1  1  1  1   0 1
(XEN)   0014:  1   0  f0f8 00000001 d8    0   1  1  1  1   0 1
(XEN)   0015:  1   0  00d8 00000001 91    0   1  0  1  1   0 1
(XEN)   0016:  1   0  00fa 00000001 00    0   0  0  0  0   0 1
(XEN) 
(XEN) Redirection table of IOAPIC 0:
(XEN)   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN)    01:  0000   1    0   0   0   0    0    0     38
(XEN)    02:  0001   1    0   0   0   0    0    0     f0
(XEN)    03:  0002   1    0   0   0   0    0    0     40
(XEN)    04:  0003   1    0   0   0   0    0    0     48
(XEN)    05:  0004   1    0   0   0   0    0    0     50
(XEN)    06:  0005   1    0   0   0   0    0    0     58
(XEN)    07:  0006   1    0   0   0   0    0    0     60
(XEN)    08:  0007   1    0   0   0   0    0    0     68
(XEN)    09:  0008   1    0   1   0   0    0    0     70
(XEN)    0a:  0009   1    0   0   0   0    0    0     78
(XEN)    0b:  000a   1    0   0   0   0    0    0     88
(XEN)    0c:  000b   1    0   0   0   0    0    0     90
(XEN)    0d:  000c   1    1   0   0   0    0    0     98
(XEN)    0e:  000d   1    0   0   0   0    0    0     a0
(XEN)    0f:  000e   1    0   0   0   0    0    0     a8
(XEN)    10:  000f   1    1   1   0   1    0    0     b0
(XEN)    12:  0010   1    1   1   0   1    0    0     b8
(XEN)    13:  0011   1    1   1   0   1    0    0     c0
(XEN)    14:  0014   1    1   1   0   1    0    0     d8
(XEN)    16:  0013   1    1   1   0   1    0    0     d0
(XEN)    17:  0012   1    1   1   0   1    0    0     c8
(XEN) [a: dump timer queues]
(XEN) Dumping timer queues:
(XEN) CPU00:
(XEN)   ex=   -1682us timer=ffff82c4802e25c8 cb=ffff82c48013d757(ffff82c480271800) ns16550_poll+0x0/0x33
(XEN)   ex=    7317us timer=ffff8301489731b8 cb=ffff82c480119d72(ffff830148973190) csched_acct+0x0/0x42a
(XEN)   ex=    -964us timer=ffff82c480302600 cb=ffff82c48013f6f8(ffff82c4803025c0) do_dbs_timer+0x0/0x21f
(XEN)   ex=10096939us timer=ffff82c480300580 cb=ffff82c4801a8850(0000000000000000) mce_work_fn+0x0/0xa9
(XEN)   ex=51954963us timer=ffff82c4802fe280 cb=ffff82c4801807c2(0000000000000000) plt_overflow+0x0/0x131
(XEN)   ex=    7317us timer=ffff830148973ea8 cb=ffff82c48011aaf0(0000000000000000) csched_tick+0x0/0x314
(XEN) CPU01:
(XEN)   ex=   68636us timer=ffff8300a83fd060 cb=ffff82c480121c6b(ffff8300a83fd000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN)   ex=   70861us timer=ffff83014b3292c8 cb=ffff82c48011aaf0(0000000000000001) csched_tick+0x0/0x314
(XEN)   ex=   79035us timer=ffff83013d4b8380 cb=ffff82c48013f6f8(ffff83013d4b8340) do_dbs_timer+0x0/0x21f
(XEN) CPU02:
(XEN)   ex=   93502us timer=ffff8301489945f8 cb=ffff82c48011aaf0(0000000000000002) csched_tick+0x0/0x314
(XEN)   ex=   99035us timer=ffff83013e6f1380 cb=ffff82c48013f6f8(ffff83013e6f1340) do_dbs_timer+0x0/0x21f
(XEN)   ex=   98636us timer=ffff8300a83fc060 cb=ffff82c480121c6b(ffff8300a83fc000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU03:
(XEN)   ex=  123862us timer=ffff83014b329908 cb=ffff82c48011aaf0(0000000000000003) csched_tick+0x0/0x314
(XEN)   ex= 3373680us timer=ffff8300aa583060 cb=ffff82c480121c6b(ffff8300aa583000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN)   ex=  139035us timer=ffff830148995380 cb=ffff82c48013f6f8(ffff830148995340) do_dbs_timer+0x0/0x21f
(XEN) [c: dump ACPI Cx structures]
(XEN) 'c' pressed -> printing ACPI Cx structures
(XEN) ==cpu0==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[398477086766]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu1==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[398501876777]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu2==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[398526665568]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu3==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[398551455511]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) [e: dump evtchn info]
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Event channel information for domain 0:
(XEN) Polling vCPUs: {}
(XEN)     port [p/m]
(XEN)        1 [1/0]: s=5 n=0 x=0 v=0
(XEN)        2 [1/1]: s=6 n=0 x=0
(XEN)        3 [1/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0
(XEN)        5 [0/0]: s=5 n=0 x=0 v=1
(XEN)        6 [0/0]: s=6 n=0 x=0
(XEN)        7 [0/0]: s=5 n=1 x=0 v=0
(XEN)        8 [1/1]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=6 n=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=1 x=0 v=1
(XEN)       12 [0/0]: s=6 n=1 x=0
(XEN)       13 [0/0]: s=5 n=2 x=0 v=0
(XEN)       14 [0/1]: s=6 n=2 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=6 n=2 x=0
(XEN)       17 [0/0]: s=5 n=2 x=0 v=1
(XEN)       18 [0/0]: s=6 n=2 x=0
(XEN)       19 [0/0]: s=5 n=3 x=0 v=0
(XEN)       20 [1/1]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=6 n=3 x=0
(XEN)       22 [0/0]: s=6 n=3 x=0
(XEN)       23 [0/0]: s=5 n=3 x=0 v=1
(XEN)       24 [0/0]: s=6 n=3 x=0
(XEN)       25 [0/0]: s=3 n=0 x=0 d=0 p=35
(XEN)       26 [0/0]: s=4 n=0 x=0 p=9 i=9
(XEN)       27 [0/0]: s=5 n=0 x=0 v=2
(XEN)       28 [0/0]: s=4 n=0 x=0 p=8 i=8
(XEN)       29 [0/0]: s=4 n=0 x=0 p=278 i=27
(XEN)       30 [0/0]: s=4 n=0 x=0 p=279 i=26
(XEN)       31 [0/0]: s=4 n=0 x=0 p=277 i=28
(XEN)       35 [0/0]: s=3 n=0 x=0 d=0 p=25
(XEN)       36 [0/0]: s=5 n=0 x=0 v=3
(XEN) [g: print grant table usage]
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN) gnttab_usage_print_all ] done
(XEN) [i: dump interrupt bindings]
(XEN) Guest interrupt information:
(XEN)    IRQ:   0 affinity:0001 vec:f0 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   1 affinity:0001 vec:38 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   2 affinity:ffff vec:e2 type=XT-PIC          status=00000000 mapped, unbound
(XEN)    IRQ:   3 affinity:0001 vec:40 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   4 affinity:0001 vec:48 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   5 affinity:0001 vec:50 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   6 affinity:0001 vec:58 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   7 affinity:0001 vec:60 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   8 affinity:0001 vec:68 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  8(-S--),
(XEN)    IRQ:   9 affinity:0001 vec:70 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0:  9(-S--),
(XEN)    IRQ:  10 affinity:0001 vec:78 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  11 affinity:0001 vec:88 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  12 affinity:0001 vec:90 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  13 affinity:000f vec:98 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  14 affinity:0001 vec:a0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  15 affinity:0001 vec:a8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  16 affinity:0001 vec:b0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  18 affinity:000f vec:b8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  19 affinity:0001 vec:c0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  20 affinity:000f vec:d8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  22 affinity:0001 vec:d0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  23 affinity:0001 vec:c8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  24 affinity:0001 vec:28 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  25 affinity:0001 vec:30 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  26 affinity:0001 vec:91 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
(XEN)    IRQ:  27 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
(XEN)    IRQ:  28 affinity:0001 vec:31 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vec=f0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  1 Vec 56:
(XEN)       Apic 0x00, Pin  1: vec=38 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  3 Vec 64:
(XEN)       Apic 0x00, Pin  3: vec=40 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  4 Vec 72:
(XEN)       Apic 0x00, Pin  4: vec=48 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  5 Vec 80:
(XEN)       Apic 0x00, Pin  5: vec=50 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  6 Vec 88:
(XEN)       Apic 0x00, Pin  6: vec=58 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  7 Vec 96:
(XEN)       Apic 0x00, Pin  7: vec=60 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  8 Vec104:
(XEN)       Apic 0x00, Pin  8: vec=68 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  9 Vec112:
(XEN)       Apic 0x00, Pin  9: vec=70 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 10 Vec120:
(XEN)       Apic 0x00, Pin 10: vec=78 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 11 Vec136:
(XEN)       Apic 0x00, Pin 11: vec=88 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 12 Vec144:
(XEN)       Apic 0x00, Pin 12: vec=90 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 13 Vec152:
(XEN)       Apic 0x00, Pin 13: vec=98 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 dest_id:0
(XEN)     IRQ 14 Vec160:
(XEN)       Apic 0x00, Pin 14: vec=a0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 15 Vec168:
(XEN)       Apic 0x00, Pin 15: vec=a8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 16 Vec176:
(XEN)       Apic 0x00, Pin 16: vec=b0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 18 Vec184:
(XEN)       Apic 0x00, Pin 18: vec=b8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 19 Vec192:
(XEN)       Apic 0x00, Pin 19: vec=c0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 20 Vec216:
(XEN)       Apic 0x00, Pin 20: vec=d8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 22 Vec208:
(XEN)       Apic 0x00, Pin 22: vec=d0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 23 Vec200:
(XEN)       Apic 0x00, Pin 23: vec=c8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN) [m: memory info]
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64512kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 522236kB free
(XEN)     heap[18]: 1048572kB free
(XEN)     heap[19]: 691348kB free
(XEN)     heap[20]: 536900kB free
(XEN)     Dom heap: 3256784kB free
(XEN) [n: NMI statistics]
(XEN) CPU	NMI
(XEN)   0	  0
(XEN)   1	  0
(XEN)   2	  0
(XEN)   3	  0
(XEN) dom0 vcpu0: NMI neither pending nor masked
(XEN) [q: dump domain (and guest debug) info]
(XEN) 'q' pressed -> dumping domain info (now=0x5C:F673FE16)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=187539 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={1-2} max_pages=188147
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-407, 40c-cfb, d00-204f, 2058-ffff }
(XEN)     Interrupts { 0-274, 277-279 }
(XEN)     I/O Memory { 0-febff, fec01-fedff, fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 0000000000148917: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 0000000000148916: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000148915: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000148914: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000aa0fd: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000013f428: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/-1)
(XEN) Notifying guest 0:1 (virq 1, port 11, stat 0/0/0)
(XEN) Notifying guest 0:2 (virq 1, port 17, stat 0/0/0)
(XEN) Notifying guest 0:3 (virq 1, port 23, stat 0/0/0)

(XEN) Shared frames 0 -- Saved frames 0
(XEN) [r: dump run queues]
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=0x0000005D0246155A
(XEN) Idle cpupool:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = 67
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 2802
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
(XEN) idlers: 000c
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-1 flags=0 cpu=1 credit=-521 [w=256]
(XEN) Cpupool 0:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = 67
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 2802
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
(XEN) idlers: 000c
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-1 flags=0 cpu=1 credit=-1079 [w=256]
(XEN) CPU[00]  sort=2802, sibling=0001, core=000f
(XEN) 	run: [32767.0] pri=0 flags=0 cpu=0
(XEN) 	  1: [0.0] pri=0 flags=0 cpu=0 credit=84 [w=256]
(XEN) CPU[01]  sort=2802, sibling=0002, core=000f
(XEN) 	run: [0.1] pri=-1 flags=0 cpu=1 credit=-1349 [w=256]
(XEN) 	  1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02]  sort=2802, sibling=0004, core=000f
(XEN) 	run: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03]  sort=2802, sibling=0008, core=000f
(XEN) 	run: [32767.3] pri=-64 flags=0 cpu=3
(XEN) [s: dump softtsc stats]
(XEN) TSC marked as reliable, warp = 249940157685 (count=4)
(XEN) No domains have emulated TSC
(XEN) [t: display multi-cpu clock info]
(XEN) Synced stime skew: max=8199ns avg=4416ns samples=3 current=5025ns
(XEN) Synced cycles skew: max=173 avg=165 samples=3 current=173
(XEN) [u: dump numa info]
(XEN) 'u' pressed -> dumping numa info (now-0x5D:0FE942E1)
(XEN) idx0 -> NODE0 start->0 size->1369600 free->814196
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE0
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE0
(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 187539):
(XEN)     Node 0: 187539
(XEN) [v: dump Intel's VMCS]
(XEN) *********** VMCS Areas **************
(XEN) **************************************
(XEN) [z: print ioapic info]
(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #2 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #2......
(XEN) .... register #00: 02000000
(XEN) .......    : physical APIC id: 02
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:
(XEN)  00 000 00  1    0    0   0   0    0    0    00
(XEN)  01 000 00  0    0    0   0   0    1    1    38
(XEN)  02 000 00  0    0    0   0   0    1    1    F0
(XEN)  03 000 00  0    0    0   0   0    1    1    40
(XEN)  04 000 00  0    0    0   0   0    1    1    48
(XEN)  05 000 00  0    0    0   0   0    1    1    50
(XEN)  06 000 00  0    0    0   0   0    1    1    58
(XEN)  07 000 00  0    0    0   0   0    1    1    60
(XEN)  08 000 00  0    0    0   0   0    1    1    68
(XEN)  09 000 00  0    1    0   0   0    1    1    70
(XEN)  0a 000 00  0    0    0   0   0    1    1    78
(XEN)  0b 000 00  0    0    0   0   0    1    1    88
(XEN)  0c 000 00  0    0    0   0   0    1    1    90
(XEN)  0d 000 00  1    0    0   0   0    1    1    98
(XEN)  0e 000 00  0    0    0   0   0    1    1    A0
(XEN)  0f 000 00  0    0    0   0   0    1    1    A8
(XEN)  10 000 00  1    1    0   1   0    1    1    B0
(XEN)  11 000 00  1    0    0   0   0    0    0    00
(XEN)  12 000 00  1    1    0   1   0    1    1    B8
(XEN)  13 000 00  1    1    0   1   0    1    1    C0
(XEN)  14 000 00  1    1    0   1   0    1    1    D8
(XEN)  15 000 00  1    0    0   0   0    0    0    00
(XEN)  16 000 00  1    1    0   1   0    1    1    D0
(XEN)  17 000 00  1    1    0   1   0    1    1    C8
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ56 -> 0:1
(XEN) IRQ64 -> 0:3
(XEN) IRQ72 -> 0:4
(XEN) IRQ80 -> 0:5
(XEN) IRQ88 -> 0:6
(XEN) IRQ96 -> 0:7
(XEN) IRQ104 -> 0:8
(XEN) IRQ112 -> 0:9
(XEN) IRQ120 -> 0:10
(XEN) IRQ136 -> 0:11
(XEN) IRQ144 -> 0:12
(XEN) IRQ152 -> 0:13
(XEN) IRQ160 -> 0:14
(XEN) IRQ168 -> 0:15
(XEN) IRQ176 -> 0:16
(XEN) IRQ184 -> 0:18
(XEN) IRQ192 -> 0:19
(XEN) IRQ216 -> 0:20
(XEN) IRQ208 -> 0:22
(XEN) IRQ200 -> 0:23
(XEN) .................................... done.

[  399.450697] vcpu 1
[  399.450698]    0: masked=0 pending=1 event_sel 0000000000000001
[  399.521213]    1: masked=0 pending=1 event_sel 0000000000000001
[  399.588957]    2: masked=1 pending=1 event_sel 0000000000000001
[  399.626992]    3: masked=1 pending=0 event_sel 0000000000000000
[  399.705799] pending:
[  399.705800]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  399.850690]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  399.993520]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.007481]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.021442]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.035403]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.049365]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.063326]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000002182
[  400.077287]    
[  400.080597] global mask:
[  400.080598]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.095812]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.109773]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.123734]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.137694]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.151655]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.165616]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.179577]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffe700104105
[  400.193538]    
[  400.196849] globally unmasked:
[  400.196850]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.212600]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.226561]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.240523]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.254483]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.268444]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.282405]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.296365]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000002082
[  400.310327]    
[  400.313638] local cpu1 mask:
[  400.313638]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.329210]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.343171]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.357132]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.371092]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.385054]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.399014]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.412975]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000001f80
[  400.426937]    
[  400.430247] locally unmasked:
[  400.430248]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.445909]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.459870]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.473831]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.487793]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.501754]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.515714]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.529676]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000080
[  400.543636]    
[  400.546947] pending list:
[  400.549991]   0: event 1 -> irq 272 locally-masked
[  400.555002]   1: event 7 -> irq 278
[  400.558671]   1: event 8 -> irq 279 globally-masked
[  400.563772]   2: event 13 -> irq 284 locally-masked
[  400.568894] 
[  400.568894] vcpu 0
[  400.568894]   0: masked=0 pending=0 event_sel 0000000000000000
[  400.574150]   1: masked=0 pending=0 event_sel 0000000000000000
[  400.580235]   2: masked=1 pending=1 event_sel 0000000000000001
[  400.586321]   3: masked=1 pending=0 event_sel 0000000000000000
[  400.592406]   
[  400.598491] pending:
[  400.598492]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.613347]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.627308]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.641269]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.655230]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.669191]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.683151]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.697112]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000002106
[  400.711074]    
[  400.714384] global mask:
[  400.714385]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.729598]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.743560]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.757521]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.771482]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.785442]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.799403]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  400.813364]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffe700104105
[  400.827326]    
[  400.830636] globally unmasked:
[  400.830636]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.846387]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.860348]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.874310]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.888271]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.902232]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.916192]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.930153]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000002002
[  400.944114]    
[  400.947425] local cpu0 mask:
[  400.947425]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.962997]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.976958]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  400.990919]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.004879]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.018841]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.032802]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.046763]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 fffffffffe00007f
[  401.060724]    
[  401.064035] locally unmasked:
[  401.064036]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.079697]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.093657]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.107618]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.121580]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.135541]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.149502]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.163462]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000002
[  401.177423]    
[  401.180735] pending list:
[  401.183782]   0: event 1 -> irq 272 l2-clear
[  401.188253]   0: event 2 -> irq 273 l2-clear globally-masked
[  401.194159]   1: event 8 -> irq 279 l2-clear globally-masked locally-masked
[  401.201412]   2: event 13 -> irq 284 l2-clear locally-masked
[  401.207341] 
[  401.207341] vcpu 2
[  401.207342]   0: masked=0 pending=0 event_sel 0000000000000000
[  401.212601]   1: masked=0 pending=0 event_sel 0000000000000000
[  401.218686]   2: masked=0 pending=1 event_sel 0000000000000001
[  401.224771]   3: masked=1 pending=0 event_sel 0000000000000000
[  401.230857]   
[  401.236941] pending:
[  401.236942]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.251797]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.265759]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.279719]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.293681]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.307641]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.321602]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.335563]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000e104
[  401.349523]    
[  401.352835] global mask:
[  401.352835]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.368049]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.382011]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.395971]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.409932]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.423893]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.437854]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.451815]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffe700104105
[  401.465776]    
[  401.469087] globally unmasked:
[  401.469088]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.484838]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.498799]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.512760]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.526721]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.540682]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.554643]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.568604]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000a000
[  401.582565]    
[  401.585876] local cpu2 mask:
[  401.585877]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.601447]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.615408]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.629370]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.643330]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.657291]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.671253]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.685214]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000007e000
[  401.699174]    
[  401.702486] locally unmasked:
[  401.702487]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.718147]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.732108]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.746069]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.760030]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.773991]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.787951]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.801913]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000a000
[  401.815874]    
[  401.819185] pending list:
[  401.822228]   0: event 2 -> irq 273 globally-masked locally-masked
[  401.828671]   1: event 8 -> irq 279 globally-masked locally-masked
[  401.835115]   2: event 13 -> irq 284
[  401.838874]   2: event 14 -> irq 285 globally-masked
[  401.844065]   2: event 15 -> irq 286
[  401.847823]   3: event 19 -> irq 290 locally-masked
[  401.852946] 
[  401.852947] vcpu 3
[  401.852947]   0: masked=0 pending=0 event_sel 0000000000000000
[  401.858209]   1: masked=0 pending=0 event_sel 0000000000000000
[  401.858211]   2: masked=0 pending=0 event_sel 0000000000000000
[  401.858213]   3: masked=0 pending=1 event_sel 0000000000000001
[  401.858215]   
[  401.858216] pending:
[  401.858217]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858223]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858230]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858243]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858247]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858250]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858253]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858256]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000384104
[  401.858259]    
[  401.858260] global mask:
[  401.858260]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858264]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858267]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858270]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858274]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858277]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858281]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  401.858284]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffe700104105
[  401.858287]    
[  401.858288] globally unmasked:
[  401.858288]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858291]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858295]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858298]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858301]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858304]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858307]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858310]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  401.858313]    
[  401.858314] local cpu3 mask:
[  401.858315]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858318]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858321]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858324]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858327]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858330]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858334]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858337]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000001f80000
[  401.858340]    
[  401.858341] locally unmasked:
[  401.858341]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858344]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858347]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858350]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858354]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858357]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858360]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  401.858363]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  401.858366]    
[  401.858367] pending list:
[  401.858368]   0: event 2 -> irq 273 globally-masked locally-masked
[  401.858370]   1: event 8 -> irq 279 globally-masked locally-masked
[  401.858371]   2: event 14 -> irq 285 globally-masked locally-masked
[  401.858373]   3: event 19 -> irq 290
[  401.858374]   3: event 20 -> irq 291 globally-masked
[  401.858375]   3: event 21 -> irq 292


[-- Attachment #3: xen-dump-s3-first.txt --]
[-- Type: text/plain, Size: 66482 bytes --]

(XEN) '*' pressed -> firing all diagnostic keyhandlers
(XEN) [d: dump registers]
(XEN) 'd' pressed -> dumping registers
(XEN) 
(XEN) *** Dumping CPU0 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: ffff82c4803025a0   rbx: ffff82c480302480   rcx: 0000000000000003
(XEN) rdx: 0000000000000000   rsi: ffff82c4802e25c8   rdi: ffff82c480271800
(XEN) rbp: ffff82c4802b7e30   rsp: ffff82c4802b7e30   r8:  0000000000000001
(XEN) r9:  ffff830148973ea8   r10: 0000004a93f1f467   r11: 0000000000000246
(XEN) r12: ffff82c480271800   r13: ffff82c48013d757   r14: 0000004a93b9f85e
(XEN) r15: ffff82c480302308   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013c656000   cr2: ffff8800268b5040
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802b7e30:
(XEN)    ffff82c4802b7e60 ffff82c48012817f 0000000000000002 ffff82c4802e25c8
(XEN)    ffff82c480302480 ffff8301489b3d40 ffff82c4802b7eb0 ffff82c480128281
(XEN)    ffff82c4802b7f18 0000000000000246 0000004a93f1f467 ffff82c4802d8880
(XEN)    ffff82c4802d8880 ffff82c4802b7f18 ffffffffffffffff ffff82c480302308
(XEN)    ffff82c4802b7ee0 ffff82c480125405 ffff82c4802b7f18 ffff82c4802b7f18
(XEN)    00000000ffffffff 0000000000000002 ffff82c4802b7ef0 ffff82c480125484
(XEN)    ffff82c4802b7f10 ffff82c480158c05 ffff8300aa584000 ffff8300aa0fc000
(XEN)    ffff82c4802b7da8 0000000000000000 ffffffffffffffff 0000000000000000
(XEN)    ffffffff81aafda0 ffffffff81a01ee8 ffffffff81a01fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffffffff81a01ed0 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff8300aa584000
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN)    [<ffff82c48012817f>] execute_timer+0x4e/0x6c
(XEN)    [<ffff82c480128281>] timer_softirq_action+0xe4/0x21a
(XEN)    [<ffff82c480125405>] __do_softirq+0x95/0xa0
(XEN)    [<ffff82c480125484>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480158c05>] idle_loop+0x6f/0x71
(XEN)    
(XEN) *** Dumping CPU1 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83013e6e7f18   rcx: 0000000000000001
(XEN) rdx: 0000003cbaf25d80   rsi: 00000000416956a2   rdi: 0000000000000001
(XEN) rbp: ffff83013e6e7ef0   rsp: ffff83013e6e7ef0   r8:  0000000c23161fa0
(XEN) r9:  ffff8300aa583060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83013e6e7f18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83013b228088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013c49a000   cr2: ffff880025817df0
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83013e6e7ef0:
(XEN)    ffff83013e6e7f10 ffff82c480158bf8 ffff8300aa0fe000 ffff8300aa583000
(XEN)    ffff83013e6e7da8 0000000000000000 0000000000000000 0000000000000003
(XEN)    ffffffff81aafda0 ffff880027881ee0 ffff880027881fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff880027881ec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000001 ffff8300aa0fe000
(XEN)    0000003cbaf25d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU2 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014899ff18   rcx: 0000000000000002
(XEN) rdx: 0000003cbb966d80   rsi: 00000000422ed5ca   rdi: 0000000000000002
(XEN) rbp: ffff83014899fef0   rsp: ffff83014899fef0   r8:  0000000c45648fe0
(XEN) r9:  ffff8300a83fd060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014899ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83013bc69088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013d95f000   cr2: ffff880025ea8c10
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014899fef0:
(XEN)    ffff83014899ff10 ffff82c480158bf8 ffff8300a85c7000 ffff8300a83fd000
(XEN)    ffff83014899fda8 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffffffff81aafda0 ffff88002786dee0 ffff88002786dfd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786dec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000002 ffff8300a85c7000
(XEN)    0000003cbb966d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU3 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014898ff18   rcx: 0000000000000003
(XEN) rdx: 0000003cbe4a5d80   rsi: 0000000042f49352   rdi: 0000000000000003
(XEN) rbp: ffff83014898fef0   rsp: ffff83014898fef0   r8:  0000000c684f1af0
(XEN) r9:  ffff8300a83fc060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014898ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff83013e7a8088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013db62000   cr2: ffff880025e91260
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014898fef0:
(XEN)    ffff83014898ff10 ffff82c480158bf8 ffff8300a83fe000 ffff8300a83fc000
(XEN)    ffff83014898fda8 0000000000000000 0000000000000000 0000000000000002
(XEN)    ffffffff81aafda0 ffff88002786fee0 ffff88002786ffd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786fec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000003 ffff8300a83fe000
(XEN)    0000003cbe4a5d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) [0: dump Dom0 registers]
(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffffffff81a01fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffffffff81a01ee8   rsp: ffffffff81a01ed0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000000   r14: ffffffffffffffff
(XEN) r15: 0000000000000000   cr0: 0000000000000008   cr4: 0000000000002660
(XEN) cr3: 000000013c656000   cr2: 00007f819260894c
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffff81a01ed0:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffffffff81a01f18
(XEN)    ffffffff8101c663 ffffffff81a01fd8 ffffffff81aafda0 ffff88002dee1a00
(XEN)    ffffffffffffffff ffffffff81a01f48 ffffffff81013236 ffffffffffffffff
(XEN)    a3fc987339ed013b 0000000000000000 ffffffff81b15160 ffffffff81a01f58
(XEN)    ffffffff81554f5e ffffffff81a01f98 ffffffff81accbf5 ffffffff81b15160
(XEN)    e4b159ba3eea094c 0000000000cdf000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81a01fb8 ffffffff81acc34b ffffffff7fffffff
(XEN)    ffffffff84b04000 ffffffff81a01ff8 ffffffff81acfecc 0000000000000000
(XEN)    0000000100000000 00100800000306a4 1fc98b75e3b82283 0000000000000000
(XEN)    0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786dfd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786dee0   rsp: ffff88002786dec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000001   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013d95f000   cr2: 00000000021823e8
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786dec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786df10
(XEN)    ffffffff8101c663 ffff88002786dfd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786df40 ffffffff81013236 ffffffff8100ade9
(XEN)    adcf45807c2d04fb 0000000000000000 0000000000000000 ffff88002786df50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786df58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786ffd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786fee0   rsp: ffff88002786fec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000002   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000014c8de000   cr2: 00007feaca64c000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786fec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786ff10
(XEN)    ffffffff8101c663 ffff88002786ffd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786ff40 ffffffff81013236 ffffffff8100ade9
(XEN)    1fe7b5a822150499 0000000000000000 0000000000000000 ffff88002786ff50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786ff58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff880027881fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff880027881ee0   rsp: ffff880027881ec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000003   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013c49a000   cr2: 00007f819c3be000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880027881ec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff880027881f10
(XEN)    ffffffff8101c663 ffff880027881fd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff880027881f40 ffffffff81013236 ffffffff8100ade9
(XEN)    49de1833d13f2a26 0000000000000000 0000000000000000 ffff880027881f50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff880027881f58 0000000000000000
(XEN) [H: dump heap info]
(XEN) 'H' pressed -> dumping heap info (now-0x4A:DD856038)
(XEN) heap[node=0][zone=0] -> 0 pages
(XEN) heap[node=0][zone=1] -> 0 pages
(XEN) heap[node=0][zone=2] -> 0 pages
(XEN) heap[node=0][zone=3] -> 0 pages
(XEN) heap[node=0][zone=4] -> 0 pages
(XEN) heap[node=0][zone=5] -> 0 pages
(XEN) heap[node=0][zone=6] -> 0 pages
(XEN) heap[node=0][zone=7] -> 0 pages
(XEN) heap[node=0][zone=8] -> 0 pages
(XEN) heap[node=0][zone=9] -> 0 pages
(XEN) heap[node=0][zone=10] -> 0 pages
(XEN) heap[node=0][zone=11] -> 0 pages
(XEN) heap[node=0][zone=12] -> 0 pages
(XEN) heap[node=0][zone=13] -> 0 pages
(XEN) heap[node=0][zone=14] -> 16128 pages
(XEN) heap[node=0][zone=15] -> 32768 pages
(XEN) heap[node=0][zone=16] -> 65536 pages
(XEN) heap[node=0][zone=17] -> 130559 pages
(XEN) heap[node=0][zone=18] -> 262143 pages
(XEN) heap[node=0][zone=19] -> 172837 pages
(XEN) heap[node=0][zone=20] -> 134225 pages
(XEN) heap[node=0][zone=21] -> 0 pages
(XEN) heap[node=0][zone=22] -> 0 pages
(XEN) heap[node=0][zone=23] -> 0 pages
(XEN) heap[node=0][zone=24] -> 0 pages
(XEN) heap[node=0][zone=25] -> 0 pages
(XEN) heap[node=0][zone=26] -> 0 pages
(XEN) heap[node=0][zone=27] -> 0 pages
(XEN) heap[node=0][zone=28] -> 0 pages
(XEN) heap[node=0][zone=29] -> 0 pages
(XEN) heap[node=0][zone=30] -> 0 pages
(XEN) heap[node=0][zone=31] -> 0 pages
(XEN) heap[node=0][zone=32] -> 0 pages
(XEN) heap[node=0][zone=33] -> 0 pages
(XEN) heap[node=0][zone=34] -> 0 pages
(XEN) heap[node=0][zone=35] -> 0 pages
(XEN) heap[node=0][zone=36] -> 0 pages
(XEN) heap[node=0][zone=37] -> 0 pages
(XEN) heap[node=0][zone=38] -> 0 pages
(XEN) heap[node=0][zone=39] -> 0 pages
(XEN) [I: dump HVM irq info]
(XEN) 'I' pressed -> dumping HVM irq info
(XEN) [M: dump MSI state]
(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=69 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=00  fixed  edge deassert phys lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    28 vec=31 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    29 vec=71 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    30 vec=89 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN) [Q: dump PCI devices]
(XEN) ==== PCI devices ====
(XEN) ==== segment 0000 ====
(XEN) 0000:05:01.0 - dom 0   - MSIs < >
(XEN) 0000:04:00.0 - dom 0   - MSIs < >
(XEN) 0000:03:00.0 - dom 0   - MSIs < >
(XEN) 0000:02:00.0 - dom 0   - MSIs < 29 >
(XEN) 0000:00:1f.3 - dom 0   - MSIs < >
(XEN) 0000:00:1f.2 - dom 0   - MSIs < 27 >
(XEN) 0000:00:1f.0 - dom 0   - MSIs < >
(XEN) 0000:00:1e.0 - dom 0   - MSIs < >
(XEN) 0000:00:1d.0 - dom 0   - MSIs < >
(XEN) 0000:00:1c.7 - dom 0   - MSIs < >
(XEN) 0000:00:1c.6 - dom 0   - MSIs < >
(XEN) 0000:00:1c.0 - dom 0   - MSIs < >
(XEN) 0000:00:1b.0 - dom 0   - MSIs < 26 >
(XEN) 0000:00:1a.0 - dom 0   - MSIs < >
(XEN) 0000:00:19.0 - dom 0   - MSIs < 30 >
(XEN) 0000:00:16.3 - dom 0   - MSIs < >
(XEN) 0000:00:16.0 - dom 0   - MSIs < >
(XEN) 0000:00:14.0 - dom 0   - MSIs < >
(XEN) 0000:00:02.0 - dom 0   - MSIs < 28 >
(XEN) 0000:00:00.0 - dom 0   - MSIs < >
(XEN) [V: dump iommu info]
(XEN) 
(XEN) iommu 0: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  0010 00000001 31    0   1  0  1  1   0 1
(XEN) 
(XEN) iommu 1: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
(XEN)   0001:  1   0  f0f8 00000001 f0    0   1  0  1  1   0 1
(XEN)   0002:  1   0  f0f8 00000001 40    0   1  0  1  1   0 1
(XEN)   0003:  1   0  f0f8 00000001 48    0   1  0  1  1   0 1
(XEN)   0004:  1   0  f0f8 00000001 50    0   1  0  1  1   0 1
(XEN)   0005:  1   0  f0f8 00000001 58    0   1  0  1  1   0 1
(XEN)   0006:  1   0  f0f8 00000001 60    0   1  0  1  1   0 1
(XEN)   0007:  1   0  f0f8 00000001 68    0   1  0  1  1   0 1
(XEN)   0008:  1   0  f0f8 00000001 70    0   1  1  1  1   0 1
(XEN)   0009:  1   0  f0f8 00000001 78    0   1  0  1  1   0 1
(XEN)   000a:  1   0  f0f8 00000001 88    0   1  0  1  1   0 1
(XEN)   000b:  1   0  f0f8 00000001 90    0   1  0  1  1   0 1
(XEN)   000c:  1   0  f0f8 00000001 98    0   1  0  1  1   0 1
(XEN)   000d:  1   0  f0f8 00000001 a0    0   1  0  1  1   0 1
(XEN)   000e:  1   0  f0f8 00000001 a8    0   1  0  1  1   0 1
(XEN)   000f:  1   0  f0f8 00000001 b0    0   1  1  1  1   0 1
(XEN)   0010:  1   0  f0f8 00000001 b8    0   1  1  1  1   0 1
(XEN)   0011:  1   0  f0f8 00000001 c0    0   1  1  1  1   0 1
(XEN)   0012:  1   0  f0f8 00000001 c8    0   1  1  1  1   0 1
(XEN)   0013:  1   0  f0f8 00000001 d0    0   1  1  1  1   0 1
(XEN)   0014:  1   0  f0f8 00000001 d8    0   1  1  1  1   0 1
(XEN)   0015:  1   0  00d8 00000001 69    0   1  0  1  1   0 1
(XEN)   0016:  1   0  00fa 00000001 29    0   1  0  1  1   0 1
(XEN)   0017:  1   0  0200 00000001 71    0   1  0  1  1   0 1
(XEN)   0018:  1   0  00c8 00000001 89    0   1  0  1  1   0 1
(XEN) 
(XEN) Redirection table of IOAPIC 0:
(XEN)   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN)    01:  0000   1    0   0   0   0    0    0     38
(XEN)    02:  0001   1    0   0   0   0    0    0     f0
(XEN)    03:  0002   1    0   0   0   0    0    0     40
(XEN)    04:  0003   1    0   0   0   0    0    0     48
(XEN)    05:  0004   1    0   0   0   0    0    0     50
(XEN)    06:  0005   1    0   0   0   0    0    0     58
(XEN)    07:  0006   1    0   0   0   0    0    0     60
(XEN)    08:  0007   1    0   0   0   0    0    0     68
(XEN)    09:  0008   1    0   1   0   0    0    0     70
(XEN)    0a:  0009   1    0   0   0   0    0    0     78
(XEN)    0b:  000a   1    0   0   0   0    0    0     88
(XEN)    0c:  000b   1    0   0   0   0    0    0     90
(XEN)    0d:  000c   1    1   0   0   0    0    0     98
(XEN)    0e:  000d   1    0   0   0   0    0    0     a0
(XEN)    0f:  000e   1    0   0   0   0    0    0     a8
(XEN)    10:  000f   1    0   1   0   1    0    0     b0
(XEN)    12:  0010   1    1   1   0   1    0    0     b8
(XEN)    13:  0011   1    1   1   0   1    0    0     c0
(XEN)    14:  0014   1    1   1   0   1    0    0     d8
(XEN)    16:  0013   1    1   1   0   1    0    0     d0
(XEN)    17:  0012   1    0   1   0   1    0    0     c8
(XEN) [a: dump timer queues]
(XEN) Dumping timer queues:
(XEN) CPU00:
(XEN)   ex=   -1681us timer=ffff82c4802e25c8 cb=ffff82c48013d757(ffff82c480271800) ns16550_poll+0x0/0x33
(XEN)   ex=   -1679us timer=ffff83014ca92590 cb=ffff82c480166416(ffff830148941d80) irq_guest_eoi_timer_fn+0x0/0x15d
(XEN)   ex=    7320us timer=ffff8301489731b8 cb=ffff82c480119d72(ffff830148973190) csched_acct+0x0/0x42a
(XEN)   ex=128102930us timer=ffff82c4802fe280 cb=ffff82c4801807c2(0000000000000000) plt_overflow+0x0/0x131
(XEN)   ex= 6244639us timer=ffff82c480300580 cb=ffff82c4801a8850(0000000000000000) mce_work_fn+0x0/0xa9
(XEN)   ex=    7320us timer=ffff830148973ea8 cb=ffff82c48011aaf0(0000000000000000) csched_tick+0x0/0x314
(XEN) CPU01:
(XEN)   ex=   66206us timer=ffff83014b329eb8 cb=ffff82c48011aaf0(0000000000000001) csched_tick+0x0/0x314
(XEN)   ex=   70905us timer=ffff8300aa583060 cb=ffff82c480121c6b(ffff8300aa583000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU02:
(XEN)   ex=   86521us timer=ffff830148994658 cb=ffff82c48011aaf0(0000000000000002) csched_tick+0x0/0x314
(XEN)   ex=  794262us timer=ffff8300a83fd060 cb=ffff82c480121c6b(ffff8300a83fd000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU03:
(XEN)   ex=  112053us timer=ffff83014b3290b8 cb=ffff82c48011aaf0(0000000000000003) csched_tick+0x0/0x314
(XEN)   ex=  332273us timer=ffff8300a83fc060 cb=ffff82c480121c6b(ffff8300a83fc000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) [c: dump ACPI Cx structures]
(XEN) 'c' pressed -> printing ACPI Cx structures
(XEN) ==cpu0==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[322301377164]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu1==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[322326167077]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu2==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[322350957293]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu3==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[322375746720]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) [e: dump evtchn info]
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Event channel information for domain 0:
(XEN) Polling vCPUs: {}
(XEN)     port [p/m]
(XEN)        1 [1/0]: s=5 n=0 x=0 v=0
(XEN)        2 [0/1]: s=6 n=0 x=0
(XEN)        3 [1/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0
(XEN)        5 [0/0]: s=5 n=0 x=0 v=1
(XEN)        6 [0/0]: s=6 n=0 x=0
(XEN)        7 [0/0]: s=5 n=1 x=0 v=0
(XEN)        8 [0/1]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=6 n=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=1 x=0 v=1
(XEN)       12 [0/0]: s=6 n=1 x=0
(XEN)       13 [0/0]: s=5 n=2 x=0 v=0
(XEN)       14 [0/1]: s=6 n=2 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=6 n=2 x=0
(XEN)       17 [0/0]: s=5 n=2 x=0 v=1
(XEN)       18 [0/0]: s=6 n=2 x=0
(XEN)       19 [0/0]: s=5 n=3 x=0 v=0
(XEN)       20 [1/1]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=6 n=3 x=0
(XEN)       22 [0/0]: s=6 n=3 x=0
(XEN)       23 [0/0]: s=5 n=3 x=0 v=1
(XEN)       24 [0/0]: s=6 n=3 x=0
(XEN)       25 [0/0]: s=3 n=0 x=0 d=0 p=35
(XEN)       26 [0/0]: s=4 n=0 x=0 p=9 i=9
(XEN)       27 [0/0]: s=5 n=0 x=0 v=2
(XEN)       28 [0/0]: s=4 n=0 x=0 p=8 i=8
(XEN)       29 [0/0]: s=4 n=0 x=0 p=278 i=27
(XEN)       30 [0/0]: s=4 n=0 x=0 p=279 i=26
(XEN)       31 [0/0]: s=4 n=0 x=0 p=277 i=28
(XEN)       32 [0/0]: s=4 n=0 x=0 p=16 i=16
(XEN)       33 [0/0]: s=4 n=0 x=0 p=23 i=23
(XEN)       34 [1/0]: s=4 n=0 x=0 p=276 i=29
(XEN)       35 [0/0]: s=3 n=0 x=0 d=0 p=25
(XEN)       36 [0/0]: s=5 n=0 x=0 v=3
(XEN)       37 [1/0]: s=4 n=0 x=0 p=275 i=30
(XEN) [g: print grant table usage]
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN) gnttab_usage_print_all ] done
(XEN) [i: dump interrupt bindings]
(XEN) Guest interrupt information:
(XEN)    IRQ:   0 affinity:0001 vec:f0 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   1 affinity:0001 vec:38 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   2 affinity:ffff vec:e2 type=XT-PIC          status=00000000 mapped, unbound
(XEN)    IRQ:   3 affinity:0001 vec:40 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   4 affinity:0001 vec:48 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   5 affinity:0001 vec:50 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   6 affinity:0001 vec:58 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   7 affinity:0001 vec:60 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   8 affinity:0001 vec:68 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  8(-S--),
(XEN)    IRQ:   9 affinity:0001 vec:70 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0:  9(-S--),
(XEN)    IRQ:  10 affinity:0001 vec:78 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  11 affinity:0001 vec:88 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  12 affinity:0001 vec:90 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  13 affinity:000f vec:98 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  14 affinity:0001 vec:a0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  15 affinity:0001 vec:a8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  16 affinity:0001 vec:b0 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 16(-S--),
(XEN)    IRQ:  18 affinity:000f vec:b8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  19 affinity:0001 vec:c0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  20 affinity:000f vec:d8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  22 affinity:0001 vec:d0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  23 affinity:0001 vec:c8 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 23(-S--),
(XEN)    IRQ:  24 affinity:0001 vec:28 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  25 affinity:0001 vec:30 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  26 affinity:0001 vec:69 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(-S--),
(XEN)    IRQ:  27 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
(XEN)    IRQ:  28 affinity:0001 vec:31 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN)    IRQ:  29 affinity:0001 vec:71 type=PCI-MSI         status=00000010 in-flight=1 domain-list=0:276(PS-M),
(XEN)    IRQ:  30 affinity:0001 vec:89 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:275(PS--),
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vec=f0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  1 Vec 56:
(XEN)       Apic 0x00, Pin  1: vec=38 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  3 Vec 64:
(XEN)       Apic 0x00, Pin  3: vec=40 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  4 Vec 72:
(XEN)       Apic 0x00, Pin  4: vec=48 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  5 Vec 80:
(XEN)       Apic 0x00, Pin  5: vec=50 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  6 Vec 88:
(XEN)       Apic 0x00, Pin  6: vec=58 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  7 Vec 96:
(XEN)       Apic 0x00, Pin  7: vec=60 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  8 Vec104:
(XEN)       Apic 0x00, Pin  8: vec=68 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  9 Vec112:
(XEN)       Apic 0x00, Pin  9: vec=70 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 10 Vec120:
(XEN)       Apic 0x00, Pin 10: vec=78 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 11 Vec136:
(XEN)       Apic 0x00, Pin 11: vec=88 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 12 Vec144:
(XEN)       Apic 0x00, Pin 12: vec=90 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 13 Vec152:
(XEN)       Apic 0x00, Pin 13: vec=98 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 dest_id:0
(XEN)     IRQ 14 Vec160:
(XEN)       Apic 0x00, Pin 14: vec=a0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 15 Vec168:
(XEN)       Apic 0x00, Pin 15: vec=a8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 16 Vec176:
(XEN)       Apic 0x00, Pin 16: vec=b0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 18 Vec184:
(XEN)       Apic 0x00, Pin 18: vec=b8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 19 Vec192:
(XEN)       Apic 0x00, Pin 19: vec=c0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 20 Vec216:
(XEN)       Apic 0x00, Pin 20: vec=d8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 22 Vec208:
(XEN)       Apic 0x00, Pin 22: vec=d0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 23 Vec200:
(XEN)       Apic 0x00, Pin 23: vec=c8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN) [m: memory info]
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64512kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 522236kB free
(XEN)     heap[18]: 1048572kB free
(XEN)     heap[19]: 691348kB free
(XEN)     heap[20]: 536900kB free
(XEN)     Dom heap: 3256784kB free
(XEN) [n: NMI statistics]
(XEN) CPU	NMI
(XEN)   0	  0
(XEN)   1	  0
(XEN)   2	  0
(XEN)   3	  0
(XEN) dom0 vcpu0: NMI neither pending nor masked
(XEN) [q: dump domain (and guest debug) info]
(XEN) 'q' pressed -> dumping domain info (now=0x4B:3C70282A)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=187539 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={1-3} max_pages=188147
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-407, 40c-cfb, d00-204f, 2058-ffff }
(XEN)     Interrupts { 0-279 }
(XEN)     I/O Memory { 0-febff, fec01-fedff, fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 0000000000148917: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 0000000000148916: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000148915: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000148914: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000aa0fd: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000013f428: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/-1)
(XEN) Notifying guest 0:1 (virq 1, port 11, stat 0/0/0)
(XEN) Notifying guest 0:2 (virq 1, port 17, stat 0/0/0)
(XEN) Notifying guest 0:3 (virq 1, port 23, stat 0/0/0)

(XEN) Shared frames 0 -- Saved frames 0
[  323.314726] v(XEN) [r: dump run queues]
cpu 1
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=0x0000004B4837501F
(XEN) Idle cpupool:
[  323.314726]  (XEN) Scheduler: SMP Credit Scheduler (credit)
 (XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = -61
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 2252
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
0: masked=0 pend(XEN) idlers: 000a
(XEN) active vcpus:
(XEN) 	  1: ing=1 event_sel [0.1] pri=-2 flags=0 cpu=2 credit=-663 [w=256]
0000000000000001(XEN) Cpupool 0:
(XEN) Scheduler: SMP Credit Scheduler (credit)

(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = -61
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 2252
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
[  323.348907]  (XEN) idlers: 000a
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-2 flags=0 cpu=2 credit=-1222 [w=256]
 (XEN) CPU[00] 1: masked=0 pend sort=2252, sibling=0001, ing=0 event_sel core=000f
0000000000000000(XEN) 	run: [32767.0] pri=0 flags=0 cpu=0
(XEN) 	  1: [0.0] pri=0 flags=0 cpu=0 credit=76 [w=256]

(XEN) CPU[01] [  323.451913]   sort=2252, sibling=0002,  core=000f
(XEN) 	run: [32767.1] pri=-64 flags=0 cpu=1
2: masked=1 pend(XEN) CPU[02]  sort=2252, sibling=0004, core=000f
(XEN) 	run: [0.1] pri=-2 flags=0 cpu=2 credit=-1611 [w=256]
(XEN) 	  1: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03] ing=1 event_sel  sort=2252, sibling=0008, 0000000000000001core=000f
(XEN) 	run: 
[32767.3] pri=-64 flags=0 cpu=3
[  323.475092]  (XEN) [s: dump softtsc stats]
 (XEN) TSC marked as reliable, warp = 249940157685 (count=3)
3: masked=1 pend(XEN) No domains have emulated TSC
ing=0 event_sel (XEN) [t: display multi-cpu clock info]
0000000000000000
(XEN) Synced stime skew: max=8199ns avg=4112ns samples=2 current=8199ns
(XEN) Synced cycles skew: max=164 avg=162 samples=2 current=160
[  323.531610]  (XEN) [u: dump numa info]
 (XEN) 'u' pressed -> dumping numa info (now-0x4B:55F31605)

(XEN) idx0 -> NODE0 start->0 size->1369600 free->814196
[  323.564638] p(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
ending:
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE0
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE0
[  323.564639]  (XEN) Memory location of each domain:
(XEN) Domain 0 (total: 187539):
  0000000000000000(XEN)     Node 0: 187539
 (XEN) [v: dump Intel's VMCS]
0000000000000000(XEN) *********** VMCS Areas **************
(XEN) **************************************
 (XEN) [z: print ioapic info]
0000000000000000(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #2 registers: 24.
(XEN) testing the IO APIC.......................
 (XEN) IO APIC #2......
(XEN) .... register #00: 02000000
(XEN) .......    : physical APIC id: 02
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
0000000000000000(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:   
 (XEN)  00 000 00  1    0    0   0   0    0    0    00
0000000000000000(XEN)  01 000 00   0    0    0   0   0    1    1    38
0000000000000000(XEN)  02 000 00  0    0    0   0   0    1    1    F0
 (XEN)  03 000 00  0    0    0   0   0    1    1    40
0000000000000000(XEN)  04 000 00  0    0    0   0   0    1    1    48
 (XEN)  05 000 00  0    0    0   0   0    1    1    50
0000000000000000(XEN)  06 000 00  0    0    0   0   0    1    1    58

(XEN)  07 000 00  0    0    0   0   0    1    1    60
[  323.700132]  (XEN)  08 000 00  0    0    0   0   0    1    1    68
  (XEN)  09 000 00  0    1    0   0   0    1    1    70
0000000000000000(XEN)  0a 000 00  0    0    0   0   0    1    1    78
 (XEN)  0b 000 00  0    0    0   0   0    1    1    88
0000000000000000(XEN)  0c 000 00  0    0    0   0   0    1    1    90
 (XEN)  0d 000 00  1    0    0   0   0    1    1    98
0000000000000000(XEN)  0e 000 00  0    0    0   0   0    1    1    A0
 (XEN)  0f 000 00  0    0    0   0   0    1    1    A8
0000000000000000(XEN)  10 000 00  0    1    0   1   0    1    1    B0
 (XEN)  11 000 00  1    0    0   0   0    0    0    00
0000000000000000(XEN)  12 000 00  1    1    0   1   0    1    1    B8
 (XEN)  13 000 00  1    1    0   1   0    1    1    C0
0000000000000000(XEN)  14 000 00  1    1    0   1   0    1    1    D8
 (XEN)  15 000 00  1    0    0   0   0    0    0    00
0000000000000000(XEN)  16 000 00  1    1    0   1   0    1    1    D0
 (XEN)  17 000 00  0    1    0   1   0    1    1    C8
0000000000000000(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ56 -> 0:1
(XEN) IRQ64 -> 0:3
(XEN) IRQ72 -> 0:4
(XEN) IRQ80 -> 0:5
(XEN) IRQ88 -> 0:6
(XEN) IRQ96 -> 0:7
(XEN) IRQ104 -> 0:8
(XEN) IRQ112 -> 0:9
(XEN) IRQ120 -> 0:10
(XEN) IRQ136 -> 0:11
(XEN) IRQ144 -> 0:12
(XEN) IRQ152 -> 0:13
(XEN) IRQ160 -> 0:14
(XEN) IRQ168 -> 0:15
(XEN) IRQ176 -> 0:16
(XEN) IRQ184 -> 0:18
(XEN) IRQ192 -> 0:19
(XEN) IRQ216 -> 0:20
(XEN) IRQ208 -> 0:22
(XEN) IRQ200 -> 0:23
(XEN) .................................... done.
0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  323.871691]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  323.885651]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  323.899612]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  323.913573]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  323.927534]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000002400002002
[  323.941496]    
[  323.944807] global mask:
[  323.944807]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  323.960020]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  323.973982]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  323.987942]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.001904]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.015865]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.029826]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.043786]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  324.057747]    
[  324.061058] globally unmasked:
[  324.061059]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.076809]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.090770]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.104730]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.118691]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.132655]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.146615]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.160575]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000002400002082
[  324.174535]    
[  324.177847] local cpu1 mask:
[  324.177847]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.193419]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.207381]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.221341]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.235302]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.249262]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.263223]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.277184]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000001f80
[  324.291145]    
[  324.294457] locally unmasked:
[  324.294458]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.310118]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.324080]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.338041]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.352001]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.365962]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.379924]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.393884]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000080
[  324.407845]    
[  324.411156] pending list:
[  324.414200]   0: event 1 -> irq 272 locally-masked
[  324.419211]   1: event 7 -> irq 278
[  324.422881]   2: event 13 -> irq 284 locally-masked
[  324.427982]   3: event 19 -> irq 290 locally-masked
[  324.433083]   0: event 34 -> irq 301 locally-masked
[  324.438184]   0: event 37 -> irq 302 locally-masked
[  324.443304] 
[  324.443306] vcpu 0
[  324.443307]   0: masked=0 pending=0 event_sel 0000000000000000
[  324.448558]   1: masked=0 pending=0 event_sel 0000000000000000
[  324.454643]   2: masked=1 pending=1 event_sel 0000000000000001
[  324.460729]   3: masked=1 pending=1 event_sel 0000000000000001
[  324.466814]   
[  324.472899] pending:
[  324.472899]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.487755]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.501716]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.515676]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.529638]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.543598]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.557560]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.571521]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000240028a006
[  324.585482]    
[  324.588793] global mask:
[  324.588793]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.604007]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.617968]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.631929]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.645890]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.659851]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.673812]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  324.687772]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  324.701734]    
[  324.705045] globally unmasked:
[  324.705046]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.720796]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.734756]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.748718]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.762679]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.776640]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.790600]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.804562]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000240028a002
[  324.818522]    
[  324.821834] local cpu0 mask:
[  324.821834]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.837406]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.851367]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.865327]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.879289]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.893249]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.907210]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.921172]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 fffffffffe00007f
[  324.935132]    
[  324.938443] locally unmasked:
[  324.938444]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.954105]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.968066]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.982026]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  324.995988]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.009949]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.023909]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.037871]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000002400000002
[  325.051831]    
[  325.055143] pending list:
[  325.058191]   0: event 1 -> irq 272 l2-clear
[  325.062661]   0: event 2 -> irq 273 l2-clear globally-masked
[  325.068568]   2: event 13 -> irq 284 l2-clear locally-masked
[  325.074474]   2: event 15 -> irq 286 l2-clear locally-masked
[  325.080380]   3: event 19 -> irq 290 l2-clear locally-masked
[  325.086287]   3: event 21 -> irq 292 l2-clear locally-masked
[  325.092194]   0: event 34 -> irq 301 l2-clear
[  325.096757]   0: event 37 -> irq 302 l2-clear
[  325.101350] 
[  325.101351] vcpu 2
[  325.101352]   0: masked=0 pending=0 event_sel 0000000000000000
[  325.106611]   1: masked=0 pending=0 event_sel 0000000000000000
[  325.112696]   2: masked=0 pending=1 event_sel 0000000000000001
[  325.118782]   3: masked=1 pending=1 event_sel 0000000000000001
[  325.124867]   
[  325.130951] pending:
[  325.130952]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.145808]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.159769]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.173729]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.187691]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.201652]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.215612]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.229574]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000028e004
[  325.243535]    
[  325.246845] global mask:
[  325.246846]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.262059]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.276021]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.289982]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.303942]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.317903]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.331865]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.345826]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  325.359786]    
[  325.363097] globally unmasked:
[  325.363097]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.378848]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.392809]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.406771]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.420734]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.434693]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.448654]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.462615]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000028a000
[  325.476576]    
[  325.479887] local cpu2 mask:
[  325.479888]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.495459]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.509420]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.523380]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.537341]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.551302]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.565263]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.579224]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000007e000
[  325.593188]    
[  325.596497] locally unmasked:
[  325.596497]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.612157]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.626119]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.640080]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.654041]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.668001]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.681963]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.695923]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000a000
[  325.709885]    
[  325.713196] pending list:
[  325.716239]   0: event 2 -> irq 273 globally-masked locally-masked
[  325.722683]   2: event 13 -> irq 284
[  325.726441]   2: event 14 -> irq 285 globally-masked
[  325.731632]   2: event 15 -> irq 286
[  325.735391]   3: event 19 -> irq 290 locally-masked
[  325.740491]   3: event 21 -> irq 292 locally-masked
[  325.745615] 
[  325.745616] vcpu 3
[  325.745616]   0: masked=1 pending=0 event_sel 0000000000000000
[  325.750873]   1: masked=0 pending=0 event_sel 0000000000000000
[  325.756959]   2: masked=0 pending=0 event_sel 0000000000000000
[  325.763045]   3: masked=0 pending=1 event_sel 0000000000000001
[  325.769129]   
[  325.775214] pending:
[  325.775215]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.790071]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.804032]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.817993]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.831954]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.845914]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.859875]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  325.873836]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000384004
[  325.887797]    
[  325.891109] global mask:
[  325.891109]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.906323]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.920284]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.934245]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.948206]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.962167]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.976128]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  325.990088]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  326.004049]    
[  326.007360] globally unmasked:
[  326.007361]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.023112]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.037072]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.051034]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.064994]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.078955]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.092917]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.106877]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  326.120838]    
[  326.124149] local cpu3 mask:
[  326.124149]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.139721]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.153683]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.167643]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.181604]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.195565]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.209526]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.223488]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000001f80000
[  326.237448]    
[  326.240758] locally unmasked:
[  326.240759]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.256421]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.270382]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.284343]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.298304]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.312264]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.326225]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  326.340186]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  326.354148]    
[  326.357459] pending list:
[  326.360503]   0: event 2 -> irq 273 globally-masked locally-masked
[  326.366946]   2: event 14 -> irq 285 globally-masked locally-masked
[  326.373478]   3: event 19 -> irq 290
[  326.377237]   3: event 20 -> irq 291 globally-masked
[  326.382427]   3: event 21 -> irq 292

[-- Attachment #4: xen-dump-pre-s3.txt --]
[-- Type: text/plain, Size: 66468 bytes --]

(XEN) '*' pressed -> firing all diagnostic keyhandlers
(XEN) [d: dump registers]
(XEN) 'd' pressed -> dumping registers
(XEN) 
(XEN) *** Dumping CPU0 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: ffff82c4803025a0   rbx: ffff82c480302480   rcx: 0000000000000004
(XEN) rdx: 0000000000000000   rsi: ffff82c4802e25c8   rdi: ffff82c480271800
(XEN) rbp: ffff82c4802b7e30   rsp: ffff82c4802b7e30   r8:  0000000000000002
(XEN) r9:  ffff82c4802fe240   r10: 0000001c08333169   r11: 0000000000000246
(XEN) r12: ffff82c480271800   r13: ffff82c48013d757   r14: 0000001a52a1f762
(XEN) r15: ffff82c480302308   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000014c8de000   cr2: ffffe8ffffc00228
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802b7e30:
(XEN)    ffff82c4802b7e60 ffff82c48012817f 0000000000000002 ffff82c4802e25c8
(XEN)    ffff82c480302480 ffff8301489b3d40 ffff82c4802b7eb0 ffff82c480128281
(XEN)    ffff82c4802b7f18 0000000000000246 00000000deadbeef ffff82c4802d8880
(XEN)    ffff82c4802d8880 ffff82c4802b7f18 ffffffffffffffff ffff82c480302308
(XEN)    ffff82c4802b7ee0 ffff82c480125405 ffff82c4802b7f18 ffff82c4802b7f18
(XEN)    00000000ffffffff 0000000000000002 ffff82c4802b7ef0 ffff82c480125484
(XEN)    ffff82c4802b7f10 ffff82c480158c05 ffff8300aa584000 ffff8300aa0fc000
(XEN)    ffff82c4802b7da8 0000000000000000 ffffffffffffffff 0000000000000000
(XEN)    ffffffff81aafda0 ffffffff81a01ee8 ffffffff81a01fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffffffff81a01ed0 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff8300aa584000
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48013d77e>] ns16550_poll+0x27/0x33
(XEN)    [<ffff82c48012817f>] execute_timer+0x4e/0x6c
(XEN)    [<ffff82c480128281>] timer_softirq_action+0xe4/0x21a
(XEN)    [<ffff82c480125405>] __do_softirq+0x95/0xa0
(XEN)    [<ffff82c480125484>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480158c05>] idle_loop+0x6f/0x71
(XEN)    
(XEN) *** Dumping CPU1 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014899ff18   rcx: 0000000000000001
(XEN) rdx: 0000003cc86a8d80   rsi: ffff8300a83fd0f8   rdi: ffff8300aa0fe000
(XEN) rbp: ffff83014899fef0   rsp: ffff83014899fef0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014899ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff8301489ab088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013c64f000   cr2: ffff880026d0e830
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014899fef0:
(XEN)    ffff83014899ff10 ffff82c480158bf8 ffff8300aa0fe000 ffff8300a83fd000
(XEN)    ffff83014899fda8 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffffffff81aafda0 ffff88002786dee0 ffff88002786dfd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786dec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000001 ffff8300aa0fe000
(XEN)    0000003cc86a8d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU2 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014898ff18   rcx: 0000000000000002
(XEN) rdx: 0000003cc8692d80   rsi: 000000409c078a94   rdi: 0000000000000002
(XEN) rbp: ffff83014898fef0   rsp: ffff83014898fef0   r8:  00000001055add6c
(XEN) r9:  ffff8300a83fc060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014898ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff830148995088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013ca5f000   cr2: ffff880025817df0
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014898fef0:
(XEN)    ffff83014898ff10 ffff82c480158bf8 ffff8300a85c7000 ffff8300a83fc000
(XEN)    ffff83014898fda8 0000000000000000 0000000000000000 0000000000000002
(XEN)    ffffffff81aafda0 ffff88002786fee0 ffff88002786ffd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff88002786fec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000002 ffff8300a85c7000
(XEN)    0000003cc8692d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) *** Dumping CPU3 host state: ***
(XEN) ----[ Xen-4.2.0-rc2-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN) RFLAGS: 0000000000000246   CONTEXT: hypervisor
(XEN) rax: ffff82c480302370   rbx: ffff83014893ff18   rcx: 0000000000000003
(XEN) rdx: 0000003cc8684d80   rsi: 000000409c078ab6   rdi: 0000000000000003
(XEN) rbp: ffff83014893fef0   rsp: ffff83014893fef0   r8:  000000012880df48
(XEN) r9:  ffff8300aa583060   r10: 00000000deadbeef   r11: 0000000000000246
(XEN) r12: ffff83014893ff18   r13: 00000000ffffffff   r14: 0000000000000002
(XEN) r15: ffff830148987088   cr0: 000000008005003b   cr4: 00000000001026f0
(XEN) cr3: 000000013d95f000   cr2: ffff880025e91260
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83014893fef0:
(XEN)    ffff83014893ff10 ffff82c480158bf8 ffff8300a83fe000 ffff8300aa583000
(XEN)    ffff83014893fda8 0000000000000000 0000000000000000 0000000000000003
(XEN)    ffffffff81aafda0 ffff880027881ee0 ffff880027881fd8 0000000000000246
(XEN)    0000000000000001 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff810013aa 0000000000000000 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff810013aa 000000000000e033 0000000000000246
(XEN)    ffff880027881ec8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000003 ffff8300a83fe000
(XEN)    0000003cc8684d80 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801583c4>] default_idle+0x99/0x9e
(XEN)    [<ffff82c480158bf8>] idle_loop+0x62/0x71
(XEN)    
(XEN) [0: dump Dom0 registers]
(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffffffff81a01fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffffffff81a01ee8   rsp: ffffffff81a01ed0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000000   r14: ffffffffffffffff
(XEN) r15: 0000000000000000   cr0: 0000000000000008   cr4: 0000000000002660
(XEN) cr3: 000000014c8de000   cr2: ffffe8ffffc00228
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffffffff81a01ed0:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffffffff81a01f18
(XEN)    ffffffff8101c663 ffffffff81a01fd8 ffffffff81aafda0 ffff88002dee1a00
(XEN)    ffffffffffffffff ffffffff81a01f48 ffffffff81013236 ffffffffffffffff
(XEN)    a3fc987339ed013b 0000000000000000 ffffffff81b15160 ffffffff81a01f58
(XEN)    ffffffff81554f5e ffffffff81a01f98 ffffffff81accbf5 ffffffff81b15160
(XEN)    e4b159ba3eea094c 0000000000cdf000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81a01fb8 ffffffff81acc34b ffffffff7fffffff
(XEN)    ffffffff84b04000 ffffffff81a01ff8 ffffffff81acfecc 0000000000000000
(XEN)    0000000100000000 00100800000306a4 1fc98b75e3b82283 0000000000000000
(XEN)    0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786dfd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786dee0   rsp: ffff88002786dec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000001   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013c64f000   cr2: 00007f3588506210
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786dec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786df10
(XEN)    ffffffff8101c663 ffff88002786dfd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786df40 ffffffff81013236 ffffffff8100ade9
(XEN)    adcf45807c2d04fb 0000000000000000 0000000000000000 ffff88002786df50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786df58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff88002786ffd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff88002786fee0   rsp: ffff88002786fec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000002   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013ca5f000   cr2: 00007f819c3be000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff88002786fec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff88002786ff10
(XEN)    ffffffff8101c663 ffff88002786ffd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff88002786ff40 ffffffff81013236 ffffffff8100ade9
(XEN)    1fe7b5a822150499 0000000000000000 0000000000000000 ffff88002786ff50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff88002786ff58 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff810013aa>]
(XEN) RFLAGS: 0000000000000246   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff880027881fd8   rcx: ffffffff810013aa
(XEN) rdx: 0000000000000000   rsi: 00000000deadbeef   rdi: 00000000deadbeef
(XEN) rbp: ffff880027881ee0   rsp: ffff880027881ec8   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000246
(XEN) r12: ffffffff81aafda0   r13: 0000000000000003   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000002660
(XEN) cr3: 000000013db62000   cr2: 00007feaca64c000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880027881ec8:
(XEN)    0000000000000000 00000000ffffffff ffffffff8100a5c0 ffff880027881f10
(XEN)    ffffffff8101c663 ffff880027881fd8 ffffffff81aafda0 0000000000000000
(XEN)    0000000000000000 ffff880027881f40 ffffffff81013236 ffffffff8100ade9
(XEN)    49de1833d13f2a26 0000000000000000 0000000000000000 ffff880027881f50
(XEN)    ffffffff81563438 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff880027881f58 0000000000000000
(XEN) [H: dump heap info]
(XEN) 'H' pressed -> dumping heap info (now-0x1A:9C6E22C0)
(XEN) heap[node=0][zone=0] -> 0 pages
(XEN) heap[node=0][zone=1] -> 0 pages
(XEN) heap[node=0][zone=2] -> 0 pages
(XEN) heap[node=0][zone=3] -> 0 pages
(XEN) heap[node=0][zone=4] -> 0 pages
(XEN) heap[node=0][zone=5] -> 0 pages
(XEN) heap[node=0][zone=6] -> 0 pages
(XEN) heap[node=0][zone=7] -> 0 pages
(XEN) heap[node=0][zone=8] -> 0 pages
(XEN) heap[node=0][zone=9] -> 0 pages
(XEN) heap[node=0][zone=10] -> 0 pages
(XEN) heap[node=0][zone=11] -> 0 pages
(XEN) heap[node=0][zone=12] -> 0 pages
(XEN) heap[node=0][zone=13] -> 0 pages
(XEN) heap[node=0][zone=14] -> 16128 pages
(XEN) heap[node=0][zone=15] -> 32768 pages
(XEN) heap[node=0][zone=16] -> 65536 pages
(XEN) heap[node=0][zone=17] -> 130559 pages
(XEN) heap[node=0][zone=18] -> 262143 pages
(XEN) heap[node=0][zone=19] -> 172845 pages
(XEN) heap[node=0][zone=20] -> 134218 pages
(XEN) heap[node=0][zone=21] -> 0 pages
(XEN) heap[node=0][zone=22] -> 0 pages
(XEN) heap[node=0][zone=23] -> 0 pages
(XEN) heap[node=0][zone=24] -> 0 pages
(XEN) heap[node=0][zone=25] -> 0 pages
(XEN) heap[node=0][zone=26] -> 0 pages
(XEN) heap[node=0][zone=27] -> 0 pages
(XEN) heap[node=0][zone=28] -> 0 pages
(XEN) heap[node=0][zone=29] -> 0 pages
(XEN) heap[node=0][zone=30] -> 0 pages
(XEN) heap[node=0][zone=31] -> 0 pages
(XEN) heap[node=0][zone=32] -> 0 pages
(XEN) heap[node=0][zone=33] -> 0 pages
(XEN) heap[node=0][zone=34] -> 0 pages
(XEN) heap[node=0][zone=35] -> 0 pages
(XEN) heap[node=0][zone=36] -> 0 pages
(XEN) heap[node=0][zone=37] -> 0 pages
(XEN) heap[node=0][zone=38] -> 0 pages
(XEN) heap[node=0][zone=39] -> 0 pages
(XEN) [I: dump HVM irq info]
(XEN) 'I' pressed -> dumping HVM irq info
(XEN) [M: dump MSI state]
(XEN) PCI-MSI interrupt information:
(XEN)  MSI    26 vec=61 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    27 vec=29 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    28 vec=31 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    29 vec=39 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN)  MSI    30 vec=41 lowest  edge   assert  log lowest dest=00000001 mask=0/1/-1
(XEN) [Q: dump PCI devices]
(XEN) ==== PCI devices ====
(XEN) ==== segment 0000 ====
(XEN) 0000:05:01.0 - dom 0   - MSIs < >
(XEN) 0000:04:00.0 - dom 0   - MSIs < >
(XEN) 0000:03:00.0 - dom 0   - MSIs < >
(XEN) 0000:02:00.0 - dom 0   - MSIs < 30 >
(XEN) 0000:00:1f.3 - dom 0   - MSIs < >
(XEN) 0000:00:1f.2 - dom 0   - MSIs < 27 >
(XEN) 0000:00:1f.0 - dom 0   - MSIs < >
(XEN) 0000:00:1e.0 - dom 0   - MSIs < >
(XEN) 0000:00:1d.0 - dom 0   - MSIs < >
(XEN) 0000:00:1c.7 - dom 0   - MSIs < >
(XEN) 0000:00:1c.6 - dom 0   - MSIs < >
(XEN) 0000:00:1c.0 - dom 0   - MSIs < >
(XEN) 0000:00:1b.0 - dom 0   - MSIs < 29 >
(XEN) 0000:00:1a.0 - dom 0   - MSIs < >
(XEN) 0000:00:19.0 - dom 0   - MSIs < 26 >
(XEN) 0000:00:16.3 - dom 0   - MSIs < >
(XEN) 0000:00:16.0 - dom 0   - MSIs < >
(XEN) 0000:00:14.0 - dom 0   - MSIs < >
(XEN) 0000:00:02.0 - dom 0   - MSIs < 28 >
(XEN) 0000:00:00.0 - dom 0   - MSIs < >
(XEN) [V: dump iommu info]
(XEN) 
(XEN) iommu 0: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  0010 00000001 31    0   1  0  1  1   0 1
(XEN) 
(XEN) iommu 1: nr_pt_levels = 3.
(XEN)   Queued Invalidation: supported and enabled.
(XEN)   Interrupt Remapping: supported and enabled.
(XEN)   Interrupt remapping table (nr_entry=0x10000. Only dump P=1 entries here):
(XEN)        SVT  SQ   SID      DST  V  AVL DLM TM RH DM FPD P
(XEN)   0000:  1   0  f0f8 00000001 38    0   1  0  1  1   0 1
(XEN)   0001:  1   0  f0f8 00000001 f0    0   1  0  1  1   0 1
(XEN)   0002:  1   0  f0f8 00000001 40    0   1  0  1  1   0 1
(XEN)   0003:  1   0  f0f8 00000001 48    0   1  0  1  1   0 1
(XEN)   0004:  1   0  f0f8 00000001 50    0   1  0  1  1   0 1
(XEN)   0005:  1   0  f0f8 00000001 58    0   1  0  1  1   0 1
(XEN)   0006:  1   0  f0f8 00000001 60    0   1  0  1  1   0 1
(XEN)   0007:  1   0  f0f8 00000001 68    0   1  0  1  1   0 1
(XEN)   0008:  1   0  f0f8 00000001 70    0   1  1  1  1   0 1
(XEN)   0009:  1   0  f0f8 00000001 78    0   1  0  1  1   0 1
(XEN)   000a:  1   0  f0f8 00000001 88    0   1  0  1  1   0 1
(XEN)   000b:  1   0  f0f8 00000001 90    0   1  0  1  1   0 1
(XEN)   000c:  1   0  f0f8 00000001 98    0   1  0  1  1   0 1
(XEN)   000d:  1   0  f0f8 00000001 a0    0   1  0  1  1   0 1
(XEN)   000e:  1   0  f0f8 00000001 a8    0   1  0  1  1   0 1
(XEN)   000f:  1   0  f0f8 00000001 b0    0   1  1  1  1   0 1
(XEN)   0010:  1   0  f0f8 00000001 b8    0   1  1  1  1   0 1
(XEN)   0011:  1   0  f0f8 00000001 c0    0   1  1  1  1   0 1
(XEN)   0012:  1   0  f0f8 00000001 c8    0   1  1  1  1   0 1
(XEN)   0013:  1   0  f0f8 00000001 d0    0   1  1  1  1   0 1
(XEN)   0014:  1   0  f0f8 00000001 d8    0   1  1  1  1   0 1
(XEN)   0015:  1   0  00c8 00000001 61    0   1  0  1  1   0 1
(XEN)   0016:  1   0  00fa 00000001 29    0   1  0  1  1   0 1
(XEN)   0017:  1   0  00d8 00000001 39    0   1  0  1  1   0 1
(XEN)   0018:  1   0  0200 00000001 41    0   1  0  1  1   0 1
(XEN) 
(XEN) Redirection table of IOAPIC 0:
(XEN)   #entry IDX FMT MASK TRIG IRR POL STAT DELI  VECTOR
(XEN)    01:  0000   1    0   0   0   0    0    0     38
(XEN)    02:  0001   1    0   0   0   0    0    0     f0
(XEN)    03:  0002   1    0   0   0   0    0    0     40
(XEN)    04:  0003   1    0   0   0   0    0    0     48
(XEN)    05:  0004   1    0   0   0   0    0    0     50
(XEN)    06:  0005   1    0   0   0   0    0    0     58
(XEN)    07:  0006   1    0   0   0   0    0    0     60
(XEN)    08:  0007   1    0   0   0   0    0    0     68
(XEN)    09:  0008   1    0   1   0   0    0    0     70
(XEN)    0a:  0009   1    0   0   0   0    0    0     78
(XEN)    0b:  000a   1    0   0   0   0    0    0     88
(XEN)    0c:  000b   1    0   0   0   0    0    0     90
(XEN)    0d:  000c   1    1   0   0   0    0    0     98
(XEN)    0e:  000d   1    0   0   0   0    0    0     a0
(XEN)    0f:  000e   1    0   0   0   0    0    0     a8
(XEN)    10:  000f   1    0   1   0   1    0    0     b0
(XEN)    12:  0010   1    1   1   0   1    0    0     b8
(XEN)    13:  0011   1    1   1   0   1    0    0     c0
(XEN)    14:  0014   1    1   1   0   1    0    0     d8
(XEN)    16:  0013   1    1   1   0   1    0    0     d0
(XEN)    17:  0012   1    0   1   0   1    0    0     c8
(XEN) [a: dump timer queues]
(XEN) Dumping timer queues:
(XEN) CPU00:
(XEN)   ex=   -1723us timer=ffff82c4802e25c8 cb=ffff82c48013d757(ffff82c480271800) ns16550_poll+0x0/0x33
(XEN)   ex=   -1722us timer=ffff83014ca923b0 cb=ffff82c480166416(ffff830148941e80) irq_guest_eoi_timer_fn+0x0/0x15d
(XEN)   ex=    7278us timer=ffff830148973ea8 cb=ffff82c48011aaf0(0000000000000000) csched_tick+0x0/0x314
(XEN)   ex=    7278us timer=ffff8301489731b8 cb=ffff82c480119d72(ffff830148973190) csched_acct+0x0/0x42a
(XEN)   ex= 5494089us timer=ffff82c480300580 cb=ffff82c4801a8850(0000000000000000) mce_work_fn+0x0/0xa9
(XEN)   ex=35387226us timer=ffff82c4802fe280 cb=ffff82c4801807c2(0000000000000000) plt_overflow+0x0/0x131
(XEN)   ex=  997323us timer=ffff82c4802fe240 cb=ffff82c4801802fe(0000000000000000) time_calibration+0x0/0x5c
(XEN) CPU01:
(XEN)   ex=   76247us timer=ffff8301489b3b98 cb=ffff82c48011aaf0(0000000000000001) csched_tick+0x0/0x314
(XEN)   ex=  214134us timer=ffff8300a83fd060 cb=ffff82c480121c6b(ffff8300a83fd000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU02:
(XEN)   ex=   96562us timer=ffff830148994088 cb=ffff82c48011aaf0(0000000000000002) csched_tick+0x0/0x314
(XEN)   ex=  184139us timer=ffff8300a83fc060 cb=ffff82c480121c6b(ffff8300a83fc000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) CPU03:
(XEN)   ex=  116877us timer=ffff830148994558 cb=ffff82c48011aaf0(0000000000000003) csched_tick+0x0/0x314
(XEN)   ex=  154148us timer=ffff8300aa583060 cb=ffff82c480121c6b(ffff8300aa583000) vcpu_singleshot_timer_fn+0x0/0xb
(XEN) [c: dump ACPI Cx structures]
(XEN) 'c' pressed -> printing ACPI Cx structures
(XEN) ==cpu0==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[115060702016]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[0] CC7[0]
(XEN) ==cpu1==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[115085492294]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[12144978148] CC7[0]
(XEN) ==cpu2==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[115111176673]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[12150881397] CC7[0]
(XEN) ==cpu3==
(XEN) active state:		C255
(XEN) max_cstate:		C7
(XEN) states:
(XEN)     C1:	type[C1] latency[000] usage[00000000] method[ HALT] duration[0]
(XEN)     C0:	usage[00000000] duration[115136860567]
(XEN) PC2[0] PC3[0] PC6[0] PC7[0]
(XEN) CC3[0] CC6[12156878416] CC7[0]
(XEN) [e: dump evtchn info]
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Event channel information for domain 0:
(XEN) Polling vCPUs: {}
(XEN)     port [p/m]
(XEN)        1 [1/0]: s=5 n=0 x=0 v=0
(XEN)        2 [1/1]: s=6 n=0 x=0
(XEN)        3 [1/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=6 n=0 x=0
(XEN)        5 [0/0]: s=5 n=0 x=0 v=1
(XEN)        6 [0/0]: s=6 n=0 x=0
(XEN)        7 [0/0]: s=5 n=1 x=0 v=0
(XEN)        8 [0/1]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=6 n=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=1 x=0 v=1
(XEN)       12 [0/0]: s=6 n=1 x=0
(XEN)       13 [0/0]: s=5 n=2 x=0 v=0
(XEN)       14 [1/1]: s=6 n=2 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=6 n=2 x=0
(XEN)       17 [0/0]: s=5 n=2 x=0 v=1
(XEN)       18 [0/0]: s=6 n=2 x=0
(XEN)       19 [0/0]: s=5 n=3 x=0 v=0
(XEN)       20 [1/1]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=6 n=3 x=0
(XEN)       22 [0/0]: s=6 n=3 x=0
(XEN)       23 [0/0]: s=5 n=3 x=0 v=1
(XEN)       24 [0/0]: s=6 n=3 x=0
(XEN)       25 [0/0]: s=3 n=0 x=0 d=0 p=35
(XEN)       26 [0/0]: s=4 n=0 x=0 p=9 i=9
(XEN)       27 [0/0]: s=5 n=0 x=0 v=2
(XEN)       28 [0/0]: s=4 n=0 x=0 p=8 i=8
(XEN)       29 [0/0]: s=4 n=0 x=0 p=278 i=27
(XEN)       30 [0/0]: s=4 n=0 x=0 p=16 i=16
(XEN)       31 [0/0]: s=4 n=0 x=0 p=277 i=28
(XEN)       32 [0/0]: s=4 n=0 x=0 p=23 i=23
(XEN)       33 [0/0]: s=4 n=0 x=0 p=276 i=29
(XEN)       34 [1/0]: s=4 n=0 x=0 p=275 i=30
(XEN)       35 [0/0]: s=3 n=0 x=0 d=0 p=25
(XEN)       36 [0/0]: s=5 n=0 x=0 v=3
(XEN)       37 [1/0]: s=4 n=0 x=0 p=279 i=26
(XEN) [g: print grant table usage]
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN) gnttab_usage_print_all ] done
(XEN) [i: dump interrupt bindings]
(XEN) Guest interrupt information:
(XEN)    IRQ:   0 affinity:0001 vec:f0 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   1 affinity:0001 vec:38 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   2 affinity:ffff vec:e2 type=XT-PIC          status=00000000 mapped, unbound
(XEN)    IRQ:   3 affinity:0001 vec:40 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   4 affinity:0001 vec:48 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   5 affinity:0001 vec:50 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   6 affinity:0001 vec:58 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   7 affinity:0001 vec:60 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   8 affinity:0001 vec:68 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  8(-S--),
(XEN)    IRQ:   9 affinity:0001 vec:70 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0:  9(-S--),
(XEN)    IRQ:  10 affinity:0001 vec:78 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  11 affinity:0001 vec:88 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  12 affinity:0001 vec:90 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  13 affinity:000f vec:98 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  14 affinity:0001 vec:a0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  15 affinity:0001 vec:a8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  16 affinity:0001 vec:b0 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 16(-S--),
(XEN)    IRQ:  18 affinity:000f vec:b8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  19 affinity:0001 vec:c0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  20 affinity:000f vec:d8 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  22 affinity:0001 vec:d0 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  23 affinity:0001 vec:c8 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 23(-S--),
(XEN)    IRQ:  24 affinity:0001 vec:28 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  25 affinity:0001 vec:30 type=DMA_MSI         status=00000000 mapped, unbound
(XEN)    IRQ:  26 affinity:0001 vec:61 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:279(PS--),
(XEN)    IRQ:  27 affinity:0001 vec:29 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:278(-S--),
(XEN)    IRQ:  28 affinity:0001 vec:31 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:277(-S--),
(XEN)    IRQ:  29 affinity:0001 vec:39 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:276(-S--),
(XEN)    IRQ:  30 affinity:0001 vec:41 type=PCI-MSI         status=00000010 in-flight=1 domain-list=0:275(PS-M),
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vec=f0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  1 Vec 56:
(XEN)       Apic 0x00, Pin  1: vec=38 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  3 Vec 64:
(XEN)       Apic 0x00, Pin  3: vec=40 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  4 Vec 72:
(XEN)       Apic 0x00, Pin  4: vec=48 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  5 Vec 80:
(XEN)       Apic 0x00, Pin  5: vec=50 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  6 Vec 88:
(XEN)       Apic 0x00, Pin  6: vec=58 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  7 Vec 96:
(XEN)       Apic 0x00, Pin  7: vec=60 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  8 Vec104:
(XEN)       Apic 0x00, Pin  8: vec=68 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ  9 Vec112:
(XEN)       Apic 0x00, Pin  9: vec=70 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 10 Vec120:
(XEN)       Apic 0x00, Pin 10: vec=78 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 11 Vec136:
(XEN)       Apic 0x00, Pin 11: vec=88 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 12 Vec144:
(XEN)       Apic 0x00, Pin 12: vec=90 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 13 Vec152:
(XEN)       Apic 0x00, Pin 13: vec=98 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 dest_id:0
(XEN)     IRQ 14 Vec160:
(XEN)       Apic 0x00, Pin 14: vec=a0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 15 Vec168:
(XEN)       Apic 0x00, Pin 15: vec=a8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:0
(XEN)     IRQ 16 Vec176:
(XEN)       Apic 0x00, Pin 16: vec=b0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN)     IRQ 18 Vec184:
(XEN)       Apic 0x00, Pin 18: vec=b8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 19 Vec192:
(XEN)       Apic 0x00, Pin 19: vec=c0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 20 Vec216:
(XEN)       Apic 0x00, Pin 20: vec=d8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 22 Vec208:
(XEN)       Apic 0x00, Pin 22: vec=d0 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=1 dest_id:0
(XEN)     IRQ 23 Vec200:
(XEN)       Apic 0x00, Pin 23: vec=c8 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:0
(XEN) [m: memory info]
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64512kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 522236kB free
(XEN)     heap[18]: 1048572kB free
(XEN)     heap[19]: 691380kB free
(XEN)     heap[20]: 536872kB free
(XEN)     Dom heap: 3256788kB free
(XEN) [n: NMI statistics]
(XEN) CPU	NMI
(XEN)   0	  0
(XEN)   1	  0
(XEN)   2	  0
(XEN)   3	  0
(XEN) dom0 vcpu0: NMI neither pending nor masked
(XEN) [q: dump domain (and guest debug) info]
(XEN) 'q' pressed -> dumping domain info (now=0x1A:FC175F04)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=187539 xenheap_pages=6 shared_pages=0 paged_pages=0 dirty_cpus={1-3} max_pages=188147
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-407, 40c-cfb, d00-204f, 2058-ffff }
(XEN)     Interrupts { 0-279 }
(XEN)     I/O Memory { 0-febff, fec01-fedff, fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 0000000000148917: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 0000000000148916: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000148915: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000148914: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000aa0fd: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000013f428: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={1}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={2}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={3}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/-1)
(XEN) Notifying guest 0:1 (virq 1, port 11, stat 0/0/0)
(XEN) Notifying guest 0:2 (virq 1, port 17, stat 0/0/0)
(XEN) Notifying guest 0:3 (virq 1, port 23, stat 0/0/0)

(XEN) Shared frames 0 -- Saved frames 0
[  116.075921] vcpu 1
(XEN) [r: dump run queues]
[  116.075922]  (XEN) sched_smt_power_savings: disabled
(XEN) NOW=0x0000001B07DDCE01
(XEN) Idle cpupool:
(XEN) Scheduler: SMP Credit Scheduler (credit)
 (XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = -10
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 1333
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us
0: masked=0 pend(XEN) idlers: 000c
(XEN) active vcpus:
(XEN) 	  1: ing=1 event_sel [0.1] pri=-2 flags=0 cpu=1 credit=-612 [w=256]
0000000000000001(XEN) Cpupool 0:
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 4
(XEN) 	master             = 0
(XEN) 	credit             = 400
(XEN) 	credit balance     = -10
(XEN) 	weight             = 256
(XEN) 	runq_sort          = 1333
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 10ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 1
(XEN) 	migration delay    = 0us

(XEN) idlers: 000c
(XEN) active vcpus:
(XEN) 	  1: [0.1] pri=-2 flags=0 cpu=1 credit=-1157 [w=256]
(XEN) CPU[00]  sort=1333, sibling=0001, core=000f
(XEN) 	run: [32767.0] pri=0 flags=0 cpu=0
(XEN) 	  1: [0.0] pri=0 flags=0 cpu=0 credit=-411 [w=256]
(XEN) CPU[01]  sort=1333, sibling=0002, core=000f
(XEN) 	run: [0.1] pri=-2 flags=0 cpu=1 credit=-1429 [w=256]
(XEN) 	  1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02]  sort=1333, sibling=0004, core=000f
(XEN) 	run: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03]  sort=1333, sibling=0008, core=000f
(XEN) 	run: [32767.3] pri=-64 flags=0 cpu=3
(XEN) [s: dump softtsc stats]
(XEN) TSC marked as reliable, warp = 0 (count=2)
(XEN) No domains have emulated TSC
(XEN) [t: display multi-cpu clock info]
(XEN) Synced stime skew: max=26ns avg=26ns samples=1 current=26ns
(XEN) Synced cycles skew: max=164 avg=164 samples=1 current=164
(XEN) [u: dump numa info]
(XEN) 'u' pressed -> dumping numa info (now-0x1B:15603724)
(XEN) idx0 -> NODE0 start->0 size->1369600 free->814197
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE0
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE0
(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 187539):
(XEN)     Node 0: 187539
(XEN) [v: dump Intel's VMCS]
(XEN) *********** VMCS Areas **************
(XEN) **************************************
(XEN) [z: print ioapic info]
(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #2 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #2......
(XEN) .... register #00: 02000000
(XEN) .......    : physical APIC id: 02
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:
(XEN)  00 000 00  1    0    0   0   0    0    0    00
(XEN)  01 000 00  0    0    0   0   0    1    1    38
(XEN)  02 000 00  0    0    0   0   0    1    1    F0
(XEN)  03 000 00  0    0    0   0   0    1    1    40
(XEN)  04 000 00  0    0    0   0   0    1    1    48
(XEN)  05 000 00  0    0    0   0   0    1    1    50
(XEN)  06 000 00  0    0    0   0   0    1    1    58
(XEN)  07 000 00  0    0    0   0   0    1    1    60
(XEN)  08 000 00  0    0    0   0   0    1    1    68
(XEN)  09 000 00  0    1    0   0   0    1    1    70
(XEN)  0a 000 00  0    0    0   0   0    1    1    78
(XEN)  0b 000 00  0    0    0   0   0    1    1    88
(XEN)  0c 000 00  0    0    0   0   0    1    1    90
(XEN)  0d 000 00  1    0    0   0   0    1    1    98
(XEN)  0e 000 00  0    0    0   0   0    1    1    A0
(XEN)  0f 000 00  0    0    0   0   0    1    1    A8
(XEN)  10 000 00  0    1    0   1   0    1    1    B0
(XEN)  11 000 00  1    0    0   0   0    0    0    00
(XEN)  12 000 00  1    1    0   1   0    1    1    B8
(XEN)  13 000 00  1    1    0   1   0    1    1    C0
(XEN)  14 000 00  1    1    0   1   0    1    1    D8
(XEN)  15 000 00  1    0    0   0   0    0    0    00
(XEN)  16 000 00  1    1    0   1   0    1    1    D0
(XEN)  17 000 00  0    1    0   1   0    1    1    C8
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ56 -> 0:1
(XEN) IRQ64 -> 0:3
(XEN) IRQ72 -> 0:4
(XEN) IRQ80 -> 0:5
(XEN) IRQ88 -> 0:6
(XEN) IRQ96 -> 0:7
(XEN) IRQ104 -> 0:8
(XEN) IRQ112 -> 0:9
(XEN) IRQ120 -> 0:10
(XEN) IRQ136 -> 0:11
(XEN) IRQ144 -> 0:12
(XEN) IRQ152 -> 0:13
(XEN) IRQ160 -> 0:14
(XEN) IRQ168 -> 0:15
(XEN) IRQ176 -> 0:16
(XEN) IRQ184 -> 0:18
(XEN) IRQ192 -> 0:19
(XEN) IRQ216 -> 0:20
(XEN) IRQ208 -> 0:22
(XEN) IRQ200 -> 0:23
(XEN) .................................... done.
[  116.110147]   1: masked=0 pending=0 event_sel 0000000000000000
[  116.214405]   2: masked=1 pending=1 event_sel 0000000000000001
[  116.255572]   3: masked=1 pending=1 event_sel 0000000000000001
[  116.297329]   
[  116.335006] pending:
[  116.335006]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.484639]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.617624]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.631586]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.645546]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.659507]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.673468]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.687429]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000400082082
[  116.701390]    
[  116.704702] global mask:
[  116.704702]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.719915]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.733876]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.747838]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.761798]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.775760]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.789721]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  116.803681]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  116.817642]    
[  116.820954] globally unmasked:
[  116.820954]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.836704]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.850665]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.864627]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.878587]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.892549]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.906510]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.920472]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000400082082
[  116.934431]    
[  116.937743] local cpu1 mask:
[  116.937743]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.953314]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.967275]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.981236]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  116.995197]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.009158]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.023119]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.037080]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000001f80
[  117.051041]    
[  117.054352] locally unmasked:
[  117.054353]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.070013]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.083974]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.097971]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.111933]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.125893]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.139855]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.153815]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000080
[  117.167776]    
[  117.171087] pending list:
[  117.174131]   0: event 1 -> irq 272 locally-masked
[  117.179143]   1: event 7 -> irq 278
[  117.182811]   2: event 13 -> irq 284 locally-masked
[  117.187912]   3: event 19 -> irq 290 locally-masked
[  117.193014]   0: event 34 -> irq 302 locally-masked
[  117.198135] 
[  117.198136] vcpu 0
[  117.198137]   0: masked=0 pending=1 event_sel 0000000000000001
[  117.203397]   1: masked=0 pending=0 event_sel 0000000000000000
[  117.209482]   2: masked=1 pending=1 event_sel 0000000000000001
[  117.215567]   3: masked=1 pending=1 event_sel 0000000000000001
[  117.221653]   
[  117.227737] pending:
[  117.227737]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.242593]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.256554]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.270514]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.284476]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.298436]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.312397]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.326359]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000040028a00e
[  117.340320]    
[  117.343631] global mask:
[  117.343632]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.358845]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.372806]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.386767]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.400728]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.414688]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.428650]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  117.442610]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  117.456572]    
[  117.459883] globally unmasked:
[  117.459884]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.475633]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.489595]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.503556]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.517517]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.531477]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.545438]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.559400]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000040028a00a
[  117.573361]    
[  117.576671] local cpu0 mask:
[  117.576672]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.592243]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.606205]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.620166]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.634127]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.648087]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.662049]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.676009]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 fffffffffe00007f
[  117.689971]    
[  117.693281] locally unmasked:
[  117.693282]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.708942]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.722904]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.736864]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.750826]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.764787]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.778747]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.792709]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000040000000a
[  117.806670]    
[  117.809981] pending list:
[  117.813029]   0: event 1 -> irq 272
[  117.816694]   0: event 2 -> irq 273 globally-masked
[  117.821794]   0: event 3 -> irq 274
[  117.825464]   2: event 13 -> irq 284 locally-masked
[  117.830566]   2: event 15 -> irq 286 locally-masked
[  117.835666]   3: event 19 -> irq 290 locally-masked
[  117.840766]   3: event 21 -> irq 292 locally-masked
[  117.845868]   0: event 34 -> irq 302
[  117.849647] 
[  117.849648] vcpu 2
[  117.849648]   0: masked=0 pending=0 event_sel 0000000000000000
[  117.854908]   1: masked=0 pending=0 event_sel 0000000000000000
[  117.860992]   2: masked=0 pending=1 event_sel 0000000000000001
[  117.867079]   3: masked=1 pending=1 event_sel 0000000000000001
[  117.873163]   
[  117.879248] pending:
[  117.879249]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.894105]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.908066]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.922026]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.935988]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.949948]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.963910]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  117.977871]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000028e004
[  117.991832]    
[  117.995143] global mask:
[  117.995143]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.010357]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.024318]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.038279]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.052239]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.066200]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.080161]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.094159]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  118.108119]    
[  118.111431] globally unmasked:
[  118.111432]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.127182]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.141143]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.155103]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.169064]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.183026]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.196986]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.210947]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000028a000
[  118.224908]    
[  118.228220] local cpu2 mask:
[  118.228221]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.243791]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.257753]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.271713]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.285674]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.299636]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.313596]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.327558]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000007e000
[  118.341518]    
[  118.344829] locally unmasked:
[  118.344829]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.360491]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.374451]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.388412]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.402373]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.416334]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.430296]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.444256]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 000000000000a000
[  118.458218]    
[  118.461529] pending list:
[  118.464573]   0: event 2 -> irq 273 globally-masked locally-masked
[  118.471015]   2: event 13 -> irq 284
[  118.474775]   2: event 14 -> irq 285 globally-masked
[  118.479965]   2: event 15 -> irq 286
[  118.483723]   3: event 19 -> irq 290 locally-masked
[  118.488825]   3: event 21 -> irq 292 locally-masked
[  118.493946] 
[  118.493947] vcpu 3
[  118.493948]   0: masked=0 pending=0 event_sel 0000000000000000
[  118.499207]   1: masked=0 pending=0 event_sel 0000000000000000
[  118.505292]   2: masked=0 pending=0 event_sel 0000000000000000
[  118.511377]   3: masked=0 pending=1 event_sel 0000000000000001
[  118.517463]   
[  118.523548] pending:
[  118.523549]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.538403]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.552365]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.566326]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.580286]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.594248]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.608208]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.622170]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000384004
[  118.636130]    
[  118.639442] global mask:
[  118.639443]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.654656]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.668616]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.682577]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.696538]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.710499]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.724461]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
[  118.738421]    ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffc000104105
[  118.752382]    
[  118.755695] globally unmasked:
[  118.755696]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.771444]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.785405]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.799367]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.813328]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.827288]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.841250]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.855210]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  118.869172]    
[  118.872482] local cpu3 mask:
[  118.872483]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.888054]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.902015]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.915977]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.929937]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.943899]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.957859]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  118.971820]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000001f80000
[  118.985781]    
[  118.989093] locally unmasked:
[  118.989093]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.004753]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.018715]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.032676]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.046637]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.060597]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.074558]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  119.088519]    0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000280000
[  119.102516]    
[  119.105827] pending list:
[  119.108870]   0: event 2 -> irq 273 globally-masked locally-masked
[  119.115314]   2: event 14 -> irq 285 globally-masked locally-masked
[  119.121847]   3: event 19 -> irq 290
[  119.125605]   3: event 20 -> irq 291 globally-masked
[  119.130795]   3: event 21 -> irq 292

[-- Attachment #5: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-08-09 15:21             ` Ben Guthro
@ 2012-08-09 15:37               ` Jan Beulich
  2012-08-09 15:46                 ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-09 15:37 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, Thomas Goetz, xen-devel

>>> On 09.08.12 at 17:21, Ben Guthro <ben@guthro.net> wrote:
> Attached is a new run for
> new boot (pre-s3)
> first suspend / resume cycle (s3-first)
> second (failing) suspend / resume cycle (s3-second)

That confirms that the corruption occurs during the first suspend,
but presumably towards its end (where MSI and/or interrupt
redirection stuff already got restored). There's nothing I can add
on top of my recommendations as to the debugging of this.

One thing I think you didn't tell us so far is whether without
interrupt remapping (or the IOMMU turned off altogether) the
problem would also be observed.

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-08-09 15:37               ` Jan Beulich
@ 2012-08-09 15:46                 ` Ben Guthro
  2012-08-09 15:51                   ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-09 15:46 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, xen-devel

On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
> One thing I think you didn't tell us so far is whether without
> interrupt remapping (or the IOMMU turned off altogether) the
> problem would also be observed.

I assume you mean the xen cli param iommu=off for the second test here.
What parameter should I be flipping for the first?


* Re: Xen4.2 S3 regression?
  2012-08-09 15:46                 ` Ben Guthro
@ 2012-08-09 15:51                   ` Jan Beulich
  2012-08-09 16:09                     ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-09 15:51 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, xen-devel

>>> On 09.08.12 at 17:46, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> One thing I think you didn't tell us so far is whether without
>> interrupt remapping (or the IOMMU turned off altogether) the
>> problem would also be observed.
> 
> I assume you mean the xen cli param iommu=off for the second test here.
> What parameter should I be flipping for the first?

iommu=no-intremap

Jan


* Re: Xen4.2 S3 regression?
  2012-08-09 15:51                   ` Jan Beulich
@ 2012-08-09 16:09                     ` Ben Guthro
  2012-08-10  6:50                       ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-09 16:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, xen-devel

iommu=no-intremap
This seems to work around the issue on this platform, performing
multiple suspend/resume cycles, and ahci came back afterwards just
fine.

What is the downside to flipping this off?

iommu=off
This test behaved similarly to the above, also working around the issue.
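
For reference, a minimal sketch of the two boot entries used for these
tests (all other options elided; the xen.gz path is illustrative):

```shell
# Test 1: IOMMU enabled, but interrupt remapping disabled
multiboot /xen.gz iommu=no-intremap console=com1 loglvl=all

# Test 2: IOMMU (and with it interrupt remapping) disabled entirely
multiboot /xen.gz iommu=off console=com1 loglvl=all
```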


On Thu, Aug 9, 2012 at 11:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.08.12 at 17:46, Ben Guthro <ben@guthro.net> wrote:
>> On Thu, Aug 9, 2012 at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> One thing I think you didn't tell us so far is whether without
>>> interrupt remapping (or the IOMMU turned off altogether) the
>>> problem would also be observed.
>>
>> I assume you mean the xen cli param iommu=off for the second test here.
>> What parameter should I be flipping for the first?
>
> iommu=no-intremap
>
> Jan
>


* Re: Xen4.2 S3 regression?
  2012-08-09 16:09                     ` Ben Guthro
@ 2012-08-10  6:50                       ` Jan Beulich
  2012-08-10 19:15                         ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-10  6:50 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, xen-devel

>>> On 09.08.12 at 18:09, Ben Guthro <ben@guthro.net> wrote:
> iommu=no-intremap
> This seems to work around the issue on this platform, performing
> multiple suspend/resume cycles, and ahci came back afterwards just
> fine.
> 
> What is the downside to flipping this off?

Loss of security (against misbehaving/malicious guests). So we
certainly want/need to get to the bottom of this (especially if
this is not only one kind of system that's affected).

> iommu=off
> This test behaved similarly to the above, also working around the issue.

Of course, this is a superset of the former.

This result, however, makes it more likely again to indeed be a
Xen side problem, not Dom0 induced corruption.

Jan


* Re: Xen4.2 S3 regression?
  2012-08-10  6:50                       ` Jan Beulich
@ 2012-08-10 19:15                         ` Ben Guthro
  2012-08-14 17:31                           ` Ben Guthro
       [not found]                           ` <CAAnFQG-u1VUDgn11ZW0=UaYC4MvUtxxq8ZjjUOrNpXTSUWP41Q@mail.gmail.com>
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-10 19:15 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, xen-devel

I'll continue to investigate, as my schedule allows, but haven't found
any smoking gun just yet.

This happens to be on an Ivybridge system - I have other Sandybridge
systems that will go to sleep, but never wake up at all, forcing a
hard power cycle.

I tested these iommu= parameters on one of these machines, to no
effect. Every time they go into S3, a hard reset is necessary to get
them to come out of it.




On Fri, Aug 10, 2012 at 2:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 09.08.12 at 18:09, Ben Guthro <ben@guthro.net> wrote:
>> iommu=no-intremap
>> This seems to work around the issue on this platform, performing
>> multiple suspend/resume cycles, and ahci came back afterwards just
>> fine.
>>
>> What is the downside to flipping this off?
>
> Loss of security (against misbehaving/malicious guests). So we
> certainly want/need to get to the bottom of this (especially if
> this is not only one kind of system that's affected).
>
>> iommu=off
>> This test behaved similarly to the above, also working around the issue.
>
> Of course, this is a superset of the former.
>
> This result, however, makes it more likely again to indeed be a
> Xen side problem, not Dom0 induced corruption.
>
> Jan
>


* Re: Xen4.2 S3 regression?
  2012-08-10 19:15                         ` Ben Guthro
@ 2012-08-14 17:31                           ` Ben Guthro
  2012-08-15  8:11                             ` Jan Beulich
       [not found]                           ` <CAAnFQG-u1VUDgn11ZW0=UaYC4MvUtxxq8ZjjUOrNpXTSUWP41Q@mail.gmail.com>
  1 sibling, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-14 17:31 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

I've bisected this issue - and it looks like it is a rather old problem.

The following changeset introduced it in the 4.1 development stream on
Jun 17, 2010

http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42


Since we (XenClient Enterprise / NxTop) skipped over Xen-4.1 - we
never ran into this until now.
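
(A sketch of the bisection workflow, assuming a local clone of
xen-unstable.hg; the revision numbers here are illustrative, not the
actual good/bad endpoints:)

```shell
# Mark the tip bad and a known-working revision good, then rebuild
# and S3-test whatever revision hg checks out at each step.
hg bisect --reset
hg bisect --bad                 # current tip: AHCI fails after resume
hg bisect --good 21000          # illustrative known-good revision
# after each build + suspend/resume test, record the result:
hg bisect --good                # S3 survived repeated cycles
hg bisect --bad                 # I/O failures after resume
```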


Any thoughts as to a solution?


-Ben



On Fri, Aug 10, 2012 at 3:15 PM, Ben Guthro <ben@guthro.net> wrote:
> I'll continue to investigate, as my schedule allows, but haven't found
> any smoking gun just yet.
>
> This happens to be on an Ivybridge system - I have other Sandybridge
> systems that will go to sleep, but never wake up at all, forcing a
> hard power cycle.
>
> I tested these iommu= parameters on one of these machines, to no
> effect. Every time they go into S3, a hard reset is necessary to get
> them to come out of it.
>
>
>
>
> On Fri, Aug 10, 2012 at 2:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 09.08.12 at 18:09, Ben Guthro <ben@guthro.net> wrote:
>>> iommu=no-intremap
>>> This seems to work around the issue on this platform, performing
>>> multiple suspend/resume cycles, and ahci came back afterwards just
>>> fine.
>>>
>>> What is the downside to flipping this off?
>>
>> Loss of security (against misbehaving/malicious guests). So we
>> certainly want/need to get to the bottom of this (especially if
>> this is not only one kind of system that's affected).
>>
>>> iommu=off
>>> This test behaved similarly to the above, also working around the issue.
>>
>> Of course, this is a superset of the former.
>>
>> This result, however, makes it more likely again to indeed be a
>> Xen side problem, not Dom0 induced corruption.
>>
>> Jan
>>


* Re: Xen4.2 S3 regression?
  2012-08-14 17:31                           ` Ben Guthro
@ 2012-08-15  8:11                             ` Jan Beulich
  2012-08-15 10:32                               ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-15  8:11 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 14.08.12 at 19:31, Ben Guthro <ben@guthro.net> wrote:
> I've bisected this issue - and it looks like it is a rather old problem.
> 
> The following changeset introduced it in the 4.1 development stream on
> Jun 17, 2010
> 
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42 

Interesting. That I wouldn't have suspected at all.

> Since we (XenClient Enterprise / NxTop) skipped over Xen-4.1 - we
> never ran into this until now.
> 
> Any thoughts as to a solution?

First try collectively removing the three calls to
evtchn_move_pirqs() in xen/common/schedule.c. If that helps,
see whether any smaller set also does. From the result of this,
I'll have to think further, perhaps handing you a debugging
patch.
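
A quick way to stub those calls out (a sketch only, assuming each call
sits on its own line; demonstrated here on a tiny stand-in file rather
than the real xen/common/schedule.c):

```shell
# Stand-in for schedule.c with one evtchn_move_pirqs() call.
cat > /tmp/schedule_snippet.c <<'EOF'
static void vcpu_migrate(struct vcpu *v)
{
    evtchn_move_pirqs(v);
}
EOF

# Comment out every line that calls evtchn_move_pirqs(),
# preserving the original indentation.
sed -i 's|^\([[:space:]]*\)\(evtchn_move_pirqs(.*\)$|\1/* \2 */|' /tmp/schedule_snippet.c

# Show the now-commented call.
grep 'evtchn_move_pirqs' /tmp/schedule_snippet.c
```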

Jan


* Re: Xen4.2 S3 regression?
  2012-08-15  8:11                             ` Jan Beulich
@ 2012-08-15 10:32                               ` Ben Guthro
  2012-08-15 12:32                                 ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-15 10:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Wed, Aug 15, 2012 at 4:11 AM, Jan Beulich <JBeulich@suse.com> wrote:

> First try collectively removing the three calls to
> evtchn_move_pirqs() in xen/common/schedule.c. If that helps,

Sadly, I tried this yesterday (against the tip) with no success.

I'm starting to question my bisecting results.

It is, of course easy to know that it failed - but since it doesn't
fail until sometime after the first, second, or third suspend/resume
cycle - it can be easy to misinterpret a failure that has not occurred
yet as a success.

I will retest this morning when I get to work with this changeset, and
its parent, to better verify my results.

I'll also try the evtchn_move_pirqs() removal against the changeset
above, to see if my results differ than when I did the same test
against the tip.

> see whether any smaller set also does. From the result of this,
> I'll have to think further, perhaps handing you a debugging
> patch.

I'm happy to test any debug patch.
I will report results from the tests above later this morning.

Ben


* Re: Xen4.2 S3 regression?
  2012-08-15 10:32                               ` Ben Guthro
@ 2012-08-15 12:32                                 ` Ben Guthro
  2012-08-15 12:58                                   ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-15 12:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Wed, Aug 15, 2012 at 6:32 AM, Ben Guthro <ben@guthro.net> wrote:
> I will retest this morning when I get to work with this changeset, and
> its parent, to better verify my results.

I retested this, and am more convinced now that this introduced a failure.
(see test results below)

> I'll also try the evtchn_move_pirqs() removal against the changeset
> above, to see if my results differ than when I did the same test
> against the tip.

21624:b9c541d9c138
12 successful suspend / resume cycles

21625:0695a5cdcb42
2 successful suspend / resume cycles - failure on 3rd (ahci)

21625:0695a5cdcb42 + evtchn_move_pirqs() removal:
12 successful suspend / resume cycles

This was encouraging, so I tried the same change against the tree
tip...unfortunately that didn't go as well.

tip + evtchn_move_pirqs() removal:
did not resume from the first suspend.


* Re: Xen4.2 S3 regression?
  2012-08-15 12:32                                 ` Ben Guthro
@ 2012-08-15 12:58                                   ` Jan Beulich
  2012-08-15 13:11                                     ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-15 12:58 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
> This was encouraging, so I tried the same change against the tree
> tip...unfortunately that didn't go as well.
> 
> tip + evtchn_move_pirqs() removal:
> did not resume from the first suspend.

Any logs of this (i.e. indications of what's going wrong - still
the same AHCI not working, but else apparently fine)?

Jan


* Re: Xen4.2 S3 regression?
  2012-08-15 12:58                                   ` Jan Beulich
@ 2012-08-15 13:11                                     ` Ben Guthro
  2012-08-15 14:50                                       ` Jan Beulich
  2012-08-16  8:31                                       ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-15 13:11 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Wed, Aug 15, 2012 at 8:58 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
>> This was encouraging, so I tried the same change against the tree
>> tip...unfortunately that didn't go as well.
>>
>> tip + evtchn_move_pirqs() removal:
>> did not resume from the first suspend.
>
> Any logs of this (i.e. indications of what's going wrong - still
> the same AHCI not working, but else apparently fine)?

This is a bit strange, in that the observed behavior changes when I am
logging to the serial connection.

When I am logging to serial, the failure is the same as before -
The first suspend / resume works -
The second fails with AHCI not working

However, when I am not logging to serial - the system goes down, but
never comes back up. I cannot ssh in, and no graphics are displayed on
the screen. My only recourse is to hard power cycle the machine.

This, of course makes collecting logs difficult.


* Re: Xen4.2 S3 regression?
  2012-08-15 13:11                                     ` Ben Guthro
@ 2012-08-15 14:50                                       ` Jan Beulich
  2012-08-15 14:58                                         ` Ben Guthro
  2012-08-16  8:31                                       ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-15 14:50 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 15.08.12 at 15:11, Ben Guthro <ben@guthro.net> wrote:
> On Wed, Aug 15, 2012 at 8:58 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
>>> This was encouraging, so I tried the same change against the tree
>>> tip...unfortunately that didn't go as well.
>>>
>>> tip + evtchn_move_pirqs() removal:
>>> did not resume from the first suspend.
>>
>> Any logs of this (i.e. indications of what's going wrong - still
>> the same AHCI not working, but else apparently fine)?
> 
> This is a bit strange, in that the observed behavior changes when I am
> logging to the serial connection.
> 
> When I am logging to serial, the failure is the same as before -
> The first suspend / resume works -
> The second fails with AHCI not working
> 
> However, when I am not logging to serial - the system goes down, but
> never comes back up. I cannot ssh in, and no graphics are displayed on
> the screen. My only recourse is to hard power cycle the machine.
> 
> This, of course makes collecting logs difficult.

Indeed. Did you try using the serial driver in polling mode
(without IRQ that is)?

Jan


* Re: Xen4.2 S3 regression?
  2012-08-15 14:50                                       ` Jan Beulich
@ 2012-08-15 14:58                                         ` Ben Guthro
  2012-08-15 15:00                                           ` Andrew Cooper
  2012-08-15 15:06                                           ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-15 14:58 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel


On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:

>
> > This, of course makes collecting logs difficult.
>
> Indeed. Did you try using the serial driver in polling mode
> (without IRQ that is)?
>
>
I'm not familiar with how to set this up, and a quick glance through
xen/drivers/char/ns16550.c didn't really shed much light.

Is there a README / wiki page, etc describing this?


* Re: Xen4.2 S3 regression?
  2012-08-15 14:58                                         ` Ben Guthro
@ 2012-08-15 15:00                                           ` Andrew Cooper
  2012-08-15 15:06                                           ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Andrew Cooper @ 2012-08-15 15:00 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Thomas Goetz, xen-devel, John Baboval, Jan Beulich,
	Konrad Rzeszutek Wilk



On 15/08/12 15:58, Ben Guthro wrote:
>
>
> On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com
> <mailto:JBeulich@suse.com>> wrote:
>
>
>     > This, of course makes collecting logs difficult.
>
>     Indeed. Did you try using the serial driver in polling mode
>     (without IRQ that is)?
>
>
> I'm not familiar with how to set this up, and a quick glance through
> xen/drivers/char/ns16550.c didn't really shed much light.
>
> Is there a README / wiki page, etc describing this?

http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html

You want the com1 entry

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com



* Re: Xen4.2 S3 regression?
  2012-08-15 14:58                                         ` Ben Guthro
  2012-08-15 15:00                                           ` Andrew Cooper
@ 2012-08-15 15:06                                           ` Jan Beulich
  2012-08-15 15:16                                             ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-15 15:06 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 15.08.12 at 16:58, Ben Guthro <ben@guthro.net> wrote:
> On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>>
>> > This, of course makes collecting logs difficult.
>>
>> Indeed. Did you try using the serial driver in polling mode
>> (without IRQ that is)?
>>
>>
> I'm not familiar with how to set this up, and a quick glance through
> xen/drivers/char/ns16550.c didn't really shed much light.
> 
> Is there a README / wiki page, etc describing this?

There's docs/misc/xen-command-line.markdown, which describes
this. It's basically

com1=<baud>,8n1,<port>,<irq>

and you'd want to set <irq> to 0 (I have a patch pending for
post-4.2 that allows omitting all the fields that you don't really
care to change from their default values, but for now you'll
have to specify them).
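
Concretely, assuming the legacy COM1 port at 0x3f8, a polling-mode
entry might look like this (the other options are illustrative):

```shell
# <irq> = 0 puts the ns16550 driver in polling mode.
multiboot /xen.gz com1=115200,8n1,0x3f8,0 console=com1 sync_console loglvl=all
```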

Jan


* Re: Xen4.2 S3 regression?
  2012-08-15 15:06                                           ` Jan Beulich
@ 2012-08-15 15:16                                             ` Ben Guthro
  0 siblings, 0 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-15 15:16 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel


OK, well, I tried it with the following boot config:

With serial:

multiboot /xen.gz com1=115200,8n1,pci,0 console=com1 dom0_mem=max:1024M
cpufreq=xen cpuidle sync_console loglvl=all xsave=0
module /vmlinuz-3.2.23-orc
root=/dev/mapper/NxVG--eb56f027--0aeb--4e9a--9233--678d57b9dc9e-NxDisk5 ro
ignore_loglevel no_console_suspend  xencons=tty console=hvc

I'm not sure it matters, but I'm making use of the renamed "magic" patch to
autodetect the PCI serial card.


Without serial:

multiboot /xen.gz console=null dom0_mem=max:1024M cpufreq=xen cpuidle
xsave=0
module /vmlinuz-3.2.23-orc dummy
root=/dev/mapper/NxVG--eb56f027--0aeb--4e9a--9233--678d57b9dc9e-NxDisk5 ro
module /initrd.img-3.2.23-orc



All of that said - this exhibited the same behavior as before, with the
presence of the serial connection changing the test behavior.





On Wed, Aug 15, 2012 at 11:06 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 15.08.12 at 16:58, Ben Guthro <ben@guthro.net> wrote:
> > On Wed, Aug 15, 2012 at 10:50 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >>
> >> > This, of course makes collecting logs difficult.
> >>
> >> Indeed. Did you try using the serial driver in polling mode
> >> (without IRQ that is)?
> >>
> >>
> > I'm not familiar with how to set this up, and a quick glance through
> > xen/drivers/char/ns16550.c didn't really shed much light.
> >
> > Is there a README / wiki page, etc describing this?
>
> There's docs/misc/xen-command-line.markdown, which describes
> this. It's basically
>
> com1=<baud>,8n1,<port>,<irq>
>
> and you'd want to set <irq> to 0 (I have a patch pending for
> post-4.2 that allows omitting all the fields that you don't really
> care to change from their default values, but for now you'll
> have to specify them).
>
> Jan
>
>


* Re: Xen4.2 S3 regression?
  2012-08-15 13:11                                     ` Ben Guthro
  2012-08-15 14:50                                       ` Jan Beulich
@ 2012-08-16  8:31                                       ` Jan Beulich
  2012-08-16 10:37                                         ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-16  8:31 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 15.08.12 at 15:11, Ben Guthro <ben@guthro.net> wrote:
> On Wed, Aug 15, 2012 at 8:58 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 15.08.12 at 14:32, Ben Guthro <ben@guthro.net> wrote:
>>> This was encouraging, so I tried the same change against the tree
>>> tip...unfortunately that didn't go as well.
>>>
>>> tip + evtchn_move_pirqs() removal:
>>> did not resume from the first suspend.
>>
>> Any logs of this (i.e. indications of what's going wrong - still
>> the same AHCI not working, but else apparently fine)?
> 
> This is a bit strange, in that the observed behavior changes when I am
> logging to the serial connection.
> 
> When I am logging to serial, the failure is the same as before -
> The first suspend / resume works -
> The second fails with AHCI not working

And this is with and/or without the evtchn_move_pirqs() calls
removed? Otherwise, this might allow us at least debugging
that part of the problem.

> However, when I am not logging to serial - the system goes down, but
> never comes back up. I cannot ssh in, and no graphics are displayed on
> the screen. My only recourse is to hard power cycle the machine.
> 
> This, of course makes collecting logs difficult.

Did you try whether anything would make it out to the screen
when you allow Xen to continue to access the video buffer even
post-boot? Quite possibly for this it would be advantageous (or
even required, as I don't think the video mode would get restored
after coming back out of S3) to also only use a (simpler) text mode
console (requiring that you have this up on the screen when you
invoke S3, i.e. you'd have to switch away from or not make use of
the GUI). That would be "vga=text-80x25,keep" on the Xen
command line (while 80x50 or 80x60 would allow for more output
to remain visible, even those modes would - I think - need code
to be added to get restored during resume from S3).
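
As a sketch, the relevant part of such a boot entry (the remaining
options are illustrative):

```shell
# Text-mode console, kept mapped by Xen post-boot ("keep") so late
# output has a chance of reaching the screen.
multiboot /xen.gz vga=text-80x25,keep console=vga loglvl=all guest_loglvl=all
```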

Jan


* Re: Xen4.2 S3 regression?
  2012-08-16  8:31                                       ` Jan Beulich
@ 2012-08-16 10:37                                         ` Ben Guthro
  2012-08-16 11:07                                           ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-16 10:37 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Thu, Aug 16, 2012 at 4:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> When I am logging to serial, the failure is the same as before -
>> The first suspend / resume works -
>> The second fails with AHCI not working
>
> And this is with and/or without the evtchn_move_pirqs() calls
> removed? Otherwise, this might allow us at least debugging
> that part of the problem.

I am now convinced there is more than one problem:
One is the MSI issue we are chasing here... the other seems to be a
bit more insidious, where the system does not come back from S3 at all
- as mentioned in the Intel bug report from the other thread.

Running on serial to debug the former seems to at least mask the latter.

Removing evtchn_move_pirqs() at the tip does not seem to have the same
effect as removing them from the changeset that I bisected the problem
to.

At the tip, with these changes - I observe no change in behavior -
AHCI still has problems after the 2nd suspend/resume cycle.
At 21625:0695a5cdcb42, with evtchn_move_pirqs() removed - I am able to suspend
/ resume a dozen times, or more.

>
>> However, when I am not logging to serial - the system goes down, but
>> never comes back up. I cannot ssh in, and no graphics are displayed on
>> the screen. My only recourse is to hard power cycle the machine.
>>
>> This, of course makes collecting logs difficult.
>
> Did you try whether anything would make it out to the screen
> when you allow Xen to continue to access the video buffer even
> post-boot? Quite possibly for this it would be advantageous (or
> even required, as I don't think the video mode would get restored
> after coming back out of S3) to also only use a (simpler) text mode
> console (requiring that you have this up on the screen when you
> invoke S3, i.e. you'd have to switch away from or not make use of
> the GUI). That would be "vga=text-80x25,keep" on the Xen
> command line (while 80x50 or 80x60 would allow for more output
> to remain visible, even those modes would - I think - need code
> to be added to get restored during resume from S3).
>

I did not try this, but will investigate when I get to work today.

Ben


* Re: Xen4.2 S3 regression?
  2012-08-16 10:37                                         ` Ben Guthro
@ 2012-08-16 11:07                                           ` Jan Beulich
  2012-08-16 11:56                                             ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-16 11:07 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 16.08.12 at 12:37, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 16, 2012 at 4:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> When I am logging to serial, the failure is the same as before -
>>> The first suspend / resume works -
>>> The second fails with AHCI not working
>>
>> And this is with and/or without the evtchn_move_pirqs() calls
>> removed? Otherwise, this might allow us at least debugging
>> that part of the problem.
> 
> I am now convinced there is more than one problem:
> One is the MSI issue we are chasing here... the other seems to be a
> bit more insidious, where the system does not come back from S3 at all
> - as mentioned in the Intel bug report from the other thread.
> 
> Running on serial to debug the former seems to at least mask the latter.
> 
> Removing evtchn_move_pirqs() at the tip does not seem to have the same
> effect as removing them from the changeset that I bisected the problem
> to.

Odd.

> At the tip, with these changes - I observe no change in behavior -
> AHCI still has problems after the 2nd suspend/resume cycle.
> At 21625:0695a5cdcb42, with evtchn_move_pirqs() removed - I am able to suspend
> / resume a dozen times, or more.

As there ought to be at least some affinity break messages during
the suspend part, and I don't recall having seen any, could you -
for starters - provide a full serial log of the suspend/resume process,
with "loglvl=all guest_loglvl=all" in place? I'll then try to get to
produce a debugging patch for you to try.

Jan


* Re: Xen4.2 S3 regression?
  2012-08-16 11:07                                           ` Jan Beulich
@ 2012-08-16 11:56                                             ` Ben Guthro
  2012-08-17 10:22                                               ` Ben Guthro
  2012-09-03  9:31                                               ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-16 11:56 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Thu, Aug 16, 2012 at 7:07 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
> >>> On 16.08.12 at 12:37, Ben Guthro <ben@guthro.net> wrote:
> > On Thu, Aug 16, 2012 at 4:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> When I am logging to serial, the failure is the same as before -
> >>> The first suspend / resume works -
> >>> The second fails with AHCI not working
> >>
> >> And this is with and/or without the evtchn_move_pirqs() calls
> >> removed? Otherwise, this might allow us at least debugging
> >> that part of the problem.
> >
> > I am now convinced there is more than one problem:
> > One is the MSI issue we are chasing here... the other seems to be a
> > bit more insidious, where the system does not come back from S3 at all
> > - as mentioned in the Intel bug report from the other thread.
> >
> > Running on serial to debug the former seems to at least mask the latter.
> >
> > Removing evtchn_move_pirqs() at the tip does not seem to have the same
> > effect as removing them from the changeset that I bisected the problem
> > to.
>
> Odd.


Indeed. I can't explain it either.


>
>
> > At the tip, with these changes - I observe no change in behavior -
> > AHCI still has problems after the 2nd suspend/resume cycle.
> > At 21625:0695a5cdcb42, with evtchn_move_pirqs() removed - I am able to suspend
> > / resume a dozen times, or more.
>
> As there ought to be at least some affinity break messages during
> the suspend part, and I don't recall having seen any, could you -
> for starters - provide a full serial log of the suspend/resume process,
> with "loglvl=all guest_loglvl=all" in place? I'll then try to get to
> produce a debugging patch for you to try.


In order to not flood the list with large logs, I put the logs here:
https://citrix.sharefile.com/d/sfab699024a54df39
I wasn't sure if you wanted a log with, or without the calls to
evtchn_move_pirqs() commented out - so I included both.

Please let me know if there is anything else you'd like me to experiment with.

Ben

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-08-16 11:56                                             ` Ben Guthro
@ 2012-08-17 10:22                                               ` Ben Guthro
  2012-08-17 10:40                                                 ` Jan Beulich
  2012-09-03  9:31                                               ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-17 10:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Thu, Aug 16, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
> In order to not flood the list with large logs, I put the logs here:
> https://citrix.sharefile.com/d/sfab699024a54df39
> I wasn't sure if you wanted a log with, or without the calls to
> evtchn_move_pirqs() commented out - so I included both.

I received notifications that this got downloaded yesterday by a couple people.
Did you have an opportunity to review it?

If so - did you glean any new knowledge?


Thanks for your time

Ben


* Re: Xen4.2 S3 regression?
  2012-08-17 10:22                                               ` Ben Guthro
@ 2012-08-17 10:40                                                 ` Jan Beulich
  2012-08-23 18:03                                                   ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-17 10:40 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 17.08.12 at 12:22, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 16, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
>> In order to not flood the list with large logs, I put the logs here:
>> https://citrix.sharefile.com/d/sfab699024a54df39 
>> I wasn't sure if you wanted a log with, or without the calls to
>> evtchn_move_pirqs() commented out - so I included both.
> 
> I received notifications that this got downloaded yesterday by a couple 
> people.
> Did you have an opportunity to review it?

Yes, I did.

> If so - did you glean any new knowledge?

Unfortunately not. Instead I was surprised that there were no
IRQ fixup related messages at all, of which I still will need to
make sense.

In any case, I'm planning on putting together a debugging patch,
but can't immediately tell when this will be.

Jan


* Re: Xen4.2 S3 regression?
  2012-08-17 10:40                                                 ` Jan Beulich
@ 2012-08-23 18:03                                                   ` Ben Guthro
  2012-08-23 18:37                                                     ` Andrew Cooper
  2012-08-24 22:55                                                     ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-23 18:03 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

I did some more bisecting here, and I came up with another changeset
that seems to be problematic, Re: IRQs

After bisecting the problem discussed earlier in this thread to the
changeset below,
http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42


I worked past that issue by the following hack:

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
 void evtchn_move_pirqs(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    const cpumask_t *mask = cpumask_of(v->processor);
+    //const cpumask_t *mask = cpumask_of(v->processor);
     unsigned int port;
     struct evtchn *chn;

@@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
     for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
     {
         chn = evtchn_from_port(d, port);
+#if 0
         pirq_set_affinity(d, chn->u.pirq.irq, mask);
+#endif
     }
     spin_unlock(&d->event_lock);
 }


This seemed to work for this rather old changeset, but it was not
sufficient to fix it against the 4.1, or unstable trees.

I further bisected, in combination with this hack, and found the
following changeset to also be problematic:

http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365


That is, before this change I could resume reliably (with the hack
above) - and after I could not.
This was surprising to me, as this change also looks rather innocuous.


Naturally, backing out this change seems to be non-trivial against the
tip, since so much around it has changed.




On Fri, Aug 17, 2012 at 6:40 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 17.08.12 at 12:22, Ben Guthro <ben@guthro.net> wrote:
>> On Thu, Aug 16, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
>>> In order to not flood the list with large logs, I put the logs here:
>>> https://citrix.sharefile.com/d/sfab699024a54df39
>>> I wasn't sure if you wanted a log with, or without the calls to
>>> evtchn_move_pirqs() commented out - so I included both.
>>
>> I received notifications that this got downloaded yesterday by a couple
>> people.
>> Did you have an opportunity to review it?
>
> Yes, I did.
>
>> If so - did you glean any new knowledge?
>
> Unfortunately not. Instead I was surprised that there were no
> IRQ fixup related messages at all, of which I still will need to
> make sense.
>
> In any case, I'm planning on putting together a debugging patch,
> but can't immediately tell when this will be.
>
> Jan
>


* Re: Xen4.2 S3 regression?
  2012-08-23 18:03                                                   ` Ben Guthro
@ 2012-08-23 18:37                                                     ` Andrew Cooper
  2012-08-24 22:11                                                       ` Jan Beulich
  2012-08-24 22:55                                                     ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Cooper @ 2012-08-23 18:37 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Thomas Goetz, xen-devel, John Baboval, Jan Beulich,
	Konrad Rzeszutek Wilk

On 23/08/12 19:03, Ben Guthro wrote:
> I did some more bisecting here, and I came up with another changeset
> that seems to be problematic, Re: IRQs
>
> After bisecting the problem discussed earlier in this thread to the
> changeset below,
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>
>
> I worked past that issue by the following hack:
>
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>  void evtchn_move_pirqs(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    const cpumask_t *mask = cpumask_of(v->processor);
> +    //const cpumask_t *mask = cpumask_of(v->processor);
>      unsigned int port;
>      struct evtchn *chn;
>
> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {
>          chn = evtchn_from_port(d, port);
> +#if 0
>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
> +#endif
>      }
>      spin_unlock(&d->event_lock);
>  }
>
>
> This seemed to work for this rather old changeset, but it was not
> sufficient to fix it against the 4.1, or unstable trees.
>
> I further bisected, in combination with this hack, and found the
> following changeset to also be problematic:
>
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>
>
> That is, before this change I could resume reliably (with the hack
> above) - and after I could not.
> This was surprising to me, as this change also looks rather innocuous.

And by the looks of that changeset, the logic in fixup_irqs() in irq.c
was changed.

Jan: The commit message says "simplify operations [in] a few cases". 
Was the change in fixup_irqs() deliberate?

~Andrew

>
>
> Naturally, backing out this change seems to be non-trivial against the
> tip, since so much around it has changed.
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


* Re: Xen4.2 S3 regression?
  2012-08-23 18:37                                                     ` Andrew Cooper
@ 2012-08-24 22:11                                                       ` Jan Beulich
  0 siblings, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-08-24 22:11 UTC (permalink / raw)
  To: Andrew Cooper, Ben Guthro
  Cc: Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz, xen-devel

>>> On 23.08.12 at 20:37, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 23/08/12 19:03, Ben Guthro wrote:
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic, Re: IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42 
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365 
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
> 
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
> 
> Jan: The commit message says "simplify operations [in] a few cases". 
> Was the change in fixup_irqs() deliberate?

Yes, it was: There's no need to break/adjust the affinity if it
continues to be a subset of cpu_online_map (i.e. there's no
need for the two to match exactly). A similar change was also done
to Linux's fixup_irqs() later on, without any problems that I'm
aware of.

Jan


* Re: Xen4.2 S3 regression?
  2012-08-23 18:03                                                   ` Ben Guthro
  2012-08-23 18:37                                                     ` Andrew Cooper
@ 2012-08-24 22:55                                                     ` Jan Beulich
  2012-08-25  0:48                                                       ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-24 22:55 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 23.08.12 at 20:03, Ben Guthro <ben@guthro.net> wrote:
> I did some more bisecting here, and I came up with another changeset
> that seems to be problematic, Re: IRQs
> 
> After bisecting the problem discussed earlier in this thread to the
> changeset below,
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42 
> 
> 
> I worked past that issue by the following hack:
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>  void evtchn_move_pirqs(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    const cpumask_t *mask = cpumask_of(v->processor);
> +    //const cpumask_t *mask = cpumask_of(v->processor);
>      unsigned int port;
>      struct evtchn *chn;
> 
> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {
>          chn = evtchn_from_port(d, port);
> +#if 0
>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
> +#endif
>      }
>      spin_unlock(&d->event_lock);
>  }

Did you also make an attempt at figuring out which of the three calls
to evtchn_move_pirqs() is the actual problematic one?

Jan


* Re: Xen4.2 S3 regression?
  2012-08-24 22:55                                                     ` Jan Beulich
@ 2012-08-25  0:48                                                       ` Ben Guthro
  0 siblings, 0 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-25  0:48 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On Fri, Aug 24, 2012 at 6:55 PM, Jan Beulich <JBeulich@suse.com> wrote:
> Did you also make an attempt at figuring out which of the three calls
> to evtchn_move_pirqs() is the actual problematic one?
>

I did a lot of experimentation... I think this was one of the tests
that I did - though I've slept, and rebooted that machine so many
times over the past few days that a lot of the tests are starting to
run together.

If I recall correctly, I was unable to isolate this issue - commenting
out just one of the three calls didn't seem to fix the problem.

I'll be traveling all next week (part of which is the Xen summit) -
but I will be able to give a more definitive answer to this when I
return to the office.

Since this is such an old changeset, I moved past it, since the same
fix at the tip didn't have the same effect.


* Re: Xen4.2 S3 regression?
  2012-08-16 11:56                                             ` Ben Guthro
  2012-08-17 10:22                                               ` Ben Guthro
@ 2012-09-03  9:31                                               ` Jan Beulich
  2012-09-04 12:27                                                 ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-03  9:31 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

[-- Attachment #1: Type: text/plain, Size: 668 bytes --]

>>> On 16.08.12 at 13:56, Ben Guthro <ben@guthro.net> wrote:
> Please let me know if there is anything else you'd like me to experiment 
> with.

Attached a first take at a debugging patch.

While putting this together I realized that the situation gets
complicated by the fact that plain Linux 3.2.23 doesn't even
have support for post-S3 MSI restoration, so much would
depend on how exactly this is implemented in the kernel
you're running (i.e. whether you have a _complete_ backport
in place). I suppose you must be having some support for this
patched in, otherwise I wouldn't understand why things work
without IOMMU/interrupt remapping.

Jan


[-- Attachment #2: S3-MSI.patch --]
[-- Type: text/plain, Size: 2532 bytes --]

--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -207,10 +207,24 @@ static void read_msi_msg(struct msi_desc
 
 static void write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
 {
+ if(!(msg->data & MSI_DATA_VECTOR_MASK) || !(msg->data & MSI_DATA_LEVEL_ASSERT)//temp
+    || (msg->data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI) {//temp
+  printk("MSI: addr=%08x%08x data=%08x dest=%08x bogus!\n", msg->address_hi, msg->address_lo, msg->data, msg->dest32);
+  dump_execution_state();
+ }
     entry->msg = *msg;
 
     if ( iommu_enabled )
+ {//temp
         iommu_update_ire_from_msi(entry, msg);
+  if(!(entry->msg.data & MSI_DATA_VECTOR_MASK) || !(msg->data & MSI_DATA_VECTOR_MASK)
+     || !(msg->data & entry->msg.data & MSI_DATA_LEVEL_ASSERT)
+     || (entry->msg.data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI
+     || (msg->data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI)
+   printk("MSI: addr=%08x%08x data=%08x dest=%08x (%08x%08x, %08x, %08x) bogus!\n",
+          entry->msg.address_hi, entry->msg.address_lo, entry->msg.data, entry->msg.dest32,
+          msg->address_hi, msg->address_lo, msg->data, msg->dest32);
+ }
 
     switch ( entry->msi_attrib.type )
     {
@@ -268,6 +282,11 @@ static void set_msi_affinity(struct irq_
 
     memset(&msg, 0, sizeof(msg));
     read_msi_msg(msi_desc, &msg);
+ if(!(msg.data & MSI_DATA_VECTOR_MASK) || !(msg.data & MSI_DATA_LEVEL_ASSERT)//temp
+    || (msg.data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI) {//temp
+  printk("MSI: addr=%08x%08x data=%08x bogus!\n", msg.address_hi, msg.address_lo, msg.data);
+  dump_execution_state();
+ }
 
     msg.data &= ~MSI_DATA_VECTOR_MASK;
     msg.data |= MSI_DATA_VECTOR(desc->arch.vector);
@@ -1024,6 +1043,9 @@ int pci_restore_msi_state(struct pci_dev
             spin_unlock_irqrestore(&desc->lock, flags);
             return -EINVAL;
         }
+printk("MSI[%04x:%02x:%02x:%u]: addr=%08x%08x data=%08x dest=%08x\n",//temp
+       pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),//temp
+       entry->msg.address_hi, entry->msg.address_lo, entry->msg.data, entry->msg.dest32);//temp
 
         if ( entry->msi_attrib.type == PCI_CAP_ID_MSI )
             msi_set_enable(pdev, 0);
@@ -1066,7 +1088,7 @@ unsigned int pci_msix_get_table_len(stru
     return len;
 }
 
-static void dump_msi(unsigned char key)
+/*static*/ void dump_msi(unsigned char key)
 {
     unsigned int irq;
 

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Xen4.2 S3 regression?
  2012-09-03  9:31                                               ` Jan Beulich
@ 2012-09-04 12:27                                                 ` Ben Guthro
  2012-09-04 12:49                                                   ` Ben Guthro
  2012-09-06 10:22                                                   ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-04 12:27 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

I've put the console log of this test run here:
https://citrix.sharefile.com/d/sdc383e252fb41c5a

(again, so as not to clog everyone's inbox)

I have not yet gone through the log in its entirety yet, but thought I
would first send it to you to see if you had something in particular
you were looking for.

The file name is console-S3-MSI.txt


On Mon, Sep 3, 2012 at 5:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 16.08.12 at 13:56, Ben Guthro <ben@guthro.net> wrote:
>> Please let me know if there is anything else you'd like me to experiment
>> with.
>
> Attached a first take at a debugging patch.
>
> While putting this together I realized that the situation gets
> complicated by the fact that plain Linux 3.2.23 doesn't even
> have support for post-S3 MSI restoration, so much would
> depend on how exactly this is implemented in the kernel
> you're running (i.e. whether you have a _complete_ backport
> in place). I suppose you must be having some support for this
> patched in, otherwise I wouldn't understand why things work
> without IOMMU/interrupt remapping.
>
> Jan
>


* Re: Xen4.2 S3 regression?
  2012-09-04 12:27                                                 ` Ben Guthro
@ 2012-09-04 12:49                                                   ` Ben Guthro
  2012-09-04 14:26                                                     ` Jan Beulich
  2012-09-06 10:22                                                   ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-04 12:49 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

I forgot to address your dom0 kernel concern -

All of these tests have been run with Konrad Wilk's acpi-s3.vX branches.

The later kernels took more recent branches, while the 3.2.yy kernels
used an older (but still functional) branch (acpi-s3.v7)

For reference:
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/acpi-s3.v7




On Tue, Sep 4, 2012 at 8:27 AM, Ben Guthro <ben@guthro.net> wrote:
> I've put the console log of this test run here:
> https://citrix.sharefile.com/d/sdc383e252fb41c5a
>
> (again, so as not to clog everyone's inbox)
>
> I have not yet gone through the log in its entirety yet, but thought I
> would first send it to you to see if you had something in particular
> you were looking for.
>
> The file name is console-S3-MSI.txt
>
>
> On Mon, Sep 3, 2012 at 5:31 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 16.08.12 at 13:56, Ben Guthro <ben@guthro.net> wrote:
>>> Please let me know if there is anything else you'd like me to experiment
>>> with.
>>
>> Attached a first take at a debugging patch.
>>
>> While putting this together I realized that the situation gets
>> complicated by the fact that plain Linux 3.2.23 doesn't even
>> have support for post-S3 MSI restoration, so much would
>> depend on how exactly this is implemented in the kernel
>> you're running (i.e. whether you have a _complete_ backport
>> in place). I suppose you must be having some support for this
>> patched in, otherwise I wouldn't understand why things work
>> without IOMMU/interrupt remapping.
>>
>> Jan
>>


* Re: Xen4.2 S3 regression?
  2012-09-04 12:49                                                   ` Ben Guthro
@ 2012-09-04 14:26                                                     ` Jan Beulich
  2012-09-04 14:28                                                       ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-04 14:26 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 04.09.12 at 14:49, Ben Guthro <ben@guthro.net> wrote:
> I forgot to address your dom0 kernel concern -
> 
> All of these tests have been run with Konrad Wilk's acpi-s3.vX branches.
> 
> The later kernels took more recent branches, while the 3.2.yy kernels
> used an older (but still functional) branch (acpi-s3.v7)

This and the log you provided suggest that your kernel is lacking
the MSI restore code altogether (or it is not getting called as
intended). With that, there's pretty little point in continuing
the investigation on the hypervisor side.

Jan


* Re: Xen4.2 S3 regression?
  2012-09-04 14:26                                                     ` Jan Beulich
@ 2012-09-04 14:28                                                       ` Ben Guthro
  2012-09-04 14:36                                                         ` Konrad Rzeszutek Wilk
  2012-09-04 15:02                                                         ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-04 14:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Konrad Rzeszutek Wilk, xen-devel, Ben Guthro, Thomas Goetz, john.baboval

How is it that it works with an older hypervisor, then?

Konrad - is the v7 code functionally different than the v10 code, or
is it more  stylistic changes?

On Sep 4, 2012, at 10:25 AM, Jan Beulich <JBeulich@suse.com> wrote:

>>>> On 04.09.12 at 14:49, Ben Guthro <ben@guthro.net> wrote:
>> I forgot to address your dom0 kernel concern -
>>
>> All of these tests have been run with Konrad Wilk's acpi-s3.vX branches.
>>
>> The later kernels took more recent branches, while the 3.2.yy kernels
>> used an older (but still functional) branch (acpi-s3.v7)
>
> This and the log you provided suggest that your kernel is lacking
> the MSI restore code altogether (or it is not getting called as
> intended). With that, there's pretty little point in continuing
> the investigation on the hypervisor side.
>
> Jan
>


* Re: Xen4.2 S3 regression?
  2012-09-04 14:28                                                       ` Ben Guthro
@ 2012-09-04 14:36                                                         ` Konrad Rzeszutek Wilk
  2012-09-04 15:02                                                         ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-04 14:36 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Thomas Goetz, xen-devel, Ben Guthro, Jan Beulich, john.baboval

On Tue, Sep 04, 2012 at 10:28:37AM -0400, Ben Guthro wrote:
> How is it that it works with an older hypervisor, then?
> 
> Konrad - is the v7 code functionally different than the v10 code, or
> is it more  stylistic changes?

It is that v3.4 (and up) already has the restore_MSI hypervisor call:

commit 8605c6844fb9bdf55471bb87c3ac62d44eb34e04
Author: Tang Liang <liang.tang@oracle.com>
Date:   Thu Dec 8 17:36:39 2011 +0800

    xen: Utilize the restore_msi_irqs hook.
    
    to make a hypercall to restore the vectors in the MSI/MSI-X
    configuration space.
    
    Signed-off-by: Tang Liang <liang.tang@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

The only part that is left out of mainline is the part where the hypercall
is done properly. That is what the acpi-s3.v10 branch does... but after
talking to Len he pointed out a different way of doing this.

> 
> On Sep 4, 2012, at 10:25 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> >>>> On 04.09.12 at 14:49, Ben Guthro <ben@guthro.net> wrote:
> >> I forgot to address your dom0 kernel concern -
> >>
> >> All of these tests have been run with Konrad Wilk's acpi-s3.vX branches.
> >>
> >> The later kernels took more recent branches, while the 3.2.yy kernels
> >> used an older (but still functional) branch (acpi-s3.v7)
> >
> > This and the log you provided suggest that your kernel is lacking
> > the MSI restore code altogether (or it is not getting called as
> > intended). With that, there's pretty little point in continuing
> > the investigation on the hypervisor side.
> >
> > Jan
> >


* Re: Xen4.2 S3 regression?
  2012-09-04 14:28                                                       ` Ben Guthro
  2012-09-04 14:36                                                         ` Konrad Rzeszutek Wilk
@ 2012-09-04 15:02                                                         ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-04 15:02 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Konrad Rzeszutek Wilk, xen-devel, john.baboval, Thomas Goetz, Ben Guthro

>>> On 04.09.12 at 16:28, Ben Guthro <ben.guthro@gmail.com> wrote:
> How is it that it works with an older hypervisor, then?

Actually, checking again I see that I overlooked that specific
log entry (I expected them to be all close together, but they
aren't).

It is obvious that on the second restore the kernel passes
bad data to the hypervisor, but I can't tell yet whether that's
the kernel's fault - will need to look more closely.

Jan

> Konrad - is the v7 code functionally different than the v10 code, or
> is it more  stylistic changes?
> 
> On Sep 4, 2012, at 10:25 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>>>>> On 04.09.12 at 14:49, Ben Guthro <ben@guthro.net> wrote:
>>> I forgot to address your dom0 kernel concern -
>>>
>>> All of these tests have been run with Konrad Wilk's acpi-s3.vX branches.
>>>
>>> The later kernels took more recent branches, while the 3.2.yy kernels
>>> used an older (but still functional) branch (acpi-s3.v7)
>>
>> This and the log you provided suggest that your kernel is lacking
>> the MSI restore code altogether (or it is not getting called as
>> intended). With that, there's pretty little point in continuing
>> the investigation on the hypervisor side.
>>
>> Jan
>>


* Re: Xen4.2 S3 regression?
       [not found]                                 ` <CAOvdn6UzdzO_sM6f9coN2udQ6eUC5=Sty-NgC7+yf3XMawF-0A@mail.gmail.com>
@ 2012-09-04 15:31                                   ` Javier Marcet
  0 siblings, 0 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-04 15:31 UTC (permalink / raw)
  To: Ben Guthro, Xen Devel Mailing list

On Tue, Sep 4, 2012 at 12:33 PM, Ben Guthro <ben@guthro.net> wrote:

> This is getting reasonably in depth to be off-list.
>
> You really should CC xen-devel for these kinds of things.

> <snip>

>> Damn it. It is a 3.3 tree and I need drivers introduced in 3.4 and I
>> also use bcache
>> which right now has a 3.2 stable version, a 3.5 not so stable and the
>> devel branch.

> Take the top 3 commits by Konrad & apply them as a patch. It should be
> sufficient....but you'll still run into issues with S3 on Xen, related
> to MSI delivery...if it resumes at all. There are clearly still
> hypervisor issues - so you seem to be jumping to some conclusions here
> with workarounds that I'm not sure are going to get you anywhere.

I've tried the top two commits from Konrad's acpi-s3.v10 branch merged on
top of kernel 3.5.3. The third commit was already merged.

I've tried it from work and I believe it suspends fine, but there is
some problem
during resume since the machine responded to a ping for a second and then
rebooted.

The only error I have on the logs is:

(XEN) memory.c:131:d0 Could not allocate order=18 extent: id=5
memflags=0 (0 of 1)
(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of
mfn 270d05: c=c000000000000002 t=7400000000000001

I don't know whether that's enough to make resume from S3 fail or it's
something else.
When I'm at home I'll see if I can dig further.


-- 
Javier Marcet <jmarcet@gmail.com>


* Re: Xen4.2 S3 regression?
  2012-09-04 12:27                                                 ` Ben Guthro
  2012-09-04 12:49                                                   ` Ben Guthro
@ 2012-09-06 10:22                                                   ` Jan Beulich
  2012-09-06 11:48                                                     ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-06 10:22 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

[-- Attachment #1: Type: text/plain, Size: 1294 bytes --]

>>> On 04.09.12 at 14:27, Ben Guthro <ben@guthro.net> wrote:
> I've put the console log of this test run here:
> https://citrix.sharefile.com/d/sdc383e252fb41c5a 
> 
> (again, so as not to clog everyone's inbox)
> 
> I have not yet gone through the log in its entirety yet, but thought I
> would first send it to you to see if you had something in particular
> you were looking for.
> 
> The file name is console-S3-MSI.txt

I think that nailed it: pci_restore_msi_state() passed a pointer
to the stored entry->msg to write_msi_msg(), but with interrupt
remapping enabled that function's call to
iommu_update_ire_from_msi() alters the passed-in struct
msi_msg instance. As the stored value is used for nothing but
a subsequent (second) restore, a problem would only arise if
between the two restores no further writing of the stored entry
occurred (i.e. no intermediate call to set_msi_affinity()).

Attached the advertised next version of the debugging patch
(printks - slightly altered - left in to catch eventual further
problems or to deal with my analysis being wrong; none of the
"bogus!" ones should now trigger anymore). If this works, I'd
be curious to see how much of your other workaround code
you could then remove without breaking things again.

Jan


[-- Attachment #2: S3-MSI.patch --]
[-- Type: text/plain, Size: 3695 bytes --]

--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -207,10 +207,22 @@ static void read_msi_msg(struct msi_desc
 
 static void write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
 {
+ if(!(msg->data & MSI_DATA_VECTOR_MASK) || !(msg->data & MSI_DATA_LEVEL_ASSERT)//temp
+    || (msg->data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI) {//temp
+  printk("MSI: addr=%08x%08x data=%08x dest=%08x bogus!\n", msg->address_hi, msg->address_lo, msg->data, msg->dest32);
+  dump_execution_state();
+ }
     entry->msg = *msg;
 
     if ( iommu_enabled )
+    {
+        ASSERT(msg != &entry->msg);
         iommu_update_ire_from_msi(entry, msg);
+ if(!(entry->msg.data & MSI_DATA_VECTOR_MASK) || !(entry->msg.data & MSI_DATA_LEVEL_ASSERT)//temp
+    || (entry->msg.data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI)//temp
+  printk("MSI: addr=%08x%08x data=%08x dest=%08x (%08x) bogus!\n",//temp
+         entry->msg.address_hi, entry->msg.address_lo, entry->msg.data, entry->msg.dest32, msg->address_lo);//temp
+    }
 
     switch ( entry->msi_attrib.type )
     {
@@ -268,6 +280,11 @@ static void set_msi_affinity(struct irq_
 
     memset(&msg, 0, sizeof(msg));
     read_msi_msg(msi_desc, &msg);
+ if(!(msg.data & MSI_DATA_VECTOR_MASK) || !(msg.data & MSI_DATA_LEVEL_ASSERT)//temp
+    || (msg.data & MSI_DATA_DELIVERY_MODE_MASK) > MSI_DATA_DELIVERY_LOWPRI) {//temp
+  printk("MSI: addr=%08x%08x data=%08x bogus!\n", msg.address_hi, msg.address_lo, msg.data);
+  dump_execution_state();
+ }
 
     msg.data &= ~MSI_DATA_VECTOR_MASK;
     msg.data |= MSI_DATA_VECTOR(desc->arch.vector);
@@ -996,6 +1013,7 @@ int pci_restore_msi_state(struct pci_dev
     int ret;
     struct msi_desc *entry, *tmp;
     struct irq_desc *desc;
+    struct msi_msg msg;
 
     ASSERT(spin_is_locked(&pcidevs_lock));
 
@@ -1024,13 +1042,17 @@ int pci_restore_msi_state(struct pci_dev
             spin_unlock_irqrestore(&desc->lock, flags);
             return -EINVAL;
         }
+printk("MSI[%04x:%02x:%02x:%u]: addr=%08x%08x data=%08x dest=%08x\n",//temp
+       pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),//temp
+       entry->msg.address_hi, entry->msg.address_lo, entry->msg.data, entry->msg.dest32);//temp
 
         if ( entry->msi_attrib.type == PCI_CAP_ID_MSI )
             msi_set_enable(pdev, 0);
         else if ( entry->msi_attrib.type == PCI_CAP_ID_MSIX )
             msix_set_enable(pdev, 0);
 
-        write_msi_msg(entry, &entry->msg);
+        msg = entry->msg;
+        write_msi_msg(entry, &msg);
 
         msi_set_mask_bit(desc, entry->msi_attrib.masked);
 
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1040,8 +1040,6 @@ static void dma_msi_mask(struct irq_desc
     unsigned long flags;
     struct iommu *iommu = desc->action->dev_id;
 
-    irq_complete_move(desc);
-
     /* mask it */
     spin_lock_irqsave(&iommu->register_lock, flags);
     dmar_writel(iommu->reg, DMAR_FECTL_REG, DMA_FECTL_IM);
@@ -1054,6 +1052,13 @@ static unsigned int dma_msi_startup(stru
     return 0;
 }
 
+static void dma_msi_ack(struct irq_desc *desc)
+{
+    irq_complete_move(desc);
+    dma_msi_mask(desc);
+    move_masked_irq(desc);
+}
+
 static void dma_msi_end(struct irq_desc *desc, u8 vector)
 {
     dma_msi_unmask(desc);
@@ -1115,7 +1120,7 @@ static hw_irq_controller dma_msi_type = 
     .shutdown = dma_msi_mask,
     .enable = dma_msi_unmask,
     .disable = dma_msi_mask,
-    .ack = dma_msi_mask,
+    .ack = dma_msi_ack,
     .end = dma_msi_end,
     .set_affinity = dma_msi_set_affinity,
 };

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 10:22                                                   ` Jan Beulich
@ 2012-09-06 11:48                                                     ` Ben Guthro
  2012-09-06 11:51                                                       ` Ben Guthro
                                                                         ` (2 more replies)
  0 siblings, 3 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-06 11:48 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

Fantastic!

Initial tests are very good, with this dma_msi_ack modification.

I've been able to get past the ahci stall, and run through ~10 suspend
/ resume cycles with this fix.
Additional tests are warranted, and I'll run through an automated
sleep / wake script I have to make sure this fix holds over time.

Thanks for this.

Ben

On Thu, Sep 6, 2012 at 6:22 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 04.09.12 at 14:27, Ben Guthro <ben@guthro.net> wrote:
>> I've put the console log of this test run here:
>> https://citrix.sharefile.com/d/sdc383e252fb41c5a
>>
>> (again, so as not to clog everyone's inbox)
>>
>> I have not yet gone through the log in its entirety yet, but thought I
>> would first send it to you to see if you had something in particular
>> you were looking for.
>>
>> The file name is console-S3-MSI.txt
>
> I think that nailed it: pci_restore_msi_state() passed a pointer
> to the stored entry->msg to write_msi_msg(), but with interrupt
> remapping enabled that function's call to
> iommu_update_ire_from_msi() alters the passed in struct
> msi_msg instance. As the stored value is used for nothing but
> a subsequent (second) restore, a problem would only arise if no
> further write of the stored entry occurred between the two restores
> (i.e. no intermediate call to set_msi_affinity()).
>
> Attached the advertised next version of the debugging patch
> (printks - slightly altered - left in to catch eventual further
> problems or to deal with my analysis being wrong; none of the
> "bogus!" ones should now trigger anymore). If this works, I'd
> be curious to see how much of your other workaround code
> you could then remove without breaking things again.
>
> Jan
>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 11:48                                                     ` Ben Guthro
@ 2012-09-06 11:51                                                       ` Ben Guthro
  2012-09-06 13:05                                                       ` Konrad Rzeszutek Wilk
  2012-09-06 16:42                                                       ` Ben Guthro
  2 siblings, 0 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-06 11:51 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

Additionally - I was able to use this patch on its own, without any of
the prior debug patch - which, as you pointed out before, ended up
being unnecessary.

On Thu, Sep 6, 2012 at 7:48 AM, Ben Guthro <ben@guthro.net> wrote:
> Fantastic!
>
> Initial tests are very good, with this dma_msi_ack modification.
>
> I've been able to get past the ahci stall, and run through ~10 suspend
> / resume cycles with this fix.
> Additional tests are warranted, and I'll run through an automated
> sleep / wake script I have to make sure this fix holds over time.
>
> Thanks for this.
>
> Ben
>
> On Thu, Sep 6, 2012 at 6:22 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 04.09.12 at 14:27, Ben Guthro <ben@guthro.net> wrote:
>>> I've put the console log of this test run here:
>>> https://citrix.sharefile.com/d/sdc383e252fb41c5a
>>>
>>> (again, so as not to clog everyone's inbox)
>>>
>>> I have not yet gone through the log in its entirety yet, but thought I
>>> would first send it to you to see if you had something in particular
>>> you were looking for.
>>>
>>> The file name is console-S3-MSI.txt
>>
>> I think that nailed it: pci_restore_msi_state() passed a pointer
>> to the stored entry->msg to write_msi_msg(), but with interrupt
>> remapping enabled that function's call to
>> iommu_update_ire_from_msi() alters the passed in struct
>> msi_msg instance. As the stored value is used for nothing but
>> a subsequent (second) restore, a problem would only arise if no
>> further write of the stored entry occurred between the two restores
>> (i.e. no intermediate call to set_msi_affinity()).
>>
>> Attached the advertised next version of the debugging patch
>> (printks - slightly altered - left in to catch eventual further
>> problems or to deal with my analysis being wrong; none of the
>> "bogus!" ones should now trigger anymore). If this works, I'd
>> be curious to see how much of your other workaround code
>> you could then remove without breaking things again.
>>
>> Jan
>>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 11:48                                                     ` Ben Guthro
  2012-09-06 11:51                                                       ` Ben Guthro
@ 2012-09-06 13:05                                                       ` Konrad Rzeszutek Wilk
  2012-09-06 13:27                                                         ` Ben Guthro
  2012-09-06 16:42                                                       ` Ben Guthro
  2 siblings, 1 reply; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-06 13:05 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Thomas Goetz, john.baboval, Jan Beulich, xen-devel

On Thu, Sep 06, 2012 at 07:48:26AM -0400, Ben Guthro wrote:
> Fantastic!
> 
> Initial tests are very good, with this dma_msi_ack modification.
> 
> I've been able to get past the ahci stall, and run through ~10 suspend
> / resume cycles with this fix.
> Additional tests are warranted, and I'll run through an automated
> sleep / wake script I have to make sure this fix holds over time.

How are you automating this? (/me imagines some robotic arm pressing
the power button every couple of minutes).

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 13:05                                                       ` Konrad Rzeszutek Wilk
@ 2012-09-06 13:27                                                         ` Ben Guthro
  2012-09-06 13:36                                                           ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-06 13:27 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Thomas Goetz, john.baboval, Jan Beulich, xen-devel

[-- Attachment #1: Type: text/plain, Size: 1855 bytes --]

We have a cycle_wakeup script (attached) that programs the RTC

This in itself is not sufficient, due to interaction with the HPET.
We have an additional patch (see below) to deal with that.

The patch is not the cleanest solution ever, but it works...


diff -r fc9dd79fe91a xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c	Thu Mar 03 17:21:16 2011 -0500
+++ b/xen/arch/x86/hpet.c	Thu Mar 03 17:35:57 2011 -0500
@@ -520,13 +520,28 @@

     if ( index != RTC_REG_B )
         return;
-
+
+    /*
+     *  Wake on RTC alarm does not conflict with hpet.
+     * The HW will wake up the CPU when the RTC alarm kicks off
+     * even though the interrupt is not routed to the APIC.
+     *
+     * In our distro, xen will always own the interrupt so
+     * that we never disable deep cstates.  We do not run
+     * dom0 apps/drivers that require RTC interrupts other
+     * than wakeup functionality.
+     */
+#if 0
     /* RTC Reg B, contain PIE/AIE/UIE */
     if ( value & (RTC_PIE | RTC_AIE | RTC_UIE ) )
     {
         cpuidle_disable_deep_cstate();
         pv_rtc_handler = NULL;
     }
+#else
+    if ( value & (RTC_PIE | RTC_UIE ) )
+	    printk("WARNING: dom0 attempting to use RTC interrupts!\n");
+#endif
 }

 void hpet_broadcast_init(void)


On Thu, Sep 6, 2012 at 9:05 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Thu, Sep 06, 2012 at 07:48:26AM -0400, Ben Guthro wrote:
>> Fantastic!
>>
>> Initial tests are very good, with this dma_msi_ack modification.
>>
>> I've been able to get past the ahci stall, and run through ~10 suspend
>> / resume cycles with this fix.
>> Additional tests are warranted, and I'll run through an automated
>> sleep / wake script I have to make sure this fix holds over time.
>
> How are you automating this? (/me imagines some robotic arm pressing
> the power button every couple of minutes).

[-- Attachment #2: cycle_wakeup --]
[-- Type: application/octet-stream, Size: 3020 bytes --]

#!/bin/sh

passes=100
suspend_time=15
awake_time=30

progname=$(basename $0)

uexit()
{
cat <<EOF
Usage: $progname [[-v] [-l] [-p <num>] [-s <secs>] [-a <secs>]] [<command>]
       -v          verbose, echo progress to stdout
       -l          log progress to syslog
       -p          specify number of passes
       -s          time stay in suspend in seconds
       -a          awake time before suspend in seconds
                    specifying 'net' or '-1' will wait for the
                    network to be up before re-suspending
       <command>   a shell command to run for each pass of the test
                    on resume from suspend.
EOF
}

#
# Process command line args
#
while [ -n "$1" ]; do
    case $1 in
	-v)	verbose=true;
		;;
	-l)	verbose_log=true
		;;
	-p)	[ $# -lt 2 ] && uexit
	    	passes=$2
		shift
		;;
	-s)	[ $# -lt 2 ] && uexit
	    	suspend_time=$2
		shift
		;;
	-a)	[ $# -lt 2 ] && uexit
	    	awake_time=$2
		shift
		;;
	*)	break
		;;
    esac
    shift
done

cmd="$*"

#
# Support routines
#
logit()
{
    if [ x"$1" = x"-v" ]; then
	shift
	echo "$*"
    elif [ -n "$verbose" ]; then
	echo "`date` $*"
    fi
    if [ -n "$verbose_log" ]; then
        echo "$progname: $*" > /dev/kmsg
    fi
}

gettime()
{
    cat /sys/class/rtc/rtc0/since_epoch
}

pingit()
{
    logit "Pinging repoman."
    i=60
    while [ $i -gt 0 ]; do
	if ping -c 1 10.1.1.11 >/dev/null 2>&1; then
	    return
	fi
	sleep 1
	i=$(($i - 1))
    done
    logit "Ping timed out."
}

do_standby()
{
    #
    # The suspend code that dmd calls, /usr/local/bin/standby,
    # will clear the /tmp/standby_test file when it completes.
    #
    touch /tmp/standby_test
    dmdrv suspend_platform $*
    while [ -e /tmp/standby_test ]; do
	sleep 1
    done
}

#
# /tmp/standby_test is used to pass parameters to the standby script
# in /usr/local/bin.
#
rm -f /tmp/standby_test

logit -v "Running suspend cycle test with the following parameters:"
logit -v "    run for $passes passes"
logit -v "    suspend for $suspend_time seconds"
logit -v "    stay awake for $awake_time seconds"

count=1
while [ 1 ]; do

    #
    # Do the suspend
    #
    logit "Suspending for $suspend_time seconds, Pass $count."

    start=$(gettime)
    do_standby -n -s $suspend_time PASS $count
    end=$(gettime)
    delta=$(($end - $start))

    logit "Resumed in $delta seconds."

    #
    # Sanity check the suspend duration
    #
    expected=$(($suspend_time + 60))
    if [ $delta -gt $expected ]; then
	logit "FAILURE, system in standby for too long!"
	exit 1
    fi

    #
    # Execute the command specified for each pass
    #
    [ -n "$cmd" ] && eval "$cmd"

    [ $count -eq $passes ] && break

    #
    # Stay awake for the specified period
    #
    if [ $awake_time = "net" -o $awake_time -eq -1 ]; then
	logit "Waiting for network to come up"
	pingit
    elif [ $awake_time -gt 0 ]; then
	logit "Waiting for $awake_time seconds."
	sleep $awake_time 
    else
	logit "Not waiting"
    fi

    count=$(($count + 1))
done


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 13:27                                                         ` Ben Guthro
@ 2012-09-06 13:36                                                           ` Ben Guthro
  0 siblings, 0 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-06 13:36 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Thomas Goetz, john.baboval, Jan Beulich, xen-devel

Hmm - that script uses some stuff that is specific to our distro, but
could be adapted to do something more standard, like
echo mem > /sys/power/state
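
A distro-neutral version of the cycle could be built around the
standard rtcwake(8) utility instead of dmdrv. This is a sketch under
the assumptions (not from this thread) that rtcwake is available and
the RTC alarm can wake the box; it defaults to a dry run, and you
export SUSPEND_CMD='rtcwake -m mem -s 15' to really suspend each pass:

```shell
#!/bin/sh
# Minimal suspend/resume cycle loop. DRY RUN by default; override
# SUSPEND_CMD to actually program the RTC alarm and enter S3.
passes=${PASSES:-10}
awake_time=${AWAKE_TIME:-0}
SUSPEND_CMD=${SUSPEND_CMD:-"echo DRY-RUN: rtcwake -m mem -s 15"}

i=1
while [ "$i" -le "$passes" ]; do
    echo "pass $i of $passes"
    # rtcwake programs the RTC alarm, writes "mem" to
    # /sys/power/state, and returns once the system has resumed.
    $SUSPEND_CMD || { echo "suspend failed on pass $i" >&2; exit 1; }
    if [ "$awake_time" -gt 0 ]; then
        sleep "$awake_time"
    fi
    i=$((i + 1))
done
echo "completed $passes passes"
```

It deliberately drops the HPET-specific workaround; on a stock kernel
the RTC alarm path through rtcwake should not need it.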


On Thu, Sep 6, 2012 at 9:27 AM, Ben Guthro <ben@guthro.net> wrote:
> We have a cycle_wakeup script (attached) that programs the RTC
>
> This in itself is not sufficient, due to interaction with the HPET.
> We have an additional patch (see below) to deal with that.
>
> The patch is not the cleanest solution ever, but it works...
>
>
> diff -r fc9dd79fe91a xen/arch/x86/hpet.c
> --- a/xen/arch/x86/hpet.c       Thu Mar 03 17:21:16 2011 -0500
> +++ b/xen/arch/x86/hpet.c       Thu Mar 03 17:35:57 2011 -0500
> @@ -520,13 +520,28 @@
>
>      if ( index != RTC_REG_B )
>          return;
> -
> +
> +    /*
> +     *  Wake on RTC alarm does not conflict with hpet.
> +     * The HW will wake up the CPU when the RTC alarm kicks off
> +     * even though the interrupt is not routed to the APIC.
> +     *
> +     * In our distro, xen will always own the interrupt so
> +     * that we never disable deep cstates.  We do not run
> +     * dom0 apps/drivers that require RTC interrupts other
> +     * than wakeup functionality.
> +     */
> +#if 0
>      /* RTC Reg B, contain PIE/AIE/UIE */
>      if ( value & (RTC_PIE | RTC_AIE | RTC_UIE ) )
>      {
>          cpuidle_disable_deep_cstate();
>          pv_rtc_handler = NULL;
>      }
> +#else
> +    if ( value & (RTC_PIE | RTC_UIE ) )
> +           printk("WARNING: dom0 attempting to use RTC interrupts!\n");
> +#endif
>  }
>
>  void hpet_broadcast_init(void)
>
>
> On Thu, Sep 6, 2012 at 9:05 AM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Thu, Sep 06, 2012 at 07:48:26AM -0400, Ben Guthro wrote:
>>> Fantastic!
>>>
>>> Initial tests are very good, with this dma_msi_ack modification.
>>>
>>> I've been able to get past the ahci stall, and run through ~10 suspend
>>> / resume cycles with this fix.
>>> Additional tests are warranted, and I'll run through an automated
>>> sleep / wake script I have to make sure this fix holds over time.
>>
>> How are you automating this? (/me imagines some robotic arm pressing
>> the power button every couple of minutes).

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 11:48                                                     ` Ben Guthro
  2012-09-06 11:51                                                       ` Ben Guthro
  2012-09-06 13:05                                                       ` Konrad Rzeszutek Wilk
@ 2012-09-06 16:42                                                       ` Ben Guthro
  2012-09-07  8:38                                                         ` Jan Beulich
  2 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-06 16:42 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

Odd.
The behavior seems to change, when not running with serial output.

On this desktop system that worked OK, once I go back to not running
the console output over serial - things fail on resume.
It is uncertain at what point things stop working, though - as
networking is unavailable.

I'll continue to look into it, but will be traveling next week.



On Thu, Sep 6, 2012 at 7:48 AM, Ben Guthro <ben@guthro.net> wrote:
> Fantastic!
>
> Initial tests are very good, with this dma_msi_ack modification.
>
> I've been able to get past the ahci stall, and run through ~10 suspend
> / resume cycles with this fix.
> Additional tests are warranted, and I'll run through an automated
> sleep / wake script I have to make sure this fix holds over time.
>
> Thanks for this.
>
> Ben
>
> On Thu, Sep 6, 2012 at 6:22 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 04.09.12 at 14:27, Ben Guthro <ben@guthro.net> wrote:
>>> I've put the console log of this test run here:
>>> https://citrix.sharefile.com/d/sdc383e252fb41c5a
>>>
>>> (again, so as not to clog everyone's inbox)
>>>
>>> I have not yet gone through the log in its entirety yet, but thought I
>>> would first send it to you to see if you had something in particular
>>> you were looking for.
>>>
>>> The file name is console-S3-MSI.txt
>>
>> I think that nailed it: pci_restore_msi_state() passed a pointer
>> to the stored entry->msg to write_msi_msg(), but with interrupt
>> remapping enabled that function's call to
>> iommu_update_ire_from_msi() alters the passed in struct
>> msi_msg instance. As the stored value is used for nothing but
>> a subsequent (second) restore, a problem would only arise if no
>> further write of the stored entry occurred between the two restores
>> (i.e. no intermediate call to set_msi_affinity()).
>>
>> Attached the advertised next version of the debugging patch
>> (printks - slightly altered - left in to catch eventual further
>> problems or to deal with my analysis being wrong; none of the
>> "bogus!" ones should now trigger anymore). If this works, I'd
>> be curious to see how much of your other workaround code
>> you could then remove without breaking things again.
>>
>> Jan
>>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-06 16:42                                                       ` Ben Guthro
@ 2012-09-07  8:38                                                         ` Jan Beulich
  2012-09-07 10:37                                                           ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-07  8:38 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 06.09.12 at 18:42, Ben Guthro <ben@guthro.net> wrote:
> Odd.
> The behavior seems to change, when not running with serial output.

Odd indeed. Could you (unless you are already) try running the
serial console in polling mode (i.e. without IRQ), to see whether
the IRQs coming from it somehow keep the system alive at a
certain point?

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-07  8:38                                                         ` Jan Beulich
@ 2012-09-07 10:37                                                           ` Ben Guthro
  2012-09-07 11:15                                                             ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-07 10:37 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1002 bytes --]

On Fri, Sep 7, 2012 at 4:38 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 06.09.12 at 18:42, Ben Guthro <ben@guthro.net> wrote:
> > Odd.
> > The behavior seems to change, when not running with serial output.
>
> Odd indeed. Could you (unless you are already) try running the
> serial console in polling mode (i.e. without IRQ), to see whether
> the IRQs coming from it somehow keep the system alive at a
> certain point?
>
>
I tried to do this at the end of yesterday, since you had suggested it
previously, in this thread.

I did so by adding a ",0" to my com1 line, per the documentation - However,
I am running with a PCI serial card, and not an "on-board" one - so my
parameter looks like:

com1=115200,8n1,pci,0

I am not totally convinced that it is actually running in polling mode, and
started to investigate ns16550.c to verify it was. I plan on resuming this
investigation this morning.
If you have any pointers on what I should be looking for - I'd appreciate
any suggestions.


/btg


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-07 10:37                                                           ` Ben Guthro
@ 2012-09-07 11:15                                                             ` Jan Beulich
  2012-09-07 11:51                                                               ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-07 11:15 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 07.09.12 at 12:37, Ben Guthro <ben@guthro.net> wrote:
> On Fri, Sep 7, 2012 at 4:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 06.09.12 at 18:42, Ben Guthro <ben@guthro.net> wrote:
>> > Odd.
>> > The behavior seems to change, when not running with serial output.
>>
>> Odd indeed. Could you (unless you are already) try running the
>> serial console in polling mode (i.e. without IRQ), to see whether
>> the IRQs coming from it somehow keep the system alive at a
>> certain point?
>>
>>
> I tried to do this at the end of yesterday, since you had suggested it
> previously, in this thread.
> 
> I did so by adding a ",0" to my com1 line, per the documentation - However,
> I am running with a PCI serial card, and not an "on-board" one - so my
> parameter looks like:
> 
> com1=115200,8n1,pci,0
> 
> I am not totally convinced that it is actually running in polling mode, and
> started to investigate ns16550.c to verify it was. I plan on resuming this
> investigation this morning.
> If you have any pointers on what I should be looking for - I'd appreciate
> any suggestions.
> 

The only thing you need to make sure is that at the end of
ns16550_parse_port_config() uart->irq is zero. But for PCI
that should be the default right now in -unstable (but I
think I have a patch pending to alter this in certain cases).

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-07 11:15                                                             ` Jan Beulich
@ 2012-09-07 11:51                                                               ` Ben Guthro
  2012-09-07 12:18                                                                 ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-07 11:51 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

I have verified it is running with uart->irq == 0 when running on serial.

However, when I run with console=none, the observed behavior is very different.
The system seems to go to sleep successfully - but when I press the
power button to wake it up - the power comes on - the fans spin up -
but the system is unresponsive.
No video
No network
keyboard LEDs (Caps,Numlock) do not light up.


Alternate debugging strategies welcome.

/btg

On Fri, Sep 7, 2012 at 7:15 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 07.09.12 at 12:37, Ben Guthro <ben@guthro.net> wrote:
>> On Fri, Sep 7, 2012 at 4:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>>> >>> On 06.09.12 at 18:42, Ben Guthro <ben@guthro.net> wrote:
>>> > Odd.
>>> > The behavior seems to change, when not running with serial output.
>>>
>>> Odd indeed. Could you (unless you are already) try running the
>>> serial console in polling mode (i.e. without IRQ), to see whether
>>> the IRQs coming from it somehow keep the system alive at a
>>> certain point?
>>>
>>>
>> I tried to do this at the end of yesterday, since you had suggested it
>> previously, in this thread.
>>
>> I did so by adding a ",0" to my com1 line, per the documentation - However,
>> I am running with a PCI serial card, and not an "on-board" one - so my
>> parameter looks like:
>>
>> com1=115200,8n1,pci,0
>>
>> I am not totally convinced that it is actually running in polling mode, and
>> started to investigate ns16550.c to verify it was. I plan on resuming this
>> investigation this morning.
>> If you have any pointers on what I should be looking for - I'd appreciate
>> any suggestions.
>>
>
> The only thing you need to make sure is that at the end of
> ns16550_parse_port_config() uart->irq is zero. But for PCI
> that should be the default right now in -unstable (but I
> think I have a patch pending to alter this in certain cases).
>
> Jan
>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-07 11:51                                                               ` Ben Guthro
@ 2012-09-07 12:18                                                                 ` Jan Beulich
  2012-09-07 16:06                                                                   ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-07 12:18 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 07.09.12 at 13:51, Ben Guthro <ben@guthro.net> wrote:
> However, when I run with console=none, the observed behavior is very 
> different.
> The system seems to go to sleep successfully - but when I press the
> power button to wake it up - the power comes on - the fans spin up -
> but the system is unresponsive.
> No video
> No network
> keyboard LEDs (Caps,Numlock) do not light up.
> 
> 
> Alternate debugging strategies welcome.

I'm afraid other than being lucky to spot something via code
inspection, the only alternative is an ITP/ICE. Maybe Intel folks
could help out debugging this if it's reproducible for them.

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-07 12:18                                                                 ` Jan Beulich
@ 2012-09-07 16:06                                                                   ` Ben Guthro
  2012-09-19 21:07                                                                     ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-07 16:06 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

[-- Attachment #1: Type: text/plain, Size: 1490 bytes --]

I'll work on getting a JTAG, ICE, or something else - it is on an
Intel SDP - so it should have the ports for it.

My current suspicion on this is that the hardware registers are not
being programmed the same way as they were in 4.0.x
(Since the "pulsing power button LED" on the laptops, and the behavior
of the Desktop SDP are now similar)

Once again - I don't have a lot of evidence to back this up - however,
if I ifdef out the register writes that actually start the low level
suspend - in
xen/arch/x86/acpi/power.c  acpi_enter_sleep_state() - the rest of the
suspend process completes as though the machine suspended, and then
immediately resumed.

In this case - the system seems to be functioning properly.

Hack to prevent low level S3 attached.



On Fri, Sep 7, 2012 at 8:18 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 07.09.12 at 13:51, Ben Guthro <ben@guthro.net> wrote:
>> However, when I run with console=none, the observed behavior is very
>> different.
>> The system seems to go to sleep successfully - but when I press the
>> power button to wake it up - the power comes on - the fans spin up -
>> but the system is unresponsive.
>> No video
>> No network
>> keyboard LEDs (Caps,Numlock) do not light up.
>>
>>
>> Alternate debugging strategies welcome.
>
> I'm afraid other than being lucky to spot something via code
> inspection, the only alternative is an ITP/ICE. Maybe Intel folks
> could help out debugging this if it's reproducible for them.
>
> Jan
>

[-- Attachment #2: prevent-s3-lowlevel.patch --]
[-- Type: application/octet-stream, Size: 1360 bytes --]

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index de86c46..9231b48 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -253,6 +253,7 @@ int acpi_enter_sleep(struct xenpf_enter_acpi_sleep *sleep)
     return continue_hypercall_on_cpu(0, enter_state_helper, &acpi_sinfo);
 }
 
+#if 0
 static int acpi_get_wake_status(void)
 {
     uint32_t val;
@@ -267,6 +268,7 @@ static int acpi_get_wake_status(void)
     val >>= ACPI_BITPOSITION_WAKE_STATUS;
     return val;
 }
+#endif
 
 static void tboot_sleep(u8 sleep_state)
 {
@@ -316,7 +318,7 @@ static void tboot_sleep(u8 sleep_state)
 /* System is really put into sleep state by this stub */
 acpi_status acpi_enter_sleep_state(u8 sleep_state)
 {
-    acpi_status status;
+    /* acpi_status status; */
 
     if ( tboot_in_measured_env() )
     {
@@ -326,7 +328,7 @@ acpi_status acpi_enter_sleep_state(u8 sleep_state)
     }
 
     ACPI_FLUSH_CPU_CACHE();
-
+#if 0
     status = acpi_hw_register_write(ACPI_REGISTER_PM1A_CONTROL, 
                                     acpi_sinfo.pm1a_cnt_val);
     if ( ACPI_FAILURE(status) )
@@ -343,7 +345,7 @@ acpi_status acpi_enter_sleep_state(u8 sleep_state)
     /* Wait until we enter sleep state, and spin until we wake */
     while ( !acpi_get_wake_status() )
         continue;
-
+#endif
     return_ACPI_STATUS(AE_OK);
 }
 

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-07 16:06                                                                   ` Ben Guthro
@ 2012-09-19 21:07                                                                     ` Ben Guthro
  2012-09-20  6:13                                                                       ` Keir Fraser
  2012-09-20  7:17                                                                       ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-19 21:07 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 5143 bytes --]

No hardware debugger just yet - but I've moved to another machine (Lenovo
T400 laptop) - and am now seeing the following stack trace when I resume
(this is using the tip of the 4.2-testing tree)

It looks like either the vcpu or the runstate is NULL at this point in
the resume process...


(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) CPU#1 already initialized!
(XEN) Stuck ??
(XEN) Error taking CPU1 up: -5
[   38.570054] ACPI: Low-level resume complete
[   38.570054] PM: Restoring platform NVS memory
[   38.570054] Enabling non-boot CPUs ...
(XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c480120585>] vcpu_runstate_get+0xe5/0x130
(XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
(XEN) rax: 00007d3b7fd17180   rbx: ffff8300bd2fe000   rcx: 0000000000000000
(XEN) rdx: ffff08003fc8bd80   rsi: ffff82c48029fe28   rdi: ffff8300bd2fe000
(XEN) rbp: ffff82c48029fe28   rsp: ffff82c48029fdf8   r8:  0000000000000008
(XEN) r9:  00000000000001c0   r10: ffff82c48021f4a0   r11: 0000000000000282
(XEN) r12: ffff82c4802e8ee0   r13: ffff880039762da0   r14: ffff82c4802d3140
(XEN) r15: fffffffffffffff2   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000139ee4000   cr2: 0000000000000060
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c48029fdf8:
(XEN)    ffff8300bd2fe000 ffff82c48029ff18 ffff880037481d40 ffff880039762da0
(XEN)    0000000000000001 ffff82c480157df4 0000000000000070 ffff82f6016db300
(XEN)    00000000000b6d98 ffff8301355d8000 0000000000000070 ffff82c4801702ab
(XEN)    ffff88003fc8bd80 0000000000000000 0000000000000020 ffff8300bd2fe000
(XEN)    ffff8301355d8000 ffff880037481d40 ffff880039762da0 0000000000000001
(XEN)    0000000000000003 ffff82c4801058df ffff82c48029ff18 ffff82c48011462e
(XEN)    0000000000000000 0000000000000000 0000000400000004 ffff82c48029ff18
(XEN)    0000000000000010 ffff8300bd6a0000 ffff8800374819a8 ffff8300bd6a0000
(XEN)    ffff880037481d48 0000000000000001 ffff880039762da0 ffff82c480214288
(XEN)    0000000000000003 0000000000000001 ffff880039762da0 0000000000000001
(XEN)    ffff880037481d48 0000000000000001 0000000000000282 ffff880002dc4240
(XEN)    00000000000001c0 00000000000001c0 0000000000000018 ffffffff8100130a
(XEN)    ffff880037481d40 0000000000000001 0000000000000005 0000010000000000
(XEN)    ffffffff8100130a 000000000000e033 0000000000000282 ffff880037481d20
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 ffff8300bd6a0000 0000000000000000
(XEN)    0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c480120585>] vcpu_runstate_get+0xe5/0x130
(XEN)    [<ffff82c480157df4>] arch_do_vcpu_op+0x134/0x5d0
(XEN)    [<ffff82c4801702ab>] do_update_descriptor+0x1db/0x220
(XEN)    [<ffff82c4801058df>] do_vcpu_op+0x6f/0x4a0
(XEN)    [<ffff82c48011462e>] do_multicall+0x13e/0x330
(XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
(XEN)
(XEN) Pagetable walk from 0000000000000060:
(XEN)  L4[0x000] = 00000001004a5067 0000000000038c9d
(XEN)  L3[0x000] = 000000013a703067 0000000000003094
(XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000060
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


On Fri, Sep 7, 2012 at 12:06 PM, Ben Guthro <ben@guthro.net> wrote:

> I'll work on getting a JTAG, ICE, or something else - it is on an
> Intel SDP - so it should have the ports for it.
>
> My current suspicion on this is that the hardware registers are not
> being programmed the same way as they were in 4.0.x
> (Since the "pulsing power button LED" on the laptops, and the behavior
> of the Desktop SDP are now similar)
>
> Once again - I don't have a lot of evidence to back this up - however,
> if I ifdef out the register writes that actually start the low level
> suspend - in
> xen/arch/x86/acpi/power.c  acpi_enter_sleep_state() - the rest of the
> suspend process completes as though the machine suspended, and then
> immediately resumed.
>
> In this case - the system seems to be functioning properly.
>
>
>
>
>
> Hack to prevent low level S3 attached.
>
>
>
> On Fri, Sep 7, 2012 at 8:18 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>>> On 07.09.12 at 13:51, Ben Guthro <ben@guthro.net> wrote:
> >> However, when I run with console=none, the observed behavior is very
> >> different.
> >> The system seems to go to sleep successfully - but when I press the
> >> power button to wake it up - the power comes on - the fans spin up -
> >> but the system is unresponsive.
> >> No video
> >> No network
> >> keyboard LEDs (Caps,Numlock) do not light up.
> >>
> >>
> >> Alternate debugging strategies welcome.
> >
> > I'm afraid other than being lucky to spot something via code
> > inspection, the only alternative is an ITP/ICE. Maybe Intel folks
> > could help out debugging this if it's reproducible for them.
> >
> > Jan
> >
>

[-- Attachment #1.2: Type: text/html, Size: 6550 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-19 21:07                                                                     ` Ben Guthro
@ 2012-09-20  6:13                                                                       ` Keir Fraser
  2012-09-20  6:24                                                                         ` Keir Fraser
  2012-09-20  8:03                                                                         ` Jan Beulich
  2012-09-20  7:17                                                                       ` Jan Beulich
  1 sibling, 2 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-20  6:13 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich
  Cc: xen-devel, john.baboval, Thomas Goetz, Konrad Rzeszutek Wilk


[-- Attachment #1.1: Type: text/plain, Size: 6097 bytes --]

CPU#1 got stuck in a loop in cpu_init(), as it appears to be 'already
initialised' in the cpu_initialized bitmap. CPU#0 detects it is stuck and
carries on, but the resume code assumes all CPUs are brought back online and
crashes later.

I wonder how long this has been broken. I recall reworking the CPU bringup
code a lot early during 4.1.0 development... And I didn't test S3.

 -- Keir

On 19/09/2012 22:07, "Ben Guthro" <ben@guthro.net> wrote:

> No hardware debugger just yet - but I've moved to another machine (Lenovo T400
> laptop) - and am now seeing the following stack trace when I resume
> (this is using the tip of the 4.2-testing tree)
> 
> It looks like either the vcpu, or the runstate is NULL, at this point in the
> resume process...
> 
> 
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) CPU#1 already initialized!
> (XEN) Stuck ??
> (XEN) Error taking CPU1 up: -5
> [   38.570054] ACPI: Low-level resume complete
> [   38.570054] PM: Restoring platform NVS memory
> [   38.570054] Enabling non-boot CPUs ...
> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82c480120585>] vcpu_runstate_get+0xe5/0x130
> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
> (XEN) rax: 00007d3b7fd17180   rbx: ffff8300bd2fe000   rcx: 0000000000000000
> (XEN) rdx: ffff08003fc8bd80   rsi: ffff82c48029fe28   rdi: ffff8300bd2fe000
> (XEN) rbp: ffff82c48029fe28   rsp: ffff82c48029fdf8   r8:  0000000000000008
> (XEN) r9:  00000000000001c0   r10: ffff82c48021f4a0   r11: 0000000000000282
> (XEN) r12: ffff82c4802e8ee0   r13: ffff880039762da0   r14: ffff82c4802d3140
> (XEN) r15: fffffffffffffff2   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 0000000139ee4000   cr2: 0000000000000060
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff82c48029fdf8:
> (XEN)    ffff8300bd2fe000 ffff82c48029ff18 ffff880037481d40 ffff880039762da0
> (XEN)    0000000000000001 ffff82c480157df4 0000000000000070 ffff82f6016db300
> (XEN)    00000000000b6d98 ffff8301355d8000 0000000000000070 ffff82c4801702ab
> (XEN)    ffff88003fc8bd80 0000000000000000 0000000000000020 ffff8300bd2fe000
> (XEN)    ffff8301355d8000 ffff880037481d40 ffff880039762da0 0000000000000001
> (XEN)    0000000000000003 ffff82c4801058df ffff82c48029ff18 ffff82c48011462e
> (XEN)    0000000000000000 0000000000000000 0000000400000004 ffff82c48029ff18
> (XEN)    0000000000000010 ffff8300bd6a0000 ffff8800374819a8 ffff8300bd6a0000
> (XEN)    ffff880037481d48 0000000000000001 ffff880039762da0 ffff82c480214288
> (XEN)    0000000000000003 0000000000000001 ffff880039762da0 0000000000000001
> (XEN)    ffff880037481d48 0000000000000001 0000000000000282 ffff880002dc4240
> (XEN)    00000000000001c0 00000000000001c0 0000000000000018 ffffffff8100130a
> (XEN)    ffff880037481d40 0000000000000001 0000000000000005 0000010000000000
> (XEN)    ffffffff8100130a 000000000000e033 0000000000000282 ffff880037481d20
> (XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 ffff8300bd6a0000 0000000000000000
> (XEN)    0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82c480120585>] vcpu_runstate_get+0xe5/0x130
> (XEN)    [<ffff82c480157df4>] arch_do_vcpu_op+0x134/0x5d0
> (XEN)    [<ffff82c4801702ab>] do_update_descriptor+0x1db/0x220
> (XEN)    [<ffff82c4801058df>] do_vcpu_op+0x6f/0x4a0
> (XEN)    [<ffff82c48011462e>] do_multicall+0x13e/0x330
> (XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
> (XEN)    
> (XEN) Pagetable walk from 0000000000000060:
> (XEN)  L4[0x000] = 00000001004a5067 0000000000038c9d
> (XEN)  L3[0x000] = 000000013a703067 0000000000003094
> (XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff 
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 0000000000000060
> (XEN) ****************************************
> (XEN) 
> (XEN) Reboot in five seconds...
> 
> 
> On Fri, Sep 7, 2012 at 12:06 PM, Ben Guthro <ben@guthro.net> wrote:
>> I'll work on getting a JTAG, ICE, or something else - it is on an
>> Intel SDP - so it should have the ports for it.
>> 
>> My current suspicion on this is that the hardware registers are not
>> being programmed the same way as they were in 4.0.x
>> (Since the "pulsing power button LED" on the laptops, and the behavior
>> of the Desktop SDP are now similar)
>> 
>> Once again - I don't have a lot of evidence to back this up - however,
>> if I ifdef out the register writes that actually start the low level
>> suspend - in
>> xen/arch/x86/acpi/power.c  acpi_enter_sleep_state() - the rest of the
>> suspend process completes as though the machine suspended, and then
>> immediately resumed.
>> 
>> In this case - the system seems to be functioning properly.
>> 
>> 
>> 
>> 
>> 
>> Hack to prevent low level S3 attached.
>> 
>> 
>> 
>> On Fri, Sep 7, 2012 at 8:18 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 07.09.12 at 13:51, Ben Guthro <ben@guthro.net> wrote:
>>>>> However, when I run with console=none, the observed behavior is very
>>>>> different.
>>>>> The system seems to go to sleep successfully - but when I press the
>>>>> power button to wake it up - the power comes on - the fans spin up -
>>>>> but the system is unresponsive.
>>>>> No video
>>>>> No network
>>>>> keyboard LEDs (Caps,Numlock) do not light up.
>>>>> 
>>>>> 
>>>>> Alternate debugging strategies welcome.
>>>> 
>>>> I'm afraid other than being lucky to spot something via code
>>>> inspection, the only alternative is an ITP/ICE. Maybe Intel folks
>>>> could help out debugging this if it's reproducible for them.
>>>> 
>>>> Jan
>>>> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


[-- Attachment #1.2: Type: text/html, Size: 7353 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-20  6:13                                                                       ` Keir Fraser
@ 2012-09-20  6:24                                                                         ` Keir Fraser
  2012-09-20  8:03                                                                         ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-20  6:24 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich
  Cc: xen-devel, john.baboval, Thomas Goetz, Konrad Rzeszutek Wilk

On 20/09/2012 07:13, "Keir Fraser" <keir.xen@gmail.com> wrote:

> CPU#1 got stuck in a loop in cpu_init(), as it appears to be 'already
> initialised' in the cpu_initialized bitmap. CPU#0 detects it is stuck and carries
> on, but the resume code assumes all CPUs are brought back online and crashes
> later.
> 
> I wonder how long this has been broken. I recall reworking the CPU bringup
> code a lot early during 4.1.0 development... And I didn't test S3.
> 
>  -- Keir

However, I did test CPU hotplug a lot, and S3 uses the hotplug logic to take
down and bring up CPUs. So I don't think I can have broken this.

Are you able to hotplug physical CPUs from dom0 using the
tools/misc/xen-hptool utility? If not, at least this might be a friendlier
test method and environment than a full S3.

 -- Keir

> On 19/09/2012 22:07, "Ben Guthro" <ben@guthro.net> wrote:
> 
>> No hardware debugger just yet - but I've moved to another machine (Lenovo
>> T400 laptop) - and am now seeing the following stack trace when I resume
>> (this is using the tip of the 4.2-testing tree)
>> 
>> It looks like either the vcpu, or the runstate is NULL, at this point in the
>> resume process...
>> 
>> 
>> (XEN) Finishing wakeup from ACPI S3 state.
>> (XEN) Enabling non-boot CPUs  ...
>> (XEN) CPU#1 already initialized!
>> (XEN) Stuck ??
>> (XEN) Error taking CPU1 up: -5
>> [   38.570054] ACPI: Low-level resume complete
>> [   38.570054] PM: Restoring platform NVS memory
>> [   38.570054] Enabling non-boot CPUs ...
>> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82c480120585>] vcpu_runstate_get+0xe5/0x130
>> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
>> (XEN) rax: 00007d3b7fd17180   rbx: ffff8300bd2fe000   rcx: 0000000000000000
>> (XEN) rdx: ffff08003fc8bd80   rsi: ffff82c48029fe28   rdi: ffff8300bd2fe000
>> (XEN) rbp: ffff82c48029fe28   rsp: ffff82c48029fdf8   r8:  0000000000000008
>> (XEN) r9:  00000000000001c0   r10: ffff82c48021f4a0   r11: 0000000000000282
>> (XEN) r12: ffff82c4802e8ee0   r13: ffff880039762da0   r14: ffff82c4802d3140
>> (XEN) r15: fffffffffffffff2   cr0: 000000008005003b   cr4: 00000000000026f0
>> (XEN) cr3: 0000000139ee4000   cr2: 0000000000000060
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen stack trace from rsp=ffff82c48029fdf8:
>> (XEN)    ffff8300bd2fe000 ffff82c48029ff18 ffff880037481d40 ffff880039762da0
>> (XEN)    0000000000000001 ffff82c480157df4 0000000000000070 ffff82f6016db300
>> (XEN)    00000000000b6d98 ffff8301355d8000 0000000000000070 ffff82c4801702ab
>> (XEN)    ffff88003fc8bd80 0000000000000000 0000000000000020 ffff8300bd2fe000
>> (XEN)    ffff8301355d8000 ffff880037481d40 ffff880039762da0 0000000000000001
>> (XEN)    0000000000000003 ffff82c4801058df ffff82c48029ff18 ffff82c48011462e
>> (XEN)    0000000000000000 0000000000000000 0000000400000004 ffff82c48029ff18
>> (XEN)    0000000000000010 ffff8300bd6a0000 ffff8800374819a8 ffff8300bd6a0000
>> (XEN)    ffff880037481d48 0000000000000001 ffff880039762da0 ffff82c480214288
>> (XEN)    0000000000000003 0000000000000001 ffff880039762da0 0000000000000001
>> (XEN)    ffff880037481d48 0000000000000001 0000000000000282 ffff880002dc4240
>> (XEN)    00000000000001c0 00000000000001c0 0000000000000018 ffffffff8100130a
>> (XEN)    ffff880037481d40 0000000000000001 0000000000000005 0000010000000000
>> (XEN)    ffffffff8100130a 000000000000e033 0000000000000282 ffff880037481d20
>> (XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 ffff8300bd6a0000 0000000000000000
>> (XEN)    0000000000000000
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82c480120585>] vcpu_runstate_get+0xe5/0x130
>> (XEN)    [<ffff82c480157df4>] arch_do_vcpu_op+0x134/0x5d0
>> (XEN)    [<ffff82c4801702ab>] do_update_descriptor+0x1db/0x220
>> (XEN)    [<ffff82c4801058df>] do_vcpu_op+0x6f/0x4a0
>> (XEN)    [<ffff82c48011462e>] do_multicall+0x13e/0x330
>> (XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
>> (XEN)    
>> (XEN) Pagetable walk from 0000000000000060:
>> (XEN)  L4[0x000] = 00000001004a5067 0000000000038c9d
>> (XEN)  L3[0x000] = 000000013a703067 0000000000003094
>> (XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff 
>> (XEN) 
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) FATAL PAGE FAULT
>> (XEN) [error_code=0000]
>> (XEN) Faulting linear address: 0000000000000060
>> (XEN) ****************************************
>> (XEN) 
>> (XEN) Reboot in five seconds...
>> 
>> 
>> On Fri, Sep 7, 2012 at 12:06 PM, Ben Guthro <ben@guthro.net> wrote:
>>> I'll work on getting a JTAG, ICE, or something else - it is on an
>>> Intel SDP - so it should have the ports for it.
>>> 
>>> My current suspicion on this is that the hardware registers are not
>>> being programmed the same way as they were in 4.0.x
>>> (Since the "pulsing power button LED" on the laptops, and the behavior
>>> of the Desktop SDP are now similar)
>>> 
>>> Once again - I don't have a lot of evidence to back this up - however,
>>> if I ifdef out the register writes that actually start the low level
>>> suspend - in
>>> xen/arch/x86/acpi/power.c  acpi_enter_sleep_state() - the rest of the
>>> suspend process completes as though the machine suspended, and then
>>> immediately resumed.
>>> 
>>> In this case - the system seems to be functioning properly.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Hack to prevent low level S3 attached.
>>> 
>>> 
>>> 
>>> On Fri, Sep 7, 2012 at 8:18 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 07.09.12 at 13:51, Ben Guthro <ben@guthro.net> wrote:
>>>>> However, when I run with console=none, the observed behavior is very
>>>>> different.
>>>>> The system seems to go to sleep successfully - but when I press the
>>>>> power button to wake it up - the power comes on - the fans spin up -
>>>>> but the system is unresponsive.
>>>>> No video
>>>>> No network
>>>>> keyboard LEDs (Caps,Numlock) do not light up.
>>>>> 
>>>>> 
>>>>> Alternate debugging strategies welcome.
>>>> 
>>>> I'm afraid other than being lucky to spot something via code
>>>> inspection, the only alternative is an ITP/ICE. Maybe Intel folks
>>>> could help out debugging this if it's reproducible for them.
>>>> 
>>>> Jan
>>>> 
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-19 21:07                                                                     ` Ben Guthro
  2012-09-20  6:13                                                                       ` Keir Fraser
@ 2012-09-20  7:17                                                                       ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-20  7:17 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 19.09.12 at 23:07, Ben Guthro <ben@guthro.net> wrote:
> No hardware debugger just yet - but I've moved to another machine (Lenovo
> T400 laptop) - and am now seeing the following stack trace when I resume
> (this is using the tip of the 4.2-testing tree)

Considering that the situation is similar to the one with very early
boot problems - as long as the system either reboots on its own
(albeit I don't think you ever said it does) or has a reset button
(or other way to initiate reset without turning off power), an
alternative debugging method would be to write data into I/O
ports the values of which persist across reset, and read them out
during the following boot. I have found the 0x008x range to be a
candidate for this, as well as certain DMA controller ones. And of
course, if the CMOS RAM has its upper 128 bytes present and
(part of it) usable (i.e. otherwise unused), that would even be
an option permitting power cycling the system without losing the
information.

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-20  6:13                                                                       ` Keir Fraser
  2012-09-20  6:24                                                                         ` Keir Fraser
@ 2012-09-20  8:03                                                                         ` Jan Beulich
  2012-09-20  8:14                                                                           ` Keir Fraser
  2012-09-20 12:56                                                                           ` Ben Guthro
  1 sibling, 2 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-20  8:03 UTC (permalink / raw)
  To: Keir Fraser, Ben Guthro
  Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 20.09.12 at 08:13, Keir Fraser <keir.xen@gmail.com> wrote:
> CPU#1 got stuck in a loop in cpu_init() as it appears to be 'already
> initialised' in cpu_initialized bitmap. CPU#0 detects it is stuck and
> carries on, but the resume code assumes all CPUs are brought back online and
> crashes later.

So this would suggest play_dead() (-> cpu_exit_clear() ->
cpu_uninit()) not getting reached during the suspend cycle.
That should be fairly easy to verify, as the serial console
ought to still work when the secondary CPUs get offlined.

That might imply cpumask_clear_cpu(cpu, &cpu_online_map)
not getting reached in __cpu_disable(), which would be in line
with the observation that none of the logs provided so far
showed anything being done by fixup_irqs() (called right
after clearing the online bit).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-20  8:03                                                                         ` Jan Beulich
@ 2012-09-20  8:14                                                                           ` Keir Fraser
  2012-09-20 12:56                                                                           ` Ben Guthro
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-20  8:14 UTC (permalink / raw)
  To: Jan Beulich, Keir Fraser, Ben Guthro
  Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On 20/09/2012 09:03, "Jan Beulich" <JBeulich@suse.com> wrote:

>>>> On 20.09.12 at 08:13, Keir Fraser <keir.xen@gmail.com> wrote:
>> CPU#1 got stuck in a loop in cpu_init() as it appears to be 'already
>> initialised' in cpu_initialized bitmap. CPU#0 detects it is stuck and
>> carries on, but the resume code assumes all CPUs are brought back online and
>> crashes later.
> 
> So this would suggest play_dead() (-> cpu_exit_clear() ->
> cpu_uninit()) not getting reached during the suspend cycle.
> That should be fairly easy to verify, as the serial console
> ought to still work when the secondary CPUs get offlined.

Yes.

> That might imply cpumask_clear_cpu(cpu, &cpu_online_map)
> not getting reached in __cpu_disable(), which would be in line
> with the observation that none of the logs provided so far
> showed anything being done by fixup_irqs() (called right
> after clearing the online bit).

I did just test cpu offline/online via xen-hptool myself, and that does
work. So perhaps this is platform specific, or S3 specific...

 -- Keir

> Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-20  8:03                                                                         ` Jan Beulich
  2012-09-20  8:14                                                                           ` Keir Fraser
@ 2012-09-20 12:56                                                                           ` Ben Guthro
  2012-09-20 13:07                                                                             ` Keir Fraser
  2012-09-20 20:30                                                                             ` Ben Guthro
  1 sibling, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-20 12:56 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1261 bytes --]

It appears __cpu_disable() is not getting reached at all, for CPU1

I put a CPU-id-conditional BUG() call in there to verify - and while it is
reached when using
xen-hptool cpu-offline 1
it never seems to be reached from the S3 path.


What is the expected call chain to get into this code during S3?


On Thu, Sep 20, 2012 at 4:03 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 20.09.12 at 08:13, Keir Fraser <keir.xen@gmail.com> wrote:
> > CPU#1 got stuck in a loop in cpu_init() as it appears to be 'already
> > initialised' in cpu_initialized bitmap. CPU#0 detects it is stuck and
> > carries on, but the resume code assumes all CPUs are brought back online
> and
> > crashes later.
>
> So this would suggest play_dead() (-> cpu_exit_clear() ->
> cpu_uninit()) not getting reached during the suspend cycle.
> That should be fairly easy to verify, as the serial console
> ought to still work when the secondary CPUs get offlined.
>
> That might imply cpumask_clear_cpu(cpu, &cpu_online_map)
> not getting reached in __cpu_disable(), which would be in line
> with the observation that none of the logs provided so far
> showed anything being done by fixup_irqs() (called right
> after clearing the online bit).
>
> Jan
>

[-- Attachment #1.2: Type: text/html, Size: 1776 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-20 12:56                                                                           ` Ben Guthro
@ 2012-09-20 13:07                                                                             ` Keir Fraser
  2012-09-20 20:30                                                                             ` Ben Guthro
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-20 13:07 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich
  Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1632 bytes --]

disable_nonboot_cpus() -> cpu_down(1) -> ...

...and from there it is same as the xen-hptool case:
arch_do_sysctl() -> cpu_down_helper() -> cpu_down(1) ->
stop_machine_run(take_cpu_down, 1) -> [on CPU#1] take_cpu_down() ->
__cpu_disable()

On 20/09/2012 13:56, "Ben Guthro" <ben@guthro.net> wrote:

> It appears __cpu_disable() is not getting reached at all, for CPU1
> 
> I put a CPU-id-conditional BUG() call in there to verify - and while it is
> reached when using
> xen-hptool cpu-offline 1
> it never seems to be reached from the S3 path.
> 
> 
> What is the expected call chain to get into this code during S3?
> 
> 
> On Thu, Sep 20, 2012 at 4:03 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 20.09.12 at 08:13, Keir Fraser <keir.xen@gmail.com> wrote:
>>> CPU#1 got stuck in a loop in cpu_init() as it appears to be 'already
>>> initialised' in cpu_initialized bitmap. CPU#0 detects it is stuck and
>>> carries on, but the resume code assumes all CPUs are brought back online
>>> and crashes later.
>> 
>> So this would suggest play_dead() (-> cpu_exit_clear() ->
>> cpu_uninit()) not getting reached during the suspend cycle.
>> That should be fairly easy to verify, as the serial console
>> ought to still work when the secondary CPUs get offlined.
>> 
>> That might imply cpumask_clear_cpu(cpu, &cpu_online_map)
>> not getting reached in __cpu_disable(), which would be in line
>> with the observation that none of the logs provided so far
>> showed anything being done by fixup_irqs() (called right
>> after clearing the online bit).
>> 
>> Jan
> 
> 


[-- Attachment #1.2: Type: text/html, Size: 2419 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-20 12:56                                                                           ` Ben Guthro
  2012-09-20 13:07                                                                             ` Keir Fraser
@ 2012-09-20 20:30                                                                             ` Ben Guthro
  2012-09-21  6:34                                                                               ` Keir Fraser
  2012-09-21  6:47                                                                               ` Jan Beulich
  1 sibling, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-20 20:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel



On Thu, Sep 20, 2012 at 8:56 AM, Ben Guthro <ben@guthro.net> wrote:

> It appears __cpu_disable() is not getting reached at all, for CPU1
>
>
I was incorrect about this, after messing around with various serial
configs to properly capture all of the output.

I have traced this through and verified that the sequence in question does,
in fact, seem to be getting executed:


disable_nonboot_cpus()
cpu_down()
__cpu_disable()
play_dead()
cpu_exit_clear()
cpu_uninit()
__cpu_die()
do_suspend_lowlevel()

I also enabled the printk's in smpboot.c


[   32.145824] ACPI: Preparing to enter system sleep state S3
[   32.600118] PM: Saving platform NVS memory
[   32.671666] Disabling non-boot CPUs ...
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Bringing CPU1 down
(XEN) Disabling CPU1
(XEN) Disabled CPU1
(XEN) play_dead: CPU1
(XEN) cpu_exit_clear: CPU1
(XEN) cpu_uninit: CPU1
(XEN) CPU1 dead
(XEN) Entering ACPI S3 state.
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) Bringing CPU1 up
(XEN) Setting warm reset code and vector.
(XEN) Asserting INIT.
(XEN) Waiting for send to finish...
(XEN) +Deasserting INIT.
(XEN) Waiting for send to finish...
(XEN) +#startup loops: 2.
(XEN) Sending STARTUP #1.
(XEN) After apic_write.
(XEN) CPU#1 already initialized!
(XEN) Startup point 1.
(XEN) Waiting for send to finish...
(XEN) +Sending STARTUP #2.
(XEN) After apic_write.
(XEN) Startup point 1.
(XEN) Waiting for send to finish...
(XEN) +After Startup.
(XEN) After Callout 1.
(XEN) Stuck ??
(XEN) cpu_exit_clear: CPU1
(XEN) cpu_uninit: CPU1
(XEN) __cpu_up - do_boot_cpu error
(XEN) cpu_up CPU1 CPU not up
(XEN) cpu_up CPU1 fail
(XEN) Error taking CPU1 up: -5
[   32.780055] ACPI: Low-level resume complete
[   32.780055] PM: Restoring platform NVS memory
[   32.780055] Enabling non-boot CPUs ...

then it crashes.

It seems that it always falls through into the "else" clause of
do_boot_cpu() when attempting to bring the CPU back up, seemingly
stuck in CPU_STATE_CALLOUT.


Any ideas as to what might be causing it to get stuck in that state?






> I put a cpu id conditional BUG() call in there, to verify - and while it
> is reached when using
> xen-hptool cpu-offline 1
> It never seems to be reached from the S3 path.
>
>
> What is the expected call chain to get into this code during S3?
>
>
> On Thu, Sep 20, 2012 at 4:03 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 20.09.12 at 08:13, Keir Fraser <keir.xen@gmail.com> wrote:
>> > CPU#1 got stuck in loop in cpu_init() as it appears to be 'already
>> > initialised' in cpu_initialized bitmap. CPU#0 detects it is stuck and
>> > carries on, but the resume code assumes all CPUs are brought back
>> online and
>> > crashes later.
>>
>> So this would suggest play_dead() (-> cpu_exit_clear() ->
>> cpu_uninit()) not getting reached during the suspend cycle.
>> That should be fairly easy to verify, as the serial console
>> ought to still work when the secondary CPUs get offlined.
>>
>> That might imply cpumask_clear_cpu(cpu, &cpu_online_map)
>> not getting reached in __cpu_disable(), which would be in line
>> with the observation that none of the logs provided so far
>> showed anything being done by fixup_irqs() (called right
>> after clearing the online bit).
>>
>> Jan
>>
>
>


* Re: Xen4.2 S3 regression?
  2012-09-20 20:30                                                                             ` Ben Guthro
@ 2012-09-21  6:34                                                                               ` Keir Fraser
  2012-09-21  6:47                                                                               ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-21  6:34 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich
  Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On 20/09/2012 21:30, "Ben Guthro" <ben@guthro.net> wrote:

> (XEN) Bringing CPU1 down
> (XEN) Disabling CPU1
> (XEN) Disabled CPU1
> (XEN) play_dead: CPU1
> (XEN) cpu_exit_clear: CPU1
> (XEN) cpu_uninit: CPU1
> (XEN) CPU1 dead

So CPU1 is taken down properly, apparently...

> (XEN) Entering ACPI S3 state.

... During S3 suspend.

> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) Bringing CPU1 up
> (XEN) Setting warm reset code and vector.
> (XEN) Asserting INIT.
> (XEN) Waiting for send to finish...
> (XEN) +Deasserting INIT.
> (XEN) Waiting for send to finish...
> (XEN) +#startup loops: 2.
> (XEN) Sending STARTUP #1.
> (XEN) After apic_write.
> (XEN) CPU#1 already initialized!

But here CPU1 thinks it is already initialised! *This* is the bug you need
to go look at. CPU1 will spin at this point...

> (XEN) Startup point 1.
> (XEN) Waiting for send to finish...
> (XEN) +Sending STARTUP #2.
> (XEN) After apic_write.
> (XEN) Startup point 1.
> (XEN) Waiting for send to finish...
> (XEN) +After Startup.
> (XEN) After Callout 1.
> (XEN) Stuck ??

...Causing CPU0 to think CPU1 is stuck (which is fair, because it is).

> (XEN) cpu_exit_clear: CPU1
> (XEN) cpu_uninit: CPU1
> (XEN) __cpu_up - do_boot_cpu error
> (XEN) cpu_up CPU1 CPU not up
> (XEN) cpu_up CPU1 fail
> (XEN) Error taking CPU1 up: -5
> [   32.780055] ACPI: Low-level resume complete
> [   32.780055] PM: Restoring platform NVS memory
> [   32.780055] Enabling non-boot CPUs ...
> 
> then it crashes.
> 
> It seems that it is always falling through into the "else" clause of
> the do_boot_cpu() function when attempting to bring it back up, seemingly
> stuck in CPU_STATE_CALLOUT
> 
> Any ideas as to what might be causing it to get stuck in that state?

Yes, see explanation above, which is actually the same explanation I gave
you before. You need to go investigate why CPU1 is getting confused in
cpu_init().

 -- Keir

> 
> 
> 
>  
>> I put a cpu id conditional BUG() call in there, to verify - and while it is
>> reached when using 
>> xen-hptool cpu-offline 1
>> It never seems to be reached from the S3 path.
>> 
>> 
>> What is the expected call chain to get into this code during S3?
>> 
>> 
>> On Thu, Sep 20, 2012 at 4:03 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 20.09.12 at 08:13, Keir Fraser <keir.xen@gmail.com> wrote:
>>>> CPU#1 got stuck in loop in cpu_init() as it appears to be 'already
>>>> initialised' in cpu_initialized bitmap. CPU#0 detects it is stuck and
>>>> carries on, but the resume code assumes all CPUs are brought back online
>>>> and
>>>> crashes later.
>>> 
>>> So this would suggest play_dead() (-> cpu_exit_clear() ->
>>> cpu_uninit()) not getting reached during the suspend cycle.
>>> That should be fairly easy to verify, as the serial console
>>> ought to still work when the secondary CPUs get offlined.
>>> 
>>> That might imply cpumask_clear_cpu(cpu, &cpu_online_map)
>>> not getting reached in __cpu_disable(), which would be in line
>>> with the observation that none of the logs provided so far
>>> showed anything being done by fixup_irqs() (called right
>>> after clearing the online bit).
>>> 
>>> Jan
>> 
> 
> 




* Re: Xen4.2 S3 regression?
  2012-09-20 20:30                                                                             ` Ben Guthro
  2012-09-21  6:34                                                                               ` Keir Fraser
@ 2012-09-21  6:47                                                                               ` Jan Beulich
  2012-09-21 18:20                                                                                 ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-21  6:47 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel

>>> On 20.09.12 at 22:30, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Sep 20, 2012 at 8:56 AM, Ben Guthro <ben@guthro.net> wrote:
> [   32.145824] ACPI: Preparing to enter system sleep state S3
> [   32.600118] PM: Saving platform NVS memory
> [   32.671666] Disabling non-boot CPUs ...
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Bringing CPU1 down
> (XEN) Disabling CPU1
> (XEN) Disabled CPU1
> (XEN) play_dead: CPU1
> (XEN) cpu_exit_clear: CPU1
> (XEN) cpu_uninit: CPU1
> (XEN) CPU1 dead

So how about inserting a printout of cpu_initialized here ...

> (XEN) Entering ACPI S3 state.
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...

... and here?

Plus adding "cpuinfo" to the Xen command line, which would allow
us to see whether CPU1 gets into cpu_init() twice.

Perhaps tracking cpu_state throughout the below sequence might
also be useful.

> (XEN) Bringing CPU1 up
> (XEN) Setting warm reset code and vector.
> (XEN) Asserting INIT.
> (XEN) Waiting for send to finish...
> (XEN) +Deasserting INIT.
> (XEN) Waiting for send to finish...
> (XEN) +#startup loops: 2.
> (XEN) Sending STARTUP #1.
> (XEN) After apic_write.
> (XEN) CPU#1 already initialized!
> (XEN) Startup point 1.
> (XEN) Waiting for send to finish...
> (XEN) +Sending STARTUP #2.
> (XEN) After apic_write.
> (XEN) Startup point 1.
> (XEN) Waiting for send to finish...
> (XEN) +After Startup.
> (XEN) After Callout 1.
> (XEN) Stuck ??
> (XEN) cpu_exit_clear: CPU1
> (XEN) cpu_uninit: CPU1
> (XEN) __cpu_up - do_boot_cpu error
> (XEN) cpu_up CPU1 CPU not up
> (XEN) cpu_up CPU1 fail
> (XEN) Error taking CPU1 up: -5
> [   32.780055] ACPI: Low-level resume complete
> [   32.780055] PM: Restoring platform NVS memory
> [   32.780055] Enabling non-boot CPUs ...
> 
> then it crashes.
> 
> It seems that it is always falling through into the "else" clause of
> the do_boot_cpu() function when attempting to bring it back up, seemingly
> stuck in CPU_STATE_CALLOUT
> 
> 
> Any ideas as to what might be causing it to get stuck in that state?

That's because CPU1 is stuck in cpu_init() (in the infinite loop after
printing "CPU#1 already initialized!"), as Keir pointed out yesterday.

Jan


* Re: Xen4.2 S3 regression?
  2012-09-21  6:47                                                                               ` Jan Beulich
@ 2012-09-21 18:20                                                                                 ` Ben Guthro
  2012-09-21 18:42                                                                                   ` Keir Fraser
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-21 18:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel



On Fri, Sep 21, 2012 at 2:47 AM, Jan Beulich <JBeulich@suse.com> wrote:

>
> That's because CPU1 is stuck in cpu_init() (in the infinite loop after
> printing "CPU#1 already initialized!"), as Keir pointed out yesterday.
>
>
I've done some more tracing on this and instrumented cpu_init() and
cpu_uninit(), and found something I cannot quite explain.
I was most interested in the cpu_initialized mask, declared just above
these two functions (and used only in those two functions).

I convert cpu_initialized to a string using cpumask_scnprintf, and print it
out whenever it is read or written in these two functions.

When CPU1 is being torn down, the cpumask bit for CPU1 gets cleared, and I
am able to print this to the console to verify it.
However, when the machine is returning from S3 and going through cpu_init(),
the bit is set again.

Could this be an issue of caches not being flushed?

I see that the last thing done before acpi_enter_sleep_state() actually
writes PM1A_CONTROL / PM1B_CONTROL to enter S3 is an ACPI_FLUSH_CPU_CACHE().

This analysis seems unlikely, at this point...but I'm not sure what to make
of the data other than a cache issue.

Am I "barking up the wrong tree" here?


* Re: Xen4.2 S3 regression?
  2012-09-21 18:20                                                                                 ` Ben Guthro
@ 2012-09-21 18:42                                                                                   ` Keir Fraser
  2012-09-24 11:22                                                                                     ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Keir Fraser @ 2012-09-21 18:42 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich
  Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

On 21/09/2012 19:20, "Ben Guthro" <ben@guthro.net> wrote:

> 
> 
> On Fri, Sep 21, 2012 at 2:47 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> 
>> That's because CPU1 is stuck in cpu_init() (in the infinite loop after
>> printing "CPU#1 already initialized!"), as Keir pointed out yesterday.
>> 
> 
> I've done some more tracing on this, and instrumented cpu_init(), cpu_uninit()
> - and found something I cannot quite explain.
> I was most interested in the cpu_initialized mask, set just above these two
> functions (and only used in those two functions)
> 
> I convert  cpu_initialized to a string, using cpumask_scnprintf - and print it
> out when it is read, or written in these two functions.
> 
> When CPU1 is being torn down, the cpumask bit gets cleared for CPU1, and I am
> able to print this to the console to verify.
> However, when the machine is returning from S3, and going through cpu_init -
> the bit is set again.
> 
> Could this be an issue of caches not being flushed?
> 
> I see that the last thing done before acpi_enter_sleep_state actually
> writes PM1A_CONTROL / PM1B_CONTROL to enter S3 is a ACPI_FLUSH_CPU_CACHE()
> 
> This analysis seems unlikely, at this point...but I'm not sure what to make of
> the data other than a cache issue.
> 
> Am I "barking up the wrong tree" here?

Perhaps not. Try dumping it immediately before and after the actual S3
sleep. Since you probably can't print to serial line at that point, you
could just take a copy of the bitmap and print them both shortly after S3
resume. Then if it still looks bad, or the problem magically resolves with
the extra printing, you can suspect cache flush a bit more strongly.
However, WBINVD (which is what ACPI_FLUSH_CPU_CACHE() is) should be enough.

 -- Keir


* Re: Xen4.2 S3 regression?
  2012-09-21 18:42                                                                                   ` Keir Fraser
@ 2012-09-24 11:22                                                                                     ` Jan Beulich
  2012-09-24 11:25                                                                                       ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 11:22 UTC (permalink / raw)
  To: Ben Guthro, Keir Fraser
  Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 21.09.12 at 20:42, Keir Fraser <keir@xen.org> wrote:
> On 21/09/2012 19:20, "Ben Guthro" <ben@guthro.net> wrote:
> 
>> 
>> 
>> On Fri, Sep 21, 2012 at 2:47 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>> 
>>> That's because CPU1 is stuck in cpu_init() (in the infinite loop after
>>> printing "CPU#1 already initialized!"), as Keir pointed out yesterday.
>>> 
>> 
>> I've done some more tracing on this, and instrumented cpu_init(), 
> cpu_uninit()
>> - and found something I cannot quite explain.
>> I was most interested in the cpu_initialized mask, set just above these two
>> functions (and only used in those two functions)
>> 
>> I convert  cpu_initialized to a string, using cpumask_scnprintf - and print 
> it
>> out when it is read, or written in these two functions.
>> 
>> When CPU1 is being torn down, the cpumask bit gets cleared for CPU1, and I 
> am
>> able to print this to the console to verify.
>> However, when the machine is returning from S3, and going through cpu_init -
>> the bit is set again.
>> 
>> Could this be an issue of caches not being flushed?
>> 
>> I see that the last thing done before acpi_enter_sleep_state actually
>> writes PM1A_CONTROL / PM1B_CONTROL to enter S3 is a ACPI_FLUSH_CPU_CACHE()
>> 
>> This analysis seems unlikely, at this point...but I'm not sure what to make 
> of
>> the data other than a cache issue.
>> 
>> Am I "barking up the wrong tree" here?
> 
> Perhaps not. Try dumping it immediately before and after the actual S3
> sleep. Since you probably can't print to serial line at that point, you
> could just take a copy of the bitmap and print them both shortly after S3
> resume. Then if it still looks bad, or the problem magically resolves with
> the extra printing, you can suspect cache flush a bit more strongly.
> However, WBINVD (which is what ACPI_FLUSH_CPU_CACHE() is) should be enough.

CPU0 issuing WBINVD might not be enough; other CPUs should
probably also do so unconditionally (currently they do this only
when using one of the advanced halt forms in acpi_dead_idle()).

While one would think that a halted CPU would not only continue
to keep its cache up-to-date, but also eventually write back its
dirty cache lines, I don't think the latter is actually guaranteed,
so if the CPU ends up getting the INIT before the line was written
back, the modification could get lost.

But of course this theory depends on Ben's system actually using
the default halt mechanism rather than one of the advanced ones.

Jan


* Re: Xen4.2 S3 regression?
  2012-09-24 11:22                                                                                     ` Jan Beulich
@ 2012-09-24 11:25                                                                                       ` Ben Guthro
  2012-09-24 11:45                                                                                         ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 11:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel



It is an older system (Core2) - so it may be using the default mechanism,
but I'd have to go dig up some processor docs to be sure.

I'm still seeing some behavior in my experiments that I can't entirely
explain (yet). I'll be spending some time today looking into it more
closely.


On Mon, Sep 24, 2012 at 7:22 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 21.09.12 at 20:42, Keir Fraser <keir@xen.org> wrote:
> > On 21/09/2012 19:20, "Ben Guthro" <ben@guthro.net> wrote:
> >
> >>
> >>
> >> On Fri, Sep 21, 2012 at 2:47 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>>
> >>> That's because CPU1 is stuck in cpu_init() (in the infinite loop after
> >>> printing "CPU#1 already initialized!"), as Keir pointed out yesterday.
> >>>
> >>
> >> I've done some more tracing on this, and instrumented cpu_init(),
> > cpu_uninit()
> >> - and found something I cannot quite explain.
> >> I was most interested in the cpu_initialized mask, set just above these
> two
> >> functions (and only used in those two functions)
> >>
> >> I convert  cpu_initialized to a string, using cpumask_scnprintf - and
> print
> > it
> >> out when it is read, or written in these two functions.
> >>
> >> When CPU1 is being torn down, the cpumask bit gets cleared for CPU1,
> and I
> > am
> >> able to print this to the console to verify.
> >> However, when the machine is returning from S3, and going through
> cpu_init -
> >> the bit is set again.
> >>
> >> Could this be an issue of caches not being flushed?
> >>
> >> I see that the last thing done before acpi_enter_sleep_state actually
> >> writes PM1A_CONTROL / PM1B_CONTROL to enter S3 is a
> ACPI_FLUSH_CPU_CACHE()
> >>
> >> This analysis seems unlikely, at this point...but I'm not sure what to
> make
> > of
> >> the data other than a cache issue.
> >>
> >> Am I "barking up the wrong tree" here?
> >
> > Perhaps not. Try dumping it immediately before and after the actual S3
> > sleep. Since you probably can't print to serial line at that point, you
> > could just take a copy of the bitmap and print them both shortly after S3
> > resume. Then if it still looks bad, or the problem magically resolves
> with
> > the extra printing, you can suspect cache flush a bit more strongly.
> > However, WBINVD (which is what ACPI_FLUSH_CPU_CACHE() is) should be
> enough.
>
> CPU0 issuing WBINVD might not be enough; other CPUs should
> probably also do so unconditionally (currently they do this only
> when using one of the advanced halt forms in acpi_dead_idle()).
>
> While one would think that a halted CPU would not only continue
> to keep its cache up-to-date, but also eventually write back its
> dirty cache lines, I don't think the latter is actually guaranteed,
> so if the CPU ends up getting the INIT before the line was written
> back, the modification could get lost.
>
> But of course this theory depends on Ben's system actually using
> the default halt mechanism rather than one of the advanced ones.
>
> Jan
>
>


* Re: Xen4.2 S3 regression?
  2012-09-24 11:25                                                                                       ` Ben Guthro
@ 2012-09-24 11:45                                                                                         ` Jan Beulich
  2012-09-24 11:54                                                                                           ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 11:45 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel

>>> On 24.09.12 at 13:25, Ben Guthro <ben@guthro.net> wrote:
> It is an older system (Core2) - so it may be using the default mechanism,
> but I'd have to go dig up some processor docs to be sure.

This is not just a matter of what the CPU supports, but also what
info gets passed down from Dom0 - if e.g. your kernel doesn't
have Konrad's P-/C-state patches, then there's no way for Xen to
use any of the advanced methods.

Jan


* Re: Xen4.2 S3 regression?
  2012-09-24 11:45                                                                                         ` Jan Beulich
@ 2012-09-24 11:54                                                                                           ` Ben Guthro
  2012-09-24 12:05                                                                                             ` Jan Beulich
                                                                                                               ` (2 more replies)
  0 siblings, 3 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 11:54 UTC (permalink / raw)
  To: Jan Beulich, Konrad Rzeszutek Wilk
  Cc: Keir Fraser, john.baboval, Thomas Goetz, xen-devel



I see - Konrad - do you have a changeset I can look for with the features
Jan describes?


On Mon, Sep 24, 2012 at 7:45 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 24.09.12 at 13:25, Ben Guthro <ben@guthro.net> wrote:
> > It is an older system (Core2) - so it may be using the default mechanism,
> > but I'd have to go dig up some processor docs to be sure.
>
> This is not just a matter of what the CPU supports, but also what
> info gets passed down from Dom0 - if e.g. your kernel doesn't
> have Konrad's P-/C-state patches, then there's no way for Xen to
> use any of the advanced methods.
>
> Jan
>
>


* Re: Xen4.2 S3 regression?
  2012-09-24 11:54                                                                                           ` Ben Guthro
@ 2012-09-24 12:05                                                                                             ` Jan Beulich
  2012-09-24 12:24                                                                                               ` Ben Guthro
  2012-09-24 12:22                                                                                             ` Pasi Kärkkäinen
  2012-09-24 14:02                                                                                             ` Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 12:05 UTC (permalink / raw)
  To: Ben Guthro, Konrad Rzeszutek Wilk
  Cc: Keir Fraser, john.baboval, Thomas Goetz, xen-devel

>>> On 24.09.12 at 13:54, Ben Guthro <ben@guthro.net> wrote:
> I see - Konrad - do you have a changeset I can look for with the features
> Jan describes?

Alternatively you could also stick a wbinvd() in the two possibly
relevant places...

Jan

> On Mon, Sep 24, 2012 at 7:45 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 24.09.12 at 13:25, Ben Guthro <ben@guthro.net> wrote:
>> > It is an older system (Core2) - so it may be using the default mechanism,
>> > but I'd have to go dig up some processor docs to be sure.
>>
>> This is not just a matter of what the CPU supports, but also what
>> info gets passed down from Dom0 - if e.g. your kernel doesn't
>> have Konrad's P-/C-state patches, then there's no way for Xen to
>> use any of the advanced methods.
>>
>> Jan
>>
>>


* Re: Xen4.2 S3 regression?
  2012-09-24 11:54                                                                                           ` Ben Guthro
  2012-09-24 12:05                                                                                             ` Jan Beulich
@ 2012-09-24 12:22                                                                                             ` Pasi Kärkkäinen
  2012-09-24 12:27                                                                                               ` Ben Guthro
  2012-09-24 14:02                                                                                             ` Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 134+ messages in thread
From: Pasi Kärkkäinen @ 2012-09-24 12:22 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Jan Beulich, Thomas Goetz

On Mon, Sep 24, 2012 at 07:54:05AM -0400, Ben Guthro wrote:
>    I see - Konrad - do you have a changeset I can look for with the features
>    Jan describes?

The xen-acpi-processor driver was merged into the upstream Linux kernel in v3.4.

-- Pasi

>    On Mon, Sep 24, 2012 at 7:45 AM, Jan Beulich <[1]JBeulich@suse.com> wrote:
> 
>      >>> On 24.09.12 at 13:25, Ben Guthro <[2]ben@guthro.net> wrote:
>      > It is an older system (Core2) - so it may be using the default
>      mechanism,
>      > but I'd have to go dig up some processor docs to be sure.
> 
>      This is not just a matter of what the CPU supports, but also what
>      info gets passed down from Dom0 - if e.g. your kernel doesn't
>      have Konrad's P-/C-state patches, then there's no way for Xen to
>      use any of the advanced methods.
>      Jan
> 
> References
> 
>    Visible links
>    1. mailto:JBeulich@suse.com
>    2. mailto:ben@guthro.net


* Re: Xen4.2 S3 regression?
  2012-09-24 12:05                                                                                             ` Jan Beulich
@ 2012-09-24 12:24                                                                                               ` Ben Guthro
  2012-09-24 12:32                                                                                                 ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 12:24 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Keir Fraser, john.baboval, Thomas Goetz,
	Konrad Rzeszutek Wilk



On Mon, Sep 24, 2012 at 8:05 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 24.09.12 at 13:54, Ben Guthro <ben@guthro.net> wrote:
> > I see - Konrad - do you have a changeset I can look for with the features
> > Jan describes?
>
> Alternatively you could also stick a wbinvd() in the two possibly
> relevant places...
>
>
I tried doing this right before disable_nonboot_cpus() in power.c, to no
effect.

The behavior that I'm currently failing to understand seems to be a
heisenbug, exhibited by the 2 attached patches, which are essentially the
same, with a cpumask_copy in 2 different places.

When I have the debug copy just before do_suspend_lowlevel()
(debug1.patch), the output changes to what I would expect:

(XEN) XXX: CACHE issue? before=1 after=1

That is, only CPU0 is initialized both before and after the S3 cycle.

However, when that same copy is moved up to before disable_nonboot_cpus(),
as in debug2.patch, it seems to have an effect on the state afterward. I
would have expected something like

(XEN) XXX: CACHE issue? before=3 after=1

but that is not what is actually printed. Instead I get

(XEN) XXX: CACHE issue? before=3 after=3

That is, the initialized bit is set for CPU1 when it shouldn't be.


It should be noted that even with the debug code in place that causes the
value to be what I expect, this is not in itself sufficient to get things
working entirely, as I then get rcu_sched stalls in the kernel.



> Jan
>
> > On Mon, Sep 24, 2012 at 7:45 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 24.09.12 at 13:25, Ben Guthro <ben@guthro.net> wrote:
> >> > It is an older system (Core2) - so it may be using the default
> mechanism,
> >> > but I'd have to go dig up some processor docs to be sure.
> >>
> >> This is not just a matter of what the CPU supports, but also what
> >> info gets passed down from Dom0 - if e.g. your kernel doesn't
> >> have Konrad's P-/C-state patches, then there's no way for Xen to
> >> use any of the advanced methods.
> >>
> >> Jan
> >>
> >>
>
>
>
>

[-- Attachment #1.2: Type: text/html, Size: 3226 bytes --]

[-- Attachment #2: debug1.patch --]
[-- Type: application/octet-stream, Size: 1860 bytes --]

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 9e1f989..33e47ff 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -122,6 +122,12 @@ static void acpi_sleep_prepare(u32 state)
 
 static void acpi_sleep_post(u32 state) {}
 
+static cpumask_t tmp_mask1;
+static cpumask_t tmp_mask2;
+static char tstr1[100];
+static char tstr2[100];
+extern cpumask_t cpu_initialized;
+
 /* Main interface to do xen specific suspend/resume */
 static int enter_state(u32 state)
 {
@@ -144,6 +150,7 @@ static int enter_state(u32 state)
 
     acpi_dmar_reinstate();
 
+    //cpumask_copy(&tmp_mask1, &cpu_initialized);
     if ( (error = disable_nonboot_cpus()) )
     {
         system_state = SYS_STATE_resume;
@@ -174,7 +181,9 @@ static int enter_state(u32 state)
     switch ( state )
     {
     case ACPI_STATE_S3:
+        cpumask_copy(&tmp_mask1, &cpu_initialized);
         do_suspend_lowlevel();
+        cpumask_copy(&tmp_mask2, &cpu_initialized);
         system_reset_counter++;
         error = tboot_s3_resume();
         break;
@@ -204,6 +213,10 @@ static int enter_state(u32 state)
     if ( (state == ACPI_STATE_S3) && error )
         tboot_s3_error(error);
 
+    cpumask_scnprintf(tstr1, sizeof(tstr1), &tmp_mask1);
+    cpumask_scnprintf(tstr2, sizeof(tstr2), &tmp_mask2);
+
+    printk("XXX: CACHE issue? before=%s after=%s\n", tstr1, tstr2);
  done:
     spin_debug_enable();
     local_irq_restore(flags);
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index d066ebb..b2aa30b 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -588,7 +588,7 @@ void __cpuinit print_cpu_info(unsigned int cpu)
 	printk(" stepping %02x\n", c->x86_mask);
 }
 
-static cpumask_t cpu_initialized;
+cpumask_t cpu_initialized;
 
 /* This is hacky. :)
  * We're emulating future behavior.

[-- Attachment #3: debug2.patch --]
[-- Type: application/octet-stream, Size: 1860 bytes --]

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 9e1f989..b0b88c2 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -122,6 +122,12 @@ static void acpi_sleep_prepare(u32 state)
 
 static void acpi_sleep_post(u32 state) {}
 
+static cpumask_t tmp_mask1;
+static cpumask_t tmp_mask2;
+static char tstr1[100];
+static char tstr2[100];
+extern cpumask_t cpu_initialized;
+
 /* Main interface to do xen specific suspend/resume */
 static int enter_state(u32 state)
 {
@@ -144,6 +150,7 @@ static int enter_state(u32 state)
 
     acpi_dmar_reinstate();
 
+    cpumask_copy(&tmp_mask1, &cpu_initialized);
     if ( (error = disable_nonboot_cpus()) )
     {
         system_state = SYS_STATE_resume;
@@ -174,7 +181,9 @@ static int enter_state(u32 state)
     switch ( state )
     {
     case ACPI_STATE_S3:
+        //cpumask_copy(&tmp_mask1, &cpu_initialized);
         do_suspend_lowlevel();
+        cpumask_copy(&tmp_mask2, &cpu_initialized);
         system_reset_counter++;
         error = tboot_s3_resume();
         break;
@@ -204,6 +213,10 @@ static int enter_state(u32 state)
     if ( (state == ACPI_STATE_S3) && error )
         tboot_s3_error(error);
 
+    cpumask_scnprintf(tstr1, sizeof(tstr1), &tmp_mask1);
+    cpumask_scnprintf(tstr2, sizeof(tstr2), &tmp_mask2);
+
+    printk("XXX: CACHE issue? before=%s after=%s\n", tstr1, tstr2);
  done:
     spin_debug_enable();
     local_irq_restore(flags);
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index d066ebb..b2aa30b 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -588,7 +588,7 @@ void __cpuinit print_cpu_info(unsigned int cpu)
 	printk(" stepping %02x\n", c->x86_mask);
 }
 
-static cpumask_t cpu_initialized;
+cpumask_t cpu_initialized;
 
 /* This is hacky. :)
  * We're emulating future behavior.

[-- Attachment #4: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 12:22                                                                                             ` Pasi Kärkkäinen
@ 2012-09-24 12:27                                                                                               ` Ben Guthro
  2012-09-24 12:37                                                                                                 ` Javier Marcet
  2012-09-24 12:37                                                                                                 ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 12:27 UTC (permalink / raw)
  To: Pasi Kärkkäinen
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Jan Beulich, Thomas Goetz


[-- Attachment #1.1: Type: text/plain, Size: 1414 bytes --]

On Mon, Sep 24, 2012 at 8:22 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Mon, Sep 24, 2012 at 07:54:05AM -0400, Ben Guthro wrote:
> >    I see - Konrad - do you have a changeset I can look for with the
> features
> >    Jan describes?
>
> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
>
>
I see - so the advanced methods are not being used then
This same kernel does work with Xen-4.0.x though. Is there an assumption in
the code that ties Xen-4.2 to a newer kernel?



> -- Pasi
>
> >    On Mon, Sep 24, 2012 at 7:45 AM, Jan Beulich <[1]JBeulich@suse.com>
> wrote:
> >
> >      >>> On 24.09.12 at 13:25, Ben Guthro <[2]ben@guthro.net> wrote:
> >      > It is an older system (Core2) - so it may be using the default
> >      mechanism,
> >      > but I'd have to go dig up some processor docs to be sure.
> >
> >      This is not just a matter of what the CPU supports, but also what
> >      info gets passed down from Dom0 - if e.g. your kernel doesn't
> >      have Konrad's P-/C-state patches, then there's no way for Xen to
> >      use any of the advanced methods.
> >      Jan
> >
> > References
> >
> >    Visible links
> >    1. mailto:JBeulich@suse.com
> >    2. mailto:ben@guthro.net
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>

[-- Attachment #1.2: Type: text/html, Size: 2337 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 12:24                                                                                               ` Ben Guthro
@ 2012-09-24 12:32                                                                                                 ` Jan Beulich
       [not found]                                                                                                   ` <CAOvdn6UMHmPWqedYE9GQQMDaM4oiHLDSn9ZzSgJjGf89g1DgTw@mail.gmail.com>
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 12:32 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel

>>> On 24.09.12 at 14:24, Ben Guthro <ben@guthro.net> wrote:
> On Mon, Sep 24, 2012 at 8:05 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> Alternatively you could also stick a wbinvd() in the two possibly
>> relevant places...
>>
> I tried doing this right before disable_nonboot_cpus() in power.c, to no
> effect.

That's too early - this must be done as the last thing before entering
the halt loops (which is why I referred to two places).

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 12:27                                                                                               ` Ben Guthro
@ 2012-09-24 12:37                                                                                                 ` Javier Marcet
  2012-09-24 14:04                                                                                                   ` Konrad Rzeszutek Wilk
  2012-09-24 12:37                                                                                                 ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Javier Marcet @ 2012-09-24 12:37 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Jan Beulich, Thomas Goetz

On Mon, Sep 24, 2012 at 2:27 PM, Ben Guthro <ben@guthro.net> wrote:

Hi Ben,

>> >    I see - Konrad - do you have a changeset I can look for with the
>> > features
>> >    Jan describes?
>>
>> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
>>
>
> I see - so the advanced methods are not being used then
> This same kernel does work with Xen-4.0.x though. Is there an assumption in
> the code that ties Xen-4.2 to a newer kernel?

I'm being bitten by this bug with a 3.5 kernel and Xen 4.2


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 12:27                                                                                               ` Ben Guthro
  2012-09-24 12:37                                                                                                 ` Javier Marcet
@ 2012-09-24 12:37                                                                                                 ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 12:37 UTC (permalink / raw)
  To: Ben Guthro, pasik
  Cc: Konrad Rzeszutek Wilk, Keir Fraser, john.baboval, Thomas Goetz,
	xen-devel

>>> On 24.09.12 at 14:27, Ben Guthro <ben@guthro.net> wrote:
> On Mon, Sep 24, 2012 at 8:22 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> 
>> On Mon, Sep 24, 2012 at 07:54:05AM -0400, Ben Guthro wrote:
>> >    I see - Konrad - do you have a changeset I can look for with the
>> features
>> >    Jan describes?
>>
>> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
>>
>>
> I see - so the advanced methods are not being used then
> This same kernel does work with Xen-4.0.x though. Is there an assumption in
> the code that ties Xen-4.2 to a newer kernel?

No, but if e.g. you merely look at the differences between both
cpu_uninit() versions, you'll already see that there's quite a bit
more writing of memory being done in 4.0.x than in 4.2.0 or
-unstable, which increases the chances of hiding an eventual
cache flushing problem.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 11:54                                                                                           ` Ben Guthro
  2012-09-24 12:05                                                                                             ` Jan Beulich
  2012-09-24 12:22                                                                                             ` Pasi Kärkkäinen
@ 2012-09-24 14:02                                                                                             ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-24 14:02 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Thomas Goetz, Keir Fraser, john.baboval, Jan Beulich, xen-devel

On Mon, Sep 24, 2012 at 07:54:05AM -0400, Ben Guthro wrote:
> I see - Konrad - do you have a changeset I can look for with the features
> Jan describes?

konrad@phenom:~/work/linux$ git log --oneline drivers/xen/xen-acpi-processor.c
17f9b89 xen/acpi: Fix potential memory leak.
323f90a xen-acpi-processor: Add missing #include <xen/xen.h>
b930fe5 xen/acpi: Workaround broken BIOSes exporting non-existing C-states.
27257fc xen/acpi: Remove the WARN's as they just create noise.
59a5680 xen/acpi-processor: C and P-state driver that uploads said data to hypervisor.


> 
> 
> On Mon, Sep 24, 2012 at 7:45 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> > >>> On 24.09.12 at 13:25, Ben Guthro <ben@guthro.net> wrote:
> > > It is an older system (Core2) - so it may be using the default mechanism,
> > > but I'd have to go dig up some processor docs to be sure.
> >
> > This is not just a matter of what the CPU supports, but also what
> > info gets passed down from Dom0 - if e.g. your kernel doesn't
> > have Konrad's P-/C-state patches, then there's no way for Xen to
> > use any of the advanced methods.
> >
> > Jan
> >
> >

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 12:37                                                                                                 ` Javier Marcet
@ 2012-09-24 14:04                                                                                                   ` Konrad Rzeszutek Wilk
  2012-09-24 15:08                                                                                                     ` Javier Marcet
  2012-09-24 21:36                                                                                                     ` Javier Marcet
  0 siblings, 2 replies; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-24 14:04 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, xen-devel, Ben Guthro, Jan Beulich,
	Thomas Goetz

On Mon, Sep 24, 2012 at 02:37:22PM +0200, Javier Marcet wrote:
> On Mon, Sep 24, 2012 at 2:27 PM, Ben Guthro <ben@guthro.net> wrote:
> 
> Hi Ben,
> 
> >> >    I see - Konrad - do you have a changeset I can look for with the
> >> > features
> >> >    Jan describes?
> >>
> >> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
> >>
> >
> > I see - so the advanced methods are not being used then

You mean with the v3.5 kernel? I would think they would be used..

> > This same kernel does work with Xen-4.0.x though. Is there an assumption in
> > the code that ties Xen-4.2 to a newer kernel?
> 
> I'm being bitten by this bug with a 3.5 kernel and Xen 4.2

Huh? How so?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
       [not found]                                                                                                       ` <CAOvdn6XL9ebp2oUV0XEXk_WdU3-=YAj+xfz6AMLDBpVThH3Xvw@mail.gmail.com>
@ 2012-09-24 14:10                                                                                                         ` Jan Beulich
  2012-09-24 14:16                                                                                                           ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 14:10 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, xen-devel

>>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
> On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> ...; the interesting ones are
>> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
>> - xen/arch/x86/domain.c:default_dead_idle()
> 
> 
> Thanks! This fixes the issue on this machine!

Hooray!

> Is this a reasonable long-term solution - or are there reasons not to
> call wbinvd() here?

That's a perfectly valid adjustment (see my earlier reply where
I originally suggested it and explained why it may be necessary).

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 14:10                                                                                                         ` Jan Beulich
@ 2012-09-24 14:16                                                                                                           ` Ben Guthro
  2012-09-24 14:28                                                                                                             ` Jan Beulich
  2012-09-24 14:32                                                                                                             ` Xen4.2 S3 regression? Keir Fraser
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 14:16 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 769 bytes --]

Would you prefer a separate [PATCH] email for this fix, or will you apply
it as-is?



On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >> ...; the interesting ones are
> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
> >> - xen/arch/x86/domain.c:default_dead_idle()
> >
> >
> > Thanks! This fixes the issue on this machine!
>
> Hooray!
>
> > Is this a reasonable long-term solution - or are there reasons not to
> > call wbinvd() here?
>
> That's a perfectly valid adjustment (see my earlier reply where
> I originally suggested it and explained why it may be necessary).
>
> Jan
>
>

[-- Attachment #1.2: Type: text/html, Size: 1360 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 14:16                                                                                                           ` Ben Guthro
@ 2012-09-24 14:28                                                                                                             ` Jan Beulich
  2012-09-24 19:02                                                                                                               ` Ben Guthro
  2012-09-24 14:32                                                                                                             ` Xen4.2 S3 regression? Keir Fraser
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-24 14:28 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, xen-devel

>>> On 24.09.12 at 16:16, Ben Guthro <ben@guthro.net> wrote:
> Would you prefer a separate [PATCH] email for this fix, or will you apply
> it as-is?

I'll put something together - the most important thing here obviously
is having a proper description. Plus I'd like to slightly extend this and
have acpi_dead_idle() actually use default_dead_idle(), just to have
things consolidated in one place. I assume I can put your S-o-b on
what you sent...

Jan

> On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
>> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> ...; the interesting ones are
>> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
>> >> - xen/arch/x86/domain.c:default_dead_idle()
>> >
>> >
>> > Thanks! This fixes the issue on this machine!
>>
>> Hooray!
>>
>> > Is this a reasonable long-term solution - or are there reasons not to
>> > call wbinvd() here?
>>
>> That's a perfectly valid adjustment (see my earlier reply where
>> I originally suggested it and explained why it may be necessary).
>>
>> Jan
>>
>>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 14:16                                                                                                           ` Ben Guthro
  2012-09-24 14:28                                                                                                             ` Jan Beulich
@ 2012-09-24 14:32                                                                                                             ` Keir Fraser
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-24 14:32 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1058 bytes --]

I would like to see it myself and Ack it. Although I think I can guess what
it does and I will be happy with it if I'm right. :)

 -- Keir

On 24/09/2012 15:16, "Ben Guthro" <ben@guthro.net> wrote:

> Would you prefer a separate [PATCH] email for this fix, or will you apply it
> as-is?
> 
> 
> 
> On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
>>> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> >> ...; the interesting ones are
>>>> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
>>>> >> - xen/arch/x86/domain.c:default_dead_idle()
>>> >
>>> >
>>> > Thanks! This fixes the issue on this machine!
>> 
>> Hooray!
>> 
>>> > Is this a reasonable long-term solution - or are there reasons not to
>>> > call wbinvd() here?
>> 
>> That's a perfectly valid adjustment (see my earlier reply where
>> I originally suggested it and explained why it may be necessary).
>> 
>> Jan
>> 
> 
> 


[-- Attachment #1.2: Type: text/html, Size: 1858 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 14:04                                                                                                   ` Konrad Rzeszutek Wilk
@ 2012-09-24 15:08                                                                                                     ` Javier Marcet
  2012-09-24 21:36                                                                                                     ` Javier Marcet
  1 sibling, 0 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-24 15:08 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Keir Fraser, john.baboval, xen-devel, Ben Guthro, Jan Beulich,
	Thomas Goetz

On Mon, Sep 24, 2012 at 4:04 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:

>> >> >    I see - Konrad - do you have a changeset I can look for with the
>> >> > features
>> >> >    Jan describes?
>> >>
>> >> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
>> >>
>> >
>> > I see - so the advanced methods are not being used then
>
> You mean with the v3.5 kernel? I would think they would be used..
>
>> > This same kernel does work with Xen-4.0.x though. Is there an assumption in
>> > the code that ties Xen-4.2 to a newer kernel?
>>
>> I'm being bitten by this bug with a 3.5 kernel and Xen 4.2
>
> Huh? How so?

Hmmm, yes, upon resuming from S3 I have this exact same bug.


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 14:28                                                                                                             ` Jan Beulich
@ 2012-09-24 19:02                                                                                                               ` Ben Guthro
  2012-09-24 20:30                                                                                                                 ` Keir Fraser
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 19:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 5164 bytes --]

Well...knock one bug down - and another crops up.

It appears that dom0_vcpu_pin is incompatible with S3.
I'll start digging into why, but if you have any thoughts from the stack
below, I'd welcome any pointers.

/btg


(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
extended MCE MSR 0
(XEN) CMCI: CPU0 has no CMCI support
(XEN) CPU0: Thermal monitoring enabled (TM2)
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) Booting processor 1/1 eip 8a000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 3072K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CMCI: CPU1 has no CMCI support
(XEN) CPU1: Thermal monitoring enabled (TM2)
(XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
(XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
2010-09-29
[   60.100054] ACPI: Low-level resume complete
[   60.100054] PM: Restoring platform NVS memory
[   60.100054] Enabling non-boot CPUs ...
[   60.100054] installing Xen timer for CPU 1
[   60.100054] cpu 1 spinlock event irq 279
(XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c480121562>] vcpu_migrate+0x172/0x360
(XEN) RFLAGS: 0000000000010096   CONTEXT: hypervisor
(XEN) rax: 00007d3b7fd17180   rbx: ffff82c4802e8ee0   rcx: ffff82c4802e8ee0
(XEN) rdx: ffff83013a3c5068   rsi: 0000000000000004   rdi: ffff8301300b7d68
(XEN) rbp: 0000000000000001   rsp: ffff8301300b7e28   r8:  0000000000000000
(XEN) r9:  000000000000003e   r10: 000000000000003e   r11: 0000000000000246
(XEN) r12: ffff83013a3c5068   r13: ffff83013a3c5068   r14: ffff82c4802d3140
(XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000131a05000   cr2: 0000000000000060
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff8301300b7e28:
(XEN)    ffff82c4802d3140 ffff83013a3c5068 0000000000000246 0000000000000004
(XEN)    ffff8300bd2fe000 ffff82c4802e8ee0 00000004012d3140 ffff82c4802e8ee0
(XEN)    ffff88003fc8e820 ffff8300bd2fe000 ffff8301355d8000 0000000000000000
(XEN)    0000000000000000 0000000000000000 ffff88003fc8e820 ffff82c480105a50
(XEN)    0000000000000000 ffff82c4801805ec 0000060f00000000 ffff82c480184f16
(XEN)    0000000000000032 78a20f6e65780b0f ffff88003976fdc8 ffff8300bd2fe000
(XEN)    ffff88003976fe50 ffff8300bd2fe000 ffff88003976fda0 0000000000000001
(XEN)    0000000000000000 ffff82c480214288 ffff88003fc8e820 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff88003976fda0 ffff88003fc8bdc0
(XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff 0000000000000000
(XEN)    0000000000000018 ffffffff8100130a 0000000000000000 0000000000000001
(XEN)    0000000000000007 0000010000000000 ffffffff8100130a 000000000000e033
(XEN)    0000000000000246 ffff88003976fd88 000000000000e02b d43d5f3fedaef5e7
(XEN)    d3b2ddaeed5038ff 270adb813ad76c9b ddfd6ff5f85e6775 b5881cbf00000001
(XEN)    ffff8300bd2fe000 0000003cba0dc180 0a109ac649c118a1
(XEN) Xen call trace:
(XEN)    [<ffff82c480121562>] vcpu_migrate+0x172/0x360
(XEN)    [<ffff82c480105a50>] do_vcpu_op+0x1e0/0x4a0
(XEN)    [<ffff82c4801805ec>] do_invalid_op+0x19c/0x3f0
(XEN)    [<ffff82c480184f16>] copy_from_user+0x26/0x90
(XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
(XEN)
(XEN) Pagetable walk from 0000000000000060:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000060
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


On Mon, Sep 24, 2012 at 10:28 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 24.09.12 at 16:16, Ben Guthro <ben@guthro.net> wrote:
> > Would you prefer a separate [PATCH] email for this fix, or will you apply
> > it as-is?
>
> I'll put something together - the most important thing here obviously
> is having a proper description. Plus I'd like to slightly extend this and
> have acpi_dead_idle() actually use default_dead_idle(), just to have
> things consolidated in one place. I assume I can put your S-o-b on
> what you sent...
>
> Jan
>
> > On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
> >> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com>
> wrote:
> >> >> ...; the interesting ones are
> >> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
> >> >> - xen/arch/x86/domain.c:default_dead_idle()
> >> >
> >> >
> >> > Thanks! This fixes the issue on this machine!
> >>
> >> Hooray!
> >>
> >> > Is this a reasonable long-term solution - or are there reasons not to
> >> > call wbinvd() here?
> >>
> >> That's a perfectly valid adjustment (see my earlier reply where
> >> I originally suggested it and explained why it may be necessary).
> >>
> >> Jan
> >>
> >>
>
>
>
>

[-- Attachment #1.2: Type: text/html, Size: 6908 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 19:02                                                                                                               ` Ben Guthro
@ 2012-09-24 20:30                                                                                                                 ` Keir Fraser
  2012-09-24 20:46                                                                                                                   ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Keir Fraser @ 2012-09-24 20:30 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 5967 bytes --]

Do a debug build so the backtrace can be trusted. It's a NULL pointer
dereference so shouldn't be too tricky to make some headway on this one.
Easier than the previous bug. :)

 -- Keir

On 24/09/2012 20:02, "Ben Guthro" <ben@guthro.net> wrote:

> Well...knock one bug down - and another crops up.
> 
> It appears that dom0_vcpu_pin is incompatible with S3.
> I'll start digging into why, but if you have any thoughts from the stack
> below, I'd welcome any pointers.
> 
> /btg
> 
> 
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
> extended MCE MSR 0
> (XEN) CMCI: CPU0 has no CMCI support
> (XEN) CPU0: Thermal monitoring enabled (TM2)
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) Booting processor 1/1 eip 8a000
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 3072K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) CMCI: CPU1 has no CMCI support
> (XEN) CPU1: Thermal monitoring enabled (TM2)
> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date = 2010-09-29 
> [   60.100054] ACPI: Low-level resume complete
> [   60.100054] PM: Restoring platform NVS memory
> [   60.100054] Enabling non-boot CPUs ...
> [   60.100054] installing Xen timer for CPU 1
> [   60.100054] cpu 1 spinlock event irq 279
> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c480121562>] vcpu_migrate+0x172/0x360
> (XEN) RFLAGS: 0000000000010096   CONTEXT: hypervisor
> (XEN) rax: 00007d3b7fd17180   rbx: ffff82c4802e8ee0   rcx: ffff82c4802e8ee0
> (XEN) rdx: ffff83013a3c5068   rsi: 0000000000000004   rdi: ffff8301300b7d68
> (XEN) rbp: 0000000000000001   rsp: ffff8301300b7e28   r8:  0000000000000000
> (XEN) r9:  000000000000003e   r10: 000000000000003e   r11: 0000000000000246
> (XEN) r12: ffff83013a3c5068   r13: ffff83013a3c5068   r14: ffff82c4802d3140
> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 0000000131a05000   cr2: 0000000000000060
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff8301300b7e28:
> (XEN)    ffff82c4802d3140 ffff83013a3c5068 0000000000000246 0000000000000004
> (XEN)    ffff8300bd2fe000 ffff82c4802e8ee0 00000004012d3140 ffff82c4802e8ee0
> (XEN)    ffff88003fc8e820 ffff8300bd2fe000 ffff8301355d8000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 ffff88003fc8e820 ffff82c480105a50
> (XEN)    0000000000000000 ffff82c4801805ec 0000060f00000000 ffff82c480184f16
> (XEN)    0000000000000032 78a20f6e65780b0f ffff88003976fdc8 ffff8300bd2fe000
> (XEN)    ffff88003976fe50 ffff8300bd2fe000 ffff88003976fda0 0000000000000001
> (XEN)    0000000000000000 ffff82c480214288 ffff88003fc8e820 0000000000000000
> (XEN)    0000000000000000 0000000000000001 ffff88003976fda0 ffff88003fc8bdc0
> (XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff 0000000000000000
> (XEN)    0000000000000018 ffffffff8100130a 0000000000000000 0000000000000001
> (XEN)    0000000000000007 0000010000000000 ffffffff8100130a 000000000000e033
> (XEN)    0000000000000246 ffff88003976fd88 000000000000e02b d43d5f3fedaef5e7
> (XEN)    d3b2ddaeed5038ff 270adb813ad76c9b ddfd6ff5f85e6775 b5881cbf00000001
> (XEN)    ffff8300bd2fe000 0000003cba0dc180 0a109ac649c118a1
> (XEN) Xen call trace:
> (XEN)    [<ffff82c480121562>] vcpu_migrate+0x172/0x360
> (XEN)    [<ffff82c480105a50>] do_vcpu_op+0x1e0/0x4a0
> (XEN)    [<ffff82c4801805ec>] do_invalid_op+0x19c/0x3f0
> (XEN)    [<ffff82c480184f16>] copy_from_user+0x26/0x90
> (XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
> (XEN)    
> (XEN) Pagetable walk from 0000000000000060:
> (XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 0000000000000060
> (XEN) ****************************************
> (XEN) 
> (XEN) Reboot in five seconds...
> 
> 
> On Mon, Sep 24, 2012 at 10:28 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 24.09.12 at 16:16, Ben Guthro <ben@guthro.net> wrote:
>> > Would you prefer a separate [PATCH] email for this fix, or will you apply
>> > it as-is?
>> 
>> I'll put something together - the most important thing here obviously
>> is having a proper description. Plus I'd like to slightly extend this and
>> have acpi_dead_idle() actually use default_dead_idle(), just to have
>> things consolidated in one place. I assume I can put your S-o-b on
>> what you sent...
>> 
>> Jan
>> 
>> > On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >
>> >> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
>> >> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >> >> ...; the interesting ones are
>> >> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
>> >> >> - xen/arch/x86/domain.c:default_dead_idle()
>> >> >
>> >> >
>> >> > Thanks! This fixes the issue on this machine!
>> >>
>> >> Hooray!
>> >>
>> >> > Is this a reasonable long-term solution - or are there reasons not to
>> >> > call wbinvd() here?
>> >>
>> >> That's a perfectly valid adjustment (see my earlier reply where
>> >> I originally suggested it and explained why it may be necessary).
>> >>
>> >> Jan
>> >>
>> >>
>> 
>> 
>> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


[-- Attachment #1.2: Type: text/html, Size: 7367 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 20:30                                                                                                                 ` Keir Fraser
@ 2012-09-24 20:46                                                                                                                   ` Ben Guthro
  2012-09-24 21:12                                                                                                                     ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 20:46 UTC (permalink / raw)
  To: Keir Fraser; +Cc: Jan Beulich, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 6229 bytes --]

I've managed to determine that _csched_cpu_pick is, for reasons not yet
clear, picking a cpu id outside of the range of cpus that are valid for
this system
(in this case cpu id 4, on a 2 core machine)



On Mon, Sep 24, 2012 at 4:30 PM, Keir Fraser <keir.xen@gmail.com> wrote:

>  Do a debug build so the backtrace can be trusted. It’s a NULL pointer
> dereference so shouldn’t be too tricky to make some headway on this one.
> Easier than the previous bug. :)
>
>  -- Keir
>
>
> On 24/09/2012 20:02, "Ben Guthro" <ben@guthro.net> wrote:
>
> Well...knock one bug down - and another crops up.
>
> It appears that dom0_vcpu_pin is incompatible with S3.
> I'll start digging into why, but if you have any thoughts from the stack
> below, I'd welcome any pointers.
>
> /btg
>
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
> extended MCE MSR 0
> (XEN) CMCI: CPU0 has no CMCI support
> (XEN) CPU0: Thermal monitoring enabled (TM2)
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) Booting processor 1/1 eip 8a000
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 3072K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) CMCI: CPU1 has no CMCI support
> (XEN) CPU1: Thermal monitoring enabled (TM2)
> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
> 2010-09-29
> [   60.100054] ACPI: Low-level resume complete
> [   60.100054] PM: Restoring platform NVS memory
> [   60.100054] Enabling non-boot CPUs ...
> [   60.100054] installing Xen timer for CPU 1
> [   60.100054] cpu 1 spinlock event irq 279
> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c480121562>] vcpu_migrate+0x172/0x360
> (XEN) RFLAGS: 0000000000010096   CONTEXT: hypervisor
> (XEN) rax: 00007d3b7fd17180   rbx: ffff82c4802e8ee0   rcx: ffff82c4802e8ee0
> (XEN) rdx: ffff83013a3c5068   rsi: 0000000000000004   rdi: ffff8301300b7d68
> (XEN) rbp: 0000000000000001   rsp: ffff8301300b7e28   r8:  0000000000000000
> (XEN) r9:  000000000000003e   r10: 000000000000003e   r11: 0000000000000246
> (XEN) r12: ffff83013a3c5068   r13: ffff83013a3c5068   r14: ffff82c4802d3140
> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 0000000131a05000   cr2: 0000000000000060
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff8301300b7e28:
> (XEN)    ffff82c4802d3140 ffff83013a3c5068 0000000000000246
> 0000000000000004
> (XEN)    ffff8300bd2fe000 ffff82c4802e8ee0 00000004012d3140
> ffff82c4802e8ee0
> (XEN)    ffff88003fc8e820 ffff8300bd2fe000 ffff8301355d8000
> 0000000000000000
> (XEN)    0000000000000000 0000000000000000 ffff88003fc8e820
> ffff82c480105a50
> (XEN)    0000000000000000 ffff82c4801805ec 0000060f00000000
> ffff82c480184f16
> (XEN)    0000000000000032 78a20f6e65780b0f ffff88003976fdc8
> ffff8300bd2fe000
> (XEN)    ffff88003976fe50 ffff8300bd2fe000 ffff88003976fda0
> 0000000000000001
> (XEN)    0000000000000000 ffff82c480214288 ffff88003fc8e820
> 0000000000000000
> (XEN)    0000000000000000 0000000000000001 ffff88003976fda0
> ffff88003fc8bdc0
> (XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff
> 0000000000000000
> (XEN)    0000000000000018 ffffffff8100130a 0000000000000000
> 0000000000000001
> (XEN)    0000000000000007 0000010000000000 ffffffff8100130a
> 000000000000e033
> (XEN)    0000000000000246 ffff88003976fd88 000000000000e02b
> d43d5f3fedaef5e7
> (XEN)    d3b2ddaeed5038ff 270adb813ad76c9b ddfd6ff5f85e6775
> b5881cbf00000001
> (XEN)    ffff8300bd2fe000 0000003cba0dc180 0a109ac649c118a1
> (XEN) Xen call trace:
> (XEN)    [<ffff82c480121562>] vcpu_migrate+0x172/0x360
> (XEN)    [<ffff82c480105a50>] do_vcpu_op+0x1e0/0x4a0
> (XEN)    [<ffff82c4801805ec>] do_invalid_op+0x19c/0x3f0
> (XEN)    [<ffff82c480184f16>] copy_from_user+0x26/0x90
> (XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
> (XEN)
> (XEN) Pagetable walk from 0000000000000060:
> (XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 0000000000000060
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
>
>
> On Mon, Sep 24, 2012 at 10:28 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
> >>> On 24.09.12 at 16:16, Ben Guthro <ben@guthro.net> wrote:
> > Would you prefer a separate [PATCH] email for this fix, or will you apply
> > it as-is?
>
> I'll put something together - the most important thing here obviously
> is having a proper description. Plus I'd like to slightly extend this and
> have acpi_dead_idle() actually use default_dead_idle(), just to have
> things consolidated in one place. I assume I can put your S-o-b on
> what you sent...
>
> Jan
>
> > On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
> >> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com>
> wrote:
> >> >> ...; the interesting ones are
> >> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
> >> >> - xen/arch/x86/domain.c:default_dead_idle()
> >> >
> >> >
> >> > Thanks! This fixes the issue on this machine!
> >>
> >> Hooray!
> >>
> >> > Is this a reasonable long-term solution - or are there reasons not to
> >> > call wbinvd() here?
> >>
> >> That's a perfectly valid adjustment (see my earlier reply where
> >> I originally suggested it and explained why it may be necessary).
> >>
> >> Jan
> >>
> >>
>
>
>
>
>
> ------------------------------
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

[-- Attachment #1.2: Type: text/html, Size: 8174 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 20:46                                                                                                                   ` Ben Guthro
@ 2012-09-24 21:12                                                                                                                     ` Ben Guthro
  2012-09-25  7:00                                                                                                                       ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-24 21:12 UTC (permalink / raw)
  To: Keir Fraser; +Cc: Jan Beulich, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 6856 bytes --]

Here's my "Big hammer" debugging patch.

If I force the cpu to be scheduled on CPU0 when the appropriate cpu is not
online, I can resume properly.

Clearly this is not the proper solution, and I'm sure the fix is subtle.
I'm not seeing it right now though. Perhaps tomorrow morning.
If you have any ideas, I'm happy to run tests then.

/btg

On Mon, Sep 24, 2012 at 4:46 PM, Ben Guthro <ben@guthro.net> wrote:

> I've managed to determine that _csched_cpu_pick is, for reasons not yet
> clear, picking a cpu id outside of the range of cpus that are valid for
> this system
> (in this case cpu id 4, on a 2 core machine)
>
>
>
> On Mon, Sep 24, 2012 at 4:30 PM, Keir Fraser <keir.xen@gmail.com> wrote:
>
>>  Do a debug build so the backtrace can be trusted. It’s a NULL pointer
>> dereference so shouldn’t be too tricky to make some headway on this one.
>> Easier than the previous bug. :)
>>
>>  -- Keir
>>
>>
>> On 24/09/2012 20:02, "Ben Guthro" <ben@guthro.net> wrote:
>>
>> Well...knock one bug down - and another crops up.
>>
>> It appears that dom0_vcpu_pin is incompatible with S3.
>> I'll start digging into why, but if you have any thoughts from the stack
>> below, I'd welcome any pointers.
>>
>> /btg
>>
>>
>> (XEN) Preparing system for ACPI S3 state.
>> (XEN) Disabling non-boot CPUs ...
>> (XEN) Entering ACPI S3 state.
>> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
>> extended MCE MSR 0
>> (XEN) CMCI: CPU0 has no CMCI support
>> (XEN) CPU0: Thermal monitoring enabled (TM2)
>> (XEN) Finishing wakeup from ACPI S3 state.
>> (XEN) Enabling non-boot CPUs  ...
>> (XEN) Booting processor 1/1 eip 8a000
>> (XEN) Initializing CPU#1
>> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
>> (XEN) CPU: L2 cache: 3072K
>> (XEN) CPU: Physical Processor ID: 0
>> (XEN) CPU: Processor Core ID: 1
>> (XEN) CMCI: CPU1 has no CMCI support
>> (XEN) CPU1: Thermal monitoring enabled (TM2)
>> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
>> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
>> 2010-09-29
>> [   60.100054] ACPI: Low-level resume complete
>> [   60.100054] PM: Restoring platform NVS memory
>> [   60.100054] Enabling non-boot CPUs ...
>> [   60.100054] installing Xen timer for CPU 1
>> [   60.100054] cpu 1 spinlock event irq 279
>> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=n  Tainted:    C ]----
>> (XEN) CPU:    1
>> (XEN) RIP:    e008:[<ffff82c480121562>] vcpu_migrate+0x172/0x360
>> (XEN) RFLAGS: 0000000000010096   CONTEXT: hypervisor
>> (XEN) rax: 00007d3b7fd17180   rbx: ffff82c4802e8ee0   rcx:
>> ffff82c4802e8ee0
>> (XEN) rdx: ffff83013a3c5068   rsi: 0000000000000004   rdi:
>> ffff8301300b7d68
>> (XEN) rbp: 0000000000000001   rsp: ffff8301300b7e28   r8:
>>  0000000000000000
>> (XEN) r9:  000000000000003e   r10: 000000000000003e   r11:
>> 0000000000000246
>> (XEN) r12: ffff83013a3c5068   r13: ffff83013a3c5068   r14:
>> ffff82c4802d3140
>> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4:
>> 00000000000026f0
>> (XEN) cr3: 0000000131a05000   cr2: 0000000000000060
>> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen stack trace from rsp=ffff8301300b7e28:
>> (XEN)    ffff82c4802d3140 ffff83013a3c5068 0000000000000246
>> 0000000000000004
>> (XEN)    ffff8300bd2fe000 ffff82c4802e8ee0 00000004012d3140
>> ffff82c4802e8ee0
>> (XEN)    ffff88003fc8e820 ffff8300bd2fe000 ffff8301355d8000
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 ffff88003fc8e820
>> ffff82c480105a50
>> (XEN)    0000000000000000 ffff82c4801805ec 0000060f00000000
>> ffff82c480184f16
>> (XEN)    0000000000000032 78a20f6e65780b0f ffff88003976fdc8
>> ffff8300bd2fe000
>> (XEN)    ffff88003976fe50 ffff8300bd2fe000 ffff88003976fda0
>> 0000000000000001
>> (XEN)    0000000000000000 ffff82c480214288 ffff88003fc8e820
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000001 ffff88003976fda0
>> ffff88003fc8bdc0
>> (XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff
>> 0000000000000000
>> (XEN)    0000000000000018 ffffffff8100130a 0000000000000000
>> 0000000000000001
>> (XEN)    0000000000000007 0000010000000000 ffffffff8100130a
>> 000000000000e033
>> (XEN)    0000000000000246 ffff88003976fd88 000000000000e02b
>> d43d5f3fedaef5e7
>> (XEN)    d3b2ddaeed5038ff 270adb813ad76c9b ddfd6ff5f85e6775
>> b5881cbf00000001
>> (XEN)    ffff8300bd2fe000 0000003cba0dc180 0a109ac649c118a1
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82c480121562>] vcpu_migrate+0x172/0x360
>> (XEN)    [<ffff82c480105a50>] do_vcpu_op+0x1e0/0x4a0
>> (XEN)    [<ffff82c4801805ec>] do_invalid_op+0x19c/0x3f0
>> (XEN)    [<ffff82c480184f16>] copy_from_user+0x26/0x90
>> (XEN)    [<ffff82c480214288>] syscall_enter+0x88/0x8d
>> (XEN)
>> (XEN) Pagetable walk from 0000000000000060:
>> (XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 1:
>> (XEN) FATAL PAGE FAULT
>> (XEN) [error_code=0000]
>> (XEN) Faulting linear address: 0000000000000060
>> (XEN) ****************************************
>> (XEN)
>> (XEN) Reboot in five seconds...
>>
>>
>> On Mon, Sep 24, 2012 at 10:28 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> >>> On 24.09.12 at 16:16, Ben Guthro <ben@guthro.net> wrote:
>> > Would you prefer a separate [PATCH] email for this fix, or will you
>> apply
>> > it as-is?
>>
>> I'll put something together - the most important thing here obviously
>> is having a proper description. Plus I'd like to slightly extend this and
>> have acpi_dead_idle() actually use default_dead_idle(), just to have
>> things consolidated in one place. I assume I can put your S-o-b on
>> what you sent...
>>
>> Jan
>>
>> > On Mon, Sep 24, 2012 at 10:10 AM, Jan Beulich <JBeulich@suse.com>
>> wrote:
>> >
>> >> >>> On 24.09.12 at 15:56, Ben Guthro <ben@guthro.net> wrote:
>> >> > On Mon, Sep 24, 2012 at 9:34 AM, Jan Beulich <JBeulich@suse.com>
>> wrote:
>> >> >> ...; the interesting ones are
>> >> >> - at the end of xen/arch/x86/acpi/cpu_idle.c:acpi_dead_idle()
>> >> >> - xen/arch/x86/domain.c:default_dead_idle()
>> >> >
>> >> >
>> >> > Thanks! This fixes the issue on this machine!
>> >>
>> >> Hooray!
>> >>
>> >> > Is this a reasonable long-term solution - or are there reasons not to
>> >> > call wbinvd() here?
>> >>
>> >> That's a perfectly valid adjustment (see my earlier reply where
>> >> I originally suggested it and explained why it may be necessary).
>> >>
>> >> Jan
>> >>
>> >>
>>
>>
>>
>>
>>
>> ------------------------------
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>

[-- Attachment #1.2: Type: text/html, Size: 8946 bytes --]

[-- Attachment #2: debug1.patch --]
[-- Type: application/octet-stream, Size: 2054 bytes --]

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index a9802a1..e07b123 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -23,6 +23,8 @@
 #include <xen/errno.h>
 #include <xen/keyhandler.h>
 
+static void csched_dump(const struct scheduler *ops);
+
 /*
  * CSCHED_STATS
  *
@@ -456,6 +458,9 @@ __csched_vcpu_is_migrateable(struct vcpu *vc, int dest_cpu)
            cpumask_test_cpu(dest_cpu, vc->cpu_affinity);
 }
 
+static char tstr1[100];
+static char tstr2[100];
+
 static int
 _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
 {
@@ -473,7 +478,8 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     cpumask_and(&cpus, online, vc->cpu_affinity);
     cpu = cpumask_test_cpu(vc->processor, &cpus)
             ? vc->processor
-            : cpumask_cycle(vc->processor, &cpus);
+            : 0;
+//            : cpumask_cycle(vc->processor, &cpus);
     ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
 
     /*
@@ -542,6 +548,14 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     if ( commit && spc )
        spc->idle_bias = cpu;
 
+
+    if (cpu > 1) {
+	    csched_dump(ops);
+	    cpumask_scnprintf(tstr1, sizeof(tstr1), online);
+	    cpumask_scnprintf(tstr2, sizeof(tstr2), vc->cpu_affinity);
+
+	    printk("MASKS: online=%s affinity=%s\n", tstr1, tstr2);
+    }
     return cpu;
 }
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 13178e0..1dab6d3 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -445,6 +445,7 @@ static void vcpu_migrate(struct vcpu *v)
 
             /* Select a new CPU. */
             new_cpu = SCHED_OP(VCPU2OP(v), pick_cpu, v);
+            printk("%s:%d (%s) new_cpu=%d online=%d\n", __FILE__, __LINE__, __func__, new_cpu, num_online_cpus());
             if ( (new_lock == per_cpu(schedule_data, new_cpu).schedule_lock) &&
                  cpumask_test_cpu(new_cpu, v->domain->cpupool->cpu_valid) )
                 break;

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 14:04                                                                                                   ` Konrad Rzeszutek Wilk
  2012-09-24 15:08                                                                                                     ` Javier Marcet
@ 2012-09-24 21:36                                                                                                     ` Javier Marcet
  2012-09-25 14:06                                                                                                       ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 134+ messages in thread
From: Javier Marcet @ 2012-09-24 21:36 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Keir Fraser, john.baboval, xen-devel, Ben Guthro, Jan Beulich,
	Thomas Goetz

On Mon, Sep 24, 2012 at 4:04 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:

>> >> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
>> >>
>> >
>> > I see - so the advanced methods are not being used then
>
> You mean with the v3.5 kernel? I would think they would be used..
>
>> > This same kernel does work with Xen-4.0.x though. Is there an assumption in
>> > the code that ties Xen-4.2 to a newer kernel?
>>
>> I'm being bitten by this bug with a 3.5 kernel and Xen 4.2
>
> Huh? How so?

I've tried adding a wbinvd() call before the halts in
xen/arch/x86/domain.c:default_idle, and I could do a full suspend and
resume cycle once, but a reboot later I couldn't resume anymore. Here
is the trace I get:

[  142.322778] ACPI: Preparing to enter system sleep state S3
[  142.723286] PM: Saving platform NVS memory
[  142.736453] Disabling non-boot CPUs ...
[  142.851387] ------------[ cut here ]------------
[  142.851397] WARNING: at
/home/storage/src/ubuntu-precise/kernel/rcutree.c:1550
rcu_do_batch.isra.41+0x5f/0x213()
[  142.851401] Hardware name: To Be Filled By O.E.M.
[  142.851404] Modules linked in: blktap(O) ip6table_filter ip6_tables
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp
iptable_filter ip_tables x_tables [last unloaded: cx25840]
[  142.851466] Pid: 32, comm: migration/7 Tainted: G           O
3.5.0-14-i7 #18~precise1
[  142.851469] Call Trace:
[  142.851473]  <IRQ>  [<ffffffff8106a9bd>] warn_slowpath_common+0x7e/0x96
[  142.851488]  [<ffffffff8106a9ea>] warn_slowpath_null+0x15/0x17
[  142.851496]  [<ffffffff810c6e87>] rcu_do_batch.isra.41+0x5f/0x213
[  142.851504]  [<ffffffff810c687f>] ?
check_for_new_grace_period.isra.35+0x38/0x44
[  142.851512]  [<ffffffff810c7101>] __rcu_process_callbacks+0xc6/0xee
[  142.851520]  [<ffffffff810c7149>] rcu_process_callbacks+0x20/0x3e
[  142.851527]  [<ffffffff81070c27>] __do_softirq+0x87/0x113
[  142.851536]  [<ffffffff812e7b22>] ? __xen_evtchn_do_upcall+0x1a0/0x1dd
[  142.851546]  [<ffffffff8162f05c>] call_softirq+0x1c/0x30
[  142.851557]  [<ffffffff810343a3>] do_softirq+0x41/0x7e
[  142.851566]  [<ffffffff81070e66>] irq_exit+0x3f/0x9a
[  142.851573]  [<ffffffff812e95c9>] xen_evtchn_do_upcall+0x2c/0x39
[  142.851580]  [<ffffffff8162f0ae>] xen_do_hypervisor_callback+0x1e/0x30
[  142.851588]  <EOI>  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
[  142.851601]  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
[  142.851609]  [<ffffffff8102eee9>] ? xen_force_evtchn_callback+0xd/0xf
[  142.851618]  [<ffffffff8102f552>] ? check_events+0x12/0x20
[  142.851627]  [<ffffffff8102f53f>] ? xen_restore_fl_direct_reloc+0x4/0x4
[  142.851635]  [<ffffffff810b6ff4>] ? arch_local_irq_restore+0xb/0xd
[  142.851644]  [<ffffffff810b73f4>] ? stop_machine_cpu_stop+0xc1/0xd3
[  142.851652]  [<ffffffff810b7333>] ? queue_stop_cpus_work+0xb5/0xb5
[  142.851660]  [<ffffffff810b7152>] ? cpu_stopper_thread+0xf7/0x187
[  142.851667]  [<ffffffff8108a927>] ? finish_task_switch+0x82/0xc1
[  142.851676]  [<ffffffff8162c50b>] ? __schedule+0x428/0x454
[  142.851685]  [<ffffffff8162cff0>] ? _raw_spin_unlock_irqrestore+0x15/0x18
[  142.851693]  [<ffffffff810b705b>] ? cpu_stop_signal_done+0x30/0x30
[  142.851701]  [<ffffffff81082627>] ? kthread+0x86/0x8e
[  142.851710]  [<ffffffff8162ef64>] ? kernel_thread_helper+0x4/0x10
[  142.851718]  [<ffffffff8162d338>] ? retint_restore_args+0x5/0x6
[  142.851726]  [<ffffffff8162ef60>] ? gs_change+0x13/0x13
[  142.851736] ---[ end trace 3722a99bcc5ae37a ]---
[  144.257229] ACPI: Low-level resume complete
[  144.257322] PM: Restoring platform NVS memory


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 21:12                                                                                                                     ` Ben Guthro
@ 2012-09-25  7:00                                                                                                                       ` Jan Beulich
  2012-09-25 11:56                                                                                                                         ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-25  7:00 UTC (permalink / raw)
  To: Keir Fraser, Ben Guthro; +Cc: xen-devel

>>> On 24.09.12 at 23:12, Ben Guthro <ben@guthro.net> wrote:
> Here's my "Big hammer" debugging patch.
> 
> If I force the cpu to be scheduled on CPU0 when the appropriate cpu is not
> online, I can resume properly.
> 
> Clearly this is not the proper solution, and I'm sure the fix is subtle.
> I'm not seeing it right now though. Perhaps tomorrow morning.
> If you have any ideas, I'm happy to run tests then.

I can't see how the printk() you add in the patch would ever get
reached with the other adjustment you do there. A debug build,
as Keir suggested, would not only get the stack trace right, but
would also make the ASSERT() right after your first modification
to _csched_cpu_pick() actually do something (and likely trigger).

Anyway, this might be connected to cpu_disable_scheduler() not
having a counterpart to restore the affinity it broke for pinned
domains (for non-pinned ones I believe this behavior is intentional,
albeit not ideal).

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25  7:00                                                                                                                       ` Jan Beulich
@ 2012-09-25 11:56                                                                                                                         ` Ben Guthro
  2012-09-25 14:22                                                                                                                           ` Ben Guthro
  2012-09-26 10:43                                                                                                                           ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-25 11:56 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 5359 bytes --]

On Tue, Sep 25, 2012 at 3:00 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 24.09.12 at 23:12, Ben Guthro <ben@guthro.net> wrote:
> > Here's my "Big hammer" debugging patch.
> >
> > If I force the cpu to be scheduled on CPU0 when the appropriate cpu is
> not
> > online, I can resume properly.
> >
> > Clearly this is not the proper solution, and I'm sure the fix is subtle.
> > I'm not seeing it right now though. Perhaps tomorrow morning.
> > If you have any ideas, I'm happy to run tests then.
>
> I can't see how the printk() you add in the patch would ever get
> reached with the other adjustment you do there.


Apologies. I failed to separate prior debugging in this patch from the "big
hammer" fix.


> A debug build,
> as Keir suggested, would not only get the stack trace right, but
> would also make the ASSERT() right after your first modification
> to _csched_cpu_pick() actually do something (and likely trigger).
>

Indeed. I was using non-debug builds for two reasons that, in hindsight,
may not be the best of reasons:
1. It was the default.
2. Mukesh's kdb debugger requires debug to be off; I had been using it
previously and had not disabled it.

The stack from a debug build can be found below.
It did indeed trigger the ASSERT, as you predicted.


(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) Booting processor 1/1 eip 8a000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 3072K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CMCI: CPU1 has no CMCI support
(XEN) CPU1: Thermal monitoring enabled (TM2)
(XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
(XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
2010-09-29
[   82.310025] ACPI: Low-level resume complete
[   82.310025] PM: Restoring platform NVS memory
[   82.310025] Enabling non-boot CPUs ...
[   82.310025] installing Xen timer for CPU 1
[   82.310025] cpu 1 spinlock event irq 279
(XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)'
failed at sched_credit.c:477
(XEN) ----[ Xen-4.2.1-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000001   rbx: 0000000000000004   rcx: 0000000000000004
(XEN) rdx: 000000000000000f   rsi: 0000000000000004   rdi: 0000000000000000
(XEN) rbp: ffff8301355d7dd8   rsp: ffff8301355d7d08   r8:  0000000000000000
(XEN) r9:  000000000000003e   r10: ffff82c480231700   r11: 0000000000000246
(XEN) r12: ffff82c480261b20   r13: 0000000000000001   r14: ffff82c480301a60
(XEN) r15: ffff83013a542068   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000131a05000   cr2: 0000000000000000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff8301355d7d08:
(XEN)    0100000131a05000 ffff8301355d7d40 0000000000000082 0000000000000002
(XEN)    ffff8300bd503000 0000000000000001 0000000000000297 ffff8301355d7d58
(XEN)    ffff82c480125499 ffff830138216000 ffff8301355d7d98 5400000000000002
(XEN)    0000000000000286 ffff8301355d7d88 ffff82c480125499 ffff830138216000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffff830134ca6a50 ffff83013a542068 ffff83013a542068 0000000000000001
(XEN)    ffff82c480301a60 ffff83013a542068 ffff8301355d7de8 ffff82c48011a785
(XEN)    ffff8301355d7e58 ffff82c480123519 ffff82c480301a60 ffff82c480301a60
(XEN)    ffff82c480301a60 ffff8300bd503000 0000000000503060 0000000000000246
(XEN)    ffff82c480127c31 ffff8300bd503000 ffff82c480301a60 ffff82c4802ebd40
(XEN)    ffff83013a542068 ffff88003fc8e820 ffff8301355d7e88 ffff82c4801237d3
(XEN)    fffffffffffffffe ffff8301355ca000 ffff8300bd503000 0000000000000000
(XEN)    ffff8301355d7ef8 ffff82c480106335 ffff8301355d7f18 ffffffff810030e1
(XEN)    ffff8300bd503000 0000000000000000 ffff8301355d7f08 ffff82c480185390
(XEN)    ffffffff81aafd32 ffff8300bd503000 0000000000000001 0000000000000000
(XEN)    0000000000000000 ffff88003fc8e820 00007cfecaa280c7 ffff82c480227348
(XEN)    ffffffff8100130a 0000000000000018 ffff88003fc8e820 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff88003976fda0 ffff88003fc8bdc0
(XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff 0000000000000000
(XEN)    0000000000000018 ffffffff8100130a 0000000000000000 0000000000000001
(XEN) Xen call trace:
(XEN)    [<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
(XEN)    [<ffff82c48011a785>] csched_cpu_pick+0xe/0x10
(XEN)    [<ffff82c480123519>] vcpu_migrate+0x19f/0x346
(XEN)    [<ffff82c4801237d3>] vcpu_force_reschedule+0xa4/0xb6
(XEN)    [<ffff82c480106335>] do_vcpu_op+0x2c9/0x452
(XEN)    [<ffff82c480227348>] syscall_enter+0xc8/0x122
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)'
failed at sched_credit.c:477
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


> Anyway, this might be connected to cpu_disable_scheduler() not
> having a counterpart to restore the affinity it broke for pinned
> domains (for non-pinned ones I believe this behavior is intentional,
> albeit not ideal).
>
> Jan
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-24 21:36                                                                                                     ` Javier Marcet
@ 2012-09-25 14:06                                                                                                       ` Konrad Rzeszutek Wilk
  2012-09-25 14:47                                                                                                         ` Javier Marcet
  0 siblings, 1 reply; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-25 14:06 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Ben Guthro, Jan Beulich, Thomas Goetz

On Mon, Sep 24, 2012 at 11:36:23PM +0200, Javier Marcet wrote:
> On Mon, Sep 24, 2012 at 4:04 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> 
> >> >> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
> >> >>
> >> >
> >> > I see - so the advanced methods are not being used then
> >
> > You mean with the v3.5 kernel? I would think they would be used..
> >
> >> > This same kernel does work with Xen-4.0.x though. Is there an assumption in
> >> > the code that ties Xen-4.2 to a newer kernel?
> >>
> >> I'm being bitten by this bug with a 3.5 kernel and Xen 4.2
> >
> > Huh? How so?
> 
> I've tried adding a wbinvd() call before the halts in
> xen/arch/x86/domain.c:default_idle and I could do full suspend and
> resume cycle once, but a reboot later I couldn't resume anymore. Here

Did you use the patch that Jan posted?
> is the trace I get:

That is a different issue: that one is in the kernel, while the
outstanding suspend/resume issue is in the hypervisor.
> 
> [  142.322778] ACPI: Preparing to enter system sleep state S3
> [  142.723286] PM: Saving platform NVS memory
> [  142.736453] Disabling non-boot CPUs ...
> [  142.851387] ------------[ cut here ]------------
> [  142.851397] WARNING: at
> /home/storage/src/ubuntu-precise/kernel/rcutree.c:1550
> rcu_do_batch.isra.41+0x5f/0x213()


which ought to be fixed, but let's concentrate on one thing at a time.
If you use the patch that Jan posted, does it work? And I presume
you also have the two out-of-tree patches to make resume work in the
dom0?

> [  142.851401] Hardware name: To Be Filled By O.E.M.
> [  142.851404] Modules linked in: blktap(O) ip6table_filter ip6_tables
> ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
> xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp
> iptable_filter ip_tables x_tables [last unloaded: cx25840]
> [  142.851466] Pid: 32, comm: migration/7 Tainted: G           O
> 3.5.0-14-i7 #18~precise1
> [  142.851469] Call Trace:
> [  142.851473]  <IRQ>  [<ffffffff8106a9bd>] warn_slowpath_common+0x7e/0x96
> [  142.851488]  [<ffffffff8106a9ea>] warn_slowpath_null+0x15/0x17
> [  142.851496]  [<ffffffff810c6e87>] rcu_do_batch.isra.41+0x5f/0x213
> [  142.851504]  [<ffffffff810c687f>] ?
> check_for_new_grace_period.isra.35+0x38/0x44
> [  142.851512]  [<ffffffff810c7101>] __rcu_process_callbacks+0xc6/0xee
> [  142.851520]  [<ffffffff810c7149>] rcu_process_callbacks+0x20/0x3e
> [  142.851527]  [<ffffffff81070c27>] __do_softirq+0x87/0x113
> [  142.851536]  [<ffffffff812e7b22>] ? __xen_evtchn_do_upcall+0x1a0/0x1dd
> [  142.851546]  [<ffffffff8162f05c>] call_softirq+0x1c/0x30
> [  142.851557]  [<ffffffff810343a3>] do_softirq+0x41/0x7e
> [  142.851566]  [<ffffffff81070e66>] irq_exit+0x3f/0x9a
> [  142.851573]  [<ffffffff812e95c9>] xen_evtchn_do_upcall+0x2c/0x39
> [  142.851580]  [<ffffffff8162f0ae>] xen_do_hypervisor_callback+0x1e/0x30
> [  142.851588]  <EOI>  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
> [  142.851601]  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
> [  142.851609]  [<ffffffff8102eee9>] ? xen_force_evtchn_callback+0xd/0xf
> [  142.851618]  [<ffffffff8102f552>] ? check_events+0x12/0x20
> [  142.851627]  [<ffffffff8102f53f>] ? xen_restore_fl_direct_reloc+0x4/0x4
> [  142.851635]  [<ffffffff810b6ff4>] ? arch_local_irq_restore+0xb/0xd
> [  142.851644]  [<ffffffff810b73f4>] ? stop_machine_cpu_stop+0xc1/0xd3
> [  142.851652]  [<ffffffff810b7333>] ? queue_stop_cpus_work+0xb5/0xb5
> [  142.851660]  [<ffffffff810b7152>] ? cpu_stopper_thread+0xf7/0x187
> [  142.851667]  [<ffffffff8108a927>] ? finish_task_switch+0x82/0xc1
> [  142.851676]  [<ffffffff8162c50b>] ? __schedule+0x428/0x454
> [  142.851685]  [<ffffffff8162cff0>] ? _raw_spin_unlock_irqrestore+0x15/0x18
> [  142.851693]  [<ffffffff810b705b>] ? cpu_stop_signal_done+0x30/0x30
> [  142.851701]  [<ffffffff81082627>] ? kthread+0x86/0x8e
> [  142.851710]  [<ffffffff8162ef64>] ? kernel_thread_helper+0x4/0x10
> [  142.851718]  [<ffffffff8162d338>] ? retint_restore_args+0x5/0x6
> [  142.851726]  [<ffffffff8162ef60>] ? gs_change+0x13/0x13
> [  142.851736] ---[ end trace 3722a99bcc5ae37a ]---
> [  144.257229] ACPI: Low-level resume complete
> [  144.257322] PM: Restoring platform NVS memory
> 
> 
> -- 
> Javier Marcet <jmarcet@gmail.com>
> 


* Re: Xen4.2 S3 regression?
  2012-09-25 11:56                                                                                                                         ` Ben Guthro
@ 2012-09-25 14:22                                                                                                                           ` Ben Guthro
  2012-09-25 14:53                                                                                                                             ` Keir Fraser
  2012-09-25 15:10                                                                                                                             ` Jan Beulich
  2012-09-26 10:43                                                                                                                           ` Jan Beulich
  1 sibling, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-25 14:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel



I went back to an old patch thread, since it was in this same function
that you made reference to:
http://markmail.org/message/qpnmiqzt5bngeejk

I noticed that I was not seeing the "Breaking vcpu affinity" printk, so I
tried to get that to appear.

The change proposed in that thread seems to work around this pinning
problem.
However, I'm not sure that it is the "right thing" to be doing.

Do you have any thoughts on this?


On Tue, Sep 25, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:

>
> On Tue, Sep 25, 2012 at 3:00 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 24.09.12 at 23:12, Ben Guthro <ben@guthro.net> wrote:
>> > Here's my "Big hammer" debugging patch.
>> >
>> > If I force the cpu to be scheduled on CPU0 when the appropriate cpu is
>> not
>> > online, I can resume properly.
>> >
>> > Clearly this is not the proper solution, and I'm sure the fix is subtle.
>> > I'm not seeing it right now though. Perhaps tomorrow morning.
>> > If you have any ideas, I'm happy to run tests then.
>>
>> I can't see how the printk() you add in the patch would ever get
>> reached with the other adjustment you do there.
>
>
> Apologies. I failed to separate prior debugging in this patch from the
> "big hammer" fix
>
>
>> A debug build,
>> as Keir suggested, would not only get the stack trace right, but
>> would also result in the ASSERT() right after your first modification
>> to _csched_cpu_pick() to actually do something (and likely trigger).
>>
>
> Indeed. I was using non-debug builds for 2 reasons that, in hindsight may
> not be the best of reasons.
> 1. It was the default
> 2. Mukesh's kdb debugger requires debug to be off, which I was making use
> of previously, and had not disabled.
>
> The stack from a debug build can be found below.
> It did, indeed trigger the ASSERT, as you predicted.
>
>
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) Booting processor 1/1 eip 8a000
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 3072K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) CMCI: CPU1 has no CMCI support
> (XEN) CPU1: Thermal monitoring enabled (TM2)
> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
> 2010-09-29
> [   82.310025] ACPI: Low-level resume complete
> [   82.310025] PM: Restoring platform NVS memory
> [   82.310025] Enabling non-boot CPUs ...
> [   82.310025] installing Xen timer for CPU 1
> [   82.310025] cpu 1 spinlock event irq 279
> (XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)'
> failed at sched_credit.c:477
> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=y  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
> (XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
> (XEN) rax: 0000000000000001   rbx: 0000000000000004   rcx: 0000000000000004
> (XEN) rdx: 000000000000000f   rsi: 0000000000000004   rdi: 0000000000000000
> (XEN) rbp: ffff8301355d7dd8   rsp: ffff8301355d7d08   r8:  0000000000000000
> (XEN) r9:  000000000000003e   r10: ffff82c480231700   r11: 0000000000000246
> (XEN) r12: ffff82c480261b20   r13: 0000000000000001   r14: ffff82c480301a60
> (XEN) r15: ffff83013a542068   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 0000000131a05000   cr2: 0000000000000000
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff8301355d7d08:
> (XEN)    0100000131a05000 ffff8301355d7d40 0000000000000082
> 0000000000000002
> (XEN)    ffff8300bd503000 0000000000000001 0000000000000297
> ffff8301355d7d58
> (XEN)    ffff82c480125499 ffff830138216000 ffff8301355d7d98
> 5400000000000002
> (XEN)    0000000000000286 ffff8301355d7d88 ffff82c480125499
> ffff830138216000
> (XEN)    0000000000000000 0000000000000000 0000000000000000
> 0000000000000000
> (XEN)    ffff830134ca6a50 ffff83013a542068 ffff83013a542068
> 0000000000000001
> (XEN)    ffff82c480301a60 ffff83013a542068 ffff8301355d7de8
> ffff82c48011a785
> (XEN)    ffff8301355d7e58 ffff82c480123519 ffff82c480301a60
> ffff82c480301a60
> (XEN)    ffff82c480301a60 ffff8300bd503000 0000000000503060
> 0000000000000246
> (XEN)    ffff82c480127c31 ffff8300bd503000 ffff82c480301a60
> ffff82c4802ebd40
> (XEN)    ffff83013a542068 ffff88003fc8e820 ffff8301355d7e88
> ffff82c4801237d3
> (XEN)    fffffffffffffffe ffff8301355ca000 ffff8300bd503000
> 0000000000000000
> (XEN)    ffff8301355d7ef8 ffff82c480106335 ffff8301355d7f18
> ffffffff810030e1
> (XEN)    ffff8300bd503000 0000000000000000 ffff8301355d7f08
> ffff82c480185390
> (XEN)    ffffffff81aafd32 ffff8300bd503000 0000000000000001
> 0000000000000000
> (XEN)    0000000000000000 ffff88003fc8e820 00007cfecaa280c7
> ffff82c480227348
> (XEN)    ffffffff8100130a 0000000000000018 ffff88003fc8e820
> 0000000000000000
> (XEN)    0000000000000000 0000000000000001 ffff88003976fda0
> ffff88003fc8bdc0
> (XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff
> 0000000000000000
> (XEN)    0000000000000018 ffffffff8100130a 0000000000000000
> 0000000000000001
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
> (XEN)    [<ffff82c48011a785>] csched_cpu_pick+0xe/0x10
> (XEN)    [<ffff82c480123519>] vcpu_migrate+0x19f/0x346
> (XEN)    [<ffff82c4801237d3>] vcpu_force_reschedule+0xa4/0xb6
> (XEN)    [<ffff82c480106335>] do_vcpu_op+0x2c9/0x452
> (XEN)    [<ffff82c480227348>] syscall_enter+0xc8/0x122
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)'
> failed at sched_credit.c:477
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
>
>
>> Anyway, this might be connected to cpu_disable_scheduler() not
>> having a counterpart to restore the affinity it broke for pinned
>> domains (for non-pinned ones I believe this behavior is intentional,
>> albeit not ideal).
>>
>> Jan
>>
>>
>


* Re: Xen4.2 S3 regression?
  2012-09-25 14:06                                                                                                       ` Konrad Rzeszutek Wilk
@ 2012-09-25 14:47                                                                                                         ` Javier Marcet
  2012-09-25 15:21                                                                                                           ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Javier Marcet @ 2012-09-25 14:47 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Ben Guthro, Jan Beulich, Thomas Goetz

On Tue, Sep 25, 2012 at 4:06 PM, Konrad Rzeszutek Wilk
<konrad@kernel.org> wrote:

>> >> >> xen-acpi-processor driver was merged to upstream Linux kernel v3.4.
>> >> >>
>> >> >
>> >> > I see - so the advanced methods are not being used then
>> >
>> > You mean with the v3.5 kernel? I would think they would be used..
>> >
>> >> > This same kernel does work with Xen-4.0.x though. Is there an assumption in
>> >> > the code that ties Xen-4.2 to a newer kernel?
>> >>
>> >> I'm being bitten by this bug with a 3.5 kernel and Xen 4.2
>> >
>> > Huh? How so?
>>
>> I've tried adding a wbinvd() call before the halts in
>> xen/arch/x86/domain.c:default_idle and I could do full suspend and
>> resume cycle once, but a reboot later I couldn't resume anymore. Here
>
> Did you use the patch that Jan posted?
>> is the trace I get:
>
> That is a different issue. That is the kernel, while the outstanding
> issue in regards to suspend/resume is in the hypervisor.
>>
>> [  142.322778] ACPI: Preparing to enter system sleep state S3
>> [  142.723286] PM: Saving platform NVS memory
>> [  142.736453] Disabling non-boot CPUs ...
>> [  142.851387] ------------[ cut here ]------------
>> [  142.851397] WARNING: at
>> /home/storage/src/ubuntu-precise/kernel/rcutree.c:1550
>> rcu_do_batch.isra.41+0x5f/0x213()
>
>
> which ought to be fixed, but lets concentrate on one thing at a time.
> If you use the patch that Jan posted, does it work? And I presume
> you also have the two out-of-tree patches to make resume work in the
> dom0?

I haven't been able to see the actual patch; I only read a description
of what it should do: there were two possible places where a wbinvd()
call should be added. In the first of them there were already calls to
wbinvd() just before the halts, so I added it to the other one, within
xen/arch/x86/domain.c:default_idle. With that I could complete a
suspend/resume cycle without the back trace, but after rebooting I
tried again and it failed, more than once.

I do have the other two out-of-tree patches applied; otherwise it
would break much earlier than this.

>> [  142.851401] Hardware name: To Be Filled By O.E.M.
>> [  142.851404] Modules linked in: blktap(O) ip6table_filter ip6_tables
>> ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
>> xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp
>> iptable_filter ip_tables x_tables [last unloaded: cx25840]
>> [  142.851466] Pid: 32, comm: migration/7 Tainted: G           O
>> 3.5.0-14-i7 #18~precise1
>> [  142.851469] Call Trace:
>> [  142.851473]  <IRQ>  [<ffffffff8106a9bd>] warn_slowpath_common+0x7e/0x96
>> [  142.851488]  [<ffffffff8106a9ea>] warn_slowpath_null+0x15/0x17
>> [  142.851496]  [<ffffffff810c6e87>] rcu_do_batch.isra.41+0x5f/0x213
>> [  142.851504]  [<ffffffff810c687f>] ?
>> check_for_new_grace_period.isra.35+0x38/0x44
>> [  142.851512]  [<ffffffff810c7101>] __rcu_process_callbacks+0xc6/0xee
>> [  142.851520]  [<ffffffff810c7149>] rcu_process_callbacks+0x20/0x3e
>> [  142.851527]  [<ffffffff81070c27>] __do_softirq+0x87/0x113
>> [  142.851536]  [<ffffffff812e7b22>] ? __xen_evtchn_do_upcall+0x1a0/0x1dd
>> [  142.851546]  [<ffffffff8162f05c>] call_softirq+0x1c/0x30
>> [  142.851557]  [<ffffffff810343a3>] do_softirq+0x41/0x7e
>> [  142.851566]  [<ffffffff81070e66>] irq_exit+0x3f/0x9a
>> [  142.851573]  [<ffffffff812e95c9>] xen_evtchn_do_upcall+0x2c/0x39
>> [  142.851580]  [<ffffffff8162f0ae>] xen_do_hypervisor_callback+0x1e/0x30
>> [  142.851588]  <EOI>  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
>> [  142.851601]  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
>> [  142.851609]  [<ffffffff8102eee9>] ? xen_force_evtchn_callback+0xd/0xf
>> [  142.851618]  [<ffffffff8102f552>] ? check_events+0x12/0x20
>> [  142.851627]  [<ffffffff8102f53f>] ? xen_restore_fl_direct_reloc+0x4/0x4
>> [  142.851635]  [<ffffffff810b6ff4>] ? arch_local_irq_restore+0xb/0xd
>> [  142.851644]  [<ffffffff810b73f4>] ? stop_machine_cpu_stop+0xc1/0xd3
>> [  142.851652]  [<ffffffff810b7333>] ? queue_stop_cpus_work+0xb5/0xb5
>> [  142.851660]  [<ffffffff810b7152>] ? cpu_stopper_thread+0xf7/0x187
>> [  142.851667]  [<ffffffff8108a927>] ? finish_task_switch+0x82/0xc1
>> [  142.851676]  [<ffffffff8162c50b>] ? __schedule+0x428/0x454
>> [  142.851685]  [<ffffffff8162cff0>] ? _raw_spin_unlock_irqrestore+0x15/0x18
>> [  142.851693]  [<ffffffff810b705b>] ? cpu_stop_signal_done+0x30/0x30
>> [  142.851701]  [<ffffffff81082627>] ? kthread+0x86/0x8e
>> [  142.851710]  [<ffffffff8162ef64>] ? kernel_thread_helper+0x4/0x10
>> [  142.851718]  [<ffffffff8162d338>] ? retint_restore_args+0x5/0x6
>> [  142.851726]  [<ffffffff8162ef60>] ? gs_change+0x13/0x13
>> [  142.851736] ---[ end trace 3722a99bcc5ae37a ]---
>> [  144.257229] ACPI: Low-level resume complete
>> [  144.257322] PM: Restoring platform NVS memory


-- 
Javier Marcet <jmarcet@gmail.com>


* Re: Xen4.2 S3 regression?
  2012-09-25 14:22                                                                                                                           ` Ben Guthro
@ 2012-09-25 14:53                                                                                                                             ` Keir Fraser
  2012-09-25 15:10                                                                                                                             ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-25 14:53 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich; +Cc: John Baboval, xen-devel



This was introduced as part of a patch to avoid losing cpu and cpupool
affinities/memberships across S3. Looks like it breaks some assumptions in
the scheduler though, probably because all CPUs are not taken offline
atomically, nor brought back online atomically. Hence some other running CPU
can execute hypervisor code that observes VCPUs in this bad
can't-run-anywhere state. I guess this is what is happening. I'm not
immediately sure of the best fix. :(

 -- Keir

On 25/09/2012 15:22, "Ben Guthro" <ben@guthro.net> wrote:

> I went back to an old patch that had, since it was in this same function that
> you made reference to:
> http://markmail.org/message/qpnmiqzt5bngeejk
> 
> I noticed that I was not seeing the "Breaking vcpu affinity" printk - so I
> tried to get that 
> 
> The change proposed in that thread seems to work around this pinning problem.
> However, I'm not sure that it is the "right thing" to be doing.
> 
> Do you have any thoughts on this?
> 
> 
> On Tue, Sep 25, 2012 at 7:56 AM, Ben Guthro <ben@guthro.net> wrote:
>> 
>> On Tue, Sep 25, 2012 at 3:00 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> >>> On 24.09.12 at 23:12, Ben Guthro <ben@guthro.net> wrote:
>>>> > Here's my "Big hammer" debugging patch.
>>>> >
>>>> > If I force the cpu to be scheduled on CPU0 when the appropriate cpu is
>>>> not
>>>> > online, I can resume properly.
>>>> >
>>>> > Clearly this is not the proper solution, and I'm sure the fix is subtle.
>>>> > I'm not seeing it right now though. Perhaps tomorrow morning.
>>>> > If you have any ideas, I'm happy to run tests then.
>>> 
>>> I can't see how the printk() you add in the patch would ever get
>>> reached with the other adjustment you do there.
>> 
>> Apologies. I failed to separate prior debugging in this patch from the "big
>> hammer" fix
>>  
>>> A debug build,
>>> as Keir suggested, would not only get the stack trace right, but
>>> would also result in the ASSERT() right after your first modification
>>> to _csched_cpu_pick() to actually do something (and likely trigger).
>> 
>> Indeed. I was using non-debug builds for 2 reasons that, in hindsight may not
>> be the best of reasons.
>> 1. It was the default
>> 2. Mukesh's kdb debugger requires debug to be off, which I was making use of
>> previously, and had not disabled.
>>  
>> The stack from a debug build can be found below.
>> It did, indeed trigger the ASSERT, as you predicted.
>> 
>> 
>> (XEN) Finishing wakeup from ACPI S3 state.
>> (XEN) Enabling non-boot CPUs  ...
>> (XEN) Booting processor 1/1 eip 8a000
>> (XEN) Initializing CPU#1
>> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
>> (XEN) CPU: L2 cache: 3072K
>> (XEN) CPU: Physical Processor ID: 0
>> (XEN) CPU: Processor Core ID: 1
>> (XEN) CMCI: CPU1 has no CMCI support
>> (XEN) CPU1: Thermal monitoring enabled (TM2)
>> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
>> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
>> 2010-09-29 
>> [   82.310025] ACPI: Low-level resume complete
>> [   82.310025] PM: Restoring platform NVS memory
>> [   82.310025] Enabling non-boot CPUs ...
>> [   82.310025] installing Xen timer for CPU 1
>> [   82.310025] cpu 1 spinlock event irq 279
>> (XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)'
>> failed at sched_credit.c:477
>> (XEN) ----[ Xen-4.2.1-pre  x86_64  debug=y  Tainted:    C ]----
>> (XEN) CPU:    1
>> (XEN) RIP:    e008:[<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
>> (XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
>> (XEN) rax: 0000000000000001   rbx: 0000000000000004   rcx: 0000000000000004
>> (XEN) rdx: 000000000000000f   rsi: 0000000000000004   rdi: 0000000000000000
>> (XEN) rbp: ffff8301355d7dd8   rsp: ffff8301355d7d08   r8:  0000000000000000
>> (XEN) r9:  000000000000003e   r10: ffff82c480231700   r11: 0000000000000246
>> (XEN) r12: ffff82c480261b20   r13: 0000000000000001   r14: ffff82c480301a60
>> (XEN) r15: ffff83013a542068   cr0: 000000008005003b   cr4: 00000000000026f0
>> (XEN) cr3: 0000000131a05000   cr2: 0000000000000000
>> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen stack trace from rsp=ffff8301355d7d08:
>> (XEN)    0100000131a05000 ffff8301355d7d40 0000000000000082 0000000000000002
>> (XEN)    ffff8300bd503000 0000000000000001 0000000000000297 ffff8301355d7d58
>> (XEN)    ffff82c480125499 ffff830138216000 ffff8301355d7d98 5400000000000002
>> (XEN)    0000000000000286 ffff8301355d7d88 ffff82c480125499 ffff830138216000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> (XEN)    ffff830134ca6a50 ffff83013a542068 ffff83013a542068 0000000000000001
>> (XEN)    ffff82c480301a60 ffff83013a542068 ffff8301355d7de8 ffff82c48011a785
>> (XEN)    ffff8301355d7e58 ffff82c480123519 ffff82c480301a60 ffff82c480301a60
>> (XEN)    ffff82c480301a60 ffff8300bd503000 0000000000503060 0000000000000246
>> (XEN)    ffff82c480127c31 ffff8300bd503000 ffff82c480301a60 ffff82c4802ebd40
>> (XEN)    ffff83013a542068 ffff88003fc8e820 ffff8301355d7e88 ffff82c4801237d3
>> (XEN)    fffffffffffffffe ffff8301355ca000 ffff8300bd503000 0000000000000000
>> (XEN)    ffff8301355d7ef8 ffff82c480106335 ffff8301355d7f18 ffffffff810030e1
>> (XEN)    ffff8300bd503000 0000000000000000 ffff8301355d7f08 ffff82c480185390
>> (XEN)    ffffffff81aafd32 ffff8300bd503000 0000000000000001 0000000000000000
>> (XEN)    0000000000000000 ffff88003fc8e820 00007cfecaa280c7 ffff82c480227348
>> (XEN)    ffffffff8100130a 0000000000000018 ffff88003fc8e820 0000000000000000
>> (XEN)    0000000000000000 0000000000000001 ffff88003976fda0 ffff88003fc8bdc0
>> (XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff 0000000000000000
>> (XEN)    0000000000000018 ffffffff8100130a 0000000000000000 0000000000000001
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
>> (XEN)    [<ffff82c48011a785>] csched_cpu_pick+0xe/0x10
>> (XEN)    [<ffff82c480123519>] vcpu_migrate+0x19f/0x346
>> (XEN)    [<ffff82c4801237d3>] vcpu_force_reschedule+0xa4/0xb6
>> (XEN)    [<ffff82c480106335>] do_vcpu_op+0x2c9/0x452
>> (XEN)    [<ffff82c480227348>] syscall_enter+0xc8/0x122
>> (XEN)    
>> (XEN) 
>> (XEN) ****************************************
>> (XEN) Panic on CPU 1:
>> (XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)'
>> failed at sched_credit.c:477
>> (XEN) ****************************************
>> (XEN) 
>> (XEN) Reboot in five seconds...
>> 
>>> 
>>> Anyway, this might be connected to cpu_disable_scheduler() not
>>> having a counterpart to restore the affinity it broke for pinned
>>> domains (for non-pinned ones I believe this behavior is intentional,
>>> albeit not ideal).
>>> 
>>> Jan
>>> 
>> 
> 
> 



* Re: Xen4.2 S3 regression?
  2012-09-25 14:22                                                                                                                           ` Ben Guthro
  2012-09-25 14:53                                                                                                                             ` Keir Fraser
@ 2012-09-25 15:10                                                                                                                             ` Jan Beulich
  2012-09-25 15:45                                                                                                                               ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-25 15:10 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, John Baboval, xen-devel

>>> On 25.09.12 at 16:22, Ben Guthro <ben@guthro.net> wrote:
> I went back to an old patch that had, since it was in this same function
> that you made reference to:
> http://markmail.org/message/qpnmiqzt5bngeejk 
> 
> I noticed that I was not seeing the "Breaking vcpu affinity" printk - so I
> tried to get that
> 
> The change proposed in that thread seems to work around this pinning
> problem.
> However, I'm not sure that it is the "right thing" to be doing.

As said back then, the original change looks a little suspicious,
and hence reverting it is certainly to be considered.

However, I don't see yet how this is connected to you having
a problem only with pinned vCPU-s.

But - with S3 working again (with the change here) or originally,
did you ever check whether post-resume Dom0 vCPU-s would
still be pinned? I suspect they aren't, and hence addressing the
crash may likely be achieved by properly restoring the pinning.

Jan


* Re: Xen4.2 S3 regression?
  2012-09-25 14:47                                                                                                         ` Javier Marcet
@ 2012-09-25 15:21                                                                                                           ` Jan Beulich
  2012-09-25 15:23                                                                                                             ` Javier Marcet
  2012-09-25 19:55                                                                                                             ` Javier Marcet
  0 siblings, 2 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-25 15:21 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Thomas Goetz

>>> On 25.09.12 at 16:47, Javier Marcet <jmarcet@gmail.com> wrote:
> I haven't been able to see the actual patch, I only read a description
> of what it should do. Two possible places where a wbinvd() call should
> be added. On the first of these two places there were already calls to
> wbinvd() just before the halts. I added it to the other one, within
> xen/arch/x86/domain.c:default_idle and I could complete a
> suspend/resume cycle without the back trace but after rebooting I
> tried it again and it failed, more than once.

default_idle() isn't the right place, default_dead_idle() is.

See
http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/c8d65d91a6f2

Jan


* Re: Xen4.2 S3 regression?
  2012-09-25 15:21                                                                                                           ` Jan Beulich
@ 2012-09-25 15:23                                                                                                             ` Javier Marcet
  2012-09-25 19:55                                                                                                             ` Javier Marcet
  1 sibling, 0 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-25 15:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Thomas Goetz

On Tue, Sep 25, 2012 at 5:21 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 25.09.12 at 16:47, Javier Marcet <jmarcet@gmail.com> wrote:
>> I haven't been able to see the actual patch, I only read a description
>> of what it should do. Two possible places where a wbinvd() call should
>> be added. On the first of these two places there were already calls to
>> wbinvd() just before the halts. I added it to the other one, within
>> xen/arch/x86/domain.c:default_idle and I could complete a
>> suspend/resume cycle without the back trace but after rebooting I
>> tried it again and it failed, more than once.
>
> default_idle() isn't the right place, default_dead_idle() is.
>
> See
> http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/c8d65d91a6f2

Thanks for the pointer, Jan. I'll try it out this evening and report back.


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 15:10                                                                                                                             ` Jan Beulich
@ 2012-09-25 15:45                                                                                                                               ` Ben Guthro
  2012-09-25 15:52                                                                                                                                 ` Keir Fraser
  2012-09-26 11:49                                                                                                                                 ` Jan Beulich
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-25 15:45 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1547 bytes --]

oops. I didn't reply-all on this - re-replying.

On Tue, Sep 25, 2012 at 11:10 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 25.09.12 at 16:22, Ben Guthro <ben@guthro.net> wrote:
> > I went back to an old patch that had, since it was in this same function
> > that you made reference to:
> > http://markmail.org/message/qpnmiqzt5bngeejk
> >
> > I noticed that I was not seeing the "Breaking vcpu affinity" printk - so
> I
> > tried to get that
> >
> > The change proposed in that thread seems to work around this pinning
> > problem.
> > However, I'm not sure that it is the "right thing" to be doing.
>
> As said back then, the original change looks a little suspicious,
> and hence reverting it is certainly to be considered.
>

Which is one of the reasons I brought it up again.


>
> However, I don't see yet how this is connected to you having
> a problem only with pinned vCPU-s.
>

Well - when the command line parameter dom0_vcpu_pin exists -
it crashes without this change. It does not crash with it, or without
the command line parameter.

Perhaps they are unrelated.



>
> But - with S3 working again (with the change here) or originally,
> did you ever check whether post-resume Dom0 vCPU-s would
> still be pinned? I suspect they aren't, and hence addressing the
> crash may likely be achieved by properly restoring the pinning.
>

No, I don't believe they are pinned after resume...so while I
suppose it is not entirely correct, it is also better than crashing.

(it is also the observed behavior in Xen-4.0.x)


>
> Jan
>
>

[-- Attachment #1.2: Type: text/html, Size: 2709 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 15:45                                                                                                                               ` Ben Guthro
@ 2012-09-25 15:52                                                                                                                                 ` Keir Fraser
  2012-09-26 11:49                                                                                                                                 ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-25 15:52 UTC (permalink / raw)
  To: Ben Guthro, Jan Beulich; +Cc: John Baboval, xen-devel

On 25/09/2012 16:45, "Ben Guthro" <ben@guthro.net> wrote:

> Which is one of the reasons I brought it up again.
>  
>> 
>> However, I don't see yet how this is connected to you having
>> a problem only with pinned vCPU-s.
> 
> Well - when the command line parameter dom0_vcpu_pin exists - 
> it crashes without this change. It does not crash with it, or without 
> the command line parameter.
> 
> Perhaps they are unrelated.

Of course they're related. Dom0 kernel is fixing itself up after S3, while
hypervisor is still bringing CPUs back online. It's a race for one of dom0
kernel's running VCPUs to do a hypercall that observes that one of its other
VCPUs is unschedulable (because there are no currently-online CPUs in its
cpu affinity mask!).

 -- Keir

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 15:21                                                                                                           ` Jan Beulich
  2012-09-25 15:23                                                                                                             ` Javier Marcet
@ 2012-09-25 19:55                                                                                                             ` Javier Marcet
  2012-09-25 19:57                                                                                                               ` Ben Guthro
  2012-09-26  7:17                                                                                                               ` Jan Beulich
  1 sibling, 2 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-25 19:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Thomas Goetz

On Tue, Sep 25, 2012 at 5:21 PM, Jan Beulich <JBeulich@suse.com> wrote:

>>>> On 25.09.12 at 16:47, Javier Marcet <jmarcet@gmail.com> wrote:
>> I haven't been able to see the actual patch, I only read a description
>> of what it should do. Two possible places where a wbinvd() call should
>> be added. On the first of these two places there were already calls to
>> wbinvd() just before the halts. I added it to the other one, within
>> xen/arch/x86/domain.c:default_idle and I could complete a
>> suspend/resume cycle without the back trace but after rebooting I
>> tried it again and it failed, more than once.
>
> default_idle() isn't the right place, default_dead_idle() is.
>
> See
> http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/c8d65d91a6f2

I'm sorry to say it doesn't fix it on my system, i.e., kernel 3.5.3
and xen 4.2.1-pre (the head from
git://xenbits.xen.org/xen.git#stable-4.2)

It really was basically what I had tried yesterday, just with duplicated code.

Adding the wbinvd() only in xen/arch/x86/domain.c was how I was able
to suspend and resume once without the back trace.

Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
instant reboot on resume.


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 19:55                                                                                                             ` Javier Marcet
@ 2012-09-25 19:57                                                                                                               ` Ben Guthro
  2012-09-25 20:08                                                                                                                 ` Javier Marcet
  2012-09-26  7:17                                                                                                               ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-25 19:57 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Jan Beulich, Thomas Goetz


You still haven't said if you have the out-of-tree acpi-s3 patches from
Konrad.
Without these patches applied to linux - this is a non-starter.


On Tue, Sep 25, 2012 at 3:55 PM, Javier Marcet <jmarcet@gmail.com> wrote:

> On Tue, Sep 25, 2012 at 5:21 PM, Jan Beulich <JBeulich@suse.com> wrote:
>
> >>>> On 25.09.12 at 16:47, Javier Marcet <jmarcet@gmail.com> wrote:
> >> I haven't been able to see the actual patch, I only read a description
> >> of what it should do. Two possible places where a wbinvd() call should
> >> be added. On the first of these two places there were already calls to
> >> wbinvd() just before the halts. I added it to the other one, within
> >> xen/arch/x86/domain.c:default_idle and I could complete a
> >> suspend/resume cycle without the back trace but after rebooting I
> >> tried it again and it failed, more than once.
> >
> > default_idle() isn't the right place, default_dead_idle() is.
> >
> > See
> > http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/c8d65d91a6f2
>
> I'm sorry to say it doesn't fix it on my system, i.e., kernel 3.5.3
> and xen 4.2.1-pre (the head from
> git://xenbits.xen.org/xen.git#stable-4.2)
>
> It really was basically what I had tried yesterday, just with duplicated
> code.
>
> Adding the wbinvd() only in xen/arch/x86/domain.c was how I was able
> to suspend and resume once without the back trace.
>
> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
> instant reboot on resume.
>
>
> --
> Javier Marcet <jmarcet@gmail.com>
>


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 19:57                                                                                                               ` Ben Guthro
@ 2012-09-25 20:08                                                                                                                 ` Javier Marcet
  0 siblings, 0 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-25 20:08 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Jan Beulich, Thomas Goetz

On Tue, Sep 25, 2012 at 9:57 PM, Ben Guthro <ben@guthro.net> wrote:

> You still haven't said if you have the out-of-tree acpi-s3 patches from
> Konrad.
> Without these patches applied to linux - this is a non-starter.

Yes, I _do_ have them applied. If you re-read my previous messages
you'll see I already said it before. Just to be clear, the patches
are:

- x86/acpi/sleep: Provide registration for acpi_suspend_lowlevel
- xen/acpi/sleep: Register to the acpi_suspend_lowlevel a callback.

Both are from Konrad, from a patch series posted last December on lkml.

If there is any other patch I need for Xen, I don't have it, nor do I
know which patch it is.


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 19:55                                                                                                             ` Javier Marcet
  2012-09-25 19:57                                                                                                               ` Ben Guthro
@ 2012-09-26  7:17                                                                                                               ` Jan Beulich
  2012-09-26  7:59                                                                                                                 ` Javier Marcet
  2012-09-26  8:05                                                                                                                 ` Javier Marcet
  1 sibling, 2 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-26  7:17 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Thomas Goetz

>>> On 25.09.12 at 21:55, Javier Marcet <jmarcet@gmail.com> wrote:
> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
> instant reboot on resume.

That addition can hardly be responsible for a reboot. Did you
have "noreboot" (or "reboot=no") in place on the Xen command
line? "sync_console"?

And then again, for the other failure case, iirc it was the kernel
that died, not the hypervisor, so the problem there isn't directly
related to the problems here I would guess.

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26  7:17                                                                                                               ` Jan Beulich
@ 2012-09-26  7:59                                                                                                                 ` Javier Marcet
  2012-09-26 12:43                                                                                                                   ` Konrad Rzeszutek Wilk
  2012-09-26  8:05                                                                                                                 ` Javier Marcet
  1 sibling, 1 reply; 134+ messages in thread
From: Javier Marcet @ 2012-09-26  7:59 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Thomas Goetz

On Wed, Sep 26, 2012 at 9:17 AM, Jan Beulich <JBeulich@suse.com> wrote:

>> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
>> instant reboot on resume.
>
> That addition can hardly be responsible for a reboot. Did you

Just like I could perform a full cycle without issues, the instant
reboot might as well be another way the race ends.

> have "noreboot" (or "reboot=no") in place on the Xen command
> line? "sync_console"?

Neither of them. I have used the sync_console parameter to check
whether it changed anything but I removed it afterwards.

> And then again, for the other failure case, iirc it was the kernel
> that died, not the hypervisor, so the problem there isn't directly
> related to the problems here I would guess.

All I know is that I'm using that same kernel without hypervisor, with
lots of suspend/resume and not a single issue.

With the two kernel patches from Konrad added I can also suspend and
resume fine under the hypervisor but there is always a cpu which
receives an irq while offline.


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26  7:17                                                                                                               ` Jan Beulich
  2012-09-26  7:59                                                                                                                 ` Javier Marcet
@ 2012-09-26  8:05                                                                                                                 ` Javier Marcet
  1 sibling, 0 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-26  8:05 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Thomas Goetz

On Wed, Sep 26, 2012 at 9:17 AM, Jan Beulich <JBeulich@suse.com> wrote:

>> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
>> instant reboot on resume.
>
> That addition can hardly be responsible for a reboot. Did you
> have "noreboot" (or "reboot=no") in place on the Xen command
> line? "sync_console"?
>
> And then again, for the other failure case, iirc it was the kernel
> that died, not the hypervisor, so the problem there isn't directly
> related to the problems here I would guess.

By the way, I still don't have a serial console ready. I should have
it soon, one to two weeks tops. In the meantime, although I have very
little time right now and haven't followed all the steps Ben has
taken, I'm willing to try anything you deem worth it.


-- 
Javier Marcet <jmarcet@gmail.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 11:56                                                                                                                         ` Ben Guthro
  2012-09-25 14:22                                                                                                                           ` Ben Guthro
@ 2012-09-26 10:43                                                                                                                           ` Jan Beulich
  2012-09-26 10:47                                                                                                                             ` Ben Guthro
  2012-09-26 18:21                                                                                                                             ` Ben Guthro
  1 sibling, 2 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-26 10:43 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, John Baboval, xen-devel

>>> On 25.09.12 at 13:56, Ben Guthro <ben@guthro.net> wrote:
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) Booting processor 1/1 eip 8a000
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 3072K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) CMCI: CPU1 has no CMCI support
> (XEN) CPU1: Thermal monitoring enabled (TM2)
> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date = 2010-09-29

Btw., I'm also puzzled by only seeing a ucode update message
here for CPU1 - I just went through the logic again and can't see
why this would not also be done for CPU0. Could you make the
pr_debug() in microcode_intel.c actually print something, so we
can see whether at least collect_cpu_info() would get called
for it, hopefully allowing to deduce what happens subsequently?

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26 10:43                                                                                                                           ` Jan Beulich
@ 2012-09-26 10:47                                                                                                                             ` Ben Guthro
  2012-09-26 18:21                                                                                                                             ` Ben Guthro
  1 sibling, 0 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-26 10:47 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel


On Wed, Sep 26, 2012 at 6:43 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 25.09.12 at 13:56, Ben Guthro <ben@guthro.net> wrote:
> > (XEN) Finishing wakeup from ACPI S3 state.
> > (XEN) Enabling non-boot CPUs  ...
> > (XEN) Booting processor 1/1 eip 8a000
> > (XEN) Initializing CPU#1
> > (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> > (XEN) CPU: L2 cache: 3072K
> > (XEN) CPU: Physical Processor ID: 0
> > (XEN) CPU: Processor Core ID: 1
> > (XEN) CMCI: CPU1 has no CMCI support
> > (XEN) CPU1: Thermal monitoring enabled (TM2)
> > (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> > (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
> 2010-09-29
>
> Btw., I'm also puzzled by only seeing a ucode update message
> here for CPU1 - I just went through the logic again and can't see
> why this would not also be done for CPU0. Could you make the
> pr_debug() in microcode_intel.c actually print something, so we
> can see whether at least collect_cpu_info() would get called
> for it, hopefully allowing to deduce what happens subsequently?
>


Happy to do so - but I may not be able to do this test until tomorrow.

Ben


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-25 15:45                                                                                                                               ` Ben Guthro
  2012-09-25 15:52                                                                                                                                 ` Keir Fraser
@ 2012-09-26 11:49                                                                                                                                 ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-26 11:49 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, John Baboval, xen-devel

>>> On 25.09.12 at 17:45, Ben Guthro <ben@guthro.net> wrote:
> On Tue, Sep 25, 2012 at 11:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 25.09.12 at 16:22, Ben Guthro <ben@guthro.net> wrote:
>> > I went back to an old patch that had, since it was in this same function
>> > that you made reference to:
>> > http://markmail.org/message/qpnmiqzt5bngeejk 
>> >
>> > I noticed that I was not seeing the "Breaking vcpu affinity" printk - so
>> > I tried to get that
>> >
>> > The change proposed in that thread seems to work around this pinning
>> > problem.
>> > However, I'm not sure that it is the "right thing" to be doing.
>>
>> As said back then, the original change looks a little suspicious,
>> and hence reverting it is certainly to be considered.
> 
> Which is one of the reasons I brought it up again.

So could you then do what I suggested back then - make a copy
of the two involved CPU masks (*online and *vc->cpu_affinity)
plus vc->domain->cpupool, vc->processor, and vc->vcpu_id into
on-stack variables (ensuring they don't get eliminated by the
compiler, e.g. by adding a read access anywhere after the
triggering ASSERT())?

Then make the source change (i.e. the data layout) available, either
together with the xen-syms, or put the copies into a structure
together with some magic ID to identify where in the stack dump
the interesting bits are.

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26  7:59                                                                                                                 ` Javier Marcet
@ 2012-09-26 12:43                                                                                                                   ` Konrad Rzeszutek Wilk
  2012-09-26 14:14                                                                                                                     ` Javier Marcet
  0 siblings, 1 reply; 134+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-09-26 12:43 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Ben Guthro, Jan Beulich, Thomas Goetz

On Wed, Sep 26, 2012 at 09:59:07AM +0200, Javier Marcet wrote:
> On Wed, Sep 26, 2012 at 9:17 AM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> >> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
> >> instant reboot on resume.
> >
> > That addition can hardly be responsible for a reboot. Did you
> 
> Just like I could perform a full cycle without issues, the instant
> reboot might as well be another way the race ends.
> 
> > have "noreboot" (or "reboot=no") in place on the Xen command
> > line? "sync_console"?
> 
> Neither of them. I have used the sync_console parameter to check
> whether it changed anything but I removed it afterwards.
> 
> > And then again, for the other failure case, iirc it was the kernel
> > that died, not the hypervisor, so the problem there isn't directly
> > related to the problems here I would guess.
> 
> All I know is that I'm using that same kernel without hypervisor, with
> lots of suspend/resume and not a single issue.
> 
> With the two kernel patches from Konrad added I can also suspend and
> resume fine under the hypervisor but there is always a cpu which
> receives an irq while offline.

And that error you get - can you reproduce it without going to
suspend/resume? Meaning if you offline/online a VCPU in the
guest by manipulating the /sys/../cpuX/online attribute?

And actually - we should move that discussion to a completely different
thread, as it has nothing to do with the hypervisor. That is a PV kernel
issue.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26 12:43                                                                                                                   ` Konrad Rzeszutek Wilk
@ 2012-09-26 14:14                                                                                                                     ` Javier Marcet
  2012-09-26 14:26                                                                                                                       ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Javier Marcet @ 2012-09-26 14:14 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Ben Guthro, Jan Beulich, Thomas Goetz


On Wed, Sep 26, 2012 at 2:43 PM, Konrad Rzeszutek Wilk <konrad@kernel.org>wrote:

> >> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
> > >> instant reboot on resume.
> > >
> > > That addition can hardly be responsible for a reboot. Did you
> >
> > Just like I could perform a full cycle without issues, the instant
> > reboot might as well be another way the race ends.
> >
> > > have "noreboot" (or "reboot=no") in place on the Xen command
> > > line? "sync_console"?
> >
> > Neither of them. I have used the sync_console parameter to check
> > whether it changed anything but I removed it afterwards.
> >
> > > And then again, for the other failure case, iirc it was the kernel
> > > that died, not the hypervisor, so the problem there isn't directly
> > > related to the problems here I would guess.
> >
> > All I know is that I'm using that same kernel without hypervisor, with
> > lots of suspend/resume and not a single issue.
> >
> > With the two kernel patches from Konrad added I can also suspend and
> > resume fine under the hypervisor but there is always a cpu which
> > receives an irq while offline.
>
> And that error you get - can you reproduce it without going to
> suspend/resume? Meaning if you offline/online a VCPU in the
> guest by manipulating the /sys/../cpuX/online attribute?
>
> And actually - we should move that discussion to a completely different
> thread, as it has nothing to do with the hypervisor. That is a PV kernel
> issue.
>

Konrad, this is on a dom0 with no guest running.


-- 
Javier Marcet <jmarcet@gmail.com>


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26 14:14                                                                                                                     ` Javier Marcet
@ 2012-09-26 14:26                                                                                                                       ` Ben Guthro
  2012-09-26 14:40                                                                                                                         ` Javier Marcet
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-26 14:26 UTC (permalink / raw)
  To: Javier Marcet
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Jan Beulich, Thomas Goetz


Yes. Pvops PV dom0 kernel.

It is really more appropriate to start a new thread, than having 2
disparate conversations in this one. It makes it confusing to follow.

On Sep 26, 2012, at 10:15 AM, Javier Marcet <jmarcet@gmail.com> wrote:

On Wed, Sep 26, 2012 at 2:43 PM, Konrad Rzeszutek Wilk <konrad@kernel.org>wrote:

> >> Adding the additional call in xen/arch/x86/acpi/cpu_idle.c causes an
> > >> instant reboot on resume.
> > >
> > > That addition can hardly be responsible for a reboot. Did you
> >
> > Just like I could perform a full cycle without issues, the instant
> > reboot might as well be another way the race ends.
> >
> > > have "noreboot" (or "reboot=no") in place on the Xen command
> > > line? "sync_console"?
> >
> > Neither of them. I have used the sync_console parameter to check
> > whether it changed anything but I removed it afterwards.
> >
> > > And then again, for the other failure case, iirc it was the kernel
> > > that died, not the hypervisor, so the problem there isn't directly
> > > related to the problems here I would guess.
> >
> > All I know is that I'm using that same kernel without hypervisor, with
> > lots of suspend/resume and not a single issue.
> >
> > With the two kernel patches from Konrad added I can also suspend and
> > resume fine under the hypervisor but there is always a cpu which
> > receives an irq while offline.
>
> And that error you get - can you reproduce it without going to
> suspend/resume? Meaning if you offline/online a VCPU in the
> guest by manipulating the /sys/../cpuX/online attribute?
>
> And actually - we should move that discussion to a completely different
> thread, as it has nothing to do with the hypervisor. That is a PV kernel
> issue.
>

Konrad, this is on a dom0 with no guest running.


-- 
Javier Marcet <jmarcet@gmail.com>


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26 14:26                                                                                                                       ` Ben Guthro
@ 2012-09-26 14:40                                                                                                                         ` Javier Marcet
  0 siblings, 0 replies; 134+ messages in thread
From: Javier Marcet @ 2012-09-26 14:40 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Keir Fraser, john.baboval, Konrad Rzeszutek Wilk, xen-devel,
	Konrad Rzeszutek Wilk, Ben Guthro, Jan Beulich, Thomas Goetz


On Wed, Sep 26, 2012 at 4:26 PM, Ben Guthro <ben.guthro@gmail.com> wrote:

Yes. Pvops PV dom0 kernel.
>
> It is really more appropriate to start a new thread, than having 2
> disparate conversations in this one. It makes it confusing to follow.
>

OK. I started it a few days ago, with the back trace and all, but I have got
no response so far. I'll try what you suggest and continue on the other
thread.


-- 
Javier Marcet <jmarcet@gmail.com>


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26 10:43                                                                                                                           ` Jan Beulich
  2012-09-26 10:47                                                                                                                             ` Ben Guthro
@ 2012-09-26 18:21                                                                                                                             ` Ben Guthro
  2012-09-27  7:38                                                                                                                               ` Jan Beulich
  1 sibling, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-26 18:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2599 bytes --]

On Wed, Sep 26, 2012 at 6:43 AM, Jan Beulich <JBeulich@suse.com> wrote:

>
> Btw., I'm also puzzled by only seeing a ucode update message
> here for CPU1 - I just went through the logic again and can't see
> why this would not also be done for CPU0. Could you make the
> pr_debug() in microcode_intel.c actually print something, so we
> can see whether at least collect_cpu_info() would get called
> for it, hopefully allowing to deduce what happens subsequently?
>
>
I put a pr_debug at the top of collect_cpu_info, and from the trace below
it appears everything is as it should be.
dom0 and Xen serial output get interspersed there for a few lines, but the
microcode is, in fact, being loaded for both CPUs.


[   36.787452] ACPI: Preparing to enter system sleep state S3
[   37.240118] PM: Saving platform NVS memory
[   37.313160] Disabling non-boot CPUs ...
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Breaking vcpu affinity for domain 0 vcpu 1
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
extended MCE MSR 0
(XEN) CMCI: CPU0 has no CMCI support
(XEN) CPU0: Thermal monitoring enabled (TM2)
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: collect_cpu_info for CPU0
(XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
(XEN) Enabling non-boot CPUs  ...
(XEN) Booting processor 1/1 eip 8a000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 3072K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CMCI: CPU1 has no CMCI support
(XEN) CPU1: Thermal monitoring enabled (TM2)
(XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
(XEN) microcode: collect_cpu_info for CPU1
[   37.420030] A(XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80,
rev=0x60c
CPI: Low-level r(XEN) microcode: CPU1 found a matching microcode update
with version 0x60f (current=0x60c)
esume complete
(XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
2010-09-29
[   37.420030] PM: Restoring platform NVS memory
(XEN) microcode: collect_cpu_info for CPU0
(XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
(XEN) microcode: collect_cpu_info for CPU1
(XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f
[   37.431008] Enabling non-boot CPUs ...
[   37.434851] installing Xen timer for CPU 1
[   37.439031] cpu 1 spinlock event irq 279
[   37.327214] Disabled fast string operations
[   37.458040] CPU1 is up
[   37.465417] ACPI: Waking up from system sleep state S3

[-- Attachment #1.2: Type: text/html, Size: 3464 bytes --]


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-26 18:21                                                                                                                             ` Ben Guthro
@ 2012-09-27  7:38                                                                                                                               ` Jan Beulich
  2012-09-27  7:46                                                                                                                                 ` Keir Fraser
  2012-09-27 12:12                                                                                                                                 ` Ben Guthro
  0 siblings, 2 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-27  7:38 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, John Baboval, xen-devel

>>> On 26.09.12 at 20:21, Ben Guthro <ben@guthro.net> wrote:
> I put a pr_debug at the top of collect_cpu_info - and from the trace below
> - it appears everything is as it should be,
> dom0 and Xen serial output get interspersed there for a few lines, but the
> microcode is, in fact being loaded for both CPUs.

Definitely not:

> [   36.787452] ACPI: Preparing to enter system sleep state S3
> [   37.240118] PM: Saving platform NVS memory
> [   37.313160] Disabling non-boot CPUs ...
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Breaking vcpu affinity for domain 0 vcpu 1
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
> extended MCE MSR 0
> (XEN) CMCI: CPU0 has no CMCI support
> (XEN) CPU0: Thermal monitoring enabled (TM2)
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) microcode: collect_cpu_info for CPU0
> (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c

Rev 0x60c is found installed, but no action is taken.

> (XEN) Enabling non-boot CPUs  ...
> (XEN) Booting processor 1/1 eip 8a000
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 3072K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) CMCI: CPU1 has no CMCI support
> (XEN) CPU1: Thermal monitoring enabled (TM2)
> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> (XEN) microcode: collect_cpu_info for CPU1
> [   37.420030] A(XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
> CPI: Low-level r(XEN) microcode: CPU1 found a matching microcode update
> with version 0x60f (current=0x60c)
> esume complete
> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date = 2010-09-29

Whereas here an update is being done.

> [   37.420030] PM: Restoring platform NVS memory
> (XEN) microcode: collect_cpu_info for CPU0
> (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
> (XEN) microcode: collect_cpu_info for CPU1
> (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f

And in the end we see that both cores run on different revisions.

The mixup of Xen and Dom0 messages puzzles me too - in my
understanding, all APs should be brought back up before
domains get unpaused. Is that perhaps part of your problem
with pinned vCPU-s? Or is the mixup not indicative of things
actually happening in parallel?

Jan

> [   37.431008] Enabling non-boot CPUs ...
> [   37.434851] installing Xen timer for CPU 1
> [   37.439031] cpu 1 spinlock event irq 279
> [   37.327214] Disabled fast string operations
> [   37.458040] CPU1 is up
> [   37.465417] ACPI: Waking up from system sleep state S3

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-27  7:38                                                                                                                               ` Jan Beulich
@ 2012-09-27  7:46                                                                                                                                 ` Keir Fraser
  2012-09-27 12:12                                                                                                                                 ` Ben Guthro
  1 sibling, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-27  7:46 UTC (permalink / raw)
  To: Jan Beulich, Ben Guthro; +Cc: John Baboval, xen-devel

On 27/09/2012 08:38, "Jan Beulich" <JBeulich@suse.com> wrote:

>> [   37.420030] PM: Restoring platform NVS memory
>> (XEN) microcode: collect_cpu_info for CPU0
>> (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
>> (XEN) microcode: collect_cpu_info for CPU1
>> (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f
> 
> And in the end we see that both cores run on different revisions.
> 
> The mixup of Xen and Dom0 messages puzzles me too - in my
> understanding, all APs should be brought back up before
> domains get unpaused.

Ah, of course! This makes nonsense of my explanation of the observed crash
in _csched_cpu_pick -- dom0 cannot be running while APs are still being
brought back online, because domains are not unpaused until all CPUs are
back online.

So it's a mystery again. :)

 -- Keir

> Is that perhaps part of your problem
> with pinned vCPU-s? Or is the mixup not indicative of things
> actually happening in parallel?
> 
> Jan
> 
>> [   37.431008] Enabling non-boot CPUs ...
>> [   37.434851] installing Xen timer for CPU 1
>> [   37.439031] cpu 1 spinlock event irq 279
>> [   37.327214] Disabled fast string operations
>> [   37.458040] CPU1 is up
>> [   37.465417] ACPI: Waking up from system sleep state S3
> 
> 
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-27  7:38                                                                                                                               ` Jan Beulich
  2012-09-27  7:46                                                                                                                                 ` Keir Fraser
@ 2012-09-27 12:12                                                                                                                                 ` Ben Guthro
  2012-09-27 13:41                                                                                                                                   ` Jan Beulich
  2012-09-27 15:25                                                                                                                                   ` Jan Beulich
  1 sibling, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-09-27 12:12 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2125 bytes --]

On Thu, Sep 27, 2012 at 3:38 AM, Jan Beulich <JBeulich@suse.com> wrote:

>
>
> > [   37.420030] PM: Restoring platform NVS memory
> > (XEN) microcode: collect_cpu_info for CPU0
> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
> > (XEN) microcode: collect_cpu_info for CPU1
> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f
>
> And in the end we see that both cores run on different revisions.
>
>
very interesting. I had overlooked that.


> The mixup of Xen and Dom0 messages puzzles me too - in my
> understanding, all APs should be brought back up before
> domains get unpaused. Is that perhaps part of your problem
> with pinned vCPU-s? Or is the mixup not indicative of things
> actually happening in parallel?
>

It seems like it may be related, but I'm unsure how.
When I run without the dom0_vcpu_pin option, I don't see the
collect_cpu_info messages,
nor do I see the microcode revision printout for CPU0,
nor do I see the dom0 / Xen messages mixed up.

[   41.567458] ACPI: Preparing to enter system sleep state S3
[   42.020120] PM: Saving platform NVS memory
[   42.092643] Disabling non-boot CPUs ...
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
extended MCE MSR 0
(XEN) CMCI: CPU0 has no CMCI support
(XEN) CPU0: Thermal monitoring enabled (TM2)
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) Booting processor 1/1 eip 8a000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 3072K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CMCI: CPU1 has no CMCI support
(XEN) CPU1: Thermal monitoring enabled (TM2)
(XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
(XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
2010-09-29
[   42.200055] ACPI: Low-level resume complete
[   42.200055] PM: Restoring platform NVS memory
[   42.200055] Enabling non-boot CPUs ...


Any pointers are appreciated.

/btg

[-- Attachment #1.2: Type: text/html, Size: 3019 bytes --]


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-27 12:12                                                                                                                                 ` Ben Guthro
@ 2012-09-27 13:41                                                                                                                                   ` Jan Beulich
  2012-09-27 15:25                                                                                                                                   ` Jan Beulich
  1 sibling, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-27 13:41 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, John Baboval, xen-devel

>>> On 27.09.12 at 14:12, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Sep 27, 2012 at 3:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> > (XEN) microcode: collect_cpu_info for CPU0
>> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
>> > (XEN) microcode: collect_cpu_info for CPU1
>> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f
>>
>> And in the end we see that both cores run on different revisions.
>>
>>
> very interesting. I had overlooked that.
> 
> 
>> The mixup of Xen and Dom0 messages puzzles me too - in my
>> understanding, all APs should be brought back up before
>> domains get unpaused. Is that perhaps part of your problem
>> with pinned vCPU-s? Or is the mixup not indicative of things
>> actually happening in parallel?
>>
> 
> It seems like it may be related, but I'm unsure how.
> When I run without the dom0_vcpu_pin option, I don't see the
> collect_cpu_info messages,
> nor do I see the microcode revision printout for CPU0

That's more likely because of either running different
(unpatched) code, or at a lower log level, because ...

> nor do I see the dom0 / Xen messages mixed up.
> 
> [   41.567458] ACPI: Preparing to enter system sleep state S3
> [   42.020120] PM: Saving platform NVS memory
> [   42.092643] Disabling non-boot CPUs ...
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Entering ACPI S3 state.
> (XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1 extended MCE MSR 0
> (XEN) CMCI: CPU0 has no CMCI support
> (XEN) CPU0: Thermal monitoring enabled (TM2)
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) Booting processor 1/1 eip 8a000
> (XEN) Initializing CPU#1
> (XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
> (XEN) CPU: L2 cache: 3072K
> (XEN) CPU: Physical Processor ID: 0
> (XEN) CPU: Processor Core ID: 1
> (XEN) CMCI: CPU1 has no CMCI support
> (XEN) CPU1: Thermal monitoring enabled (TM2)
> (XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
> (XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date = 2010-09-29

... you can't get here without going through collect_cpu_info().

> [   42.200055] ACPI: Low-level resume complete
> [   42.200055] PM: Restoring platform NVS memory
> [   42.200055] Enabling non-boot CPUs ...
> 
> 
> Any pointers are appreciated.

Same for me (regarding the mixup of messages). Maybe widening
the console_{start,end}_sync() window from right after freezing
domains until right before thawing them could make clear whether
this is an artifact.

Jan

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-27 12:12                                                                                                                                 ` Ben Guthro
  2012-09-27 13:41                                                                                                                                   ` Jan Beulich
@ 2012-09-27 15:25                                                                                                                                   ` Jan Beulich
  2012-09-27 15:32                                                                                                                                     ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-27 15:25 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Keir Fraser, John Baboval, xen-devel

>>> On 27.09.12 at 14:12, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Sep 27, 2012 at 3:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> > (XEN) microcode: collect_cpu_info for CPU0
>> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
>> > (XEN) microcode: collect_cpu_info for CPU1
>> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f
>>
>> And in the end we see that both cores run on different revisions.
>>
>>
> very interesting. I had overlooked that.

Could you give the below patch a try?

Jan

x86/ucode: fix Intel case of resume handling on boot CPU

Checking the stored version doesn't tell us anything about the need to
apply the update (during resume, what is stored doesn't necessarily
match what is loaded).

Note that the check can be removed altogether because once switched to
use what was read from the CPU (uci->cpu_sig.rev, as used in the
subsequent pr_debug()), it would become redundant with the checks that
lead to microcode_update_match() returning the indication that an
update should be applied.

Note further that this was not an issue on APs since they start with
uci->mc.mc_intel being NULL.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/microcode_intel.c
+++ b/xen/arch/x86/microcode_intel.c
@@ -261,8 +261,6 @@ static int get_matching_microcode(const 
     }
     return 0;
  find:
-    if ( uci->mc.mc_intel && uci->mc.mc_intel->hdr.rev >= mc_header->rev )
-        return 0;
     pr_debug("microcode: CPU%d found a matching microcode update with"
              " version %#x (current=%#x)\n",
              cpu, mc_header->rev, uci->cpu_sig.rev);

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-09-27 15:25                                                                                                                                   ` Jan Beulich
@ 2012-09-27 15:32                                                                                                                                     ` Ben Guthro
  2012-09-27 15:59                                                                                                                                       ` [PATCH] x86/ucode: fix Intel case of resume handling on boot CPU Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-09-27 15:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Keir Fraser, John Baboval, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2865 bytes --]

ACK.

I now see microcode updates for both CPUs.


[   47.612719] Disabling non-boot CPUs ...
(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Breaking vcpu affinity for domain 0 vcpu 1
(XEN) Entering ACPI S3 state.
(XEN) mce_intel.c:1239: MCA Capability: BCAST 1 SER 0 CMCI 0 firstbank 1
extended MCE MSR 0
(XEN) CMCI: CPU0 has no CMCI support
(XEN) CPU0: Thermal monitoring enabled (TM2)
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) microcode: CPU0 updated from revision 0x60c to 0x60f, date =
2010-09-29
(XEN) Enabling non-boot CPUs  ...
(XEN) Booting processor 1/1 eip 8a000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 3072K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CMCI: CPU1 has no CMCI support
(XEN) CPU1: Thermal monitoring enabled (TM2)
(XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
(XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date =
2010-09-29
[   47.720054] ACPI: Low-level resume complete


On Thu, Sep 27, 2012 at 11:25 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 27.09.12 at 14:12, Ben Guthro <ben@guthro.net> wrote:
> > On Thu, Sep 27, 2012 at 3:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >> > (XEN) microcode: collect_cpu_info for CPU0
> >> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60c
> >> > (XEN) microcode: collect_cpu_info for CPU1
> >> > (XEN) microcode: collect_cpu_info : sig=0x10676, pf=0x80, rev=0x60f
> >>
> >> And in the end we see that both cores run on different revisions.
> >>
> >>
> > very interesting. I had overlooked that.
>
> Could you give the below patch a try?
>
> Jan
>
> x86/ucode: fix Intel case of resume handling on boot CPU
>
> Checking the stored version doesn't tell us anything about the need to
> apply the update (during resume, what is stored doesn't necessarily
> match what is loaded).
>
> Note that the check can be removed altogether because once switched to
> use what was read from the CPU (uci->cpu_sig.rev, as used in the
> subsequent pr_debug()), it would become redundant with the checks that
> lead to microcode_update_match() returning the indication that an
> update should be applied.
>
> Note further that this was not an issue on APs since they start with
> uci->mc.mc_intel being NULL.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/microcode_intel.c
> +++ b/xen/arch/x86/microcode_intel.c
> @@ -261,8 +261,6 @@ static int get_matching_microcode(const
>      }
>      return 0;
>   find:
> -    if ( uci->mc.mc_intel && uci->mc.mc_intel->hdr.rev >= mc_header->rev )
> -        return 0;
>      pr_debug("microcode: CPU%d found a matching microcode update with"
>               " version %#x (current=%#x)\n",
>               cpu, mc_header->rev, uci->cpu_sig.rev);
>
>
>
>

[-- Attachment #1.2: Type: text/html, Size: 3882 bytes --]


^ permalink raw reply	[flat|nested] 134+ messages in thread

* [PATCH] x86/ucode: fix Intel case of resume handling on boot CPU
  2012-09-27 15:32                                                                                                                                     ` Ben Guthro
@ 2012-09-27 15:59                                                                                                                                       ` Jan Beulich
  2012-09-27 16:06                                                                                                                                         ` Keir Fraser
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-09-27 15:59 UTC (permalink / raw)
  To: xen-devel; +Cc: Ben Guthro

[-- Attachment #1: Type: text/plain, Size: 1114 bytes --]

Checking the stored version doesn't tell us anything about the need to
apply the update (during resume, what is stored doesn't necessarily
match what is loaded).

Note that the check can be removed altogether because once switched to
use what was read from the CPU (uci->cpu_sig.rev, as used in the
subsequent pr_debug()), it would become redundant with the checks that
lead to microcode_update_match() returning the indication that an
update should be applied.

Note further that this was not an issue on APs since they start with
uci->mc.mc_intel being NULL.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Ben Guthro <ben@guthro.net>

--- a/xen/arch/x86/microcode_intel.c
+++ b/xen/arch/x86/microcode_intel.c
@@ -261,8 +261,6 @@ static int get_matching_microcode(const 
     }
     return 0;
  find:
-    if ( uci->mc.mc_intel && uci->mc.mc_intel->hdr.rev >= mc_header->rev )
-        return 0;
     pr_debug("microcode: CPU%d found a matching microcode update with"
              " version %#x (current=%#x)\n",
              cpu, mc_header->rev, uci->cpu_sig.rev);




[-- Attachment #2: x86-ucode-Intel-resume.patch --]
[-- Type: text/plain, Size: 1168 bytes --]

x86/ucode: fix Intel case of resume handling on boot CPU

Checking the stored version doesn't tell us anything about the need to
apply the update (during resume, what is stored doesn't necessarily
match what is loaded).

Note that the check can be removed altogether because once switched to
use what was read from the CPU (uci->cpu_sig.rev, as used in the
subsequent pr_debug()), it would become redundant with the checks that
lead to microcode_update_match() returning the indication that an
update should be applied.

Note further that this was not an issue on APs since they start with
uci->mc.mc_intel being NULL.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Ben Guthro <ben@guthro.net>

--- a/xen/arch/x86/microcode_intel.c
+++ b/xen/arch/x86/microcode_intel.c
@@ -261,8 +261,6 @@ static int get_matching_microcode(const 
     }
     return 0;
  find:
-    if ( uci->mc.mc_intel && uci->mc.mc_intel->hdr.rev >= mc_header->rev )
-        return 0;
     pr_debug("microcode: CPU%d found a matching microcode update with"
              " version %#x (current=%#x)\n",
              cpu, mc_header->rev, uci->cpu_sig.rev);


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [PATCH] x86/ucode: fix Intel case of resume handling on boot CPU
  2012-09-27 15:59                                                                                                                                       ` [PATCH] x86/ucode: fix Intel case of resume handling on boot CPU Jan Beulich
@ 2012-09-27 16:06                                                                                                                                         ` Keir Fraser
  0 siblings, 0 replies; 134+ messages in thread
From: Keir Fraser @ 2012-09-27 16:06 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Ben Guthro

On 27/09/2012 16:59, "Jan Beulich" <JBeulich@suse.com> wrote:

> Checking the stored version doesn't tell us anything about the need to
> apply the update (during resume, what is stored doesn't necessarily
> match what is loaded).
> 
> Note that the check can be removed altogether because once switched to
> use what was read from the CPU (uci->cpu_sig.rev, as used in the
> subsequent pr_debug()), it would become redundant with the checks that
> lead to microcode_update_match() returning the indication that an
> update should be applied.
> 
> Note further that this was not an issue on APs since they start with
> uci->mc.mc_intel being NULL.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Ben Guthro <ben@guthro.net>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/microcode_intel.c
> +++ b/xen/arch/x86/microcode_intel.c
> @@ -261,8 +261,6 @@ static int get_matching_microcode(const
>      }
>      return 0;
>   find:
> -    if ( uci->mc.mc_intel && uci->mc.mc_intel->hdr.rev >= mc_header->rev )
> -        return 0;
>      pr_debug("microcode: CPU%d found a matching microcode update with"
>               " version %#x (current=%#x)\n",
>               cpu, mc_header->rev, uci->cpu_sig.rev);
> 
> 
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
       [not found]           ` <CAOvdn6U1touhawCb2GvgVQZqxhWn9CRw6-wkqdxk=uOTq015OA@mail.gmail.com>
@ 2012-09-06  9:24             ` Jan Beulich
  0 siblings, 0 replies; 134+ messages in thread
From: Jan Beulich @ 2012-09-06  9:24 UTC (permalink / raw)
  To: Ben Guthro; +Cc: Konrad Rzeszutek Wilk, john.baboval, Thomas Goetz, xen-devel

>>> On 04.09.12 at 20:34, Ben Guthro <ben@guthro.net> wrote:
> On Fri, Aug 24, 2012 at 6:16 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 24.08.12 at 17:10, Ben Guthro <ben@guthro.net> wrote:
>>> The attached patch is essentially the change you suggested, plus a
>>> check for NULL in irq_complete_move()
>>>
>>> This patch seems to fix some of the issues I was seeing, but not all.
>>> MSI's are now delivered, after a handful of suspend / resumes, which
>>> is the issue I was setting out to solve here.
>>
>> But I'm afraid this is just masking some other problem (see my
>> response to Andrew's mail that I just sent).
>>
>> Further more, the NULL check here seems pretty odd - I'd be very
>> curious what code path could get us there with the IRQ regs
>> pointer being NULL. It would likely be that code path that needs
>> fixing, not irq_complete_move(). Could you add a call to
>> dump_execution_state() to the early return path that you added?
> 
> Apologies that I'm just getting back to this.
> 
> The call trace in this early return patch looks like this:
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82c480167d1a>] irq_complete_move+0x3e/0xd9
> (XEN)    [<ffff82c48014424a>] dma_msi_mask+0x1d/0x49
> (XEN)    [<ffff82c480169cc2>] fixup_irqs+0x19c/0x300
> (XEN)    [<ffff82c48017e762>] __cpu_disable+0x337/0x37e
> (XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
> (XEN)    [<ffff82c480125fe6>] stopmachine_action+0x8a/0xb3
> (XEN)    [<ffff82c48012756e>] do_tasklet_work+0x8d/0xc7
> (XEN)    [<ffff82c4801278d9>] do_tasklet+0x6b/0x9b
> (XEN)    [<ffff82c480158cbd>] idle_loop+0x67/0x71
> 
> 
> This seems to get printed 4 times - twice on CPU1, and CPU2

This one appears to be relatively clear, and I'll add a tentative
fix to the next version of the debugging patch that I'm in the
process of preparing: irq_complete_move() is supposed to be
getting called from the hw_irq_controllers' .ack methods, yet
VT-d currently uses the same handler for .ack and .disable
(and calling irq_complete_move() in the context of .disable
is certainly wrong) - this appears to have been the case
forever, but a flaw like this may of course get uncovered with
completely unrelated changes.

Jan

(also restored the Cc list)

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: Xen4.2 S3 regression?
  2012-08-24 15:10       ` Ben Guthro
@ 2012-08-24 22:16         ` Jan Beulich
       [not found]           ` <CAOvdn6U1touhawCb2GvgVQZqxhWn9CRw6-wkqdxk=uOTq015OA@mail.gmail.com>
  0 siblings, 1 reply; 134+ messages in thread
From: Jan Beulich @ 2012-08-24 22:16 UTC (permalink / raw)
  To: Andrew Cooper, Ben Guthro
  Cc: Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz, xen-devel

>>> On 24.08.12 at 17:10, Ben Guthro <ben@guthro.net> wrote:
> The attached patch is essentially the change you suggested, plus a
> check for NULL in irq_complete_move()
> 
> This patch seems to fix some of the issues I was seeing, but not all.
> MSI's are now delivered, after a handful of suspend / resumes, which
> is the issue I was setting out to solve here.

But I'm afraid this is just masking some other problem (see my
response to Andrew's mail that I just sent).

Furthermore, the NULL check here seems pretty odd - I'd be very
curious what code path could get us there with the IRQ regs
pointer being NULL. It would likely be that code path that needs
fixing, not irq_complete_move(). Could you add a call to
dump_execution_state() to the early return path that you added?

Jan


* Re: Xen4.2 S3 regression?
  2012-08-23 20:38     ` Ben Guthro
@ 2012-08-24 15:10       ` Ben Guthro
  2012-08-24 22:16         ` Jan Beulich
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-24 15:10 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Jan Beulich, Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz,
	xen-devel

[-- Attachment #1: Type: text/plain, Size: 1451 bytes --]

The attached patch is essentially the change you suggested, plus a
check for NULL in irq_complete_move()

This patch seems to fix some of the issues I was seeing, but not all.
MSIs are now delivered after a handful of suspend/resume cycles, which
is the issue I set out to solve here.

However, once I get out of my debugging setup and run without a serial
console, things start to go bad.
I am unable to get the machine to resume from S3. The fan comes on,
but I never see any graphics or HDD activity. Of course, without a
console, I have no clues as to what is wrong.

This is similar to the other S3 problem I have observed on laptops
that never get out of the "pulsing power button" state.

I suspect they are related, but with the behavior changing so
drastically when the serial connection is enabled, it is rather
difficult to narrow down where the problem might be.



On Thu, Aug 23, 2012 at 4:38 PM, Ben Guthro <ben@guthro.net> wrote:
> On Thu, Aug 23, 2012 at 3:38 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>>
>> On 23/08/12 20:06, Ben Guthro wrote:
>>> No such luck.
>>
>> Huh.  It was a shot in the dark, but I was really not expecting this.
>
> Thanks for taking a look.
>
> I tested the equivalent change against the offending changeset, and it
> did, in fact solve the S3 issue back then (but not against the 4.1.3
> tag)
>
> I guess I'll do some more bisecting, with this change in place.
>
> Ben

[-- Attachment #2: s3-debug --]
[-- Type: application/octet-stream, Size: 935 bytes --]

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 78a02e3..394dca4 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -701,11 +701,16 @@ static void send_cleanup_vector(struct irq_desc *desc)
 void irq_complete_move(struct irq_desc *desc)
 {
     unsigned vector, me;
+    struct cpu_user_regs *r;
 
     if (likely(!desc->arch.move_in_progress))
         return;
 
-    vector = get_irq_regs()->entry_vector;
+    r = get_irq_regs();
+    if (!r)
+	return;
+
+    vector = r->entry_vector;
     me = smp_processor_id();
 
     if ( vector == desc->arch.vector &&
@@ -2151,7 +2156,7 @@ void fixup_irqs(void)
         spin_lock(&desc->lock);
 
         cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, &cpu_online_map) )
+        if ( !desc->action || cpumask_equal(&affinity, &cpu_online_map) )
         {
             spin_unlock(&desc->lock);
             continue;

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Xen4.2 S3 regression?
  2012-08-23 19:38   ` Andrew Cooper
@ 2012-08-23 20:38     ` Ben Guthro
  2012-08-24 15:10       ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Ben Guthro @ 2012-08-23 20:38 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Jan Beulich, Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz,
	xen-devel

On Thu, Aug 23, 2012 at 3:38 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> On 23/08/12 20:06, Ben Guthro wrote:
>> No such luck.
>
> Huh.  It was a shot in the dark, but I was really not expecting this.

Thanks for taking a look.

I tested the equivalent change against the offending changeset, and it
did, in fact solve the S3 issue back then (but not against the 4.1.3
tag)

I guess I'll do some more bisecting, with this change in place.

Ben


* Re: Xen4.2 S3 regression?
  2012-08-23 19:06 ` Ben Guthro
  2012-08-23 19:26   ` Ben Guthro
@ 2012-08-23 19:38   ` Andrew Cooper
  2012-08-23 20:38     ` Ben Guthro
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Cooper @ 2012-08-23 19:38 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Jan Beulich, Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz,
	xen-devel


On 23/08/12 20:06, Ben Guthro wrote:
> No such luck.

Huh.  It was a shot in the dark, but I was really not expecting this.

>
> Panic below:
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) ----[ Xen-4.2.0-rc3-pre  x86_64  debug=y  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff830148a81880   rcx: 0000000000000001
> (XEN) rdx: ffff82c480301e48   rsi: 0000000000000028   rdi: ffff830148a81880
> (XEN) rbp: ffff830148a77d80   rsp: ffff830148a77d50   r8:  0000000000000004
> (XEN) r9:  ffff82c3fffff000   r10: ffff82c4803027c0   r11: 0000000000000246
> (XEN) r12: 0000000000000018   r13: ffff830148a818a4   r14: 0000000000000000
> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000001026b0
> (XEN) cr3: 00000000aa2c5000   cr2: 000000000000007c
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff830148a77d50:
> (XEN)    0000000000000086 ffff830148a809a4 0000000000000000 0000000000000001
> (XEN)    ffff830148a77d80 ffff830148ae2620 ffff830148a77da0 ffff82c480144013
> (XEN)    ffff830148a81880 0000000000000018 ffff830148a77e10 ffff82c4801696b6
> (XEN)    0000000000000086 00000001030d8ac4 0000000000000001 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000202 0000000000000010
> (XEN)    ffff82c480302938 0000000000000001 0000000000000001 ffff82c480302938
> (XEN)    ffff830148a77e70 ffff82c48017e132 0000001048a77e70 ffff82c480302930
> (XEN)    ffff82c480302938 0000000000000001 000000010115fa86 0000000000000003
> (XEN)    ffff830148a77f18 0000000000000001 ffffffffffffffff ffff830148ac3080
> (XEN)    ffff830148a77e80 ffff82c4801013e3 ffff830148a77ea0 ffff82c480125fb6
> (XEN)    ffff830148ac30c0 ffff830148ac30f0 ffff830148a77ec0 ffff82c48012753e
> (XEN)    ffff82c4801256aa ffff830148ac3110 ffff830148a77ef0 ffff82c4801278a9
> (XEN)    00000000ffffffff ffff830148a77f18 ffff830148a77f18 00000008030d8ac4
> (XEN)    ffff830148a77f10 ffff82c48015888d ffff8300aa303000 ffff8300aa303000
> (XEN)    ffff830148a77e10 0000000000000000 0000000000000000 0000000000000000
> (XEN)    ffffffff81aafda0 ffff88003976df10 ffff88003976dfd8 0000000000000246
> (XEN)    00000000deadbeef 0000000000000000 aaaaaaaaaaaaaaaa 0000000000000000
> (XEN)    ffffffff8100130a 00000000deadbeef 00000000deadbeef 00000000deadbeef
> (XEN)    0000010000000000 ffffffff8100130a 000000000000e033 0000000000000246
> (XEN)    ffff88003976def8 000000000000e02b d0354dcf5bcd9824 9ea5d0ef618deca5
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN)    [<ffff82c480144013>] dma_msi_mask+0x1d/0x49
> (XEN)    [<ffff82c4801696b6>] fixup_irqs+0x19b/0x2ff
> (XEN)    [<ffff82c48017e132>] __cpu_disable+0x337/0x37e
> (XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
> (XEN)    [<ffff82c480125fb6>] stopmachine_action+0x8a/0xb3
> (XEN)    [<ffff82c48012753e>] do_tasklet_work+0x8d/0xc7
> (XEN)    [<ffff82c4801278a9>] do_tasklet+0x6b/0x9b
> (XEN)    [<ffff82c48015888d>] idle_loop+0x67/0x71
> (XEN)
> (XEN) Pagetable walk from 000000000000007c:
> (XEN)  L4[0x000] = 0000000148adf063 ffffffffffffffff
> (XEN)  L3[0x000] = 0000000148ade063 ffffffffffffffff
> (XEN)  L2[0x000] = 0000000148add063 ffffffffffffffff
> (XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 000000000000007c
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...

Looks like some pointer has turned to NULL, although I can't identify
exactly which.

Either way, I would not pay it too much heed.

~Andrew

>
>
> On Thu, Aug 23, 2012 at 2:54 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 23/08/12 19:03, Ben Guthro wrote:
>>
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic with respect to IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
>>
>> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
>> was changed.
>>
>> Jan: The commit message says "simplify operations [in] a few cases".
>> Was the change in fixup_irqs() deliberate?
>>
>> ~Andrew
>>
>>
>> Ben: Could you test the attached patch?  It is for unstable and undoes the
>> logical change to fixup_irqs()
>>
>> ~Andrew
>>
>>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


* Re: Xen4.2 S3 regression?
  2012-08-23 19:06 ` Ben Guthro
@ 2012-08-23 19:26   ` Ben Guthro
  2012-08-23 19:38   ` Andrew Cooper
  1 sibling, 0 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-23 19:26 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Jan Beulich, Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz,
	xen-devel

Interestingly enough, just updating to the parent of this changeset,
without the hack mentioned previously, seems to be enough to
suspend/resume multiple times on this machine.



On Thu, Aug 23, 2012 at 3:06 PM, Ben Guthro <ben@guthro.net> wrote:
> No such luck.
>
> Panic below:
>
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) ----[ Xen-4.2.0-rc3-pre  x86_64  debug=y  Tainted:    C ]----
> (XEN) CPU:    1
> (XEN) RIP:    e008:[<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff830148a81880   rcx: 0000000000000001
> (XEN) rdx: ffff82c480301e48   rsi: 0000000000000028   rdi: ffff830148a81880
> (XEN) rbp: ffff830148a77d80   rsp: ffff830148a77d50   r8:  0000000000000004
> (XEN) r9:  ffff82c3fffff000   r10: ffff82c4803027c0   r11: 0000000000000246
> (XEN) r12: 0000000000000018   r13: ffff830148a818a4   r14: 0000000000000000
> (XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000001026b0
> (XEN) cr3: 00000000aa2c5000   cr2: 000000000000007c
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff830148a77d50:
> (XEN)    0000000000000086 ffff830148a809a4 0000000000000000 0000000000000001
> (XEN)    ffff830148a77d80 ffff830148ae2620 ffff830148a77da0 ffff82c480144013
> (XEN)    ffff830148a81880 0000000000000018 ffff830148a77e10 ffff82c4801696b6
> (XEN)    0000000000000086 00000001030d8ac4 0000000000000001 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000202 0000000000000010
> (XEN)    ffff82c480302938 0000000000000001 0000000000000001 ffff82c480302938
> (XEN)    ffff830148a77e70 ffff82c48017e132 0000001048a77e70 ffff82c480302930
> (XEN)    ffff82c480302938 0000000000000001 000000010115fa86 0000000000000003
> (XEN)    ffff830148a77f18 0000000000000001 ffffffffffffffff ffff830148ac3080
> (XEN)    ffff830148a77e80 ffff82c4801013e3 ffff830148a77ea0 ffff82c480125fb6
> (XEN)    ffff830148ac30c0 ffff830148ac30f0 ffff830148a77ec0 ffff82c48012753e
> (XEN)    ffff82c4801256aa ffff830148ac3110 ffff830148a77ef0 ffff82c4801278a9
> (XEN)    00000000ffffffff ffff830148a77f18 ffff830148a77f18 00000008030d8ac4
> (XEN)    ffff830148a77f10 ffff82c48015888d ffff8300aa303000 ffff8300aa303000
> (XEN)    ffff830148a77e10 0000000000000000 0000000000000000 0000000000000000
> (XEN)    ffffffff81aafda0 ffff88003976df10 ffff88003976dfd8 0000000000000246
> (XEN)    00000000deadbeef 0000000000000000 aaaaaaaaaaaaaaaa 0000000000000000
> (XEN)    ffffffff8100130a 00000000deadbeef 00000000deadbeef 00000000deadbeef
> (XEN)    0000010000000000 ffffffff8100130a 000000000000e033 0000000000000246
> (XEN)    ffff88003976def8 000000000000e02b d0354dcf5bcd9824 9ea5d0ef618deca5
> (XEN) Xen call trace:
> (XEN)    [<ffff82c48016773e>] irq_complete_move+0x42/0xb4
> (XEN)    [<ffff82c480144013>] dma_msi_mask+0x1d/0x49
> (XEN)    [<ffff82c4801696b6>] fixup_irqs+0x19b/0x2ff
> (XEN)    [<ffff82c48017e132>] __cpu_disable+0x337/0x37e
> (XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
> (XEN)    [<ffff82c480125fb6>] stopmachine_action+0x8a/0xb3
> (XEN)    [<ffff82c48012753e>] do_tasklet_work+0x8d/0xc7
> (XEN)    [<ffff82c4801278a9>] do_tasklet+0x6b/0x9b
> (XEN)    [<ffff82c48015888d>] idle_loop+0x67/0x71
> (XEN)
> (XEN) Pagetable walk from 000000000000007c:
> (XEN)  L4[0x000] = 0000000148adf063 ffffffffffffffff
> (XEN)  L3[0x000] = 0000000148ade063 ffffffffffffffff
> (XEN)  L2[0x000] = 0000000148add063 ffffffffffffffff
> (XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) FATAL PAGE FAULT
> (XEN) [error_code=0000]
> (XEN) Faulting linear address: 000000000000007c
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
>
>
> On Thu, Aug 23, 2012 at 2:54 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 23/08/12 19:03, Ben Guthro wrote:
>>
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic with respect to IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
>>
>> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
>> was changed.
>>
>> Jan: The commit message says "simplify operations [in] a few cases".
>> Was the change in fixup_irqs() deliberate?
>>
>> ~Andrew
>>
>>
>> Ben: Could you test the attached patch?  It is for unstable and undoes the
>> logical change to fixup_irqs()
>>
>> ~Andrew
>>
>>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>>
>> --
>> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
>> T: +44 (0)1223 225 900, http://www.citrix.com


* Re: Xen4.2 S3 regression?
  2012-08-23 18:54 Andrew Cooper
@ 2012-08-23 19:06 ` Ben Guthro
  2012-08-23 19:26   ` Ben Guthro
  2012-08-23 19:38   ` Andrew Cooper
  0 siblings, 2 replies; 134+ messages in thread
From: Ben Guthro @ 2012-08-23 19:06 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Jan Beulich, Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz,
	xen-devel

No such luck.

Panic below:

(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) ----[ Xen-4.2.0-rc3-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c48016773e>] irq_complete_move+0x42/0xb4
(XEN) RFLAGS: 0000000000010086   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff830148a81880   rcx: 0000000000000001
(XEN) rdx: ffff82c480301e48   rsi: 0000000000000028   rdi: ffff830148a81880
(XEN) rbp: ffff830148a77d80   rsp: ffff830148a77d50   r8:  0000000000000004
(XEN) r9:  ffff82c3fffff000   r10: ffff82c4803027c0   r11: 0000000000000246
(XEN) r12: 0000000000000018   r13: ffff830148a818a4   r14: 0000000000000000
(XEN) r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000001026b0
(XEN) cr3: 00000000aa2c5000   cr2: 000000000000007c
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff830148a77d50:
(XEN)    0000000000000086 ffff830148a809a4 0000000000000000 0000000000000001
(XEN)    ffff830148a77d80 ffff830148ae2620 ffff830148a77da0 ffff82c480144013
(XEN)    ffff830148a81880 0000000000000018 ffff830148a77e10 ffff82c4801696b6
(XEN)    0000000000000086 00000001030d8ac4 0000000000000001 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000202 0000000000000010
(XEN)    ffff82c480302938 0000000000000001 0000000000000001 ffff82c480302938
(XEN)    ffff830148a77e70 ffff82c48017e132 0000001048a77e70 ffff82c480302930
(XEN)    ffff82c480302938 0000000000000001 000000010115fa86 0000000000000003
(XEN)    ffff830148a77f18 0000000000000001 ffffffffffffffff ffff830148ac3080
(XEN)    ffff830148a77e80 ffff82c4801013e3 ffff830148a77ea0 ffff82c480125fb6
(XEN)    ffff830148ac30c0 ffff830148ac30f0 ffff830148a77ec0 ffff82c48012753e
(XEN)    ffff82c4801256aa ffff830148ac3110 ffff830148a77ef0 ffff82c4801278a9
(XEN)    00000000ffffffff ffff830148a77f18 ffff830148a77f18 00000008030d8ac4
(XEN)    ffff830148a77f10 ffff82c48015888d ffff8300aa303000 ffff8300aa303000
(XEN)    ffff830148a77e10 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffff81aafda0 ffff88003976df10 ffff88003976dfd8 0000000000000246
(XEN)    00000000deadbeef 0000000000000000 aaaaaaaaaaaaaaaa 0000000000000000
(XEN)    ffffffff8100130a 00000000deadbeef 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff8100130a 000000000000e033 0000000000000246
(XEN)    ffff88003976def8 000000000000e02b d0354dcf5bcd9824 9ea5d0ef618deca5
(XEN) Xen call trace:
(XEN)    [<ffff82c48016773e>] irq_complete_move+0x42/0xb4
(XEN)    [<ffff82c480144013>] dma_msi_mask+0x1d/0x49
(XEN)    [<ffff82c4801696b6>] fixup_irqs+0x19b/0x2ff
(XEN)    [<ffff82c48017e132>] __cpu_disable+0x337/0x37e
(XEN)    [<ffff82c4801013e3>] take_cpu_down+0x43/0x4a
(XEN)    [<ffff82c480125fb6>] stopmachine_action+0x8a/0xb3
(XEN)    [<ffff82c48012753e>] do_tasklet_work+0x8d/0xc7
(XEN)    [<ffff82c4801278a9>] do_tasklet+0x6b/0x9b
(XEN)    [<ffff82c48015888d>] idle_loop+0x67/0x71
(XEN)
(XEN) Pagetable walk from 000000000000007c:
(XEN)  L4[0x000] = 0000000148adf063 ffffffffffffffff
(XEN)  L3[0x000] = 0000000148ade063 ffffffffffffffff
(XEN)  L2[0x000] = 0000000148add063 ffffffffffffffff
(XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 000000000000007c
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


On Thu, Aug 23, 2012 at 2:54 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 23/08/12 19:03, Ben Guthro wrote:
>
> I did some more bisecting here, and I came up with another changeset
> that seems to be problematic with respect to IRQs
>
> After bisecting the problem discussed earlier in this thread to the
> changeset below,
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>
>
> I worked past that issue by the following hack:
>
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>  void evtchn_move_pirqs(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    const cpumask_t *mask = cpumask_of(v->processor);
> +    //const cpumask_t *mask = cpumask_of(v->processor);
>      unsigned int port;
>      struct evtchn *chn;
>
> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>      {
>          chn = evtchn_from_port(d, port);
> +#if 0
>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
> +#endif
>      }
>      spin_unlock(&d->event_lock);
>  }
>
>
> This seemed to work for this rather old changeset, but it was not
> sufficient to fix it against the 4.1, or unstable trees.
>
> I further bisected, in combination with this hack, and found the
> following changeset to also be problematic:
>
> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>
>
> That is, before this change I could resume reliably (with the hack
> above) - and after I could not.
> This was surprising to me, as this change also looks rather innocuous.
>
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
>
> Jan: The commit message says "simplify operations [in] a few cases".
> Was the change in fixup_irqs() deliberate?
>
> ~Andrew
>
>
> Ben: Could you test the attached patch?  It is for unstable and undoes the
> logical change to fixup_irqs()
>
> ~Andrew
>
>
> Naturally, backing out this change seems to be non-trivial against the
> tip, since so much around it has changed.
>
> --
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
> --
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com


* Re: Xen4.2 S3 regression?
@ 2012-08-23 18:54 Andrew Cooper
  2012-08-23 19:06 ` Ben Guthro
  0 siblings, 1 reply; 134+ messages in thread
From: Andrew Cooper @ 2012-08-23 18:54 UTC (permalink / raw)
  To: Ben Guthro
  Cc: Jan Beulich, Konrad Rzeszutek Wilk, John Baboval, Thomas Goetz,
	xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2410 bytes --]

> On 23/08/12 19:03, Ben Guthro wrote:
>> I did some more bisecting here, and I came up with another changeset
>> that seems to be problematic with respect to IRQs
>>
>> After bisecting the problem discussed earlier in this thread to the
>> changeset below,
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/0695a5cdcb42
>>
>>
>> I worked past that issue by the following hack:
>>
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -1103,7 +1103,7 @@ void evtchn_destroy_final(struct domain *d)
>>  void evtchn_move_pirqs(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> -    const cpumask_t *mask = cpumask_of(v->processor);
>> +    //const cpumask_t *mask = cpumask_of(v->processor);
>>      unsigned int port;
>>      struct evtchn *chn;
>>
>> @@ -1111,7 +1111,9 @@ void evtchn_move_pirqs(struct vcpu *v)
>>      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
>>      {
>>          chn = evtchn_from_port(d, port);
>> +#if 0
>>          pirq_set_affinity(d, chn->u.pirq.irq, mask);
>> +#endif
>>      }
>>      spin_unlock(&d->event_lock);
>>  }
>>
>>
>> This seemed to work for this rather old changeset, but it was not
>> sufficient to fix it against the 4.1, or unstable trees.
>>
>> I further bisected, in combination with this hack, and found the
>> following changeset to also be problematic:
>>
>> http://xenbits.xen.org/hg/xen-unstable.hg/rev/c2cb776a5365
>>
>>
>> That is, before this change I could resume reliably (with the hack
>> above) - and after I could not.
>> This was surprising to me, as this change also looks rather innocuous.
> And by the looks of that changeset, the logic in fixup_irqs() in irq.c
> was changed.
>
> Jan: The commit message says "simplify operations [in] a few cases". 
> Was the change in fixup_irqs() deliberate?
>
> ~Andrew

Ben: Could you test the attached patch?  It is for unstable and undoes
the logical change to fixup_irqs()

~Andrew

>
>> Naturally, backing out this change seems to be non-trivial against the
>> tip, since so much around it has changed.
>>
> --
> Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
> T: +44 (0)1223 225 900, http://www.citrix.com
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com


[-- Attachment #1.2: Type: text/html, Size: 3829 bytes --]

[-- Attachment #2: s3-revert-fixup_irqs.patch --]
[-- Type: text/x-patch, Size: 515 bytes --]

# HG changeset patch
# Parent b02ac80ff6899e98b4089842843104fd8572a7cd

diff -r b02ac80ff689 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2151,7 +2151,7 @@ void fixup_irqs(void)
         spin_lock(&desc->lock);
 
         cpumask_copy(&affinity, desc->affinity);
-        if ( !desc->action || cpumask_subset(&affinity, &cpu_online_map) )
+        if ( !desc->action || cpumask_equal(&affinity, &cpu_online_map) )
         {
             spin_unlock(&desc->lock);
             continue;



end of thread, other threads:[~2012-09-27 16:06 UTC | newest]

Thread overview: 134+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-08-07 15:04 Xen4.2 S3 regression? Ben Guthro
2012-08-07 16:21 ` Ben Guthro
2012-08-07 16:33   ` Konrad Rzeszutek Wilk
2012-08-07 16:48     ` Ben Guthro
2012-08-07 20:14       ` Ben Guthro
2012-08-08  8:35         ` Jan Beulich
2012-08-08 10:39           ` Ben Guthro
2012-08-09 15:21             ` Ben Guthro
2012-08-09 15:37               ` Jan Beulich
2012-08-09 15:46                 ` Ben Guthro
2012-08-09 15:51                   ` Jan Beulich
2012-08-09 16:09                     ` Ben Guthro
2012-08-10  6:50                       ` Jan Beulich
2012-08-10 19:15                         ` Ben Guthro
2012-08-14 17:31                           ` Ben Guthro
2012-08-15  8:11                             ` Jan Beulich
2012-08-15 10:32                               ` Ben Guthro
2012-08-15 12:32                                 ` Ben Guthro
2012-08-15 12:58                                   ` Jan Beulich
2012-08-15 13:11                                     ` Ben Guthro
2012-08-15 14:50                                       ` Jan Beulich
2012-08-15 14:58                                         ` Ben Guthro
2012-08-15 15:00                                           ` Andrew Cooper
2012-08-15 15:06                                           ` Jan Beulich
2012-08-15 15:16                                             ` Ben Guthro
2012-08-16  8:31                                       ` Jan Beulich
2012-08-16 10:37                                         ` Ben Guthro
2012-08-16 11:07                                           ` Jan Beulich
2012-08-16 11:56                                             ` Ben Guthro
2012-08-17 10:22                                               ` Ben Guthro
2012-08-17 10:40                                                 ` Jan Beulich
2012-08-23 18:03                                                   ` Ben Guthro
2012-08-23 18:37                                                     ` Andrew Cooper
2012-08-24 22:11                                                       ` Jan Beulich
2012-08-24 22:55                                                     ` Jan Beulich
2012-08-25  0:48                                                       ` Ben Guthro
2012-09-03  9:31                                               ` Jan Beulich
2012-09-04 12:27                                                 ` Ben Guthro
2012-09-04 12:49                                                   ` Ben Guthro
2012-09-04 14:26                                                     ` Jan Beulich
2012-09-04 14:28                                                       ` Ben Guthro
2012-09-04 14:36                                                         ` Konrad Rzeszutek Wilk
2012-09-04 15:02                                                         ` Jan Beulich
2012-09-06 10:22                                                   ` Jan Beulich
2012-09-06 11:48                                                     ` Ben Guthro
2012-09-06 11:51                                                       ` Ben Guthro
2012-09-06 13:05                                                       ` Konrad Rzeszutek Wilk
2012-09-06 13:27                                                         ` Ben Guthro
2012-09-06 13:36                                                           ` Ben Guthro
2012-09-06 16:42                                                       ` Ben Guthro
2012-09-07  8:38                                                         ` Jan Beulich
2012-09-07 10:37                                                           ` Ben Guthro
2012-09-07 11:15                                                             ` Jan Beulich
2012-09-07 11:51                                                               ` Ben Guthro
2012-09-07 12:18                                                                 ` Jan Beulich
2012-09-07 16:06                                                                   ` Ben Guthro
2012-09-19 21:07                                                                     ` Ben Guthro
2012-09-20  6:13                                                                       ` Keir Fraser
2012-09-20  6:24                                                                         ` Keir Fraser
2012-09-20  8:03                                                                         ` Jan Beulich
2012-09-20  8:14                                                                           ` Keir Fraser
2012-09-20 12:56                                                                           ` Ben Guthro
2012-09-20 13:07                                                                             ` Keir Fraser
2012-09-20 20:30                                                                             ` Ben Guthro
2012-09-21  6:34                                                                               ` Keir Fraser
2012-09-21  6:47                                                                               ` Jan Beulich
2012-09-21 18:20                                                                                 ` Ben Guthro
2012-09-21 18:42                                                                                   ` Keir Fraser
2012-09-24 11:22                                                                                     ` Jan Beulich
2012-09-24 11:25                                                                                       ` Ben Guthro
2012-09-24 11:45                                                                                         ` Jan Beulich
2012-09-24 11:54                                                                                           ` Ben Guthro
2012-09-24 12:05                                                                                             ` Jan Beulich
2012-09-24 12:24                                                                                               ` Ben Guthro
2012-09-24 12:32                                                                                                 ` Jan Beulich
     [not found]                                                                                                   ` <CAOvdn6UMHmPWqedYE9GQQMDaM4oiHLDSn9ZzSgJjGf89g1DgTw@mail.gmail.com>
     [not found]                                                                                                     ` <50607D70020000780009D5C3@nat28.tlf.novell.com>
     [not found]                                                                                                       ` <CAOvdn6XL9ebp2oUV0XEXk_WdU3-=YAj+xfz6AMLDBpVThH3Xvw@mail.gmail.com>
2012-09-24 14:10                                                                                                         ` Jan Beulich
2012-09-24 14:16                                                                                                           ` Ben Guthro
2012-09-24 14:28                                                                                                             ` Jan Beulich
2012-09-24 19:02                                                                                                               ` Ben Guthro
2012-09-24 20:30                                                                                                                 ` Keir Fraser
2012-09-24 20:46                                                                                                                   ` Ben Guthro
2012-09-24 21:12                                                                                                                     ` Ben Guthro
2012-09-25  7:00                                                                                                                       ` Jan Beulich
2012-09-25 11:56                                                                                                                         ` Ben Guthro
2012-09-25 14:22                                                                                                                           ` Ben Guthro
2012-09-25 14:53                                                                                                                             ` Keir Fraser
2012-09-25 15:10                                                                                                                             ` Jan Beulich
2012-09-25 15:45                                                                                                                               ` Ben Guthro
2012-09-25 15:52                                                                                                                                 ` Keir Fraser
2012-09-26 11:49                                                                                                                                 ` Jan Beulich
2012-09-26 10:43                                                                                                                           ` Jan Beulich
2012-09-26 10:47                                                                                                                             ` Ben Guthro
2012-09-26 18:21                                                                                                                             ` Ben Guthro
2012-09-27  7:38                                                                                                                               ` Jan Beulich
2012-09-27  7:46                                                                                                                                 ` Keir Fraser
2012-09-27 12:12                                                                                                                                 ` Ben Guthro
2012-09-27 13:41                                                                                                                                   ` Jan Beulich
2012-09-27 15:25                                                                                                                                   ` Jan Beulich
2012-09-27 15:32                                                                                                                                     ` Ben Guthro
2012-09-27 15:59                                                                                                                                       ` [PATCH] x86/ucode: fix Intel case of resume handling on boot CPU Jan Beulich
2012-09-27 16:06                                                                                                                                         ` Keir Fraser
2012-09-24 14:32                                                                                                             ` Xen4.2 S3 regression? Keir Fraser
2012-09-24 12:22                                                                                             ` Pasi Kärkkäinen
2012-09-24 12:27                                                                                               ` Ben Guthro
2012-09-24 12:37                                                                                                 ` Javier Marcet
2012-09-24 14:04                                                                                                   ` Konrad Rzeszutek Wilk
2012-09-24 15:08                                                                                                     ` Javier Marcet
2012-09-24 21:36                                                                                                     ` Javier Marcet
2012-09-25 14:06                                                                                                       ` Konrad Rzeszutek Wilk
2012-09-25 14:47                                                                                                         ` Javier Marcet
2012-09-25 15:21                                                                                                           ` Jan Beulich
2012-09-25 15:23                                                                                                             ` Javier Marcet
2012-09-25 19:55                                                                                                             ` Javier Marcet
2012-09-25 19:57                                                                                                               ` Ben Guthro
2012-09-25 20:08                                                                                                                 ` Javier Marcet
2012-09-26  7:17                                                                                                               ` Jan Beulich
2012-09-26  7:59                                                                                                                 ` Javier Marcet
2012-09-26 12:43                                                                                                                   ` Konrad Rzeszutek Wilk
2012-09-26 14:14                                                                                                                     ` Javier Marcet
2012-09-26 14:26                                                                                                                       ` Ben Guthro
2012-09-26 14:40                                                                                                                         ` Javier Marcet
2012-09-26  8:05                                                                                                                 ` Javier Marcet
2012-09-24 12:37                                                                                                 ` Jan Beulich
2012-09-24 14:02                                                                                             ` Konrad Rzeszutek Wilk
2012-09-20  7:17                                                                       ` Jan Beulich
     [not found]                           ` <CAAnFQG-u1VUDgn11ZW0=UaYC4MvUtxxq8ZjjUOrNpXTSUWP41Q@mail.gmail.com>
     [not found]                             ` <CAOvdn6VuD_5Mhd9wvOskfZWfCBjr2nT5LppDxyY5S-5LhGhSvA@mail.gmail.com>
     [not found]                               ` <CAAnFQG_hMNvwM9Z3XPGR590=Gifos-kOftqjLFUX4YFW6tTTgg@mail.gmail.com>
     [not found]                                 ` <CAOvdn6UzdzO_sM6f9coN2udQ6eUC5=Sty-NgC7+yf3XMawF-0A@mail.gmail.com>
2012-09-04 15:31                                   ` Javier Marcet
2012-08-23 18:54 Andrew Cooper
2012-08-23 19:06 ` Ben Guthro
2012-08-23 19:26   ` Ben Guthro
2012-08-23 19:38   ` Andrew Cooper
2012-08-23 20:38     ` Ben Guthro
2012-08-24 15:10       ` Ben Guthro
2012-08-24 22:16         ` Jan Beulich
     [not found]           ` <CAOvdn6U1touhawCb2GvgVQZqxhWn9CRw6-wkqdxk=uOTq015OA@mail.gmail.com>
2012-09-06  9:24             ` Jan Beulich