* Win2003R2 64 suspend failed in self live migration
       [not found] <BAY0-MC4-F15zXiPuZe00229bef@bay0-mc4-f15.Bay0.hotmail.com>
@ 2011-06-15 12:05 ` MaoXiaoyun
  2011-06-15 12:21   ` James Harper
  2011-06-15 23:20   ` James Harper
  0 siblings, 2 replies; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-15 12:05 UTC (permalink / raw)
  To: xen devel; +Cc: james.harper


[-- Attachment #1.1: Type: text/plain, Size: 1521 bytes --]


Hi James,
 
     I've been testing Windows HVM live migration for a while; the OS types covered are 2003 and 2008.
     It works well most of the time, that is, migration between two physical hosts.
     But 2003 R2 64-bit fails on self live migration (VM migration on the same host).
 
     After installing the debug version of the PV driver inside the VM, no debug log shows up.
     Later I learnt that in your code you implied the debug routine could not be hooked, since
     "// can't patch IDT on AMD64" (in xenpci_dbprint.c, XenPci_HookDbgPrint()).
 
     I was able to get log output simply by redefining the KdPrint macro as below. But unfortunately,
the VM is suffering hangs now and then.
 
     So is it proper to do this, or how can I obtain the 64-bit log properly?
     As for self migration, I've noticed that the VM is able to migrate once, but after migration the
network is in trouble: the VM cannot access the outside. It looks like xennet is not functioning
properly. Meanwhile, it looks like a "fake ARP" is needed after migration, as in Linux PV.
 
     I shall dig more, but currently the hang during logging bothers me a lot.
     Could you kindly offer me some help?
     Thanks.
 
---------- debug code ----
 
void xmaoDPrint(PCHAR Format, ...);
#undef KdPrint
#define KdPrint(A) xmaoDPrint A  /* KdPrint((fmt, ...)) expands to xmaoDPrint (fmt, ...) */

void xmaoDPrint(PCHAR fmt, ...)
{
  char buf[4096];
  va_list argptr;

  memset(buf, 0, sizeof(buf));
  va_start(argptr, fmt);
  RtlStringCchVPrintfA(buf, sizeof(buf), fmt, argptr);
  va_end(argptr);
  XenDbgPrint(buf, (ULONG)strlen(buf));
}

[-- Attachment #1.2: Type: text/html, Size: 2352 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* Re: Win2003R2 64 suspend failed in self live migration
  2011-06-15 12:05 ` Win2003R2 64 suspend failed in self live migration MaoXiaoyun
@ 2011-06-15 12:21   ` James Harper
  2011-06-15 23:20   ` James Harper
  1 sibling, 0 replies; 13+ messages in thread
From: James Harper @ 2011-06-15 12:21 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel

4 KB of stack space might be too much and could cause such a hang. Declare your buffer as a global, or as a per-CPU array of buffers if required.

Sent from my iPhone

On 15/06/2011, at 22:05, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> Hi James,
>  
>      I've been testing Windows HVM live migration for a while; the OS types covered are 2003 and 2008.
>      It works well most of the time, that is, migration between two physical hosts.
>      But 2003 R2 64-bit fails on self live migration (VM migration on the same host).
>  
>      After installing the debug version of the PV driver inside the VM, no debug log shows up.
>      Later I learnt that in your code you implied the debug routine could not be hooked, since
>      "// can't patch IDT on AMD64" (in xenpci_dbprint.c, XenPci_HookDbgPrint()).
>  
>      I was able to get log output simply by redefining the KdPrint macro as below. But unfortunately,
> the VM is suffering hangs now and then.
>  
>      So is it proper to do this, or how can I obtain the 64-bit log properly?
>      As for self migration, I've noticed that the VM is able to migrate once, but after migration the
> network is in trouble: the VM cannot access the outside. It looks like xennet is not functioning
> properly. Meanwhile, it looks like a "fake ARP" is needed after migration, as in Linux PV.
>  
>      I shall dig more, but currently the hang during logging bothers me a lot.
>      Could you kindly offer me some help?
>      Thanks.
>  
> ---------- debug code ----
>  
> void xmaoDPrint(PCHAR Format, ...);
> #undef KdPrint
> #define KdPrint(A) xmaoDPrint A
>
> void xmaoDPrint(PCHAR fmt, ...)
> {
>   char buf[4096];
>   va_list argptr;
>
>   memset(buf, 0, sizeof(buf));
>   va_start(argptr, fmt);
>   RtlStringCchVPrintfA(buf, sizeof(buf), fmt, argptr);
>   va_end(argptr);
>   XenDbgPrint(buf, (ULONG)strlen(buf));
> }


* RE: Win2003R2 64 suspend failed in self live migration
  2011-06-15 12:05 ` Win2003R2 64 suspend failed in self live migration MaoXiaoyun
  2011-06-15 12:21   ` James Harper
@ 2011-06-15 23:20   ` James Harper
  2011-06-16 10:28     ` MaoXiaoyun
  1 sibling, 1 reply; 13+ messages in thread
From: James Harper @ 2011-06-15 23:20 UTC (permalink / raw)
  To: MaoXiaoyun, xen devel

> Hi James,
> 
>      I've been testing Windows HVM live migration for a while; the OS types
> covered are 2003 and 2008.
>      It works well most of the time, that is, migration between two physical
> hosts.
>      But 2003 R2 64-bit fails on self live migration (VM migration on the
> same host).
> 
>      After installing the debug version of the PV driver inside the VM, no
> debug log shows up.
>      Later I learnt that in your code you implied the debug routine could
> not be hooked, since
>      "// can't patch IDT on AMD64" (in xenpci_dbprint.c,
> XenPci_HookDbgPrint())
> 

Windows 2003 doesn't have an API for getting debug messages, so I hook
the IDT; but under x64, PatchGuard monitors the IDT and causes a BSoD if
it detects a change. A SysInternals program called DebugView can do it,
so it must be possible, but I've never figured out how.

>      I was able to get log output simply by redefining the KdPrint macro
> as below. But unfortunately,
> the VM is suffering hangs now and then.
> 

As per my previous email, if you run out of stack space at a high IRQL
then Windows can hang very hard; even the debugger won't work. I think
allocating 4 KB of data on the stack (your char buf[4096]) might cause it
to run out.

You can use a global variable instead, provided you protect it with a
lock and know what you are doing with locks and IRQLs. Alternatively you
can allocate one buffer per CPU.

James


* RE: Win2003R2 64 suspend failed in self live migration
  2011-06-15 23:20   ` James Harper
@ 2011-06-16 10:28     ` MaoXiaoyun
  2011-06-16 14:24       ` PV resume failed after self migration failed MaoXiaoyun
  0 siblings, 1 reply; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-16 10:28 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 2199 bytes --]


Thanks, James.
 
I was able to get log output by using KeRaiseIrql(HIGH_LEVEL, &old_irql) to raise the IRQL to HIGH_LEVEL.
And later I also found the cause of the self-live-migration failure; it has nothing to do with the PV driver.
 
In my env I specified a vifname for the guest, so after self migration there is a vifname
conflict: the new vif fails to be renamed, and thus fails to be added into the bridge.
This results in network loss inside the guest, and finally further migration fails, since the vif
cannot suspend anymore.
 
Thanks for your help.
 
> Subject: RE: Win2003R2 64 suspend failed in self live migration
> Date: Thu, 16 Jun 2011 09:20:04 +1000
> From: james.harper@bendigoit.com.au
> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> 
> > Hi James,
> > 
> >      I've been testing Windows HVM live migration for a while; the OS
> > types covered are 2003 and 2008.
> >      It works well most of the time, that is, migration between two
> > physical hosts.
> >      But 2003 R2 64-bit fails on self live migration (VM migration on
> > the same host).
> > 
> >      After installing the debug version of the PV driver inside the VM,
> > no debug log shows up.
> >      Later I learnt that in your code you implied the debug routine
> > could not be hooked, since
> >      "// can't patch IDT on AMD64" (in xenpci_dbprint.c,
> > XenPci_HookDbgPrint())
> > 
> 
> Windows 2003 doesn't have an API for getting debug messages, so I hook
> the IDT; but under x64, PatchGuard monitors the IDT and causes a BSoD if
> it detects a change. A SysInternals program called DebugView can do it,
> so it must be possible, but I've never figured out how.
> 
> >      I was able to get log output simply by redefining the KdPrint
> > macro as below. But unfortunately,
> > the VM is suffering hangs now and then.
> > 
> 
> As per my previous email, if you run out of stack space at a high IRQL
> then Windows can hang very hard; even the debugger won't work. I think
> allocating 4 KB of data on the stack (your char buf[4096]) might cause it
> to run out.
> 
> You can use a global variable instead, provided you protect it with a
> lock and know what you are doing with locks and IRQLs. Alternatively you
> can allocate one buffer per CPU.
> 
> James

[-- Attachment #1.2: Type: text/html, Size: 2869 bytes --]



* PV resume failed after self migration failed
  2011-06-16 10:28     ` MaoXiaoyun
@ 2011-06-16 14:24       ` MaoXiaoyun
  2011-06-17  1:34         ` James Harper
  0 siblings, 1 reply; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-16 14:24 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 1065 bytes --]


Hi James:
 
    I found another issue during testing.
    When migrating a VM from host A to B, the process contains the following steps:
    1) memory copy
    2) suspend the VM on A
    3) transfer some other things to B, such as TSC state
 
    If step 3 fails, the VM will be resumed on host A.
    Well, from the test point of view, the resume cannot complete successfully.
 
    Our test migrates 12 VMs between two hosts over and over again.
    The attached log does exactly the following:
    1) Migrate from host B, so first resuming (lines 25 to 474)
    2) Later we want to migrate to B again, so suspending (lines 474 to 1116)
    3) Migration failed, entering resuming again (lines 1118 to 1399)
 
    Line 1383 is waiting for the vbd state to change but cannot get a response,
and line 1392 shows "Unacknowledged event word". From the log, it looks like
this is due to XenVbd_HwScsiResetBus at line 1397.
 
    The question is: what triggers XenVbd_HwScsiResetBus during resuming?
 
    Thanks.

[-- Attachment #1.2: Type: text/html, Size: 1910 bytes --]

[-- Attachment #2: qemu-dm-14.migrate-test.log --]
[-- Type: text/plain, Size: 82975 bytes --]

domid: 150
Using file /dev/xen/blktap-2/tapdev0 in read-write mode
Watching /local/domain/0/device-model/150/logdirty/cmd
Watching /local/domain/0/device-model/150/command
char device redirected to /dev/pts/2
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 03155985-45f4-09ec-44e3-6988ed61d584
Time offset set 0
Register xen platform.
Done register platform.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/150/xen_extended_power_mgmt): read error
cirrus vga map change while on lfb mode
mapping video RAM from f0000000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
xs_read(): vncpasswd get error. /vm/03155985-45f4-09ec-44e3-6988ed61d584/vncpasswd.
Log-dirty: no command yet.
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
xs_read(/local/domain/150/log-throttling): read error
qemu: ignoring not-understood drive `/local/domain/150/log-throttling'
medium change watch on `/local/domain/150/log-throttling' - unknown device, ignored
12952670378390: XenPCI <-- _hvm_shutdown
12952670378390: XenPCI     back from suspend, cancelled = 0
12952670378390: XenPCI     Disabled qemu devices 03
12952670378390: XenPCI --> XenPci_Init
12952670378390: XenPCI     shared_info_area_unmapped.QuadPart = f2000000
12952670378390: XenPCI     gpfn = f2000
12952670378390: XenPCI     hypervisor memory op (XENMAPSPACE_shared_info) ret = 0
12952670378390: XenPCI <-- XenPci_Init
12952670378390: XenPCI --> GntTbl_Resume
12952670378390: XenPCI     pfn = 9b06
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b06
12952670378390: XenPCI     decreased 1 pages for grant table frame 0
12952670378390: XenPCI     pfn = 9b07
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b07
12952670378390: XenPCI     decreased 1 pages for grant table frame 1
12952670378390: XenPCI     pfn = 9b08
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b08
12952670378390: XenPCI     decreased 1 pages for grant table frame 2
12952670378390: XenPCI     pfn = 9b09
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b09
12952670378390: XenPCI     decreased 1 pages for grant table frame 3
12952670378390: XenPCI     pfn = 9b0a
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0a
12952670378390: XenPCI     decreased 1 pages for grant table frame 4
12952670378390: XenPCI     pfn = 9b0b
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0b
12952670378390: XenPCI     decreased 1 pages for grant table frame 5
12952670378390: XenPCI     pfn = 9b0c
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0c
12952670378390: XenPCI     decreased 1 pages for grant table frame 6
12952670378390: XenPCI     pfn = 9b0d
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0d
12952670378390: XenPCI     decreased 1 pages for grant table frame 7
12952670378390: XenPCI     pfn = 9b0e
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0e
12952670378390: XenPCI     decreased 1 pages for grant table frame 8
12952670378390: XenPCI     pfn = 9b0f
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0f
12952670378390: XenPCI     decreased 1 pages for grant table frame 9
12952670378390: XenPCI     pfn = 9b10
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b10
12952670378390: XenPCI     decreased 1 pages for grant table frame 10
12952670378390: XenPCI     pfn = 9b11
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b11
12952670378390: XenPCI     decreased 1 pages for grant table frame 11
12952670378390: XenPCI     pfn = 9b12
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b12
12952670378390: XenPCI     decreased 1 pages for grant table frame 12
12952670378390: XenPCI     pfn = 9b13
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b13
12952670378390: XenPCI     decreased 1 pages for grant table frame 13
12952670378390: XenPCI     pfn = 9b14
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b14
12952670378390: XenPCI     decreased 1 pages for grant table frame 14
12952670378390: XenPCI     pfn = 9b15
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b15
12952670378390: XenPCI     decreased 1 pages for grant table frame 15
12952670378390: XenPCI     pfn = 9b16
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b16
12952670378390: XenPCI     decreased 1 pages for grant table frame 16
12952670378390: XenPCI     pfn = 9b17
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b17
12952670378390: XenPCI     decreased 1 pages for grant table frame 17
12952670378390: XenPCI     pfn = 9b18
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b18
12952670378390: XenPCI     decreased 1 pages for grant table frame 18
12952670378390: XenPCI     pfn = 9b19
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b19
12952670378390: XenPCI     decreased 1 pages for grant table frame 19
12952670378390: XenPCI     pfn = 9b1a
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1a
12952670378390: XenPCI     decreased 1 pages for grant table frame 20
12952670378390: XenPCI     pfn = 9b1b
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1b
12952670378390: XenPCI     decreased 1 pages for grant table frame 21
12952670378390: XenPCI     pfn = 9b1c
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1c
12952670378390: XenPCI     decreased 1 pages for grant table frame 22
12952670378390: XenPCI     pfn = 9b1d
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1d
12952670378390: XenPCI     decreased 1 pages for grant table frame 23
12952670378390: XenPCI     pfn = 9b1e
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1e
12952670378390: XenPCI     decreased 1 pages for grant table frame 24
12952670378390: XenPCI     pfn = 9b1f
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1f
12952670378390: XenPCI     decreased 1 pages for grant table frame 25
12952670378390: XenPCI     pfn = 9b20
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b20
12952670378390: XenPCI     decreased 1 pages for grant table frame 26
12952670378390: XenPCI     pfn = 9b21
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b21
12952670378390: XenPCI     decreased 1 pages for grant table frame 27
12952670378390: XenPCI     pfn = 9b22
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b22
12952670378390: XenPCI     decreased 1 pages for grant table frame 28
12952670378390: XenPCI     pfn = 9b23
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b23
12952670378390: XenPCI     decreased 1 pages for grant table frame 29
12952670378390: XenPCI     pfn = 9b24
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b24
12952670378390: XenPCI     decreased 1 pages for grant table frame 30
12952670378390: XenPCI     pfn = 9b25
12952670378390: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b25
12952670378390: XenPCI     decreased 1 pages for grant table frame 31
12952670378390: XenPCI     new_grant_frames = 32
12952670378390: XenPCI --> GntTbl_Map
12952670378390: XenPCI <-- GntTbl_Map
12952670378390: XenPCI     GntTbl_Map result = 0
12952670378390: XenPCI <-- GntTbl_Resume
12952670378390: XenPCI --> EvtChn_Init
12952670378390: XenPCI --> _hvm_set_parameter
12952670378390: XenPCI HYPERVISOR_hvm_op retval = 0
12952670378390: XenPCI <-- _hvm_set_parameter
12952670378390: XenPCI     hvm_set_parameter(HVM_PARAM_CALLBACK_IRQ, 28) = 0
12952670378390: XenPCI --> EvtChn_AllocIpi
12952670378390: XenPCI <-- EvtChn_AllocIpi
12952670378390: XenPCI --> EvtChn_BindDpc
12952670378390: XenPCI <-- EvtChn_BindDpc
12952670378390: XenPCI     pdo_event_channel = 5
12952670378390: XenPCI <-- EvtChn_Init
12952670378390: XenPCI <-- XenPci_Suspend0
12952670393171: XenPCI --> XenPci_SuspendN
12952670393171: XenPCI     doing nothing on cpu N
12952670393187: XenPCI <-- XenPci_SuspendN
12952670393187: XenPCI <-- XenPci_HighSyncCallFunctionN
12952670393187: XenPCI <-- XenPci_HighSyncCallFunction0
12952670393187: XenPCI     Waiting for highsync_complete_event
12952670393187: XenPCI <-- XenPci_HighSync
12952670393187: XenPCI --> XenBus_Resume
12952670393187: XenPCI --> _hvm_get_parameter
12952670393187: XenPCI HYPERVISOR_hvm_op retval = 0
12952670393187: XenPCI <-- _hvm_get_parameter
12952670393187: XenPCI --> _hvm_get_parameter
12952670393203: XenPCI HYPERVISOR_hvm_op retval = 0
12952670393203: XenPCI <-- _hvm_get_parameter
12952670393203: XenPCI --> EvtChn_BindDpc
12952670393203: XenPCI <-- EvtChn_BindDpc
12952670393203: XenPCI     Adding watch for path = control/sysrq
12952670393218: XenPCI     Adding watch for path = control/shutdown
12952670393218: XenPCI --> XenPci_SysrqHandler
12952670393218: XenPCI     Adding watch for path = device
12952670393218: XenPCI     SysRq Value = (null)
12952670393218: XenPCI <-- XenPci_SysrqHandler
12952670393265: XenPCI --> XenPci_ShutdownHandler
12952670393265: XenPCI     Adding watch for path = memory/target
12952670393328: XenPCI     Adding watch for path = memory/enable
12952670393328: Error reading shutdown path - ENOENT
12952670393328: XenPCI     Adding watch for path = control/shell/stdin
12952670393328: XenPCI <-- XenPci_ShutdownHandler
12952670393343: XenPCI     Adding watch for path = control/shutdown
12952670393375: XenPCI --> XenPci_DeviceWatchHandler
12952670393390: XenPCI <-- XenBus_Resume
12952670393390: XenPCI     suspend event channel = 6
12952670393390: XenPCI <-- XenPci_DeviceWatchHandler
12952670393390: XenPCI --> EvtChn_BindDpc
12952670393406: XenPCI <-- EvtChn_BindDpc
12952670393406: XenPCI     Resuming child
12952670393406: XenPCI --> XenPci_Pdo_Resume
12952670393406: XenPCI     path = device/vbd/768
12952670393421: XenPCI --> XenPci_GetBackendAndAddWatch
12952670393421: XenPCI --> XenPci_BalloonHandler
12952670393421: XenPCI     target memory value = 2048 (2097152)
12952670393484: XenPCI <-- XenPci_BalloonHandler
12952670393484: XenPCI --> XenPci_BalloonEnableHandler
12952670393546: XenPCI  receive balloon enable = (1308226118.76:1)
12952670393546: XenPCI     Balloon enable change to 1
12952670393546: XenPCI  successfull got BalloonEnableChangedEvent
12952670393546: XenPCI     Got balloon event, current = 2048, target = 2048
12952670393546: XenPCI     No change to memory
12952670393609: XenPCI <-- XenPci_BalloonEnableHandler
12952670393609: XenPCI --> XenPci_IoWatch
12952670393609: XenPCI     found pending read - MinorFunction = 0, length = 1024
12952670393671: XenPCI <-- XenBus_ProcessReadRequest
12952670393671: XenPCI <-- XenPci_IoWatch
12952670393671: XenPCI --> XenPci_IoWatch
12952670393671: XenPCI     found pending read - MinorFunction = 0, length = 1024
12952670393671: XenPCI <-- XenBus_ProcessReadRequest
12952670393671: XenPCI <-- XenPci_IoWatch
12952670393687: XenPCI --> XenPci_DeviceWatchHandler
12952670393687: XenPCI --> XenPci_EvtIoDefault
12952670393687: XenPCI --> XenPci_EvtIoDefault
12952670393687: XenPCI <-- XenPci_GetBackendAndAddWatch
12952670393687: XenPCI --> XenBus_EvtIoWrite
12952670393687: XenPCI --> XenPci_ChangeFrontendState
12952670393687: XenPCI     Rescanning child list
12952670393687: XenPCI --> XenBus_EvtIoWrite
12952670393687: XenPCI --> XenPci_EvtChildListScanForChildren
12952670393687: XenPCI <-- XenPci_ChangeFrontendState
12952670393703: XenPCI     33 bytes of write buffer remaining
12952670393703: XenPCI --> XenPci_XenConfigDeviceSpecifyBuffers
12952670393703: XenPCI     36 bytes of write buffer remaining
12952670393703: XenPCI     XEN_INIT_TYPE_RING - ring-ref = 89019000
12952670393703: XenPCI     Found path = device/vbd/768
12952670393703: XenPCI     XEN_INIT_TYPE_RING - ring-ref = 16337
12952670393718: XenPCI     completing request with length 33
12952670393718: XenPCI     XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 7
12952670393718: XenPCI <-- XenBus_EvtIoWrite
12952670393734: XenPCI <-- XenPci_EvtIoDefault
12952670393734: XenPCI     Found path = device/vif/0
12952670393734: XenPCI --> XenPci_EvtIoDefault
12952670393734: XenPCI <-- XenPci_EvtChildListScanForChildren
12952670393734: XenPCI --> XenBus_EvtIoRead
12952670393734: XenPCI <-- XenPci_DeviceWatchHandler
12952670393750: XenPCI     found pending read
12952670393750: XenPCI --> XenPci_UpdateBackendState
12952670393750: XenPCI <-- XenBus_ProcessReadRequest
12952670393750: XenPCI --> EvtChn_BindIrq
12952670393750: XenPCI <-- XenBus_EvtIoRead
12952670393750: XenPCI     state unchanged
12952670393750: XenPCI <-- XenPci_EvtIoDefault
12952670393750: XenPCI <-- EvtChn_BindIrq
12952670393750: XenPCI --> XenPci_EvtIoDefault
12952670393750: XenPCI --> XenPci_ChangeFrontendStateMap
12952670393750: XenPCI --> XenBus_EvtIoRead
12952670393750: XenPCI --> XenPci_ChangeFrontendState
12952670393765: XenPCI     no data to read
12952670393765: XenPCI     completing request with length 36
12952670393781: XenPCI --> XenPci_DeviceWatchHandler
12952670393781: XenPCI <-- XenBus_EvtIoWrite
12952670393781: XenPCI <-- XenPci_DeviceWatchHandler
12952670393781: XenPCI <-- XenPci_EvtIoDefault
12952670393781: XenPCI --> XenPci_DeviceWatchHandler
12952670393781: XenPCI --> XenPci_EvtIoDefault
12952670393781: XenPCI <-- XenPci_DeviceWatchHandler
12952670393781: XenPCI --> XenBus_EvtIoRead
12952670393796: XenPCI <-- XenBus_EvtIoRead
12952670393796: XenPCI --> XenPci_DeviceWatchHandler
12952670393796: XenPCI <-- XenPci_EvtIoDefault
12952670393796: XenPCI <-- XenPci_DeviceWatchHandler
12952670393796: XenPCI     found pending read
12952670393796: XenPCI <-- XenBus_ProcessReadRequest
12952670393796: XenPCI --> XenPci_DeviceWatchHandler
12952670393796: XenPCI <-- XenPci_DeviceWatchHandler
12952670393796: XenPCI --> XenPci_UpdateBackendState
12952670393796: XenPCI <-- XenBus_EvtIoRead
12952670393796: XenPCI <-- XenPci_EvtIoDefault
12952670393796: XenPCI     Backend State Changed to Connected
12952670393796: XenPCI --> XenPci_EvtIoDefault
12952670393796: XenPCI <-- XenPci_UpdateBackendState
12952670393796: XenPCI --> XenBus_EvtIoWrite
12952670393796: XenPCI <-- XenPci_ChangeFrontendState
12952670393812: XenPCI     60 bytes of write buffer remaining
12952670393812: XenPCI <-- XenPci_ChangeFrontendStateMap
12952670393812: XenPCI --> XenPci_EvtIoDefault
12952670393812: XenPCI --> XenBus_EvtIoRead
12952670393812: XenPCI     no data to read
12952670393812: XenPCI <-- XenBus_EvtIoRead
12952670393812: XenPCI <-- XenPci_EvtIoDefault
12952670393828: XenPCI     completing request with length 60
12952670393828: XenPCI <-- XenBus_EvtIoWrite
12952670393828: XenPCI <-- XenPci_EvtIoDefault
12952670393828: XenPCI --> XenPci_EvtIoDefault
12952670393828: XenPCI --> XenBus_EvtIoRead
12952670393828: XenPCI --> XenPci_ChangeFrontendStateMap
12952670393828: XenPCI     found pending read
12952670393828: XenPCI <-- XenPci_ChangeFrontendStateMap
12952670393828: XenPCI <-- XenBus_ProcessReadRequest
12952670393828: XenPCI <-- XenPci_XenConfigDeviceSpecifyBuffers
12952670393828: XenPCI <-- XenBus_EvtIoRead
12952670393828: XenPCI --> XenPci_ChangeFrontendState
12952670393828: XenPCI <-- XenPci_EvtIoDefault
12952670393843: XenPCI --> XenPci_DeviceWatchHandler
12952670393843: XenPCI <-- XenPci_ChangeFrontendState
12952670393843: XenPCI <-- XenPci_DeviceWatchHandler
12952670393843: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670393843: XenPCI     setting pdo state to 2
12952670393843: XenPCI     Notifying event channel 5
12952670393843: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670393843: XenVbd --> XenVbd_HwScsiInterrupt
12952670393859: XenVbd     New pdo state SR_STATE_RESUMING
12952670393859: XenVbd     XEN_INIT_TYPE_VECTORS
12952670393859: XenVbd     XEN_INIT_TYPE_DEVICE_STATE - 89BE7D0C
12952670393859: XenVbd     XEN_INIT_TYPE_RING - ring-ref = 89019000
12952670393859: XenVbd     XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 7
12952670393859: XenVbd     XEN_INIT_TYPE_READ_STRING - device-type = disk
12952670393859: XenVbd     device-type = Disk
12952670393859: XenVbd     XEN_INIT_TYPE_READ_STRING - mode = w
12952670393859: XenVbd     mode = w
12952670393859: XenVbd     XEN_INIT_TYPE_READ_STRING - sectors = 104857600
12952670393859: XenVbd     XEN_INIT_TYPE_READ_STRING - sector-size = 512
12952670393875: XenVbd     XEN_INIT_TYPE_GRANT_ENTRIES - entries = 11
12952670393875: XenVbd     qemu_hide_flags_value = 3
12952670393875: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670393875: XenPCI --> EvtChn_PdoEventChannelDpc
12952670393875: XenPCI --> EvtChn_PdoEventChannelDpc
12952670393875: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670393875: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670393875: XenPCI     fdo state set to 2
12952670393875: XenPCI <-- XenPci_Pdo_ChangeSuspendState
12952670393875: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670393875: XenPCI     setting pdo state to 0
12952670393875: XenPCI     Notifying event channel 5
12952670393875: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670393875: XenVbd --> XenVbd_HwScsiInterrupt
12952670393875: XenVbd     New pdo state 0
12952670393890: XenVbd     New pdo state 0
12952670393890: XenPCI --> EvtChn_PdoEventChannelDpc
12952670393890: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670393890: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670393890: XenPCI --> EvtChn_PdoEventChannelDpc
12952670393890: XenPCI     fdo state set to 0
12952670393890: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670393890: XenPCI <-- XenPci_Pdo_ChangeSuspendState
12952670393890: XenPCI <-- XenPci_Pdo_Resume
12952670393890: XenPCI     Resuming child
12952670393890: XenPCI --> XenPci_Pdo_Resume
12952670393890: XenPCI     path = device/vif/0
12952670393890: XenPCI --> XenPci_GetBackendAndAddWatch
12952670393890: XenPCI <-- XenPci_GetBackendAndAddWatch
12952670393890: XenPCI --> XenPci_UpdateBackendState
12952670393890: XenPCI --> XenPci_ChangeFrontendState
12952670393906: XenPCI     state unchanged
12952670393906: XenPCI <-- XenPci_ChangeFrontendState
12952670393906: XenPCI --> XenPci_DeviceWatchHandler
12952670393906: XenPCI <-- XenPci_DeviceWatchHandler
12952670393906: XenPCI --> XenPci_XenConfigDeviceSpecifyBuffers
12952670393906: XenPCI     XEN_INIT_TYPE_RING - tx-ring-ref = 89A8B000
12952670393906: XenPCI     XEN_INIT_TYPE_RING - tx-ring-ref = 16225
12952670393906: XenPCI     XEN_INIT_TYPE_RING - rx-ring-ref = 89073000
12952670393906: XenPCI --> XenPci_DeviceWatchHandler
12952670393906: XenPCI     XEN_INIT_TYPE_RING - rx-ring-ref = 15626
12952670393906: XenPCI <-- XenPci_DeviceWatchHandler
12952670393921: XenPCI --> XenPci_DeviceWatchHandler
12952670393921: XenPCI     XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 8
12952670393921: XenPCI <-- XenPci_DeviceWatchHandler
12952670393921: XenPCI --> EvtChn_Bind
12952670393921: XenPCI --> XenPci_DeviceWatchHandler
12952670393921: XenPCI <-- EvtChn_Bind
12952670393921: XenPCI <-- XenPci_DeviceWatchHandler
12952670393921: XenPCI --> XenPci_DeviceWatchHandler
12952670393937: XenPCI <-- XenPci_DeviceWatchHandler
12952670393937: XenPCI --> XenPci_DeviceWatchHandler
12952670393937: XenPCI <-- XenPci_DeviceWatchHandler
12952670393937: XenPCI --> XenPci_DeviceWatchHandler
12952670393937: XenPCI --> XenPci_ChangeFrontendStateMap
12952670393937: XenPCI <-- XenPci_DeviceWatchHandler
12952670393937: XenPCI <-- XenPci_ChangeFrontendStateMap
12952670393937: XenPCI --> XenPci_DeviceWatchHandler
12952670393937: XenPCI <-- XenPci_DeviceWatchHandler
12952670393937: XenPCI --> XenPci_DeviceWatchHandler
12952670393937: XenPCI <-- XenPci_DeviceWatchHandler
12952670393953: XenPCI --> XenPci_ChangeFrontendStateMap
12952670393953: XenPCI --> XenPci_ChangeFrontendState
12952670393953: XenPCI --> XenPci_DeviceWatchHandler
12952670393953: XenPCI <-- XenPci_DeviceWatchHandler
12952670393953: XenPCI --> XenPci_UpdateBackendState
12952670394046: XenPCI     Backend State Changed to Connected
12952670394046: XenPCI <-- XenPci_UpdateBackendState
12952670394046: XenPCI <-- XenPci_ChangeFrontendState
12952670394046: XenPCI <-- XenPci_ChangeFrontendStateMap
12952670394046: XenPCI <-- XenPci_XenConfigDeviceSpecifyBuffers
12952670394046: XenPCI --> XenPci_ChangeFrontendState
12952670394046: XenPCI <-- XenPci_ChangeFrontendState
12952670394046: XenPCI --> XenPci_DeviceWatchHandler
12952670394046: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670394046: XenPCI <-- XenPci_DeviceWatchHandler
12952670394046: XenPCI     setting pdo state to 2
12952670394062: XenPCI     Notifying event channel 5
12952670394062: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670394062: XenNet --> XenNet_SuspendResume
12952670394062: XenNet     New state SR_STATE_RESUMING
12952670394062: XenNet <-- XenNet_SuspendResume
12952670394062: XenNet --> XenNet_ResumeWorkItem
12952670394062: XenPCI --> EvtChn_PdoEventChannelDpc
12952670394062: XenNet --> XenNet_TxResumeStart
12952670394062: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670394062: XenNet <-- XenNet_TxResumeStart
12952670394078: XenPCI     waiting...
12952670394078: XenNet --> XenNet_RxResumeStart
12952670394078: XenPCI     waiting...
12952670394078: XenNet <-- XenNet_RxResumeStart
12952670394078: XenNet --> XenNet_ConnectBackend
12952670394078: XenNet     XEN_INIT_TYPE_13
12952670394078: XenNet     XEN_INIT_TYPE_VECTORS
12952670394078: XenNet     XEN_INIT_TYPE_DEVICE_STATE - 89B81BFC
12952670394078: XenNet     XEN_INIT_TYPE_RING - tx-ring-ref = 89A8B000
12952670394078: XenNet     XEN_INIT_TYPE_RING - rx-ring-ref = 89073000
12952670394078: XenNet     XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 8
12952670394078: XenNet     XEN_INIT_TYPE_READ_STRING - mac = 00:16:3e:00:00:12
12952670394078: XenNet     XEN_INIT_TYPE_READ_STRING - feature-sg = 1
12952670394093: XenNet     XEN_INIT_TYPE_READ_STRING - feature-gso-tcpv4 = 1
12952670394093: XenNet     XEN_INIT_TYPE_17
12952670394093: XenNet <-- XenNet_ConnectBackend
12952670394093: XenNet --> XenNet_RxResumeEnd
12952670394093: XenNet <-- XenNet_RxResumeEnd
12952670394093: XenNet --> XenNet_TxResumeEnd
12952670394093: XenNet <-- XenNet_TxResumeEnd
12952670394093: XenNet     *Setting suspend_resume_state_fdo = 2
12952670394093: XenNet     *Notifying event channel 5
12952670394093: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670394093: XenPCI --> EvtChn_PdoEventChannelDpc
12952670394093: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670394093: XenPCI     fdo state set to 2
12952670394093: XenNet <-- XenNet_ResumeWorkItem
12952670394109: XenPCI <-- XenPci_Pdo_ChangeSuspendState
12952670394109: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670394109: XenPCI     setting pdo state to 0
12952670394109: XenPCI     Notifying event channel 5
12952670394109: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670394109: XenNet --> XenNet_SuspendResume
12952670394109: XenNet     New state 2
12952670394109: XenNet     Notifying event channel 5
12952670394109: XenNet <-- XenNet_SuspendResume
12952670394109: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670394109: XenPCI --> EvtChn_PdoEventChannelDpc
12952670394109: XenPCI --> EvtChn_PdoEventChannelDpc
12952670394109: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670394109: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670394109: XenPCI     fdo state set to 0
12952670394125: XenPCI <-- XenPci_Pdo_ChangeSuspendState
12952670394125: XenPCI <-- XenPci_Pdo_Resume
12952670394125: XenPCI --> PvMemoryInfoWriteOne
12952670394125: XenPCI <-- PvMemoryInfoWriteOne
12952670394125: XenPCI <-- XenPci_SuspendResume
12952670394859: XenPCI --> XenPci_BalloonHandler
12952670394875: XenPCI     target memory value = 2048 (2097152)
12952670394875: XenPCI <-- XenPci_BalloonHandler
12952670394875: XenPCI     Got balloon event, current = 2048, target = 2048
12952670394875: XenPCI     No change to memory
12952670406140: XenPCI --> XenPci_BalloonHandler
12952670406140: XenPCI     target memory value = 512 (524288)
12952670406140: XenPCI <-- XenPci_BalloonHandler
12952670406140: XenPCI     Got balloon event, current = 2048, target = 512
12952670406156: XenPCI     Trying to give 1536 MB to Xen
12952670407156: XenPCI --> XenPci_BalloonHandler
12952670407156: XenPCI     target memory value = 619 (633985)
12952670407328: XenPCI <-- XenPci_BalloonHandler
12952670408156: XenPCI --> XenPci_BalloonHandler
12952670408171: XenPCI     target memory value = 543 (556161)
12952670408203: XenPCI <-- XenPci_BalloonHandler
12952670409156: XenPCI --> XenPci_BalloonHandler
12952670409203: XenPCI     target memory value = 540 (553243)
12952670409234: XenPCI <-- XenPci_BalloonHandler
12952670410140: XenPCI     Memory = 512, Balloon Target = 512
12952670410140: XenPCI     Got balloon event, current = 512, target = 540
12952670410156: XenPCI     Trying to take 28 MB from Xen
12952670410171: XenPCI     Memory = 540, Balloon Target = 540
12952670410203: XenPCI --> XenPci_BalloonHandler
12952670410203: XenPCI     target memory value = 512 (524288)
12952670410250: XenPCI <-- XenPci_BalloonHandler
12952670410250: XenPCI     Got balloon event, current = 540, target = 512
12952670410250: XenPCI     Trying to give 28 MB to Xen
12952670410328: XenPCI     Memory = 512, Balloon Target = 512
12952670452140: XenPCI --> XenPci_BalloonEnableHandler
12952670452140: XenPCI  receive balloon enable = (1308226178.2:0)
12952670452140: XenPCI     Balloon enable change to 0
12952670452140: XenPCI  successfull got BalloonEnableChangedEvent
12952670452140: XenPCI <-- XenPci_BalloonEnableHandler
Log-dirty command enable
12952670560109: XenPCI --> XenPci_ShutdownHandler
12952670560109: XenPCI     Shutdown value = suspend
12952670560187: XenPCI     Suspend detected
12952670560187: XenPCI <-- XenPci_ShutdownHandler
12952670560187: XenPCI --> XenPci_SuspendResume
12952670560187: XenPCI --> XenPci_IoWatch
12952670560203: XenPCI     found pending read - MinorFunction = 0, length = 1024
12952670560203: XenPCI     Suspending child
12952670560203: XenPCI <-- XenBus_ProcessReadRequest
12952670560203: XenPCI --> XenPci_Pdo_Suspend (device/vbd/768)
12952670560203: XenPCI <-- XenPci_IoWatch
12952670560203: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670560203: XenPCI --> XenPci_EvtIoDefault
12952670560203: XenPCI     setting pdo state to 1
12952670560203: XenPCI --> XenBus_EvtIoWrite
12952670560203: XenPCI     Notifying event channel 5
12952670560203: XenPCI     33 bytes of write buffer remaining
12952670560203: XenPCI     waiting...
12952670560203: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670560203: XenPCI     waiting...
12952670560843: XenVbd --> XenVbd_HwScsiInterrupt
12952670560843: XenVbd     New pdo state SR_STATE_SUSPENDING
12952670560843: XenVbd     Set fdo state SR_STATE_SUSPENDING
12952670560843: XenVbd     Notifying event channel 5
12952670560843: XenVbd <-- XenVbd_HwScsiInterrupt
12952670560843: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670560843: XenPCI --> EvtChn_PdoEventChannelDpc
12952670560843: XenPCI --> EvtChn_PdoEventChannelDpc
12952670560843: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670560843: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670560859: XenPCI     fdo state set to 1
12952670560859: XenPCI <-- XenPci_Pdo_ChangeSuspendState
12952670560859: XenPCI --> XenPci_ChangeFrontendState
12952670560875: XenPCI     completing request with length 33
12952670560875: XenPCI <-- XenBus_EvtIoWrite
12952670560875: XenPCI <-- XenPci_EvtIoDefault
12952670560875: XenPCI --> XenPci_EvtIoDefault
12952670560875: XenPCI --> XenBus_EvtIoRead
12952670560875: XenPCI     found pending read
12952670560875: XenPCI <-- XenBus_ProcessReadRequest
12952670560875: XenPCI <-- XenBus_EvtIoRead
12952670560875: XenPCI <-- XenPci_EvtIoDefault
12952670560875: XenPCI --> XenPci_EvtIoDefault
12952670560875: XenPCI --> XenBus_EvtIoRead
12952670560875: XenPCI     no data to read
12952670560875: XenPCI <-- XenBus_EvtIoRead
12952670560890: XenPCI <-- XenPci_EvtIoDefault
12952670560953: XenPCI --> XenPci_DeviceWatchHandler
12952670560953: XenPCI <-- XenPci_DeviceWatchHandler
12952670560984: XenPCI --> XenPci_UpdateBackendState
12952670560984: XenPCI     Backend State Changed to Closing
12952670560984: XenPCI <-- XenPci_UpdateBackendState
12952670560984: XenPCI <-- XenPci_ChangeFrontendState
12952670561031: XenPCI --> XenPci_ChangeFrontendState
12952670561109: XenPCI --> XenPci_DeviceWatchHandler
12952670561109: XenPCI <-- XenPci_DeviceWatchHandler
12952670561187: XenPCI --> XenPci_UpdateBackendState
12952670561250: XenPCI     Backend State Changed to Closed
12952670561250: XenPCI <-- XenPci_UpdateBackendState
12952670561250: XenPCI <-- XenPci_ChangeFrontendState
12952670561250: XenPCI --> XenPci_ChangeFrontendState
12952670561281: XenPCI --> XenPci_DeviceWatchHandler
12952670561281: XenPCI <-- XenPci_DeviceWatchHandler
12952670561312: XenPCI --> XenPci_UpdateBackendState
12952670561453: XenPCI     Backend State Changed to InitWait
12952670561453: XenPCI <-- XenPci_UpdateBackendState
12952670561453: XenPCI <-- XenPci_ChangeFrontendState
12952670561453: XenPCI     Match
12952670561500: XenPCI <-- XenPci_Pdo_Suspend
12952670561500: XenPCI     Suspending child
12952670561500: XenPCI --> XenPci_Pdo_Suspend (device/vif/0)
12952670561500: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670561500: XenPCI     setting pdo state to 1
12952670561500: XenPCI     Notifying event channel 5
12952670561500: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670561796: XenNet --> XenNet_SuspendResume
12952670561796: XenNet     New state SUSPENDING
12952670561796: XenNet <-- XenNet_SuspendResume
12952670561796: XenPCI --> EvtChn_PdoEventChannelDpc
12952670561796: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670561796: XenPCI     waiting...
12952670561796: XenPCI     waiting...
12952670561812: XenNet     Setting SR_STATE_SUSPENDING
12952670561812: XenNet     Notifying event channel 5
12952670561812: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670561812: XenPCI --> EvtChn_PdoEventChannelDpc
12952670561812: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670561812: XenPCI     fdo state set to 1
12952670561812: XenPCI <-- XenPci_Pdo_ChangeSuspendState
12952670561812: XenPCI --> XenPci_ChangeFrontendState
12952670561843: XenPCI --> XenPci_DeviceWatchHandler
12952670561843: XenPCI <-- XenPci_DeviceWatchHandler
12952670562406: XenPCI --> XenPci_UpdateBackendState
12952670562468: XenPCI     Backend State Changed to Closing
12952670562468: XenPCI <-- XenPci_UpdateBackendState
12952670562468: XenPCI <-- XenPci_ChangeFrontendState
12952670562468: XenPCI --> XenPci_ChangeFrontendState
12952670562468: XenPCI --> XenPci_DeviceWatchHandler
12952670562484: XenPCI <-- XenPci_DeviceWatchHandler
12952670562484: XenPCI --> XenPci_UpdateBackendState
12952670562531: XenPCI     Backend State Changed to Closed
12952670562531: XenPCI <-- XenPci_UpdateBackendState
12952670562531: XenPCI <-- XenPci_ChangeFrontendState
12952670562531: XenPCI --> XenPci_ChangeFrontendState
12952670562578: XenPCI --> XenPci_DeviceWatchHandler
12952670562578: XenPCI <-- XenPci_DeviceWatchHandler
12952670562796: XenPCI --> XenPci_UpdateBackendState
12952670562796: XenPCI     Backend State Changed to InitWait
12952670562796: XenPCI <-- XenPci_UpdateBackendState
12952670562796: XenPCI <-- XenPci_ChangeFrontendState
12952670562984: XenPCI     Match
12952670563015: XenPCI <-- XenPci_Pdo_Suspend
12952670563015: XenPCI --> _hvm_set_parameter
12952670563093: XenPCI HYPERVISOR_hvm_op retval = 0
12952670563093: XenPCI <-- _hvm_set_parameter
12952670563093: XenPCI --> XenPci_HighSync
12952670563093: XenPCI     queuing Dpc for CPU 0
12952670563093: XenPCI     queuing Dpc for CPU 1
12952670563093: XenPCI --> XenPci_HighSyncCallFunction0
12952670563093: XenPCI     All Dpc's queued
12952670563109: XenPCI --> XenPci_HighSyncCallFunctionN
12952670563109: XenPCI     (CPU = 1)
12952670563109: XenPCI     CPU 1 spinning...
12952670563109: XenPCI --> XenPci_Suspend0
12952670563109: XenPCI --> GntTbl_Suspend
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
12952670563109: XenPCI     grant entry for XRNX from generation 0
    [... previous line repeated 221 more times ...]
12952670563109: XenPCI <-- GntTbl_Suspend
12952670563109: XenPCI --> _hvm_shutdown
dm-command: pause and save state
device model saving state
Log-dirty command disable
dm-command: continue after state save
12952670563109: XenPCI <-- _hvm_shutdown
12952670563109: XenPCI     back from suspend, cancelled = 0
12952670563109: XenPCI     Disabled qemu devices 03
12952670563109: XenPCI --> XenPci_Init
12952670563109: XenPCI     shared_info_area_unmapped.QuadPart = f2000000
12952670563109: XenPCI     gpfn = f2000
12952670563109: XenPCI     hypervisor memory op (XENMAPSPACE_shared_info) ret = 0
12952670563109: XenPCI <-- XenPci_Init
12952670563109: XenPCI --> GntTbl_Resume
12952670563109: XenPCI     pfn = 9b06
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b06
12952670563109: XenPCI     decreased 1 pages for grant table frame 0
12952670563109: XenPCI     pfn = 9b07
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b07
12952670563109: XenPCI     decreased 1 pages for grant table frame 1
12952670563109: XenPCI     pfn = 9b08
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b08
12952670563109: XenPCI     decreased 1 pages for grant table frame 2
12952670563109: XenPCI     pfn = 9b09
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b09
12952670563109: XenPCI     decreased 1 pages for grant table frame 3
12952670563109: XenPCI     pfn = 9b0a
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0a
12952670563109: XenPCI     decreased 1 pages for grant table frame 4
12952670563109: XenPCI     pfn = 9b0b
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0b
12952670563109: XenPCI     decreased 1 pages for grant table frame 5
12952670563109: XenPCI     pfn = 9b0c
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0c
12952670563109: XenPCI     decreased 1 pages for grant table frame 6
12952670563109: XenPCI     pfn = 9b0d
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0d
12952670563109: XenPCI     decreased 1 pages for grant table frame 7
12952670563109: XenPCI     pfn = 9b0e
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0e
12952670563109: XenPCI     decreased 1 pages for grant table frame 8
12952670563109: XenPCI     pfn = 9b0f
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b0f
12952670563109: XenPCI     decreased 1 pages for grant table frame 9
12952670563109: XenPCI     pfn = 9b10
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b10
12952670563109: XenPCI     decreased 1 pages for grant table frame 10
12952670563109: XenPCI     pfn = 9b11
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b11
12952670563109: XenPCI     decreased 1 pages for grant table frame 11
12952670563109: XenPCI     pfn = 9b12
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b12
12952670563109: XenPCI     decreased 1 pages for grant table frame 12
12952670563109: XenPCI     pfn = 9b13
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b13
12952670563109: XenPCI     decreased 1 pages for grant table frame 13
12952670563109: XenPCI     pfn = 9b14
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b14
12952670563109: XenPCI     decreased 1 pages for grant table frame 14
12952670563109: XenPCI     pfn = 9b15
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b15
12952670563109: XenPCI     decreased 1 pages for grant table frame 15
12952670563109: XenPCI     pfn = 9b16
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b16
12952670563109: XenPCI     decreased 1 pages for grant table frame 16
12952670563109: XenPCI     pfn = 9b17
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b17
12952670563109: XenPCI     decreased 1 pages for grant table frame 17
12952670563109: XenPCI     pfn = 9b18
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b18
12952670563109: XenPCI     decreased 1 pages for grant table frame 18
12952670563109: XenPCI     pfn = 9b19
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b19
12952670563109: XenPCI     decreased 1 pages for grant table frame 19
12952670563109: XenPCI     pfn = 9b1a
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1a
12952670563109: XenPCI     decreased 1 pages for grant table frame 20
12952670563109: XenPCI     pfn = 9b1b
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1b
12952670563109: XenPCI     decreased 1 pages for grant table frame 21
12952670563109: XenPCI     pfn = 9b1c
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1c
12952670563109: XenPCI     decreased 1 pages for grant table frame 22
12952670563109: XenPCI     pfn = 9b1d
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1d
12952670563109: XenPCI     decreased 1 pages for grant table frame 23
12952670563109: XenPCI     pfn = 9b1e
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1e
12952670563109: XenPCI     decreased 1 pages for grant table frame 24
12952670563109: XenPCI     pfn = 9b1f
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b1f
12952670563109: XenPCI     decreased 1 pages for grant table frame 25
12952670563109: XenPCI     pfn = 9b20
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b20
12952670563109: XenPCI     decreased 1 pages for grant table frame 26
12952670563109: XenPCI     pfn = 9b21
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b21
12952670563109: XenPCI     decreased 1 pages for grant table frame 27
12952670563109: XenPCI     pfn = 9b22
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b22
12952670563109: XenPCI     decreased 1 pages for grant table frame 28
12952670563109: XenPCI     pfn = 9b23
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b23
12952670563109: XenPCI     decreased 1 pages for grant table frame 29
12952670563109: XenPCI     pfn = 9b24
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b24
12952670563109: XenPCI     decreased 1 pages for grant table frame 30
12952670563109: XenPCI     pfn = 9b25
12952670563109: XenPCI     Calling HYPERVISOR_memory_op - pfn = 9b25
12952670563109: XenPCI     decreased 1 pages for grant table frame 31
12952670563109: XenPCI     new_grant_frames = 32
12952670563109: XenPCI --> GntTbl_Map
12952670563109: XenPCI <-- GntTbl_Map
12952670563109: XenPCI     GntTbl_Map result = 0
12952670563109: XenPCI <-- GntTbl_Resume
12952670563109: XenPCI --> EvtChn_Init
12952670563109: XenPCI --> _hvm_set_parameter
12952670563109: XenPCI HYPERVISOR_hvm_op retval = 0
12952670563109: XenPCI <-- _hvm_set_parameter
12952670563109: XenPCI     hvm_set_parameter(HVM_PARAM_CALLBACK_IRQ, 28) = 0
12952670563109: XenPCI --> EvtChn_AllocIpi
12952670563109: XenPCI <-- EvtChn_AllocIpi
12952670563109: XenPCI --> EvtChn_BindDpc
12952670563109: XenPCI <-- EvtChn_BindDpc
12952670563109: XenPCI     pdo_event_channel = 7
12952670563109: XenPCI <-- EvtChn_Init
12952670563109: XenPCI <-- XenPci_Suspend0
12952670568671: XenPCI --> XenPci_SuspendN
12952670568671: XenPCI     doing nothing on cpu N
12952670568671: XenPCI <-- XenPci_SuspendN
12952670568671: XenPCI <-- XenPci_HighSyncCallFunctionN
12952670568671: XenPCI <-- XenPci_HighSyncCallFunction0
12952670568671: XenPCI     Waiting for highsync_complete_event
12952670568812: XenPCI <-- XenPci_HighSync
12952670568812: XenPCI --> XenBus_Resume
12952670568812: XenPCI --> _hvm_get_parameter
12952670568812: XenPCI HYPERVISOR_hvm_op retval = 0
12952670568812: XenPCI <-- _hvm_get_parameter
12952670568843: XenPCI --> _hvm_get_parameter
12952670568843: XenPCI HYPERVISOR_hvm_op retval = 0
12952670568843: XenPCI <-- _hvm_get_parameter
12952670568843: XenPCI --> EvtChn_BindDpc
12952670568843: XenPCI <-- EvtChn_BindDpc
12952670568859: XenPCI     Adding watch for path = control/sysrq
12952670568859: XenPCI --> XenPci_SysrqHandler
12952670568859: XenPCI     Adding watch for path = control/shutdown
12952670568859: XenPCI     SysRq Value = (null)
12952670568859: XenPCI     Adding watch for path = device
12952670568859: XenPCI <-- XenPci_SysrqHandler
12952670569062: XenPCI --> XenPci_ShutdownHandler
12952670569156: XenPCI     Adding watch for path = memory/target
12952670569156: XenPCI     Shutdown value = 
12952670569156: XenPCI <-- XenPci_ShutdownHandler
12952670569156: XenPCI     Adding watch for path = memory/enable
12952670569156: XenPCI --> XenPci_DeviceWatchHandler
12952670569156: XenPCI <-- XenPci_DeviceWatchHandler
12952670569156: XenPCI     Adding watch for path = control/shell/stdin
12952670569156: XenPCI --> XenPci_BalloonHandler
12952670569156: XenPCI     Adding watch for path = control/shutdown
12952670569265: XenPCI <-- XenBus_Resume
12952670569265: XenPCI     suspend event channel = 8
12952670569265: XenPCI     target memory value = 512 (524288)
12952670569281: XenPCI --> EvtChn_BindDpc
12952670569281: XenPCI <-- EvtChn_BindDpc
12952670569281: XenPCI     Resuming child
12952670569281: XenPCI --> XenPci_Pdo_Resume
12952670569281: XenPCI     path = device/vbd/768
12952670569281: XenPCI --> XenPci_GetBackendAndAddWatch
12952670569312: XenPCI <-- XenPci_BalloonHandler
12952670569312: XenPCI --> XenPci_BalloonEnableHandler
12952670569312: XenPCI  receive balloon enable = (1308226178.2:0)
12952670569421: XenPCI     Balloon enable change to 0
12952670569421: XenPCI  successfull got BalloonEnableChangedEvent
12952670569437: XenPCI <-- XenPci_BalloonEnableHandler
12952670569437: XenPCI --> XenPci_IoWatch
12952670569437: XenPCI     found pending read - MinorFunction = 0, length = 1024
12952670569437: XenPCI <-- XenBus_ProcessReadRequest
12952670569437: XenPCI <-- XenPci_IoWatch
12952670569437: XenPCI --> XenPci_EvtIoDefault
12952670569437: XenPCI --> XenPci_IoWatch
12952670569437: XenPCI     found pending read - MinorFunction = 0, length = 1024
12952670569437: XenPCI <-- XenBus_ProcessReadRequest
12952670569437: XenPCI <-- XenPci_IoWatch
12952670569437: XenPCI --> XenPci_EvtIoDefault
12952670569437: XenPCI --> XenBus_EvtIoWrite
12952670569437: XenPCI --> XenPci_DeviceWatchHandler
12952670569437: XenPCI     36 bytes of write buffer remaining
12952670569437: XenPCI --> XenBus_EvtIoWrite
12952670569453: XenPCI     33 bytes of write buffer remaining
12952670569468: XenPCI     Rescanning child list
12952670569468: XenPCI --> XenPci_EvtChildListScanForChildren
12952670569468: XenPCI     completing request with length 36
12952670569468: XenPCI <-- XenBus_EvtIoWrite
12952670569468: XenPCI     completing request with length 33
12952670569468: XenPCI <-- XenPci_EvtIoDefault
12952670569468: XenPCI <-- XenBus_EvtIoWrite
12952670569468: XenPCI --> XenPci_EvtIoDefault
12952670569546: XenPCI     Found path = device/vbd/768
12952670569546: XenPCI --> XenBus_EvtIoRead
12952670569546: XenPCI <-- XenPci_EvtIoDefault
12952670569546: XenPCI     found pending read
12952670569546: XenPCI --> XenPci_EvtIoDefault
12952670569546: XenPCI <-- XenBus_ProcessReadRequest
12952670569546: XenPCI --> XenBus_EvtIoRead
12952670569546: XenPCI <-- XenBus_EvtIoRead
12952670569546: XenPCI     found pending read
12952670569546: XenPCI <-- XenPci_EvtIoDefault
12952670569546: XenPCI <-- XenBus_ProcessReadRequest
12952670569546: XenPCI --> XenPci_EvtIoDefault
12952670569546: XenPCI --> XenPci_EvtIoDefault
12952670569546: XenPCI --> XenBus_EvtIoRead
12952670569546: XenPCI --> XenBus_EvtIoWrite
12952670569546: XenPCI     no data to read
12952670569546: XenPCI     60 bytes of write buffer remaining
12952670569546: XenPCI <-- XenBus_EvtIoRead
12952670569546: XenPCI <-- XenBus_EvtIoRead
12952670569546: XenPCI <-- XenPci_EvtIoDefault
12952670569546: XenPCI <-- XenPci_EvtIoDefault
12952670569546: XenPCI --> XenPci_EvtIoDefault
12952670569546: XenPCI --> XenBus_EvtIoRead
12952670569546: XenPCI     no data to read
12952670569562: XenPCI <-- XenBus_EvtIoRead
12952670569562: XenPCI <-- XenPci_EvtIoDefault
12952670569703: XenPCI     Found path = device/vif/0
12952670569703: XenPCI <-- XenPci_EvtChildListScanForChildren
12952670569703: XenPCI     completing request with length 60
12952670569703: XenPCI <-- XenPci_DeviceWatchHandler
12952670569703: XenPCI <-- XenBus_EvtIoWrite
12952670569703: XenPCI <-- XenPci_EvtIoDefault
12952670569703: XenPCI <-- XenPci_GetBackendAndAddWatch
12952670569703: XenPCI --> XenPci_EvtIoDefault
12952670569703: XenPCI --> XenPci_ChangeFrontendState
12952670569703: XenPCI --> XenBus_EvtIoRead
12952670569781: XenPCI --> XenPci_UpdateBackendState
12952670569781: XenPCI     found pending read
12952670569890: XenPCI <-- XenBus_ProcessReadRequest
12952670569890: XenPCI <-- XenBus_EvtIoRead
12952670569890: XenPCI <-- XenPci_EvtIoDefault
12952670569953: XenPCI <-- XenPci_ChangeFrontendState
12952670569953: XenPCI     state unchanged
12952670569953: XenPCI --> XenPci_DeviceWatchHandler
12952670569953: XenPCI --> XenPci_XenConfigDeviceSpecifyBuffers
12952670569953: XenPCI <-- XenPci_DeviceWatchHandler
12952670569968: XenPCI     XEN_INIT_TYPE_RING - ring-ref = 89A8B000
12952670569968: XenPCI     XEN_INIT_TYPE_RING - ring-ref = 15626
12952670569968: XenPCI     XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 9
12952670569968: XenPCI --> XenPci_DeviceWatchHandler
12952670569968: XenPCI <-- XenPci_DeviceWatchHandler
12952670570031: XenPCI --> EvtChn_BindIrq
12952670570031: XenPCI --> XenPci_DeviceWatchHandler
12952670570031: XenPCI <-- EvtChn_BindIrq
12952670570031: XenPCI <-- XenPci_DeviceWatchHandler
12952670570031: XenPCI --> XenPci_ChangeFrontendStateMap
12952670570031: XenPCI --> XenPci_ChangeFrontendState
12952670570031: XenPCI --> XenPci_DeviceWatchHandler
12952670570031: XenPCI <-- XenPci_DeviceWatchHandler
12952670570062: XenPCI --> XenPci_UpdateBackendState
12952670570062: XenPCI     Backend State Changed to Connected
12952670570062: XenPCI <-- XenPci_UpdateBackendState
12952670570062: XenPCI <-- XenPci_ChangeFrontendState
12952670570140: XenPCI <-- XenPci_ChangeFrontendStateMap
12952670570140: XenPCI --> XenPci_ChangeFrontendStateMap
12952670570296: XenPCI <-- XenPci_ChangeFrontendStateMap
12952670570312: XenPCI <-- XenPci_XenConfigDeviceSpecifyBuffers
12952670570312: XenPCI --> XenPci_ChangeFrontendState
12952670570328: XenPCI <-- XenPci_ChangeFrontendState
12952670570328: XenPCI --> XenPci_DeviceWatchHandler
12952670570328: XenPCI --> XenPci_Pdo_ChangeSuspendState
12952670570343: XenPCI <-- XenPci_DeviceWatchHandler
12952670570343: XenPCI     setting pdo state to 2
12952670570343: XenPCI     Notifying event channel 7
12952670570343: XenPCI     EVT_ACTION_TYPE_SUSPEND
12952670570343: XenPCI --> EvtChn_PdoEventChannelDpc
12952670570343: XenPCI <-- EvtChn_PdoEventChannelDpc
12952670570343: XenPCI     waiting...
12952670570343: XenPCI     waiting...
12952670572546: XenVbd --> HwScsiStartIo (Suspending/Resuming)
12952670572546: XenVbd <-- HwScsiStartIo (Suspending/Resuming)
12952670574140: XenPCI --> XenPci_BalloonEnableHandler
12952670574140: XenPCI     Unacknowledged event word = 0, val = 00000200
12952670574140: XenPCI  receive balloon enable = (1308226300.21:0)
12952670574156: XenPCI     Balloon enable change to 0
12952670574156: XenPCI  successfull got BalloonEnableChangedEvent
12952670574171: XenPCI <-- XenPci_BalloonEnableHandler
12952670581890: XenVbd --> XenVbd_HwScsiResetBus
12952670581890: XenVbd     IRQL = 9
12952670581890: XenVbd <-- XenVbd_HwScsiResetBus
Log-dirty command enable
Log-dirty command disable
12952670590953: XenPCI --> XenPci_BalloonEnableHandler
12952670590968: XenPCI  receive balloon enable = (1308226317.03:0)
12952670590968: XenPCI     Balloon enable change to 0
12952670590968: XenPCI  successfull got BalloonEnableChangedEvent
12952670590968: XenPCI <-- XenPci_BalloonEnableHandler
Log-dirty command enable
12952670684000: XenPCI --> XenPci_ShutdownHandler
12952670684000: XenPCI     Shutdown value = suspend
12952670684000: XenPCI     Suspend detected
12952670684000: XenPCI <-- XenPci_ShutdownHandler
12952670684015: XenPCI --> XenPci_SuspendResume
12952670684015: XenPCI --> XenPci_IoWatch
12952670684015: XenPCI --> PvMemoryInfoWriteOne
12952670684015: XenPCI     found pending read - MinorFunction = 0, length = 1024
12952670684015: XenPCI <-- XenBus_ProcessReadRequest
12952670684015: XenPCI <-- XenPci_IoWatch
12952670684015: XenPCI --> XenPci_EvtIoDefault
12952670684015: XenPCI --> XenBus_EvtIoWrite
12952670684015: XenPCI     33 bytes of write buffer remaining
12952670684015: XenPCI <-- PvMemoryInfoWriteOne
12952670684015: XenPCI <-- XenPci_SuspendResume
12952670684031: XenPCI     completing request with length 33
12952670684031: XenPCI <-- XenBus_EvtIoWrite
12952670684046: XenPCI <-- XenPci_EvtIoDefault
12952670684046: XenPCI --> XenPci_EvtIoDefault
12952670684046: XenPCI --> XenBus_EvtIoRead
12952670684046: XenPCI     found pending read
12952670684046: XenPCI <-- XenBus_ProcessReadRequest
12952670684046: XenPCI <-- XenBus_EvtIoRead
12952670684046: XenPCI <-- XenPci_EvtIoDefault
12952670684046: XenPCI --> XenPci_EvtIoDefault
12952670684046: XenPCI --> XenBus_EvtIoRead
12952670684046: XenPCI     no data to read
12952670684046: XenPCI <-- XenBus_EvtIoRead
12952670684046: XenPCI <-- XenPci_EvtIoDefault

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-16 14:24       ` PV resume failed after self migration failed MaoXiaoyun
@ 2011-06-17  1:34         ` James Harper
  2011-06-19 16:46           ` MaoXiaoyun
  0 siblings, 1 reply; 13+ messages in thread
From: James Harper @ 2011-06-17  1:34 UTC (permalink / raw)
  To: MaoXiaoyun, xen devel

> Hi James:
> 
>     I found another issue during testing.
>     When migrating a VM from host A to B, the process is:
>     1) memory copy
>     2) suspend the VM on A
>     3) transfer some other state to B, such as TSC state.
> 
>     If step (3) fails, the VM will be resumed on host A.
>     Well, from the test's point of view, the resume cannot complete
> successfully.
> 
>     Our test migrates 12 VMs between two hosts over and over again.
>     The attached log does exactly the following:
>     1) Migrate from host B, so first resuming (line 25 to 474)
>     2) Later migrating back to B, so suspending (line 474 to line 1116)
>     3) Migration failed, so entering resuming again (line 1118 to
> line 1399).
> 
>     Line 1383 is waiting for the vbd state to change but gets no
> response, and line 1392 shows an "Unacknowledged event word ". From the
> log, it looks like this is due to XenVbd_HwScsiResetBus at line 1397.
> 
>     Question: what triggers XenVbd_HwScsiResetBus during resuming?
> 

Windows will invoke a SCSI bus reset if a request takes too long to
complete (5 seconds, I think). It will also issue a reset when a crash
dump starts, just to make sure all previous requests are flushed.

James

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-17  1:34         ` James Harper
@ 2011-06-19 16:46           ` MaoXiaoyun
  2011-06-19 23:11             ` James Harper
  0 siblings, 1 reply; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-19 16:46 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 2074 bytes --]


 

> Subject: RE: PV resume failed after self migration failed
> Date: Fri, 17 Jun 2011 11:34:09 +1000
> From: james.harper@bendigoit.com.au
> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> 
> > Hi James:
> > 
> > I found another issue during test.
> > When migrating VM from host A to B, it contains following process.
> > 1) memory copy
> > 2) suspend VM on A
> > 3) transfer some other thing to B such as tsc state.
> > 
> > If step (3) failed, VM will be resumed on host A.
> > Well, from the test of view, the resume cannot be completed
> successfully.
> > 
> > Out test is migrating 12VMs between twn host over again and again.
> > The attached log doing exactly below things
> > 1) Migrate from Host B, so fisrt resuming (line 25 to 474)
> > 2) Later want to migrating to B again, so suspending (line 474 to
> line
> > 1116)
> > 3) Migrating failed and enter into resuming again (line 1118 to
> line
> > 1399).
> > 
> > line 1383 is waiting vbd state to be changed but can not get the
> > response.
> > And 1392 show a "Unacknowledged event word ". From the log, it looks
> like
> > this is due to XenVbd_HwScsiResetBus in line 1397.
> > 
> > Question is what trigger the XenVbd_HwScsiResetBus during
> resuming?
> > 
> 
> Windows will invoke a scsi reset if a request takes too long to complete
> (5 seconds I think). It will also issue a reset when a crash dump
> starts, just to make sure all previous requests are flushed etc.
> 

Thanks for the help, and sorry for the late response; I was away last weekend.
 
If the VBD is already suspended, any further IO we try to issue will find the vbd state is not SR_STATE_RUNNING,
and will therefore call ScsiPortNotification to report RequestComplete, right?
 
If so, I have a hypothesis:
at time t the VBD is suspended and an IO is being issued, but before it calls ScsiPortNotification the whole
VM is paused (VCPUs paused, the last step of suspend). If the VM resumes 10 or more seconds later, will the driver
find that the IO issued earlier has already timed out, and trigger XenVbd_HwScsiResetBus?
 
 
 
thanks.
> James
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 2770 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-19 16:46           ` MaoXiaoyun
@ 2011-06-19 23:11             ` James Harper
  2011-06-21 12:13               ` MaoXiaoyun
  0 siblings, 1 reply; 13+ messages in thread
From: James Harper @ 2011-06-19 23:11 UTC (permalink / raw)
  To: MaoXiaoyun, xen devel

> >
> > Windows will invoke a scsi reset if a request takes too long to
complete
> > (5 seconds I think). It will also issue a reset when a crash dump
> > starts, just to make sure all previous requests are flushed etc.
> >
> Thanks for the help, sorry for the late response; I was away last
> weekend.
> 
> If VBD is already suspended, all further IO try to issue will find vbd
states
> is not SR_STATE_RUNNING,
> thus calls ScsiPortNotification to notify RequestComplete, right?
> 
> If so, I have an assumption.
> at time t, VBD is suspend, an IO is try to issue, but before it calls
> ScsiPortNotificaiton, the whole
> VM paused(VCPU paused, last step of step),  10 or more seconds later,
if VM
> resumes,  will the driver
> found the IO mentioned before has already timed out and trigger
> XenVbd_HwScsiResetBus?
> 

The xenvbd driver doesn't do any timeout, windows does the timeout and
tells xenvbd to reset. I haven't tested the scenario you describe very
recently, and xenvbd is now two different drivers, one for scsiport (<=
2003) and one for storport (>= Vista), so there could be bugs in either.

James

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-19 23:11             ` James Harper
@ 2011-06-21 12:13               ` MaoXiaoyun
  2011-06-22  4:06                 ` James Harper
  0 siblings, 1 reply; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-21 12:13 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 3278 bytes --]


> Subject: RE: PV resume failed after self migration failed
> Date: Mon, 20 Jun 2011 09:11:59 +1000
> From: james.harper@bendigoit.com.au
> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> 
> > >
> > > Windows will invoke a scsi reset if a request takes too long to
> complete
> > > (5 seconds I think). It will also issue a reset when a crash dump
> > > starts, just to make sure all previous requests are flushed etc.
> > >
> > Thanks for the help, sorry for the late response; I was away last
> > weekend.
> > 
> > If VBD is already suspended, all further IO try to issue will find vbd
> states
> > is not SR_STATE_RUNNING,
> > thus calls ScsiPortNotification to notify RequestComplete, right?
> > 
> > If so, I have an assumption.
> > at time t, VBD is suspend, an IO is try to issue, but before it calls
> > ScsiPortNotificaiton, the whole
> > VM paused(VCPU paused, last step of step), 10 or more seconds later,
> if VM
> > resumes, will the driver
> > found the IO mentioned before has already timed out and trigger
> > XenVbd_HwScsiResetBus?
> > 
> 
> The xenvbd driver doesn't do any timeout, windows does the timeout and
> tells xenvbd to reset. I haven't tested the scenario you describe very
> recently, and xenvbd is now two different drivers, one for scsiport (<=
> 2003) and one for storport (>= Vista), so there could be bugs in either.
> 
The bug can be reproduced on a 2003 32-bit system; we are using the scsiport driver.
I put some logging into XenVbd_HwScsiResetBus to see whether there were uncompleted SRBs (like below),
but the log never appeared when XenVbd_HwScsiResetBus was called, so no IO was in the queue.
 
for (i = 0; i < MAX_SHADOW_ENTRIES; i++)
{
  if (xvdd->shadows[i].srb)
  {
    KdPrint((__DRIVER_NAME "    in-flight srb %p with status SRB_STATUS_BUS_RESET\n", xvdd->shadows[i].srb));
  }
}
 
 
Right now, I don't think it is related to the bus reset. From the log, it looks like an event is never acked:
PV resume is waiting for xppdd->device_state.suspend_resume_state_fdo to change, but the change never happens.
 
that is :  XenPci_Pdo_Resume->XenPci_Pdo_ChangeSuspendState(device, SR_STATE_RESUMING)->
-> KeWaitForSingleObject(&xpdd->pdo_suspend_event, Executive, KernelMode, FALSE, NULL);

The change is supposed to happen in XenVbd_HwScsiInterrupt,
but for some reason the if statement there (xenvbd_scsiport.c:920) returns FALSE:
 
/* in dump mode I think we get called on a timer, not by an actual IRQ */
if (!dump_mode && !xvdd->vectors.EvtChn_AckEvent(xvdd->vectors.context, xvdd->event_channel, &last_interrupt))
  return FALSE; /* interrupt was not for us */
 
Since the event is not acked, EvtChn_EvtInterruptIsr prints a log line like "Unacknowledged event word = 0, val = 00000200":
 
12952670574140: XenPCI --> XenPci_BalloonEnableHandler
12952670574140: XenPCI     Unacknowledged event word = 0, val = 00000200  
12952670574140: XenPCI  receive balloon enable = (1308226300.21:0)
12952670574156: XenPCI     Balloon enable change to 0
12952670574156: XenPCI  successfull got BalloonEnableChangedEvent
 
I will take a closer look at EvtChn_EvtInterruptIsr to understand this better. Thanks.

> James
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 4725 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-21 12:13               ` MaoXiaoyun
@ 2011-06-22  4:06                 ` James Harper
  2011-06-22  5:21                   ` MaoXiaoyun
  0 siblings, 1 reply; 13+ messages in thread
From: James Harper @ 2011-06-22  4:06 UTC (permalink / raw)
  To: MaoXiaoyun, xen devel

> >
> > The xenvbd driver doesn't do any timeout, windows does the timeout
and
> > tells xenvbd to reset. I haven't tested the scenario you describe
very
> > recently, and xenvbd is now two different drivers, one for scsiport
(<=
> > 2003) and one for storport (>= Vista), so there could be bugs in
either.
> >
> 
> The bug can be reproduced in 2003 32bit system. We are using scsi
driver.
> I put some log in XenVbd_HwScsiResetBus to see if there are not
completed
> srb(Like below)
> but I didn't see the log when XenVbd_HwScsiResetBus called. So No IO
is in
> queue.

Just to confirm, is this the issue that only happens when the migration
fails in xen and is cancelled?

James

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-22  4:06                 ` James Harper
@ 2011-06-22  5:21                   ` MaoXiaoyun
  2011-06-22  5:58                     ` MaoXiaoyun
  0 siblings, 1 reply; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-22  5:21 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 1666 bytes --]


 

> Subject: RE: PV resume failed after self migration failed
> Date: Wed, 22 Jun 2011 14:06:18 +1000
> From: james.harper@bendigoit.com.au
> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> 
> > >
> > > The xenvbd driver doesn't do any timeout, windows does the timeout
> and
> > > tells xenvbd to reset. I haven't tested the scenario you describe
> very
> > > recently, and xenvbd is now two different drivers, one for scsiport
> (<=
> > > 2003) and one for storport (>= Vista), so there could be bugs in
> either.
> > >
> > 
> > The bug can be reproduced in 2003 32bit system. We are using scsi
> driver.
> > I put some log in XenVbd_HwScsiResetBus to see if there are not
> completed
> > srb(Like below)
> > but I didn't see the log when XenVbd_HwScsiResetBus called. So No IO
> is in
> > queue.
> 
> Just to confirm, is this the issue that only happens when the migration
> fails in xen and is cancelled?
> 
Exactly.
I've noticed some differences in the log.
 
In a normal resume, the log shows the event ports being assigned like below:
pdo_event_channel = 5 (Notifying event channel 5)
suspend event channel = 6
XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 7  (for VBD)
XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 8  (VIF)
 
When the guest resumes locally from suspend (that is, the migration failed in xen after the guest
had already suspended, so it needs to resume):
 
pdo_event_channel = 7 (Notifying event channel 7)
suspend event channel = 8
XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 9 (vif)
 
The VBD port is never allocated, since the pdo is waiting for the fdo state to change.
 
It looks like ports 5 and 6 are still occupied, or pdo_event_channel is bound twice?

> James
> 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 2398 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-22  5:21                   ` MaoXiaoyun
@ 2011-06-22  5:58                     ` MaoXiaoyun
  2011-06-24 10:32                       ` MaoXiaoyun
  0 siblings, 1 reply; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-22  5:58 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 3046 bytes --]







 

> Subject: RE: PV resume failed after self migration failed
> Date: Wed, 22 Jun 2011 14:06:18 +1000
> From: james.harper@bendigoit.com.au
> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> 
> > >
> > > The xenvbd driver doesn't do any timeout, windows does the timeout
> and
> > > tells xenvbd to reset. I haven't tested the scenario you describe
> very
> > > recently, and xenvbd is now two different drivers, one for scsiport
> (<=
> > > 2003) and one for storport (>= Vista), so there could be bugs in
> either.
> > >
> > 
> > The bug can be reproduced in 2003 32bit system. We are using scsi
> driver.
> > I put some log in XenVbd_HwScsiResetBus to see if there are not
> completed
> > srb(Like below)
> > but I didn't see the log when XenVbd_HwScsiResetBus called. So No IO
> is in
> > queue.
> 
> Just to confirm, is this the issue that only happens when the migration
> fails in xen and is cancelled?
> 
>Exactly.
>I've noticed some differences in the log.
> 
>In a normal resume, the log shows the event ports being assigned like below:
>pdo_event_channel = 5 (Notifying event channel 5)
>suspend event channel = 6
>XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 7  (for VBD)
>XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 8  (VIF)
> 
>When the guest resumes locally from suspend (that is, the migration failed in xen after the guest
>had already suspended, so it needs to resume):
> 
>pdo_event_channel = 7 (Notifying event channel 7)
>suspend event channel = 8
>XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 9 (vif)
> 
>The VBD port is never allocated, since the pdo is waiting for the fdo state to change.
> 
>It looks like ports 5 and 6 are still occupied, or pdo_event_channel is bound twice?
 
It works when I unbind and close pdo_event_channel and suspend_evtchn before suspending:
===================================================================
--- xenpci_fdo.c (revision 4304)
+++ xenpci_fdo.c (working copy)
@@ -656,6 +656,12 @@
     }
     WdfChildListEndIteration(child_list, &child_iterator);
 
+    EvtChn_Unbind(xpdd, xpdd->pdo_event_channel);
+    EvtChn_Close(xpdd, xpdd->pdo_event_channel);
+
+    EvtChn_Unbind(xpdd, xpdd->suspend_evtchn);
+    EvtChn_Close(xpdd, xpdd->suspend_evtchn);
+    
     XenBus_Suspend(xpdd);
     EvtChn_Suspend(xpdd);
     XenPci_HighSync(XenPci_Suspend0, XenPci_SuspendN, xpdd);
 
 
BTW, is there a missing "break" in XenVbd_HwScsiInterrupt (xenvbd_scsiport.c:928), just before
the default case? It looks harmless, though.
 

924       case SR_STATE_RUNNING:
925         KdPrint((__DRIVER_NAME " New pdo state %d\n", suspend_resume_state_pdo));
926         xvdd->device_state->suspend_resume_state_fdo = suspend_resume_state_pdo;
927         xvdd->vectors.EvtChn_Notify(xvdd->vectors.context, xvdd->device_state->pdo_event_channel);
928         ScsiPortNotification(NextRequest, DeviceExtension);
929       default:
930         KdPrint((__DRIVER_NAME " New pdo state %d\n", suspend_resume_state_pdo));
931         xvdd->device_state->suspend_resume_state_fdo = suspend_resume_state_pdo;
932         xvdd->vectors.EvtChn_Notify(xvdd->vectors.context, xvdd->device_state->pdo_event_channel);
933         break;
 
Thanks.
>> James
>> 
 
 

  		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 5960 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: PV resume failed after self migration failed
  2011-06-22  5:58                     ` MaoXiaoyun
@ 2011-06-24 10:32                       ` MaoXiaoyun
  0 siblings, 0 replies; 13+ messages in thread
From: MaoXiaoyun @ 2011-06-24 10:32 UTC (permalink / raw)
  To: james.harper, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 4354 bytes --]


Hi James:
 
       In addition, I think the if statement in XenVbd_HwScsiResetBus might need to use
suspend_resume_state_fdo rather than suspend_resume_state_pdo.
       suspend_resume_state_pdo is changed to SR_STATE_SUSPENDING while there are still
unfinished IO requests, so when the reset happens those IOs could still be completed.
 
       What do you think?
       Thanks.
 
 
static BOOLEAN
XenVbd_HwScsiResetBus(PVOID DeviceExtension, ULONG PathId)
{
  PXENVBD_DEVICE_DATA xvdd = DeviceExtension;
  srb_list_entry_t *srb_entry;
  PSCSI_REQUEST_BLOCK srb;
  int i;
  UNREFERENCED_PARAMETER(DeviceExtension);
  UNREFERENCED_PARAMETER(PathId);
  FUNCTION_ENTER();
  KdPrint((__DRIVER_NAME "     IRQL = %d\n", KeGetCurrentIrql()));
  if (xvdd->ring_detect_state == RING_DETECT_STATE_COMPLETE && xvdd->device_state->suspend_resume_state_pdo == SR_STATE_RUNNING) *********this line
  {
    while((srb_entry = (srb_list_entry_t *)RemoveHeadList(&xvdd->srb_list)) != (srb_list_entry_t *)&xvdd->srb_list)
    {
      srb = srb_entry->srb;
      srb->SrbStatus = SRB_STATUS_BUS_RESET;
      KdPrint((__DRIVER_NAME "     completing queued SRB %p with status SRB_STATUS_BUS_RESET\n", srb));
      ScsiPortNotification(RequestComplete, xvdd, srb);
    }
 
 
>> Subject: RE: PV resume failed after self migration failed
>> Date: Wed, 22 Jun 2011 14:06:18 +1000
>> From: james.harper@bendigoit.com.au
>> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
>> 
>> > >
>> > > The xenvbd driver doesn't do any timeout, windows does the timeout
>> and
>> > > tells xenvbd to reset. I haven't tested the scenario you describe
>> very
>> > > recently, and xenvbd is now two different drivers, one for scsiport
>> (<=
>> > > 2003) and one for storport (>= Vista), so there could be bugs in
>> either.
>> > >
>> > 
>> > The bug can be reproduced in 2003 32bit system. We are using scsi
>> driver.
>> > I put some log in XenVbd_HwScsiResetBus to see if there are not
>> completed
>> > srb(Like below)
>> > but I didn't see the log when XenVbd_HwScsiResetBus called. So No IO
>> is in
>> > queue.
>> 
>> Just to confirm, is this the issue that only happens when the migration
>> fails in xen and is cancelled?
>> 
>Exactly.
>I've noticed some differences in the log.
> 
>In a normal resume, the log shows the event ports being assigned like below:
>pdo_event_channel = 5 (Notifying event channel 5)
>suspend event channel = 6
>XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 7  (for VBD)
>XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 8  (VIF)
> 
>When the guest resumes locally from suspend (that is, the migration failed in xen after the guest
>had already suspended, so it needs to resume):
> 
>pdo_event_channel = 7 (Notifying event channel 7)
>suspend event channel = 8
>XEN_INIT_TYPE_EVENT_CHANNEL - event-channel = 9 (vif)
> 
>The VBD port is never allocated, since the pdo is waiting for the fdo state to change.
> 
>It looks like ports 5 and 6 are still occupied, or pdo_event_channel is bound twice?
> 
>it works when I unbind pdo_event_channel & suspend_evtchn. 
>===================================================================
>--- xenpci_fdo.c (revision 4304)
>+++ xenpci_fdo.c (working copy)
>@@ -656,6 +656,12 @@
>     }
>     WdfChildListEndIteration(child_list, &child_iterator);
> 
>+    EvtChn_Unbind(xpdd, xpdd->pdo_event_channel);
>+    EvtChn_Close(xpdd, xpdd->pdo_event_channel);
>+
>+    EvtChn_Unbind(xpdd, xpdd->suspend_evtchn);
>+    EvtChn_Close(xpdd, xpdd->suspend_evtchn);
>+    
>     XenBus_Suspend(xpdd);
>     EvtChn_Suspend(xpdd);
>     XenPci_HighSync(XenPci_Suspend0, XenPci_SuspendN, xpdd);
> 
> 
>BTW, is there a missing "break" in XenVbd_HwScsiInterrupt,  xenvbd_scsiport.c:928
>before default? Well, it is harmless.
> 
>924 case SR_STATE_RUNNING: 
>925 KdPrint((__DRIVER_NAME " New pdo state %d\n", suspend_resume_state_pdo)); 
>926 xvdd->device_state->suspend_resume_state_fdo = suspend_resume_state_pdo; 
>927 xvdd->vectors.EvtChn_Notify(xvdd->vectors.context, xvdd->device_state->pdo_event_channel); 
>928 ScsiPortNotification(NextRequest, DeviceExtension); 
>929 default: 
>930 KdPrint((__DRIVER_NAME " New pdo state %d\n", suspend_resume_state_pdo)); 
>931 xvdd->device_state->suspend_resume_state_fdo = suspend_resume_state_pdo; 
>932 xvdd->vectors.EvtChn_Notify(xvdd->vectors.context, xvdd->device_state->pdo_event_channel); 
>933 break; 
> 
>Thanks.
>>> James
>>> 
>  		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 6203 bytes --]


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2011-06-24 10:32 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <BAY0-MC4-F15zXiPuZe00229bef@bay0-mc4-f15.Bay0.hotmail.com>
2011-06-15 12:05 ` Win2003R2 64 suspend failed in self live migration MaoXiaoyun
2011-06-15 12:21   ` James Harper
2011-06-15 23:20   ` James Harper
2011-06-16 10:28     ` MaoXiaoyun
2011-06-16 14:24       ` PV resume failed after self migration failed MaoXiaoyun
2011-06-17  1:34         ` James Harper
2011-06-19 16:46           ` MaoXiaoyun
2011-06-19 23:11             ` James Harper
2011-06-21 12:13               ` MaoXiaoyun
2011-06-22  4:06                 ` James Harper
2011-06-22  5:21                   ` MaoXiaoyun
2011-06-22  5:58                     ` MaoXiaoyun
2011-06-24 10:32                       ` MaoXiaoyun
