* VM hung after running sometime
       [not found] ` <C8ACD97B.1256D%keir.fraser@eu.citrix.com>
@ 2010-09-10 11:01   ` MaoXiaoyun
  2010-09-19 10:37     ` MaoXiaoyun
  2010-10-15 12:43     ` Domain 0 stop response on frequently reboot VMS MaoXiaoyun
  0 siblings, 2 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-10 11:01 UTC (permalink / raw)
  To: keir.fraser, jbeulich; +Cc: xen devel




Hi Keir & Jan:
 
         Good news: the patch for the Xen panic bug works. Tests on two servers have been running
happily for almost two days. I will stop the test if it hasn't failed by tomorrow evening.
         I appreciate all the help you and Jan have offered.
 
         By now I think I've collected enough information on the VM hang problem to discuss it
with you two. Basically, we see two kinds of VM hang (the VMs are all HVMs, and all of them
run well for some time before actually hanging).
 
1. We have two VMs in this situation. What we know clearly is:

(1) The *times* column in the "xm ls" output never changes after the VM hangs.
Of the three VMs below, E2EZYXVM-56-W2.786.92 is hung (its "times" value is frozen at 9339.3); the other two work well.
 
         E2EZYXVM-56-W1.786.92                        2  1024     2     -b----  29009.0
         E2EZYXVM-56-W2.786.92                        3  1024     2     ------   9339.3
         E2EZYXVM-56-W3.786.92                        4  1024     2     -b----  27538.6
 
(2) The call trace in the xenctx output is always the same and never changes across xenctx runs.
The call trace looks like:
Call Trace:
  [<80708a5a>]  <--
  [<f76f1789>] 
  [<85f3f1f0>] 
  [<861fb0e8>] 
  [<861fb370>] 
  [<80558188>] 
  [<f76f3c1f>] 
  [<861fb370>] 
  [<85f3f1f0>] 
  [<861fb0e8>]
 
2. We have another two VMs in this situation. What we know is:
(1) The *times* column in the "xm ls" output is much higher than for the other VMs, and it grows very fast.

Of the three VMs below, E2EZYXVM-138-W5.827.92 is hung (its "times" value, 58262.8, grows every second);
the other two work well.

 
E2EZYXVM-138-W4.827.92                       5  1024     2     r-----  27692.5
E2EZYXVM-138-W5.827.92                       6  1024     2     r-----  58262.8
E2EZYXVM-138-W6.827.92                       7  1024     2     r-----  26954.3
 
(2) The call trace in the xenctx output is always the same and never changes across xenctx runs.
Call Trace:
  [<80708a66>]  <--
  [<f7c2f072>] 
  [<861fa9dc>] 
  [<805582a8>] 
  [<f7c318a5>] 
  [<861fa9dc>] 
  [<861fa9dc>] 
  [<861fa0e0>] 
  [<861faa08>] 
 

In addition, we have another VM that hung with a black screen, but I cannot get any abnormal
information from "xm li" or xenctx.
 
Earlier this afternoon I tried to decode those back traces into symbols (the HVM guests run
Windows XP) but failed. I am wondering if I could trigger a Domain U crash and have the dump
analyzed in windbg. Basically I am trying to find out what exactly the VM was doing before it hung.
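
One way I am considering (assuming the standard Xen 4.0 toolstack; I have not verified this
exact workflow yet):

# ELF core of the guest, for Xen-side analysis (not a windbg format):
xm dump-core <domid> /path/to/guest.core

# For windbg: inject an NMI so Windows bugchecks and writes its own memory.dmp
# (the guest needs NMICrashDump=1 under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl):
xm trigger <domid> nmi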
 
Looking forward to your suggestion, thanks. 
 

 
> Date: Wed, 8 Sep 2010 06:11:07 -0700
> Subject: Re: [Xen-devel] Xen-unstable panic: FATAL PAGE FAULT
> From: keir.fraser@eu.citrix.com
> To: tinnycloud@hotmail.com; JBeulich@novell.com
> 
> On 08/09/2010 02:03, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> 
> > Here is my plan. I hope I can find a way to make it reproduce more easily (right
> > now the hang shows up with very small probability). Also, I will learn to use
> > xentrace and xenalyze to
> > help locate the bug.
> > I wonder if there is a way to dump the guest domain's info, or at
> > least find out 
> > where the VM hangs, or get a backtrace.
> 
> There is a tool called xenctx in the tools/xentrace/ directory. It will dump
> registers and the stack for a specified domain and vcpu. I think it may even be
> able to dump symbolic call traces for Linux kernels, if you can pass it the
> vmlinux file.
> 
> -- Keir
> 
> 
 		 	   		  


* RE: VM hung after running sometime
  2010-09-10 11:01   ` VM hung after running sometime MaoXiaoyun
@ 2010-09-19 10:37     ` MaoXiaoyun
  2010-09-19 11:49       ` Keir Fraser
  2010-10-15 12:43     ` Domain 0 stop response on frequently reboot VMS MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-19 10:37 UTC (permalink / raw)
  To: keir.fraser; +Cc: xen devel




Hi Keir:

       Regarding the HVM hang: according to our recent tests, it turns out this issue still exists.
       While going through the code, I observed something abnormal and need your help.

      We've noticed that when a VM hangs, its VCPU flags value is always 4, which indicates
_VPF_blocked_in_xen; that flag is set in prepare_wait_on_xen_event_channel. I've noticed that
Domain U sets up an event channel with domain 0 for each VCPU, and qemu-dm selects on the
event fd.

      notify_via_xen_event_channel is called when Domain U issues a request. qemu-dm then gets
the event and invokes cpu_handle_ioreq (/xen-4.0.0/tools/ioemu-qemu-xen/i386-dm/helper2.c)
->cpu_get_ioreq()->xc_evtchn_unmask(). evtchn_unmask operates on evtchn_pending, evtchn_mask,
and evtchn_pending_sel.

      My confusion is about notify_via_xen_event_channel()->evtchn_set_pending: the
**evtchn_set_pending here is not locked**, yet it also operates on evtchn_pending, evtchn_mask,
and evtchn_pending_sel.

      I'm afraid this race might cause an event from dom U to qemu-dm to go undelivered, but I
am not sure, since I still don't fully understand where evtchn_mask is set and where
evtchn_pending is cleared.
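
For reference, the dom0 consumer side works roughly like this (a simplified sketch of the
Xen 4.0 libxc event-channel calls that qemu-dm uses; error handling is omitted and the loop
body is illustrative only):

----------------------------sketch: dom0 event consumer----------------------

/* Sketch of a dom0 event consumer such as qemu-dm (Xen 4.0 libxc).
 * Real code select()s on several fds and checks every return value. */
#include <xenctrl.h>

static void consume_events(int domid, evtchn_port_t remote_port)
{
    int xce = xc_evtchn_open();                   /* opens /dev/xen/evtchn */
    evtchn_port_t local =
        xc_evtchn_bind_interdomain(xce, domid, remote_port);
    int fd = xc_evtchn_fd(xce);                   /* the fd to select() on */

    for (;;) {
        /* ... select()/poll() on fd until it becomes readable ... */
        evtchn_port_t port = xc_evtchn_pending(xce);  /* read pending port;
                                                         the driver masks it */
        if (port == local) {
            /* ... process the ioreq in the shared page ... */
            xc_evtchn_unmask(xce, port);          /* re-enable delivery */
        }
    }
}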

 

-------------------------notify_via_xen_event_channel-------------------------------------

 989 void notify_via_xen_event_channel(int lport)
 990 {
 991     struct evtchn *lchn, *rchn;
 992     struct domain *ld = current->domain, *rd;
 993     int            rport;
 994 
 995     spin_lock(&ld->event_lock);
 996 
 997     ASSERT(port_is_valid(ld, lport));
 998     lchn = evtchn_from_port(ld, lport);
 999     ASSERT(lchn->consumer_is_xen);
1000 
1001     if ( likely(lchn->state == ECS_INTERDOMAIN) )
1002     {
1003         rd    = lchn->u.interdomain.remote_dom;
1004         rport = lchn->u.interdomain.remote_port;
1005         rchn  = evtchn_from_port(rd, rport);
1006         evtchn_set_pending(rd->vcpu[rchn->notify_vcpu_id], rport);
1007     }
1008 
1009     spin_unlock(&ld->event_lock);
1010 }

      

----------------------------evtchn_set_pending----------------------

 535 static int evtchn_set_pending(struct vcpu *v, int port)
 536 {
 537     struct domain *d = v->domain;
 538     int vcpuid;
 539 
 540     /*
 541      * The following bit operations must happen in strict order.
 542      * NB. On x86, the atomic bit operations also act as memory barriers.
 543      * There is therefore sufficiently strict ordering for this architecture --
 544      * others may require explicit memory barriers.
 545      */
 546 
 547     if ( test_and_set_bit(port, &shared_info(d, evtchn_pending)) )
 548         return 1;
 549 
 550     if ( !test_bit        (port, &shared_info(d, evtchn_mask)) &&
 551          !test_and_set_bit(port / BITS_PER_EVTCHN_WORD(d),
 552                            &vcpu_info(v, evtchn_pending_sel)) )
 553     {
 554         vcpu_mark_events_pending(v);
 555     }
 556 
 557     /* Check if some VCPU might be polling for this event. */
 558     if ( likely(bitmap_empty(d->poll_mask, d->max_vcpus)) )
 559         return 0;
 560 
 561     /* Wake any interested (or potentially interested) pollers. */
 562     for ( vcpuid = find_first_bit(d->poll_mask, d->max_vcpus);
 563           vcpuid < d->max_vcpus;
 564           vcpuid = find_next_bit(d->poll_mask, d->max_vcpus, vcpuid+1) )
 565     {
 566         v = d->vcpu[vcpuid];
 567         if ( ((v->poll_evtchn <= 0) || (v->poll_evtchn == port)) &&
 568              test_and_clear_bit(vcpuid, d->poll_mask) )
 569         {
 570             v->poll_evtchn = 0;
 571             vcpu_unblock(v);
   

--------------------------------------evtchn_unmask------------------------------

 764 
 765 int evtchn_unmask(unsigned int port)
 766 {
 767     struct domain *d = current->domain;
 768     struct vcpu   *v;
 769 
 770     spin_lock(&d->event_lock);
 771 
 772     if ( unlikely(!port_is_valid(d, port)) )
 773     {
 774         spin_unlock(&d->event_lock);
 775         return -EINVAL;
 776     }
 777 
 778     v = d->vcpu[evtchn_from_port(d, port)->notify_vcpu_id];
 779 
 780     /*
 781      * These operations must happen in strict order. Based on
 782      * include/xen/event.h:evtchn_set_pending(). 
 783      */
 784     if ( test_and_clear_bit(port, &shared_info(d, evtchn_mask)) &&
 785          test_bit          (port, &shared_info(d, evtchn_pending)) &&
 786          !test_and_set_bit (port / BITS_PER_EVTCHN_WORD(d),
 787                             &vcpu_info(v, evtchn_pending_sel)) )
 788     {
 789         vcpu_mark_events_pending(v);
 790     }
 791 
 792     spin_unlock(&d->event_lock);
 793 
 794     return 0;
 795 }                           

 ----------------------------cpu_get_ioreq-------------------------

260 static ioreq_t *cpu_get_ioreq(void)
261 {
262     int i;
263     evtchn_port_t port;
264 
265     port = xc_evtchn_pending(xce_handle);
266     if (port != -1) {
267         for ( i = 0; i < vcpus; i++ )
268             if ( ioreq_local_port[i] == port )
269                 break;
270 
271         if ( i == vcpus ) {
272             fprintf(logfile, "Fatal error while trying to get io event!\n");
273             exit(1);
274         }
275 
276         // unmask the wanted port again
277         xc_evtchn_unmask(xce_handle, port);
278 
279         //get the io packet from shared memory
280         send_vcpu = i;
281         return __cpu_get_ioreq(i);
282     }
283 
284     //read error or read nothing
285     return NULL;
286 }
287 

       
 		 	   		  


* Re: VM hung after running sometime
  2010-09-19 10:37     ` MaoXiaoyun
@ 2010-09-19 11:49       ` Keir Fraser
  2010-09-19 12:21         ` Zhang, Yang Z
  2010-09-20  6:00         ` MaoXiaoyun
  0 siblings, 2 replies; 46+ messages in thread
From: Keir Fraser @ 2010-09-19 11:49 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel

On 19/09/2010 11:37, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> Hi Keir:
>  
>        Regarding the HVM hang: according to our recent tests, it turns out this
> issue still exists.
>        While going through the code, I observed something abnormal and need your
> help.
>  
>       We've noticed that when a VM hangs, its VCPU flags value is always 4, which
> indicates _VPF_blocked_in_xen; that flag is set in prepare_wait_on_xen_event_channel.
> I've noticed that Domain U sets up an event channel with domain 0 for each VCPU,
> and qemu-dm selects on the event fd.
>  
>       notify_via_xen_event_channel is called when Domain U issues a request. qemu-dm
> then gets the event and invokes
> cpu_handle_ioreq (/xen-4.0.0/tools/ioemu-qemu-xen/i386-dm/helper2.c)
> ->cpu_get_ioreq()->xc_evtchn_unmask(). evtchn_unmask operates on evtchn_pending,
> evtchn_mask, and evtchn_pending_sel.
>  
>       My confusion is about notify_via_xen_event_channel()->evtchn_set_pending: the
> **evtchn_set_pending here is not locked**, yet it also operates on evtchn_pending,
> evtchn_mask, and evtchn_pending_sel.

Atomic ops are used to make the operations on evtchn_pending, evtchn_mask,
and evtchn_sel concurrency safe. Note that the locking from
notify_via_xen_event_channel() is just the same as, say, from evtchn_send():
the local domain's (ie. DomU's, in this case) event_lock is held, while the
remote domain's (ie. dom0's, in this case) does not need to be held.
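
To make the argument concrete, here is a simplified illustration of the pattern (plain C11
atomics standing in for Xen's test_and_set_bit()/test_and_clear_bit(); this is not the actual
Xen code, and the selector word is simplified to a single bit):

/* Both sides use single atomic read-modify-write operations on the
 * shared words, so they can interleave arbitrarily on different CPUs
 * without a common lock and no event is lost. */
#include <stdatomic.h>

static _Atomic unsigned long evtchn_pending, evtchn_mask, pending_sel;

/* cf. evtchn_set_pending(): mark the port pending first, then check the mask. */
static void send_event(int port)
{
    unsigned long bit = 1UL << port;

    if (atomic_fetch_or(&evtchn_pending, bit) & bit)
        return;                                   /* already pending */
    if (!(atomic_load(&evtchn_mask) & bit) &&
        !(atomic_fetch_or(&pending_sel, 1UL) & 1UL))
        ; /* vcpu_mark_events_pending(v); */
}

/* cf. evtchn_unmask(): clear the mask first, then re-check pending, so an
 * event set while the port was masked is still noticed. */
static void unmask_event(int port)
{
    unsigned long bit = 1UL << port;

    if ((atomic_fetch_and(&evtchn_mask, ~bit) & bit) &&
        (atomic_load(&evtchn_pending) & bit) &&
        !(atomic_fetch_or(&pending_sel, 1UL) & 1UL))
        ; /* vcpu_mark_events_pending(v); */
}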

If your domU is stuck in state _VPF_blocked_in_xen, it probably means
qemu-dm is toast. I would investigate whether the qemu-dm process is still
present, still doing useful work, etc etc.

 -- Keir


* RE: VM hung after running sometime
  2010-09-19 11:49       ` Keir Fraser
@ 2010-09-19 12:21         ` Zhang, Yang Z
  2010-09-20  6:00         ` MaoXiaoyun
  1 sibling, 0 replies; 46+ messages in thread
From: Zhang, Yang Z @ 2010-09-19 12:21 UTC (permalink / raw)
  To: Keir Fraser, MaoXiaoyun; +Cc: xen devel

I have also met HVM guest hangs in our stress testing. For details, please see the bugzilla entry:
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1664


best regards
yang



* RE: VM hung after running sometime
  2010-09-19 11:49       ` Keir Fraser
  2010-09-19 12:21         ` Zhang, Yang Z
@ 2010-09-20  6:00         ` MaoXiaoyun
  2010-09-20  7:45           ` Keir Fraser
  1 sibling, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-20  6:00 UTC (permalink / raw)
  To: keir.fraser; +Cc: xen devel




Hi Keir:

     Thanks for your kind help.

     I've just noticed another possible way an event could be missed, and I need your
verification.

    As we know, when doing I/O, domain U writes its requests into the ring buffer and
notifies qemu-dm (which is waiting on select) through the event channel. When qemu-dm has
handled the request, it notifies back (helper2.c line 548) to clear a possible wait on
_VPF_blocked_in_xen.

When the I/O is not ready, domain U in VMEXIT->hvm_do_resume may invoke
wait_on_xen_event_channel (where it blocks in _VPF_blocked_in_xen).

Here is my assumed sequence for the missed event:

step 1: hvm_do_resume executes line 260, and suppose p->state is STATE_IOREQ_READY or STATE_IOREQ_INPROCESS.
step 2: cpu_handle_ioreq is already at line 547 and executes line 548 before hvm_do_resume executes line 270.
The event is then missed.

In other words, _VPF_blocked_in_xen is cleared before it is actually set, and the blocked
Domain U might never get unblocked. Is this possible?

thx.

-------------------------------xen/arch/x86/hvm/hvm.c---------------

 252 void hvm_do_resume(struct vcpu *v)
 253 {
 254     ioreq_t *p;
 255     static int i;
 256 
 257     pt_restore_timer(v);
 258 
 259     /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
 260     p = get_ioreq(v);
 261     while ( p->state != STATE_IOREQ_NONE )
 262     {
 263         switch ( p->state )
 264         {
 265         case STATE_IORESP_READY: /* IORESP_READY -> NONE */
 266             hvm_io_assist();
 267             break;
 268         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
 269         case STATE_IOREQ_INPROCESS:
 270             wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
 271                                       (p->state != STATE_IOREQ_READY) &&
 272                                       (p->state != STATE_IOREQ_INPROCESS));
 273             break;                    
 274         default:
 275             gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p->state);
 276             domain_crash(v->domain);
 277             return; /* bail */
 278         }   
 279     }   
 280 }   

--------------tools/ioemu-qemu-xen/i386-dm/helper2.c--------

507 static void cpu_handle_ioreq(void *opaque)
508 {
509     extern int shutdown_requested;
510     CPUState *env = opaque;
511     ioreq_t *req = cpu_get_ioreq();
512     static int i = 0;
513 
514     __handle_buffered_iopage(env);
515     if (req) {
516         __handle_ioreq(env, req);
517 
518         if (req->state != STATE_IOREQ_INPROCESS) {
519             fprintf(logfile, "Badness in I/O request ... not in service?!: "
520                     "%x, ptr: %x, port: %"PRIx64", "
521                     "data: %"PRIx64", count: %u, size: %u\n",
522                     req->state, req->data_is_ptr, req->addr,
523                     req->data, req->count, req->size);
524             destroy_hvm_domain();
525             return;
526         }
527 
528         xen_wmb(); /* Update ioreq contents /then/ update state. */
529 
530         /*
531          * We do this before we send the response so that the tools
532          * have the opportunity to pick up on the reset before the
533          * guest resumes and does a hlt with interrupts disabled which
534          * causes Xen to powerdown the domain.
535          */
536         if (vm_running) {
537             if (qemu_shutdown_requested()) {
538                 fprintf(logfile, "shutdown requested in cpu_handle_ioreq\n");
539                 destroy_hvm_domain();
540             }
541             if (qemu_reset_requested()) {
542                 fprintf(logfile, "reset requested in cpu_handle_ioreq.\n");
543                 qemu_system_reset();
544             }

545         }
546 
547         req->state = STATE_IORESP_READY;
548         xc_evtchn_notify(xce_handle, ioreq_local_port[send_vcpu]);
549     }
550 }

 


 

* Re: VM hung after running sometime
  2010-09-20  6:00         ` MaoXiaoyun
@ 2010-09-20  7:45           ` Keir Fraser
  2010-09-20  8:23             ` MaoXiaoyun
  2010-09-20  9:15             ` MaoXiaoyun
  0 siblings, 2 replies; 46+ messages in thread
From: Keir Fraser @ 2010-09-20  7:45 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel

On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> When the I/O is not ready, domain U in VMEXIT->hvm_do_resume may invoke
> wait_on_xen_event_channel
> (where it blocks in _VPF_blocked_in_xen).
>  
> Here is my assumed sequence for the missed event:
>  
> step 1: hvm_do_resume executes line 260, and suppose p->state is STATE_IOREQ_READY
> or STATE_IOREQ_INPROCESS.
> step 2: cpu_handle_ioreq is already at line 547 and executes line 548 before
> hvm_do_resume executes line 270.
> The event is then missed.
> In other words, _VPF_blocked_in_xen is cleared before it is actually set, and the
> blocked Domain U might never get unblocked. Is this possible?

Firstly, that code is very paranoid and it should never actually be the case
that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in hvm_do_resume().
Secondly, even if you do, take a look at the implementation of
wait_on_xen_event_channel() -- it is smart enough to avoid the race you
mention.
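
For reference, the implementation being referred to is roughly this (xen/include/xen/event.h
in Xen 4.0, abbreviated):

#define wait_on_xen_event_channel(port, condition)                  \
    do {                                                            \
        if ( condition )                                            \
            break;                                                  \
        set_bit(_VPF_blocked_in_xen, &current->pause_flags);        \
        mb(); /* set blocked status /then/ re-evaluate condition */ \
        if ( condition )                                            \
        {                                                           \
            clear_bit(_VPF_blocked_in_xen, &current->pause_flags);  \
            break;                                                  \
        }                                                           \
        raise_softirq(SCHEDULE_SOFTIRQ);                            \
    } while ( 0 )

The re-check of the condition after setting the blocked flag is what closes the window: if
qemu-dm completes the request in between, the flag is cleared again and the vCPU never sleeps.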

 -- Keir


* RE: VM hung after running sometime
  2010-09-20  7:45           ` Keir Fraser
@ 2010-09-20  8:23             ` MaoXiaoyun
  2010-09-20  9:15             ` MaoXiaoyun
  1 sibling, 0 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-20  8:23 UTC (permalink / raw)
  To: keir.fraser; +Cc: xen devel




Thanks Keir.

I will kick off a test based on my assumption.

Actually, Domain U is always blocked in _VPF_blocked_in_xen, which is set in only two
functions, wait_on_xen_event_channel and prepare_wait_on_xen_event_channel. Both functions
first set the bit and then give up, or try to give up, the schedule.

What I want to do is modify these functions so that they do not set the bit and only give up
the schedule. In that situation, the VCPU will never be blocked. If the bug is due to the race
I imagined, domain U will never hang: even if an event is missed, the VCPU still learns that
the I/O is ready on a later schedule. If the bug is something else, say in qemu, domain U
will still hang, since the I/O never becomes ready. Am I right?
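
Roughly, the change I have in mind is the following (an untested sketch against the Xen 4.0
macro shown above):

/* Experimental variant: never mark the vCPU _VPF_blocked_in_xen; just
 * reschedule. The vCPU stays runnable and re-checks the ioreq state in
 * hvm_do_resume() on its next run, so a lost event cannot strand it. */
#define wait_on_xen_event_channel(port, condition)  \
    do {                                            \
        if ( condition )                            \
            break;                                  \
        raise_softirq(SCHEDULE_SOFTIRQ);            \
    } while ( 0 )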

 

* RE: VM hung after running sometime
  2010-09-20  7:45           ` Keir Fraser
  2010-09-20  8:23             ` MaoXiaoyun
@ 2010-09-20  9:15             ` MaoXiaoyun
  2010-09-20  9:35               ` Keir Fraser
  1 sibling, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-20  9:15 UTC (permalink / raw)
  To: keir.fraser; +Cc: xen devel




Thanks Keir.
 
You're right: after looking more deeply into wait_on_xen_event_channel, I see it is smart
enough to avoid the race I assumed.
 
What about prepare_wait_on_xen_event_channel? I still don't know when it is invoked.
Could you enlighten me?

 

* Re: VM hung after running sometime
  2010-09-20  9:15             ` MaoXiaoyun
@ 2010-09-20  9:35               ` Keir Fraser
  2010-09-21  5:02                 ` MaoXiaoyun
  0 siblings, 1 reply; 46+ messages in thread
From: Keir Fraser @ 2010-09-20  9:35 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel

On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> Thanks Keir.
>  
> You're right: after looking more deeply into wait_on_xen_event_channel, I see it is
> smart enough to avoid the race I assumed.
>  
> What about prepare_wait_on_xen_event_channel?
> I still don't know when it is invoked.
> Could you enlighten me?

As you can see it is called from hvm_send_assist_req(), hence it is called
whenever an ioreq is sent to qemu-dm. Note that it is called *before*
qemu-dm is notified -- hence it cannot race the wakeup from qemu, as we will
not get woken until qemu-dm has done the work, and it cannot start the work
until it is notified, and it is not notified until after
prepare_wait_on_xen_event_channel has been executed.
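
The calling sequence is roughly this (xen/arch/x86/hvm/hvm.c, abbreviated sketch):

void hvm_send_assist_req(struct vcpu *v)
{
    ioreq_t *p = get_ioreq(v);
    /* ... sanity checks on p->state elided ... */

    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
    /*
     * The following happens /after/ blocking and setting up the ioreq
     * contents: prepare_wait_on_xen_event_channel() is an implicit barrier.
     */
    p->state = STATE_IOREQ_READY;
    notify_via_xen_event_channel(v->arch.hvm_vcpu.xen_port);
}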

 -- Keir


* RE: VM hung after running sometime
  2010-09-20  9:35               ` Keir Fraser
@ 2010-09-21  5:02                 ` MaoXiaoyun
  2010-09-21  7:53                   ` Keir Fraser
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-21  5:02 UTC (permalink / raw)
  To: keir.fraser; +Cc: xen devel




Hi Keir:

        I've spent more time on how the event channel works. I now know that an event is bound
to an irq with a call to request_irq. When an event is sent, the other side of the channel
runs into asm_do_IRQ->generic_handle_irq->generic_handle_irq_desc->handle_level_irq
(it actually invokes desc->handle_irq, which for an evtchn is handle_level_irq).
I noticed that in handle_level_irq the event's mask and pending bits are cleared.
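
The binding in the pvops dom0 kernel looks roughly like this (simplified from
drivers/xen/events.c; treat the details as approximate):

/* Each event channel gets a dynamically allocated IRQ, whose flow
 * handler is handle_level_irq. */
irq = find_unbound_irq();
evtchn_to_irq[evtchn] = irq;
set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
                              handle_level_irq, "event");
/* Drivers then attach their handlers via bind_evtchn_to_irqhandler(),
 * which ends in request_irq() on this irq. */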

 

I have one more piece of analysis to discuss.

Attached is the event-channel dump taken while a VM was hung on this physical server; domain 10
is the hung one. We can see domain 10's VCPU info at the bottom of the log: it has flags = 4,
which means _VPF_blocked_in_xen.

 

(XEN) VCPU information and callbacks for domain 10:
(XEN)     VCPU0: CPU11 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: shadowed 2-on-3
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN)     VCPU1: CPU9 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={9} cpu_affinity={4-15}
(XEN)     paging assistance: shadowed 2-on-3
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)


And its domain's event-channel info is:

(XEN) Domain 10 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 10:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=105 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=106 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=104 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=107 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=108 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=109 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=110 x=0

 

Based on our situation, we are only interested in the event channels whose consumer_is_xen
is 1, i.e. those with "x=1": ports 1 and 2. According to the log, the other side of the
channel is domain 0, ports 105 and 106.

Looking at domain 0's event channels for ports 105 and 106, I find that on port 105 the
pending bit is 1 (in "[1/0]", the first bit refers to pending and is 1; the second refers
to mask and is 0).

(XEN)      105 [1/0]: s=3 n=2 d=10 p=1 x=0
(XEN)      106 [0/0]: s=3 n=2 d=10 p=2 x=0

In all, we have a domain U VCPU blocked in _VPF_blocked_in_xen, and it must have set the
pending bit. Considering that pending is still 1, it looks like the irq was not triggered,
am I right? If it had been triggered, it should have cleared the pending bit (line 361).

 

------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---

354 void
355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
356 {
357         struct irqaction *action;
358         irqreturn_t action_ret;
359 
360         spin_lock(&desc->lock);
361         mask_ack_irq(desc, irq);
362 
363         if (unlikely(desc->status & IRQ_INPROGRESS))
364                 goto out_unlock;
365         desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
366         kstat_incr_irqs_this_cpu(irq, desc);
367 

 

BTW, qemu still works fine while the VM is hung. Below is its strace output; there is not
much difference from other well-working qemu instances, other than that every select times out.

-------------------

select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0

 

 

 

 
[-- Attachment #2: hang.txt --]
[-- Type: text/plain, Size: 13622 bytes --]

xm li
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  7060     4     r----- 1143563.2
E2EZYXVM-80-L2W.871.92                      19  1024     2     -b---- 195875.3
E2EZYXVM-80-L2W1.871.92                     20  1024     2     -b---- 193488.9
E2EZYXVM-80-W.871.92                         1  1027     2     -b---- 276350.9
E2EZYXVM-80-W1.871.92                        2  1027     2     -b---- 266189.6
E2EZYXVM-80-W2.871.92                        3  1027     2     -b---- 278225.3
E2EZYXVM-80-W3.871.92                        4  1027     2     -b---- 269798.6
E2EZYXVM-80-W4.871.92                        5  1027     2     r----- 277067.4
E2EZYXVM-80-W5.871.92                        6  1027     2     -b---- 267076.2
E2EZYXVM-80-W6.871.92                        7  1027     2     -b---- 275250.2
E2EZYXVM-80-W7.871.92                        8  1027     2     r----- 282446.5
E2EZYXVM-80-W8.871.92                        9  1027     2     r----- 267457.9
E2EZYXVM-80-W9.871.92                       10  1027     2     r----- 312025.9   <=== this one is hung
root@r21a05004.btc.aliyun.com # xm dmesg
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Domain 0 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 0:
(XEN)     port [p/m]
(XEN)        1 [0/0]: s=5 n=0 v=0 x=0
(XEN)        2 [0/0]: s=6 n=0 x=0
(XEN)        3 [0/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=5 n=0 v=1 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=5 n=1 v=0 x=0
(XEN)        7 [0/0]: s=6 n=1 x=0
(XEN)        8 [0/0]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=5 n=1 v=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=2 v=0 x=0
(XEN)       12 [0/0]: s=6 n=2 x=0
(XEN)       13 [0/0]: s=6 n=2 x=0
(XEN)       14 [0/0]: s=5 n=2 v=1 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=5 n=3 v=0 x=0
(XEN)       17 [0/0]: s=6 n=3 x=0
(XEN)       18 [0/0]: s=6 n=3 x=0
(XEN)       19 [0/0]: s=5 n=3 v=1 x=0
(XEN)       20 [0/0]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=3 n=3 d=0 p=39 x=0
(XEN)       22 [0/0]: s=4 n=0 p=9 x=0
(XEN)       23 [0/0]: s=5 n=0 v=9 x=0
(XEN)       24 [0/0]: s=5 n=0 v=16 x=0
(XEN)       25 [1/1]: s=5 n=0 v=2 x=0
(XEN)       26 [0/0]: s=4 n=0 p=18 x=0
(XEN)       27 [0/0]: s=4 n=0 p=23 x=0
(XEN)       28 [0/0]: s=4 n=0 p=16 x=0
(XEN)       29 [0/0]: s=4 n=0 p=19 x=0
(XEN)       30 [0/0]: s=4 n=0 p=12 x=0
(XEN)       31 [0/0]: s=4 n=0 p=1 x=0
(XEN)       32 [0/0]: s=4 n=0 p=8 x=0
(XEN)       33 [0/0]: s=4 n=1 p=32 x=0
(XEN)       34 [0/0]: s=4 n=3 p=299 x=0
(XEN)       35 [0/0]: s=4 n=1 p=298 x=0
(XEN)       36 [0/0]: s=4 n=2 p=297 x=0
(XEN)       37 [0/0]: s=4 n=0 p=296 x=0
(XEN)       38 [0/0]: s=4 n=1 p=295 x=0
(XEN)       39 [0/0]: s=3 n=3 d=0 p=21 x=0
(XEN)       40 [0/0]: s=5 n=2 v=3 x=0
(XEN)       41 [0/0]: s=3 n=0 d=1 p=3 x=0
(XEN)       42 [0/0]: s=3 n=1 d=1 p=1 x=0
(XEN)       43 [0/0]: s=3 n=3 d=1 p=2 x=0
(XEN)       44 [0/0]: s=3 n=0 d=1 p=7 x=0
(XEN)       45 [0/0]: s=3 n=0 d=1 p=8 x=0
(XEN)       46 [0/0]: s=3 n=0 d=1 p=9 x=0
(XEN)       47 [0/0]: s=3 n=2 d=1 p=10 x=0
(XEN)       48 [0/0]: s=3 n=3 d=2 p=3 x=0
(XEN)       49 [0/0]: s=3 n=0 d=2 p=1 x=0
(XEN)       50 [0/0]: s=3 n=2 d=2 p=2 x=0
(XEN)       51 [0/0]: s=3 n=1 d=2 p=7 x=0
(XEN)       52 [0/0]: s=3 n=0 d=2 p=8 x=0
(XEN)       53 [0/0]: s=3 n=0 d=2 p=9 x=0
(XEN)       54 [0/0]: s=3 n=3 d=2 p=10 x=0
(XEN)       55 [0/0]: s=3 n=2 d=3 p=3 x=0
(XEN)       56 [0/0]: s=3 n=0 d=3 p=1 x=0
(XEN)       57 [0/0]: s=3 n=2 d=3 p=2 x=0
(XEN)       58 [0/0]: s=3 n=1 d=3 p=7 x=0
(XEN)       59 [0/0]: s=3 n=0 d=3 p=8 x=0
(XEN)       60 [0/0]: s=3 n=0 d=3 p=9 x=0
(XEN)       61 [0/0]: s=3 n=3 d=3 p=10 x=0
(XEN)       62 [0/0]: s=3 n=1 d=4 p=3 x=0
(XEN)       63 [0/0]: s=3 n=0 d=4 p=1 x=0
(XEN)       64 [0/0]: s=3 n=2 d=4 p=2 x=0
(XEN)       65 [0/0]: s=3 n=0 d=4 p=7 x=0
(XEN)       66 [0/0]: s=3 n=0 d=4 p=8 x=0
(XEN)       67 [0/0]: s=3 n=0 d=4 p=9 x=0
(XEN)       68 [0/0]: s=3 n=3 d=4 p=10 x=0
(XEN)       69 [0/0]: s=3 n=3 d=5 p=3 x=0
(XEN)       70 [0/0]: s=3 n=0 d=5 p=1 x=0
(XEN)       71 [0/0]: s=3 n=1 d=5 p=2 x=0
(XEN)       72 [0/0]: s=3 n=1 d=5 p=7 x=0
(XEN)       73 [0/0]: s=3 n=0 d=5 p=8 x=0
(XEN)       74 [0/0]: s=3 n=0 d=5 p=9 x=0
(XEN)       75 [0/0]: s=3 n=1 d=5 p=10 x=0
(XEN)       76 [0/0]: s=3 n=1 d=6 p=3 x=0
(XEN)       77 [0/0]: s=3 n=0 d=6 p=1 x=0
(XEN)       78 [0/0]: s=3 n=0 d=6 p=2 x=0
(XEN)       79 [0/0]: s=3 n=3 d=6 p=7 x=0
(XEN)       80 [0/0]: s=3 n=0 d=6 p=8 x=0
(XEN)       81 [0/0]: s=3 n=0 d=6 p=9 x=0
(XEN)       82 [0/0]: s=3 n=1 d=6 p=10 x=0
(XEN)       83 [0/0]: s=3 n=1 d=7 p=3 x=0
(XEN)       84 [0/0]: s=3 n=2 d=7 p=1 x=0
(XEN)       85 [0/0]: s=3 n=2 d=7 p=2 x=0
(XEN)       86 [0/0]: s=3 n=2 d=7 p=7 x=0
(XEN)       87 [0/0]: s=3 n=0 d=7 p=8 x=0
(XEN)       88 [0/0]: s=3 n=0 d=7 p=9 x=0
(XEN)       89 [0/0]: s=3 n=1 d=7 p=10 x=0
(XEN)       90 [0/0]: s=3 n=1 d=8 p=3 x=0
(XEN)       91 [0/0]: s=3 n=2 d=8 p=1 x=0
(XEN)       92 [0/0]: s=3 n=2 d=8 p=2 x=0
(XEN)       93 [0/0]: s=3 n=1 d=8 p=7 x=0
(XEN)       94 [0/0]: s=3 n=0 d=8 p=8 x=0
(XEN)       95 [0/0]: s=3 n=0 d=8 p=9 x=0
(XEN)       96 [0/0]: s=3 n=1 d=8 p=10 x=0
(XEN)       97 [0/0]: s=3 n=1 d=9 p=3 x=0
(XEN)       98 [0/0]: s=3 n=3 d=9 p=1 x=0
(XEN)       99 [0/0]: s=3 n=0 d=9 p=2 x=0
(XEN)      100 [0/0]: s=3 n=2 d=9 p=7 x=0
(XEN)      101 [0/0]: s=3 n=0 d=9 p=8 x=0
(XEN)      102 [0/0]: s=3 n=0 d=9 p=9 x=0
(XEN)      103 [0/0]: s=3 n=1 d=9 p=10 x=0
(XEN)      104 [0/0]: s=3 n=3 d=10 p=3 x=0
(XEN)      105 [1/0]: s=3 n=2 d=10 p=1 x=0
(XEN)      106 [0/0]: s=3 n=2 d=10 p=2 x=0
(XEN)      107 [0/0]: s=3 n=3 d=10 p=7 x=0
(XEN)      108 [0/0]: s=3 n=0 d=10 p=8 x=0
(XEN)      109 [0/0]: s=3 n=0 d=10 p=9 x=0
(XEN)      110 [0/0]: s=3 n=2 d=10 p=10 x=0
(XEN)      111 [0/0]: s=3 n=1 d=19 p=3 x=0
(XEN)      112 [0/0]: s=3 n=2 d=19 p=1 x=0
(XEN)      113 [0/0]: s=3 n=1 d=19 p=2 x=0
(XEN)      114 [0/0]: s=3 n=2 d=19 p=7 x=0
(XEN)      115 [0/0]: s=3 n=0 d=19 p=8 x=0
(XEN)      116 [0/0]: s=3 n=0 d=19 p=9 x=0
(XEN)      117 [0/0]: s=3 n=0 d=19 p=10 x=0
(XEN)      118 [0/0]: s=3 n=0 d=20 p=3 x=0
(XEN)      119 [0/0]: s=3 n=3 d=20 p=1 x=0
(XEN)      120 [0/0]: s=3 n=1 d=20 p=2 x=0
(XEN)      121 [0/0]: s=3 n=3 d=20 p=7 x=0
(XEN)      122 [0/0]: s=3 n=0 d=20 p=8 x=0
(XEN)      123 [0/0]: s=3 n=0 d=20 p=9 x=0
(XEN)      124 [0/0]: s=3 n=0 d=20 p=10 x=0
(XEN) Domain 1 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 1:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=42 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=43 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=41 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=44 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=45 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=46 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=47 x=0
(XEN) Domain 2 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 2:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=49 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=50 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=48 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=51 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=52 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=53 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=54 x=0
(XEN) Domain 3 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=56 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=57 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=55 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=58 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=59 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=60 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=61 x=0
(XEN) Domain 4 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 4:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=63 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=64 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=62 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=65 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=66 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=67 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=68 x=0
(XEN) Domain 5 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 5:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=70 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=71 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=69 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=72 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=73 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=74 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=75 x=0
(XEN) Domain 6 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 6:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=77 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=78 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=76 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=79 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=80 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=81 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=82 x=0
(XEN) Domain 7 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 7:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=84 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=85 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=83 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=86 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=87 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=88 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=89 x=0
(XEN) Domain 8 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 8:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=91 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=92 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=90 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=93 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=94 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=95 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=96 x=0
(XEN) Domain 9 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 9:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=98 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=99 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=97 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=100 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=101 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=102 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=103 x=0
(XEN) Domain 10 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 10:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=105 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=106 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=104 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=107 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=108 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=109 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=110 x=0
(XEN) Domain 19 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 19:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=112 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=113 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=111 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=114 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=115 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=116 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=117 x=0
(XEN) Domain 20 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 20:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=119 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=120 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=118 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=121 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=122 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=123 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=124 x=0

(XEN) VCPU information and callbacks for domain 10:
(XEN)     VCPU0: CPU11 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: shadowed 2-on-3
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN)     VCPU1: CPU9 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={9} cpu_affinity={4-15}
(XEN)     paging assistance: shadowed 2-on-3
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
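
A rough legend for the per-port fields in these dumps, inferred from the Xen 4.0-era domain_dump_evtchn_info() in xen/common/event_channel.c (treat the exact mapping as an assumption, not documentation): [p/m] are the pending and masked bits; s is the channel state (2 = unbound, 3 = interdomain, 4 = pirq, 5 = virq, 6 = ipi); n is the vcpu that gets notified; d and p are the remote domain and port of an interdomain channel; and x is set when the consumer of the channel is Xen itself, as on the per-vcpu ioreq ports discussed later in the thread. A small self-contained decoder in the same spirit:

-------------------
#include <stdio.h>

/* Decode one "port [p/m]: s=.. n=.. d=.. p=.. x=.." line.  State names
 * follow the ECS_* constants in xen/include/xen/sched.h (assumed
 * 4.0-era layout). */
static const char *state_name(int s)
{
    static const char *names[] = {
        "free", "reserved", "unbound", "interdomain", "pirq", "virq", "ipi"
    };
    return (s >= 0 && s < 7) ? names[s] : "unknown";
}

int main(void)
{
    /* Example: "105 [1/0]: s=3 n=2 d=10 p=1 x=0" from the dump above --
     * pending, unmasked, interdomain, bound to domain 10's port 1. */
    int port = 105, pending = 1, masked = 0, s = 3, n = 2, d = 10, p = 1, x = 0;

    printf("port %d: %s, pending=%d masked=%d, notify vcpu%d, "
           "remote dom%d port %d, xen-consumer=%d\n",
           port, state_name(s), pending, masked, n, d, p, x);
    return 0;
}
-------------------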


* Re: VM hung after running sometime
  2010-09-21  5:02                 ` MaoXiaoyun
@ 2010-09-21  7:53                   ` Keir Fraser
  2010-09-21  9:24                     ` wei song
  2010-09-21 17:28                     ` Jeremy Fitzhardinge
  0 siblings, 2 replies; 46+ messages in thread
From: Keir Fraser @ 2010-09-21  7:53 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel

On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> Take a look at domain 0 event channels on ports 105 and 106; I find that on port 105
> pending is 1. (In [1/0], the first bit refers to pending, and is 1; the second bit
> refers to mask, and is 0.)
>  
> (XEN)      105 [1/0]: s=3 n=2 d=10 p=1 x=0
> (XEN)      106 [0/0]: s=3 n=2 d=10 p=2 x=0
>  
> In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and it must set
> the pending bit.
> Considering pending is 1, it looks like the irq was not triggered, am I right?
> Since if it had been triggered, it would have cleared the pending bit. (line 361).

Yes it looks like dom0 is not handling the event for some reason. Qemu looks
like it still works and is waiting for a notification via select(). But that
won't happen until dom0 kernel handles the event as an IRQ and calls the
relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).

I think you're on the right track in your debugging. I don't know much about
the pv_ops irq handling path, except to say that this aspect is different
than non-pv_ops kernels which special-case handling of events bound to
user-space rather more. So at the moment my best guess would be that the bug
is in the pv_ops kernel irq handling for this type of user-space-bound
event.

 -- Keir
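
To make the dom0 user-space side concrete, qemu-dm's wait loop is essentially the pattern below -- a hedged sketch against the Xen 4.0-era libxenctrl evtchn calls, with the domid (10) and remote port (1) taken from the dump for illustration and error handling elided. The select() here is the one visible in the strace output quoted below; it cannot become readable for the ioreq port until the dom0 kernel has handled the event as an IRQ in evtchn_interrupt() and queued the port for user space:

-------------------
#include <stdio.h>
#include <sys/select.h>
#include <xenctrl.h>   /* Xen 4.0-era libxenctrl interface (assumption) */

int main(void)
{
    int xce  = xc_evtchn_open();                       /* /dev/xen/evtchn */
    int port = xc_evtchn_bind_interdomain(xce, 10, 1); /* dom10's port 1 */
    int fd   = xc_evtchn_fd(xce);

    for (;;) {
        fd_set rd;

        FD_ZERO(&rd);
        FD_SET(fd, &rd);
        if (select(fd + 1, &rd, NULL, NULL, NULL) <= 0)
            continue;

        /* Reached only after evtchn_interrupt() ran in the dom0 kernel. */
        port = xc_evtchn_pending(xce);
        printf("ioreq notification on local port %d\n", port);
        /* ... process the shared ioreq page here ... */
        xc_evtchn_unmask(xce, port);   /* re-enable delivery */
    }
}
-------------------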

> ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
> 354 void
> 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
> 356 {
> 357         struct irqaction *action;
> 358         irqreturn_t action_ret;
> 359 
> 360         spin_lock(&desc->lock);
> 361         mask_ack_irq(desc, irq);
> 362 
> 363         if (unlikely(desc->status & IRQ_INPROGRESS))
> 364                 goto out_unlock;
> 365         desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
> 366         kstat_incr_irqs_this_cpu(irq, desc);
> 367 
>  
> BTW, qemu still works fine when the VM hangs. Below is its strace output.
> Not much difference from other working qemu instances, other than select
> always timing out.
> -------------------
> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
> clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
> clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
> clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
> clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
> clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
>  
>  
>  
>  
>> Date: Mon, 20 Sep 2010 10:35:46 +0100
>> Subject: Re: VM hung after running sometime
>> From: keir.fraser@eu.citrix.com
>> To: tinnycloud@hotmail.com
>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
>> 
>> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>> 
>>> Thanks Keir.
>>> 
>>> You're right, after I deeply looked into the wait_on_xen_event_channel, it
>>> is
>>> smart enough
>>> to avoid the race I assumed.
>>> 
>>> How about prepare_wait_on_xen_event_channel ?
>>> Currently I still don't know when it will be invoked.
>>> Could you enlighten me?
>> 
>> As you can see it is called from hvm_send_assist_req(), hence it is called
>> whenever an ioreq is sent to qemu-dm. Note that it is called *before*
>> qemu-dm is notified -- hence it cannot race the wakeup from qemu, as we will
>> not get woken until qemu-dm has done the work, and it cannot start the work
>> until it is notified, and it is not notified until after
>> prepare_wait_on_xen_event_channel has been executed.
>> 
>> -- Keir
>> 
>>> 
>>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
>>>> Subject: Re: VM hung after running sometime
>>>> From: keir.fraser@eu.citrix.com
>>>> To: tinnycloud@hotmail.com
>>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
>>>> 
>>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>>>> 
>>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might invoke
>>>>> wait_on_xen_event_channel
>>>>> (where it is blocked in _VPF_blocked_in_xen).
>>>>> 
>>>>> Here is my assumption of event missed.
>>>>> 
>>>>> step 1: hvm_do_resume execute 260, and suppose p->state is
>>>>> STATE_IOREQ_READY
>>>>> or STATE_IOREQ_INPROCESS
>>>>> step 2: then in cpu_handle_ioreq is in line 547, it execute line 548 so
>>>>> quickly before hvm_do_resume execute line 270.
>>>>> Well, the event is missed.
>>>>> In other words, the _VPF_blocked_in_xen is cleared before it is actually
>>>>> set, and Domain U, which is blocked,
>>>>> might never get unblocked. Is this possible?
>>>> 
>>>> Firstly, that code is very paranoid and it should never actually be the
>>>> case
>>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in hvm_do_resume().
>>>> Secondly, even if you do, take a look at the implementation of
>>>> wait_on_xen_event_channel() -- it is smart enough to avoid the race you
>>>> mention.
>>>> 
>>>> -- Keir
>>>> 
>>>> 
>>> 
>> 
>> 
>        


* Re: Re: VM hung after running sometime
  2010-09-21  7:53                   ` Keir Fraser
@ 2010-09-21  9:24                     ` wei song
  2010-09-21  9:49                       ` wei song
  2010-09-21 17:28                     ` Jeremy Fitzhardinge
  1 sibling, 1 reply; 46+ messages in thread
From: wei song @ 2010-09-21  9:24 UTC (permalink / raw)
  To: Keir Fraser, jeremy, xen-devel



I also met this issue, especially when running a high workload on HVM VMs with xen
4.0.0 + pvops 2.6.31.13xen. I noticed that VCPU1 is always blocked on port 1;
the system is normal on vcpu0 but stopped on vcpu1. Jeremy,
could you please take a look at this issue? Could you give some ideas on it?

thanks,
James

2010/9/21 Keir Fraser <keir.fraser@eu.citrix.com>

> On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>
> > Take a look at domain 0 event channels on ports 105 and 106; I find that on
> > port 105 pending is 1. (In [1/0], the first bit refers to pending, and is 1;
> > the second bit refers to mask, and is 0.)
> >
> > (XEN)      105 [1/0]: s=3 n=2 d=10 p=1 x=0
> > (XEN)      106 [0/0]: s=3 n=2 d=10 p=2 x=0
> >
> > In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and it must
> set
> > the pending bit.
> > Considering pending is 1, it looks like the irq was not triggered, am I
> > right? Since if it had been triggered, it would have cleared the pending
> > bit. (line 361).
>
> Yes it looks like dom0 is not handling the event for some reason. Qemu
> looks
> like it still works and is waiting for a notification via select(). But
> that
> won't happen until dom0 kernel handles the event as an IRQ and calls the
> relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
>
> I think you're on the right track in your debugging. I don't know much
> about
> the pv_ops irq handling path, except to say that this aspect is different
> than non-pv_ops kernels which special-case handling of events bound to
> user-space rather more. So at the moment my best guess would be that the
> bug
> is in the pv_ops kernel irq handling for this type of user-space-bound
> event.
>
>  -- Keir
>
> > ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
> > 354 void
> > 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
> > 356 {
> > 357         struct irqaction *action;
> > 358         irqreturn_t action_ret;
> > 359
> > 360         spin_lock(&desc->lock);
> > 361         mask_ack_irq(desc, irq);
> > 362
> > 363         if (unlikely(desc->status & IRQ_INPROGRESS))
> > 364                 goto out_unlock;
> > 365         desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
> > 366         kstat_incr_irqs_this_cpu(irq, desc);
> > 367
> >
> > BTW, qemu still works fine when the VM hangs. Below is its strace output.
> > Not much difference from other working qemu instances, other than select
> > always timing out.
> > -------------------
> > select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
> > clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
> > clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
> > clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
> > select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
> > clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
> > clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
> >
> >
> >
> >
> >> Date: Mon, 20 Sep 2010 10:35:46 +0100
> >> Subject: Re: VM hung after running sometime
> >> From: keir.fraser@eu.citrix.com
> >> To: tinnycloud@hotmail.com
> >> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> >>
> >> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> >>
> >>> Thanks Keir.
> >>>
> >>> You're right, after I deeply looked into the wait_on_xen_event_channel,
> it
> >>> is
> >>> smart enough
> >>> to avoid the race I assumed.
> >>>
> >>> How about prepare_wait_on_xen_event_channel ?
> >>> Currently I still don't know when it will be invoked.
> >>> Could you enlighten me?
> >>
> >> As you can see it is called from hvm_send_assist_req(), hence it is
> called
> >> whenever an ioreq is sent to qemu-dm. Note that it is called *before*
> >> qemu-dm is notified -- hence it cannot race the wakeup from qemu, as we
> will
> >> not get woken until qemu-dm has done the work, and it cannot start the
> work
> >> until it is notified, and it is not notified until after
> >> prepare_wait_on_xen_event_channel has been executed.
> >>
> >> -- Keir
> >>
> >>>
> >>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
> >>>> Subject: Re: VM hung after running sometime
> >>>> From: keir.fraser@eu.citrix.com
> >>>> To: tinnycloud@hotmail.com
> >>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> >>>>
> >>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> >>>>
> >>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might invoke
> >>>>> wait_on_xen_event_channel
> >>>>> (where it is blocked in _VPF_blocked_in_xen).
> >>>>>
> >>>>> Here is my assumption of event missed.
> >>>>>
> >>>>> step 1: hvm_do_resume execute 260, and suppose p->state is
> >>>>> STATE_IOREQ_READY
> >>>>> or STATE_IOREQ_INPROCESS
> >>>>> step 2: then in cpu_handle_ioreq is in line 547, it execute line 548
> so
> >>>>> quickly before hvm_do_resume execute line 270.
> >>>>> Well, the event is missed.
> >>>>> In other words, the _VPF_blocked_in_xen is cleared before it is
> actually
> >>>>> set, and Domain U, which is blocked,
> >>>>> might never get unblocked. Is this possible?
> >>>>
> >>>> Firstly, that code is very paranoid and it should never actually be
> the
> >>>> case
> >>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in
> hvm_do_resume().
> >>>> Secondly, even if you do, take a look at the implementation of
> >>>> wait_on_xen_event_channel() -- it is smart enough to avoid the race
> you
> >>>> mention.
> >>>>
> >>>> -- Keir
> >>>>
> >>>>
> >>>
> >>
> >>
> >
>
>
>
>


* Re: Re: VM hung after running sometime
  2010-09-21  9:24                     ` wei song
@ 2010-09-21  9:49                       ` wei song
  0 siblings, 0 replies; 46+ messages in thread
From: wei song @ 2010-09-21  9:49 UTC (permalink / raw)
  To: Keir Fraser, jeremy, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2765 bytes --]

I also noticed that there is only one port (number 2) bound to vcpu1; I
wonder what this port is used for?
(XEN) [2010-09-21 17:09:04] Domain 3 polling vCPUs: {}
(XEN) [2010-09-21 17:09:04] Event channel information for domain 3:
(XEN) [2010-09-21 17:09:04]     port [p/m]
(XEN) [2010-09-21 17:09:04]        1 [0/1]: s=3 n=0 d=0 p=42 x=1
(XEN) [2010-09-21 17:09:04]        2 [0/1]: s=3 n=1 d=0 p=43 x=1
(XEN) [2010-09-21 17:09:04]        3 [0/0]: s=3 n=0 d=0 p=41 x=0
(XEN) [2010-09-21 17:09:04]        4 [0/1]: s=2 n=0 d=0 x=0
(XEN) [2010-09-21 17:09:04]        5 [0/0]: s=6 n=0 x=0
(XEN) [2010-09-21 17:09:04]        6 [0/0]: s=2 n=0 d=0 x=0
(XEN) [2010-09-21 17:09:04]        7 [0/0]: s=3 n=0 d=0 p=44 x=0
(XEN) [2010-09-21 17:09:04]        8 [0/0]: s=3 n=0 d=0 p=45 x=0

regards,


2010/9/21 wei song <james.songwei@gmail.com>

> I also met this issue, especially when running a high workload on HVM VMs with
> xen 4.0.0 + pvops 2.6.31.13xen. I noticed that VCPU1 is always blocked on port 1;
> the system is normal on vcpu0 but stopped on vcpu1. Jeremy,
> could you please take a look at this issue? Could you give some ideas on it?
>
> thanks,
> James
>
> 2010/9/21 Keir Fraser <keir.fraser@eu.citrix.com>
>
> On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>>
>> > Take a look at domain 0 event channels on ports 105 and 106; I find that on
>> > port 105 pending is 1. (In [1/0], the first bit refers to pending, and is 1;
>> > the second bit refers to mask, and is 0.)
>> >
>> > (XEN)      105 [1/0]: s=3 n=2 d=10 p=1 x=0
>> > (XEN)      106 [0/0]: s=3 n=2 d=10 p=2 x=0
>> >
>> > In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and it
>> must set
>> > the pending bit.
>> > Considering pending is 1, it looks like the irq was not triggered, am I
>> > right? Since if it had been triggered, it would have cleared the pending
>> > bit. (line 361).
>>
>> Yes it looks like dom0 is not handling the event for some reason. Qemu
>> looks
>> like it still works and is waiting for a notification via select(). But
>> that
>> won't happen until dom0 kernel handles the event as an IRQ and calls the
>> relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
>>
>> I think you're on the right track in your debugging. I don't know much
>> about
>> the pv_ops irq handling path, except to say that this aspect is different
>> than non-pv_ops kernels which special-case handling of events bound to
>> user-space rather more. So at the moment my best guess would be that the
>> bug
>> is in the pv_ops kernel irq handling for this type of user-space-bound
>> event.
>>
>>  -- Keir
>>
>>
>>
>>
>
>


* Re: Re: VM hung after running sometime
  2010-09-21  7:53                   ` Keir Fraser
  2010-09-21  9:24                     ` wei song
@ 2010-09-21 17:28                     ` Jeremy Fitzhardinge
  2010-09-22  0:02                       ` MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-21 17:28 UTC (permalink / raw)
  To: Keir Fraser; +Cc: MaoXiaoyun, xen devel

 On 09/21/2010 12:53 AM, Keir Fraser wrote:
> On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>
>> Take a look at domain 0 event channels on ports 105 and 106; I find that on port 105
>> pending is 1. (In [1/0], the first bit refers to pending, and is 1; the second bit
>> refers to mask, and is 0.)
>>  
>> (XEN)      105 [1/0]: s=3 n=2 d=10 p=1 x=0
>> (XEN)      106 [0/0]: s=3 n=2 d=10 p=2 x=0
>>  
>> In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and it must set
>> the pending bit.
>> Considering pending is 1, it looks like the irq was not triggered, am I right?
>> Since if it had been triggered, it would have cleared the pending bit. (line 361).
> Yes it looks like dom0 is not handling the event for some reason. Qemu looks
> like it still works and is waiting for a notification via select(). But that
> won't happen until dom0 kernel handles the event as an IRQ and calls the
> relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
>
> I think you're on the right track in your debugging. I don't know much about
> the pv_ops irq handling path, except to say that this aspect is different
> than non-pv_ops kernels which special-case handling of events bound to
> user-space rather more. So at the moment my best guess would be that the bug
> is in the pv_ops kernel irq handling for this type of user-space-bound
> event.

We no longer use handle_level_irq because there's a race which loses
events when interrupt migration is enabled.  Current xen/stable-2.6.32.x
has a proper fix for this, but the quick workaround is to disable
irqbalanced.

    J
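
To illustrate the shape of the problem: Xen events are edge-like -- the pending bit latches once and the ack clears it. Below is a toy model (plain C, not kernel code; the real race Jeremy refers to additionally involves the irq being migrated between CPUs mid-flight) of how a mask-then-ack level flow can strand an event exactly as in the "105 [1/0]" dump: pending set, unmasked, handler never run again.

-------------------
#include <stdio.h>
#include <stdbool.h>

static bool pending;   /* models the evtchn pending bit in shared_info */
static bool masked;    /* models the evtchn mask bit */
static int  handled;   /* times the irq handler ran */

static void notify(void)       { pending = true; }
static void mask_ack(void)     { masked = true; pending = false; }
static void run_handler(void)  { handled++; }
static void buggy_unmask(void) { masked = false; /* no pending recheck */ }

int main(void)
{
    notify();          /* event #1 */
    mask_ack();        /* level flow: mask and ack up front */
    notify();          /* event #2 latches while the line is masked */
    run_handler();     /* handler only ever sees event #1 */
    buggy_unmask();    /* nothing re-raises event #2: it is lost */

    printf("pending=%d mask=%d handled=%d\n", pending, masked, handled);
    return 0;
}
-------------------

On the workaround side, stopping the irqbalance daemon (e.g. "service irqbalance stop", plus disabling it at boot; exact commands vary by distro) keeps event channel irqs pinned, so the migration window never opens.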

>  -- Keir
>
>> ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
>> 354 void
>> 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
>> 356 {
>> 357         struct irqaction *action;
>> 358         irqreturn_t action_ret;
>> 359 
>> 360         spin_lock(&desc->lock);
>> 361         mask_ack_irq(desc, irq);
>> 362 
>> 363         if (unlikely(desc->status & IRQ_INPROGRESS))
>> 364                 goto out_unlock;
>> 365         desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
>> 366         kstat_incr_irqs_this_cpu(irq, desc);
>> 367 
>>  
>> BTW, qemu still works fine when the VM hangs. Below is its strace output.
>> Not much difference from other working qemu instances, other than select
>> always timing out.
>> -------------------
>> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
>> clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
>> clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
>> clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
>> clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
>> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
>> clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
>> clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
>> clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
>>  
>>  
>>  
>>  
>>> Date: Mon, 20 Sep 2010 10:35:46 +0100
>>> Subject: Re: VM hung after running sometime
>>> From: keir.fraser@eu.citrix.com
>>> To: tinnycloud@hotmail.com
>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
>>>
>>> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>>>
>>>> Thanks Keir.
>>>>
>>>> You're right, after I deeply looked into the wait_on_xen_event_channel, it
>>>> is
>>>> smart enough
>>>> to avoid the race I assumed.
>>>>
>>>> How about prepare_wait_on_xen_event_channel ?
>>>> Currently I still don't know when it will be invoked.
>>>> Could you enlighten me?
>>> As you can see it is called from hvm_send_assist_req(), hence it is called
>>> whenever an ioreq is sent to qemu-dm. Note that it is called *before*
>>> qemu-dm is notified -- hence it cannot race the wakeup from qemu, as we will
>>> not get woken until qemu-dm has done the work, and it cannot start the work
>>> until it is notified, and it is not notified until after
>>> prepare_wait_on_xen_event_channel has been executed.
>>>
>>> -- Keir
>>>
>>>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
>>>>> Subject: Re: VM hung after running sometime
>>>>> From: keir.fraser@eu.citrix.com
>>>>> To: tinnycloud@hotmail.com
>>>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
>>>>>
>>>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>>>>>
>>>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might invoke
>>>>>> wait_on_xen_event_channel
>>>>>> (where it is blocked in _VPF_blocked_in_xen).
>>>>>>
>>>>>> Here is my assumption of event missed.
>>>>>>
>>>>>> step 1: hvm_do_resume execute 260, and suppose p->state is
>>>>>> STATE_IOREQ_READY
>>>>>> or STATE_IOREQ_INPROCESS
>>>>>> step 2: then in cpu_handle_ioreq is in line 547, it execute line 548 so
>>>>>> quickly before hvm_do_resume execute line 270.
>>>>>> Well, the event is missed.
>>>>>> In other words, the _VPF_blocked_in_xen is cleared before it is actually
>>>>>> set, and Domain U, which is blocked,
>>>>>> might never get unblocked. Is this possible?
>>>>> Firstly, that code is very paranoid and it should never actually be the
>>>>> case
>>>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in hvm_do_resume().
>>>>> Secondly, even if you do, take a look at the implementation of
>>>>> wait_on_xen_event_channel() -- it is smart enough to avoid the race you
>>>>> mention.
>>>>>
>>>>> -- Keir
>>>>>
>>>>>
>>>
>>        
>
>
>


* RE: Re: VM hung after running sometime
  2010-09-21 17:28                     ` Jeremy Fitzhardinge
@ 2010-09-22  0:02                       ` MaoXiaoyun
  2010-09-22  0:17                         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-22  0:02 UTC (permalink / raw)
  To: jeremy, keir.fraser; +Cc: xen devel




Thanks Jeremy.


 

Regarding the fix you mentioned, did you mean the patch I searched for and pasted below? If so, is this all that I need?

As for disabling irqbalance, I am afraid it might have a negative performance impact, right?

 

-------------------------------------------------------

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 32f4a2c..06fc991 100644

--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -368,7 +368,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
                irq = find_unbound_irq();
 
                set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
-                                             handle_level_irq, "event");
+                                             handle_edge_irq, "event");
 
                evtchn_to_irq[evtchn] = irq;
                irq_info[irq] = mk_evtchn_info(evtchn);
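
As a counterpart to the level-flow toy earlier in the thread, here is a sketch of why the edge flow in this one-line change is safer -- loosely modeled on handle_edge_irq() in kernel/irq/chip.c, which records re-arrivals in IRQ_PENDING and replays them before returning instead of relying on a later unmask:

-------------------
#include <stdio.h>
#include <stdbool.h>

static bool hw_pending;   /* evtchn latch */
static bool irq_pending;  /* software IRQ_PENDING replay flag */
static int  handled;

static void notify(void)
{
    hw_pending  = true;
    irq_pending = true;   /* edge flow records a replay request */
}

static void edge_flow(void)
{
    while (irq_pending) { /* do { ... } while (IRQ_PENDING), simplified */
        irq_pending = false;
        hw_pending  = false;  /* ack */
        handled++;            /* a notify() racing in here simply sets
                                 irq_pending again and we loop */
    }
}

int main(void)
{
    notify();             /* event #1 */
    notify();             /* event #2 arrives before handling */
    edge_flow();
    printf("handled=%d hw_pending=%d (coalesced, not lost)\n",
           handled, hw_pending);
    return 0;
}
-------------------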
 > Date: Tue, 21 Sep 2010 10:28:34 -0700
> From: jeremy@goop.org
> To: keir.fraser@eu.citrix.com
> CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/21/2010 12:53 AM, Keir Fraser wrote:
> > On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> >
> >> Take a look at domain 0 event channels on ports 105 and 106; I find that on port 105
> >> pending is 1. (In [1/0], the first bit refers to pending, and is 1; the second bit
> >> refers to mask, and is 0.)
> >> 
> >> (XEN) 105 [1/0]: s=3 n=2 d=10 p=1 x=0
> >> (XEN) 106 [0/0]: s=3 n=2 d=10 p=2 x=0
> >> 
> >> In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and it must set
> >> the pending bit.
> >> Considering pending is 1, it looks like the irq was not triggered, am I right?
> >> Since if it had been triggered, it would have cleared the pending bit. (line 361).
> > Yes it looks like dom0 is not handling the event for some reason. Qemu looks
> > like it still works and is waiting for a notification via select(). But that
> > won't happen until dom0 kernel handles the event as an IRQ and calls the
> > relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
> >
> > I think you're on the right track in your debugging. I don't know much about
> > the pv_ops irq handling path, except to say that this aspect is different
> > than non-pv_ops kernels which special-case handling of events bound to
> > user-space rather more. So at the moment my best guess would be that the bug
> > is in the pv_ops kernel irq handling for this type of user-space-bound
> > event.
> 
> We no longer use handle_level_irq because there's a race which loses
> events when interrupt migration is enabled. Current xen/stable-2.6.32.x
> has a proper fix for this, but the quick workaround is to disable
> irqbalanced.
> 
> J
> 
> > -- Keir
> >
> >> ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
> >> 354 void
> >> 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
> >> 356 {
> >> 357 struct irqaction *action;
> >> 358 irqreturn_t action_ret;
> >> 359 
> >> 360 spin_lock(&desc->lock);
> >> 361 mask_ack_irq(desc, irq);
> >> 362 
> >> 363 if (unlikely(desc->status & IRQ_INPROGRESS))
> >> 364 goto out_unlock;
> >> 365 desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
> >> 366 kstat_incr_irqs_this_cpu(irq, desc);
> >> 367 
> >> 
> >> BTW, qemu still works fine when the VM hangs. Below is its strace output.
> >> Not much difference from other working qemu instances, other than select
> >> always timing out.
> >> -------------------
> >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
> >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
> >> clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
> >> 
> >> 
> >> 
> >> 
> >>> Date: Mon, 20 Sep 2010 10:35:46 +0100
> >>> Subject: Re: VM hung after running sometime
> >>> From: keir.fraser@eu.citrix.com
> >>> To: tinnycloud@hotmail.com
> >>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> >>>
> >>> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> >>>
> >>>> Thanks Keir.
> >>>>
> >>>> You're right, after I deeply looked into the wait_on_xen_event_channel, it
> >>>> is
> >>>> smart enough
> >>>> to avoid the race I assumed.
> >>>>
> >>>> How about prepare_wait_on_xen_event_channel ?
> >>>> Currently I still don't know when it will be invoked.
> >>>> Could you enlighten me?
> >>> As you can see it is called from hvm_send_assist_req(), hence it is called
> >>> whenever an ioreq is sent to qemu-dm. Note that it is called *before*
> >>> qemu-dm is notified -- hence it cannot race the wakeup from qemu, as we will
> >>> not get woken until qemu-dm has done the work, and it cannot start the work
> >>> until it is notified, and it is not notified until after
> >>> prepare_wait_on_xen_event_channel has been executed.
> >>>
> >>> -- Keir
> >>>
> >>>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
> >>>>> Subject: Re: VM hung after running sometime
> >>>>> From: keir.fraser@eu.citrix.com
> >>>>> To: tinnycloud@hotmail.com
> >>>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> >>>>>
> >>>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> >>>>>
> >>>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might invoke
> >>>>>> wait_on_xen_event_channel
> >>>>>> (where it is blocked in _VPF_blocked_in_xen).
> >>>>>>
> >>>>>> Here is my assumption of event missed.
> >>>>>>
> >>>>>> step 1: hvm_do_resume execute 260, and suppose p->state is
> >>>>>> STATE_IOREQ_READY
> >>>>>> or STATE_IOREQ_INPROCESS
> >>>>>> step 2: then in cpu_handle_ioreq is in line 547, it execute line 548 so
> >>>>>> quickly before hvm_do_resume execute line 270.
> >>>>>> Well, the event is missed.
> >>>>>> In other words, the _VPF_blocked_in_xen is cleared before it is actually
> >>>>>> set, and Domain U, which is blocked,
> >>>>>> might never get unblocked. Is this possible?
> >>>>> Firstly, that code is very paranoid and it should never actually be the
> >>>>> case
> >>>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in hvm_do_resume().
> >>>>> Secondly, even if you do, take a look at the implementation of
> >>>>> wait_on_xen_event_channel() -- it is smart enough to avoid the race you
> >>>>> mention.
> >>>>>
> >>>>> -- Keir
> >>>>>
> >>>>>
> >>>
> >> 
> >
> >
> >
> 


* Re: Re: VM hung after running sometime
  2010-09-22  0:02                       ` MaoXiaoyun
@ 2010-09-22  0:17                         ` Jeremy Fitzhardinge
  2010-09-22  1:19                           ` MaoXiaoyun
  0 siblings, 1 reply; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-22  0:17 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, keir.fraser

 On 09/21/2010 05:02 PM, MaoXiaoyun wrote:
> Thanks Jeremy.
>
> Regarding the fix you mentioned, did you mean the patch I searched for and
> pasted below? If so, is this all that I need?

No, you need more than that. There are quite a few changes from multiple
branches, so I'd recommend just using a current kernel.

> As for disabling irqbalance, I am afraid it might have a negative
> performance impact, right?

I doubt it. Unless you have so many interrupts that they can't all be
handled on one cpu, it shouldn't make much difference. After all, the
interrupts have to be handled *somewhere*, but it doesn't matter much
where - who cares if cpu0 is mostly handling interrupts if it leaves the
other cpus free for other work?

irqbalanced is primarily concerned with migrating interrupts according
to the CPU topology to save power and (maybe) handle interrupts closer
to the interrupting device. But that's meaningless in a domain where the
vcpus can be mapped to different pcpus from moment to moment.

J


>
> -------------------------------------------------------
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 32f4a2c..06fc991 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -368,7 +368,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
> irq = find_unbound_irq();
> set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
> - handle_level_irq, "event");
> + handle_edge_irq, "event");
> evtchn_to_irq[evtchn] = irq;
> irq_info[irq] = mk_evtchn_info(evtchn);
> > Date: Tue, 21 Sep 2010 10:28:34 -0700
> > From: jeremy@goop.org
> > To: keir.fraser@eu.citrix.com
> > CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> >
> > On 09/21/2010 12:53 AM, Keir Fraser wrote:
> > > On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > >
> > >> Take a look at domain 0 event channels on ports 105 and 106; I find
> > >> that on port 105 pending is 1. (In [1/0], the first bit refers to
> > >> pending, and is 1; the second bit refers to mask, and is 0.)
> > >>
> > >> (XEN) 105 [1/0]: s=3 n=2 d=10 p=1 x=0
> > >> (XEN) 106 [0/0]: s=3 n=2 d=10 p=2 x=0
> > >>
> > >> In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and
> it must set
> > >> the pending bit.
> > >> Considering pending is 1, it looks like the irq was not triggered, am
> > >> I right? Since if it had been triggered, it would have cleared the
> > >> pending bit. (line 361).
> > > Yes it looks like dom0 is not handling the event for some reason.
> Qemu looks
> > > like it still works and is waiting for a notification via
> select(). But that
> > > won't happen until dom0 kernel handles the event as an IRQ and
> calls the
> > > relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
> > >
> > > I think you're on the right track in your debugging. I don't know
> much about
> > > the pv_ops irq handling path, except to say that this aspect is
> different
> > > than non-pv_ops kernels which special-case handling of events bound to
> > > user-space rather more. So at the moment my best guess would be
> that the bug
> > > is in the pv_ops kernel irq handling for this type of user-space-bound
> > > event.
> >
> > We no longer use handle_level_irq because there's a race which loses
> > events when interrupt migration is enabled. Current xen/stable-2.6.32.x
> > has a proper fix for this, but the quick workaround is to disable
> > irqbalanced.
> >
> > J
> >
> > > -- Keir
> > >
> > >>
> ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
> > >> 354 void
> > >> 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
> > >> 356 {
> > >> 357 struct irqaction *action;
> > >> 358 irqreturn_t action_ret;
> > >> 359
> > >> 360 spin_lock(&desc->lock);
> > >> 361 mask_ack_irq(desc, irq);
> > >> 362
> > >> 363 if (unlikely(desc->status & IRQ_INPROGRESS))
> > >> 364 goto out_unlock;
> > >> 365 desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
> > >> 366 kstat_incr_irqs_this_cpu(irq, desc);
> > >> 367
> > >>
> > >> BTW, qemu still works fine when the VM hangs. Below is its strace
> > >> output. Not much difference from other working qemu instances, other
> > >> than select always timing out.
> > >> -------------------
> > >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
> > >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
> > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
> > >>
> > >>
> > >>
> > >>
> > >>> Date: Mon, 20 Sep 2010 10:35:46 +0100
> > >>> Subject: Re: VM hung after running sometime
> > >>> From: keir.fraser@eu.citrix.com
> > >>> To: tinnycloud@hotmail.com
> > >>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> > >>>
> > >>> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > >>>
> > >>>> Thanks Keir.
> > >>>>
> > >>>> You're right, after I deeply looked into the
> wait_on_xen_event_channel, it
> > >>>> is
> > >>>> smart enough
> > >>>> to avoid the race I assumed.
> > >>>>
> > >>>> How about prepare_wait_on_xen_event_channel ?
> > >>>> Currently I still don't know when it will be invoked.
> > >>>> Could you enlighten me?
> > >>> As you can see it is called from hvm_send_assist_req(), hence it
> is called
> > >>> whenever an ioreq is sent to qemu-dm. Note that it is called
> *before*
> > >>> qemu-dm is notified -- hence it cannot race the wakeup from
> qemu, as we will
> > >>> not get woken until qemu-dm has done the work, and it cannot
> start the work
> > >>> until it is notified, and it is not notified until after
> > >>> prepare_wait_on_xen_event_channel has been executed.
> > >>>
> > >>> -- Keir
> > >>>
> > >>>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
> > >>>>> Subject: Re: VM hung after running sometime
> > >>>>> From: keir.fraser@eu.citrix.com
> > >>>>> To: tinnycloud@hotmail.com
> > >>>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> > >>>>>
> > >>>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > >>>>>
> > >>>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might
> invoke
> > >>>>>> wait_on_xen_event_channel
> > >>>>>> (where it is blocked in _VPF_blocked_in_xen).
> > >>>>>>
> > >>>>>> Here is my assumption of event missed.
> > >>>>>>
> > >>>>>> step 1: hvm_do_resume execute 260, and suppose p->state is
> > >>>>>> STATE_IOREQ_READY
> > >>>>>> or STATE_IOREQ_INPROCESS
> > >>>>>> step 2: then in cpu_handle_ioreq is in line 547, it execute
> line 548 so
> > >>>>>> quickly before hvm_do_resume execute line 270.
> > >>>>>> Well, the event is missed.
> > >>>>>> In other words, the _VPF_blocked_in_xen is cleared before it
> is actually
> > >>>>>> set, and Domain U, which is blocked,
> > >>>>>> might never get unblocked. Is this possible?
> > >>>>> Firstly, that code is very paranoid and it should never
> actually be the
> > >>>>> case
> > >>>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in
> hvm_do_resume().
> > >>>>> Secondly, even if you do, take a look at the implementation of
> > >>>>> wait_on_xen_event_channel() -- it is smart enough to avoid the
> race you
> > >>>>> mention.
> > >>>>>
> > >>>>> -- Keir
> > >>>>>
> > >>>>>
> > >>>
> > >>
> > >
> > >
> > >
> >


* RE: Re: VM hung after running sometime
  2010-09-22  0:17                         ` Jeremy Fitzhardinge
@ 2010-09-22  1:19                           ` MaoXiaoyun
  2010-09-22 18:31                             ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-22  1:19 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser




Thanks for the details.
 
Currently guest VMs hang in our heavy IO stress test. (In detail, we have created more than 12 HVMs on our 16-core physical server, 
and inside each HVM, iometer and ab run periodically as heavy IO load.) A guest hang shows up in 1 or 2 days. So the IO is very 
heavy, and so are the interrupts, I think. 
 
According to the hang log, the domain is blocked in _VPF_blocked_in_xen, indicated by "x=1" in the log below, on ports 1 and 2. And 
all our HVMs have the PV driver installed; one thing I am not clear about right now is which IO events travel over these two ports. Do they only include
"mouse, vga" events, or do they also include hard disk events? (If hard disk events are included, the interrupt load would be very heavy, right?
And right now we have 4 physical CPUs allocated to domain 0; is that appropriate?)
 
Anyway, I think I can have irqbalance disabled for a quick test. 

Meanwhile, I will spend some time on the patch merge.

Many thanks.
 
And its domain event info is:
(XEN) Domain 10 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 10:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=105 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=106 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=104 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=107 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=108 x=0
(XEN)        9 [0/0]: s=3 n=0 d=0 p=109 x=0
(XEN)       10 [0/0]: s=3 n=0 d=0 p=110 x=0
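
On the x=1 ports themselves: a hedged reading of the dump is that these are the per-vcpu ioreq channels consumed by Xen on behalf of qemu-dm, so with PV drivers installed they should mostly carry emulated-device traffic (vga, mouse/keyboard, port IO) while disk and network go through the split drivers -- an assumption worth verifying. As for the blocking itself, here is a toy pthreads model (not Xen code) of the ordering Keir described, where registering the waiter *before* sending the notification makes a lost wakeup impossible:

-------------------
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int io_done;

/* stands in for qemu-dm: does the work, then wakes the blocked vcpu */
static void *qemu_dm(void *arg)
{
    pthread_mutex_lock(&lock);
    io_done = 1;
    pthread_cond_signal(&cond);   /* the wakeup that clears the block */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_mutex_lock(&lock);                /* prepare_wait...: register first */
    pthread_create(&t, NULL, qemu_dm, NULL);  /* only then notify qemu */
    while (!io_done)                          /* wait_on_xen_event_channel */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);

    puts("vcpu unblocked: ioreq complete, no wakeup lost");
    pthread_join(t, NULL);
    return 0;
}
-------------------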
 
> Date: Tue, 21 Sep 2010 17:17:12 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: keir.fraser@eu.citrix.com; xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/21/2010 05:02 PM, MaoXiaoyun wrote:
> > Thanks Jeremy.
> >
> > Regarding the fix you mentioned, did you mean the patch I searched for and
> > pasted below? If so, is this all that I need?
> 
> No, you need more than that. There are quite a few changes from multiple
> branches, so I'd recommend just using a current kernel.
> 
> > As for disabling irqbalance, I am afraid it might have a negative
> > performance impact, right?
> 
> I doubt it. Unless you have so many interrupts that they can't all be
> handled on one cpu, it shouldn't make much difference. After all, the
> interrupts have to be handled *somewhere*, but it doesn't matter much
> where - who cares if cpu0 is mostly handling interrupts if it leaves the
> other cpus free for other work?
> 
> irqbalanced is primarily concerned with migrating interrupts according
> to the CPU topology to save power and (maybe) handle interrupts closer
> to the interrupting device. But that's meaningless in a domain where the
> vcpus can be mapped to different pcpus from moment to moment.
> 
> J
> 
> 
> >
> > -------------------------------------------------------
> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index 32f4a2c..06fc991 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -368,7 +368,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
> > irq = find_unbound_irq();
> > set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
> > - handle_level_irq, "event");
> > + handle_edge_irq, "event");
> > evtchn_to_irq[evtchn] = irq;
> > irq_info[irq] = mk_evtchn_info(evtchn);
> > > Date: Tue, 21 Sep 2010 10:28:34 -0700
> > > From: jeremy@goop.org
> > > To: keir.fraser@eu.citrix.com
> > > CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> > >
> > > On 09/21/2010 12:53 AM, Keir Fraser wrote:
> > > > On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > > >
> > > >> Take a look at domain 0 event channels on ports 105 and 106; I find
> > > >> that on port 105 pending is 1. (In [1/0], the first bit refers to
> > > >> pending, and is 1; the second bit refers to mask, and is 0.)
> > > >>
> > > >> (XEN) 105 [1/0]: s=3 n=2 d=10 p=1 x=0
> > > >> (XEN) 106 [0/0]: s=3 n=2 d=10 p=2 x=0
> > > >>
> > > >> In all, we have domain U cpu blocking on _VPF_blocked_in_xen, and
> > it must set
> > > >> the pending bit.
> > > >> Considering pending is 1, it looks like the irq was not triggered, am
> > > >> I right? Since if it had been triggered, it would have cleared the
> > > >> pending bit. (line 361).
> > > > Yes it looks like dom0 is not handling the event for some reason.
> > Qemu looks
> > > > like it still works and is waiting for a notification via
> > select(). But that
> > > > won't happen until dom0 kernel handles the event as an IRQ and
> > calls the
> > > > relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
> > > >
> > > > I think you're on the right track in your debugging. I don't know
> > much about
> > > > the pv_ops irq handling path, except to say that this aspect is
> > different
> > > > than non-pv_ops kernels which special-case handling of events bound to
> > > > user-space rather more. So at the moment my best guess would be
> > that the bug
> > > > is in the pv_ops kernel irq handling for this type of user-space-bound
> > > > event.
> > >
> > > We no longer use handle_level_irq because there's a race which loses
> > > events when interrupt migration is enabled. Current xen/stable-2.6.32.x
> > > has a proper fix for this, but the quick workaround is to disable
> > > irqbalanced.
> > >
> > > J
> > >
> > > > -- Keir
> > > >
> > > >>
> > ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
> > > >> 354 void
> > > >> 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
> > > >> 356 {
> > > >> 357 struct irqaction *action;
> > > >> 358 irqreturn_t action_ret;
> > > >> 359
> > > >> 360 spin_lock(&desc->lock);
> > > >> 361 mask_ack_irq(desc, irq);
> > > >> 362
> > > >> 363 if (unlikely(desc->status & IRQ_INPROGRESS))
> > > >> 364 goto out_unlock;
> > > >> 365 desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
> > > >> 366 kstat_incr_irqs_this_cpu(irq, desc);
> > > >> 367
> > > >>
> > > >> BTW, qemu still works fine when the VM hangs. Below is its strace
> > > >> output. Not much difference from other working qemu instances, other
> > > >> than select always timing out.
> > > >> -------------------
> > > >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
> > > >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
> > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>> Date: Mon, 20 Sep 2010 10:35:46 +0100
> > > >>> Subject: Re: VM hung after running sometime
> > > >>> From: keir.fraser@eu.citrix.com
> > > >>> To: tinnycloud@hotmail.com
> > > >>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> > > >>>
> > > >>> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > > >>>
> > > >>>> Thanks Keir.
> > > >>>>
> > > >>>> You're right, after I deeply looked into the
> > wait_on_xen_event_channel, it
> > > >>>> is
> > > >>>> smart enough
> > > >>>> to avoid the race I assumed.
> > > >>>>
> > > >>>> How about prepare_wait_on_xen_event_channel ?
> > > >>>> Currently I still don't know when it will be invoked.
> > > >>>> Could you enlighten me?
> > > >>> As you can see it is called from hvm_send_assist_req(), hence it
> > is called
> > > >>> whenever an ioreq is sent to qemu-dm. Note that it is called
> > *before*
> > > >>> qemu-dm is notified -- hence it cannot race the wakeup from
> > qemu, as we will
> > > >>> not get woken until qemu-dm has done the work, and it cannot
> > start the work
> > > >>> until it is notified, and it is not notified until after
> > > >>> prepare_wait_on_xen_event_channel has been executed.
> > > >>>
> > > >>> -- Keir
> > > >>>
> > > >>>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
> > > >>>>> Subject: Re: VM hung after running sometime
> > > >>>>> From: keir.fraser@eu.citrix.com
> > > >>>>> To: tinnycloud@hotmail.com
> > > >>>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> > > >>>>>
> > > >>>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > > >>>>>
> > > >>>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might
> > > >>>>>> invoke wait_on_xen_event_channel
> > > >>>>>> (where it is blocked in _VPF_blocked_in_xen).
> > > >>>>>>
> > > >>>>>> Here is my assumption of how the event gets missed.
> > > >>>>>>
> > > >>>>>> step 1: hvm_do_resume executes line 260, and suppose p->state is
> > > >>>>>> STATE_IOREQ_READY or STATE_IOREQ_INPROCESS
> > > >>>>>> step 2: then cpu_handle_ioreq, at line 547, executes line 548 so
> > > >>>>>> quickly that it happens before hvm_do_resume executes line 270.
> > > >>>>>> Well, the event is missed.
> > > >>>>>> In other words, the _VPF_blocked_in_xen flag is cleared before it
> > > >>>>>> is actually set, and domain U, which is blocked,
> > > >>>>>> might never get unblocked. Is this possible?
> > > >>>>> Firstly, that code is very paranoid and it should never
> > actually be the
> > > >>>>> case
> > > >>>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in
> > hvm_do_resume().
> > > >>>>> Secondly, even if you do, take a look at the implementation of
> > > >>>>> wait_on_xen_event_channel() -- it is smart enough to avoid the
> > race you
> > > >>>>> mention.
> > > >>>>>
> > > >>>>> -- Keir
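
The race avoidance described above boils down to a set-flag-then-recheck pattern. A paraphrased, stubbed sketch, not the literal Xen source (vcpu_stub, ioreq_done and deschedule_vcpu are stand-ins):

/* The blocked flag is set *before* the completion condition is
 * re-checked, so the wakeup from qemu-dm either sees the flag (and
 * clears it) or the re-check sees the work already done and the vcpu
 * never sleeps. */
#include <stdatomic.h>
#include <stdbool.h>

#define VPF_BLOCKED_IN_XEN 1UL

struct vcpu_stub { atomic_ulong pause_flags; };

extern bool ioreq_done(void);        /* stands in for the p->state check */
extern void deschedule_vcpu(void);   /* stands in for the scheduler call */

static void wait_sketch(struct vcpu_stub *v)
{
    atomic_fetch_or(&v->pause_flags, VPF_BLOCKED_IN_XEN); /* flag first  */
    if (ioreq_done())                                     /* then recheck */
        atomic_fetch_and(&v->pause_flags, ~VPF_BLOCKED_IN_XEN);
    else
        deschedule_vcpu();           /* a wakeup can no longer be missed */
}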
> > > >>>>>
> > > >>>>>
> > > >>>
> > > >>
> > > >
> > > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xensource.com
> > > > http://lists.xensource.com/xen-devel
> > > >
> > >
> 


[-- Attachment #1.2: Type: text/html, Size: 17419 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Re: VM hung after running sometime
  2010-09-22  1:19                           ` MaoXiaoyun
@ 2010-09-22 18:31                             ` Jeremy Fitzhardinge
  2010-09-23  0:55                               ` MaoXiaoyun
  0 siblings, 1 reply; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-22 18:31 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, keir.fraser

 On 09/21/2010 06:19 PM, MaoXiaoyun wrote:
> Thanks for the details.
>
> Currently guest VMs hang in our heavy IO stress test. (In detail, we
> have created more than 12 HVMs on our 16-core physical server,
> and inside each HVM, iometer and ab periodically run as heavy IO.)
> Guest hangs show up in 1 or 2 days. So the IO is very
> heavy, and so are the interrupts, I think.

What does /proc/interrupts look like?

>
> According to the hang log, the domain is blocked in _VPF_blocked_in_xen,
> indicated by "x=1" in the log below, and that is ports 1 and 2. All
> our HVMs have the PV driver installed; one thing I am not clear about
> right now is which IO events go through these two ports. Do they only
> include "mouse, vga" events, or also hard disk events? (If hard disk
> events are included, the interrupt load would be very heavy, right?
> And right now we have 4 physical CPUs allocated to domain 0; is that
> appropriate?)

I'm not sure of the details of how the qemu<->hvm interaction works, but it
was hangs in blkfront in PV domains that brought the lost-event problem
to light. At the basic event channel level, they both look the
same, and suffer from the same problems.

>
> Anyway, I think I can have irqbalance disabled for a quick test.

Thanks; that should confirm the diagnosis.

> Meanwhile, I will spend some time on the patch merge.

If you're not willing to go to the current kernel, I can help you with
the minimal set of patches to backport.

J

> Many thanks.
>
> And its domain event info is :
> (XEN) Domain 10 polling vCPUs: {No periodic timer}
> (XEN) Event channel information for domain 10:
> (XEN) port [p/m]
> (XEN) 1 [0/1]: s=3 n=0 d=0 p=105 x=1
> (XEN) 2 [0/1]: s=3 n=1 d=0 p=106 x=1
> (XEN) 3 [0/0]: s=3 n=0 d=0 p=104 x=0
> (XEN) 4 [0/1]: s=2 n=0 d=0 x=0
> (XEN) 5 [0/0]: s=6 n=0 x=0
> (XEN) 6 [0/0]: s=2 n=0 d=0 x=0
> (XEN) 7 [0/0]: s=3 n=0 d=0 p=107 x=0
> (XEN) 8 [0/0]: s=3 n=0 d=0 p=108 x=0
> (XEN) 9 [0/0]: s=3 n=0 d=0 p=109 x=0
> (XEN) 10 [0/0]: s=3 n=0 d=0 p=110 x=0
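
For reference, the [p/m] columns in a dump like the one above are the per-port pending and mask bits kept in the shared info page. A minimal sketch of decoding them, assuming the event-channel bitfield layout of Xen's public headers (the struct here is a stub, not the full shared_info):

#include <stdio.h>

#define BITS_PER_LONG (sizeof(unsigned long) * 8)

struct shared_info_stub {
    unsigned long evtchn_pending[BITS_PER_LONG];  /* 1 bit per port */
    unsigned long evtchn_mask[BITS_PER_LONG];
};

static void dump_port(const struct shared_info_stub *s, unsigned int port)
{
    unsigned int w = port / BITS_PER_LONG, b = port % BITS_PER_LONG;
    int p = !!(s->evtchn_pending[w] & (1UL << b));
    int m = !!(s->evtchn_mask[w]    & (1UL << b));

    /* A port stuck at [1/0] -- pending set, unmasked -- as with dom0's
     * port 105 discussed in this thread, means Xen raised the event
     * but dom0's IRQ handler never consumed it. */
    printf("port %u [%d/%d]\n", port, p, m);
}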
>
> > Date: Tue, 21 Sep 2010 17:17:12 -0700
> > From: jeremy@goop.org
> > To: tinnycloud@hotmail.com
> > CC: keir.fraser@eu.citrix.com; xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> >
> > On 09/21/2010 05:02 PM, MaoXiaoyun wrote:
> > > Thanks Jeremy.
> > >
> > > Regarding the fix you mentioned, did you mean the patch I searched for
> > > and pasted below? If so, is this all I need?
> >
> > No, you need more than that. There are quite a few changes from multiple
> > branches, so I'd recommend just using a current kernel.
> >
> > > As for disabling irqbalance, I am afraid it might have a negative
> > > performance impact, right?
> >
> > I doubt it. Unless you have so many interrupts that they can't all be
> > handled on one cpu, it shouldn't make much difference. After all, the
> > interrupts have to be handled *somewhere*, but it doesn't matter much
> > where - who cares if cpu0 is mostly handling interrupts if it leaves the
> > other cpus free for other work?
> >
> > irqbalanced is primarily concerned with migrating interrupts according
> > to the CPU topology to save power and (maybe) handle interrupts closer
> > to the interrupting device. But that's meaningless in a domain where the
> > vcpus can be mapped to different pcpus from moment to moment.
> >
> > J
> >
> >
> > >
> > > -------------------------------------------------------
> > > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > > index 32f4a2c..06fc991 100644
> > > --- a/drivers/xen/events.c
> > > +++ b/drivers/xen/events.c
> > > @@ -368,7 +368,7 @@ int bind_evtchn_to_irq(unsigned int evtchn)
> > >  	irq = find_unbound_irq();
> > >  	set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
> > > -				      handle_level_irq, "event");
> > > +				      handle_edge_irq, "event");
> > >  	evtchn_to_irq[evtchn] = irq;
> > >  	irq_info[irq] = mk_evtchn_info(evtchn);
> > > > Date: Tue, 21 Sep 2010 10:28:34 -0700
> > > > From: jeremy@goop.org
> > > > To: keir.fraser@eu.citrix.com
> > > > CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > > > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> > > >
> > > > On 09/21/2010 12:53 AM, Keir Fraser wrote:
> > > > > On 21/09/2010 06:02, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > > > >
> > > > >> Taking a look at domain 0's event channels on ports 105 and 106, I
> > > > >> find that on port 105 the pending bit is 1. (In [1/0], the first bit
> > > > >> refers to pending, and is 1; the second refers to mask, and is 0.)
> > > > >>
> > > > >> (XEN) 105 [1/0]: s=3 n=2 d=10 p=1 x=0
> > > > >> (XEN) 106 [0/0]: s=3 n=2 d=10 p=2 x=0
> > > > >>
> > > > >> In all, we have a domain U vcpu blocked on _VPF_blocked_in_xen, and
> > > > >> that must have set the pending bit.
> > > > >> Considering that pending is still 1, it looks like the irq was never
> > > > >> triggered, am I right? Since if it had been triggered, it should
> > > > >> have cleared the pending bit (line 361).
> > > > > Yes it looks like dom0 is not handling the event for some reason.
> > > Qemu looks
> > > > > like it still works and is waiting for a notification via
> > > select(). But that
> > > > > won't happen until dom0 kernel handles the event as an IRQ and
> > > calls the
> > > > > relevant irq handler (drivers/xen/evtchn.c:evtchn_interrupt()).
> > > > >
> > > > > I think you're on the right track in your debugging. I don't know
> > > much about
> > > > > the pv_ops irq handling path, except to say that this aspect is
> > > different
> > > > > than non-pv_ops kernels which special-case handling of events
> bound to
> > > > > user-space rather more. So at the moment my best guess would be
> > > that the bug
> > > > > is in the pv_ops kernel irq handling for this type of
> user-space-bound
> > > > > event.
> > > >
> > > > We no longer use handle_level_irq because there's a race which loses
> > > > events when interrupt migration is enabled. Current
> xen/stable-2.6.32.x
> > > > has a proper fix for this, but the quick workaround is to disable
> > > > irqbalanced.
> > > >
> > > > J
> > > >
> > > > > -- Keir
> > > > >
> > > > >>
> > >
> ------------------------------/linux-2.6-pvops.git/kernel/irq/chip.c---
> > > > >> 354 void
> > > > >> 355 handle_level_irq(unsigned int irq, struct irq_desc *desc)
> > > > >> 356 {
> > > > >> 357         struct irqaction *action;
> > > > >> 358         irqreturn_t action_ret;
> > > > >> 359
> > > > >> 360         spin_lock(&desc->lock);
> > > > >> 361         mask_ack_irq(desc, irq);
> > > > >> 362
> > > > >> 363         if (unlikely(desc->status & IRQ_INPROGRESS))
> > > > >> 364                 goto out_unlock;
> > > > >> 365         desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
> > > > >> 366         kstat_incr_irqs_this_cpu(irq, desc);
> > > > >> 367
> > > > >>
> > > > >> BTW, qemu still works fine when the VM hangs. Below is its strace
> > > > >> output. Not much difference from other, working qemu instances,
> > > > >> other than that every select() returns 0 (Timeout).
> > > > >> -------------------
> > > > >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59535265}) = 0
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59629728}) = 0
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59717700}) = 0
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 59806552}) = 0
> > > > >> select(14, [3 7 11 12 13], [], [], {0, 10000}) = 0 (Timeout)
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70234406}) = 0
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70332116}) = 0
> > > > >> clock_gettime(CLOCK_MONOTONIC, {673470, 70419835}) = 0
> > > > >>
> > > > >>
> > > > >>
> > > > >>
> > > > >>> Date: Mon, 20 Sep 2010 10:35:46 +0100
> > > > >>> Subject: Re: VM hung after running sometime
> > > > >>> From: keir.fraser@eu.citrix.com
> > > > >>> To: tinnycloud@hotmail.com
> > > > >>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> > > > >>>
> > > > >>> On 20/09/2010 10:15, "MaoXiaoyun" <tinnycloud@hotmail.com>
> wrote:
> > > > >>>
> > > > >>>> Thanks Keir.
> > > > >>>>
> > > > >>>> You're right; after I looked deeply into
> > > > >>>> wait_on_xen_event_channel, it is smart enough
> > > > >>>> to avoid the race I assumed.
> > > > >>>>
> > > > >>>> How about prepare_wait_on_xen_event_channel?
> > > > >>>> Currently I still don't know when it will be invoked.
> > > > >>>> Could you enlighten me?
> > > > >>> As you can see it is called from hvm_send_assist_req(), hence it
> > > is called
> > > > >>> whenever an ioreq is sent to qemu-dm. Note that it is called
> > > *before*
> > > > >>> qemu-dm is notified -- hence it cannot race the wakeup from
> > > qemu, as we will
> > > > >>> not get woken until qemu-dm has done the work, and it cannot
> > > start the work
> > > > >>> until it is notified, and it is not notified until after
> > > > >>> prepare_wait_on_xen_event_channel has been executed.
> > > > >>>
> > > > >>> -- Keir
> > > > >>>
> > > > >>>>> Date: Mon, 20 Sep 2010 08:45:21 +0100
> > > > >>>>> Subject: Re: VM hung after running sometime
> > > > >>>>> From: keir.fraser@eu.citrix.com
> > > > >>>>> To: tinnycloud@hotmail.com
> > > > >>>>> CC: xen-devel@lists.xensource.com; jbeulich@novell.com
> > > > >>>>>
> > > > >>>>> On 20/09/2010 07:00, "MaoXiaoyun" <tinnycloud@hotmail.com>
> wrote:
> > > > >>>>>
> > > > >>>>>> When IO is not ready, domain U in VMEXIT->hvm_do_resume might
> > > > >>>>>> invoke wait_on_xen_event_channel
> > > > >>>>>> (where it is blocked in _VPF_blocked_in_xen).
> > > > >>>>>>
> > > > >>>>>> Here is my assumption of how the event gets missed.
> > > > >>>>>>
> > > > >>>>>> step 1: hvm_do_resume executes line 260, and suppose p->state is
> > > > >>>>>> STATE_IOREQ_READY or STATE_IOREQ_INPROCESS
> > > > >>>>>> step 2: then cpu_handle_ioreq, at line 547, executes line 548 so
> > > > >>>>>> quickly that it happens before hvm_do_resume executes line 270.
> > > > >>>>>> Well, the event is missed.
> > > > >>>>>> In other words, the _VPF_blocked_in_xen flag is cleared before
> > > > >>>>>> it is actually set, and domain U, which is blocked,
> > > > >>>>>> might never get unblocked. Is this possible?
> > > > >>>>> Firstly, that code is very paranoid and it should never
> > > actually be the
> > > > >>>>> case
> > > > >>>>> that we see STATE_IOREQ_READY or STATE_IOREQ_INPROCESS in
> > > hvm_do_resume().
> > > > >>>>> Secondly, even if you do, take a look at the implementation of
> > > > >>>>> wait_on_xen_event_channel() -- it is smart enough to avoid the
> > > race you
> > > > >>>>> mention.
> > > > >>>>>
> > > > >>>>> -- Keir
> > > > >>>>>
> > > > >>>>>
> > > > >>>
> > > > >>
> > > > >
> > > > >
> > > > > _______________________________________________
> > > > > Xen-devel mailing list
> > > > > Xen-devel@lists.xensource.com
> > > > > http://lists.xensource.com/xen-devel
> > > > >
> > > >
> >
>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-22 18:31                             ` Jeremy Fitzhardinge
@ 2010-09-23  0:55                               ` MaoXiaoyun
  2010-09-23 23:20                                 ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-23  0:55 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 2296 bytes --]


The interrupts file is attached. The server has been running 24 HVM domains for about 40 hours.

 

Well, we may upgrade to a newer kernel in the future, but for now we prefer the fix with the least impact on our present servers.

So it would be really nice if you could offer the set of patches; that would be our first choice.

Later I will kick off the irqbalance-disabled test on different servers, and will keep you posted.

 

Thanks for your kind assistance.

 
> Date: Wed, 22 Sep 2010 11:31:22 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: keir.fraser@eu.citrix.com; xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/21/2010 06:19 PM, MaoXiaoyun wrote:
> > Thanks for the details.
> >
> > Currently guest VMs hang in our heavy IO stress test. (In detail, we
> > have created more than 12 HVMs on our 16-core physical server,
> > and inside each HVM, iometer and ab periodically run as heavy IO.)
> > Guest hangs show up in 1 or 2 days. So the IO is very
> > heavy, and so are the interrupts, I think.
> 
> What does /proc/interrupts look like?
> 
> >
> > According to the hang log, the domain is blocked in _VPF_blocked_in_xen,
> > indicated by "x=1" in the log below, and that is ports 1 and 2. All
> > our HVMs have the PV driver installed; one thing I am not clear about
> > right now is which IO events go through these two ports. Do they only
> > include "mouse, vga" events, or also hard disk events? (If hard disk
> > events are included, the interrupt load would be very heavy, right?
> > And right now we have 4 physical CPUs allocated to domain 0; is that
> > appropriate?)
> 
> I'm not sure of the details of how the qemu<->hvm interaction works, but it
> was hangs in blkfront in PV domains that brought the lost-event problem
> to light. At the basic event channel level, they both look the
> same, and suffer from the same problems.
> 
> >
> > Anyway, I think I can have irqbalance disabled for a quick test.
> 
> Thanks; that should confirm the diagnosis.
> 
> > Meanwhile, I will spend some time on the patch merge.
> 
> If you're not willing to go to the current kernel, I can help you with
> the minimal set of patches to backport.
> 
> J


[-- Attachment #1.2: Type: text/html, Size: 2887 bytes --]

[-- Attachment #2: interrupts.txt --]
[-- Type: text/plain, Size: 16357 bytes --]

[root@r02k05013.yh.hello.com]$cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3       
  1:          2          0          0          0  xen-pirq-ioapic-edge  i8042
  8:          0          0          0          0  xen-pirq-ioapic-edge  rtc0
  9:          0          0          0          0  xen-pirq-ioapic-level  acpi
 12:          4          0          0          0  xen-pirq-ioapic-edge  i8042
 16:         33          0          0          0  xen-pirq-ioapic-level  uhci_hcd:usb3
 18:          2          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb1, uhci_hcd:usb6
 19:          0          0          0          0  xen-pirq-ioapic-level  uhci_hcd:usb5, ata_piix, ata_piix
 23:          0          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb2, uhci_hcd:usb4
667:        264        291        229        158   xen-dyn-event     e01834
668:          1          0          0          0   xen-dyn-event     blkif-backend
669:    1189494    1328744    1183212    1244761   xen-dyn-event     blkif-backend
670:        283        313        211        135   xen-dyn-event     e01833
671:          1          0          0          0   xen-dyn-event     blkif-backend
672:    1186175    1237317    1255610    1050001   xen-dyn-event     blkif-backend
673:        290        281        230        141   xen-dyn-event     e01832
674:    2886207    1834050    2682641     758841   xen-dyn-event     evtchn:qemu-dm
675:    2441688    2512806    2836479     820840   xen-dyn-event     evtchn:qemu-dm
676:        194          9         29         64   xen-dyn-event     evtchn:xenstored
677:          1          0          0          0   xen-dyn-event     blkif-backend
678:    1323360    1108039    1227069    1310550   xen-dyn-event     blkif-backend
679:        287        282        222        150   xen-dyn-event     e01831
680:    2398197    2463716    1963428     788417   xen-dyn-event     evtchn:qemu-dm
681:    2868413    1411144    4259463     625058   xen-dyn-event     evtchn:qemu-dm
682:        148         66         28         43   xen-dyn-event     evtchn:xenstored
683:          1          0          0          0   xen-dyn-event     blkif-backend
684:     984701    1145144     903614    1139842   xen-dyn-event     blkif-backend
685:        293        285        226        229   xen-dyn-event     e01830
686:          1          0          0          0   xen-dyn-event     blkif-backend
687:          1          0          0          0   xen-dyn-event     blkif-backend
688:    2423425    2113025    2965627     628218   xen-dyn-event     evtchn:qemu-dm
689:    2619595    2354653    2837325     838575   xen-dyn-event     evtchn:qemu-dm
690:        189          9         28         58   xen-dyn-event     evtchn:xenstored
691:    2698911    2298090    2541990     627498   xen-dyn-event     evtchn:qemu-dm
692:    2470065    2680959    2779390     688515   xen-dyn-event     evtchn:qemu-dm
693:        149         50         30         57   xen-dyn-event     evtchn:xenstored
694:        280        270        234        156   xen-dyn-event     e01829
695:    2659994    2160899    2842629     558365   xen-dyn-event     evtchn:qemu-dm
696:    2666926    2612526    2631989     665417   xen-dyn-event     evtchn:qemu-dm
697:        200          9         74          0   xen-dyn-event     evtchn:xenstored
698:          1          0          0          0   xen-dyn-event     blkif-backend
699:    1194618    1198304     980744    1213426   xen-dyn-event     blkif-backend
700:        283        308        249        155   xen-dyn-event     e01828
701:          1          0          0          0   xen-dyn-event     blkif-backend
702:    1374815    1092354    1212761    1210611   xen-dyn-event     blkif-backend
703:    3020922    1972095    2688038     492670   xen-dyn-event     evtchn:qemu-dm
704:    2774985    2560137    2582811     718108   xen-dyn-event     evtchn:qemu-dm
705:        131         65         65         28   xen-dyn-event     evtchn:xenstored
706:        277        266        241        158   xen-dyn-event     e01827
707:    1386497    1557386    1658186    1658542   xen-dyn-event     blkif-backend
708:    1237765    1006001    1101023    1387471   xen-dyn-event     blkif-backend
709:    2669620    1917537    2960802     636579   xen-dyn-event     evtchn:qemu-dm
710:    2688940    2554403    2692461     656770   xen-dyn-event     evtchn:qemu-dm
711:        164          0         34         89   xen-dyn-event     evtchn:xenstored
712:        271        260        264        146   xen-dyn-event     e01826
713:          1          0          0          0   xen-dyn-event     blkif-backend
714:    1254730    1286204    1128891    1332915   xen-dyn-event     blkif-backend
715:    2167577    2188830    3254845     592797   xen-dyn-event     evtchn:qemu-dm
716:    2790062    2474432    2499800     810987   xen-dyn-event     evtchn:qemu-dm
717:        141         53         76         31   xen-dyn-event     evtchn:xenstored
718:        283        286        229        146   xen-dyn-event     e01825
719:    2411104    2117494    2813661     827328   xen-dyn-event     evtchn:qemu-dm
720:    2805202    2680621    2307270     820281   xen-dyn-event     evtchn:qemu-dm
721:        191         58         25         31   xen-dyn-event     evtchn:xenstored
722:          1          0          0          0   xen-dyn-event     blkif-backend
723:    1290114    1672164    1521789    1799283   xen-dyn-event     blkif-backend
724:        281        265        256        139   xen-dyn-event     e01824
725:          1          0          0          0   xen-dyn-event     blkif-backend
726:    1257196    1332321    1141115    1255698   xen-dyn-event     blkif-backend
727:    2599013    2094650    2874490     640454   xen-dyn-event     evtchn:qemu-dm
728:    2951208    2638397    2203885     826387   xen-dyn-event     evtchn:qemu-dm
729:        144          0         23        140   xen-dyn-event     evtchn:xenstored
730:        276        382        254        147   xen-dyn-event     e01823
731:          1          0          0          0   xen-dyn-event     blkif-backend
732:     978257    1126588    1181035    1039810   xen-dyn-event     blkif-backend
733:    2724830    2095370    2496704     856504   xen-dyn-event     evtchn:qemu-dm
734:    2741659    2555315    2662988     662222   xen-dyn-event     evtchn:qemu-dm
735:        125         42         23         86   xen-dyn-event     evtchn:xenstored
736:        261        282        242        158   xen-dyn-event     e01822
737:          1          0          0          0   xen-dyn-event     blkif-backend
738:    1301933    1059529    1483475    1233294   xen-dyn-event     blkif-backend
739:    2783333    2125439    2765669     530499   xen-dyn-event     evtchn:qemu-dm
740:    2647397    2793379    2421217     730894   xen-dyn-event     evtchn:qemu-dm
741:        186         28         22         55   xen-dyn-event     evtchn:xenstored
742:        279        275        240        147   xen-dyn-event     e01821
743:    2980030    2025412    2385895     796660   xen-dyn-event     evtchn:qemu-dm
744:    2674747    2576637    2839176     523993   xen-dyn-event     evtchn:qemu-dm
745:          1          0          0          0   xen-dyn-event     blkif-backend
746:    1080065    1146732    1175469    1293849   xen-dyn-event     blkif-backend
747:        172         28         75         17   xen-dyn-event     evtchn:xenstored
748:        280        279        233        149   xen-dyn-event     e01820
749:          1          0          0          0   xen-dyn-event     blkif-backend
750:    2697048    2157744    2797420     547528   xen-dyn-event     evtchn:qemu-dm
751:    2566141    2721183    2323934     992508   xen-dyn-event     evtchn:qemu-dm
752:    1401569    1437218    1555361    1736942   xen-dyn-event     blkif-backend
753:        172         29         16         77   xen-dyn-event     evtchn:xenstored
754:        301        270        229        141   xen-dyn-event     e01819
755:    2759770    2107753    2761036     551617   xen-dyn-event     evtchn:qemu-dm
756:    2850946    2793582    2348832     643076   xen-dyn-event     evtchn:qemu-dm
757:          1          0          0          0   xen-dyn-event     blkif-backend
758:        109        136         15         16   xen-dyn-event     evtchn:xenstored
759:    1178475    1035199    1273963    1074503   xen-dyn-event     blkif-backend
760:        284        289        220        146   xen-dyn-event     e01818
761:          1          0          0          0   xen-dyn-event     blkif-backend
762:    1429057    1267342    1154130    1087126   xen-dyn-event     blkif-backend
763:    2682089    2269349    2661882     573138   xen-dyn-event     evtchn:qemu-dm
764:    3122733    2518573    2344835     643121   xen-dyn-event     evtchn:qemu-dm
765:         87        111         73         12   xen-dyn-event     evtchn:xenstored
766:        291        265        243        144   xen-dyn-event     e01817
767:          1          0          0          0   xen-dyn-event     blkif-backend
768:    2466972    2098633    2920335     732404   xen-dyn-event     evtchn:qemu-dm
769:    2616869    2836894    2370369     773064   xen-dyn-event     evtchn:qemu-dm
770:    1340019    1157601    1247524    1247004   xen-dyn-event     blkif-backend
771:        164         41         16         78   xen-dyn-event     evtchn:xenstored
772:        299        271        239        132   xen-dyn-event     e01816
773:          1          0          0          0   xen-dyn-event     blkif-backend
774:    1295741    1198561    1191189    1199544   xen-dyn-event     blkif-backend
775:    2919143    1894942    2696978     670286   xen-dyn-event     evtchn:qemu-dm
776:    2332743    2944239    2673486     691462   xen-dyn-event     evtchn:qemu-dm
777:        158         98         17         17   xen-dyn-event     evtchn:xenstored
778:        249        270        271        154   xen-dyn-event     e01815
779:    2952708    1931809    2588696     724771   xen-dyn-event     evtchn:qemu-dm
780:    2669335    2376495    2844760     734579   xen-dyn-event     evtchn:qemu-dm
781:        124          7        103         59   xen-dyn-event     evtchn:xenstored
782:          1          0          0          0   xen-dyn-event     blkif-backend
783:    1419414    1471978    1411064    1598980   xen-dyn-event     blkif-backend
784:        274        273        255        170   xen-dyn-event     e01814
785:    2592750    1957899    3060462     598234   xen-dyn-event     evtchn:qemu-dm
786:    2980095    2310928    2628337     690780   xen-dyn-event     evtchn:qemu-dm
787:        140         58         37         65   xen-dyn-event     evtchn:xenstored
788:          1          0          0          0   xen-dyn-event     blkif-backend
789:    1241494    1226787    1191709    1291431   xen-dyn-event     blkif-backend
790:        280        301        295        172   xen-dyn-event     e01813
791:    2172969    2319168    2267344     869920   xen-dyn-event     evtchn:qemu-dm
792:    2792196    1567252    4180147     651138   xen-dyn-event     evtchn:qemu-dm
793:        182         71         38          0   xen-dyn-event     evtchn:xenstored
794:          1          0          0          0   xen-dyn-event     blkif-backend
795:    1248123    1152952    1292707    1241223   xen-dyn-event     blkif-backend
796:        274        232        261        187   xen-dyn-event     e01812
797:    2630346    2034164    2828450     717885   xen-dyn-event     evtchn:qemu-dm
798:    2760337    2597284    2734187     514020   xen-dyn-event     evtchn:qemu-dm
799:        253         16         40          0   xen-dyn-event     evtchn:xenstored
800:          1          0          0          0   xen-dyn-event     blkif-backend
801:    1237552    1163867    1405345    1177313   xen-dyn-event     blkif-backend
802:        267        239        253        183   xen-dyn-event     e01811
803:    2978586    1922367    2772580     543348   xen-dyn-event     evtchn:qemu-dm
804:    2717496    2525253    2836331     548749   xen-dyn-event     evtchn:qemu-dm
805:        202         71         39          7   xen-dyn-event     evtchn:xenstored
806:          1          0          0          0   xen-dyn-event     blkif-backend
807:    1296795    1216838    1043934    1106972   xen-dyn-event     blkif-backend
808:    2891437    2019424    2553105     714291   xen-dyn-event     evtchn:qemu-dm
809:    2711169    2452201    2834698     651470   xen-dyn-event     evtchn:qemu-dm
810:        200          7         38         64   xen-dyn-event     evtchn:xenstored
811:          0          0          0          0   xen-dyn-event     evtchn:xenstored
812:       2458      11240       1517       6415   xen-dyn-event     evtchn:xenstored
814:     811223        886       1465     330202  xen-pirq-msi-x     peth0-4
815:    1866884     343278          0          0  xen-pirq-msi-x     peth0-3
816:    1101450       1112     446011       1242  xen-pirq-msi-x     peth0-2
817:    1943068     295148          0          0  xen-pirq-msi-x     peth0-1
818:    2771749       5849          0          0  xen-pirq-msi-x     peth0-0
819:   67998107       2128          0          0  xen-pirq-msi       ioc0
820:          0          0          0          0   xen-dyn-virq      hvc_console
821:          0          0          0          0   xen-dyn-virq      mce
826:          0          0          0          0   xen-dyn-virq      pcpu
827:       6724       8682       4204      13172   xen-dyn-event     xenbus
828:          0          0          0     132930   xen-dyn-ipi       callfuncsingle3
829:          0          0          0          0   xen-dyn-virq      debug3
830:          0          0          0       6194   xen-dyn-ipi       callfunc3
831:          0          0          0   36541766   xen-dyn-ipi       resched3
832:          0          0          0  140898970   xen-dyn-virq      timer3
833:          0          0     105742          0   xen-dyn-ipi       callfuncsingle2
834:          0          0          0          0   xen-dyn-virq      debug2
835:          0          0       6056          0   xen-dyn-ipi       callfunc2
836:          0          0   51311067          0   xen-dyn-ipi       resched2
837:          0          0  159141682          0   xen-dyn-virq      timer2
838:          0     113324          0          0   xen-dyn-ipi       callfuncsingle1
839:          0          0          0          0   xen-dyn-virq      debug1
840:          0       5579          0          0   xen-dyn-ipi       callfunc1
841:          0   51879145          0          0   xen-dyn-ipi       resched1
842:          0  165894340          0          0   xen-dyn-virq      timer1
843:      90992          0          0          0   xen-dyn-ipi       callfuncsingle0
844:          0          0          0          0   xen-dyn-virq      debug0
845:       4842          0          0          0   xen-dyn-ipi       callfunc0
846:   50688128          0          0          0   xen-dyn-ipi       resched0
847:  182082735          0          0          0   xen-dyn-virq      timer0
NMI:          0          0          0          0   Non-maskable interrupts
LOC:          0          0          0          0   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
CNT:          0          0          0          0   Performance counter interrupts
PND:          0          0          0          0   Performance pending work
RES:   50688128   51879145   51311067   36541766   Rescheduling interrupts
CAL:      95834     118903     111798     139124   Function call interrupts
TLB:          0          0          0          0   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:        483        483        483        483   Machine check polls
ERR:          0
MIS:          0

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Re: VM hung after running sometime
  2010-09-23  0:55                               ` MaoXiaoyun
@ 2010-09-23 23:20                                 ` Jeremy Fitzhardinge
  2010-09-24  4:29                                   ` MaoXiaoyun
                                                     ` (2 more replies)
  0 siblings, 3 replies; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-23 23:20 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, keir.fraser

 On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> The interrupts file is attached. The server has been running 24 HVM
> domains for about 40 hours.
>
> Well, we may upgrade to a newer kernel in the future, but for now
> we prefer the fix with the least impact on our present servers.
> So it would be really nice if you could offer the set of patches;
> that would be our first choice.

Try cherry-picking:
8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
IPIs and VIRQs
66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
edge-triggered
29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
for pirq events
f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
pirq events
0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
fasteoi
d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
enable/disable_dynirq -> unmask/mask_irq
c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
retrigger_dynirq -> irq
f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
enable/disable unmask/mask
43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
hypercall does not unmask
2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
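
The heart of this series is the switch from level-style to edge-style event handling. A simplified, self-contained sketch of the difference, with stub types, paraphrased from kernel/irq/chip.c rather than the literal patched code:

#define IRQ_INPROGRESS 0x1
#define IRQ_PENDING    0x2

struct irq_desc_stub { unsigned int status; };

extern void run_handler(struct irq_desc_stub *desc);

/* handle_level_irq(): the line is masked and acked up front, so an event
 * arriving while IRQ_INPROGRESS is set (e.g. on another CPU right after
 * irqbalance migrated the IRQ) is acknowledged and then dropped. */
static void level_sketch(struct irq_desc_stub *desc)
{
    if (desc->status & IRQ_INPROGRESS)
        return;                       /* the second event is lost here */
    desc->status |= IRQ_INPROGRESS;
    run_handler(desc);
    desc->status &= ~IRQ_INPROGRESS;
}

/* handle_edge_irq(): the same situation records IRQ_PENDING and replays
 * the event once the running handler finishes, so nothing is lost. */
static void edge_sketch(struct irq_desc_stub *desc)
{
    if (desc->status & IRQ_INPROGRESS) {
        desc->status |= IRQ_PENDING;  /* remember instead of dropping */
        return;
    }
    desc->status |= IRQ_INPROGRESS;
    do {
        desc->status &= ~IRQ_PENDING;
        run_handler(desc);
    } while (desc->status & IRQ_PENDING);
    desc->status &= ~IRQ_INPROGRESS;
}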

> Later I will kick off the irqbalance-disabled test on different
> servers, and will keep you posted.

Thanks,
J

>
> Thanks for your kind assistance.
>
> > Date: Wed, 22 Sep 2010 11:31:22 -0700
> > From: jeremy@goop.org
> > To: tinnycloud@hotmail.com
> > CC: keir.fraser@eu.citrix.com; xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> >
> > On 09/21/2010 06:19 PM, MaoXiaoyun wrote:
> > > Thanks for the details.
> > >
> > > Currently guest VMs hang in our heavy IO stress test. (In detail, we
> > > have created more than 12 HVMs on our 16-core physical server,
> > > and inside each HVM, iometer and ab periodically run as heavy IO.)
> > > Guest hangs show up in 1 or 2 days. So the IO is very
> > > heavy, and so are the interrupts, I think.
> >
> > What does /proc/interrupts look like?
> >
> > >
> > > According to the hang log, the domain is blocked in _VPF_blocked_in_xen,
> > > indicated by "x=1" in the log below, and that is ports 1 and 2. All
> > > our HVMs have the PV driver installed; one thing I am not clear about
> > > right now is which IO events go through these two ports. Do they only
> > > include "mouse, vga" events, or also hard disk events? (If hard disk
> > > events are included, the interrupt load would be very heavy, right?
> > > And right now we have 4 physical CPUs allocated to domain 0; is that
> > > appropriate?)
> >
> > I'm not sure of the details of how the qemu<->hvm interaction works, but it
> > was hangs in blkfront in PV domains that brought the lost-event problem
> > to light. At the basic event channel level, they both look the
> > same, and suffer from the same problems.
> >
> > >
> > > Anyway, I think I can have irqbalance disabled for a quick test.
> >
> > Thanks; that should confirm the diagnosis.
> >
> > > Meanwhile, I will spend some time on the patch merge.
> >
> > > If you're not willing to go to the current kernel, I can help you with
> > the minimal set of patches to backport.
> >
> > J
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-23 23:20                                 ` Jeremy Fitzhardinge
@ 2010-09-24  4:29                                   ` MaoXiaoyun
  2010-09-25  9:33                                   ` MaoXiaoyun
  2010-09-28  5:43                                   ` MaoXiaoyun
  2 siblings, 0 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-24  4:29 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 4382 bytes --]


Thank you very much,  Jeremy.

I will have a try.
 
> Date: Thu, 23 Sep 2010 16:20:09 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> > The interrupts file is attached. The server has been running 24 HVM
> > domains for about 40 hours.
> >
> > Well, we may upgrade to a newer kernel in the future, but for now
> > we prefer the fix with the least impact on our present servers.
> > So it would be really nice if you could offer the set of patches;
> > that would be our first choice.
> 
> Try cherry-picking:
> 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
> IPIs and VIRQs
> 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
> edge-triggered
> 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
> for pirq events
> f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
> pirq events
> 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
> 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
> fasteoi
> d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
> enable/disable_dynirq -> unmask/mask_irq
> c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
> retrigger_dynirq -> irq
> f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
> enable/disable unmask/mask
> 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
> cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
> hypercall does not unmask
> 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
> PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
> 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
> d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
> f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
> 
> > Later I will kick off the irqbalance-disabled test on different
> > servers, and will keep you posted.
> 
> Thanks,
> J
> 
> >
> > Thanks for your kind assistance.
> >
> > > Date: Wed, 22 Sep 2010 11:31:22 -0700
> > > From: jeremy@goop.org
> > > To: tinnycloud@hotmail.com
> > > CC: keir.fraser@eu.citrix.com; xen-devel@lists.xensource.com
> > > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> > >
> > > On 09/21/2010 06:19 PM, MaoXiaoyun wrote:
> > > > Thanks for the details.
> > > >
> > > > Currently guest VMs hang in our heavy IO stress test. (In detail, we
> > > > have created more than 12 HVMs on our 16-core physical server,
> > > > and inside each HVM, iometer and ab periodically run as heavy IO.)
> > > > Guest hangs show up in 1 or 2 days. So the IO is very
> > > > heavy, and so are the interrupts, I think.
> > >
> > > What does /proc/interrupts look like?
> > >
> > > >
> > > > According to the hang log, the domain is blocked in _VPF_blocked_in_xen,
> > > > indicated by "x=1" in the log below, and that is ports 1 and 2. All
> > > > our HVMs have the PV driver installed; one thing I am not clear about
> > > > right now is which IO events go through these two ports. Do they only
> > > > include "mouse, vga" events, or also hard disk events? (If hard disk
> > > > events are included, the interrupt load would be very heavy, right?
> > > > And right now we have 4 physical CPUs allocated to domain 0; is that
> > > > appropriate?)
> > >
> > > I'm not sure of the details of how the qemu<->hvm interaction works, but it
> > > was hangs in blkfront in PV domains that brought the lost-event problem
> > > to light. At the basic event channel level, they both look the
> > > same, and suffer from the same problems.
> > >
> > > >
> > > > Anyway, I think I can have irqbalance disabled for a quick test.
> > >
> > > Thanks; that should confirm the diagnosis.
> > >
> > > > Meanwhile, I will spend some time on the patch merge.
> > >
> > > If you're not willing to go to the current kernel, I can help you with
> > > the minimal set of patches to backport.
> > >
> > > J
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xensource.com
> > http://lists.xensource.com/xen-devel
> 

[-- Attachment #1.2: Type: text/html, Size: 5503 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-23 23:20                                 ` Jeremy Fitzhardinge
  2010-09-24  4:29                                   ` MaoXiaoyun
@ 2010-09-25  9:33                                   ` MaoXiaoyun
  2010-09-25 10:40                                     ` wei song
  2010-09-27 11:56                                     ` MaoXiaoyun
  2010-09-28  5:43                                   ` MaoXiaoyun
  2 siblings, 2 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-25  9:33 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 2706 bytes --]


Hi Jeremy:

    

      The test with irqbalance disabled is running. Currently, one server has crashed on the NIC.

      Trace.jpg in the attachments is a screenshot from the serial port, and trace.txt is from /var/log/messages.

      Do you think this is connected with irqbalance being disabled, or could there be other possibilities?

 

      In addition, I find in /proc/interrupts that all interrupts happen on cpu0 (please refer to the attached interrupts.txt). Could this be a possible cause of the server crash, and is there a way to manually configure the system to distribute those interrupts evenly?
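
On distributing interrupts by hand: the standard knob is /proc/irq/<n>/smp_affinity, which takes a hex CPU bitmask. An illustrative sketch (set_irq_affinity is a made-up helper, and the IRQ numbers are borrowed loosely from the attached interrupts.txt, not a recommendation for this particular box):

#include <stdio.h>

/* Write a hex CPU bitmask to /proc/irq/<irq>/smp_affinity. */
static int set_irq_affinity(unsigned int irq, unsigned int cpumask)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%x\n", cpumask);      /* e.g. 0x2 = CPU1, 0x4 = CPU2 */
    return fclose(f);
}

int main(void)
{
    set_irq_affinity(729, 0x2);       /* one qemu-dm event channel -> CPU1 */
    set_irq_affinity(730, 0x4);       /* another                   -> CPU2 */
    return 0;
}

Whether this actually moves a xen-dyn-event IRQ depends on the pv_ops kernel rebinding the underlying event channel when the mask is written, so treat it as something to experiment with.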

       

     Meanwhile, I will start the new test with the patched kernel soon. Thanks.
 
> Date: Thu, 23 Sep 2010 16:20:09 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> > The interrputs file is attached. The server has 24 HVM domains
> > runnning about 40 hours.
> >
> > Well, we may upgrade to the new kernel in the further, but currently
> > we prefer the fix has least impact on our present server.
> > So it is really nice of you if you could offer the sets of patches,
> > also, it would be our fisrt choice.
> 
> Try cherry-picking:
> 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
> IPIs and VIRQs
> 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
> edge-triggered
> 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
> for pirq events
> f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
> pirq events
> 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
> 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
> fasteoi
> d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
> enable/disable_dynirq -> unmask/mask_irq
> c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
> retrigger_dynirq -> irq
> f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
> enable/disable unmask/mask
> 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
> cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
> hypercall does not unmask
> 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
> PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
> 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
> d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
> f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
> 


[-- Attachment #1.2: Type: text/html, Size: 3469 bytes --]

[-- Attachment #2: interrupts.txt --]
[-- Type: text/plain, Size: 11210 bytes --]

           CPU0       CPU1       CPU2       CPU3       
  1:          2          0          0          0  xen-pirq-ioapic-edge  i8042
  8:          0          0          0          0  xen-pirq-ioapic-edge  rtc0
  9:          0          0          0          0  xen-pirq-ioapic-level  acpi
 12:          4          0          0          0  xen-pirq-ioapic-edge  i8042
 16:         33          0          0          0  xen-pirq-ioapic-level  uhci_hcd:usb3
 18:          2          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb1, uhci_hcd:usb6
 19:          0          0          0          0  xen-pirq-ioapic-level  uhci_hcd:usb5, ata_piix, ata_piix
 23:          0          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb2, uhci_hcd:usb4
 32:     165111          0          0          0  xen-pirq-ioapic-level  ioc0
725:          5          0          0          0   xen-dyn-event     001264
726:          1          0          0          0   xen-dyn-event     blkif-backend
727:         24          0          0          0   xen-dyn-event     blkif-backend
728:       1011          0          0          0   xen-dyn-event     blkif-backend
729:      63443          0          0          0   xen-dyn-event     evtchn:qemu-dm
730:     351452          0          0          0   xen-dyn-event     evtchn:qemu-dm
731:        173          0          0          0   xen-dyn-event     evtchn:xenstored
732:         92          0          0          0   xen-dyn-event     001255
733:          1          0          0          0   xen-dyn-event     blkif-backend
734:         24          0          0          0   xen-dyn-event     blkif-backend
735:      16565          0          0          0   xen-dyn-event     blkif-backend
736:         40          0          0          0   xen-dyn-event     0011AB
737:          1          0          0          0   xen-dyn-event     blkif-backend
738:         24          0          0          0   xen-dyn-event     blkif-backend
739:      15396          0          0          0   xen-dyn-event     blkif-backend
740:     152517          0          0          0   xen-dyn-event     evtchn:qemu-dm
741:     445053          0          0          0   xen-dyn-event     evtchn:qemu-dm
742:        162          0          0          0   xen-dyn-event     evtchn:xenstored
743:        135          0          0          0   xen-dyn-event     001196
744:          1          0          0          0   xen-dyn-event     blkif-backend
745:         24          0          0          0   xen-dyn-event     blkif-backend
746:      15434          0          0          0   xen-dyn-event     blkif-backend
747:     149157          0          0          0   xen-dyn-event     evtchn:qemu-dm
748:     448479          0          0          0   xen-dyn-event     evtchn:qemu-dm
749:        161          0          0          0   xen-dyn-event     evtchn:xenstored
750:         41          0          0          0   xen-dyn-event     001246
751:     149992          0          0          0   xen-dyn-event     evtchn:qemu-dm
752:     454913          0          0          0   xen-dyn-event     evtchn:qemu-dm
753:        181          0          0          0   xen-dyn-event     evtchn:xenstored
754:          1          0          0          0   xen-dyn-event     blkif-backend
755:         24          0          0          0   xen-dyn-event     blkif-backend
756:      15480          0          0          0   xen-dyn-event     blkif-backend
757:     152212          0          0          0   xen-dyn-event     evtchn:qemu-dm
758:     448067          0          0          0   xen-dyn-event     evtchn:qemu-dm
759:        184          0          0          0   xen-dyn-event     evtchn:xenstored
760:         40          0          0          0   xen-dyn-event     001217
761:          1          0          0          0   xen-dyn-event     blkif-backend
762:         24          0          0          0   xen-dyn-event     blkif-backend
763:      15552          0          0          0   xen-dyn-event     blkif-backend
764:     159911          0          0          0   xen-dyn-event     evtchn:qemu-dm
765:     447792          0          0          0   xen-dyn-event     evtchn:qemu-dm
766:        174          0          0          0   xen-dyn-event     evtchn:xenstored
767:        144          0          0          0   xen-dyn-event     001273
768:          1          0          0          0   xen-dyn-event     blkif-backend
769:         24          0          0          0   xen-dyn-event     blkif-backend
770:      15423          0          0          0   xen-dyn-event     blkif-backend
771:         67          0          0          0   xen-dyn-event     001208
772:     159510          0          0          0   xen-dyn-event     evtchn:qemu-dm
773:     463235          0          0          0   xen-dyn-event     evtchn:qemu-dm
774:        175          0          0          0   xen-dyn-event     evtchn:xenstored
775:         36          0          0          0   xen-dyn-event     0011F8
776:         37          0          0          0   xen-dyn-event     001227
777:          1          0          0          0   xen-dyn-event     blkif-backend
778:         24          0          0          0   xen-dyn-event     blkif-backend
779:      15468          0          0          0   xen-dyn-event     blkif-backend
780:         37          0          0          0   xen-dyn-event     0011C0
781:         37          0          0          0   xen-dyn-event     001236
782:          1          0          0          0   xen-dyn-event     blkif-backend
783:         24          0          0          0   xen-dyn-event     blkif-backend
784:      15254          0          0          0   xen-dyn-event     blkif-backend
785:          1          0          0          0   xen-dyn-event     blkif-backend
786:         24          0          0          0   xen-dyn-event     blkif-backend
787:      15678          0          0          0   xen-dyn-event     blkif-backend
788:          1          0          0          0   xen-dyn-event     blkif-backend
789:         24          0          0          0   xen-dyn-event     blkif-backend
790:      15506          0          0          0   xen-dyn-event     blkif-backend
791:          1          0          0          0   xen-dyn-event     blkif-backend
792:         24          0          0          0   xen-dyn-event     blkif-backend
793:      15581          0          0          0   xen-dyn-event     blkif-backend
794:     165117          0          0          0   xen-dyn-event     evtchn:qemu-dm
795:     472318          0          0          0   xen-dyn-event     evtchn:qemu-dm
796:        169          0          0          0   xen-dyn-event     evtchn:xenstored
797:     162059          0          0          0   xen-dyn-event     evtchn:qemu-dm
798:     472911          0          0          0   xen-dyn-event     evtchn:qemu-dm
799:        160          0          0          0   xen-dyn-event     evtchn:xenstored
800:     170224          0          0          0   xen-dyn-event     evtchn:qemu-dm
801:     468977          0          0          0   xen-dyn-event     evtchn:qemu-dm
802:        169          0          0          0   xen-dyn-event     evtchn:xenstored
803:     161972          0          0          0   xen-dyn-event     evtchn:qemu-dm
804:     476810          0          0          0   xen-dyn-event     evtchn:qemu-dm
805:        185          0          0          0   xen-dyn-event     evtchn:xenstored
806:     171216          0          0          0   xen-dyn-event     evtchn:qemu-dm
807:     468074          0          0          0   xen-dyn-event     evtchn:qemu-dm
808:        176          0          0          0   xen-dyn-event     evtchn:xenstored
809:          0          0          0          0   xen-dyn-event     evtchn:xenstored
810:      12652          0          0          0   xen-dyn-event     evtchn:xenstored
815:     115554          0          0          0  xen-pirq-msi-x     peth0-4
816:     222540          0          0          0  xen-pirq-msi-x     peth0-3
817:     240398          0          0          0  xen-pirq-msi-x     peth0-2
818:     251425          0          0          0  xen-pirq-msi-x     peth0-1
819:     116320          0          0          0  xen-pirq-msi-x     peth0-0
821:          0          0          0          0   xen-dyn-virq      mce
826:          0          0          0          0   xen-dyn-virq      pcpu
827:      20165          0          0          0   xen-dyn-event     xenbus
828:          0          0          0       8381   xen-dyn-ipi       callfuncsingle3
829:          0          0          0          0   xen-dyn-virq      debug3
830:          0          0          0        408   xen-dyn-ipi       callfunc3
831:          0          0          0    1198602   xen-dyn-ipi       resched3
832:          0          0          0    2030913   xen-dyn-virq      timer3
833:          0          0       8819          0   xen-dyn-ipi       callfuncsingle2
834:          0          0          0          0   xen-dyn-virq      debug2
835:          0          0        422          0   xen-dyn-ipi       callfunc2
836:          0          0    1200357          0   xen-dyn-ipi       resched2
837:          0          0    2188854          0   xen-dyn-virq      timer2
838:          0       8824          0          0   xen-dyn-ipi       callfuncsingle1
839:          0          0          0          0   xen-dyn-virq      debug1
840:          0        402          0          0   xen-dyn-ipi       callfunc1
841:          0    1011937          0          0   xen-dyn-ipi       resched1
842:          0    2299517          0          0   xen-dyn-virq      timer1
843:       6184          0          0          0   xen-dyn-ipi       callfuncsingle0
844:          0          0          0          0   xen-dyn-virq      debug0
845:        130          0          0          0   xen-dyn-ipi       callfunc0
846:     473963          0          0          0   xen-dyn-ipi       resched0
847:    3185890          0          0          0   xen-dyn-virq      timer0
NMI:          0          0          0          0   Non-maskable interrupts
LOC:          0          0          0          0   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
CNT:          0          0          0          0   Performance counter interrupts
PND:          0          0          0          0   Performance pending work
RES:     473963    1011937    1200357    1198602   Rescheduling interrupts
CAL:       6314       9226       9241       8789   Function call interrupts
TLB:          0          0          0          0   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:         10         10         10         10   Machine check polls
ERR:          0
MIS:          0

[-- Attachment #3: trace.jpg --]
[-- Type: image/pjpeg, Size: 127624 bytes --]

[-- Attachment #4: trace.txt --]
[-- Type: text/plain, Size: 2966 bytes --]

Sep 25 12:36:29 r21a11045 kernel: ------------[ cut here ]------------
Sep 25 12:36:29 r21a11045 kernel: WARNING: at net/sched/sch_generic.c:246 dev_watchdog+0x105/0x16a()
Sep 25 12:36:29 r21a11045 kernel: Hardware name: RH2285                
Sep 25 12:36:29 r21a11045 kernel: NETDEV WATCHDOG: peth0 (bnx2): transmit queue 0 timed out
Sep 25 12:36:29 r21a11045 kernel: Modules linked in: xt_iprange xt_mac arptable_filter arp_tables xt_physdev bridge stp llc
netconsole configfs autofs4 ipmi_devintf ipmi_si ipmi_msghandler lockd sunrpc ipv6 xenfs dm_multipath fuse blktap loop nbd
video output sbs sbshc parport_pc lp parport serio_raw bnx2 snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device
snd_pcm_oss snd_mixer_oss snd_pcm i2c_i801 i2c_core snd_timer snd soundcore snd_page_alloc iTCO_wdt iTCO_vendor_support pcspkr
pata_acpi ata_generic ata_piix shpchp mptsas mptscsih mptbase [last unloaded: freq_table]
Sep 25 12:36:29 r21a11045 kernel: Pid: 12406, comm: tapdisk2 Not tainted 2.6.31.13xen #1
Sep 25 12:36:29 r21a11045 kernel: Call Trace:
Sep 25 12:36:29 r21a11045 kernel:  <IRQ>  [<ffffffff81388637>] ? dev_watchdog+0x105/0x16a
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff810535ba>] warn_slowpath_common+0x7c/0x94
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8105368c>] warn_slowpath_fmt+0xa4/0xa6
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8100f0de>] ? xen_clocksource_read+0x21/0x23
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8100f1ad>] ? xen_clocksource_get_cycles+0x9/0x1c
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8100ec68>] ? HYPERVISOR_vcpu_op+0xf/0x11
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8100f14b>] ? xen_vcpuop_set_next_event+0x52/0x68
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81387c07>] ? __netif_tx_lock+0x1b/0x24
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81387d3e>] ? netif_tx_lock+0x46/0x6e
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81372fad>] ? netdev_drivername+0x48/0x50
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81388637>] dev_watchdog+0x105/0x16a
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8106cb38>] ? hrtimer_interrupt+0xe6/0x191
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81388532>] ? dev_watchdog+0x0/0x16a
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff8105de72>] run_timer_softirq+0x126/0x19f
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81059658>] __do_softirq+0xd2/0x19d
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81014f6c>] call_softirq+0x1c/0x30
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff810166dc>] do_softirq+0x47/0x8f
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff810594e0>] irq_exit+0x44/0x83
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81262338>] xen_evtchn_do_upcall+0x156/0x172
Sep 25 12:36:29 r21a11045 kernel:  [<ffffffff81014fbe>] xen_do_hypervisor_callback+0x1e/0x30
Sep 25 12:36:29 r21a11045 kernel:  <EOI>  [<ffffffff810092eb>] ? hypercall_page+0x2eb/0x1000
Sep 25 12:36:29 r21a11045 kernel: ---[ end trace c4a8db1c21ab8f50 ]---

[-- Attachment #5: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Re: VM hung after running sometime
  2010-09-25  9:33                                   ` MaoXiaoyun
@ 2010-09-25 10:40                                     ` wei song
  2010-09-27 18:02                                       ` Jeremy Fitzhardinge
  2010-09-27 11:56                                     ` MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: wei song @ 2010-09-25 10:40 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: jeremy, xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 3124 bytes --]

Hi Jeremy,

     Do you think this issue is caused by building without CONFIG_X86_F00F_BUG?

thanks,
James

On Sat, Sep 25, 2010 at 5:33 PM, MaoXiaoyun <tinnycloud@hotmail.com> wrote:

>  Hi Jeremy:
>
>       The test with irqbalance disabled is running. Currently one server
> has crashed on the NIC.
>       The attached trace.jpg is a screenshot from the serial port, and
> trace.txt is from /var/log/messages.
>       Do you think it is connected with irqbalance being disabled, or are
> there other possibilities?
>
>       In addition, I find in /proc/interrupts that all interrupts happen
> on cpu0 (please refer to the attached interrupts.txt). Could that be a
> possible cause of the server crash, and is there a way I can configure
> the system manually to distribute those interrupts evenly?
>
>      Meanwhile, I will start the new test with the patched kernel soon. Thanks.
>
>
> > Date: Thu, 23 Sep 2010 16:20:09 -0700
>
> > From: jeremy@goop.org
> > To: tinnycloud@hotmail.com
> > CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
>
> > Subject: Re: [Xen-devel] Re: VM hung after running sometime
> >
> > On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> > > The interrupts file is attached. The server has 24 HVM domains
> > > running for about 40 hours.
> > >
> > > Well, we may upgrade to the new kernel in the future, but currently
> > > we prefer the fix with the least impact on our present server.
> > > So it would be really nice of you to offer the set of patches;
> > > it would be our first choice.
> >
> > Try cherry-picking:
> > 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
> > IPIs and VIRQs
> > 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
> > edge-triggered
> > 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
> > for pirq events
> > f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
> > pirq events
> > 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using
> fasteoi
> > 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
> > fasteoi
> > d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
> > enable/disable_dynirq -> unmask/mask_irq
> > c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
> > retrigger_dynirq -> irq
> > f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
> > enable/disable unmask/mask
> > 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
> > cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
> > hypercall does not unmask
> > 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
> > PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
> > 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
> > d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI
> too
> > f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn
> resume
> >
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
>

[-- Attachment #1.2: Type: text/html, Size: 4450 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-25  9:33                                   ` MaoXiaoyun
  2010-09-25 10:40                                     ` wei song
@ 2010-09-27 11:56                                     ` MaoXiaoyun
  1 sibling, 0 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-27 11:56 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 8455 bytes --]


Hi Jeremy:

 

     About the NIC crash: it turned out to be a problem in our NIC driver. 

     The crash no longer shows up now that the driver has been upgraded.

     The test with irqbalance disabled is running smoothly so far.

 

     Meanwhile, we merged your patches into our current kernel (2.6.31) and started the test.

     Unfortunately, one of the VMs hung a few minutes after it started.

     But this time an abnormal kernel backtrace was logged in /var/log/messages.

 

     I wonder whether the patches are compatible with our current kernel, or whether I need some extra modifications.

    Considering the good result of the irqbalance-disabled test, I'm afraid I may have made some mistakes in the patch

    merge, since I'm new to git (-_-!!).  

    So I attached the merged file (only events.c); could you help review it? :-) 

 

    Thanks for your time.      

 

Kernel backtrace below:

---------------------------------------------------------------------------------------------------------------------

Sep 27 18:36:10 pc1 kernel: ------------[ cut here ]------------
Sep 27 18:36:10 pc1 kernel: WARNING: at net/core/skbuff.c:475 skb_release_head_state+0x71/0xf8()
Sep 27 18:36:10 pc1 kernel: Hardware name: PowerEdge R710
Sep 27 18:36:10 pc1 kernel: Pid: 0, comm: swapper Tainted: G        W  2.6.31.13xen #4
Sep 27 18:36:10 pc1 kernel: Call Trace:
Sep 27 18:36:10 pc1 kernel:  <IRQ>  [<ffffffff8136c751>] ? skb_release_head_state+0x71/0xf8
Sep 27 18:36:10 pc1 kernel:  [<ffffffff810535ba>] warn_slowpath_common+0x7c/0x94
Sep 27 18:36:10 pc1 kernel:  [<ffffffff810535e6>] warn_slowpath_null+0x14/0x16
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8136c751>] skb_release_head_state+0x71/0xf8
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8136c7ee>] skb_release_all+0x16/0x22
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8136c837>] __kfree_skb+0x16/0x84
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8136c8d2>] consume_skb+0x2d/0x2f
Sep 27 18:36:10 pc1 kernel:  [<ffffffffa0069aab>] bnx2_poll_work+0x1b7/0xa0f [bnx2]
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81260f00>] ? HYPERVISOR_event_channel_op+0x1a/0x4d
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8126102a>] ? unmask_evtchn+0x4f/0xa3
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100eb71>] ? xen_force_evtchn_callback+0xd/0xf
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f292>] ? check_events+0x12/0x20
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81414226>] ? _spin_lock_irqsave+0x1e/0x37
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f27f>] ? xen_restore_fl_direct_end+0x0/0x1
Sep 27 18:36:10 pc1 kernel:  [<ffffffffa006c8a7>] bnx2_poll_msix+0x38/0x92 [bnx2]
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81382eaf>] netpoll_poll+0xa3/0x38f
Sep 27 18:36:10 pc1 kernel:  [<ffffffff810ec08b>] ? __kmalloc_track_caller+0x11a/0x12c
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f27f>] ? xen_restore_fl_direct_end+0x0/0x1
Sep 27 18:36:10 pc1 kernel:  [<ffffffff813832b4>] netpoll_send_skb+0x119/0x1f7
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8138360d>] netpoll_send_udp+0x1e4/0x1f1
Sep 27 18:36:10 pc1 kernel:  [<ffffffffa021d18f>] write_msg+0x8d/0xd2 [netconsole]
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f27f>] ? xen_restore_fl_direct_end+0x0/0x1
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81053937>] __call_console_drivers+0x6c/0x7e
Sep 27 18:36:10 pc1 kernel:  [<ffffffff810539a9>] _call_console_drivers+0x60/0x64
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81414226>] ? _spin_lock_irqsave+0x1e/0x37
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81053df2>] release_console_sem+0x11a/0x19c
Sep 27 18:36:10 pc1 kernel:  [<ffffffff810543b9>] vprintk+0x2e1/0x31a
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f1a5>] ? xen_clocksource_get_cycles+0x9/0x1c
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f0d6>] ? xen_clocksource_read+0x21/0x23
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100eb71>] ? xen_force_evtchn_callback+0xd/0xf
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81054499>] printk+0xa7/0xa9
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f27f>] ? xen_restore_fl_direct_end+0x0/0x1
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f0d6>] ? xen_clocksource_read+0x21/0x23
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100f1a5>] ? xen_clocksource_get_cycles+0x9/0x1c
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81070fcf>] ? clocksource_read+0xf/0x11
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81071695>] ? getnstimeofday+0x5b/0xbb
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8126125d>] ? cpumask_next+0x1e/0x20
Sep 27 18:36:10 pc1 kernel:  [<ffffffff812627c1>] xen_debug_interrupt+0x256/0x289
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81098276>] handle_IRQ_event+0x66/0x120
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81099947>] handle_percpu_irq+0x41/0x6e
Sep 27 18:36:10 pc1 kernel:  [<ffffffff812624dd>] xen_evtchn_do_upcall+0x102/0x190
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81014fbe>] xen_do_hypervisor_callback+0x1e/0x30
Sep 27 18:36:10 pc1 kernel:  <EOI>  [<ffffffff810093aa>] ? hypercall_page+0x3aa/0x1000
Sep 27 18:36:10 pc1 kernel:  [<ffffffff810093aa>] ? hypercall_page+0x3aa/0x1000
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100ebb7>] ? xen_safe_halt+0x10/0x1a
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8100c0f5>] ? xen_idle+0x3b/0x52
Sep 27 18:36:10 pc1 kernel:  [<ffffffff81012c9d>] ? cpu_idle+0x5d/0x8c
Sep 27 18:36:10 pc1 kernel:  [<ffffffff8140aaa3>] ? cpu_bringup_and_idle+0x13/0x15
Sep 27 18:36:10 pc1 kernel: ---[ end trace d83eb1ebe87fed96 ]---


     


 


From: tinnycloud@hotmail.com
To: jeremy@goop.org
CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
Subject: RE: [Xen-devel] Re: VM hung after running sometime
Date: Sat, 25 Sep 2010 17:33:23 +0800




Hi Jeremy:
    
      The test with irqbalance disabled is running. Currently one server has crashed on the NIC.
      The attached trace.jpg is a screenshot from the serial port, and trace.txt is from /var/log/messages.
      Do you think it is connected with irqbalance being disabled, or are there other possibilities?
 
      In addition, I find in /proc/interrupts that all interrupts happen on cpu0 (please refer to the attached
interrupts.txt). Could that be a possible cause of the server crash, and is there a way I can configure the
system manually to distribute those interrupts evenly?
       
     Meanwhile, I will start the new test with the patched kernel soon. Thanks. 
 
> Date: Thu, 23 Sep 2010 16:20:09 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> > The interrupts file is attached. The server has 24 HVM domains
> > running for about 40 hours.
> >
> > Well, we may upgrade to the new kernel in the future, but currently
> > we prefer the fix with the least impact on our present server.
> > So it would be really nice of you to offer the set of patches;
> > it would be our first choice.
> 
> Try cherry-picking:
> 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
> IPIs and VIRQs
> 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
> edge-triggered
> 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
> for pirq events
> f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
> pirq events
> 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
> 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
> fasteoi
> d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
> enable/disable_dynirq -> unmask/mask_irq
> c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
> retrigger_dynirq -> irq
> f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
> enable/disable unmask/mask
> 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
> cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
> hypercall does not unmask
> 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
> PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
> 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
> d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
> f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
> 


[-- Attachment #1.2: Type: text/html, Size: 10563 bytes --]

[-- Attachment #2: events.c --]
[-- Type: text/plain, Size: 34508 bytes --]

/*
 * Xen event channels
 *
 * Xen models interrupts with abstract event channels.  Because each
 * domain gets 1024 event channels, but NR_IRQS is not that large, we
 * must dynamically map irqs<->event channels.  The event channels
 * interface with the rest of the kernel by defining a xen interrupt
 * chip.  When an event is received, it is mapped to an irq and sent
 * through the normal interrupt processing path.
 *
 * There are four kinds of events which can be mapped to an event
 * channel:
 *
 * 1. Inter-domain notifications.  This includes all the virtual
 *    device events, since they're driven by front-ends in another domain
 *    (typically dom0).
 * 2. VIRQs, typically used for timers.  These are per-cpu events.
 * 3. IPIs.
 * 4. PIRQs - Hardware interrupts.
 *
 * Jeremy Fitzhardinge <jeremy@xensource.com>, XenSource Inc, 2007
 */
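
/*
 * Usage sketch (illustrative only; my_timer_interrupt and "my-timer"
 * are made-up names, everything else comes from this file or the
 * headers it includes): a typical in-kernel consumer binds a per-cpu
 * VIRQ to a handler and never touches event-channel ports directly,
 * leaving the mask/unmask and EOI work to the irq chips defined below.
 *
 *	static irqreturn_t my_timer_interrupt(int irq, void *dev_id)
 *	{
 *		return IRQ_HANDLED;
 *	}
 *
 *	int irq = bind_virq_to_irqhandler(VIRQ_TIMER, cpu,
 *					  my_timer_interrupt, IRQF_DISABLED,
 *					  "my-timer", NULL);
 *	...
 *	unbind_from_irqhandler(irq, NULL);
 */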

#include <linux/linkage.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/bootmem.h>
#include <linux/irqnr.h>
#include <linux/pci_regs.h>
#include <linux/pci.h>
#include <linux/msi.h>

#include <asm/ptrace.h>
#include <asm/irq.h>
#include <asm/idle.h>
#include <asm/io_apic.h>
#include <asm/sync_bitops.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/hypervisor.h>
#include <asm/xen/pci.h>

#include <xen/xen-ops.h>
#include <xen/events.h>
#include <xen/interface/xen.h>
#include <xen/interface/event_channel.h>
#include <xen/page.h>

#include "../pci/msi.h"

/*
 * This lock protects updates to the following mapping and reference-count
 * arrays. The lock does not need to be acquired to read the mapping tables.
 */
static DEFINE_SPINLOCK(irq_mapping_update_lock);
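
/*
 * Sketch of that discipline (illustrative): writers do
 *
 *	spin_lock(&irq_mapping_update_lock);
 *	evtchn_to_irq[evtchn] = irq;
 *	irq_info[irq] = mk_evtchn_info(evtchn);
 *	spin_unlock(&irq_mapping_update_lock);
 *
 * while read-side helpers such as evtchn_from_irq() deliberately take
 * no lock.
 */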

/* IRQ <-> VIRQ mapping. */
static DEFINE_PER_CPU(int, virq_to_irq[NR_VIRQS]) = {[0 ... NR_VIRQS-1] = -1};

/* IRQ <-> IPI mapping */
static DEFINE_PER_CPU(int, ipi_to_irq[XEN_NR_IPIS]) = {[0 ... XEN_NR_IPIS-1] = -1};

/* Interrupt types. */
enum xen_irq_type {
	IRQT_UNBOUND = 0,
	IRQT_PIRQ,
	IRQT_VIRQ,
	IRQT_IPI,
	IRQT_EVTCHN
};

/*
 * Packed IRQ information:
 * type - enum xen_irq_type
 * event channel - irq->event channel mapping
 * cpu - cpu this event channel is bound to
 * index - type-specific information:
 *    PIRQ - vector, with MSB being "needs EOI"
 *    VIRQ - virq number
 *    IPI - IPI vector
 *    EVTCHN -
 */
struct irq_info
{
	enum xen_irq_type type;	/* type */
	unsigned short evtchn;	/* event channel */
	unsigned short cpu;	/* cpu bound */

	union {
		unsigned short virq;
		enum ipi_vector ipi;
		struct {
			unsigned short nr;
			unsigned char vector;
			unsigned char flags;
			domid_t domid;
		} pirq;
	} u;
};
#define PIRQ_SHAREABLE	(1 << 1)

/* Bitmap indicating which PIRQs require Xen to be notified on unmask. */
static bool pirq_eoi_does_unmask;
static unsigned long *pirq_needs_eoi_bits;

static struct irq_info *irq_info;

static int *evtchn_to_irq;
struct cpu_evtchn_s {
	unsigned long bits[NR_EVENT_CHANNELS/BITS_PER_LONG];
};

static __initdata struct cpu_evtchn_s init_evtchn_mask = {
	.bits[0 ... (NR_EVENT_CHANNELS/BITS_PER_LONG)-1] = ~0ul,
};
static struct cpu_evtchn_s *cpu_evtchn_mask_p = &init_evtchn_mask;

static inline unsigned long *cpu_evtchn_mask(int cpu)
{
	return cpu_evtchn_mask_p[cpu].bits;
}

/* Xen will never allocate port zero for any purpose. */
#define VALID_EVTCHN(chn)	((chn) != 0)

static struct irq_chip xen_dynamic_chip;
static struct irq_chip xen_pirq_chip;
static struct irq_chip xen_percpu_chip;

/* Constructor for packed IRQ information. */
static struct irq_info mk_unbound_info(void)
{
	return (struct irq_info) { .type = IRQT_UNBOUND };
}

static struct irq_info mk_evtchn_info(unsigned short evtchn)
{
	return (struct irq_info) { .type = IRQT_EVTCHN, .evtchn = evtchn,
			.cpu = 0 };
}

static struct irq_info mk_ipi_info(unsigned short evtchn, enum ipi_vector ipi)
{
	return (struct irq_info) { .type = IRQT_IPI, .evtchn = evtchn,
			.cpu = 0, .u.ipi = ipi };
}

static struct irq_info mk_virq_info(unsigned short evtchn, unsigned short virq)
{
	return (struct irq_info) { .type = IRQT_VIRQ, .evtchn = evtchn,
			.cpu = 0, .u.virq = virq };
}

static struct irq_info mk_pirq_info(unsigned short evtchn,
				    unsigned short pirq, unsigned short vector)
{
	return (struct irq_info) { .type = IRQT_PIRQ, .evtchn = evtchn,
			.cpu = 0, .u.pirq =
			{ .nr = pirq, .vector = vector, .domid = DOMID_SELF } };
}

/*
 * Accessors for packed IRQ information.
 */
static struct irq_info *info_for_irq(unsigned irq)
{
	return &irq_info[irq];
}

static unsigned int evtchn_from_irq(unsigned irq)
{
	return info_for_irq(irq)->evtchn;
}

unsigned irq_from_evtchn(unsigned int evtchn)
{
	return evtchn_to_irq[evtchn];
}
EXPORT_SYMBOL_GPL(irq_from_evtchn);

static enum ipi_vector ipi_from_irq(unsigned irq)
{
	struct irq_info *info = info_for_irq(irq);

	BUG_ON(info == NULL);
	BUG_ON(info->type != IRQT_IPI);

	return info->u.ipi;
}

static unsigned virq_from_irq(unsigned irq)
{
	struct irq_info *info = info_for_irq(irq);

	BUG_ON(info == NULL);
	BUG_ON(info->type != IRQT_VIRQ);

	return info->u.virq;
}

static unsigned gsi_from_irq(unsigned irq)
{
	struct irq_info *info = info_for_irq(irq);

	BUG_ON(info == NULL);
	BUG_ON(info->type != IRQT_PIRQ);

	return info->u.pirq.nr;
}

static unsigned vector_from_irq(unsigned irq)
{
	struct irq_info *info = info_for_irq(irq);

	BUG_ON(info == NULL);
	BUG_ON(info->type != IRQT_PIRQ);

	return info->u.pirq.vector;
}

static enum xen_irq_type type_from_irq(unsigned irq)
{
	return info_for_irq(irq)->type;
}

static unsigned cpu_from_irq(unsigned irq)
{
	return info_for_irq(irq)->cpu;
}

static unsigned int cpu_from_evtchn(unsigned int evtchn)
{
	int irq = evtchn_to_irq[evtchn];
	unsigned ret = 0;

	if (irq != -1)
		ret = cpu_from_irq(irq);

	return ret;
}

static bool pirq_needs_eoi(unsigned irq)
{
	struct irq_info *info = info_for_irq(irq);

	BUG_ON(info->type != IRQT_PIRQ);

	return test_bit(info->u.pirq.nr, pirq_needs_eoi_bits);
}

static inline unsigned long active_evtchns(unsigned int cpu,
					   struct shared_info *sh,
					   unsigned int idx)
{
	return (sh->evtchn_pending[idx] &
		cpu_evtchn_mask(cpu)[idx] &
		~sh->evtchn_mask[idx]);
}

static void bind_evtchn_to_cpu(unsigned int chn, unsigned int cpu)
{
	int irq = evtchn_to_irq[chn];

	BUG_ON(irq == -1);
#ifdef CONFIG_SMP
	cpumask_copy(irq_to_desc(irq)->affinity, cpumask_of(cpu));
#endif

	__clear_bit(chn, cpu_evtchn_mask(cpu_from_irq(irq)));
	__set_bit(chn, cpu_evtchn_mask(cpu));

	irq_info[irq].cpu = cpu;
}

static void init_evtchn_cpu_bindings(void)
{
#ifdef CONFIG_SMP
	struct irq_desc *desc;
	int i;

	/* By default all event channels notify CPU#0. */
	for_each_irq_desc(i, desc) {
		cpumask_copy(desc->affinity, cpumask_of(0));
	}
#endif

	memset(cpu_evtchn_mask(0), ~0, sizeof(cpu_evtchn_mask(0)));
}

static inline void clear_evtchn(int port)
{
	struct shared_info *s = HYPERVISOR_shared_info;
	sync_clear_bit(port, &s->evtchn_pending[0]);
}

static inline void set_evtchn(int port)
{
	struct shared_info *s = HYPERVISOR_shared_info;
	sync_set_bit(port, &s->evtchn_pending[0]);
}

static inline int test_evtchn(int port)
{
	struct shared_info *s = HYPERVISOR_shared_info;
	return sync_test_bit(port, &s->evtchn_pending[0]);
}


/**
 * notify_remote_via_irq - send event to remote end of event channel via irq
 * @irq: irq of event channel to send event to
 *
 * Unlike notify_remote_via_evtchn(), this is safe to use across
 * save/restore. Notifications on a broken connection are silently
 * dropped.
 */
void notify_remote_via_irq(int irq)
{
	int evtchn = evtchn_from_irq(irq);

	if (VALID_EVTCHN(evtchn))
		notify_remote_via_evtchn(evtchn);
}
EXPORT_SYMBOL_GPL(notify_remote_via_irq);

static void mask_evtchn(int port)
{
	struct shared_info *s = HYPERVISOR_shared_info;
	sync_set_bit(port, &s->evtchn_mask[0]);
}

static void mask_irq(unsigned int irq)
{
	int evtchn = evtchn_from_irq(irq);

	if (VALID_EVTCHN(evtchn))
		mask_evtchn(evtchn);
}

static void unmask_evtchn(int port)
{
	struct shared_info *s = HYPERVISOR_shared_info;
	unsigned int cpu = get_cpu();

	BUG_ON(!irqs_disabled());

	/* Slow path (hypercall) if this is a non-local port. */
	if (unlikely(cpu != cpu_from_evtchn(port))) {
		struct evtchn_unmask unmask = { .port = port };
		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
	} else {
		struct vcpu_info *vcpu_info = __get_cpu_var(xen_vcpu);

		sync_clear_bit(port, &s->evtchn_mask[0]);

		/*
		 * The following is basically the equivalent of
		 * 'hw_resend_irq'. Just like a real IO-APIC we 'lose
		 * the interrupt edge' if the channel is masked.
		 */
		if (sync_test_bit(port, &s->evtchn_pending[0]) &&
		    !sync_test_and_set_bit(port / BITS_PER_LONG,
					   &vcpu_info->evtchn_pending_sel))
			vcpu_info->evtchn_upcall_pending = 1;
	}

	put_cpu();
}

static void unmask_irq(unsigned int irq)
{
	int evtchn = evtchn_from_irq(irq);

	if (VALID_EVTCHN(evtchn))
		unmask_evtchn(evtchn);
}

static int get_nr_hw_irqs(void)
{
	int ret = 1;

#ifdef CONFIG_X86_IO_APIC
	ret = get_nr_irqs_gsi();
#endif

	return ret;
}

static int find_unbound_irq(void)
{
	int irq;
	struct irq_desc *desc;
	int start = get_nr_hw_irqs();

	if (start == nr_irqs)
		goto no_irqs;

	/* nr_irqs is a magic value. Must not use it.*/
	for (irq = nr_irqs-1; irq > start; irq--)
		if (irq_info[irq].type == IRQT_UNBOUND)
			break;

	if (irq == start)
		goto no_irqs;

	desc = irq_to_desc_alloc_node(irq, 0);
	if (WARN_ON(desc == NULL))
		return -1;

	dynamic_irq_init(irq);

	return irq;

no_irqs:
	panic("No available IRQ to bind to: increase nr_irqs!\n");
}

static bool identity_mapped_irq(unsigned irq)
{
	/* identity map all the hardware irqs */
	return irq < get_nr_hw_irqs();
}

static void pirq_eoi(int irq)
{
	struct irq_info *info = info_for_irq(irq);
	struct physdev_eoi eoi = { .irq = info->u.pirq.nr };
	bool need_eoi;

	need_eoi = pirq_needs_eoi(irq);

	if (!need_eoi || !pirq_eoi_does_unmask)
		unmask_evtchn(info->evtchn);

	if (need_eoi) {
		int rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
		WARN_ON(rc);
	}
}

static void pirq_query_unmask(int irq)
{
	struct physdev_irq_status_query irq_status;
	struct irq_info *info = info_for_irq(irq);

	if (pirq_eoi_does_unmask)
		return;

	BUG_ON(info->type != IRQT_PIRQ);

	irq_status.irq = info->u.pirq.nr;
	if (HYPERVISOR_physdev_op(PHYSDEVOP_irq_status_query, &irq_status))
		irq_status.flags = 0;

	clear_bit(info->u.pirq.nr, pirq_needs_eoi_bits);
	if (irq_status.flags & XENIRQSTAT_needs_eoi)
		set_bit(info->u.pirq.nr, pirq_needs_eoi_bits);
}

static bool probing_irq(int irq)
{
	struct irq_desc *desc = irq_to_desc(irq);

	return desc && desc->action == NULL;
}

static unsigned int startup_pirq(unsigned int irq)
{
	struct evtchn_bind_pirq bind_pirq;
	struct irq_info *info = info_for_irq(irq);
	int evtchn = evtchn_from_irq(irq);
	int rc;

	BUG_ON(info->type != IRQT_PIRQ);

	if (VALID_EVTCHN(evtchn))
		goto out;

	bind_pirq.pirq = info->u.pirq.nr;
	/* NB. We are happy to share unless we are probing. */
	bind_pirq.flags = info->u.pirq.flags & PIRQ_SHAREABLE ?
					BIND_PIRQ__WILL_SHARE : 0;
	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind_pirq);
	if (rc != 0) {
		if (!probing_irq(irq))
			printk(KERN_INFO "Failed to obtain physical IRQ %d\n",
			       irq);
		return 0;
	}
	evtchn = bind_pirq.port;

	pirq_query_unmask(irq);

	evtchn_to_irq[evtchn] = irq;
	bind_evtchn_to_cpu(evtchn, 0);
	info->evtchn = evtchn;

 out:
	pirq_eoi(irq);

	return 0;
}

static void shutdown_pirq(unsigned int irq)
{
	struct evtchn_close close;
	struct irq_info *info = info_for_irq(irq);
	int evtchn = evtchn_from_irq(irq);

	BUG_ON(info->type != IRQT_PIRQ);

	if (!VALID_EVTCHN(evtchn))
		return;

	mask_evtchn(evtchn);

	close.port = evtchn;
	if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
		BUG();

	bind_evtchn_to_cpu(evtchn, 0);
	evtchn_to_irq[evtchn] = -1;
	info->evtchn = 0;
}

static void ack_pirq(unsigned int irq)
{
	move_masked_irq(irq);
	
	pirq_eoi(irq);
}

static void end_pirq(unsigned int irq)
{
	int evtchn = evtchn_from_irq(irq);
	struct irq_desc *desc = irq_to_desc(irq);

	if (WARN_ON(!desc))
		return;

	if ((desc->status & (IRQ_DISABLED|IRQ_PENDING)) ==
	    (IRQ_DISABLED|IRQ_PENDING)) {
		shutdown_pirq(irq);
	} else if (VALID_EVTCHN(evtchn)) {
		pirq_eoi(irq);
	}
}

static int find_irq_by_gsi(unsigned gsi)
{
	int irq;

	for (irq = 0; irq < nr_irqs; irq++) {
		struct irq_info *info = info_for_irq(irq);

		if (info == NULL || info->type != IRQT_PIRQ)
			continue;

		if (gsi_from_irq(irq) == gsi)
			return irq;
	}

	return -1;
}

/*
 * Allocate a physical irq, along with a vector.  We don't assign an
 * event channel until the irq is actually started up.  Return an
 * existing irq if we've already got one for the gsi.
 */
int xen_allocate_pirq(unsigned gsi, int shareable, char *name)
{
	int irq;
	struct physdev_irq irq_op;

	spin_lock(&irq_mapping_update_lock);

	irq = find_irq_by_gsi(gsi);
	if (irq != -1) {
		printk(KERN_INFO "xen_allocate_pirq: returning irq %d for gsi %u\n",
		       irq, gsi);
		goto out;	/* XXX need refcount? */
	}

	/* If we are a PV guest, we don't have GSIs (no ACPI passed). Therefore
	 * we use !xen_initial_domain() to take the identity-mapping path. */
	if (identity_mapped_irq(gsi) || !xen_initial_domain()) {
		irq = gsi;
		irq_to_desc_alloc_node(irq, 0);
		dynamic_irq_init(irq);
	} else
		irq = find_unbound_irq();

	set_irq_chip_and_handler_name(irq, &xen_pirq_chip,
				      handle_fasteoi_irq, name);

	irq_op.irq = gsi;
	irq_op.vector = 0;

	/* Only the privileged domain can do this. For unprivileged domains,
	 * the pcifront driver provides a PCI bus whose backend makes exactly
	 * this call in the privileged domain. */
	if (xen_initial_domain() &&
	    HYPERVISOR_physdev_op(PHYSDEVOP_alloc_irq_vector, &irq_op)) {
		dynamic_irq_cleanup(irq);
		irq = -ENOSPC;
 		goto out;
 	}

	irq_info[irq] = mk_pirq_info(0, gsi, irq_op.vector);
 	irq_info[irq].u.pirq.flags |= shareable ? PIRQ_SHAREABLE : 0;
out:
	spin_unlock(&irq_mapping_update_lock);
	return irq;
}

#ifdef CONFIG_PCI_MSI
int xen_destroy_irq(int irq)
{
	struct irq_desc *desc;
	struct physdev_unmap_pirq unmap_irq;
	struct irq_info *info = info_for_irq(irq);
	int rc = -ENOENT;

	spin_lock(&irq_mapping_update_lock);

	desc = irq_to_desc(irq);
	if (!desc)
		goto out;

 	if (xen_initial_domain()) {
 		unmap_irq.pirq = info->u.pirq.nr;
 		unmap_irq.domid = info->u.pirq.domid;
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_unmap_pirq, &unmap_irq);
 		if (rc) {
 			printk(KERN_WARNING "unmap irq failed %d\n", rc);
 			goto out;
 		}
	}

	irq_info[irq] = mk_unbound_info();

	dynamic_irq_cleanup(irq);

out:
	spin_unlock(&irq_mapping_update_lock);
	return rc;
}

int xen_create_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
		       int type, int pirq_override)
{
	int irq = 0;
	struct physdev_map_pirq map_irq;
	int rc;
	domid_t domid;
	int pos;
	u32 table_offset, bir;

	domid = rc = xen_find_device_domain_owner(dev);
	if (rc < 0)
		domid = DOMID_SELF;
	
	memset(&map_irq, 0, sizeof(map_irq));
	map_irq.domid = domid;
	map_irq.type = MAP_PIRQ_TYPE_MSI;
	map_irq.index = -1;
	map_irq.pirq = -1;
	map_irq.bus = dev->bus->number;
	map_irq.devfn = dev->devfn;

	if (type == PCI_CAP_ID_MSIX) {
		pos = pci_find_capability(dev, PCI_CAP_ID_MSIX);

		pci_read_config_dword(dev, msix_table_offset_reg(pos),
					&table_offset);
		bir = (u8)(table_offset & PCI_MSIX_FLAGS_BIRMASK);

		map_irq.table_base = pci_resource_start(dev, bir);
		map_irq.entry_nr = msidesc->msi_attrib.entry_nr;
	}

	spin_lock(&irq_mapping_update_lock);

	irq = find_unbound_irq();

	if (irq == -1)
		goto out;

	/* Only the privileged domain can do this. For unprivileged PV domains
	 * we have to call pci_frontend_* beforehand so that the privileged
	 * domain can do it for us. The 'pirq_override' is its return value. */

	if (xen_initial_domain())
		rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);
	else {
		rc = pirq_override ? 0 : -ENODEV;
		map_irq.pirq = pirq_override;
	}
	if (rc) {

		printk(KERN_WARNING "xen map irq failed %d\n", rc);

		dynamic_irq_cleanup(irq);

		irq = -1;
		goto out;
	}
	irq_info[irq] = mk_pirq_info(0, map_irq.pirq, map_irq.index);
	if (domid)
		irq_info[irq].u.pirq.domid = domid;

	set_irq_chip_and_handler_name(irq, &xen_pirq_chip,
				      handle_fasteoi_irq,
				      (type == PCI_CAP_ID_MSIX) ? "msi-x":"msi");

out:
	spin_unlock(&irq_mapping_update_lock);
	return irq;
}
#endif

int xen_vector_from_irq(unsigned irq)
{
	return vector_from_irq(irq);
}

int xen_gsi_from_irq(unsigned irq)
{
	return gsi_from_irq(irq);
}
EXPORT_SYMBOL_GPL(xen_gsi_from_irq);

int bind_evtchn_to_irq(unsigned int evtchn)
{
	int irq;

	spin_lock(&irq_mapping_update_lock);

	irq = evtchn_to_irq[evtchn];

	if (irq == -1) {
		irq = find_unbound_irq();

		set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
					      handle_fasteoi_irq, "event");

		evtchn_to_irq[evtchn] = irq;
		irq_info[irq] = mk_evtchn_info(evtchn);
	}

	spin_unlock(&irq_mapping_update_lock);

	return irq;
}
EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);

static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
{
	struct evtchn_bind_ipi bind_ipi;
	int evtchn, irq;

	spin_lock(&irq_mapping_update_lock);

	irq = per_cpu(ipi_to_irq, cpu)[ipi];

	if (irq == -1) {
		irq = find_unbound_irq();
		if (irq < 0)
			goto out;

		set_irq_chip_and_handler_name(irq, &xen_percpu_chip,
					      handle_percpu_irq, "ipi");

		bind_ipi.vcpu = cpu;
		if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_ipi,
						&bind_ipi) != 0)
			BUG();
		evtchn = bind_ipi.port;

		evtchn_to_irq[evtchn] = irq;
		irq_info[irq] = mk_ipi_info(evtchn, ipi);
		per_cpu(ipi_to_irq, cpu)[ipi] = irq;

		bind_evtchn_to_cpu(evtchn, cpu);
	}

 out:
	spin_unlock(&irq_mapping_update_lock);
	return irq;
}

static int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
					  unsigned int remote_port)
{
	struct evtchn_bind_interdomain bind_interdomain;
	int err;

	bind_interdomain.remote_dom  = remote_domain;
	bind_interdomain.remote_port = remote_port;

	err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
					  &bind_interdomain);

	return err ? : bind_evtchn_to_irq(bind_interdomain.local_port);
}

int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
{
	struct evtchn_bind_virq bind_virq;
	int evtchn, irq;

	spin_lock(&irq_mapping_update_lock);

	irq = per_cpu(virq_to_irq, cpu)[virq];

	if (irq == -1) {
		bind_virq.virq = virq;
		bind_virq.vcpu = cpu;
		if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq,
						&bind_virq) != 0)
			BUG();
		evtchn = bind_virq.port;

		irq = find_unbound_irq();

		set_irq_chip_and_handler_name(irq, &xen_percpu_chip,
					      handle_percpu_irq, "virq");

		evtchn_to_irq[evtchn] = irq;
		irq_info[irq] = mk_virq_info(evtchn, virq);

		per_cpu(virq_to_irq, cpu)[virq] = irq;

		bind_evtchn_to_cpu(evtchn, cpu);
	}

	spin_unlock(&irq_mapping_update_lock);

	return irq;
}

static void unbind_from_irq(unsigned int irq)
{
	struct evtchn_close close;
	int evtchn = evtchn_from_irq(irq);

	spin_lock(&irq_mapping_update_lock);

	if (VALID_EVTCHN(evtchn)) {
		close.port = evtchn;
		if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
			BUG();

		switch (type_from_irq(irq)) {
		case IRQT_VIRQ:
			per_cpu(virq_to_irq, cpu_from_evtchn(evtchn))
				[virq_from_irq(irq)] = -1;
			break;
		case IRQT_IPI:
			per_cpu(ipi_to_irq, cpu_from_evtchn(evtchn))
				[ipi_from_irq(irq)] = -1;
			break;
		default:
			break;
		}

		/* Closed ports are implicitly re-bound to VCPU0. */
		bind_evtchn_to_cpu(evtchn, 0);

		evtchn_to_irq[evtchn] = -1;
	}

	if (irq_info[irq].type != IRQT_UNBOUND) {
		irq_info[irq] = mk_unbound_info();

		dynamic_irq_cleanup(irq);
	}

	spin_unlock(&irq_mapping_update_lock);
}

int bind_evtchn_to_irqhandler(unsigned int evtchn,
			      irq_handler_t handler,
			      unsigned long irqflags,
			      const char *devname, void *dev_id)
{
	unsigned int irq;
	int retval;

	irq = bind_evtchn_to_irq(evtchn);
	retval = request_irq(irq, handler, irqflags, devname, dev_id);
	if (retval != 0) {
		unbind_from_irq(irq);
		return retval;
	}

	return irq;
}
EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler);

int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
					  unsigned int remote_port,
					  irq_handler_t handler,
					  unsigned long irqflags,
					  const char *devname,
					  void *dev_id)
{
	int irq, retval;

	irq = bind_interdomain_evtchn_to_irq(remote_domain, remote_port);
	if (irq < 0)
		return irq;

	retval = request_irq(irq, handler, irqflags, devname, dev_id);
	if (retval != 0) {
		unbind_from_irq(irq);
		return retval;
	}

	return irq;
}
EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler);

int xen_alloc_evtchn(domid_t domid, int *port)
{
	struct evtchn_alloc_unbound alloc_unbound;
	int err;

	alloc_unbound.dom = DOMID_SELF;
	alloc_unbound.remote_dom = domid;

	err = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
					  &alloc_unbound);
	if (err == 0)
		*port = alloc_unbound.port;
	return err;
}
EXPORT_SYMBOL_GPL(xen_alloc_evtchn);

int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
			    irq_handler_t handler,
			    unsigned long irqflags, const char *devname, void *dev_id)
{
	unsigned int irq;
	int retval;

	irq = bind_virq_to_irq(virq, cpu);
	retval = request_irq(irq, handler, irqflags, devname, dev_id);
	if (retval != 0) {
		unbind_from_irq(irq);
		return retval;
	}

	return irq;
}
EXPORT_SYMBOL_GPL(bind_virq_to_irqhandler);

int bind_ipi_to_irqhandler(enum ipi_vector ipi,
			   unsigned int cpu,
			   irq_handler_t handler,
			   unsigned long irqflags,
			   const char *devname,
			   void *dev_id)
{
	int irq, retval;

	irq = bind_ipi_to_irq(ipi, cpu);
	if (irq < 0)
		return irq;

	irqflags |= IRQF_NO_SUSPEND;
	retval = request_irq(irq, handler, irqflags, devname, dev_id);
	if (retval != 0) {
		unbind_from_irq(irq);
		return retval;
	}

	return irq;
}

void unbind_from_irqhandler(unsigned int irq, void *dev_id)
{
	free_irq(irq, dev_id);
	unbind_from_irq(irq);
}
EXPORT_SYMBOL_GPL(unbind_from_irqhandler);

void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
{
	int irq = per_cpu(ipi_to_irq, cpu)[vector];
	BUG_ON(irq < 0);
	notify_remote_via_irq(irq);
}

irqreturn_t xen_debug_interrupt(int irq, void *dev_id)
{
	struct shared_info *sh = HYPERVISOR_shared_info;
	int cpu = smp_processor_id();
	int i;
	unsigned long flags;
	static DEFINE_SPINLOCK(debug_lock);

	spin_lock_irqsave(&debug_lock, flags);

	printk("vcpu %d\n  ", cpu);

	for_each_online_cpu(i) {
		struct vcpu_info *v = per_cpu(xen_vcpu, i);
		printk("%d: masked=%d pending=%d event_sel %08lx\n  ", i,
			(get_irq_regs() && i == cpu) ? xen_irqs_disabled(get_irq_regs()) : v->evtchn_upcall_mask,
			v->evtchn_upcall_pending,
			v->evtchn_pending_sel);
	}
	printk("pending:\n   ");
	for(i = ARRAY_SIZE(sh->evtchn_pending)-1; i >= 0; i--)
		printk("%08lx%s", sh->evtchn_pending[i],
			i % 8 == 0 ? "\n   " : " ");
	printk("\nmasks:\n   ");
	for(i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--)
		printk("%08lx%s", sh->evtchn_mask[i],
			i % 8 == 0 ? "\n   " : " ");

	printk("\nunmasked:\n   ");
	for(i = ARRAY_SIZE(sh->evtchn_mask)-1; i >= 0; i--)
		printk("%08lx%s", sh->evtchn_pending[i] & ~sh->evtchn_mask[i],
			i % 8 == 0 ? "\n   " : " ");

	printk("\npending list:\n");
	for(i = 0; i < NR_EVENT_CHANNELS; i++) {
		if (sync_test_bit(i, sh->evtchn_pending)) {
			printk("  %d: event %d -> irq %d\n",
			       cpu_from_evtchn(i), i,
			       evtchn_to_irq[i]);
		}
	}

	spin_unlock_irqrestore(&debug_lock, flags);

	return IRQ_HANDLED;
}

/*
 * Search the CPUs pending events bitmasks.  For each one found, map
 * the event number to an irq, and feed it into do_IRQ() for
 * handling.
 *
 * Xen uses a two-level bitmap to speed searching.  The first level is
 * a bitset of words which contain pending event bits.  The second
 * level is a bitset of pending events themselves.
 */
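/*
 * Worked example (illustrative): if bit 3 of evtchn_pending_sel is
 * set, word 3 of sh->evtchn_pending holds at least one pending event;
 * if bit 5 of that word is also set, the pending port is
 * 3*BITS_PER_LONG + 5 (197 on a 64-bit build), and evtchn_to_irq[197]
 * yields the irq fed into the normal handling below.
 */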
void xen_evtchn_do_upcall(struct pt_regs *regs)
{
	int cpu = get_cpu();
	struct pt_regs *old_regs = set_irq_regs(regs);
	struct shared_info *s = HYPERVISOR_shared_info;
	struct vcpu_info *vcpu_info = __get_cpu_var(xen_vcpu);
	static DEFINE_PER_CPU(unsigned, nesting_count);
 	unsigned count;

	exit_idle();
	irq_enter();

	do {
		unsigned long pending_words;

		vcpu_info->evtchn_upcall_pending = 0;

		if (__get_cpu_var(nesting_count)++)
			goto out;

#ifndef CONFIG_X86 /* No need for a barrier -- XCHG is a barrier on x86. */
		/* Clear master flag /before/ clearing selector flag. */
		wmb();
#endif
		pending_words = xchg(&vcpu_info->evtchn_pending_sel, 0);
		while (pending_words != 0) {
			unsigned long pending_bits;
			int word_idx = __ffs(pending_words);
			pending_words &= ~(1UL << word_idx);

			while ((pending_bits = active_evtchns(cpu, s, word_idx)) != 0) {
				int bit_idx = __ffs(pending_bits);
				int port = (word_idx * BITS_PER_LONG) + bit_idx;
				int irq = evtchn_to_irq[port];
				struct irq_desc *desc;

				mask_evtchn(port);
				clear_evtchn(port);

				if (irq != -1) {
					desc = irq_to_desc(irq);
					if (desc)
						generic_handle_irq_desc(irq, desc);
				}
			}
		}

		BUG_ON(!irqs_disabled());

		count = __get_cpu_var(nesting_count);
		__get_cpu_var(nesting_count) = 0;
	} while(count != 1);

out:
	irq_exit();
	set_irq_regs(old_regs);

	put_cpu();
}

/* Rebind a new event channel to an existing irq. */
void rebind_evtchn_irq(int evtchn, int irq)
{
	struct irq_info *info = info_for_irq(irq);

	/* Make sure the irq is masked, since the new event channel
	   will also be masked. */
	disable_irq(irq);

	spin_lock(&irq_mapping_update_lock);

	/* After resume the irq<->evtchn mappings are all cleared out */
	BUG_ON(evtchn_to_irq[evtchn] != -1);
	/* Expect irq to have been bound before,
	   so there should be a proper type */
	BUG_ON(info->type == IRQT_UNBOUND);

	evtchn_to_irq[evtchn] = irq;
	irq_info[irq] = mk_evtchn_info(evtchn);

	spin_unlock(&irq_mapping_update_lock);

	/* new event channels are always bound to cpu 0 */
	irq_set_affinity(irq, cpumask_of(0));

	/* Unmask the event channel. */
	enable_irq(irq);
}

/* Rebind an evtchn so that it gets delivered to a specific cpu */
static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
{
	struct evtchn_bind_vcpu bind_vcpu;
	int evtchn = evtchn_from_irq(irq);

	if (!VALID_EVTCHN(evtchn))
		return -1;

	/* Send future instances of this interrupt to other vcpu. */
	bind_vcpu.port = evtchn;
	bind_vcpu.vcpu = tcpu;

	/*
	 * If this fails, it usually just indicates that we're dealing with a
	 * virq or IPI channel, which don't actually need to be rebound. Ignore
	 * it, but don't do the xenlinux-level rebind in that case.
	 */
	if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
		bind_evtchn_to_cpu(evtchn, tcpu);

	return 0;
}

static int set_affinity_irq(unsigned irq, const struct cpumask *dest)
{
	unsigned tcpu = cpumask_first(dest);

	return rebind_irq_to_cpu(irq, tcpu);
}

int resend_irq_on_evtchn(unsigned int irq)
{
	int masked, evtchn = evtchn_from_irq(irq);
	struct shared_info *s = HYPERVISOR_shared_info;

	if (!VALID_EVTCHN(evtchn))
		return 1;

	masked = sync_test_and_set_bit(evtchn, s->evtchn_mask);
	sync_set_bit(evtchn, s->evtchn_pending);
	if (!masked)
		unmask_evtchn(evtchn);

	return 1;
}

static void ack_dynirq(unsigned int irq)
{
	int evtchn = evtchn_from_irq(irq);

	move_masked_irq(irq);

	if (VALID_EVTCHN(evtchn))
		unmask_evtchn(evtchn);
}

static int retrigger_irq(unsigned int irq)
{
	int evtchn = evtchn_from_irq(irq);
	struct shared_info *sh = HYPERVISOR_shared_info;
	int ret = 0;

	if (VALID_EVTCHN(evtchn)) {
		int masked;

		masked = sync_test_and_set_bit(evtchn, sh->evtchn_mask);
		sync_set_bit(evtchn, sh->evtchn_pending);
		if (!masked)
			unmask_evtchn(evtchn);
		ret = 1;
	}

	return ret;
}

static void restore_cpu_virqs(unsigned int cpu)
{
	struct evtchn_bind_virq bind_virq;
	int virq, irq, evtchn;

	for (virq = 0; virq < NR_VIRQS; virq++) {
		if ((irq = per_cpu(virq_to_irq, cpu)[virq]) == -1)
			continue;

		BUG_ON(virq_from_irq(irq) != virq);

		/* Get a new binding from Xen. */
		bind_virq.virq = virq;
		bind_virq.vcpu = cpu;
		if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq,
						&bind_virq) != 0)
			BUG();
		evtchn = bind_virq.port;

		/* Record the new mapping. */
		evtchn_to_irq[evtchn] = irq;
		irq_info[irq] = mk_virq_info(evtchn, virq);
		bind_evtchn_to_cpu(evtchn, cpu);

		/* Ready for use. */
		unmask_evtchn(evtchn);
	}
}

static void restore_cpu_ipis(unsigned int cpu)
{
	struct evtchn_bind_ipi bind_ipi;
	int ipi, irq, evtchn;

	for (ipi = 0; ipi < XEN_NR_IPIS; ipi++) {
		if ((irq = per_cpu(ipi_to_irq, cpu)[ipi]) == -1)
			continue;

		BUG_ON(ipi_from_irq(irq) != ipi);

		/* Get a new binding from Xen. */
		bind_ipi.vcpu = cpu;
		if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_ipi,
						&bind_ipi) != 0)
			BUG();
		evtchn = bind_ipi.port;

		/* Record the new mapping. */
		evtchn_to_irq[evtchn] = irq;
		irq_info[irq] = mk_ipi_info(evtchn, ipi);
		bind_evtchn_to_cpu(evtchn, cpu);

		/* Ready for use. */
		unmask_evtchn(evtchn);

	}
}

/* Clear an irq's pending state, in preparation for polling on it */
void xen_clear_irq_pending(int irq)
{
	int evtchn = evtchn_from_irq(irq);

	if (VALID_EVTCHN(evtchn))
		clear_evtchn(evtchn);
}
EXPORT_SYMBOL(xen_clear_irq_pending);
void xen_set_irq_pending(int irq)
{
	int evtchn = evtchn_from_irq(irq);

	if (VALID_EVTCHN(evtchn))
		set_evtchn(evtchn);
}

bool xen_test_irq_pending(int irq)
{
	int evtchn = evtchn_from_irq(irq);
	bool ret = false;

	if (VALID_EVTCHN(evtchn))
		ret = test_evtchn(evtchn);

	return ret;
}

/* Poll waiting for an irq to become pending with timeout.  In the usual case, the
   irq will be disabled so it won't deliver an interrupt. */
void xen_poll_irq_timeout(int irq, u64 timeout)
{
	evtchn_port_t evtchn = evtchn_from_irq(irq);

	if (VALID_EVTCHN(evtchn)) {
		struct sched_poll poll;

		poll.nr_ports = 1;
		poll.timeout = timeout;
		set_xen_guest_handle(poll.ports, &evtchn);

		if (HYPERVISOR_sched_op(SCHEDOP_poll, &poll) != 0)
			BUG();
	}
}
EXPORT_SYMBOL(xen_poll_irq_timeout);
/* Poll waiting for an irq to become pending.  In the usual case, the
   irq will be disabled so it won't deliver an interrupt. */
void xen_poll_irq(int irq)
{
	xen_poll_irq_timeout(irq, 0 /* no timeout */);
}

void xen_irq_resume(void)
{
	unsigned int cpu, irq, evtchn;

	init_evtchn_cpu_bindings();

	/* New event-channel space is not 'live' yet. */
	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
		mask_evtchn(evtchn);

	/* No IRQ <-> event-channel mappings. */
	for (irq = 0; irq < nr_irqs; irq++)
		irq_info[irq].evtchn = 0; /* zap event-channel binding */

	for (evtchn = 0; evtchn < NR_EVENT_CHANNELS; evtchn++)
		evtchn_to_irq[evtchn] = -1;

	for_each_possible_cpu(cpu) {
		restore_cpu_virqs(cpu);
		restore_cpu_ipis(cpu);
	}

	if (pirq_eoi_does_unmask) {
		struct physdev_pirq_eoi_gmfn eoi_gmfn;
		
		eoi_gmfn.gmfn = virt_to_mfn(pirq_needs_eoi_bits);
		if (HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn, &eoi_gmfn) != 0) {
			/* Could recover by reverting to old method...? */
			BUG();
		}
	}
}

static struct irq_chip xen_dynamic_chip __read_mostly = {
	.name		= "xen-dyn",

	.disable	= mask_irq,
	.mask		= mask_irq,
	.unmask		= unmask_irq,

	.eoi		= ack_dynirq,
	.set_affinity	= set_affinity_irq,
	.retrigger	= retrigger_irq,
};

static struct irq_chip xen_pirq_chip __read_mostly = {
	.name		= "xen-pirq",

	.startup	= startup_pirq,
	.shutdown	= shutdown_pirq,

	.enable		= pirq_eoi,
	.unmask		= unmask_irq,

	.disable	= mask_irq,
	.mask		= mask_irq,

	.eoi		= ack_pirq,
	.end		= end_pirq,

	.set_affinity	= set_affinity_irq,

	.retrigger	= retrigger_irq,
};

static struct irq_chip xen_percpu_chip __read_mostly = {
	.name		= "xen-percpu",

	.disable	= mask_irq,
	.mask		= mask_irq,
	.unmask		= unmask_irq,

	.ack		= ack_dynirq,
};

void __init xen_init_IRQ(void)
{
	int i;
	struct physdev_pirq_eoi_gmfn eoi_gmfn;
	int nr_pirqs = NR_IRQS;

	cpu_evtchn_mask_p = kcalloc(nr_cpu_ids, sizeof(struct cpu_evtchn_s),
				    GFP_KERNEL);
	irq_info = kcalloc(nr_irqs, sizeof(*irq_info), GFP_KERNEL);

	evtchn_to_irq = kcalloc(NR_EVENT_CHANNELS, sizeof(*evtchn_to_irq),
				GFP_KERNEL);
	for(i = 0; i < NR_EVENT_CHANNELS; i++)
		evtchn_to_irq[i] = -1;

	i = get_order(sizeof(unsigned long) * BITS_TO_LONGS(nr_pirqs));
	pirq_needs_eoi_bits = (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO, i);

 	eoi_gmfn.gmfn = virt_to_mfn(pirq_needs_eoi_bits);
	if (HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn, &eoi_gmfn) == 0)
		pirq_eoi_does_unmask = true;

	init_evtchn_cpu_bindings();

	/* No event channels are 'live' right now. */
	for (i = 0; i < NR_EVENT_CHANNELS; i++)
		mask_evtchn(i);

	irq_ctx_init(smp_processor_id());

	xen_setup_pirqs();
}

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Re: VM hung after running sometime
  2010-09-25 10:40                                     ` wei song
@ 2010-09-27 18:02                                       ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-27 18:02 UTC (permalink / raw)
  To: wei song; +Cc: MaoXiaoyun, xen devel, keir.fraser

On 09/25/2010 03:40 AM, wei song wrote:
>
> Hi Jeremy,
>
> Do you think this issue is caused by building without CONFIG_X86_F00F_BUG?

F00F_BUG? Why would that be related? F00F itself should be irrelevant in
any Xen situation, since the bug only affects P5 processors, which I
assume you're not using (and which I don't think are supported under Xen).

J

>
> thanks,
> James
>
> On Sat, Sep 25, 2010 at 5:33 PM, MaoXiaoyun <tinnycloud@hotmail.com
> <mailto:tinnycloud@hotmail.com>> wrote:
>
>     Hi Jeremy:
>
>     The test with irqbalance disabled is running. Currently one server
>     has crashed on the NIC.
>     The attached trace.jpg is a screenshot from the serial port, and
>     trace.txt is from /var/log/messages.
>     Do you think it is connected with irqbalance being disabled, or are
>     there other possibilities?
>
>     In addition, I find in /proc/interrupts that all interrupts
>     happen on cpu0 (please refer to the attached interrupts.txt).
>     Could that be a possible cause of the server crash, and is
>     there a way I can configure the system manually to
>     distribute those interrupts evenly?
>
>     Meanwhile, I will start the new test with the patched kernel soon. Thanks.
>
>     > Date: Thu, 23 Sep 2010 16:20:09 -0700
>
>     > From: jeremy@goop.org <mailto:jeremy@goop.org>
>     > To: tinnycloud@hotmail.com <mailto:tinnycloud@hotmail.com>
>     > CC: xen-devel@lists.xensource.com
>     <mailto:xen-devel@lists.xensource.com>; keir.fraser@eu.citrix.com
>     <mailto:keir.fraser@eu.citrix.com>
>
>     > Subject: Re: [Xen-devel] Re: VM hung after running sometime
>     >
>     > On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
>     > > The interrupts file is attached. The server has 24 HVM domains
>     > > running for about 40 hours.
>     > >
>     > > Well, we may upgrade to the new kernel in the future, but
>     > > currently we prefer the fix with the least impact on our present
>     > > server. So it would be really nice of you to offer the set of
>     > > patches; it would be our first choice.
>     >
>     > Try cherry-picking:
>     > 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu
>     interrupts for
>     > IPIs and VIRQs
>     > 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
>     > edge-triggered
>     > 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use
>     handle_edge_irq
>     > for pirq events
>     > f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI
>     properly for
>     > pirq events
>     > 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to
>     using fasteoi
>     > 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq
>     interrupts use
>     > fasteoi
>     > d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
>     > enable/disable_dynirq -> unmask/mask_irq
>     > c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
>     > retrigger_dynirq -> irq
>     > f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
>     > enable/disable unmask/mask
>     > 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi
>     does unmask
>     > cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction,
>     pirq
>     > hypercall does not unmask
>     > 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
>     > PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
>     > 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
>     > d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi
>     for MSI too
>     > f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix
>     pirq_eoi_gmfn resume
>     >
>
>
>     _______________________________________________
>     Xen-devel mailing list
>     Xen-devel@lists.xensource.com <mailto:Xen-devel@lists.xensource.com>
>     http://lists.xensource.com/xen-devel
>
>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-23 23:20                                 ` Jeremy Fitzhardinge
  2010-09-24  4:29                                   ` MaoXiaoyun
  2010-09-25  9:33                                   ` MaoXiaoyun
@ 2010-09-28  5:43                                   ` MaoXiaoyun
  2010-09-28 11:23                                     ` MaoXiaoyun
  2 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-28  5:43 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 2396 bytes --]


Hi Jeremy:

 

      I just found a patch you submitted to Linus: http://lists.xensource.com/archives/html/xen-devel/2010-08/msg01510.html

      If I apply this one to 2.6.31, I suppose it will also fix the handle_level_irq problem, right?
   
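      (For reference, by "the handle_level_irq problem" I mean that in the
      events.c I merged, event channels are now bound with the fasteoi flow
      handler instead of handle_level_irq:

	set_irq_chip_and_handler_name(irq, &xen_dynamic_chip,
				      handle_fasteoi_irq, "event");

      so I assume this patch makes the same change.)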

      thanks.
 
> Date: Thu, 23 Sep 2010 16:20:09 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> > The interrupts file is attached. The server has 24 HVM domains
> > running for about 40 hours.
> >
> > Well, we may upgrade to the new kernel in the future, but currently
> > we prefer the fix with the least impact on our present server.
> > So it would be really nice of you to offer the set of patches;
> > it would be our first choice.
> 
> Try cherry-picking:
> 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
> IPIs and VIRQs
> 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
> edge-triggered
> 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
> for pirq events
> f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
> pirq events
> 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
> 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
> fasteoi
> d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
> enable/disable_dynirq -> unmask/mask_irq
> c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
> retrigger_dynirq -> irq
> f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
> enable/disable unmask/mask
> 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
> cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
> hypercall does not unmask
> 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
> PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
> 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
> d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
> f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
> 
> > Later I will kick off the irqbalance-disabled test on different
> > servers, and will keep you posted.
> 



[-- Attachment #1.2: Type: text/html, Size: 3074 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-28  5:43                                   ` MaoXiaoyun
@ 2010-09-28 11:23                                     ` MaoXiaoyun
  2010-09-28 17:07                                       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-28 11:23 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 2777 bytes --]


Hi Jeremy:

        Is it safe to set irq affinity (including NIC, domain event, etc.) manually?

        Will it cause irqs to be lost?

        Thanks.
 


From: tinnycloud@hotmail.com
To: jeremy@goop.org
CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
Subject: RE: [Xen-devel] Re: VM hung after running sometime
Date: Tue, 28 Sep 2010 13:43:12 +0800




Hi Jeremy:
 
      I just found a patch you presented to Linus: http://lists.xensource.com/archives/html/xen-devel/2010-08/msg01510.html
      If I apply this one to 2.6.31, I suppose it will also fix the handle_level_irq problem, right?
   
      thanks.
 
> Date: Thu, 23 Sep 2010 16:20:09 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/22/2010 05:55 PM, MaoXiaoyun wrote:
> > The interrupts file is attached. The server has 24 HVM domains
> > running for about 40 hours.
> >
> > Well, we may upgrade to the new kernel in the future, but currently
> > we prefer a fix with the least impact on our present servers.
> > So it would be really nice if you could offer the set of patches;
> > that would be our first choice.
> 
> Try cherry-picking:
> 8401e9b96f80f9c0128e7c8fc5a01abfabbfa021 xen: use percpu interrupts for
> IPIs and VIRQs
> 66fd3052fec7e7c21a9d88ba1a03bc062f5fb53d xen: handle events as
> edge-triggered
> 29a2e2a7bd19233c62461b104c69233f15ce99ec xen/apic: use handle_edge_irq
> for pirq events
> f61692642a2a2b83a52dd7e64619ba3bb29998af xen/pirq: do EOI properly for
> pirq events
> 0672fb44a111dfb6386022071725c5b15c9de584 xen/events: change to using fasteoi
> 2789ef00cbe2cdb38deb30ee4085b88befadb1b0 xen: make pirq interrupts use
> fasteoi
> d0936845a856816af2af48ddf019366be68e96ba xen/evtchn: rename
> enable/disable_dynirq -> unmask/mask_irq
> c6a16a778f86699b339585ba5b9197035d77c40f xen/evtchn: rename
> retrigger_dynirq -> irq
> f4526f9a78ffb3d3fc9f81636c5b0357fc1beccd xen/evtchn: make pirq
> enable/disable unmask/mask
> 43d8a5030a502074f3c4aafed4d6095ebd76067c xen/evtchn: pirq_eoi does unmask
> cb23e8d58ca35b6f9e10e1ea5682bd61f2442ebd xen/evtchn: correction, pirq
> hypercall does not unmask
> 2390c371ecd32d9f06e22871636185382bf70ab7 xen/events: use
> PHYSDEVOP_pirq_eoi_gmfn to get pirq need-EOI info
> 158d6550716687486000a828c601706b55322ad0 xen/pirq: use eoi as enable
> d2ea486300ca6e207ba178a425fbd023b8621bb1 xen/pirq: use fasteoi for MSI too
> f0d4a0552f03b52027fb2c7958a1cbbe210cf418 xen/apic: fix pirq_eoi_gmfn resume
> 
> > Later I will kick off the irqbalance-disabled test on different
> > servers, and will keep you posted.
> 



[-- Attachment #1.2: Type: text/html, Size: 3839 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Re: VM hung after running sometime
  2010-09-28 11:23                                     ` MaoXiaoyun
@ 2010-09-28 17:07                                       ` Jeremy Fitzhardinge
  2010-09-29  6:01                                         ` MaoXiaoyun
  0 siblings, 1 reply; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-28 17:07 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, keir.fraser

 On 09/28/2010 04:23 AM, MaoXiaoyun wrote:
>
> Is it safe to set irq affinity (including NIC, domain event, etc.) manually?
> Will it cause irqs to be lost?

There's only a very small chance of a lost interrupt, especially if the
device is mostly idle at the time. The event can only be lost if:

   1. it is handling a device interrupt
   2. you migrate it to another cpu
   3. another interrupt comes in before the first one has finished
      processing

J

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Re: VM hung after running sometime
  2010-09-28 17:07                                       ` Jeremy Fitzhardinge
@ 2010-09-29  6:01                                         ` MaoXiaoyun
  2010-09-29 16:12                                           ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-09-29  6:01 UTC (permalink / raw)
  To: jeremy; +Cc: xen devel, keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 1169 bytes --]


Well, I just went through the source of irqbalance; it shows that it balances
irqs by updating /proc/irq/$irq/smp_affinity. So, in my understanding, setting
irq affinity is almost equivalent to irq migration.

I later found the NIC interrupt is not modified in dom0, so it is safe to set
its affinity, but the interrupt of a xen event uses handle_level_irq, so
setting its affinity will be subject to irq loss.

Am I right?
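
(For concreteness, a minimal C sketch of the smp_affinity update that
irqbalance performs, equivalent to a manual "echo" into /proc; the irq
number 30 and the mask value "2" (CPU1 only) are made-up examples, and
this assumes root privileges and a kernel exposing
/proc/irq/<irq>/smp_affinity:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical irq number; the mask is a hex CPU bitmask. */
        const char *path = "/proc/irq/30/smp_affinity";
        const char *mask = "2\n";                /* 0x2 = CPU1 only */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* On the next interrupt the kernel migrates the irq to a
         * CPU allowed by this mask. */
        if (write(fd, mask, strlen(mask)) < 0)
            perror("write");
        close(fd);
        return 0;
    }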

 
> Date: Tue, 28 Sep 2010 10:07:28 -0700
> From: jeremy@goop.org
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; keir.fraser@eu.citrix.com
> Subject: Re: [Xen-devel] Re: VM hung after running sometime
> 
> On 09/28/2010 04:23 AM, MaoXiaoyun wrote:
> >
> > Is it safe to set irq affinity (including NIC, domain event, etc.) manually?
> > Will it cause irqs to be lost?
> 
> There's only a very small chance of a lost interrupt, especially if the
> device is mostly idle at the time. The event can only be lost if:
> 
> 1. it is handling a device interrupt
> 2. you migrate it to another cpu
> 3. another interrupt comes in before the first one has finished
> processing
> 
> J
> 

[-- Attachment #1.2: Type: text/html, Size: 1602 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Re: VM hung after running sometime
  2010-09-29  6:01                                         ` MaoXiaoyun
@ 2010-09-29 16:12                                           ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-29 16:12 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, keir.fraser

 On 09/28/2010 11:01 PM, MaoXiaoyun wrote:
> Well, I just go throught the source of irqbalance, it shows that it
> balances the irq through
> updating /proc/irq/$irq/smp_affinity. That, in my understanding, set
> irq affinity is almost
> equal to irq migration.

Correct.

>
> I later find the NIC interrupt is not modified in dom0, so it is safe
> to set its affinity,
> but interrupt of xen event use handle_level_irq, set its affinity will
> subject to irq
> lost.
>
> Am I right?

I can't parse that sentence so I'm not sure. But if you have
successfully migrated the interrupt/evtchn to a different vcpu, then
there's no subsequent risk of lost events. The event loss can only
happen in the circumstance I mentioned earlier: when you migrate while
an event is being handled, and a second event occurs on the new cpu
before the first one has finished. So you need both bad timing and a
fairly high event delivery rate to trigger the problem.
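
(To make that window concrete, here is a self-contained C model -- not the
actual kernel code; the names and flow are illustrative -- of a level-type
flow servicing an edge-like Xen event channel. The early return taken on
the new CPU is the lost event:)

    #include <stdbool.h>
    #include <stdio.h>

    /* One interrupt source. The pending flag models an event-channel
     * bit: it is edge-like, so clearing it discards the event unless
     * the handler actually runs. */
    struct model_irq {
        bool pending;      /* event-channel pending bit            */
        bool in_progress;  /* handler still running on the old CPU */
    };

    static void handle_level_like(struct model_irq *irq)
    {
        irq->pending = false;     /* mask + "ack" clears the only record */

        if (irq->in_progress)     /* second event on the new CPU while   */
            return;               /* the first is mid-flight: dropped    */

        irq->in_progress = true;
        printf("device handler runs\n");
        irq->in_progress = false;
        /* Unmask: a genuinely level-triggered line would still be
         * asserted and re-fire here, but the edge-like pending bit
         * is already clear, so nothing re-fires. */
    }

    int main(void)
    {
        /* First event is being handled on the old CPU... */
        struct model_irq irq = { .pending = false, .in_progress = true };

        irq.pending = true;        /* ...second event arrives after migration */
        handle_level_like(&irq);   /* early return: the event is lost         */
        return 0;
    }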

J

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Domain 0 stop response on frequently reboot VMS
  2010-09-10 11:01   ` VM hung after running sometime MaoXiaoyun
  2010-09-19 10:37     ` MaoXiaoyun
@ 2010-10-15 12:43     ` MaoXiaoyun
  2010-10-15 12:57       ` Keir Fraser
  1 sibling, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-10-15 12:43 UTC (permalink / raw)
  To: xen devel; +Cc: keir.fraser


[-- Attachment #1.1: Type: text/plain, Size: 818 bytes --]



 Hi Keir:
 
         First, I'd like to express my appreciation for the help you offered before.
         Well, recently we have run into a rather nasty domain 0 no-response problem.
 
         We are still running a reboot test of 12 HVMs, almost continuously and concurrently, on one physical server.
         A few hours later, the server looks dead. We can only ping the server and get the right response;
Xen itself still works, since we can get debug info from the serial port. Attached is the full debug output.
After decoding the domain 0 CPU stack, I find the CPUs still do work for domain 0, since the stack
info changed every time I dumped it.
 
        Could you help take a look at the attachment to see whether there are some hints for debugging this
problem? Thanks in advance.

[-- Attachment #1.2: Type: text/html, Size: 1363 bytes --]

[-- Attachment #2: dom0.txt --]
[-- Type: text/plain, Size: 9487 bytes --]

(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e033:[<ffffffff8100922a>]
(XEN) RFLAGS: 0000000000000246   EM: 1   CONTEXT: pv guest
(XEN) rax: 0000000000040000   rbx: ffff8801cd18b968   rcx: ffffffff8100922a
(XEN) rdx: 0000000100000000   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) rbp: ffff8801c8843bd0   rsp: ffff8801c8843bb8   r8:  ffff8801cd18b958
(XEN) r9:  ffff8801b1cb8fb0   r10: ffffffff8100f2a2   r11: 0000000000000246
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000001
(XEN) r15: 0000000000000001   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 00000001f4155000   cr2: 0000000000486074
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8801c8843bb8:
(XEN)    0000000000000000 0000000000000000 ffffffff8100eb79 ffff8801c8843c38
(XEN)    ffffffff8100f2a2 0000000000000000 ffffffff8100f2a2 ffff8801b1cb8fb0
(XEN)    ffff8801cd18b958 0000000000000001 0000000000000001 0000000100000000
(XEN)    0000000000000000 ffff8801cd18b970 ffffffff8100f28f ffffffff813fce0b
(XEN)    ffff8801c8843c78 ffffffff81041f9c ffffffff8100eb79 ffff8801cd18b800
(XEN)    ffff8801b1cb8fb0 ffff88020bd7f928 0000000000000014 0000000000000d54
(XEN)    ffff8801c8843c88 ffffffff8126b0e5 ffff8801c8843ca8 ffffffff8126c7d0
(XEN)    ffff8801b1cb8fb0 ffff8801cd18b9b8 ffff8801c8843cc8 ffffffff8126c976
(XEN)    ffff8801c8843cd8 ffff880219426bf0 ffff8801c8843cf8 ffffffff8126c331
(XEN)    ffff8801cd18bc48 ffff8801cd18bc48 0000000000000014 ffff88020bd7f800
(XEN)    ffff8801c8843e78 ffffffff8126bbc0 ffff88021017a2b0 ffff8801d480faf0
(XEN)    00000000021bc200 00000d54c8843d38 ffff8801c8843d38 ffffffff81107298
(XEN)    ffff8801c8843d98 ffffffff811058c2 0000000000000001 ffffffff8114fc2a
(XEN)    0000000000000000 0000000000000001 00000000021bc200 ffff8801d480fc08
(XEN)    00000000021bc200 ffff8801b20654b0 ffff8801b2065400 000000000064f490
(XEN)    ffff8801c8843da8 ffffffff810bf088 ffff8801c8843e78 ffffffff810c07e9
(XEN)    ffff8801c7145900 fffffffffffffdef 0000000000000000 ffff8801b2065480
(XEN)    0000000000000246 ffffffff81014f9b 00007fff71ffc4f8 0000000000000001
(XEN)    0000000000000202 000000000000001b ffffffff8100eb79 0000000000000001
(XEN)    ffffffff8100f2a2 0000000000000202 0000000000000000 0000000000080000
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff813fcf5d>]
(XEN) RFLAGS: 0000000000000297   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000302   rbx: ffff8801cf4a6300   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801cf4a6300
(XEN) rbp: ffff8801adc59e78   rsp: ffff8801adc59e78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 00000001905bd000   cr2: 00007f80925220a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8801adc59e78:
(XEN)    ffff8801adc59ea8 ffffffff810ff342 0000000000000001 ffff8801cf4a6300
(XEN)    ffff88020a599010 0000000000000000 ffff8801adc59f28 ffffffff810ff869
(XEN)    ffff88021187c6f0 ffff8801d4927300 ffff8801adc59f48 0000000000000000
(XEN)    ffff8801adc59f08 ffffffff810f28e9 0000000000000000 ffff8801ce8cdb40
(XEN)    ffffffff8104917f ffff8801cf4a6300 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8801adc59f78 ffffffff810ff918
(XEN)    ffff8801adc59f78 ffffffff810f3867 0000000000000000 00000035f5e1bbc0
(XEN)    0000000000000000 00007fffda8ae9f0 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000202 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 000000000042c0d4
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000035f60cc557 000000000000e033 0000000000000202 00007fffda8ae7f8
(XEN)    000000000000e02b 02f46850d8f7c01b 5e00602402f3372a 70ff0840458b3018
(XEN)    432ce80ff1c1810c 06e0011b903d3141 0020832312417f67 91768ab82ab4c25d
(XEN)    9038ec834b60cf1a f04d13c2008b0403 3300000440c06851 15ff00f052b053db
(XEN)    0f08c33b6f5416c0 743d8b570aa02b8c 29246864b100e083 e85d890086906f54
(XEN)    018c8c0fc085d7ff 8437c068454101f0 dc0381d7ffece801 5e040fe03c931730
(XEN)    0a6ac033d008808b 40abf3bc7d8d5940 bc45c7c45d408992 8bc0554089025007
(XEN)    39004301e0458900 00c0950f046aec5d 00003a61e8d84589 1aa097416ac0596e
(XEN)    37184b0d203c600c 4589211128e10000 05f050bc458d50dc 89fffa808305f237
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff813fcf5d>]
(XEN) RFLAGS: 0000000000000293   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000402   rbx: ffff8801d4d1f000   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801d4d1f000
(XEN) rbp: ffff8801b1033e78   rsp: ffff8801b1033e78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 00000001eed7e000   cr2: 00000035f601307b
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8801b1033e78:
(XEN)    ffff8801b1033ea8 ffffffff810ff342 0000000000000001 ffff8801d4d1f000
(XEN)    ffff8801c15cc6e0 0000000000000000 ffff8801b1033f28 ffffffff810ff869
(XEN)    ffff88021187c6f0 ffff8801ad9bd240 ffff8801b1033f48 0000000000000000
(XEN)    ffff8801b1033f08 ffffffff810f28e9 0000000000000000 ffff8801cc88db40
(XEN)    ffffffff8104917f ffff8801d4d1f000 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8801b1033f78 ffffffff810ff918
(XEN)    ffff8801b1033f78 ffffffff810f3867 0000000000000000 00000035f5e1bbc0
(XEN)    0000000000000000 00007fff06934280 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000202 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 000000000042c0d4
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000035f60cc557 000000000000e033 0000000000000202 00007fff06934088
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff813fcf5d>]
(XEN) RFLAGS: 0000000000000293   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000502   rbx: ffff8801cd0ab000   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801cd0ab000
(XEN) rbp: ffff8801ad073e78   rsp: ffff8801ad073e78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000206
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 0000000188478000   cr2: 00000035f609a3a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8801ad073e78:
(XEN)    ffff8801ad073ea8 ffffffff810ff342 0000000000000002 ffff8801cd0ab000
(XEN)    ffff88020fcceba0 0000000000000000 ffff8801ad073f28 ffffffff810ff869
(XEN)    ffff88021187c6f0 ffff8801d4802f00 ffff8801ad073f48 0000000000000000
(XEN)    ffff8801ad073f08 ffffffff810f28e9 0000000000000000 ffff88020a6396d0
(XEN)    ffffffff8104917f ffff8801cd0ab000 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8801ad073f78 ffffffff810ff918
(XEN)    ffff8801ad073f78 ffffffff810f3867 0000000000000000 00000035f5e1bbc0
(XEN)    0000000000000000 00007fffc07cad90 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000206 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 000000000042c0d4
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000035f60cc557 000000000000e033 0000000000000206 00007fffc07cab98
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000190538067 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

[-- Attachment #3: hang.txt --]
[-- Type: text/plain, Size: 229099 bytes --]

(XEN) '*' pressed -> firing all diagnostic keyhandlers
(XEN) [d: dump registers]
(XEN) 'd' pressed -> dumping registers
(XEN) 
(XEN) *** Dumping CPU0 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000080   rcx: 0000000000000092
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: 0000000000000000
(XEN) rbp: ffff82c480357f28   rsp: ffff82c480357da8   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: 00000000fffffffc   r11: ffff82c4801318d0
(XEN) r12: ffff82c480369408   r13: ffff82c480357f28   r14: ffff82c480357f28
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 0000000291d56000   cr2: ffff88000370bbf0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82c480357da8:
(XEN)    0000000000000080 ffff82c480110318 ffff82c480250080 ffff82c480221240
(XEN)    0000000000000065 ffff82c480369408 ffff82c480357f28 ffff82c480110098
(XEN)    000000000000fff1 ffff82c480221320 000000000000002a ffff82c480357f28
(XEN)    0000000000000292 ffff82c480110130 ffff82c48021e860 ffff82c4801310b0
(XEN)    ffff82c48021e8d8 ffff82c480132535 ffff82c4801318be 2a00000000000296
(XEN)    0000000000000001 ffff82c480373620 ffff82c48021e860 ffff82c480357f28
(XEN)    ffff82c480373648 ffff82c480131c86 ffff83023ff80280 0000000000000000
(XEN)    ffff83023ff80280 ffff82c480357f28 ffff82c480357f28 ffff82c4801552f8
(XEN)    ffff83023ff802b4 ffff82c4802509c0 0000000000000004 0000000400000000
(XEN)    0000000000000000 ffff82c480357f28 ffff82c48036e980 ffff82c480372980
(XEN)    0000000000000005 ffff8800aa423c00 ffff8800b14dfe78 0000000000000000
(XEN)    0000000000000001 0000000000000005 0000000000000005 ffff82c48014db00
(XEN)    0000000000000005 0000000000000005 0000000000000001 0000000000000000
(XEN)    ffff8800b14dfe78 ffff8800aa423c00 0000000000000206 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000605 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff8800aa423c00 000000f100000000
(XEN)    ffffffff813fcf63 000000000000e033 0000000000000297 ffff8800b14dfe78
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 ffff8300bf55a000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480110318>] dump_registers+0x58/0x120
(XEN)    [<ffff82c480110098>] run_all_keyhandlers+0x78/0xa0
(XEN)    [<ffff82c480110130>] handle_keypress+0x70/0xd0
(XEN)    [<ffff82c4801310b0>] serial_rx+0x0/0xa0
(XEN)    [<ffff82c480132535>] serial_rx_interrupt+0x65/0xd0
(XEN)    [<ffff82c4801318be>] ns16550_tx_empty+0xe/0x20
(XEN)    [<ffff82c480131c86>] ns16550_interrupt+0x56/0x90
(XEN)    [<ffff82c4801552f8>] do_IRQ+0x4f8/0x670
(XEN)    [<ffff82c48014db00>] common_interrupt+0x20/0x30
(XEN)    
(XEN) *** Dumping CPU0 guest state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e033:[<ffffffff813fcf63>]
(XEN) RFLAGS: 0000000000000297   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000605   rbx: ffff8800aa423c00   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8800aa423c00
(XEN) rbp: ffff8800b14dfe78   rsp: ffff8800b14dfe78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000206
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 0000000291d56000   cr2: 00007f8e409cd648
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8800b14dfe78:
(XEN)    ffff8800b14dfea8 ffffffff810ff342 0000000000000004 ffff8800aa423c00
(XEN)    ffff880210c683d0 0000000000000000 ffff8800b14dff28 ffffffff810ff869
(XEN)    ffff8800b14dfec8 ffffffff8100f0de ffff8800b14dfed8 ffffffff8100f1ad
(XEN)    ffff8800b14dfee8 ffffffff81070fdb ffff8800b14dff08 ffffffff810716a1
(XEN)    ffff8800b14dff48 ffff8800aa423c00 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8800b14dff78 ffffffff810ff918
(XEN)    ffff8800b14dff78 ffffffff81058089 000000004cb7c317 0000003987e1bbc0
(XEN)    0000000000000000 00007fff5021e340 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000206 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 ffffffffff6000fc
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000039880cc557 000000000000e033 0000000000000206 00007fff5021e148
(XEN)    000000000000e02b
(XEN) 
(XEN) *** Dumping CPU1 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000080   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023ff3ff28   rsi: 0000000000000001   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023ff3fee8   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000000
(XEN) r12: 0000000000000001   r13: 0000000000000000   r14: ffff8801d8171968
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000284896000   cr2: ffff88000370b288
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff3fee8:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff8300bf558000 ffff8801d8171958 ffff880086555bb8 ffff82c4801430ba
(XEN)    0000000000000000 ffff8801d8171968 0000000000000000 0000000000000001
(XEN)    ffff880086555bb8 ffff8801d8171958 0000000000000000 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff8800924296d0 000000fb00000000
(XEN)    ffffffff81048eb7 000000000000e033 0000000000000246 ffff880086555bb0
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff8300bf558000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    
(XEN) *** Dumping CPU1 guest state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e033:[<ffffffff81048eb7>]
(XEN) RFLAGS: 0000000000000246   EM: 1   CONTEXT: pv guest
(XEN) rax: 0000000000000000   rbx: ffff8801d8171958   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8800924296d0
(XEN) rbp: ffff880086555bb8   rsp: ffff880086555bb0   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000000
(XEN) r12: 0000000000000001   r13: 0000000000000000   r14: ffff8801d8171968
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000284896000   cr2: 00007f8e3fbe80a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880086555bb0:
(XEN)    ffffffff81049191 ffff880086555bf8 ffffffff811015b7 0000000000000000
(XEN)    ffff8800924296d0 ffffffff8104917f 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff880086555c38 ffffffff8103f632 0000000100000000
(XEN)    ffff8801d8171968 0000000000000000 0000000000000001 0000000000000001
(XEN)    0000000000000001 ffff880086555c78 ffffffff81041f91 ffffffff8100eb79
(XEN)    ffff8801d8171800 ffff8801b05dde20 ffff8801d1a5c128 0000000000000008
(XEN)    00000000000009a3 ffff880086555c88 ffffffff8126b0e5 ffff880086555ca8
(XEN)    ffffffff8126c7d0 ffff8801b05dde20 ffff8801d81719b8 ffff880086555cc8
(XEN)    ffffffff8126c976 ffff880086555cd8 ffff880219426b50 ffff880086555cf8
(XEN)    ffffffff8126c331 ffff8801d8171c48 ffff8801d8171c48 0000000000000008
(XEN)    ffff8801d1a5c000 ffff880086555e78 ffffffff8126bbc0 ffff8800a99313b0
(XEN)    ffff88020b7c83d0 ffff880086555d28 000009a38104162b ffff880086555d38
(XEN)    ffffffff81041643 ffff880086555d48 ffffffff813fb694 ffff880086555d78
(XEN)    ffffffff81100603 0000000000000000 0000000000000008 0000000000000001
(XEN)    0000000000000006 ffff880086555f08 ffffffff81101385 ffff880086555f38
(XEN)    000000000063eb48 000000000063eac8 000000000063ea48 0000000000000024
(XEN)    0000000000000000 0000000000000000 0000000000000004 0000000000000000
(XEN)    0000000000000000 ffff8800a9931300 ffff88020b61faf0 ffff880086555e18
(XEN)    ffffffff810e9bd0 ffff880086555ec8 0000000000000008 ffff880086555e48
(XEN)    ffffffff81121f8b ffff880086555f18 ffff88020c95a3c0 0000000000000001
(XEN) 
(XEN) *** Dumping CPU2 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    2
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000100   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023ff37f28   rsi: 0000000000000001   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023ff37ee8   r8:  0000000000000001
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000002934ba000   cr2: ffff88000370bce0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff37ee8:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff8300bf2f6000 ffff8801d6819480 ffff8800acdf9e78 ffff82c4801430ba
(XEN)    0000000000000005 0000000000000005 0000000000000001 0000000000000000
(XEN)    ffff8800acdf9e78 ffff8801d6819480 0000000000000202 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000705 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff8801d6819480 000000fb00000000
(XEN)    ffffffff813fcf5d 000000000000e033 0000000000000293 ffff8800acdf9e78
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000002 ffff8300bf2f6000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    
(XEN) *** Dumping CPU2 guest state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    2
(XEN) RIP:    e033:[<ffffffff813fcf5d>]
(XEN) RFLAGS: 0000000000000293   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000705   rbx: ffff8801d6819480   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801d6819480
(XEN) rbp: ffff8800acdf9e78   rsp: ffff8800acdf9e78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000002934ba000   cr2: 00007f8e409cd648
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8800acdf9e78:
(XEN)    ffff8800acdf9ea8 ffffffff810ff342 0000000000000003 ffff8801d6819480
(XEN)    ffff8801d0df2580 0000000000000000 ffff8800acdf9f28 ffffffff810ff869
(XEN)    ffff8800acdf9ec8 ffffffff8100f0de ffff8800acdf9ed8 ffffffff8100f1ad
(XEN)    ffff8800acdf9ee8 ffffffff81070fdb ffff8800acdf9f08 ffffffff810716a1
(XEN)    ffff8800acdf9f48 ffff8801d6819480 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8800acdf9f78 ffffffff810ff918
(XEN)    ffff8800acdf9f78 ffffffff81058089 000000004cb7c317 0000003987e1bbc0
(XEN)    0000000000000000 00007fff63521950 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000202 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 ffffffffff6000fc
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000039880cc557 000000000000e033 0000000000000202 00007fff63521758
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) 
(XEN) *** Dumping CPU3 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000180   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023ff27f28   rsi: 0000000000000001   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023ff27ee8   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000002ffeef000   cr2: 00000000f6fd53fe
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff27ee8:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff8300bf2f4000 ffff8801d83a7900 ffff8800ab481e78 ffff82c4801430ba
(XEN)    0000000000000005 0000000000000005 0000000000000001 0000000000000000
(XEN)    ffff8800ab481e78 ffff8801d83a7900 0000000000000202 0000000000000000
(XEN)    0000000000000001 0000000000000001 0000000000000805 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff8801d83a7900 000000fb00000000
(XEN)    ffffffff813fcf5b 000000000000e033 0000000000000293 ffff8800ab481e78
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000003 ffff8300bf2f4000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    
(XEN) *** Dumping CPU3 guest state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    3
(XEN) RIP:    e033:[<ffffffff813fcf5b>]
(XEN) RFLAGS: 0000000000000293   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000805   rbx: ffff8801d83a7900   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801d83a7900
(XEN) rbp: ffff8800ab481e78   rsp: ffff8800ab481e78   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000002ffeef000   cr2: 000000398809a3a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8800ab481e78:
(XEN)    ffff8800ab481ea8 ffffffff810ff342 ffff8800bbc17b00 ffff8801d83a7900
(XEN)    ffff8801d0d980c0 0000000000000000 ffff8800ab481f28 ffffffff810ff869
(XEN)    ffff8800ab481ec8 0000000000000000 0000000000000000 ffff8800bbc17b00
(XEN)    ffff8800ab481f78 ffffffff81123796 0000000000000001 0000000000000001
(XEN)    00000001ab481f48 ffff8801d83a7900 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8800ab481f78 ffffffff810ff918
(XEN)    0000000000000000 0000000200000001 000000004cb7c317 0000003987e1bbc0
(XEN)    0000000000000000 00007fffdff8e480 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000202 0000000000000000
(XEN)    0000000000000001 0000000000000001 0000000000000010 000000000042c127
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000039880cc557 000000000000e033 0000000000000202 00007fffdff8e288
(XEN)    000000000000e02b ffff8801d18e8000 ffffffff81650840 0000000000000000
(XEN)    0000000000000003 00007ffffffff000 ffffffff8105eb50 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000057ac6e9d 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) 
(XEN) *** Dumping CPU4 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    4
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000200   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023ff17f28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023ff17d88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480168160
(XEN) r12: ffff83023fe83a20   r13: 000000000044cd34   r14: 00002caf1aab64f6
(XEN) r15: 000000000055285c   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000c49aa000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff17d88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023ff17f28 ffff83023fe83960 ffff82c4801430ba
(XEN)    000000000055285c 00002caf1aab64f6 000000000044cd34 ffff83023fe83a20
(XEN)    ffff83023fe83960 ffff83023ff17f28 ffff82c480168160 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83023fe83a20 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023ff17e78
(XEN)    0000000000000000 ffff83023fe83a20 ffff82c480189e42 00002f288c5c3350
(XEN)    ffff82c4801441b5 ffff83057d630410 ffff82c48011e474 0000000000000004
(XEN)    0000000000000004 0000000000000000 0000000000000000 000000008036e980
(XEN)    000008ca0001401b ffff82c480258080 ffff83023ff17f28 ffff82c480250b00
(XEN)    ffff83023ff17e28 ffff8300bcdea000 00002caf1aab64f6 ffff82c480258080
(XEN)    ffff82c480149ad6 0000000000000000 0000000000008000 ffff8300bf2fa000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aed50 00000000f7727ee0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 000000002b3483d7 00000000f7727ee0
(XEN)    000000000000000f 00000000f7727ec0 000000008610bce0 000000f000000000
(XEN)    00000000f76a9ca2 0000000000000000 0000000000000206 00000000f78aed34
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000004 ffff8300bf2fa000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480168160>] send_IPI_mask_phys+0x0/0xf0
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU4 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU5 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    5
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000280   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023ff07f28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023ff07d88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480116df0
(XEN) r12: ffff83063fde9230   r13: 000000000041fd45   r14: 00002caf1b97f56d
(XEN) r15: 0000000000640b6c   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000c13a2000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff07d88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023ff07f28 ffff83063fde9170 ffff82c4801430ba
(XEN)    0000000000640b6c 00002caf1b97f56d 000000000041fd45 ffff83063fde9230
(XEN)    ffff83063fde9170 ffff83023ff07f28 ffff82c480116df0 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83063fde9230 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023ff07e78
(XEN)    0000000000000000 ffff83063fde9230 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c480373870 ffff82c48011e474 0000000000000005
(XEN)    0000000000000005 0000000000000000 0000000000000000 000000008036e980
(XEN)    00000e94000abf08 ffff82c48025a080 ffff83023ff07f28 ffff82c480250b00
(XEN)    ffff83023ff07e28 ffff830086e30000 00002caf1b97f56d ffff82c48025a080
(XEN)    ffff82c480149ad6 0000000000000000 000000000000a000 ffff8300bf2f8000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aacb4 000000000000001f 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000305 0000000000000000
(XEN)    00000000000003ce 0000000080a5ce18 0000000000000000 000000f000000000
(XEN)    0000000080a5ce21 0000000000000000 0000000000000202 00000000f78aaca4
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000005 ffff8300bf2f8000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480116df0>] csched_tick_suspend+0x0/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU5 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU6 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    6
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000300   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fefff28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023feffd88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480116df0
(XEN) r12: ffff83063fde9a20   r13: 0000000000682345   r14: 00002ce523846da7
(XEN) r15: 000000000072f06c   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 0000000000ebfd18
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023feffd88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff83063fde9960 ffff83023fefff28 ffff83063fde9960 ffff82c4801430ba
(XEN)    000000000072f06c 00002ce523846da7 0000000000682345 ffff83063fde9a20
(XEN)    ffff83063fde9960 ffff83023fefff28 ffff82c480116df0 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83063fde9a20 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023feffe78
(XEN)    0000000000000000 ffff83063fde9a20 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c4803738a0 ffff82c48011e474 0000000000000006
(XEN)    0000000000000006 0000000000000000 0000000000000000 000000008036e980
(XEN)    000014bc00049903 ffff82c48025c080 ffff83023fefff28 ffff82c480250b00
(XEN)    ffff83023feffe28 ffff83009048c000 00002ce523846da7 ffff82c48025c080
(XEN)    ffff82c480149ad6 0000000000000000 000000000000c000 ffff8300bdc42000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aec64 0000000000000001 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000004
(XEN)    0000000000001f68 0000000000000000 00000000861995f8 000000f000000000
(XEN)    0000000080a5cdb6 0000000000000000 0000000000000246 00000000f78aec5c
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000006 ffff8300bdc42000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480116df0>] csched_tick_suspend+0x0/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU6 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU7 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    7
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000380   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023feeff28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023feefd88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480168160
(XEN) r12: ffff83063fdeb230   r13: 00000000007b6b1a   r14: 00002caf1d772ade
(XEN) r15: 000000000081d4b3   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000d3076000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023feefd88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023feeff28 ffff83063fdeb170 ffff82c4801430ba
(XEN)    000000000081d4b3 00002caf1d772ade 00000000007b6b1a ffff83063fdeb230
(XEN)    ffff83063fdeb170 ffff83023feeff28 ffff82c480168160 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83063fdeb230 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023feefe78
(XEN)    0000000000000000 ffff83063fdeb230 ffff82c480189e42 00002f28c7f6fd50
(XEN)    ffff82c4801441b5 ffff8302e2190410 ffff82c48011e474 0000000000000007
(XEN)    0000000000000007 0000000000000000 0000000000000000 000000008036e980
(XEN)    0000090a000507bf ffff82c48025e080 ffff83023feeff28 ffff82c480250b00
(XEN)    ffff83023feefe28 ffff830087fce000 00002caf1d772ade ffff82c48025e080
(XEN)    ffff82c480149ad6 0000000000000000 000000000000e000 ffff8300bdc40000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f63a2ae8 0000000000000780 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 00000000f7ada000 00000000e15a3018
(XEN)    0000000000000033 00000000f63a2bc8 00000000f63a2b24 000000f000000000
(XEN)    00000000bff64933 0000000000000000 0000000000010206 00000000f63a2ad0
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000007 ffff8300bdc40000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480168160>] send_IPI_mask_phys+0x0/0xf0
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU7 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU8 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    8
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000400   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fee7f28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023fee7d88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000002   r11: ffff830451880758
(XEN) r12: ffff83063fdeba20   r13: 000000000085d2f4   r14: 00002caf1aabd8eb
(XEN) r15: 000000000090b6fb   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 0000000000e69f5c
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fee7d88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023fee7f28 ffff83063fdeb960 ffff82c4801430ba
(XEN)    000000000090b6fb 00002caf1aabd8eb 000000000085d2f4 ffff83063fdeba20
(XEN)    ffff83063fdeb960 ffff83023fee7f28 ffff830451880758 0000000000000002
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83063fdeba20 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023fee7e78
(XEN)    0000000000000000 ffff83063fdeba20 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c480373900 ffff82c48011e474 0000000000000008
(XEN)    0000000000000008 0000000000000000 0000000000000000 000000008036e980
(XEN)    000007090003d8b8 ffff82c480260080 ffff83023fee7f28 ffff82c480250b00
(XEN)    ffff83023fee7e28 ffff8300bcd58000 00002caf1aabd8eb ffff82c480260080
(XEN)    ffff82c480149ad6 0000000000000000 0000000000010000 ffff8300bf59e000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f6bb250c 00000000e1394018 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 00000000e1502e60
(XEN)    0000000000000000 00000000f7b27000 00000000f2400000 000000f000000000
(XEN)    00000000bff61019 0000000000000000 0000000000010202 00000000f6bb2490
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000008 ffff8300bf59e000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU8 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU9 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    9
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000480   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fed7f28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023fed7d88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff8302cd2a0758
(XEN) r12: ffff83063fdea230   r13: 0000000000995017   r14: 00002caf2990f7ec
(XEN) r15: 00000000009f531a   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000d26e1000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fed7d88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023fed7f28 ffff83063fdea170 ffff82c4801430ba
(XEN)    00000000009f531a 00002caf2990f7ec 0000000000995017 ffff83063fdea230
(XEN)    ffff83063fdea170 ffff83023fed7f28 ffff8302cd2a0758 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83063fdea230 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023fed7e78
(XEN)    0000000000000000 ffff83063fdea230 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c480373930 ffff82c48011e474 0000000000000009
(XEN)    0000000000000009 0000000000000000 0000000000000000 000000008036e980
(XEN)    000013840010491f ffff82c480262080 ffff83023fed7f28 ffff82c480250b00
(XEN)    ffff83023fed7e28 ffff8300bdc26000 00002caf2990f7ec ffff82c480262080
(XEN)    ffff82c480149ad6 0000000000000000 0000000000012000 ffff8300bf59c000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    000000008089a4b4 0000000080a56ff0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000085e48a30
(XEN)    000000000000c102 0000000085e4f9dc 0000000085e4f0ec 000000f000000000
(XEN)    0000000080a5cdc2 0000000000000000 0000000000000246 000000008089a4ac
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000009 ffff8300bf59c000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU9 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU10 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    10
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000500   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fecff28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023fecfd88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480116df0
(XEN) r12: ffff83063fdeaa20   r13: 0000000000a020ce   r14: 00002caf2f26cb89
(XEN) r15: 0000000000adf073   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000d3077000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fecfd88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023fecff28 ffff83063fdea960 ffff82c4801430ba
(XEN)    0000000000adf073 00002caf2f26cb89 0000000000a020ce ffff83063fdeaa20
(XEN)    ffff83063fdea960 ffff83023fecff28 ffff82c480116df0 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff83063fdeaa20 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023fecfe78
(XEN)    0000000000000000 ffff83063fdeaa20 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c480373960 ffff82c48011e474 000000000000000a
(XEN)    000000000000000a 0000000000000000 0000000000000000 000000008036e980
(XEN)    000000dc00067696 ffff82c480264080 ffff83023fecff28 ffff82c480250b00
(XEN)    ffff83023fecfe28 ffff830087f54000 00002caf2f26cb89 ffff82c480264080
(XEN)    ffff82c480149ad6 0000000000000000 0000000000014000 ffff8300bf59a000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    000000008089a4b4 0000000080a56ff0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000085faecd0
(XEN)    000000000000c102 0000000085dce9dc 0000000085dce0ec 000000f000000000
(XEN)    0000000080a5cdc2 0000000000000000 0000000000000246 000000008089a4ac
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 000000000000000a ffff8300bf59a000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480116df0>] csched_tick_suspend+0x0/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU10 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU11 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    11
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000580   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023febff28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023febfd88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480116df0
(XEN) r12: ffff8302fdebe230   r13: 0000000000f8e276   r14: 00002caf62845785
(XEN) r15: 0000000000bcd8b1   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000f75714d0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023febfd88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff83023ff0f508 ffff83023febff28 ffff8302fdebe170 ffff82c4801430ba
(XEN)    0000000000bcd8b1 00002caf62845785 0000000000f8e276 ffff8302fdebe230
(XEN)    ffff8302fdebe170 ffff83023febff28 ffff82c480116df0 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff8302fdebe230 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023febfe78
(XEN)    0000000000000000 ffff8302fdebe230 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c480373990 ffff82c48011e474 000000000000000b
(XEN)    ffffffffffffffff 0000000000000000 0000000000000000 000000008036e980
(XEN)    00000866000a7d3f ffffffffffffffff ffff83023febff28 ffff82c480250b00
(XEN)    ffff83023febfe28 ffff83009048e000 00002caf62845785 ffff82c480266080
(XEN)    ffff82c480149ad6 0000000000000000 0000000000016000 ffff8300bf598000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aed50 00000000f7727ee0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 00000000d8681fc6 00000000f7727ee0
(XEN)    0000000000000006 00000000f7727ec0 000000008610bfa0 000000f000000000
(XEN)    00000000f7639ca2 0000000000000000 0000000000000206 00000000f78aed34
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 000000000000000b ffff8300bf598000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480116df0>] csched_tick_suspend+0x0/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU11 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU12 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    12
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000600   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023feb7f28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023feb7d88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480168160
(XEN) r12: ffff8302fdebea20   r13: 0000000000737e70   r14: 00002caf62844c36
(XEN) r15: 0000000000cbc1ea   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000f72481aa
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023feb7d88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff83023ff0f5c8 ffff83023feb7f28 ffff8302fdebe960 ffff82c4801430ba
(XEN)    0000000000cbc1ea 00002caf62844c36 0000000000737e70 ffff8302fdebea20
(XEN)    ffff8302fdebe960 ffff83023feb7f28 ffff82c480168160 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff8302fdebea20 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023feb7e78
(XEN)    0000000000000000 ffff8302fdebea20 ffff82c480189e42 00002f28bf9ec250
(XEN)    ffff82c4801441b5 ffff82c4803739c0 ffff82c48011e474 000000000000000c
(XEN)    000000000000000c 0000000000000000 0000000000000000 000000008036e980
(XEN)    00001ce400047098 ffff82c480268080 ffff83023feb7f28 ffff82c480250b00
(XEN)    ffff83023feb7e28 ffff8300bf6de000 00002caf62844c36 ffff82c480268080
(XEN)    ffff82c480149ad6 0000000000000000 0000000000018000 ffff8300bf776000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aed50 00000000f7727ee0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 00000000248d4714 00000000f7727ee0
(XEN)    000000000000000b 00000000f7727ec0 00000000861bcda0 0000000000000000
(XEN)    00000000f75b9ca2 0000000000000000 0000000000000202 00000000f78aed34
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 000000000000000c ffff8300bf776000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480168160>] send_IPI_mask_phys+0x0/0xf0
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU12 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU13 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    13
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000680   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fea7f28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023fea7d88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: ffff82c480116df0
(XEN) r12: ffff8302fd6d3230   r13: 0000000000af3960   r14: 00002caf1aaaccb3
(XEN) r15: 0000000000daa904   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000f713db34
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fea7d88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023fea7f28 ffff8302fd6d3170 ffff82c4801430ba
(XEN)    0000000000daa904 00002caf1aaaccb3 0000000000af3960 ffff8302fd6d3230
(XEN)    ffff8302fd6d3170 ffff83023fea7f28 ffff82c480116df0 0000000000000001
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff8302fd6d3230 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023fea7e78
(XEN)    0000000000000000 ffff8302fd6d3230 ffff82c480189e42 7fffffffffffffff
(XEN)    ffff82c4801441b5 ffff82c4803739f0 ffff82c48011e474 000000000000000d
(XEN)    000000000000000d 0000000000000000 0000000000000000 000000008036e980
(XEN)    00000f020005d0c0 ffff82c48026a080 ffff83023fea7f28 ffff82c480250b00
(XEN)    ffff83023fea7e28 ffff8300bf42a000 00002caf1aaaccb3 ffff82c48026a080
(XEN)    ffff82c480149ad6 0000000000000000 000000000001a000 ffff8300bf774000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78f277c 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000085dab0e0
(XEN)    000000000000c102 0000000085dab9dc 000000008088ddc4 000000f000000000
(XEN)    0000000080a5cdc2 0000000000000000 0000000000000246 00000000f78f2774
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 000000000000000d ffff8300bf774000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c480116df0>] csched_tick_suspend+0x0/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU13 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU14 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    14
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000700   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fe9ff28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023fe9fd88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000002   r11: ffff83040a3b0758
(XEN) r12: ffff8302fd6d3a20   r13: 0000000000e8a987   r14: 00002d2f86910541
(XEN) r15: 0000000000e991b9   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000f6aa9131
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fe9fd88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    ffff82c4802509c0 ffff83023fe9ff28 ffff8302fd6d3960 ffff82c4801430ba
(XEN)    0000000000e991b9 00002d2f86910541 0000000000e8a987 ffff8302fd6d3a20
(XEN)    ffff8302fd6d3960 ffff83023fe9ff28 ffff83040a3b0758 0000000000000002
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff8302fd6d3a20 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023fe9fe78
(XEN)    0000000000000000 ffff8302fd6d3a20 ffff82c480189e42 00002f293f2c9150
(XEN)    ffff82c4801441b5 ffff83045c0f0758 ffff82c48011e474 000000000000000e
(XEN)    000000000000000e 0000000000000000 0000000000000000 000000008036e980
(XEN)    000008d2000192c5 ffff82c48026c080 ffff83023fe9ff28 ffff82c480250b00
(XEN)    ffff83023fe9fe28 ffff8300bf67c000 00002d2f86910541 ffff82c48026c080
(XEN)    ffff82c480149ad6 0000000000000000 000000000001c000 ffff8300bf772000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aec64 0000000000000001 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000004
(XEN)    0000000000001f68 0000000000000000 0000000086199738 000000f000000000
(XEN)    0000000080a5cdb6 0000000000000000 0000000000200246 00000000f78aec5c
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 000000000000000e ffff8300bf772000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU14 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
(XEN) *** Dumping CPU15 host state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    15
(XEN) RIP:    e008:[<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000780   rbx: ffff82c48036e980   rcx: ffff82c480110260
(XEN) rdx: ffff83023fe8ff28   rsi: 00000000d5529a51   rdi: 0000000000000000
(XEN) rbp: 0000000000000000   rsp: ffff83023fe8fd88   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000005   r11: ffff8302e3940758
(XEN) r12: ffff8302dd6d2230   r13: 0000000000e8aa27   r14: 00002caf117d2034
(XEN) r15: 0000000000f83118   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf560000   cr2: 00000000f75914d0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fe8fd88:
(XEN)    ffff82c48036e980 ffff82c480167d17 ffff82c4802509c0 ffff82c48016834e
(XEN)    0000000000000080 ffff83023fe8ff28 ffff8302dd6d2170 ffff82c4801430ba
(XEN)    0000000000f83118 00002caf117d2034 0000000000e8aa27 ffff8302dd6d2230
(XEN)    ffff8302dd6d2170 ffff83023fe8ff28 ffff8302e3940758 0000000000000005
(XEN)    0000000000000000 0000000000000000 0000000000000002 0000000080000000
(XEN)    00000000b3a661f6 00000000d5529a51 ffff8302dd6d2230 000000fb00000000
(XEN)    ffff82c4801877f0 000000000000e008 0000000000000297 ffff83023fe8fe78
(XEN)    0000000000000000 ffff8302dd6d2230 ffff82c480189e42 00002f293f2c9150
(XEN)    ffff82c4801441b5 ffff83040a3b0410 ffff82c48011e474 000000000000000f
(XEN)    000000000000000f 0000000000000000 0000000000000000 000000008036e980
(XEN)    0000089e0002e61d ffff82c48026e080 ffff83023fe8ff28 ffff82c480250b00
(XEN)    ffff83023fe8fe28 ffff8300bcaa0000 00002caf117d2034 ffff82c48026e080
(XEN)    ffff82c480149ad6 0000000000000000 000000000001e000 ffff8300bf770000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    00000000f78aacf8 00000000000003ce 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000008008 00000000000003ce
(XEN)    00000000000003ce 00000000808719e0 00000000f78aacf6 000000f000000000
(XEN)    0000000080a5ce2e 0000000000000000 0000000000200202 00000000f78aaca8
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 000000000000000f ffff8300bf770000
(XEN) Xen call trace:
(XEN)    [<ffff82c480110261>] __dump_execstate+0x1/0x60
(XEN)    [<ffff82c480167d17>] __smp_call_function_interrupt+0x57/0x90
(XEN)    [<ffff82c48016834e>] smp_call_function_interrupt+0x4e/0x90
(XEN)    [<ffff82c4801430ba>] call_function_interrupt+0x2a/0x30
(XEN)    [<ffff82c4801877f0>] hpet_broadcast_exit+0x0/0x150
(XEN)    [<ffff82c480189e42>] acpi_processor_idle+0x362/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) *** Dumping CPU15 guest state: ***
(XEN) No guest context (CPU is idle).
(XEN) 
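
A note on reading the per-CPU dumps above: the top frames are identical on
every CPU because they belong to the dump itself. The 'd' key handler IPIs
each online CPU, the IPI arrives as call_function_interrupt, and
__dump_execstate is what prints the state; the frames underneath,
acpi_processor_idle -> idle_loop, are what each CPU was actually doing, so
at dump time all of these CPUs were idle in the hypervisor. The "+0x1/0x60"
annotation is offset/size within the named symbol; a trivial sketch of how
that annotation is derived, with the symbol base inferred from the +0x1
rather than read from a symbol table:

  /* trace_off.c: reproduce Xen's "symbol+0xOFF/0xSIZE" call-trace
   * annotation from a RIP plus the symbol's start and size.
   * Build: gcc -o trace_off trace_off.c */
  #include <stdio.h>

  int main(void)
  {
      unsigned long rip  = 0xffff82c480110261UL; /* RIP in the dumps above */
      unsigned long base = 0xffff82c480110260UL; /* assumed __dump_execstate start */
      unsigned long size = 0x60;                 /* the /0x60 in the same line */

      printf("[<%016lx>] __dump_execstate+%#lx/%#lx\n", rip, rip - base, size);
      return 0;
  }
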
(XEN) [0: dump Dom0 registers]
(XEN) '0' pressed -> dumping Dom0's registers
(XEN) *** Dumping Dom0 vcpu#0 state: ***
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e033:[<ffffffff813fcf63>]
(XEN) RFLAGS: 0000000000000297   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000605   rbx: ffff8800aa423c00   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8800aa423c00
(XEN) rbp: ffff8800b14dfe78   rsp: ffff8800b14dfe78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000206
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 0000000291d56000   cr2: 00007f8e409cd648
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8800b14dfe78:
(XEN)    ffff8800b14dfea8 ffffffff810ff342 0000000000000004 ffff8800aa423c00
(XEN)    ffff880210c683d0 0000000000000000 ffff8800b14dff28 ffffffff810ff869
(XEN)    ffff8800b14dfec8 ffffffff8100f0de ffff8800b14dfed8 ffffffff8100f1ad
(XEN)    ffff8800b14dfee8 ffffffff81070fdb ffff8800b14dff08 ffffffff810716a1
(XEN)    ffff8800b14dff48 ffff8800aa423c00 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8800b14dff78 ffffffff810ff918
(XEN)    ffff8800b14dff78 ffffffff81058089 000000004cb7c317 0000003987e1bbc0
(XEN)    0000000000000000 00007fff5021e340 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000206 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 ffffffffff6000fc
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000039880cc557 000000000000e033 0000000000000206 00007fff5021e148
(XEN)    000000000000e02b
(XEN) *** Dumping Dom0 vcpu#1 state: ***
(XEN) RIP:    e033:[<ffffffff8100922a>]
(XEN) RFLAGS: 0000000000000246   EM: 1   CONTEXT: pv guest
(XEN) rax: 0000000000040000   rbx: 0000000000000001   rcx: ffffffff8100922a
(XEN) rdx: 0000001911f7df8f   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) rbp: ffff880086555ad0   rsp: ffff880086555ab8   r8:  ffff880086555ab8
(XEN) r9:  0000000000000000   r10: 0000001911f7ddf2   r11: 0000000000000246
(XEN) r12: ffff8800924296d0   r13: ffffc9000002d480   r14: 0000000000000000
(XEN) r15: ffff880086555b78   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 0000000284896000   cr2: 00007f8e3fbe80a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff880086555ab8:
(XEN)    0000000000000000 0000001911f7df8f ffffffff8100eb79 ffff880086555b38
(XEN)    ffffffff8100f2a2 0000000000000000 0000001911f7ddf2 0000000000000000
(XEN)    ffff880086555ab8 0000000000000000 0000000000000000 0000001911f7df8f
(XEN)    0000001911f7df8f 0000000000000000 ffffffff8100f28f ffffffff813fce0b
(XEN)    ffff880086555b48 ffffffff81041fba ffff880086555ba8 ffffffff8104916d
(XEN)    ffff8801d8171968 0000000000000000 0000000000000001 ffffffff81049191
(XEN)    0000000000000000 ffff8801d8171958 0000000000000001 0000000000000000
(XEN)    ffff8801d8171968 0000000000000000 ffff880086555bb8 ffffffff81049191
(XEN)    ffff880086555bf8 ffffffff811015b7 0000000000000000 ffff8800924296d0
(XEN)    ffffffff8104917f 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffff880086555c38 ffffffff8103f632 0000000100000000 ffff8801d8171968
(XEN)    0000000000000000 0000000000000001 0000000000000001 0000000000000001
(XEN)    ffff880086555c78 ffffffff81041f91 ffffffff8100eb79 ffff8801d8171800
(XEN)    ffff8801b05dde20 ffff8801d1a5c128 0000000000000008 00000000000009a3
(XEN)    ffff880086555c88 ffffffff8126b0e5 ffff880086555ca8 ffffffff8126c7d0
(XEN)    ffff8801b05dde20 ffff8801d81719b8 ffff880086555cc8 ffffffff8126c976
(XEN)    ffff880086555cd8 ffff880219426b50 ffff880086555cf8 ffffffff8126c331
(XEN)    ffff8801d8171c48 ffff8801d8171c48 0000000000000008 ffff8801d1a5c000
(XEN)    ffff880086555e78 ffffffff8126bbc0 ffff8800a99313b0 ffff88020b7c83d0
(XEN)    ffff880086555d28 000009a38104162b ffff880086555d38 ffffffff81041643
(XEN) *** Dumping Dom0 vcpu#2 state: ***
(XEN) RIP:    e033:[<ffffffff813fcf63>]
(XEN) RFLAGS: 0000000000000293   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000705   rbx: ffff8801d6819480   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801d6819480
(XEN) rbp: ffff8800acdf9e78   rsp: ffff8800acdf9e78   r8:  0000000000000030
(XEN) r9:  0000000000080000   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 00000002934ba000   cr2: 00007f8e409cd648
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8800acdf9e78:
(XEN)    ffff8800acdf9ea8 ffffffff810ff342 0000000000000003 ffff8801d6819480
(XEN)    ffff8801d0df2580 0000000000000000 ffff8800acdf9f28 ffffffff810ff869
(XEN)    ffff8800acdf9ec8 ffffffff8100f0de ffff8800acdf9ed8 ffffffff8100f1ad
(XEN)    ffff8800acdf9ee8 ffffffff81070fdb ffff8800acdf9f08 ffffffff810716a1
(XEN)    ffff8800acdf9f48 ffff8801d6819480 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8800acdf9f78 ffffffff810ff918
(XEN)    ffff8800acdf9f78 ffffffff81058089 000000004cb7c317 0000003987e1bbc0
(XEN)    0000000000000000 00007fff63521950 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000202 0000000000000000
(XEN)    0000000000080000 0000000000000030 0000000000000010 ffffffffff6000fc
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000039880cc557 000000000000e033 0000000000000202 00007fff63521758
(XEN)    000000000000e02b 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) *** Dumping Dom0 vcpu#3 state: ***
(XEN) RIP:    e033:[<ffffffff813fcf63>]
(XEN) RFLAGS: 0000000000000293   EM: 0   CONTEXT: pv guest
(XEN) rax: 0000000000000805   rbx: ffff8801d83a7900   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000001   rdi: ffff8801d83a7900
(XEN) rbp: ffff8800ab481e78   rsp: ffff8800ab481e78   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: 0000000000000000   r11: 0000000000000202
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000005
(XEN) r15: 0000000000000005   cr0: 0000000080050033   cr4: 0000000000002660
(XEN) cr3: 00000002ffeef000   cr2: 000000398809a3a0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=ffff8800ab481e78:
(XEN)    ffff8800ab481ea8 ffffffff810ff342 ffff8800bbc17b00 ffff8801d83a7900
(XEN)    ffff8801d0d980c0 0000000000000000 ffff8800ab481f28 ffffffff810ff869
(XEN)    ffff8800ab481ec8 0000000000000000 0000000000000000 ffff8800bbc17b00
(XEN)    ffff8800ab481f78 ffffffff81123796 0000000000000001 0000000000000001
(XEN)    00000001ab481f48 ffff8801d83a7900 0000000000000000 0000000000000000
(XEN)    0000000000000001 0000000000000005 ffff8800ab481f78 ffffffff810ff918
(XEN)    0000000000000000 0000000200000001 000000004cb7c317 0000003987e1bbc0
(XEN)    0000000000000000 00007fffdff8e480 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffffff81013db2 0000000000000202 0000000000000000
(XEN)    0000000000000001 0000000000000001 0000000000000010 000000000042c127
(XEN)    0000000000000000 0000000000000001 0000000000000005 0000000000000010
(XEN)    00000039880cc557 000000000000e033 0000000000000202 00007fffdff8e288
(XEN)    000000000000e02b ffff8801d18e8000 ffffffff81650840 0000000000000000
(XEN)    0000000000000003 00007ffffffff000 ffffffff8105eb50 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000057ac6e9d 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
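
The cs/ss values in these vcpu dumps identify the context directly: e008 in
the host dumps above is Xen's own 64-bit ring-0 code segment, while
e033/e02b here are the flat 64-bit ring-3 code/stack selectors Xen hands to
PV guests (FLAT_RING3_CS64/FLAT_RING3_SS64 in the public headers, if I
remember right), i.e. Dom0's kernel runs deprivileged in ring 3. A minimal
decode of the selector fields, plain x86, nothing Xen-specific assumed:

  /* decode_sel.c: split an x86 segment selector into index/TI/RPL.
   * Build: gcc -o decode_sel decode_sel.c */
  #include <stdio.h>

  static void decode(unsigned int sel)
  {
      printf("%04x -> index=%#x table=%s rpl=%u\n",
             sel, sel >> 3, (sel & 4) ? "LDT" : "GDT", sel & 3);
  }

  int main(void)
  {
      decode(0xe008); /* hypervisor cs: GDT, ring 0 */
      decode(0xe033); /* PV guest cs:  GDT, ring 3 */
      decode(0xe02b); /* PV guest ss:  GDT, ring 3 */
      return 0;
  }
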
(XEN) [H: dump heap info]
(XEN) 'H' pressed -> dumping heap info (now-0x2F29:5FC59FA8)
(XEN) heap[node=0][zone=0] -> 0 pages
(XEN) heap[node=0][zone=1] -> 0 pages
(XEN) heap[node=0][zone=2] -> 0 pages
(XEN) heap[node=0][zone=3] -> 0 pages
(XEN) heap[node=0][zone=4] -> 0 pages
(XEN) heap[node=0][zone=5] -> 0 pages
(XEN) heap[node=0][zone=6] -> 0 pages
(XEN) heap[node=0][zone=7] -> 0 pages
(XEN) heap[node=0][zone=8] -> 0 pages
(XEN) heap[node=0][zone=9] -> 0 pages
(XEN) heap[node=0][zone=10] -> 0 pages
(XEN) heap[node=0][zone=11] -> 0 pages
(XEN) heap[node=0][zone=12] -> 0 pages
(XEN) heap[node=0][zone=13] -> 0 pages
(XEN) heap[node=0][zone=14] -> 16120 pages
(XEN) heap[node=0][zone=15] -> 32768 pages
(XEN) heap[node=0][zone=16] -> 65536 pages
(XEN) heap[node=0][zone=17] -> 131072 pages
(XEN) heap[node=0][zone=18] -> 262144 pages
(XEN) heap[node=0][zone=19] -> 5428 pages
(XEN) heap[node=0][zone=20] -> 4347 pages
(XEN) heap[node=0][zone=21] -> 14109 pages
(XEN) heap[node=0][zone=22] -> 0 pages
(XEN) heap[node=0][zone=23] -> 0 pages
(XEN) heap[node=0][zone=24] -> 0 pages
(XEN) heap[node=0][zone=25] -> 0 pages
(XEN) heap[node=0][zone=26] -> 0 pages
(XEN) heap[node=0][zone=27] -> 0 pages
(XEN) heap[node=0][zone=28] -> 0 pages
(XEN) heap[node=0][zone=29] -> 0 pages
(XEN) heap[node=0][zone=30] -> 0 pages
(XEN) heap[node=0][zone=31] -> 0 pages
(XEN) heap[node=0][zone=32] -> 0 pages
(XEN) heap[node=0][zone=33] -> 0 pages
(XEN) heap[node=0][zone=34] -> 0 pages
(XEN) heap[node=0][zone=35] -> 0 pages
(XEN) heap[node=0][zone=36] -> 0 pages
(XEN) heap[node=0][zone=37] -> 0 pages
(XEN) heap[node=0][zone=38] -> 0 pages
(XEN) heap[node=0][zone=39] -> 0 pages
(XEN) heap[node=1][zone=0] -> 0 pages
(XEN) heap[node=1][zone=1] -> 0 pages
(XEN) heap[node=1][zone=2] -> 0 pages
(XEN) heap[node=1][zone=3] -> 0 pages
(XEN) heap[node=1][zone=4] -> 0 pages
(XEN) heap[node=1][zone=5] -> 0 pages
(XEN) heap[node=1][zone=6] -> 0 pages
(XEN) heap[node=1][zone=7] -> 0 pages
(XEN) heap[node=1][zone=8] -> 0 pages
(XEN) heap[node=1][zone=9] -> 0 pages
(XEN) heap[node=1][zone=10] -> 0 pages
(XEN) heap[node=1][zone=11] -> 0 pages
(XEN) heap[node=1][zone=12] -> 0 pages
(XEN) heap[node=1][zone=13] -> 0 pages
(XEN) heap[node=1][zone=14] -> 0 pages
(XEN) heap[node=1][zone=15] -> 0 pages
(XEN) heap[node=1][zone=16] -> 0 pages
(XEN) heap[node=1][zone=17] -> 0 pages
(XEN) heap[node=1][zone=18] -> 0 pages
(XEN) heap[node=1][zone=19] -> 0 pages
(XEN) heap[node=1][zone=20] -> 0 pages
(XEN) heap[node=1][zone=21] -> 532000 pages
(XEN) heap[node=1][zone=22] -> 690524 pages
(XEN) heap[node=1][zone=23] -> 0 pages
(XEN) heap[node=1][zone=24] -> 0 pages
(XEN) heap[node=1][zone=25] -> 0 pages
(XEN) heap[node=1][zone=26] -> 0 pages
(XEN) heap[node=1][zone=27] -> 0 pages
(XEN) heap[node=1][zone=28] -> 0 pages
(XEN) heap[node=1][zone=29] -> 0 pages
(XEN) heap[node=1][zone=30] -> 0 pages
(XEN) heap[node=1][zone=31] -> 0 pages
(XEN) heap[node=1][zone=32] -> 0 pages
(XEN) heap[node=1][zone=33] -> 0 pages
(XEN) heap[node=1][zone=34] -> 0 pages
(XEN) heap[node=1][zone=35] -> 0 pages
(XEN) heap[node=1][zone=36] -> 0 pages
(XEN) heap[node=1][zone=37] -> 0 pages
(XEN) heap[node=1][zone=38] -> 0 pages
(XEN) heap[node=1][zone=39] -> 0 pages
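
For the heap dump: free pages are binned by NUMA node and zone, and the
zone index appears to be fls(mfn), i.e. zone z holds MFNs in
[2^(z-1), 2^z), so node 1's free memory (zones 21-22) all sits above the
4GiB boundary; that mapping is my reading of page_alloc.c, not something
printed in the dump. Adding up the non-zero counts gives roughly 6.7GiB
still free:

  /* heap_total.c: total the non-zero zone counts from the 'H' dump and
   * convert to MiB (4KiB pages).  Build: gcc -o heap_total heap_total.c */
  #include <stdio.h>

  int main(void)
  {
      unsigned long node0[] = { 16120, 32768, 65536, 131072, 262144,
                                5428, 4347, 14109 };   /* zones 14..21 */
      unsigned long node1[] = { 532000, 690524 };      /* zones 21..22 */
      unsigned long n0 = 0, n1 = 0;
      size_t i;

      for ( i = 0; i < sizeof(node0)/sizeof(node0[0]); i++ ) n0 += node0[i];
      for ( i = 0; i < sizeof(node1)/sizeof(node1[0]); i++ ) n1 += node1[i];

      printf("node0: %lu pages (%lu MiB)\n", n0, n0 * 4 / 1024);
      printf("node1: %lu pages (%lu MiB)\n", n1, n1 * 4 / 1024);
      printf("total: %lu pages (%lu MiB)\n", n0 + n1, (n0 + n1) * 4 / 1024);
      return 0;
  }
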
(XEN) [M: dump MSI state]
(XEN) PCI-MSI interrupt information:
(XEN)  MSI    48 vec=c1  fixed  edge   assert phys    cpu dest=00000000 mask=1/1/1
(XEN)  MSI    49 vec=c9  fixed  edge   assert phys    cpu dest=00000000 mask=1/1/1
(XEN)  MSI    50 vec=d1  fixed  edge   assert phys    cpu dest=00000000 mask=1/1/1
(XEN)  MSI    51 vec=d9  fixed  edge   assert phys    cpu dest=00000000 mask=1/1/1
(XEN)  MSIX   52 vec=52  fixed  edge   assert phys    cpu dest=00000000 mask=1/0/0
(XEN)  MSIX   53 vec=5a  fixed  edge   assert phys    cpu dest=00000000 mask=1/0/0
(XEN)  MSIX   54 vec=62  fixed  edge   assert phys    cpu dest=00000000 mask=1/0/0
(XEN)  MSIX   55 vec=6a  fixed  edge   assert phys    cpu dest=00000000 mask=1/0/0
(XEN)  MSIX   56 vec=72  fixed  edge   assert phys    cpu dest=00000000 mask=1/0/0
(XEN)  MSIX   57 vec=7a  fixed  edge   assert phys    cpu dest=00000000 mask=1/1/1
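
The vec/dest columns of the MSI dump correspond to the architectural MSI
data/address fields; the sketch below composes them per the Intel SDM
layout. How Xen's "fixed edge assert phys" flags and the three mask bits
map onto these is my assumption from the field names, not taken from
msi.c:

  /* msi_fields.c: where "vec" and "dest" live in the MSI address/data
   * registers.  Build: gcc -o msi_fields msi_fields.c */
  #include <stdio.h>

  int main(void)
  {
      unsigned int vec = 0xc1, dest = 0x00;   /* MSI 48 in the dump above */

      unsigned int addr = 0xfee00000          /* fixed MSI address window */
                        | (dest << 12)        /* destination APIC ID      */
                        | (0u << 3)           /* RH=0                     */
                        | (0u << 2);          /* DM=0 -> "phys"           */
      unsigned int data = (0u << 15)          /* trigger=0 -> "edge"      */
                        | (1u << 14)          /* level=1  -> "assert"     */
                        | (0u << 8)           /* delivery=0 -> "fixed"    */
                        | vec;                /* vector, bits 7:0         */

      printf("MSI addr=%#010x data=%#06x\n", addr, data);
      return 0;
  }
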
(XEN) [Q: dump PCI devices]
(XEN) ==== PCI devices ====
(XEN) 05:00.0 - dom 0   - MSIs < >
(XEN) 04:00.0 - dom 0   - MSIs < >
(XEN) 01:00.1 - dom 0   - MSIs < >
(XEN) 01:00.0 - dom 0   - MSIs < 52 53 54 55 56 57 >
(XEN) 00:1f.5 - dom 0   - MSIs < >
(XEN) 00:1f.3 - dom 0   - MSIs < >
(XEN) 00:1f.2 - dom 0   - MSIs < >
(XEN) 00:1f.0 - dom 0   - MSIs < >
(XEN) 00:1e.0 - dom 0   - MSIs < >
(XEN) 00:1d.7 - dom 0   - MSIs < >
(XEN) 00:1d.2 - dom 0   - MSIs < >
(XEN) 00:1d.1 - dom 0   - MSIs < >
(XEN) 00:1d.0 - dom 0   - MSIs < >
(XEN) 00:1a.7 - dom 0   - MSIs < >
(XEN) 00:1a.0 - dom 0   - MSIs < >
(XEN) 00:14.3 - dom 0   - MSIs < >
(XEN) 00:14.2 - dom 0   - MSIs < >
(XEN) 00:14.1 - dom 0   - MSIs < >
(XEN) 00:14.0 - dom 0   - MSIs < >
(XEN) 00:13.0 - dom 0   - MSIs < >
(XEN) 00:11.1 - dom 0   - MSIs < >
(XEN) 00:11.0 - dom 0   - MSIs < >
(XEN) 00:10.1 - dom 0   - MSIs < >
(XEN) 00:10.0 - dom 0   - MSIs < >
(XEN) 00:09.0 - dom 0   - MSIs < 51 >
(XEN) 00:07.0 - dom 0   - MSIs < 50 >
(XEN) 00:03.0 - dom 0   - MSIs < 49 >
(XEN) 00:01.0 - dom 0   - MSIs < 48 >
(XEN) 00:00.0 - dom 0   - MSIs < >
(XEN) [a: dump timer queues]
(XEN) Dumping timer queues: NOW=0x00002F295FC829A9
(XEN) CPU[00]   1 : ffff82c480373780 ex=0x00002F29608CBA00 ffff82c480250240 ffff82c4801339c0
(XEN)   2 : ffff82c48036e900 ex=0x00002F296183BC8F 0000000000000000 ffff82c480117480
(XEN)   3 : ffff82c48038ce80 ex=0x00002F29CB73D320 0000000000000000 ffff82c480193830
(XEN)   4 : ffff82c4803897a0 ex=0x00002F333973B2E4 0000000000000000 ffff82c48016ae40
(XEN)   5 : ffff82c4802500a0 ex=0x00002F2961883525 0000000000000000 ffff82c48011a100
(XEN)  L0 : ffff83023fffa7c8 ex=0x00002F2960528B05 0000000000000000 ffff82c480118040
(XEN) 
(XEN) CPU[01]   1 : ffff83023fffad78 ex=0x00002F295FF42380 0000000000000001 ffff82c480118040
(XEN)   2 : ffff82c4803737b0 ex=0x00002F29608CBA00 ffff82c480252240 ffff82c4801339c0
(XEN)   3 : ffff82c4802520a0 ex=0x00002F29618CBA4B 0000000000000000 ffff82c48011a100
(XEN)  L0 : ffff8300bf558060 ex=0x00002F295FCC92F2 ffff8300bf558000 ffff82c48011a250
(XEN) 
(XEN) CPU[02]   1 : ffff83023fffae38 ex=0x00002F295FF42380 0000000000000002 ffff82c480118040
(XEN)   2 : ffff82c4802540a0 ex=0x00002F29618FA595 0000000000000000 ffff82c48011a100
(XEN)   3 : ffff82c4803737e0 ex=0x00002F29608CBA00 ffff82c480254240 ffff82c4801339c0
(XEN)   4 : ffff8302e3940410 ex=0x00002F2989AFCBC0 ffff8302e39403d0 ffff82c4801a2bf0
(XEN)  L0 : ffff8300bf2f6060 ex=0x00002F295FCE7B37 ffff8300bf2f6000 ffff82c48011a250
(XEN) 
(XEN) CPU[03]   1 : ffff83023fffaef8 ex=0x00002F295FF42380 0000000000000003 ffff82c480118040
(XEN)   2 : ffff82c480373810 ex=0x00002F29608CBA00 ffff82c480256240 ffff82c4801339c0
(XEN)   3 : ffff82c4802560a0 ex=0x00002F2961917F5D 0000000000000000 ffff82c48011a100
(XEN)  L0 : ffff8300bf2f4060 ex=0x00002F295FD06373 ffff8300bf2f4000 ffff82c48011a250
(XEN) 
(XEN) CPU[04]  L0 : ffff83057d630410 ex=0x00002F2971FBBB26 ffff83057d6303d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[05]  L0 : ffff8304c40e0410 ex=0x00002F29778EEB6B ffff8304c40e03d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[06] 
(XEN) CPU[07]  L0 : ffff8302e2190410 ex=0x00002F2965923C74 ffff8302e21903d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[08]   1 : ffff83045c0f0410 ex=0x00002F2985FA8305 ffff83045c0f03d0 ffff82c4801a2bf0
(XEN)   2 : ffff830451880758 ex=0x00002F5EF14BF566 ffff830451880738 ffff82c4801a2840
(XEN)  L0 : ffff83033e690410 ex=0x00002F2960993B5D ffff83033e6903d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[09]   1 : ffff8302cd2a0758 ex=0x00002F9F8D310543 ffff8302cd2a0738 ffff82c4801a2840
(XEN)  L0 : ffff8302cd2a0410 ex=0x00002F299B3E1063 ffff8302cd2a03d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[10]  L0 : ffff830551670410 ex=0x00002F29671337AA ffff8305516703d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[11] 
(XEN) CPU[12] 
(XEN) CPU[13]  L0 : ffff830451880410 ex=0x00002F29777F4D72 ffff8304518803d0 ffff82c4801a2bf0
(XEN) 
(XEN) CPU[14]   1 : ffff83040a3b0758 ex=0x00002F5E417FD500 ffff83040a3b0738 ffff82c4801a2840
(XEN)   2 : ffff83033e690758 ex=0x00002F6776D69907 ffff83033e690738 ffff82c4801a2840
(XEN)  L0 : ffff83045c0f0758 ex=0x00002F5A9346E966 ffff83045c0f0738 ffff82c4801a2840
(XEN) 
(XEN) CPU[15]   1 : ffff830551670758 ex=0x00002F2CDC5CB31A ffff830551670738 ffff82c4801a2840
(XEN)   2 : ffff8304c40e0758 ex=0x00002F5EB5BF99DF ffff8304c40e0738 ffff82c4801a2840
(XEN)   3 : ffff8302e3940758 ex=0x00002F6941273F47 ffff8302e3940738 ffff82c4801a2840
(XEN)   4 : ffff8302e2190758 ex=0x00002F6527C35048 ffff8302e2190738 ffff82c4801a2840
(XEN)   5 : ffff83057d630758 ex=0x00002F66D566872A ffff83057d630738 ffff82c4801a2840
(XEN)  L0 : ffff83040a3b0410 ex=0x00002F297AA1272D ffff83040a3b03d0 ffff82c4801a2bf0
(XEN) 
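
The NOW= header and the ex= expiries in the timer dump are both s_time_t
values, i.e. nanoseconds since boot as far as I know, so they compare
directly; that puts the box at roughly 14.4 hours of uptime, with e.g.
CPU[00]'s first timer due about 13ms after the dump was taken:

  /* timer_now.c: convert the 'a' dump timestamps.
   * Build: gcc -o timer_now timer_now.c */
  #include <stdio.h>

  int main(void)
  {
      unsigned long long now = 0x00002F295FC829A9ULL; /* dump header     */
      unsigned long long ex  = 0x00002F29608CBA00ULL; /* CPU[00] entry 1 */

      printf("uptime: %llu ns (~%.1f hours)\n", now, now / 3.6e12);
      printf("CPU[00] timer 1 fires in %llu ns (~%.1f ms)\n",
             ex - now, (ex - now) / 1e6);
      return 0;
  }
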
(XEN) [c: dump ACPI Cx structures]
(XEN) 'c' pressed -> printing ACPI Cx structures
(XEN) ==cpu0==
(XEN) active state:             C1
(XEN) max_cstate:               C7
(XEN) states:
(XEN)    *C1:   type[C1] latency[000] usage[54266020] duration[3578278140527]
(XEN)     C2:   type[C2] latency[096] usage[04696600] duration[1243442337920]
(XEN)     C3:   type[C3] latency[128] usage[10857622] duration[3059567368687]
(XEN)     C0:   usage[69820242] duration[43973620525032]
(XEN) ==cpu1==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[86666680] duration[3849298971636]
(XEN)     C2:   type[C2] latency[096] usage[03174276] duration[1219104266464]
(XEN)    *C3:   type[C3] latency[128] usage[35808371] duration[28519998226403]
(XEN)     C0:   usage[125649327] duration[18266538701533]
(XEN) ==cpu2==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[96900219] duration[3525352961495]
(XEN)     C2:   type[C2] latency[096] usage[03036369] duration[1304302071482]
(XEN)    *C3:   type[C3] latency[128] usage[35424555] duration[28944797019776]
(XEN)     C0:   usage[135361143] duration[18080519905811]
(XEN) ==cpu3==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[92574680] duration[3579080187994]
(XEN)     C2:   type[C2] latency[096] usage[03375092] duration[1270209631805]
(XEN)    *C3:   type[C3] latency[128] usage[36819503] duration[29199903284091]
(XEN)     C0:   usage[132769275] duration[17805810649201]
(XEN) ==cpu4==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[215648830] duration[9845218024349]
(XEN)     C2:   type[C2] latency[096] usage[02004283] duration[743230857747]
(XEN)    *C3:   type[C3] latency[128] usage[13515973] duration[21580654794441]
(XEN)     C0:   usage[231169086] duration[19685931870330]
(XEN) ==cpu5==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[187832221] duration[9104631612753]
(XEN)     C2:   type[C2] latency[096] usage[01939152] duration[759431246063]
(XEN)    *C3:   type[C3] latency[128] usage[14174342] duration[24314258613050]
(XEN)     C0:   usage[203945715] duration[17676745867169]
(XEN) ==cpu6==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[191506848] duration[9303776870019]
(XEN)     C2:   type[C2] latency[096] usage[01990330] duration[833918147881]
(XEN)    *C3:   type[C3] latency[128] usage[15636486] duration[27206335967523]
(XEN)     C0:   usage[209133664] duration[14511068147368]
(XEN) ==cpu7==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[159456787] duration[8133217960850]
(XEN)     C2:   type[C2] latency[096] usage[01793145] duration[770742266037]
(XEN)    *C3:   type[C3] latency[128] usage[14436128] duration[26152298127639]
(XEN)     C0:   usage[175686060] duration[16798872572809]
(XEN) ==cpu8==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[168343332] duration[9616878990225]
(XEN)     C2:   type[C2] latency[096] usage[02247025] duration[1444209007683]
(XEN)    *C3:   type[C3] latency[128] usage[18517463] duration[33568858701668]
(XEN)     C0:   usage[189107820] duration[7225216020389]
(XEN) ==cpu9==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[142224236] duration[8299681150985]
(XEN)     C2:   type[C2] latency[096] usage[02004676] duration[1738856588278]
(XEN)    *C3:   type[C3] latency[128] usage[21320471] duration[27505132158345]
(XEN)     C0:   usage[165549383] duration[14311524616565]
(XEN) ==cpu10==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[154563722] duration[8832860829347]
(XEN)     C2:   type[C2] latency[096] usage[02135353] duration[1897128565843]
(XEN)    *C3:   type[C3] latency[128] usage[21946082] duration[28357798680390]
(XEN)     C0:   usage[178645157] duration[12767438231836]
(XEN) ==cpu11==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[145115113] duration[8532964634753]
(XEN)     C2:   type[C2] latency[096] usage[02062900] duration[1755219369579]
(XEN)    *C3:   type[C3] latency[128] usage[21949719] duration[27373476495420]
(XEN)     C0:   usage[169127732] duration[14193598983984]
(XEN) ==cpu12==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[175631789] duration[9153324897229]
(XEN)     C2:   type[C2] latency[096] usage[02048380] duration[883821620523]
(XEN)    *C3:   type[C3] latency[128] usage[16289029] duration[28198648541494]
(XEN)     C0:   usage[193969198] duration[13619496218209]
(XEN) ==cpu13==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[147227934] duration[7889310680483]
(XEN)     C2:   type[C2] latency[096] usage[01787079] duration[789183845901]
(XEN)    *C3:   type[C3] latency[128] usage[14628274] duration[26505858143466]
(XEN)     C0:   usage[163643287] duration[16670970401316]
(XEN) ==cpu14==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[158297145] duration[8411555839236]
(XEN)     C2:   type[C2] latency[096] usage[01897004] duration[878393322793]
(XEN)    *C3:   type[C3] latency[128] usage[16134982] duration[28901400237229]
(XEN)     C0:   usage[176329131] duration[13664005465730]
(XEN) ==cpu15==
(XEN) active state:             C3
(XEN) max_cstate:               C7
(XEN) states:
(XEN)     C1:   type[C1] latency[000] usage[135627212] duration[7384924625889]
(XEN)     C2:   type[C2] latency[096] usage[01688228] duration[779681049597]
(XEN)    *C3:   type[C3] latency[128] usage[14466738] duration[27052449266577]
(XEN)     C0:   usage[151782178] duration[16638331716184]
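
In the Cx dump, usage[] is the entry count and duration[] nanoseconds of
residency; per CPU the four durations should sum to roughly the uptime seen
in the timer dump, and for cpu0 they do. Also worth noting that cpu0 spends
about 85% of its time in C0 and is currently held at C1, while cpus 1-15
idle mostly in C3, which fits Dom0/interrupt load concentrating on cpu0:

  /* cstate_check.c: sanity-check cpu0's residency counters.
   * Build: gcc -o cstate_check cstate_check.c */
  #include <stdio.h>

  int main(void)
  {
      unsigned long long c1 = 3578278140527ULL, c2 = 1243442337920ULL,
                         c3 = 3059567368687ULL, c0 = 43973620525032ULL;
      unsigned long long sum = c0 + c1 + c2 + c3;

      printf("cpu0 duration sum: %llu ns (~%.1f hours)\n", sum, sum / 3.6e12);
      printf("cpu0 busy (C0) share: %.1f%%\n", 100.0 * c0 / sum);
      return 0;
  }
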
(XEN) [e: dump evtchn info]
(XEN) 'e' pressed -> dumping event-channel info
(XEN) Domain 0 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 0:
(XEN)     port [p/m]
(XEN)        1 [1/0]: s=5 n=0 v=0 x=0
(XEN)        2 [1/0]: s=6 n=0 x=0
(XEN)        3 [0/0]: s=6 n=0 x=0
(XEN)        4 [0/0]: s=5 n=0 v=1 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=5 n=1 v=0 x=0
(XEN)        7 [0/0]: s=6 n=1 x=0
(XEN)        8 [0/0]: s=6 n=1 x=0
(XEN)        9 [0/0]: s=5 n=1 v=1 x=0
(XEN)       10 [0/0]: s=6 n=1 x=0
(XEN)       11 [0/0]: s=5 n=2 v=0 x=0
(XEN)       12 [0/0]: s=6 n=2 x=0
(XEN)       13 [0/0]: s=6 n=2 x=0
(XEN)       14 [0/0]: s=5 n=2 v=1 x=0
(XEN)       15 [0/0]: s=6 n=2 x=0
(XEN)       16 [0/0]: s=5 n=3 v=0 x=0
(XEN)       17 [0/0]: s=6 n=3 x=0
(XEN)       18 [0/0]: s=6 n=3 x=0
(XEN)       19 [0/0]: s=5 n=3 v=1 x=0
(XEN)       20 [0/0]: s=6 n=3 x=0
(XEN)       21 [0/0]: s=3 n=3 d=0 p=39 x=0
(XEN)       22 [0/0]: s=4 n=0 p=9 x=0
(XEN)       23 [0/0]: s=5 n=0 v=9 x=0
(XEN)       24 [0/0]: s=5 n=0 v=16 x=0
(XEN)       25 [0/0]: s=5 n=0 v=2 x=0
(XEN)       26 [0/0]: s=4 n=0 p=12 x=0
(XEN)       27 [0/0]: s=4 n=0 p=1 x=0
(XEN)       28 [0/0]: s=4 n=0 p=8 x=0
(XEN)       29 [0/0]: s=4 n=0 p=18 x=0
(XEN)       30 [0/0]: s=4 n=0 p=23 x=0
(XEN)       31 [0/0]: s=4 n=0 p=16 x=0
(XEN)       32 [0/0]: s=4 n=0 p=19 x=0
(XEN)       33 [0/0]: s=4 n=0 p=32 x=0
(XEN)       34 [1/0]: s=4 n=0 p=299 x=0
(XEN)       35 [0/0]: s=4 n=0 p=298 x=0
(XEN)       36 [1/0]: s=4 n=0 p=297 x=0
(XEN)       37 [0/0]: s=4 n=0 p=296 x=0
(XEN)       38 [0/0]: s=4 n=0 p=295 x=0
(XEN)       39 [0/0]: s=3 n=1 d=0 p=21 x=0
(XEN)       40 [0/0]: s=5 n=0 v=3 x=0
(XEN)       41 [0/0]: s=3 n=0 d=3826 p=8 x=0
(XEN)       42 [0/0]: s=3 n=0 d=3828 p=8 x=0
(XEN)       43 [0/0]: s=3 n=0 d=3832 p=3 x=0
(XEN)       44 [0/0]: s=3 n=0 d=3827 p=8 x=0
(XEN)       45 [0/0]: s=3 n=0 d=3826 p=3 x=0
(XEN)       47 [0/0]: s=3 n=0 d=435 p=8 x=0
(XEN)       48 [0/0]: s=3 n=0 d=3829 p=3 x=0
(XEN)       49 [0/0]: s=3 n=0 d=2378 p=8 x=0
(XEN)       50 [0/0]: s=3 n=0 d=435 p=3 x=0
(XEN)       51 [0/1]: s=3 n=0 d=435 p=1 x=0
(XEN)       52 [0/0]: s=3 n=0 d=3829 p=1 x=0
(XEN)       53 [0/1]: s=3 n=0 d=3826 p=1 x=0
(XEN)       54 [0/0]: s=3 n=0 d=3829 p=2 x=0
(XEN)       55 [0/0]: s=3 n=0 d=3825 p=8 x=0
(XEN)       56 [0/1]: s=3 n=0 d=435 p=2 x=0
(XEN)       57 [0/0]: s=3 n=0 d=3830 p=3 x=0
(XEN)       58 [0/0]: s=3 n=0 d=3830 p=1 x=0
(XEN)       59 [0/0]: s=3 n=0 d=3830 p=2 x=0
(XEN)       60 [0/0]: s=3 n=0 d=435 p=7 x=0
(XEN)       61 [0/0]: s=3 n=0 d=3825 p=7 x=0
(XEN)       62 [0/0]: s=3 n=0 d=3826 p=7 x=0
(XEN)       63 [0/1]: s=3 n=0 d=3826 p=2 x=0
(XEN)       64 [0/0]: s=3 n=0 d=3831 p=3 x=0
(XEN)       65 [0/0]: s=3 n=0 d=3828 p=7 x=0
(XEN)       68 [0/0]: s=3 n=0 d=3827 p=3 x=0
(XEN)       70 [0/0]: s=3 n=0 d=2378 p=3 x=0
(XEN)       71 [0/1]: s=3 n=0 d=2378 p=1 x=0
(XEN)       72 [0/0]: s=3 n=0 d=3831 p=1 x=0
(XEN)       73 [0/1]: s=3 n=0 d=3827 p=1 x=0
(XEN)       74 [0/0]: s=3 n=0 d=3827 p=2 x=0
(XEN)       75 [0/0]: s=3 n=0 d=3831 p=2 x=0
(XEN)       77 [0/0]: s=3 n=0 d=3827 p=7 x=0
(XEN)       78 [0/1]: s=3 n=0 d=2378 p=2 x=0
(XEN)       79 [0/0]: s=3 n=0 d=2378 p=7 x=0
(XEN)       80 [0/0]: s=3 n=0 d=3828 p=3 x=0
(XEN)       81 [0/1]: s=3 n=0 d=3828 p=1 x=0
(XEN)       82 [0/0]: s=3 n=0 d=3825 p=3 x=0
(XEN)       83 [0/0]: s=3 n=0 d=3828 p=2 x=0
(XEN)       84 [0/1]: s=3 n=0 d=3825 p=1 x=0
(XEN)       87 [0/0]: s=3 n=0 d=3825 p=2 x=0
(XEN) Domain 435 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 435:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=51 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=56 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=50 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=60 x=0
(XEN)        8 [1/0]: s=3 n=0 d=0 p=47 x=0
(XEN) Domain 2378 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 2378:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=71 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=78 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=70 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=79 x=0
(XEN)        8 [1/0]: s=3 n=0 d=0 p=49 x=0
(XEN) Domain 3825 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3825:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=84 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=87 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=82 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=61 x=0
(XEN)        8 [1/0]: s=3 n=0 d=0 p=55 x=0
(XEN) Domain 3826 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3826:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=53 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=63 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=45 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=62 x=0
(XEN)        8 [1/0]: s=3 n=0 d=0 p=41 x=0
(XEN) Domain 3827 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3827:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=73 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=74 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=68 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=77 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=44 x=0
(XEN) Domain 3828 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3828:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=3 n=0 d=0 p=81 x=1
(XEN)        2 [0/1]: s=3 n=1 d=0 p=83 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=80 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
(XEN)        5 [0/0]: s=6 n=0 x=0
(XEN)        6 [0/0]: s=2 n=0 d=0 x=0
(XEN)        7 [0/0]: s=3 n=0 d=0 p=65 x=0
(XEN)        8 [0/0]: s=3 n=0 d=0 p=42 x=0
(XEN) Domain 3829 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3829:
(XEN)     port [p/m]
(XEN)        1 [0/0]: s=3 n=0 d=0 p=52 x=1
(XEN)        2 [0/0]: s=3 n=1 d=0 p=54 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=48 x=0
(XEN)        4 [0/0]: s=2 n=0 d=0 x=0
(XEN) Domain 3830 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3830:
(XEN)     port [p/m]
(XEN)        1 [0/0]: s=3 n=0 d=0 p=58 x=1
(XEN)        2 [0/0]: s=3 n=1 d=0 p=59 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=57 x=0
(XEN)        4 [0/0]: s=2 n=0 d=0 x=0
(XEN) Domain 3831 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3831:
(XEN)     port [p/m]
(XEN)        1 [0/0]: s=3 n=0 d=0 p=72 x=1
(XEN)        2 [0/0]: s=3 n=1 d=0 p=75 x=1
(XEN)        3 [0/0]: s=3 n=0 d=0 p=64 x=0
(XEN)        4 [0/0]: s=2 n=0 d=0 x=0
(XEN) Domain 3832 polling vCPUs: {No periodic timer}
(XEN) Event channel information for domain 3832:
(XEN)     port [p/m]
(XEN)        1 [0/1]: s=2 n=0 d=0 x=1
(XEN)        2 [0/1]: s=2 n=1 d=0 x=1
(XEN)        3 [0/1]: s=3 n=0 d=0 p=43 x=0
(XEN)        4 [0/1]: s=2 n=0 d=0 x=0
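
Key for the event-channel dump, as I understand it: [p/m] is
pending/masked, n= the vcpu to notify, and s= the channel state, with the
extra field depending on that state (v=virq, p=pirq or remote port,
d=remote domain); x=1 should mark a Xen-internal consumer. So e.g. Dom0
port 21 is an interdomain channel looped back to Dom0 port 39. The numeric
mapping below is my reading of the ECS_* enumeration in 4.0's
xen/include/xen/sched.h:

  /* evtchn_state.c: decode the s= field of the 'e' dump.
   * Build: gcc -o evtchn_state evtchn_state.c */
  #include <stdio.h>

  static const char *state[] = {        /* assumed ECS_* order, 0..6 */
      "free", "reserved", "unbound", "interdomain", "pirq", "virq", "ipi"
  };

  int main(void)
  {
      int s = 3, n = 3, d = 0, p = 39;  /* Dom0 port 21 above */
      printf("port 21: s=%d (%s), notify vcpu%d, peer dom%d port %d\n",
             s, state[s], n, d, p);
      return 0;
  }
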
(XEN) [g: print grant table usage]
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:  435 (v1)
(XEN) [16119]        0 0x475da0 0x00000001          0 0x005da0 0x19
(XEN) [16133]        0 0x475dc8 0x00000001          0 0x005dc8 0x19
(XEN) [16383]        0 0x53803a 0x00000001          0 0x00603a 0x19
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 2378 (v1)
(XEN) [16235]        0 0x3b07ec 0x00000001          0 0x005dec 0x19
(XEN) [16241]        0 0x3b07ed 0x00000001          0 0x005ded 0x19
(XEN) [16383]        0 0x3b05df 0x00000001          0 0x005fdf 0x19
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3825 (v1)
(XEN) [15625]        0 0x08f029 0x00000300          0 0x006029 0x09
(XEN) [15628]        0 0x08f926 0x00000300          0 0x005926 0x09
(XEN) [15650]        0 0x08f022 0x00000300          0 0x006022 0x09
(XEN) [15655]        0 0x08f02b 0x00000300          0 0x00602b 0x09
(XEN) [15668]        0 0x08f01f 0x00000300          0 0x00601f 0x09
(XEN) [15675]        0 0x08f020 0x00000300          0 0x006020 0x09
(XEN) [15683]        0 0x08f8af 0x00000300          0 0x0058af 0x09
(XEN) [15689]        0 0x0a343d 0x00000300          0 0x03ac3d 0x09
(XEN) [15690]        0 0x08f01c 0x00000300          0 0x00601c 0x09
(XEN) [15693]        0 0x08f8ba 0x00000300          0 0x0058ba 0x09
(XEN) [15695]        0 0x08f8ae 0x00000300          0 0x0058ae 0x09
(XEN) [15716]        0 0x08f929 0x00000300          0 0x005929 0x09
(XEN) [15721]        0 0x08f021 0x00000300          0 0x006021 0x09
(XEN) [16088]        0 0x08f025 0x00000300          0 0x006025 0x09
(XEN) [16119]        0 0x08f928 0x00000300          0 0x005928 0x09
(XEN) [16121]        0 0x08f02a 0x00000300          0 0x00602a 0x09
(XEN) [16134]        0 0x08f023 0x00000300          0 0x006023 0x09
(XEN) [16144]        0 0x08f37e 0x00000001          0 0x005f7e 0x19
(XEN) [16152]        0 0x08f923 0x00000300          0 0x005923 0x09
(XEN) [16158]        0 0x08f878 0x00000300          0 0x005878 0x09
(XEN) [16192]        0 0x08f026 0x00000300          0 0x006026 0x09
(XEN) [16208]        0 0x08f028 0x00000300          0 0x006028 0x09
(XEN) [16214]        0 0x08f924 0x00000300          0 0x005924 0x09
(XEN) [16216]        0 0x08f8bb 0x00000300          0 0x0058bb 0x09
(XEN) [16221]        0 0x08f01d 0x00000300          0 0x00601d 0x09
(XEN) [16224]        0 0x08f925 0x00000300          0 0x005925 0x09
(XEN) [16226]        0 0x08f5d2 0x00000001          0 0x005dd2 0x19
(XEN) [16228]        0 0x08f206 0x00000300          0 0x005e06 0x09
(XEN) [16244]        0 0x08f9c0 0x00000300          0 0x0059c0 0x09
(XEN) [16246]        0 0x08f01e 0x00000300          0 0x00601e 0x09
(XEN) [16282]        0 0x08f927 0x00000300          0 0x005927 0x09
(XEN) [16342]        0 0x08f024 0x00000300          0 0x006024 0x09
(XEN) [16343]        0 0x08f027 0x00000300          0 0x006027 0x09
(XEN) [16383]        0 0x08f2d6 0x00000001          0 0x005ed6 0x19
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3826 (v1)
(XEN) [15680]        0 0x1c1495 0x00000003          0 0x03e695 0x19
(XEN) [15681]        0 0x1c1416 0x00000003          0 0x03e616 0x19
(XEN) [15682]        0 0x1c1497 0x00000003          0 0x03e697 0x19
(XEN) [15683]        0 0x1c17d8 0x00000003          0 0x03e5d8 0x19
(XEN) [15684]        0 0x1c14d9 0x00000003          0 0x03e6d9 0x19
(XEN) [15685]        0 0x1c141a 0x00000003          0 0x03e61a 0x19
(XEN) [15686]        0 0x1c14db 0x00000003          0 0x03e6db 0x19
(XEN) [15687]        0 0x1c145c 0x00000003          0 0x03e65c 0x19
(XEN) [15688]        0 0x1c149d 0x00000003          0 0x03e69d 0x19
(XEN) [15689]        0 0x1c17de 0x00000003          0 0x03e5de 0x19
(XEN) [15690]        0 0x1c142e 0x00000003          0 0x03e62e 0x19
(XEN) [15691]        0 0x1c146f 0x00000003          0 0x03e66f 0x19
(XEN) [15702]        0 0x1c175a 0x00000003          0 0x03e55a 0x19
(XEN) [15703]        0 0x1c1300 0x00000003          0 0x03e900 0x19
(XEN) [15704]        0 0x1c1300 0x00000003          0 0x03e900 0x19
(XEN) [15710]        0 0x1c143c 0x00000001          0 0x03e63c 0x19
(XEN) [15711]        0 0x1c153e 0x00000001          0 0x03e73e 0x19
(XEN) [16118]        0 0x1c14ea 0x00000003          0 0x03e6ea 0x19
(XEN) [16119]        0 0x1c1469 0x00000003          0 0x03e669 0x19
(XEN) [16120]        0 0x1c14a8 0x00000003          0 0x03e6a8 0x19
(XEN) [16121]        0 0x1c14a5 0x00000003          0 0x03e6a5 0x19
(XEN) [16122]        0 0x1c17e4 0x00000003          0 0x03e5e4 0x19
(XEN) [16123]        0 0x1c1231 0x00000003          0 0x03e831 0x19
(XEN) [16124]        0 0x1c1530 0x00000003          0 0x03e730 0x19
(XEN) [16127]        0 0x1c17d9 0x00000003          0 0x03e5d9 0x19
(XEN) [16128]        0 0x1c142b 0x00000003          0 0x03e62b 0x19
(XEN) [16130]        0 0x1c142c 0x00000003          0 0x03e62c 0x19
(XEN) [16132]        0 0x1c14e7 0x00000003          0 0x03e6e7 0x19
(XEN) [16133]        0 0x1c1300 0x00000003          0 0x03e900 0x19
(XEN) [16134]        0 0x1c147c 0x00000003          0 0x03e67c 0x19
(XEN) [16137]        0 0x1c14b5 0x00000003          0 0x03e6b5 0x19
(XEN) [16138]        0 0x1c1436 0x00000003          0 0x03e636 0x19
(XEN) [16145]        0 0x1c14d1 0x00000003          0 0x03e6d1 0x19
(XEN) [16155]        0 0x1c1300 0x00000003          0 0x03e900 0x19
(XEN) [16157]        0 0x1c157b 0x00000003          0 0x03e77b 0x19
(XEN) [16161]        0 0x1c1799 0x00000001          0 0x03e599 0x19
(XEN) [16167]        0 0x1c14d2 0x00000003          0 0x03e6d2 0x19
(XEN) [16169]        0 0x1c17db 0x00000003          0 0x03e5db 0x19
(XEN) [16173]        0 0x1c152d 0x00000003          0 0x03e72d 0x19
(XEN) [16174]        0 0x2dd9eb 0x00000001          0 0x005deb 0x19
(XEN) [16176]        0 0x1c14b4 0x00000003          0 0x03e6b4 0x19
(XEN) [16183]        0 0x1c1720 0x00000003          0 0x03e520 0x19
(XEN) [16187]        0 0x1c1466 0x00000003          0 0x03e666 0x19
(XEN) [16212]        0 0x1c17e1 0x00000003          0 0x03e5e1 0x19
(XEN) [16214]        0 0x2dd9ea 0x00000001          0 0x005dea 0x19
(XEN) [16218]        0 0x1c1413 0x00000003          0 0x03e613 0x19
(XEN) [16219]        0 0x1c1494 0x00000003          0 0x03e694 0x19
(XEN) [16226]        0 0x1c1473 0x00000003          0 0x03e673 0x19
(XEN) [16238]        0 0x1c1438 0x00000003          0 0x03e638 0x19
(XEN) [16243]        0 0x1c16e2 0x00000003          0 0x03e4e2 0x19
(XEN) [16257]        0 0x1c14bd 0x00000003          0 0x03e6bd 0x19
(XEN) [16258]        0 0x1c14a3 0x00000003          0 0x03e6a3 0x19
(XEN) [16259]        0 0x1c1475 0x00000003          0 0x03e675 0x19
(XEN) [16263]        0 0x1c14f7 0x00000003          0 0x03e6f7 0x19
(XEN) [16267]        0 0x1c1783 0x00000003          0 0x03e583 0x19
(XEN) [16273]        0 0x1c154f 0x00000003          0 0x03e74f 0x19
(XEN) [16274]        0 0x1c1450 0x00000003          0 0x03e650 0x19
(XEN) [16275]        0 0x1c17df 0x00000003          0 0x03e5df 0x19
(XEN) [16284]        0 0x1c1532 0x00000003          0 0x03e732 0x19
(XEN) [16296]        0 0x1c1539 0x00000003          0 0x03e739 0x19
(XEN) [16297]        0 0x1c143a 0x00000003          0 0x03e63a 0x19
(XEN) [16383]        0 0x2e4ba8 0x00000001          0 0x005fa8 0x19
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3827 (v1)
(XEN) [15676]        0 0x549a1c 0x00000003          0 0x03ea1c 0x19
(XEN) [15677]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15678]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15679]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15680]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15681]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15682]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15683]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15684]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15685]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15686]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15687]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15688]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15689]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15690]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15691]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15692]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15693]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15694]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15695]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15696]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15697]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15698]        0 0x549c9a 0x00000003          0 0x03e89a 0x19
(XEN) [15699]        0 0x549a19 0x00000003          0 0x03ea19 0x19
(XEN) [15700]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15701]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15702]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15703]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15704]        0 0x549a98 0x00000003          0 0x03ea98 0x19
(XEN) [15705]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15706]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15707]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15708]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15709]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15710]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15711]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15712]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15713]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15714]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15715]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15716]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15717]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15718]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15719]        0 0x549a94 0x00000003          0 0x03ea94 0x19
(XEN) [15720]        0 0x549d12 0x00000003          0 0x03e912 0x19
(XEN) [15721]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15722]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15723]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15724]        0 0x549a10 0x00000003          0 0x03ea10 0x19
(XEN) [15725]        0 0x549bcf 0x00000003          0 0x03ebcf 0x19
(XEN) [15726]        0 0x549a4e 0x00000003          0 0x03ea4e 0x19
(XEN) [15727]        0 0x549a4d 0x00000003          0 0x03ea4d 0x19
(XEN) [15728]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15729]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15730]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15731]        0 0x549d8c 0x00000003          0 0x03e98c 0x19
(XEN) [15732]        0 0x549b0a 0x00000003          0 0x03eb0a 0x19
(XEN) [15733]        0 0x549a89 0x00000003          0 0x03ea89 0x19
(XEN) [15734]        0 0x549a08 0x00000003          0 0x03ea08 0x19
(XEN) [15735]        0 0x549cc7 0x00000003          0 0x03e8c7 0x19
(XEN) [15736]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15737]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15738]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15739]        0 0x549d46 0x00000003          0 0x03e946 0x19
(XEN) [15740]        0 0x549d05 0x00000003          0 0x03e905 0x19
(XEN) [15741]        0 0x549dc4 0x00000003          0 0x03e9c4 0x19
(XEN) [15742]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15743]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15744]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15745]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15746]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15747]        0 0x549ac3 0x00000003          0 0x03eac3 0x19
(XEN) [15748]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15749]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15750]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15751]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15752]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15753]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15754]        0 0x549ac2 0x00000003          0 0x03eac2 0x19
(XEN) [15755]        0 0x549ac1 0x00000003          0 0x03eac1 0x19
(XEN) [15756]        0 0x549dc0 0x00000003          0 0x03e9c0 0x19
(XEN) [15757]        0 0x549bbf 0x00000003          0 0x03ebbf 0x19
(XEN) [15758]        0 0x549abe 0x00000003          0 0x03eabe 0x19
(XEN) [15759]        0 0x549a3d 0x00000003          0 0x03ea3d 0x19
(XEN) [15760]        0 0x549a7c 0x00000003          0 0x03ea7c 0x19
(XEN) [15761]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15762]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15763]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15764]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15765]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15766]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [15767]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16171]        0 0x549c62 0x00000003          0 0x03e862 0x19
(XEN) [16172]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16173]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16174]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16179]        0 0x549a67 0x00000003          0 0x03ea67 0x19
(XEN) [16180]        0 0x549a6a 0x00000003          0 0x03ea6a 0x19
(XEN) [16181]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16182]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16183]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16184]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16185]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16186]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16187]        0 0x549ab1 0x00000003          0 0x03eab1 0x19
(XEN) [16188]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16189]        0 0x549a30 0x00000003          0 0x03ea30 0x19
(XEN) [16201]        0 0x5041e6 0x00000001          0 0x005de6 0x19
(XEN) [16202]        0 0x549af8 0x00000003          0 0x03eaf8 0x19
(XEN) [16205]        0 0x549a6c 0x00000003          0 0x03ea6c 0x19
(XEN) [16206]        0 0x549a6b 0x00000003          0 0x03ea6b 0x19
(XEN) [16207]        0 0x549a13 0x00000003          0 0x03ea13 0x19
(XEN) [16209]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16210]        0 0x549a51 0x00000003          0 0x03ea51 0x19
(XEN) [16211]        0 0x549de5 0x00000003          0 0x03e9e5 0x19
(XEN) [16214]        0 0x549de8 0x00000003          0 0x03e9e8 0x19
(XEN) [16218]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16221]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16222]        0 0x549a69 0x00000003          0 0x03ea69 0x19
(XEN) [16223]        0 0x549b7a 0x00000003          0 0x03eb7a 0x19
(XEN) [16225]        0 0x549a97 0x00000003          0 0x03ea97 0x19
(XEN) [16227]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16230]        0 0x549de0 0x00000003          0 0x03e9e0 0x19
(XEN) [16234]        0 0x549de3 0x00000003          0 0x03e9e3 0x19
(XEN) [16258]        0 0x549a16 0x00000003          0 0x03ea16 0x19
(XEN) [16259]        0 0x549a55 0x00000003          0 0x03ea55 0x19
(XEN) [16260]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16267]        0 0x549b76 0x00000003          0 0x03eb76 0x19
(XEN) [16290]        0 0x549d26 0x00000003          0 0x03e926 0x19
(XEN) [16297]        0 0x549ddb 0x00000003          0 0x03e9db 0x19
(XEN) [16298]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16299]        0 0x5041e5 0x00000001          0 0x005de5 0x19
(XEN) [16302]        0 0x549a1d 0x00000003          0 0x03ea1d 0x19
(XEN) [16303]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16304]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16305]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16306]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16309]        0 0x549b77 0x00000003          0 0x03eb77 0x19
(XEN) [16312]        0 0x549b3b 0x00000003          0 0x03eb3b 0x19
(XEN) [16313]        0 0x549dde 0x00000003          0 0x03e9de 0x19
(XEN) [16316]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16319]        0 0x549ddf 0x00000003          0 0x03e9df 0x19
(XEN) [16320]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16327]        0 0x549d64 0x00000003          0 0x03e964 0x19
(XEN) [16328]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16330]        0 0x549ab5 0x00000003          0 0x03eab5 0x19
(XEN) [16331]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16333]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16335]        0 0x549b39 0x00000003          0 0x03eb39 0x19
(XEN) [16336]        0 0x549a0b 0x00000003          0 0x03ea0b 0x19
(XEN) [16340]        0 0x549a34 0x00000003          0 0x03ea34 0x19
(XEN) [16351]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16354]        0 0x549a00 0x00000003          0 0x03ea00 0x19
(XEN) [16369]        0 0x549de1 0x00000003          0 0x03e9e1 0x19
(XEN) [16383]        0 0x526786 0x00000001          0 0x005f86 0x19
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3828 (v1)
(XEN) [16204]        0 0x41793a 0x00000001          0 0x005f3a 0x19
(XEN) [16205]        0 0x417b82 0x00000001          0 0x005d82 0x19
(XEN) [16383]        0 0x41760c 0x00000001          0 0x00600c 0x19
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3829 ... no active grant table entries
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3830 ... no active grant table entries
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3831 ... no active grant table entries
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain: 3832 ... no active grant table entries
(XEN) gnttab_usage_print_all ] done
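
(For anyone else reading these dumps: the "flags" column decodes against the GTF_* bits in xen/include/public/grant_table.h -- the bit values below are from a 4.0-era tree, so please re-check against whatever tree produced this log. Every 0x19 entry above works out to a permit_access grant currently pinned for both reading and writing; the "pin" column is Xen's internal map count, not a flags field. A trivial decoder, just as a sketch:)

/* Decode the "flags" column of the gnttab dump above.
 * GTF_* values as in xen/include/public/grant_table.h (Xen 4.0-era). */
#include <stdio.h>

#define GTF_type_mask       0x3
#define GTF_permit_access   0x1
#define GTF_accept_transfer 0x2
#define GTF_readonly        (1 << 2)
#define GTF_reading         (1 << 3)
#define GTF_writing         (1 << 4)

static void decode_gtf(unsigned int f)
{
    printf("0x%02x:%s%s%s%s\n", f,
           (f & GTF_type_mask) == GTF_permit_access   ? " permit_access"   :
           (f & GTF_type_mask) == GTF_accept_transfer ? " accept_transfer" :
                                                        " invalid",
           (f & GTF_readonly) ? " readonly" : "",
           (f & GTF_reading)  ? " reading"  : "",
           (f & GTF_writing)  ? " writing"  : "");
}

int main(void)
{
    decode_gtf(0x19); /* prints: 0x19: permit_access reading writing */
    return 0;
}
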
(XEN) [i: dump interrupt bindings]
(XEN) Guest interrupt information:
(XEN)    IRQ:   0 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:f0 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   1 affinity:00000000,00000000,00000000,00000001 vec:28 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  1(-S--),
(XEN)    IRQ:   2 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:e2 type=XT-PIC          status=00000000 mapped, unbound
(XEN)    IRQ:   3 affinity:00000000,00000000,00000000,00000001 vec:30 type=IO-APIC-edge    status=00000006 mapped, unbound
(XEN)    IRQ:   4 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:f1 type=IO-APIC-edge    status=00000000 mapped, unbound
(XEN)    IRQ:   5 affinity:00000000,00000000,00000000,00000001 vec:38 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   6 affinity:00000000,00000000,00000000,00000001 vec:40 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   7 affinity:00000000,00000000,00000000,00000001 vec:48 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:   8 affinity:00000000,00000000,00000000,00000001 vec:50 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0:  8(-S--),
(XEN)    IRQ:   9 affinity:00000000,00000000,00000000,00000001 vec:58 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0:  9(-S--),
(XEN)    IRQ:  10 affinity:00000000,00000000,00000000,00000001 vec:60 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  11 affinity:00000000,00000000,00000000,00000001 vec:68 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  12 affinity:00000000,00000000,00000000,00000001 vec:70 type=IO-APIC-edge    status=00000010 in-flight=0 domain-list=0: 12(-S--),
(XEN)    IRQ:  13 affinity:00000000,00000000,00000000,00000001 vec:78 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  14 affinity:00000000,00000000,00000000,00000001 vec:88 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  15 affinity:00000000,00000000,00000000,00000001 vec:90 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  16 affinity:00000000,00000000,00000000,00000001 vec:98 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 16(-S--),
(XEN)    IRQ:  18 affinity:00000000,00000000,00000000,00000001 vec:a0 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 18(-S--),
(XEN)    IRQ:  19 affinity:00000000,00000000,00000000,00000001 vec:a8 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 19(-S--),
(XEN)    IRQ:  22 affinity:00000000,00000000,00000000,00000001 vec:b0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  23 affinity:00000000,00000000,00000000,00000001 vec:b8 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 23(-S--),
(XEN)    IRQ:  24 affinity:00000000,00000000,00000000,00000001 vec:c0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  25 affinity:00000000,00000000,00000000,00000001 vec:c8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  26 affinity:00000000,00000000,00000000,00000001 vec:d0 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  27 affinity:00000000,00000000,00000000,00000001 vec:d8 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  28 affinity:00000000,00000000,00000000,00000001 vec:21 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  29 affinity:00000000,00000000,00000000,00000001 vec:29 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  30 affinity:00000000,00000000,00000000,00000001 vec:31 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  31 affinity:00000000,00000000,00000000,00000001 vec:39 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  32 affinity:00000000,00000000,00000000,00000001 vec:41 type=IO-APIC-level   status=00000010 in-flight=0 domain-list=0: 32(-S--),
(XEN)    IRQ:  33 affinity:00000000,00000000,00000000,00000001 vec:49 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  34 affinity:00000000,00000000,00000000,00000001 vec:51 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  35 affinity:00000000,00000000,00000000,00000001 vec:59 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  36 affinity:00000000,00000000,00000000,00000001 vec:61 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  37 affinity:00000000,00000000,00000000,00000001 vec:69 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  38 affinity:00000000,00000000,00000000,00000001 vec:71 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  39 affinity:00000000,00000000,00000000,00000001 vec:79 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  40 affinity:00000000,00000000,00000000,00000001 vec:81 type=IO-APIC-level   status=00000002 mapped, unbound
(XEN)    IRQ:  41 affinity:00000000,00000000,00000000,00000001 vec:89 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  42 affinity:00000000,00000000,00000000,00000001 vec:91 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  43 affinity:00000000,00000000,00000000,00000001 vec:99 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  44 affinity:00000000,00000000,00000000,00000001 vec:a1 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  45 affinity:00000000,00000000,00000000,00000001 vec:a9 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  46 affinity:00000000,00000000,00000000,00000001 vec:b1 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  47 affinity:00000000,00000000,00000000,00000001 vec:b9 type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN)    IRQ:  48 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:c1 type=PCI-MSI         status=00000002 mapped, unbound
(XEN)    IRQ:  49 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:c9 type=PCI-MSI         status=00000002 mapped, unbound
(XEN)    IRQ:  50 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:d1 type=PCI-MSI         status=00000002 mapped, unbound
(XEN)    IRQ:  51 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:d9 type=PCI-MSI         status=00000002 mapped, unbound
(XEN)    IRQ:  52 affinity:00000000,00000000,00000000,00000001 vec:52 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:299(PS--),
(XEN)    IRQ:  53 affinity:00000000,00000000,00000000,00000001 vec:5a type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:298(-S--),
(XEN)    IRQ:  54 affinity:00000000,00000000,00000000,00000001 vec:62 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:297(PS--),
(XEN)    IRQ:  55 affinity:00000000,00000000,00000000,00000001 vec:6a type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:296(-S--),
(XEN)    IRQ:  56 affinity:00000000,00000000,00000000,00000001 vec:72 type=PCI-MSI         status=00000010 in-flight=0 domain-list=0:295(-S--),
(XEN)    IRQ:  57 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:7a type=PCI-MSI         status=00000002 mapped, unbound
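
(The status field in the list above is Xen's _IRQ_* bitmask -- values below are again from a 4.0-era xen/include/xen/irq.h, so treat them as an assumption: 0x02 is IRQ_DISABLED, hence "mapped, unbound"; 0x10 is IRQ_GUEST, which is why those lines carry a domain-list; IRQ 3's 0x06 is DISABLED|PENDING. A quick decoder sketch under that assumption:)

/* Decode the "status=XXXXXXXX" field of the IRQ dump above.
 * Bit values as in xen/include/xen/irq.h circa Xen 4.0 (assumption). */
#include <stdio.h>

#define IRQ_INPROGRESS  1   /* handler active - do not enter      */
#define IRQ_DISABLED    2   /* disabled - do not enter            */
#define IRQ_PENDING     4   /* pending - replay on enable         */
#define IRQ_REPLAY      8   /* replayed but not acked yet         */
#define IRQ_GUEST      16   /* handled by guest OS(es)            */

static void decode_irq_status(unsigned int s)
{
    printf("0x%08x:%s%s%s%s%s\n", s,
           s & IRQ_INPROGRESS ? " in-progress" : "",
           s & IRQ_DISABLED   ? " disabled"    : "",
           s & IRQ_PENDING    ? " pending"     : "",
           s & IRQ_REPLAY     ? " replay"      : "",
           s & IRQ_GUEST      ? " guest-bound" : "");
}

int main(void)
{
    decode_irq_status(0x00000002); /* the "mapped, unbound" lines      */
    decode_irq_status(0x00000010); /* the lines with a domain-list     */
    decode_irq_status(0x00000006); /* IRQ 3: disabled with one pending */
    return 0;
}
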
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vector=240, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  1 Vec 40:
(XEN)       Apic 0x00, Pin  1: vector=40, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  3 Vec 48:
(XEN)       Apic 0x00, Pin  3: vector=48, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=1, dest_id:0
(XEN)     IRQ  4 Vec241:
(XEN)       Apic 0x00, Pin  4: vector=241, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  5 Vec 56:
(XEN)       Apic 0x00, Pin  5: vector=56, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  6 Vec 64:
(XEN)       Apic 0x00, Pin  6: vector=64, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  7 Vec 72:
(XEN)       Apic 0x00, Pin  7: vector=72, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  8 Vec 80:
(XEN)       Apic 0x00, Pin  8: vector=80, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ  9 Vec 88:
(XEN)       Apic 0x00, Pin  9: vector=88, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=level, mask=0, dest_id:0
(XEN)     IRQ 10 Vec 96:
(XEN)       Apic 0x00, Pin 10: vector=96, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 11 Vec104:
(XEN)       Apic 0x00, Pin 11: vector=104, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 12 Vec112:
(XEN)       Apic 0x00, Pin 12: vector=112, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 13 Vec120:
(XEN)       Apic 0x00, Pin 13: vector=120, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 14 Vec136:
(XEN)       Apic 0x00, Pin 14: vector=136, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 15 Vec144:
(XEN)       Apic 0x00, Pin 15: vector=144, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 16 Vec152:
(XEN)       Apic 0x00, Pin 16: vector=152, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=0, dest_id:0
(XEN)     IRQ 18 Vec160:
(XEN)       Apic 0x00, Pin 18: vector=160, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=0, dest_id:0
(XEN)     IRQ 19 Vec168:
(XEN)       Apic 0x00, Pin 19: vector=168, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=0, dest_id:0
(XEN)     IRQ 22 Vec176:
(XEN)       Apic 0x00, Pin 22: vector=176, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 23 Vec184:
(XEN)       Apic 0x00, Pin 23: vector=184, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=0, dest_id:0
(XEN)     IRQ 24 Vec192:
(XEN)       Apic 0x01, Pin  0: vector=192, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 25 Vec200:
(XEN)       Apic 0x01, Pin  1: vector=200, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 26 Vec208:
(XEN)       Apic 0x01, Pin  2: vector=208, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 27 Vec216:
(XEN)       Apic 0x01, Pin  3: vector=216, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 28 Vec 33:
(XEN)       Apic 0x01, Pin  4: vector=33, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=1, dest_id:0
(XEN)     IRQ 29 Vec 41:
(XEN)       Apic 0x01, Pin  5: vector=41, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 30 Vec 49:
(XEN)       Apic 0x01, Pin  6: vector=49, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 31 Vec 57:
(XEN)       Apic 0x01, Pin  7: vector=57, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 32 Vec 65:
(XEN)       Apic 0x01, Pin  8: vector=65, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=0, dest_id:0
(XEN)     IRQ 33 Vec 73:
(XEN)       Apic 0x01, Pin  9: vector=73, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 34 Vec 81:
(XEN)       Apic 0x01, Pin 10: vector=81, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 35 Vec 89:
(XEN)       Apic 0x01, Pin 11: vector=89, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 36 Vec 97:
(XEN)       Apic 0x01, Pin 12: vector=97, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 37 Vec105:
(XEN)       Apic 0x01, Pin 13: vector=105, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 38 Vec113:
(XEN)       Apic 0x01, Pin 14: vector=113, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 39 Vec121:
(XEN)       Apic 0x01, Pin 15: vector=121, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 40 Vec129:
(XEN)       Apic 0x01, Pin 16: vector=129, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=1, irr=0, trigger=level, mask=1, dest_id:0
(XEN)     IRQ 41 Vec137:
(XEN)       Apic 0x01, Pin 17: vector=137, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 42 Vec145:
(XEN)       Apic 0x01, Pin 18: vector=145, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 43 Vec153:
(XEN)       Apic 0x01, Pin 19: vector=153, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 44 Vec161:
(XEN)       Apic 0x01, Pin 20: vector=161, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 45 Vec169:
(XEN)       Apic 0x01, Pin 21: vector=169, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 46 Vec177:
(XEN)       Apic 0x01, Pin 22: vector=177, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN)     IRQ 47 Vec185:
(XEN)       Apic 0x01, Pin 23: vector=185, delivery_mode=0, dest_mode=physical, delivery_status=0, polarity=0, irr=0, trigger=edge, mask=0, dest_id:0
(XEN) [m: memory info]
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64480kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 524288kB free
(XEN)     heap[18]: 1048576kB free
(XEN)     DMA heap: 2030560kB free
(XEN)     heap[19]: 21712kB free
(XEN)     heap[20]: 17388kB free
(XEN)     heap[21]: 2184436kB free
(XEN)     heap[22]: 2762096kB free
(XEN)     Dom heap: 4985632kB free
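
(Sanity check on the allocator state, if I am reading the keyhandler output right: the zone subtotals add up -- heap[14..18] = 64480 + 131072 + 262144 + 524288 + 1048576 = 2030560kB matches the "DMA heap" line, and heap[19..22] = 21712 + 17388 + 2184436 + 2762096 = 4985632kB matches the "Dom heap" line -- so the box still has about 6.7GB free in total.)
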
(XEN) [n: NMI statistics]
(XEN) CPU       NMI
(XEN)   0         0
(XEN)   1         0
(XEN)   2         0
(XEN)   3         0
(XEN)   4         0
(XEN)   5         0
(XEN)   6         0
(XEN)   7         0
(XEN)   8         0
(XEN)   9         0
(XEN)  10         0
(XEN)  11         0
(XEN)  12         0
(XEN)  13         0
(XEN)  14         0
(XEN)  15         0
(XEN)  16         0
(XEN)  17         0
(XEN)  18         0
(XEN)  19         0
(XEN)  20         0
(XEN)  21         0
(XEN)  22         0
(XEN)  23         0
(XEN)  24         0
(XEN)  25         0
(XEN)  26         0
(XEN)  27         0
(XEN)  28         0
(XEN)  29         0
(XEN)  30         0
(XEN)  31         0
(XEN)  32         0
(XEN)  33         0
(XEN)  34         0
(XEN)  35         0
(XEN)  36         0
(XEN)  37         0
(XEN)  38         0
(XEN)  39         0
(XEN)  40         0
(XEN)  41         0
(XEN)  42         0
(XEN)  43         0
(XEN)  44         0
(XEN)  45         0
(XEN)  46         0
(XEN)  47         0
(XEN)  48         0
(XEN)  49         0
(XEN)  50         0
(XEN)  51         0
(XEN)  52         0
(XEN)  53         0
(XEN)  54         0
(XEN)  55         0
(XEN)  56         0
(XEN)  57         0
(XEN)  58         0
(XEN)  59         0
(XEN)  60         0
(XEN)  61         0
(XEN)  62         0
(XEN)  63         0
(XEN)  64         0
(XEN)  65         0
(XEN)  66         0
(XEN)  67         0
(XEN)  68         0
(XEN)  69         0
(XEN)  70         0
(XEN)  71         0
(XEN)  72         0
(XEN)  73         0
(XEN)  74         0
(XEN)  75         0
(XEN)  76         0
(XEN)  77         0
(XEN)  78         0
(XEN)  79         0
(XEN)  80         0
(XEN)  81         0
(XEN)  82         0
(XEN)  83         0
(XEN)  84         0
(XEN)  85         0
(XEN)  86         0
(XEN)  87         0
(XEN)  88         0
(XEN)  89         0
(XEN)  90         0
(XEN)  91         0
(XEN)  92         0
(XEN)  93         0
(XEN)  94         0
(XEN)  95         0
(XEN)  96         0
(XEN)  97         0
(XEN)  98         0
(XEN)  99         0
(XEN) 100         0
(XEN) 101         0
(XEN) 102         0
(XEN) 103         0
(XEN) 104         0
(XEN) 105         0
(XEN) 106         0
(XEN) 107         0
(XEN) 108         0
(XEN) 109         0
(XEN) 110         0
(XEN) 111         0
(XEN) 112         0
(XEN) 113         0
(XEN) 114         0
(XEN) 115         0
(XEN) 116         0
(XEN) 117         0
(XEN) 118         0
(XEN) 119         0
(XEN) 120         0
(XEN) 121         0
(XEN) 122         0
(XEN) 123         0
(XEN) 124         0
(XEN) 125         0
(XEN) 126         0
(XEN) 127         0
(XEN) dom0 vcpu0: NMI neither pending nor masked
(XEN) [q: dump domain (and guest debug) info]
(XEN) 'q' pressed -> dumping domain info (now=0x2F2A:6F677A19)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 nr_pages=1807470 xenheap_pages=6 dirty_cpus={0-3} max_pages=4294967295
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     Interrupts { 0-303 }
(XEN)     I/O Memory { 0-febff, fec01-fec89, fec8b-fedff, fee01-ffffffffffffffff }
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-807, 80c-cfb, d00-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 000000000023fe36: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000023fe35: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000023fe34: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000023fe33: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf778: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 00000000002fdeb3: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=T] flags=0 poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={0} cpu_affinity={0}
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 4, stat 0/0/-1)
(XEN)     VCPU1: CPU1 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 01 dirty_cpus={1} cpu_affinity={1}
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 9, stat 0/0/0)
(XEN)     VCPU2: CPU2 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={2}
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 14, stat 0/0/0)
(XEN)     VCPU3: CPU3 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={3}
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 19, stat 0/0/0)
(XEN) General information for domain 435:
(XEN)     refcnt=3 dying=0 nr_pages=263127 xenheap_pages=34 dirty_cpus={} max_pages=263168
(XEN)     handle=9303a40a-75d5-1b0e-5492-89a669ac0646 vm_assist=00000000
(XEN)     paging assistance: hap refcounts log_dirty translate external
(XEN) Rangesets belonging to domain 435:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 435:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 0000000000429fd6: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000546074: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000546064: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000538871: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf4a9: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005ace88: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004ffc32: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005cdd53: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005cdd52: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005cddeb: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005cddea: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004f7e81: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004f7e80: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005cdc91: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005cdc90: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000508ca5: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000508ca4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043f57d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043f57c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043f205: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043f204: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043f20f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043f20e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004c86a3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004c86a2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000555a03: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000555a02: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000555a55: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000555a54: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000555a5b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000555a5a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055dc2f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055dc2e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055dc5f: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 435:
(XEN)     VCPU0: CPU10 [has=F] flags=4 poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/-1)
(XEN)     VCPU1: CPU7 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 2378:
(XEN)     refcnt=3 dying=0 nr_pages=263127 xenheap_pages=34 dirty_cpus={} max_pages=263168
(XEN)     handle=f4eaecf0-c5c8-1cd1-05d4-b1f7dc352022 vm_assist=00000000
(XEN)     paging assistance: hap refcounts log_dirty translate external
(XEN) Rangesets belonging to domain 2378:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 2378:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 00000000002df9ce: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000002df9cd: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000002df9cc: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000002df9cb: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf501: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000002dfee5: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000436793: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000436792: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000436791: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000436790: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043679f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043679e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043679d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043679c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102c3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102c2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102c1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102c0: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102bf: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102be: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102bd: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004102bc: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000564143: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000564142: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000564141: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000564140: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055d67b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055d67a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055d679: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055d678: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f49d3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f49d2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f49d1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f49d0: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 2378:
(XEN)     VCPU0: CPU9 [has=F] flags=4 poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/-1)
(XEN)     VCPU1: CPU6 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 3825:
(XEN)     refcnt=3 dying=0 nr_pages=263127 xenheap_pages=34 dirty_cpus={} max_pages=263168
(XEN)     handle=09181273-5c37-eddb-83d5-572f8bbc3b62 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 3825:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 3825:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 0000000000512823: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000512822: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000047a48b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000047a48a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf4ad: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000423df2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f7: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f6: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f5: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004536f0: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000042729f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000042729e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000042729d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000042729c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000042729b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000042729a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427299: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427298: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427297: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427296: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427295: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427294: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427293: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427292: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427291: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000427290: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f1e9f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f1e9e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f1e9d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005f1e9c: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 3825:
(XEN)     VCPU0: CPU8 [has=F] flags=4 poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/-1)
(XEN)     VCPU1: CPU4 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 3826:
(XEN)     refcnt=3 dying=0 nr_pages=263127 xenheap_pages=34 dirty_cpus={} max_pages=263168
(XEN)     handle=614ddd14-a7a2-1c8b-5a61-b963fa3d201d vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 3826:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 3826:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 00000000005a30a3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005b062f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005ab87c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000055fea4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf517: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004d0181: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea4c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea4b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea4a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea49: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea48: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea47: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea46: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea45: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea44: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea43: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea42: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea41: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033ea40: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e63f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e63e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e63d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e63c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e63b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e63a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e639: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e638: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e637: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e636: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e635: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e634: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e633: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e632: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e631: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 3826:
(XEN)     VCPU0: CPU15 [has=F] flags=4 poll=0 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/-1)
(XEN)     VCPU1: CPU14 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 3827:
(XEN)     refcnt=3 dying=0 nr_pages=263127 xenheap_pages=34 dirty_cpus={} max_pages=263168
(XEN)     handle=d0f7e6cc-2e28-1301-dc9e-40038a4df84b vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 3827:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 3827:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 000000000049fe89: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000049fe88: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000522677: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000522676: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf4a1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000529ec8: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e781: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e780: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e93f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e93e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e93d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e93c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e93b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e93a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e939: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e938: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e937: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e936: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e935: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e934: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e933: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e932: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e931: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e930: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e92f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e92e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e92d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e92c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e92b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e92a: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e929: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e928: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e927: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e926: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 3827:
(XEN)     VCPU0: CPU5 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN)     VCPU1: CPU11 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 3828:
(XEN)     refcnt=3 dying=0 nr_pages=263127 xenheap_pages=34 dirty_cpus={} max_pages=263168
(XEN)     handle=68ffca84-2975-8a79-d4aa-f6ddb83e7cbf vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 3828:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 3828:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 0000000000601e91: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000601e90: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005526ff: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000005526fe: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf4d7: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000496618: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e485: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e484: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e483: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e482: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e481: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e480: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6bf: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6be: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6bd: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6bc: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6bb: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6ba: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b9: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b8: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b7: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b6: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b5: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6b0: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6af: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6ae: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6ad: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6ac: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6ab: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000033e6aa: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 3828:
(XEN)     VCPU0: CPU13 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN)     VCPU1: CPU12 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 3 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 3829:
(XEN)     refcnt=3 dying=0 nr_pages=263160 xenheap_pages=6 dirty_cpus={} max_pages=263168
(XEN)     handle=cf8b4462-8dc4-c0c6-e14a-cc5dc2161e89 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 3829:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 3829:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 00000000002f939c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000219d13: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000219d12: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000219d11: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf51b: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000002cb61b: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 3829:
(XEN)     VCPU0: CPU7 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN)     VCPU1: CPU4 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) General information for domain 3830:
(XEN)     refcnt=3 dying=0 nr_pages=263160 xenheap_pages=6 dirty_cpus={} max_pages=263168
(XEN)     handle=83baf70f-e6be-6c36-505f-6334c124a916 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 3830:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 3830:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 00000000005956e8: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000045d4bf: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000045d4be: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000045d4bd: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bf4b3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000543b23: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 3830:
(XEN)     VCPU0: CPU4 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN)     VCPU1: CPU11 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={4-15}

[dom0 kernel event-channel debug output, printed concurrently on the same serial console:]

vcpu 1
  0: masked=0 pending=1 event_sel 00000001
  1: masked=0 pending=1 event_sel 00000001
  2: masked=1 pending=1 event_sel 00000001
  3: masked=1 pending=1 event_sel 00000001

pending:
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 14000108d6

masks:
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffff7252ac 8128400000084201

unmasked:
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 54000108d6

pending list:
  0: event 1 -> irq 847
  0: event 2 -> irq 846
  0: event 4 -> irq 844
  1: event 6 -> irq 842
  1: event 7 -> irq 841
  2: event 11 -> irq 837
  3: event 16 -> irq 832
  0: event 34 -> irq 819
  0: event 36 -> irq 817
  0: event 37 -> irq 816
  0: event 38 -> irq 815

vcpu 2
  0: masked=0 pending=1 event_sel 00000001
  1: masked=1 pending=0 event_sel 00000000
  2: masked=0 pending=1 event_sel 00000001
  3: masked=1 pending=1 event_sel 00000001

pending:
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   00000000 00000000 00000000 00000000 00000000 00000000 00000000 7400010816

masks:
   ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
   (XEN)     paging assistance: ffffffffffffffffhap, 1 levels
 (XEN)     No periodic timer
ffffffffffffffff(XEN)     Notifying guest (virq 1, port 0, stat 0/0/0)
 (XEN) General information for domain 3831:
ffffffffffffffff(XEN)     refcnt=3 dying=0 nr_pages=263160 xenheap_pages=6 dirty_cpus={} max_pages=263168
 ffffffffffffffff(XEN)     handle=05bccb09-05a0-7204-fc3c-661b81a10ae0 vm_assist=00000000
 (XEN)     paging assistance: ffffffffffffffffhap refcounts  translate external ffffffffffffffff
(XEN) Rangesets belonging to domain 3831:
 (XEN)     Interrupts {ffffffffffffffff }
(XEN)      I/O Memory {ffffffffffffffff }
(XEN)     
I/O Ports  {    }
(XEN) Memory pages belonging to domain 3831:
ffffffffffffffff(XEN)     DomPage list too long to display
 (XEN)     PoD entries=0 cachesize=0
ffffffffffffffff(XEN)     XenPage 000000000033e683: caf=c000000000000001, taf=7400000000000001
 (XEN)     XenPage 000000000033e682: caf=c000000000000001, taf=7400000000000001
ffffffffffffffff(XEN)     XenPage 000000000033e681: caf=c000000000000001, taf=7400000000000001
 (XEN)     XenPage 000000000033e680: caf=c000000000000001, taf=7400000000000001
ffffffffffffffff(XEN)     XenPage 00000000000bd643: caf=c000000000000001, taf=7400000000000001
 (XEN)     XenPage 00000000002ee93a: caf=c000000000000001, taf=7400000000000001
ffffffffffffffff(XEN) VCPU information and callbacks for domain 3831:
 ffffffffffffffff(XEN)     VCPU0: CPU8 [has=F] flags=4 poll=0 upcall_pend = 00, upcall_mask = 00  dirty_cpus={} ffffffffffffffffcpu_affinity={4-15}
 (XEN)     paging assistance: ffffffffffffffffhap, 1 levels

(XEN)     No periodic timer
   (XEN)     Notifying guest (virq 1, port 0, stat 0/0/0)
ffffffffffffffff(XEN)     VCPU1: CPU15 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00  dirty_cpus={} cpu_affinity={4-15}
ffffffffffffffff(XEN)     paging assistance: hap, 1 levels
 (XEN)     No periodic timer
ffffffffffffffff(XEN)     Notifying guest (virq 1, port 0, stat 0/0/0)
 ffffffffffffffff(XEN) General information for domain 3832:
 (XEN)     refcnt=3 dying=0 nr_pages=262116 xenheap_pages=6 dirty_cpus={} max_pages=263168
ffffffffffffffff(XEN)     handle=b463a1ad-f81d-0ca0-2d70-71b368e049b4 vm_assist=00000000
 (XEN)     paging assistance: ffffffffffffffffhap refcounts translate  external 
(XEN) Rangesets belonging to domain 3832:
ffffffffffffffff(XEN)      Interrupts { }
ffffffffffffffff(XEN)     I/O Memory {
 }
(XEN)        I/O Ports  { }ffffffffffffffff
 (XEN) Memory pages belonging to domain 3832:
(XEN)     DomPage list too long to display
ffffffffffffffff(XEN)     PoD entries=0 cachesize=0
 (XEN)     XenPage 00000000002c8981: caf=c000000000000001, taf=7400000000000001
ffffffffffffffff(XEN)     XenPage 000000000033eb4c: caf=c000000000000001, taf=7400000000000001
 (XEN)     XenPage 00000000002f9c11: caf=c000000000000001, taf=7400000000000001
ffffffffffffffff(XEN)     XenPage 000000000021c4c6: caf=c000000000000001, taf=7400000000000001
 (XEN)     XenPage 00000000000bf4af: caf=c000000000000001, taf=7400000000000001
ffffffffffffffff(XEN)     XenPage 00000000002e3822: caf=c000000000000001, taf=7400000000000001
 (XEN) VCPU information and callbacks for domain 3832:
ffffffffffffffff(XEN)     VCPU0: CPU15 [has=F] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00  dirty_cpus={} ffffffffffffffffcpu_affinity={4-15}
 (XEN)     paging assistance: ffffffffffffffffhap, 1 levels

(XEN)     No periodic timer
   (XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
ffffffffffffffff(XEN)     VCPU1: CPU4 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00  dirty_cpus={} ffffffffffffffffcpu_affinity={4-15}
(XEN)     paging assistance:  hap, 1 levels
ffffffffffffffff(XEN)     No periodic timer
 (XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
ffffffffffffffff(XEN) [r: dump run queues]
 (XEN) Scheduler: SMP Credit Scheduler (credit)
ffffffffffffffff(XEN) info:
(XEN)   ncpus              = 16
(XEN)   master             = 0
(XEN)   credit             = 4800
(XEN)   credit balance     = -1200
(XEN)   weight             = 256
(XEN)   runq_sort          = 1707609
(XEN)   default-weight     = 256
(XEN)   msecs per tick     = 10ms
(XEN)   credits per msec   = 10
(XEN)   ticks per tslice   = 3
(XEN)   ticks per acct     = 3
(XEN)   migration delay    = 0us
(XEN) idlers: 00000000,00000000,00000000,0000fff0
(XEN) active vcpus:
(XEN)     1: [0.3] pri=-2 flags=0 cpu=3 credit=-76972 [w=256]
(XEN)     2: [0.2] pri=-2 flags=0 cpu=2 credit=-77034 [w=256]
(XEN)     3: [0.0] pri=-2 flags=0 cpu=0 credit=-303 [w=256]
(XEN)     4: [0.1] pri=-2 flags=0 cpu=1 credit=-77206 [w=256]
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=0x00002F2B25BD97FD
(XEN) CPU[00]  sort=1707608, sibling=00000000,00000000,00000000,00000101, core=00000000,00000000,00000000,00005555
(XEN)   run: [0.0] pri=-2 flags=0 cpu=0 credit=-303 [w=256]
(XEN)     1: [32767.0] pri=-64 flags=0 cpu=0
(XEN) CPU[01]  sort=1707609, sibling=00000000,00000000,00000000,00000202, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [0.1] pri=-2 flags=0 cpu=1 credit=-77606 [w=256]
(XEN)     1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02]  sort=1707609, sibling=00000000,00000000,00000000,00000404, core=00000000,00000000,00000000,00005555
(XEN)   run: [0.2] pri=-2 flags=0 cpu=2 credit=-77850 [w=256]
(XEN)     1: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03]  sort=1707609, sibling=00000000,00000000,00000000,00000808, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [0.3] pri=-2 flags=0 cpu=3 credit=-78040 [w=256]
(XEN)     1: [32767.3] pri=-64 flags=0 cpu=3
(XEN) CPU[04]  sort=1664084, sibling=00000000,00000000,00000000,00001010, core=00000000,00000000,00000000,00005555
(XEN)   run: [32767.4] pri=-64 flags=0 cpu=4
(XEN) CPU[05]  sort=1617618, sibling=00000000,00000000,00000000,00002020, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [32767.5] pri=-64 flags=0 cpu=5
(XEN) CPU[06]  sort=1703968, sibling=00000000,00000000,00000000,00004040, core=00000000,00000000,00000000,00005555
(XEN)   run: [32767.6] pri=-64 flags=0 cpu=6
(XEN) CPU[07]  sort=1617620, sibling=00000000,00000000,00000000,00008080, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [32767.7] pri=-64 flags=0 cpu=7
(XEN) CPU[08]  sort=1617619, sibling=00000000,00000000,00000000,00000101, core=00000000,00000000,00000000,00005555
(XEN)   run: [32767.8] pri=-64 flags=0 cpu=8
(XEN) CPU[09]  sort=1617604, sibling=00000000,00000000,00000000,00000202, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [32767.9] pri=-64 flags=0 cpu=9
(XEN) CPU[10]  sort=1617605, sibling=00000000,00000000,00000000,00000404, core=00000000,00000000,00000000,00005555
(XEN)   run: [32767.10] pri=-64 flags=0 cpu=10
(XEN) CPU[11]  sort=1617619, sibling=00000000,00000000,00000000,00000808, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [32767.11] pri=-64 flags=0 cpu=11
(XEN) CPU[12]  sort=1703968, sibling=00000000,00000000,00000000,00001010, core=00000000,00000000,00000000,00005555
(XEN)   run: [32767.12] pri=-64 flags=0 cpu=12
(XEN) CPU[13]  sort=1648280, sibling=00000000,00000000,00000000,00002020, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [32767.13] pri=-64 flags=0 cpu=13
(XEN) CPU[14]  sort=1657568, sibling=00000000,00000000,00000000,00004040, core=00000000,00000000,00000000,00005555
(XEN)   run: [32767.14] pri=-64 flags=0 cpu=14
(XEN) CPU[15]  sort=1617603, sibling=00000000,00000000,00000000,00008080, core=00000000,00000000,00000000,0000aaaa
(XEN)   run: [32767.15] pri=-64 flags=0 cpu=15
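
(An aside on reading the 'r' dump above: pri=-2 should correspond to the credit scheduler's "over" priority, i.e. the vcpu has exhausted its credit, while pri=-64 is the idle vcpu, so dom0's vcpus sitting at credit -77000 and below are the ones to look at. To pull such vcpus out of a long dump I use a small throwaway Python sketch like the following; it is just a log scraper of mine, not part of the Xen tools, and the regex simply mirrors the line format printed above.)

import re
import sys

# Matches credit-scheduler vcpu lines from the 'r' debug-key dump, e.g.
#   (XEN)   run: [0.1] pri=-2 flags=0 cpu=1 credit=-77606 [w=256]
VCPU_RE = re.compile(
    r"\[(?P<dom>\d+)\.(?P<vcpu>\d+)\] pri=(?P<pri>-?\d+).*?"
    r"cpu=(?P<cpu>\d+)\s+credit=(?P<credit>-?\d+)"
)

def scan(lines):
    for line in lines:
        m = VCPU_RE.search(line)
        if m:  # idle vcpus print no credit= field and are skipped
            yield (int(m.group("dom")), int(m.group("vcpu")),
                   int(m.group("cpu")), int(m.group("credit")))

# Sort by credit so the most overdrawn vcpus come first.
for dom, vcpu, cpu, credit in sorted(scan(sys.stdin), key=lambda t: t[3]):
    print(f"dom{dom} vcpu{vcpu} on cpu{cpu}: credit={credit}")

(Fed the dump above, it lists [0.3] at -78040 first.)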
(XEN) [s: dump softtsc stats]
(XEN) TSC marked as reliable, warp = 0 (count=3)
(XEN) dom435(hvm): mode=0,ofs=0x5b4fbe14fb6,khz=2400120,inc=1
(XEN) dom2378(hvm): mode=0,ofs=0x1bfafa4cb283,khz=2400120,inc=1
(XEN) dom3825(hvm): mode=0,ofs=0x2ca028f5ee8d,khz=2400120,inc=1
(XEN) dom3826(hvm): mode=0,ofs=0x2ca3d7554fa9,khz=2400120,inc=1
(XEN) dom3827(hvm): mode=0,ofs=0x2ca44bcdefd8,khz=2400120,inc=1
(XEN) dom3828(hvm): mode=0,ofs=0x2ca488c181b0,khz=2400120,inc=1
(XEN) dom3829(hvm): mode=0,ofs=0x2caac264291a,khz=2400120,inc=1
(XEN) dom3830(hvm): mode=0,ofs=0x2cac6c0e48a0,khz=2400120,inc=1
(XEN) dom3831(hvm): mode=0,ofs=0x2cad0ea66c7f,khz=2400120,inc=1
(XEN) dom3832(hvm): mode=0,ofs=0x2caed7bc3cb5,khz=2400120,inc=1
(XEN) No domains have emulated TSC
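
(For cross-checking the 's' output: with mode=0 the ofs values should be raw TSC cycle counts, so at khz=2400120 they convert straight to seconds, roughly how far into host uptime each domain's TSC was initialised. A tiny Python sketch of the conversion, assuming exactly the line format printed above:)

import re
import sys

# Matches softtsc lines from the 's' debug-key dump, e.g.
#   (XEN) dom3825(hvm): mode=0,ofs=0x2ca028f5ee8d,khz=2400120,inc=1
TSC_RE = re.compile(
    r"dom(?P<dom>\d+)\(hvm\): mode=\d+,ofs=0x(?P<ofs>[0-9a-f]+),"
    r"khz=(?P<khz>\d+),inc=\d+"
)

for line in sys.stdin:
    m = TSC_RE.search(line)
    if m:
        cycles = int(m.group("ofs"), 16)
        khz = int(m.group("khz"))  # TSC ticks per millisecond
        print(f"dom{m.group('dom')}: ofs ~= {cycles / (khz * 1000.0):.0f} s")

(E.g. dom3825's 0x2ca028f5ee8d works out to roughly 2.0e4 seconds at ~2.4 GHz, i.e. around 5.7 hours.)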
(XEN) [t: display multi-cpu clock info]
(XEN) Synced stime skew: max=78182ns avg=75414ns samples=2 current=72647ns
(XEN) Synced cycles skew: max=858 avg=813 samples=2 current=768
(XEN) [u: dump numa info]
(XEN) 'u' pressed -> dumping numa info (now-0x2F2B:4555715D)
(XEN) idx0 -> NODE0 start->0 size->3407872
(XEN) phys_to_nid(0000000000001000) -> 0 should be 0
(XEN) idx1 -> NODE1 start->3407872 size->3145728
(XEN) phys_to_nid(0000000340001000) -> 1 should be 1
(XEN) CPU0 -> NODE0
(XEN) CPU1 -> NODE1
(XEN) CPU2 -> NODE0
(XEN) CPU3 -> NODE1
(XEN) CPU4 -> NODE0
(XEN) CPU5 -> NODE1
(XEN) CPU6 -> NODE0
(XEN) CPU7 -> NODE1
(XEN) CPU8 -> NODE0
(XEN) CPU9 -> NODE1
(XEN) CPU10 -> NODE0
(XEN) CPU11 -> NODE1
(XEN) CPU12 -> NODE0
(XEN) CPU13 -> NODE1
(XEN) CPU14 -> NODE0
(XEN) CPU15 -> NODE1
(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 1807470):
(XEN)     Node 0: 1807470
(XEN)     Node 1: 0
(XEN) Domain 435 (total: 263127):
(XEN)     Node 0: 0
(XEN)     Node 1: 263127
(XEN) Domain 2378 (total: 263127):
(XEN)     Node 0: 20
(XEN)     Node 1: 263107
(XEN) Domain 3825 (total: 263127):
(XEN)     Node 0: 255447
(XEN)     Node 1: 7680
(XEN) Domain 3826 (total: 263127):
(XEN)     Node 0: 262103
(XEN)     Node 1: 1024
(XEN) Domain 3827 (total: 263127):
(XEN)     Node 0: 0
(XEN)     Node 1: 263127
(XEN) Domain 3828 (total: 263127):
(XEN)     Node 0: 1508
(XEN)     Node 1: 261619
(XEN) Domain 3829 (total: 263160):
(XEN)     Node 0: 0
(XEN)     Node 1: 263160
(XEN) Domain 3830 (total: 263160):
(XEN)     Node 0: 17
(XEN)     Node 1: 263143
(XEN) Domain 3831 (total: 263160):
(XEN)     Node 0: 263143
(XEN)     Node 1: 17
(XEN) Domain 3832 (total: 262116):
(XEN)     Node 0: 0
(XEN)     Node 1: 262116
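
(What the 'u' dump makes easy to miss is which guests ended up straddling both nodes; dom3825 above has 255447 pages on node 0 but 7680 on node 1, and several others carry small slivers on the second node. A short hand-rolled Python sketch over the exact output format above to flag the split domains, as shown below; again just a helper of mine, not an official tool.)

import re
import sys

DOM_RE  = re.compile(r"Domain (\d+) \(total: (\d+)\):")
NODE_RE = re.compile(r"Node (\d+): (\d+)")

doms, cur = {}, None
for line in sys.stdin:
    m = DOM_RE.search(line)
    if m:
        cur = int(m.group(1))
        doms[cur] = {}
        continue
    m = NODE_RE.search(line)
    if m and cur is not None:
        doms[cur][int(m.group(1))] = int(m.group(2))

for dom, nodes in doms.items():
    total = sum(nodes.values())
    # a domain is "split" if more than one node holds part of it
    if sum(1 for pages in nodes.values() if 0 < pages < total) > 1:
        print(f"dom{dom} split across nodes: {nodes}")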
(XEN) [v: dump Intel's VMCS]
(XEN) *********** VMCS Areas **************
(XEN) 
(XEN) >>> Domain 435 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x000000008089a4ac (0x000000008089a4ac)  RIP = 0x0000000080a5cdc2 (0x0000000080a5cdc2)
(XEN) RFLAGS=0x0000000000000246 (0x0000000000000246)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78a3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000ffdff000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x000000008003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x000000008003f400
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x0000000080042000
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = fffff21bf839d611
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fecffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480264a80
(XEN) GDTBase=ffff83023fecb000 IDTBase=ffff83023fec4010
(XEN) CR0=000000008005003b CR3=0000000538926000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fecffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000d1 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=c1020009
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x04
(XEN) EPT pointer = 0x000000053860801e
(XEN) Virtual processor ID = 0x0001
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x000000003fbb1260, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f63a2ad0 (0x00000000f63a2ad0)  RIP = 0x00000000bff64933 (0x00000000bff64933)
(XEN) RFLAGS=0x0000000000010206 (0x0000000000010206)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78b3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000f7727000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x00000000f772d400
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x00000000f772d800
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x00000000f7727fe0
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = fffff21bf839d752
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023feeffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48025ea80
(XEN) GDTBase=ffff83063fde5000 IDTBase=ffff83023fef4010
(XEN) CR0=000000008005003b CR3=0000000538929000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023feeffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=00000041 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=00000030 qualification=00000181
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000053860801e
(XEN) Virtual processor ID = 0x0001
(XEN) 
(XEN) >>> Domain 2378 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x000000008089a4ac (0x000000008089a4ac)  RIP = 0x0000000080a5cdc2 (0x0000000080a5cdc2)
(XEN) RFLAGS=0x0000000000000246 (0x0000000000000246)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78a3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000ffdff000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x000000008003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x000000008003f400
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x0000000080042000
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffffbcacbb75e850
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fed7fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480262a80
(XEN) GDTBase=ffff83063fc10000 IDTBase=ffff83023feda010
(XEN) CR0=000000008005003b CR3=00000002c846f000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fed7fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000d1 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=c1020009
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x04
(XEN) EPT pointer = 0x00000002df9c501e
(XEN) Virtual processor ID = 0x0001
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aec5c (0x00000000f78aec5c)  RIP = 0x0000000080a5cdb6 (0x0000000080a5cdb6)
(XEN) RFLAGS=0x0000000000000246 (0x0000000000000246)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78b3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000f7727000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x00000000f772d400
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x00000000f772d800
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x00000000f7727fe0
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffffbcacbb75ee80
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fefffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48025ca80
(XEN) GDTBase=ffff83023fefd000 IDTBase=ffff83023fef6010
(XEN) CR0=000000008005003b CR3=00000002c846e000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fefffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000b1 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=1f680008
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x00000002df9c501e
(XEN) Virtual processor ID = 0x0001
(XEN) 
(XEN) >>> Domain 3825 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x000000003fbb1080, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f6bb2490 (0x00000000f6bb2490)  RIP = 0x00000000bff61019 (0x00000000bff61019)
(XEN) RFLAGS=0x0000000000010202 (0x0000000000010202)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78a3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000ffdff000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x000000008003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x000000008003f400
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x0000000080042000
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94b4f7866c30
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fee7fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480260a80
(XEN) GDTBase=ffff83023fee4000 IDTBase=ffff83023fede010
(XEN) CR0=000000008005003b CR3=000000057e53a000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fee7fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=00000041 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=00000030 qualification=00000181
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000054045e01e
(XEN) Virtual processor ID = 0x0001
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aed34 (0x00000000f78aed34)  RIP = 0x00000000f76a9ca1 (0x00000000f76a9ca2)
(XEN) RFLAGS=0x0000000000000206 (0x0000000000000206)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78b3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000f7727000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x00000000f772d400
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x00000000f772d800
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x00000000f7727fe0
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94b4f7866c45
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023ff17fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480258a80
(XEN) GDTBase=ffff83023ff0f000 IDTBase=ffff83023ff18010
(XEN) CR0=000000008005003b CR3=00000004ffee7000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023ff17fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=00000041 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000000c qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000054045e01e
(XEN) Virtual processor ID = 0x0001
(XEN) 
(XEN) >>> Domain 3826 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f8, shadow=0x00000000000006f8, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x000000003fbb1020, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aaca8 (0x00000000f78aaca8)  RIP = 0x0000000080a5ce2c (0x0000000080a5ce2e)
(XEN) RFLAGS=0x0000000000200202 (0x0000000000200202)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78a3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000ffdff000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x000000008003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x000000008003f400
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x0000000080042000
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94ad2f9b3606
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fe8ffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48026ea80
(XEN) GDTBase=ffff83063fde1000 IDTBase=ffff83023fe90010
(XEN) CR0=000000008005003b CR3=0000000486a06000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fe8ffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000a3 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=03ce0001
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x04
(XEN) EPT pointer = 0x0000000528bf501e
(XEN) Virtual processor ID = 0x0003
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f8, shadow=0x00000000000006f8, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aec5c (0x00000000f78aec5c)  RIP = 0x0000000080a5cdb6 (0x0000000080a5cdb6)
(XEN) RFLAGS=0x0000000000200246 (0x0000000000200246)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78b3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000f7727000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x00000000f772d400
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x00000000f772d800
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x00000000f7727fe0
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94ad2f9b3324
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fe9ffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48026ca80
(XEN) GDTBase=ffff83023fe89000 IDTBase=ffff83023fe92010
(XEN) CR0=000000008005003b CR3=000000049d23a000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fe9ffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000b1 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=1f680008
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x0000000528bf501e
(XEN) Virtual processor ID = 0x0003
(XEN) 
(XEN) >>> Domain 3827 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f8, shadow=0x00000000000006f8, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x000000003fbb1020, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aaca4 (0x00000000f78aaca4)  RIP = 0x0000000080a5ce20 (0x0000000080a5ce21)
(XEN) RFLAGS=0x0000000000000202 (0x0000000000000202)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78a3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000ffdff000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x000000008003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x000000008003f400
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x0000000080042000
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94abf84b8ec3
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023ff07fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48025aa80
(XEN) GDTBase=ffff83063fbfa000 IDTBase=ffff83023ff0c010
(XEN) CR0=000000008005003b CR3=000000054c5ca000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023ff07fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=00000041 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=03ce0000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000052267201e
(XEN) Virtual processor ID = 0x0009
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f8, shadow=0x00000000000006f8, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aed34 (0x00000000f78aed34)  RIP = 0x00000000f7639ca1 (0x00000000f7639ca2)
(XEN) RFLAGS=0x0000000000000206 (0x0000000000000206)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78b3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000f7727000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x00000000f772d400
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x00000000f772d800
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x00000000f7727fe0
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94abf84b8faa
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023febffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480266a80
(XEN) GDTBase=ffff83063fc0b000 IDTBase=ffff83023fec2010
(XEN) CR0=000000008005003b CR3=000000054c5c9000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023febffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000a3 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000000c qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000052267201e
(XEN) Virtual processor ID = 0x000d
(XEN) 
(XEN) >>> Domain 3828 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x000000003fbb1020, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78f2774 (0x00000000f78f2774)  RIP = 0x0000000080a5cdc2 (0x0000000080a5cdc2)
(XEN) RFLAGS=0x0000000000000246 (0x0000000000000246)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78a3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000ffdff000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x000000008003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x000000008003f400
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x0000000080042000
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94ad378e3b18
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fea7fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48026aa80
(XEN) GDTBase=ffff83063fc06000 IDTBase=ffff83023fea8010
(XEN) CR0=000000008005003b CR3=00000005526f2000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fea7fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=00000041 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=c1020009
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x00000005526fa01e
(XEN) Virtual processor ID = 0x000d
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x000000008001003b, shadow=0x000000008001003b, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x00000000000026f9, shadow=0x00000000000006f9, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x0000000000782000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x00000000f78aed34 (0x00000000f78aed34)  RIP = 0x00000000f75b9ca1 (0x00000000f75b9ca2)
(XEN) RFLAGS=0x0000000000000202 (0x0000000000000202)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=00000000f78b3000 CS:RIP=0008:0000000080888b70
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0023, attr=0x0c0f3, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0030, attr=0x0c093, limit=0x00001fff, base=0x00000000f7727000
(XEN) GS: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x000003ff, base=0x00000000f772d400
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000007ff, base=0x00000000f772d800
(XEN) TR: sel=0x0028, attr=0x0008b, limit=0x000020ab, base=0x00000000f7727fe0
(XEN) Guest PAT = 0x0007010600070106
(XEN) TSC Offset = ffff94ad378e3bd5
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023feb7fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480268a80
(XEN) GDTBase=ffff83023feb2000 IDTBase=ffff83023feac010
(XEN) CR0=000000008005003b CR3=00000005526f1000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023feb7fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a065fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=00040040
(XEN) VMEntry: intr_info=000000a3 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000000c qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x00000005526fa01e
(XEN) Virtual processor ID = 0x0019
(XEN) 
(XEN) >>> Domain 3829 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000010, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002051, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000057f5c (0x0000000000057f5c)  RIP = 0x0000000000003287 (0x000000000000328a)
(XEN) RFLAGS=0x0000000000023246 (0x0000000000000246)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0xf000, attr=0x000f3, limit=0x0000ffff, base=0x00000000000f0000
(XEN) DS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) SS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) ES: sel=0x3000, attr=0x000f3, limit=0x0000ffff, base=0x0000000000030000
(XEN) FS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) GS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) GDTR:                           limit=0x000003ff, base=0x000000000003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x0000ffff, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x000000ff, base=0x00000000fc013000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94b04c41b161
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023feeffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48025ea80
(XEN) GDTBase=ffff83063fde5000 IDTBase=ffff83023fef4010
(XEN) CR0=000000008005003b CR3=00000002ddf05000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023feeffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=ffffffff
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=01f0003b
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000021990301e
(XEN) Virtual processor ID = 0x002e
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000011, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000171570 (0x0000000000171570)  RIP = 0x00000000001035e4 (0x00000000001035e5)
(XEN) RFLAGS=0x0000000000000006 (0x0000000000000006)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
(XEN) GS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x00000017, base=0x00000000001035e8
(XEN) LDTR: sel=0x0000, attr=0x00082, limit=0x0000ffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x0000ffff, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x0000ffff, base=0x0000000000000000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94b04c41b161
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023ff17fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480258a80
(XEN) GDTBase=ffff83023ff0f000 IDTBase=ffff83023ff18010
(XEN) CR0=000000008005003b CR3=00000002ddf04000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023ff17fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=000400c0
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000000c qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000021990301e
(XEN) Virtual processor ID = 0x0001
(XEN) 
(XEN) >>> Domain 3830 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000010, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002051, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000057f52 (0x0000000000057f52)  RIP = 0x000000000000055f (0x0000000000000560)
(XEN) RFLAGS=0x0000000000023002 (0x0000000000000002)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0xf000, attr=0x000f3, limit=0x0000ffff, base=0x00000000000f0000
(XEN) DS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) SS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) ES: sel=0x3000, attr=0x000f3, limit=0x0000ffff, base=0x0000000000030000
(XEN) FS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) GS: sel=0x230c, attr=0x000f3, limit=0x0000ffff, base=0x00000000000230c0
(XEN) GDTR:                           limit=0x000003ff, base=0x000000000003f000
(XEN) LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x0000ffff, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x000000ff, base=0x00000000fc013000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94ac4f3ebb34
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023ff17fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480258a80
(XEN) GDTBase=ffff83023ff0f000 IDTBase=ffff83023ff18010
(XEN) CR0=000000008005003b CR3=0000000527a19000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023ff17fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=ffffffff
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=01f70000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x0000000527cc301e
(XEN) Virtual processor ID = 0x0004
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000011, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000171570 (0x0000000000171570)  RIP = 0x00000000001035e4 (0x00000000001035e5)
(XEN) RFLAGS=0x0000000000000006 (0x0000000000000006)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
(XEN) GS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x00000017, base=0x00000000001035e8
(XEN) LDTR: sel=0x0000, attr=0x00082, limit=0x0000ffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x0000ffff, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x0000ffff, base=0x0000000000000000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94ac4f3ebb34
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023febffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480266a80
(XEN) GDTBase=ffff83063fc0b000 IDTBase=ffff83023fec2010
(XEN) CR0=000000008005003b CR3=0000000527a18000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023febffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=000400c0
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000000c qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x0000000527cc301e
(XEN) Virtual processor ID = 0x0001
(XEN) 
(XEN) >>> Domain 3831 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000010, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002051, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x000000000000ceb2 (0x000000000000ceb2)  RIP = 0x000000000000055f (0x0000000000000560)
(XEN) RFLAGS=0x0000000000023002 (0x0000000000000002)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0xf000, attr=0x000f3, limit=0x0000ffff, base=0x00000000000f0000
(XEN) DS: sel=0x0000, attr=0x000f3, limit=0x0000ffff, base=0x0000000000000000
(XEN) SS: sel=0x0000, attr=0x000f3, limit=0x0000ffff, base=0x0000000000000000
(XEN) ES: sel=0x1180, attr=0x000f3, limit=0x0000ffff, base=0x0000000000011800
(XEN) FS: sel=0x0000, attr=0x000f3, limit=0x0000ffff, base=0x0000000000000000
(XEN) GS: sel=0x0000, attr=0x000f3, limit=0x0000ffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x0000002f, base=0x0000000000100088
(XEN) LDTR: sel=0x0000, attr=0x00082, limit=0x00000000, base=0x0000000000000000
(XEN) IDTR:                           limit=0x000003ff, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x000000ff, base=0x00000000fc013000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94aac8fc79fd
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fee7fa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c480260a80
(XEN) GDTBase=ffff83023fee4000 IDTBase=ffff83023fede010
(XEN) CR0=000000008005003b CR3=00000002c82f4000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fee7fd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=ffffffff
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000001e qualification=01f70000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x00000002c82fc01e
(XEN) Virtual processor ID = 0x0001
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000011, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000171570 (0x0000000000171570)  RIP = 0x00000000001035e4 (0x00000000001035e5)
(XEN) RFLAGS=0x0000000000000006 (0x0000000000000006)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0x0008, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0010, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
(XEN) GS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x00000017, base=0x00000000001035e8
(XEN) LDTR: sel=0x0000, attr=0x00082, limit=0x0000ffff, base=0x0000000000000000
(XEN) IDTR:                           limit=0x0000ffff, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x0000ffff, base=0x0000000000000000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94aac8fc79fd
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0xffff83023fe8ffa0  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff82c48026ea80
(XEN) GDTBase=ffff83063fde1000 IDTBase=ffff83023fe90010
(XEN) CR0=000000008005003b CR3=00000002c82f3000 CR4=00000000000026f0
(XEN) Sysenter RSP=ffff83023fe8ffd0 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000006b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=000400c0
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=0000000c qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x00000002c82fc01e
(XEN) Virtual processor ID = 0x0001
(XEN) 
(XEN) >>> Domain 3832 <<<
(XEN)   VCPU 0
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000011, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000000000 (0x0000000000000000)  RIP = 0x0000000000000000 (0x0000000000000000)
(XEN) RFLAGS=0x0000000000000000 (0x0000000000000002)  DR7 = 0x0000000000000000
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0x0000, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) GS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x00000000, base=0x0000000000000000
(XEN) LDTR: sel=0x0000, attr=0x00082, limit=0x00000000, base=0x0000000000000000
(XEN) IDTR:                           limit=0x00000000, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x000000ff, base=0x0000000000000000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = ffff94a67ff90106
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0x0000000000000000  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=0000
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=0000000000000000
(XEN) GDTBase=0000000000000000 IDTBase=0000000000000000
(XEN) CR0=000000008005003b CR3=0000000219d40000 CR4=00000000000026f0
(XEN) Sysenter RSP=0000000000000000 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000004b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=000400c0
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=00000000 qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000033e41401e
(XEN) Virtual processor ID = 0x0000
(XEN)   VCPU 1
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080000039, shadow=0x0000000000000011, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000002050, shadow=0x0000000000000000, gh_mask=ffffffffffffffff
(XEN) CR3: actual=0x00000000feffe000, target_count=0
(XEN)      target0=0000000000000000, target1=0000000000000000
(XEN)      target2=0000000000000000, target3=0000000000000000
(XEN) RSP = 0x0000000000000000 (0x0000000000000000)  RIP = 0x0000000000000000 (0x0000000000000000)
(XEN) RFLAGS=0x0000000000000000 (0x0000000000000002)  DR7 = 0x0000000000000000
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN) CS: sel=0x0000, attr=0x0c09b, limit=0xffffffff, base=0x0000000000000000
(XEN) DS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) SS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) ES: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) FS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) GS: sel=0x0000, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
(XEN) GDTR:                           limit=0x00000000, base=0x0000000000000000
(XEN) LDTR: sel=0x0000, attr=0x00082, limit=0x00000000, base=0x0000000000000000
(XEN) IDTR:                           limit=0x00000000, base=0x0000000000000000
(XEN) TR: sel=0x0000, attr=0x0008b, limit=0x000000ff, base=0x0000000000000000
(XEN) Guest PAT = 0x0007040600070406
(XEN) TSC Offset = 0000000000000000
(XEN) DebugCtl=0000000000000000 DebugExceptions=0000000000000000
(XEN) Interruptibility=0000 ActivityState=0000
(XEN) *** Host State ***
(XEN) RSP = 0x0000000000000000  RIP = 0xffff82c4801ad9f0
(XEN) CS=e008 DS=0000 ES=0000 FS=0000 GS=0000 SS=0000 TR=0000
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=0000000000000000
(XEN) GDTBase=0000000000000000 IDTBase=0000000000000000
(XEN) CR0=000000008005003b CR3=00000002cbcdb000 CR4=00000000000026f0
(XEN) Sysenter RSP=0000000000000000 CS:RIP=e008:ffff82c4801e3290
(XEN) Host PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b6a1e5fa SecondaryExec=0000004b
(XEN) EntryControls=000051ff ExitControls=000fefff
(XEN) ExceptionBitmap=000400c0
(XEN) VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
(XEN)         reason=00000000 qualification=00000000
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TPR Threshold = 0x00
(XEN) EPT pointer = 0x000000033e41401e
(XEN) Virtual processor ID = 0x0000
(XEN) **************************************
(XEN) [z: print ioapic info]
(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #8 registers: 24.
(XEN) number of IO-APIC #9 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #8......
(XEN) .... register #00: 08000000
(XEN) .......    : physical APIC id: 08
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:   
(XEN)  00 0DC 0C  1    0    0   0   0    1    2    7D
(XEN)  01 000 00  0    0    0   0   0    0    0    28
(XEN)  02 000 00  0    0    0   0   0    0    0    F0
(XEN)  03 000 00  1    0    0   0   0    0    0    30
(XEN)  04 000 00  0    0    0   0   0    0    0    F1
(XEN)  05 000 00  0    0    0   0   0    0    0    38
(XEN)  06 000 00  0    0    0   0   0    0    0    40
(XEN)  07 000 00  0    0    0   0   0    0    0    48
(XEN)  08 000 00  0    0    0   0   0    0    0    50
(XEN)  09 000 00  0    1    0   0   0    0    0    58
(XEN)  0a 000 00  0    0    0   0   0    0    0    60
(XEN)  0b 000 00  0    0    0   0   0    0    0    68
(XEN)  0c 000 00  0    0    0   0   0    0    0    70
(XEN)  0d 000 00  0    0    0   0   0    0    0    78
(XEN)  0e 000 00  0    0    0   0   0    0    0    88
(XEN)  0f 000 00  0    0    0   0   0    0    0    90
(XEN)  10 000 00  0    1    0   1   0    0    0    98
(XEN)  11 051 01  1    0    0   0   0    1    2    AA
(XEN)  12 000 00  0    1    0   1   0    0    0    A0
(XEN)  13 000 00  0    1    0   1   0    0    0    A8
(XEN)  14 063 03  1    0    0   0   0    0    2    5E
(XEN)  15 0C5 05  1    0    0   0   0    1    2    8E
(XEN)  16 000 00  0    0    0   0   0    0    0    B0
(XEN)  17 000 00  0    1    0   1   0    0    0    B8
(XEN) IO APIC #9......
(XEN) .... register #00: 09000000
(XEN) .......    : physical APIC id: 09
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... register #02: 00000000
(XEN) .......     : arbitration: 00
(XEN) .... register #03: 00000001
(XEN) .......     : Boot DT    : 1
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:   
(XEN)  00 000 00  0    0    0   0   0    0    0    C0
(XEN)  01 000 00  0    0    0   0   0    0    0    C8
(XEN)  02 000 00  0    0    0   0   0    0    0    D0
(XEN)  03 000 00  0    0    0   0   0    0    0    D8
(XEN)  04 000 00  1    1    0   1   0    0    0    21
(XEN)  05 000 00  0    0    0   0   0    0    0    29
(XEN)  06 000 00  0    0    0   0   0    0    0    31
(XEN)  07 000 00  0    0    0   0   0    0    0    39
(XEN)  08 000 00  0    1    0   1   0    0    0    41
(XEN)  09 000 00  0    0    0   0   0    0    0    49
(XEN)  0a 000 00  0    0    0   0   0    0    0    51
(XEN)  0b 000 00  0    0    0   0   0    0    0    59
(XEN)  0c 000 00  0    0    0   0   0    0    0    61
(XEN)  0d 000 00  0    0    0   0   0    0    0    69
(XEN)  0e 000 00  0    0    0   0   0    0    0    71
(XEN)  0f 000 00  0    0    0   0   0    0    0    79
(XEN)  10 000 00  1    1    0   1   0    0    0    81
(XEN)  11 000 00  0    0    0   0   0    0    0    89
(XEN)  12 000 00  0    0    0   0   0    0    0    91
(XEN)  13 000 00  0    0    0   0   0    0    0    99
(XEN)  14 000 00  0    0    0   0   0    0    0    A1
(XEN)  15 000 00  0    0    0   0   0    0    0    A9
(XEN)  16 000 00  0    0    0   0   0    0    0    B1
(XEN)  17 000 00  0    0    0   0   0    0    0    B9
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ40 -> 0:1
(XEN) IRQ48 -> 0:3
(XEN) IRQ241 -> 0:4
(XEN) IRQ56 -> 0:5
(XEN) IRQ64 -> 0:6
(XEN) IRQ72 -> 0:7
(XEN) IRQ80 -> 0:8
(XEN) IRQ88 -> 0:9
(XEN) IRQ96 -> 0:10
(XEN) IRQ104 -> 0:11
(XEN) IRQ112 -> 0:12
(XEN) IRQ120 -> 0:13
(XEN) IRQ136 -> 0:14
(XEN) IRQ144 -> 0:15
(XEN) IRQ152 -> 0:16
(XEN) IRQ160 -> 0:18
(XEN) IRQ168 -> 0:19
(XEN) IRQ176 -> 0:22
(XEN) IRQ184 -> 0:23
(XEN) IRQ192 -> 1:0
(XEN) IRQ200 -> 1:1
(XEN) IRQ208 -> 1:2
(XEN) IRQ216 -> 1:3
(XEN) IRQ33 -> 1:4
(XEN) IRQ41 -> 1:5
(XEN) IRQ49 -> 1:6
(XEN) IRQ57 -> 1:7
(XEN) IRQ65 -> 1:8
(XEN) IRQ73 -> 1:9
(XEN) IRQ81 -> 1:10
(XEN) IRQ89 -> 1:11
(XEN) IRQ97 -> 1:12
(XEN) IRQ105 -> 1:13
(XEN) IRQ113 -> 1:14
(XEN) IRQ121 -> 1:15
(XEN) IRQ129 -> 1:16
(XEN) IRQ137 -> 1:17
(XEN) IRQ145 -> 1:18
(XEN) IRQ153 -> 1:19
(XEN) IRQ161 -> 1:20
(XEN) IRQ169 -> 1:21
(XEN) IRQ177 -> 1:22
(XEN) IRQ185 -> 1:23
(XEN) .................................... done.
vcpu 0




[-- Attachment #4: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Domain 0 stop response on frequently reboot VMS
  2010-10-15 12:43     ` Domain 0 stop response on frequently reboot VMS MaoXiaoyun
@ 2010-10-15 12:57       ` Keir Fraser
  2010-10-16  5:39         ` MaoXiaoyun
  0 siblings, 1 reply; 46+ messages in thread
From: Keir Fraser @ 2010-10-15 12:57 UTC (permalink / raw)
  To: MaoXiaoyun, xen devel

You'll probably want to see if you can get SysRq output from dom0 via serial
line. It's likely you can if it is alive enough to respond to ping. This
might tell you things like what all processes are getting blocked on, and
thus indicate what is stopping dom0 from making progress.

 -- Keir
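
For reference, assuming a standard Linux dom0 with CONFIG_MAGIC_SYSRQ enabled,
the dumps Keir mentions can be requested as follows (the exact serial handling
depends on how the console is wired up):

  # from a still-usable dom0 shell:
  echo 1 > /proc/sys/kernel/sysrq    # enable all SysRq functions
  echo w > /proc/sysrq-trigger       # dump tasks in uninterruptible sleep
  echo t > /proc/sysrq-trigger       # dump all tasks and their kernel stacks

Over a raw serial line, the same dumps are typically triggered by sending a
BREAK followed by the command character ('w' or 't').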

On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> 
>  Hi Keir:
>  
>          First, I'd like to express my appreciation for the help your offered
> before.
>          Well, recently we confront a rather nasty domain 0 no response
> problem.
>  
>          We still have 12 HVMs almost continuously and concurrently reboot
> test on a physical server.
>          A few hours later, the server looks like dead. We only can ping to
> the server and get right response,
> the Xen works fine since we can get debug info from serial port. Attached is
> the full debug output.
> After decode the domain 0 CPU stack, I find the CPU still works for domain 0
> since the stack changed
> info changed every time I dumped.
>  
>         Could help to take a look at the attentchment to see whether there are
> some hints for debugging this
> problem. Thanks in advance.
>  
>  
>  
>          
> 
>        
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-15 12:57       ` Keir Fraser
@ 2010-10-16  5:39         ` MaoXiaoyun
  2010-10-16  7:16           ` Keir Fraser
  2010-10-18 21:17           ` Daniel Stodden
  0 siblings, 2 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-10-16  5:39 UTC (permalink / raw)
  To: keir, xen devel


[-- Attachment #1.1: Type: text/plain, Size: 3223 bytes --]


Well, thanks Keir.
Fortunately we caught the bug; it turned out to be a tapdisk problem. 
A brief explanation for anyone else who might run into this issue:
 
Clearing BLKTAP_DEFERRED on line 19 allows blktap_defer() to re-queue a tap 
while blktap_run_deferred() is still walking the spliced list, so 
tap->deferred_queue is accessed concurrently between line 24 and line 37. 
That eventually corrupts the tap->deferred_queue pointers and turns the while 
loop on line 22 into an infinite loop. Taking the lock around the 
list_del_init() on line 24 would be a simple fix; a sketch follows the listing below. 
 
/linux-2.6-pvops.git/drivers/xen/blktap/wait_queue.c
  9 void
 10 blktap_run_deferred(void)
 11 {
 12     LIST_HEAD(queue);
 13     struct blktap *tap;
 14     unsigned long flags;
 15     
 16     spin_lock_irqsave(&deferred_work_lock, flags);
 17     list_splice_init(&deferred_work_queue, &queue);
 18     list_for_each_entry(tap, &queue, deferred_queue)
 19         clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
 20     spin_unlock_irqrestore(&deferred_work_lock, flags);
 21     
 22     while (!list_empty(&queue)) {
 23         tap = list_entry(queue.next, struct blktap, deferred_queue);
 24         list_del_init(&tap->deferred_queue);
 25         blktap_device_restart(tap);
 26     }   
 27 }   
 28 
 29 void
 30 blktap_defer(struct blktap *tap)
 31 {
 32     unsigned long flags;
 33     
 34     spin_lock_irqsave(&deferred_work_lock, flags);
 35     if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
 36         set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
 37         list_add_tail(&tap->deferred_queue, &deferred_work_queue);
 38     }   
 39     spin_unlock_irqrestore(&deferred_work_lock, flags);
 40 } 
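
For illustration, a minimal sketch of such a fix (untested, written against the
2.6.31-era blktap shown above; upstream later rewrote this code entirely). It
pops one tap at a time and clears BLKTAP_DEFERRED while still holding
deferred_work_lock, so blktap_defer() can never see a tap whose list linkage
is in the middle of being modified:

void
blktap_run_deferred(void)
{
    struct blktap *tap;
    unsigned long flags;

    spin_lock_irqsave(&deferred_work_lock, flags);
    while (!list_empty(&deferred_work_queue)) {
        tap = list_entry(deferred_work_queue.next,
                         struct blktap, deferred_queue);
        /* unlink and clear the flag under the lock, so this cannot
         * race with the test_bit/list_add_tail in blktap_defer() */
        list_del_init(&tap->deferred_queue);
        clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
        spin_unlock_irqrestore(&deferred_work_lock, flags);

        /* may sleep and may re-queue the tap via blktap_defer(),
         * so the lock must be dropped around it */
        blktap_device_restart(tap);

        spin_lock_irqsave(&deferred_work_lock, flags);
    }
    spin_unlock_irqrestore(&deferred_work_lock, flags);
}

blktap_defer() can stay as it is: it takes the same lock, so it either sees
BLKTAP_DEFERRED still set (the tap is still queued) or sees the tap fully
unlinked and can safely re-add it.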

 
> Date: Fri, 15 Oct 2010 13:57:09 +0100
> Subject: Re: [Xen-devel] Domain 0 stop response on frequently reboot VMS
> From: keir@xen.org
> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> 
> You'll probably want to see if you can get SysRq output from dom0 via serial
> line. It's likely you can if it is alive enough to respond to ping. This
> might tell you things like what all processes are getting blocked on, and
> thus indicate what is stopping dom0 from making progress.
> 
> -- Keir
> 
> On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> 
> > 
> > Hi Keir:
> > 
> > First, I'd like to express my appreciation for the help your offered
> > before.
> > Well, recently we confront a rather nasty domain 0 no response
> > problem.
> > 
> > We still have 12 HVMs almost continuously and concurrently reboot
> > test on a physical server.
> > A few hours later, the server looks like dead. We only can ping to
> > the server and get right response,
> > the Xen works fine since we can get debug info from serial port. Attached is
> > the full debug output.
> > After decode the domain 0 CPU stack, I find the CPU still works for domain 0
> > since the stack changed
> > info changed every time I dumped.
> > 
> > Could help to take a look at the attentchment to see whether there are
> > some hints for debugging this
> > problem. Thanks in advance.
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xensource.com
> > http://lists.xensource.com/xen-devel
> 
> 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 4770 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Domain 0 stop response on frequently reboot VMS
  2010-10-16  5:39         ` MaoXiaoyun
@ 2010-10-16  7:16           ` Keir Fraser
  2010-10-18 21:17           ` Daniel Stodden
  1 sibling, 0 replies; 46+ messages in thread
From: Keir Fraser @ 2010-10-16  7:16 UTC (permalink / raw)
  To: MaoXiaoyun, xen devel

Send a patch to the list, Cc'ing Jeremy Fitzhardinge and also a blktap
maintainer, whom you should be able to identify from changeset histories and
Signed-off-by lines. Flag it clearly in the subject line as a proposed
bugfix for pv_ops.

 -- Keir
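
For reference, the usual mechanics of such a submission, assuming git is used
(the addresses are taken from elsewhere in this thread; the patch file name
here is made up):

  git format-patch -1 --subject-prefix='PATCH pv_ops bugfix' HEAD
  git send-email --to xen-devel@lists.xensource.com \
      --cc jeremy@goop.org --cc daniel.stodden@citrix.com \
      0001-blktap-fix-deferred-work-queue-race.patch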

On 16/10/2010 06:39, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:

> Well, Thanks Keir.
> Fortunately we caught the bug, it turned out to be a tapdisk problem.
> A brief explaination for other guys might confront this issue.
>  
> Clear  BLKTAP_DEFERRED on line 19 will lead to the concurrent access of
> tap->deferred_queue between line 24 and 37, which will finally cause bad
> pointer of tap->deferred_queue, and infinte loop in while clause in line 22.
> Lock line 24 will be a simple fix.
>  
> /linux-2.6-pvops.git/drivers/xen/blktap/wait_queue.c
>   9 void
>  10 blktap_run_deferred(void)
>  11 {
>  12     LIST_HEAD(queue);
>  13     struct blktap *tap;
>  14     unsigned long flags;
>  15     
>  16     spin_lock_irqsave(&deferred_work_lock, flags);
>  17     list_splice_init(&deferred_work_queue, &queue);
>  18     list_for_each_entry(tap, &queue, deferred_queue)
>  19         clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
>  20     spin_unlock_irqrestore(&deferred_work_lock, flags);
>  21     
>  22     while (!list_empty(&queue)) {
>  23         tap = list_entry(queue.next, struct blktap, deferred_queue);
>  24         list_del_init(&tap->deferred_queue);
>  25         blktap_device_restart(tap);
>  26     }   
>  27 }   
>  28 
>  29 void
>  30 blktap_defer(struct blktap *tap)
>  31 {
>  32     unsigned long flags;
>  33     
>  34     spin_lock_irqsave(&deferred_work_lock, flags);
>  35     if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
>  36         set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
>  37         list_add_tail(&tap->deferred_queue, &deferred_work_queue);
>  38     }   
>  39     spin_unlock_irqrestore(&deferred_work_lock, flags);
>  40 } 
> 
>  
>> Date: Fri, 15 Oct 2010 13:57:09 +0100
>> Subject: Re: [Xen-devel] Domain 0 stop response on frequently reboot VMS
>> From: keir@xen.org
>> To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
>> 
>> You'll probably want to see if you can get SysRq output from dom0 via serial
>> line. It's likely you can if it is alive enough to respond to ping. This
>> might tell you things like what all processes are getting blocked on, and
>> thus indicate what is stopping dom0 from making progress.
>> 
>> -- Keir
>> 
>> On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
>> 
>>> 
>>> Hi Keir:
>>> 
>>> First, I'd like to express my appreciation for the help your offered
>>> before.
>>> Well, recently we confront a rather nasty domain 0 no response
>>> problem.
>>> 
>>> We still have 12 HVMs almost continuously and concurrently reboot
>>> test on a physical server.
>>> A few hours later, the server looks like dead. We only can ping to
>>> the server and get right response,
>>> the Xen works fine since we can get debug info from serial port. Attached is
>>> the full debug output.
>>> After decode the domain 0 CPU stack, I find the CPU still works for domain 0
>>> since the stack changed
>>> info changed every time I dumped.
>>> 
>>> Could help to take a look at the attentchment to see whether there are
>>> some hints for debugging this
>>> problem. Thanks in advance.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xensource.com
>>> http://lists.xensource.com/xen-devel
>> 
>> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-16  5:39         ` MaoXiaoyun
  2010-10-16  7:16           ` Keir Fraser
@ 2010-10-18 21:17           ` Daniel Stodden
  2010-10-24  5:48             ` MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: Daniel Stodden @ 2010-10-18 21:17 UTC (permalink / raw)
  To: MaoXiaoyun, Jeremy Fitzhardinge; +Cc: xen devel, keir


I'd strongly suggest trying to upgrade your kernel, or at least the
blktap component. The condition below is new to me, but that wait_queue
file and some related code were known to be buggy and have long since been
removed.

If you choose to only upgrade blktap from tip, let me know what kernel
version you're dealing with; you might need to backport some of the
device queue macros to match your version's needs.

Daniel


On Sat, 2010-10-16 at 01:39 -0400, MaoXiaoyun wrote:
> Well, Thanks Keir.
> Fortunately we caught the bug, it turned out to be a tapdisk problem. 
> A brief explaination for other guys might confront this issue.
>  
> Clear  BLKTAP_DEFERRED on line 19 will lead to the concurrent access
> of 
> tap->deferred_queue between line 24 and 37, which will finally cause
> bad 
> pointer of tap->deferred_queue, and infinte loop in while clause in
> line 22.
> Lock line 24 will be a simple fix. 
>  
> /linux-2.6-pvops.git/drivers/xen/blktap/wait_queue.c
>   9 void
>  10 blktap_run_deferred(void)
>  11 {
>  12     LIST_HEAD(queue);
>  13     struct blktap *tap;
>  14     unsigned long flags;
>  15     
>  16     spin_lock_irqsave(&deferred_work_lock, flags);
>  17     list_splice_init(&deferred_work_queue, &queue);
>  18     list_for_each_entry(tap, &queue, deferred_queue)
>  19         clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
>  20     spin_unlock_irqrestore(&deferred_work_lock, flags);
>  21     
>  22     while (!list_empty(&queue)) {
>  23         tap = list_entry(queue.next, struct blktap,
> deferred_queue);
>  24         list_del_init(&tap->deferred_queue);
>  25         blktap_device_restart(tap);
>  26     }   
>  27 }   
>  28 
>  29 void
>  30 blktap_defer(struct blktap *tap)
>  31 {
>  32     unsigned long flags;
>  33     
>  34     spin_lock_irqsave(&deferred_work_lock, flags);
>  35     if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
>  36         set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
>  37         list_add_tail(&tap->deferred_queue, &deferred_work_queue);
>  38     }   
>  39     spin_unlock_irqrestore(&deferred_work_lock, flags);
>  40 } 
> 
>  
> > Date: Fri, 15 Oct 2010 13:57:09 +0100
> > Subject: Re: [Xen-devel] Domain 0 stop response on frequently reboot
> VMS
> > From: keir@xen.org
> > To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > 
> > You'll probably want to see if you can get SysRq output from dom0
> via serial
> > line. It's likely you can if it is alive enough to respond to ping.
> This
> > might tell you things like what all processes are getting blocked
> on, and
> > thus indicate what is stopping dom0 from making progress.
> > 
> > -- Keir
> > 
> > On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > 
> > > 
> > > Hi Keir:
> > > 
> > > First, I'd like to express my appreciation for the help your
> offered
> > > before.
> > > Well, recently we confront a rather nasty domain 0 no response
> > > problem.
> > > 
> > > We still have 12 HVMs almost continuously and concurrently reboot
> > > test on a physical server.
> > > A few hours later, the server looks like dead. We only can ping to
> > > the server and get right response,
> > > the Xen works fine since we can get debug info from serial port.
> Attached is
> > > the full debug output.
> > > After decode the domain 0 CPU stack, I find the CPU still works
> for domain 0
> > > since the stack changed
> > > info changed every time I dumped.
> > > 
> > > Could help to take a look at the attentchment to see whether there
> are
> > > some hints for debugging this
> > > problem. Thanks in advance.
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xensource.com
> > > http://lists.xensource.com/xen-devel
> > 
> > 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-18 21:17           ` Daniel Stodden
@ 2010-10-24  5:48             ` MaoXiaoyun
  2010-10-24  5:56               ` Daniel Stodden
  2010-11-04  3:09               ` A Patch for modify DomU network transmit rate dynamically MaoXiaoyun
  0 siblings, 2 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-10-24  5:48 UTC (permalink / raw)
  To: daniel.stodden; +Cc: xen devel, keir


[-- Attachment #1.1: Type: text/plain, Size: 4649 bytes --]


Hi Daniel:
 
     Sorry for the late response, and thanks for your kind suggestion.
     Well, I believe we will upgrade to the latest kernel in the near future, but for now 
we prefer to stay on the current one for stability reasons.
 
    Our kernel version is 2.6.31. I am now going through the blktap change set to get 
more detailed information. 
 
   thanks.
 
> Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot VMS
> From: daniel.stodden@citrix.com
> To: tinnycloud@hotmail.com; jeremy@goop.org
> CC: keir@xen.org; xen-devel@lists.xensource.com
> Date: Mon, 18 Oct 2010 14:17:50 -0700
> 
> 
> I'd strongly suggest to try upgrading your kernel, or at least the
> blktap component. The condition below is new to me, but that wait_queue
> file and some related code was known to be buggy and has long since been
> removed.
> 
> If you choose to only upgrade blktap from tip, let me know what kernel
> version you're dealing with, you might need to backport some of the
> device queue macros to match your version's needs.
> 
> Daniel
> 
> 
> On Sat, 2010-10-16 at 01:39 -0400, MaoXiaoyun wrote:
> > Well, Thanks Keir.
> > Fortunately we caught the bug, it turned out to be a tapdisk problem. 
> > A brief explaination for other guys might confront this issue.
> > 
> > Clear BLKTAP_DEFERRED on line 19 will lead to the concurrent access
> > of 
> > tap->deferred_queue between line 24 and 37, which will finally cause
> > bad 
> > pointer of tap->deferred_queue, and infinte loop in while clause in
> > line 22.
> > Lock line 24 will be a simple fix. 
> > 
> > /linux-2.6-pvops.git/drivers/xen/blktap/wait_queue.c
> > 9 void
> > 10 blktap_run_deferred(void)
> > 11 {
> > 12 LIST_HEAD(queue);
> > 13 struct blktap *tap;
> > 14 unsigned long flags;
> > 15 
> > 16 spin_lock_irqsave(&deferred_work_lock, flags);
> > 17 list_splice_init(&deferred_work_queue, &queue);
> > 18 list_for_each_entry(tap, &queue, deferred_queue)
> > 19 clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
> > 20 spin_unlock_irqrestore(&deferred_work_lock, flags);
> > 21 
> > 22 while (!list_empty(&queue)) {
> > 23 tap = list_entry(queue.next, struct blktap,
> > deferred_queue);
> > 24 list_del_init(&tap->deferred_queue);
> > 25 blktap_device_restart(tap);
> > 26 } 
> > 27 } 
> > 28 
> > 29 void
> > 30 blktap_defer(struct blktap *tap)
> > 31 {
> > 32 unsigned long flags;
> > 33 
> > 34 spin_lock_irqsave(&deferred_work_lock, flags);
> > 35 if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
> > 36 set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
> > 37 list_add_tail(&tap->deferred_queue, &deferred_work_queue);
> > 38 } 
> > 39 spin_unlock_irqrestore(&deferred_work_lock, flags);
> > 40 } 
> > 
> > 
> > > Date: Fri, 15 Oct 2010 13:57:09 +0100
> > > Subject: Re: [Xen-devel] Domain 0 stop response on frequently reboot
> > VMS
> > > From: keir@xen.org
> > > To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > > 
> > > You'll probably want to see if you can get SysRq output from dom0
> > via serial
> > > line. It's likely you can if it is alive enough to respond to ping.
> > This
> > > might tell you things like what all processes are getting blocked
> > on, and
> > > thus indicate what is stopping dom0 from making progress.
> > > 
> > > -- Keir
> > > 
> > > On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com> wrote:
> > > 
> > > > 
> > > > Hi Keir:
> > > > 
> > > > First, I'd like to express my appreciation for the help your
> > offered
> > > > before.
> > > > Well, recently we confront a rather nasty domain 0 no response
> > > > problem.
> > > > 
> > > > We still have 12 HVMs almost continuously and concurrently reboot
> > > > test on a physical server.
> > > > A few hours later, the server looks like dead. We only can ping to
> > > > the server and get right response,
> > > > the Xen works fine since we can get debug info from serial port.
> > Attached is
> > > > the full debug output.
> > > > After decode the domain 0 CPU stack, I find the CPU still works
> > for domain 0
> > > > since the stack changed
> > > > info changed every time I dumped.
> > > > 
> > > > Could help to take a look at the attentchment to see whether there
> > are
> > > > some hints for debugging this
> > > > problem. Thanks in advance.
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xensource.com
> > > > http://lists.xensource.com/xen-devel
> > > 
> > > 
> 
> 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 6284 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-24  5:48             ` MaoXiaoyun
@ 2010-10-24  5:56               ` Daniel Stodden
  2010-10-26  8:16                 ` MaoXiaoyun
  2010-11-04  3:09               ` A Patch for modify DomU network transmit rate dynamically MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: Daniel Stodden @ 2010-10-24  5:56 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, keir

On Sun, 2010-10-24 at 01:48 -0400, MaoXiaoyun wrote:
> Hi Daniel:
>  
>      Sorry for tht late response, and really thanks for your kindly
> suggestion.
>      Well, I believe we will upgrade to the lastest kernel in the
> coming future, but currently 
> we perfer to maintain for stable reason.
>  
>     Our kernel version is 2.6.31. Now I am going through the change
> set of blktap to get 
> more detail info. 

NP. Let me know if you have questions.

Daniel

>    thanks.
>  
> > Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot
> VMS
> > From: daniel.stodden@citrix.com
> > To: tinnycloud@hotmail.com; jeremy@goop.org
> > CC: keir@xen.org; xen-devel@lists.xensource.com
> > Date: Mon, 18 Oct 2010 14:17:50 -0700
> > 
> > 
> > I'd strongly suggest to try upgrading your kernel, or at least the
> > blktap component. The condition below is new to me, but that
> wait_queue
> > file and some related code was known to be buggy and has long since
> been
> > removed.
> > 
> > If you choose to only upgrade blktap from tip, let me know what
> kernel
> > version you're dealing with, you might need to backport some of the
> > device queue macros to match your version's needs.
> > 
> > Daniel
> > 
> > 
> > On Sat, 2010-10-16 at 01:39 -0400, MaoXiaoyun wrote:
> > > Well, Thanks Keir.
> > > Fortunately we caught the bug, it turned out to be a tapdisk
> problem. 
> > > A brief explaination for other guys might confront this issue.
> > > 
> > > Clear BLKTAP_DEFERRED on line 19 will lead to the concurrent
> access
> > > of 
> > > tap->deferred_queue between line 24 and 37, which will finally
> cause
> > > bad 
> > > pointer of tap->deferred_queue, and infinte loop in while clause
> in
> > > line 22.
> > > Lock line 24 will be a simple fix. 
> > > 
> > > /linux-2.6-pvops.git/drivers/xen/blktap/wait_queue.c
> > > 9 void
> > > 10 blktap_run_deferred(void)
> > > 11 {
> > > 12 LIST_HEAD(queue);
> > > 13 struct blktap *tap;
> > > 14 unsigned long flags;
> > > 15 
> > > 16 spin_lock_irqsave(&deferred_work_lock, flags);
> > > 17 list_splice_init(&deferred_work_queue, &queue);
> > > 18 list_for_each_entry(tap, &queue, deferred_queue)
> > > 19 clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
> > > 20 spin_unlock_irqrestore(&deferred_work_lock, flags);
> > > 21 
> > > 22 while (!list_empty(&queue)) {
> > > 23 tap = list_entry(queue.next, struct blktap,
> > > deferred_queue);
> > > 24 list_del_init(&tap->deferred_queue);
> > > 25 blktap_device_restart(tap);
> > > 26 } 
> > > 27 } 
> > > 28 
> > > 29 void
> > > 30 blktap_defer(struct blktap *tap)
> > > 31 {
> > > 32 unsigned long flags;
> > > 33 
> > > 34 spin_lock_irqsave(&deferred_work_lock, flags);
> > > 35 if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
> > > 36 set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
> > > 37 list_add_tail(&tap->deferred_queue, &deferred_work_queue);
> > > 38 } 
> > > 39 spin_unlock_irqrestore(&deferred_work_lock, flags);
> > > 40 } 
> > > 
> > > 
> > > > Date: Fri, 15 Oct 2010 13:57:09 +0100
> > > > Subject: Re: [Xen-devel] Domain 0 stop response on frequently
> reboot
> > > VMS
> > > > From: keir@xen.org
> > > > To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > > > 
> > > > You'll probably want to see if you can get SysRq output from
> dom0
> > > via serial
> > > > line. It's likely you can if it is alive enough to respond to
> ping.
> > > This
> > > > might tell you things like what all processes are getting
> blocked
> > > on, and
> > > > thus indicate what is stopping dom0 from making progress.
> > > > 
> > > > -- Keir
> > > > 
> > > > On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com>
> wrote:
> > > > 
> > > > > 
> > > > > Hi Keir:
> > > > > 
> > > > > First, I'd like to express my appreciation for the help your
> > > offered
> > > > > before.
> > > > > Well, recently we confront a rather nasty domain 0 no response
> > > > > problem.
> > > > > 
> > > > > We still have 12 HVMs almost continuously and concurrently
> reboot
> > > > > test on a physical server.
> > > > > A few hours later, the server looks like dead. We only can
> ping to
> > > > > the server and get right response,
> > > > > the Xen works fine since we can get debug info from serial
> port.
> > > Attached is
> > > > > the full debug output.
> > > > > After decode the domain 0 CPU stack, I find the CPU still
> works
> > > for domain 0
> > > > > since the stack changed
> > > > > info changed every time I dumped.
> > > > > 
> > > > > Could help to take a look at the attentchment to see whether
> there
> > > are
> > > > > some hints for debugging this
> > > > > problem. Thanks in advance.
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > _______________________________________________
> > > > > Xen-devel mailing list
> > > > > Xen-devel@lists.xensource.com
> > > > > http://lists.xensource.com/xen-devel
> > > > 
> > > > 
> > 
> > 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-24  5:56               ` Daniel Stodden
@ 2010-10-26  8:16                 ` MaoXiaoyun
  2010-10-26  9:09                   ` Daniel Stodden
  2010-10-26  9:20                   ` Ian Campbell
  0 siblings, 2 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-10-26  8:16 UTC (permalink / raw)
  To: daniel.stodden; +Cc: xen devel


[-- Attachment #1.1: Type: text/plain, Size: 6104 bytes --]


Hi Daniel:
 
      Well, where can I start if I want to keep the current kernel (2.6.31) and only update blktap2? 
      As I go through the git branch of xen/dom0/backend/blktap2, I found that wait_queue.c has been removed.
      It looks like blktap2 has changed a lot, right? 
      So I am curious about the differences between the new one and the old one.
      Could you share a brief explanation? That would be very helpful. 
      Thanks in advance.
 
> Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot VMS
> From: daniel.stodden@citrix.com
> To: tinnycloud@hotmail.com
> CC: keir@xen.org; xen-devel@lists.xensource.com
> Date: Sat, 23 Oct 2010 22:56:51 -0700
> 
> On Sun, 2010-10-24 at 01:48 -0400, MaoXiaoyun wrote:
> > Hi Daniel:
> > 
> > Sorry for tht late response, and really thanks for your kindly
> > suggestion.
> > Well, I believe we will upgrade to the lastest kernel in the
> > coming future, but currently 
> > we perfer to maintain for stable reason.
> > 
> > Our kernel version is 2.6.31. Now I am going through the change
> > set of blktap to get 
> > more detail info. 
> 
> NP. Let me know if you have questions.
> 
> Daniel
> 
> > thanks.
> > 
> > > Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot
> > VMS
> > > From: daniel.stodden@citrix.com
> > > To: tinnycloud@hotmail.com; jeremy@goop.org
> > > CC: keir@xen.org; xen-devel@lists.xensource.com
> > > Date: Mon, 18 Oct 2010 14:17:50 -0700
> > > 
> > > 
> > > I'd strongly suggest to try upgrading your kernel, or at least the
> > > blktap component. The condition below is new to me, but that
> > wait_queue
> > > file and some related code was known to be buggy and has long since
> > been
> > > removed.
> > > 
> > > If you choose to only upgrade blktap from tip, let me know what
> > kernel
> > > version you're dealing with, you might need to backport some of the
> > > device queue macros to match your version's needs.
> > > 
> > > Daniel
> > > 
> > > 
> > > On Sat, 2010-10-16 at 01:39 -0400, MaoXiaoyun wrote:
> > > > Well, Thanks Keir.
> > > > Fortunately we caught the bug, it turned out to be a tapdisk
> > problem. 
> > > > A brief explaination for other guys might confront this issue.
> > > > 
> > > > Clear BLKTAP_DEFERRED on line 19 will lead to the concurrent
> > access
> > > > of 
> > > > tap->deferred_queue between line 24 and 37, which will finally
> > cause
> > > > bad 
> > > > pointer of tap->deferred_queue, and infinte loop in while clause
> > in
> > > > line 22.
> > > > Lock line 24 will be a simple fix. 
> > > > 
> > > > /linux-2.6-pvops.git/drivers/xen/blktap/wait_queue.c
> > > > 9 void
> > > > 10 blktap_run_deferred(void)
> > > > 11 {
> > > > 12 LIST_HEAD(queue);
> > > > 13 struct blktap *tap;
> > > > 14 unsigned long flags;
> > > > 15 
> > > > 16 spin_lock_irqsave(&deferred_work_lock, flags);
> > > > 17 list_splice_init(&deferred_work_queue, &queue);
> > > > 18 list_for_each_entry(tap, &queue, deferred_queue)
> > > > 19 clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
> > > > 20 spin_unlock_irqrestore(&deferred_work_lock, flags);
> > > > 21 
> > > > 22 while (!list_empty(&queue)) {
> > > > 23 tap = list_entry(queue.next, struct blktap,
> > > > deferred_queue);
> > > > 24 list_del_init(&tap->deferred_queue);
> > > > 25 blktap_device_restart(tap);
> > > > 26 } 
> > > > 27 } 
> > > > 28 
> > > > 29 void
> > > > 30 blktap_defer(struct blktap *tap)
> > > > 31 {
> > > > 32 unsigned long flags;
> > > > 33 
> > > > 34 spin_lock_irqsave(&deferred_work_lock, flags);
> > > > 35 if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
> > > > 36 set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
> > > > 37 list_add_tail(&tap->deferred_queue, &deferred_work_queue);
> > > > 38 } 
> > > > 39 spin_unlock_irqrestore(&deferred_work_lock, flags);
> > > > 40 } 
> > > > 
> > > > 
> > > > > Date: Fri, 15 Oct 2010 13:57:09 +0100
> > > > > Subject: Re: [Xen-devel] Domain 0 stop response on frequently
> > reboot
> > > > VMS
> > > > > From: keir@xen.org
> > > > > To: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> > > > > 
> > > > > You'll probably want to see if you can get SysRq output from
> > dom0
> > > > via serial
> > > > > line. It's likely you can if it is alive enough to respond to
> > ping.
> > > > This
> > > > > might tell you things like what all processes are getting
> > blocked
> > > > on, and
> > > > > thus indicate what is stopping dom0 from making progress.
> > > > > 
> > > > > -- Keir
> > > > > 
> > > > > On 15/10/2010 13:43, "MaoXiaoyun" <tinnycloud@hotmail.com>
> > wrote:
> > > > > 
> > > > > > 
> > > > > > Hi Keir:
> > > > > > 
> > > > > > First, I'd like to express my appreciation for the help your
> > > > offered
> > > > > > before.
> > > > > > Well, recently we confront a rather nasty domain 0 no response
> > > > > > problem.
> > > > > > 
> > > > > > We still have 12 HVMs almost continuously and concurrently
> > reboot
> > > > > > test on a physical server.
> > > > > > A few hours later, the server looks like dead. We only can
> > ping to
> > > > > > the server and get right response,
> > > > > > the Xen works fine since we can get debug info from serial
> > port.
> > > > Attached is
> > > > > > the full debug output.
> > > > > > After decode the domain 0 CPU stack, I find the CPU still
> > works
> > > > for domain 0
> > > > > > since the stack changed
> > > > > > info changed every time I dumped.
> > > > > > 
> > > > > > Could help to take a look at the attentchment to see whether
> > there
> > > > are
> > > > > > some hints for debugging this
> > > > > > problem. Thanks in advance.
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > _______________________________________________
> > > > > > Xen-devel mailing list
> > > > > > Xen-devel@lists.xensource.com
> > > > > > http://lists.xensource.com/xen-devel
> > > > > 
> > > > > 
> > > 
> > > 
> 
> 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 9067 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-26  8:16                 ` MaoXiaoyun
@ 2010-10-26  9:09                   ` Daniel Stodden
  2010-10-26 10:54                     ` MaoXiaoyun
  2010-10-26  9:20                   ` Ian Campbell
  1 sibling, 1 reply; 46+ messages in thread
From: Daniel Stodden @ 2010-10-26  9:09 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel

On Tue, 2010-10-26 at 04:16 -0400, MaoXiaoyun wrote:
> 
>       Well, where can I start if I want to maintain the current
> kernel(2.6.31), and only update the blktap2?
>       As I go throught the git branch of
> xen/dom0/backend/blktap2<http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=shortlog;h=refs/heads/xen/dom0/backend/blktap2>,  I found wait_queue.c is removed.
>       It looks like blktap2 has changed a lot, right?
>       So I am courious the difference between the new and the old one.
>       Could you share some brief explainations, that would be very
> helpful.
>       Thanks in advance.

For brief explanations I can only refer you to the commit
messages, because there have been plenty of them. :)

They're all still backward compatible as far as the userspace ABI for
older tapdisk2s goes. My recommendation would be replacing that whole
blktap/ directory, because I can't support you with more than that.

If you choose to do so: you might need some patches fixing request queue
accesses, since there might be slight differences from 2.6.32, which is a bit
of a moving target sometimes. But it's not a big deal.

Daniel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-26  8:16                 ` MaoXiaoyun
  2010-10-26  9:09                   ` Daniel Stodden
@ 2010-10-26  9:20                   ` Ian Campbell
  2010-10-26 10:59                     ` MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: Ian Campbell @ 2010-10-26  9:20 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, Daniel Stodden

On Tue, 2010-10-26 at 09:16 +0100, MaoXiaoyun wrote:
> Well, where can I start if I want to maintain the current
> kernel(2.6.31)

I don't think 2.6.31 is the default in any current Xen tree, is it? 

xen/master in xen.git still points to a 2.6.31 based tree but that's
rather misleading. That branch hasn't been updated since July.

The currently maintained stable branch is xen/stable-2.6.32.x which is
used by both xen-4.0-testing.hg and xen-unstable.hg

Ian.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-26  9:09                   ` Daniel Stodden
@ 2010-10-26 10:54                     ` MaoXiaoyun
  0 siblings, 0 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-10-26 10:54 UTC (permalink / raw)
  To: daniel.stodden; +Cc: xen devel


[-- Attachment #1.1: Type: text/plain, Size: 1495 bytes --]


Thanks, Daniel.
I think I can handle it myself.
 
> Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot VMS
> From: daniel.stodden@citrix.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com
> Date: Tue, 26 Oct 2010 02:09:52 -0700
> 
> On Tue, 2010-10-26 at 04:16 -0400, MaoXiaoyun wrote:
> > 
> > Well, where can I start if I want to maintain the current
> > kernel(2.6.31), and only update the blktap2?
> > As I go throught the git branch of
> > xen/dom0/backend/blktap2<http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=shortlog;h=refs/heads/xen/dom0/backend/blktap2>, I found wait_queue.c is removed.
> > It looks like blktap2 has changed a lot, right?
> > So I am courious the difference between the new and the old one.
> > Could you share some brief explainations, that would be very
> > helpful.
> > Thanks in advance.
> 
> For brief explanations but nothing particular I can only refer you to
> the commit messages. Because there have been plenty of them. :)
> 
> They're all still backward compatible as far as the userspace ABI for
> older tapdisk2s goes. My recommendation would be replacing that whole
> blktap/ directory, because I can't support you with more than that.
> 
> If you choose to do so: You might need some patches fixing request queue
> accesses, there might be slight differences to 2.6.32, that's a bit of a
> moving target sometimes. But it's not a big deal.
> 
> Daniel
> 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 1918 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* RE: Domain 0 stop response on frequently reboot VMS
  2010-10-26  9:20                   ` Ian Campbell
@ 2010-10-26 10:59                     ` MaoXiaoyun
  2010-10-26 11:54                       ` Domain 0 stop response on frequently reboot VMS, fix xen/master link? Pasi Kärkkäinen
  0 siblings, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-10-26 10:59 UTC (permalink / raw)
  To: ian.campbell; +Cc: xen devel, daniel.stodden


[-- Attachment #1.1: Type: text/plain, Size: 900 bytes --]


Thanks Ian. You are right. 
The code used on my server is from xen/master, which is also very old, 
and I am going to upgrade it. 
 
> Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot VMS
> From: Ian.Campbell@citrix.com
> To: tinnycloud@hotmail.com
> CC: Daniel.Stodden@citrix.com; xen-devel@lists.xensource.com
> Date: Tue, 26 Oct 2010 10:20:06 +0100
> 
> On Tue, 2010-10-26 at 09:16 +0100, MaoXiaoyun wrote:
> > Well, where can I start if I want to maintain the current
> > kernel(2.6.31)
> 
> I don't think 2.6.31 is the default in any current Xen tree, is it? 
> 
> xen/master in xen.git still points to a 2.6.31 based tree but that's
> rather misleading. That branch hasn't been updated since July.
> 
> The currently maintained stable branch is xen/stable-2.6.32.x which is
> used by both xen-4.0-testing.hg and xen-unstable.hg
> 
> Ian.
> 
> 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 1274 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Domain 0 stop response on frequently reboot VMS, fix xen/master link?
  2010-10-26 10:59                     ` MaoXiaoyun
@ 2010-10-26 11:54                       ` Pasi Kärkkäinen
  2010-10-26 17:08                         ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 46+ messages in thread
From: Pasi Kärkkäinen @ 2010-10-26 11:54 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: Jeremy Fitzhardinge, xen devel, ian.campbell, daniel.stodden


Hello,

Jeremy: Maybe now it's time to kill the xen/master link to 2.6.31 tree
to avoid confusion? 

-- Pasi

On Tue, Oct 26, 2010 at 06:59:00PM +0800, MaoXiaoyun wrote:
>    Thanks Ian. You are right.
>    The code used in my server is from xen/master, also very old.
>    And I am going to upgrade it.
> 
>    > Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot VMS
>    > From: Ian.Campbell@citrix.com
>    > To: tinnycloud@hotmail.com
>    > CC: Daniel.Stodden@citrix.com; xen-devel@lists.xensource.com
>    > Date: Tue, 26 Oct 2010 10:20:06 +0100
>    >
>    > On Tue, 2010-10-26 at 09:16 +0100, MaoXiaoyun wrote:
>    > > Well, where can I start if I want to maintain the current
>    > > kernel(2.6.31)
>    >
>    > I don't think 2.6.31 is the default in any current Xen tree, is it?
>    >
>    > xen/master in xen.git still points to a 2.6.31 based tree but that's
>    > rather misleading. That branch hasn't been updated since July.
>    >
>    > The currently maintained stable branch is xen/stable-2.6.32.x which is
>    > used by both xen-4.0-testing.hg and xen-unstable.hg
>    >
>    > Ian.
>    >
>    >

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: Domain 0 stop response on frequently reboot VMS, fix xen/master link?
  2010-10-26 11:54                       ` Domain 0 stop response on frequently reboot VMS, fix xen/master link? Pasi Kärkkäinen
@ 2010-10-26 17:08                         ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 46+ messages in thread
From: Jeremy Fitzhardinge @ 2010-10-26 17:08 UTC (permalink / raw)
  To: Pasi Kärkkäinen
  Cc: MaoXiaoyun, xen devel, ian.campbell, daniel.stodden

 On 10/26/2010 04:54 AM, Pasi Kärkkäinen wrote:
> Hello,
>
> Jeremy: Maybe now it's time to kill the xen/master link to 2.6.31 tree
> to avoid confusion? 

Yeah, especially since it is still the default branch for xen.git.  OK, done,
and the default switched to xen/stable-2.6.32.x.

Thanks,
    J


> -- Pasi
>
> On Tue, Oct 26, 2010 at 06:59:00PM +0800, MaoXiaoyun wrote:
>>    Thanks Ian. You are right.
>>    The code used in my server is from xen/master, also very old.
>>    And I am going to upgrade it.
>>
>>    > Subject: RE: [Xen-devel] Domain 0 stop response on frequently reboot VMS
>>    > From: Ian.Campbell@citrix.com
>>    > To: tinnycloud@hotmail.com
>>    > CC: Daniel.Stodden@citrix.com; xen-devel@lists.xensource.com
>>    > Date: Tue, 26 Oct 2010 10:20:06 +0100
>>    >
>>    > On Tue, 2010-10-26 at 09:16 +0100, MaoXiaoyun wrote:
>>    > > Well, where can I start if I want to maintain the current
>>    > > kernel(2.6.31)
>>    >
>>    > I don't think 2.6.31 is the default in any current Xen tree, is it?
>>    >
>>    > xen/master in xen.git still points to a 2.6.31 based tree but that's
>>    > rather misleading. That branch hasn't been updated since July.
>>    >
>>    > The currently maintained stable branch is xen/stable-2.6.32.x which is
>>    > used by both xen-4.0-testing.hg and xen-unstable.hg
>>    >
>>    > Ian.
>>    >
>>    >
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xensource.com
>> http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* A Patch for modify DomU network transmit rate dynamically
  2010-10-24  5:48             ` MaoXiaoyun
  2010-10-24  5:56               ` Daniel Stodden
@ 2010-11-04  3:09               ` MaoXiaoyun
  2010-11-04  3:43                 ` MaoXiaoyun
  1 sibling, 1 reply; 46+ messages in thread
From: MaoXiaoyun @ 2010-11-04  3:09 UTC (permalink / raw)
  To: xen devel; +Cc: jeremy, keir, daniel.stodden


[-- Attachment #1.1: Type: text/plain, Size: 699 bytes --]


Hi:
 
     I've written a patch which supports dynamically updating the domU netif transmit rate.
     Now that it is finished, I have some questions about the patch itself.
 
     1. It seems I don't need to update netif->remaining_credit, since netback.c:tx_add_credit()
         updates it automatically on every transmit, so all I need to do is update netif->credit_bytes
         and netif->credit_usec, right? Also, am I doing this the right way?
     2. I notice that netback is also a module, so I think it can be rmmod'ed and insmod'ed, right? If so,
         I can apply this patch online (with no effect on the running VMs).
 
       Could someone help me confirm this? Many thanks.
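
For background on what the patch touches: the shaping that tx_add_credit() implements
is essentially a token bucket. Every credit_usec microseconds the interface is granted
credit_bytes, remaining_credit tracks what is left of the current allowance, and a
request is only put on the ring if it fits. Below is a minimal standalone sketch of
that logic in plain C; the struct and function names are illustrative (they only echo
the xen_netif fields), and the real tx_add_credit() additionally widens the burst to
the size of the pending packet, so this is not the kernel source itself.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the shaping fields of struct xen_netif:
 * allow up to credit_bytes of traffic every credit_usec microseconds. */
struct shaper {
	uint64_t credit_bytes;     /* bytes granted per replenish interval */
	uint64_t credit_usec;      /* length of the replenish interval */
	uint64_t remaining_credit; /* bytes still allowed right now */
};

/* Analogue of tx_add_credit(): top the bucket up when the credit timer
 * fires, clamping so the sum neither wraps nor exceeds one full burst. */
void add_credit(struct shaper *s)
{
	uint64_t max_credit = s->remaining_credit + s->credit_bytes;

	if (max_credit < s->remaining_credit) /* wrapped past zero */
		max_credit = UINT64_MAX;
	s->remaining_credit =
		max_credit < s->credit_bytes ? max_credit : s->credit_bytes;
}

/* Analogue of the check in net_tx_build_mops(): send a request of
 * 'size' bytes only if enough credit remains; otherwise the caller
 * arms credit_timeout and retries after the next top-up. */
bool try_consume(struct shaper *s, uint64_t size)
{
	if (size > s->remaining_credit)
		return false;
	s->remaining_credit -= size;
	return true;
}

The patch below leaves this logic as it is; it only turns remaining_credit into an
atomic64_t so that the new xenstore watch callback can reset it safely.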

[-- Attachment #1.2: Type: text/html, Size: 1200 bytes --]

[-- Attachment #2: 4.netback_reset_rate_limit.patch --]
[-- Type: application/octet-stream, Size: 5979 bytes --]

diff --git a/drivers/xen/netback/common.h b/drivers/xen/netback/common.h
index 51f97c0..87696ec 100644
--- a/drivers/xen/netback/common.h
+++ b/drivers/xen/netback/common.h
@@ -89,9 +89,9 @@ struct xen_netif {
 	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
 	unsigned long   credit_bytes;
 	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
+	atomic64_t   remaining_credit;
 	struct timer_list credit_timeout;
-
+	
 	/* Enforce draining of the transmit queue. */
 	struct timer_list tx_queue_timeout;
 
@@ -149,11 +149,13 @@ struct backend_info {
 	enum xenbus_state frontend_state;
 	struct xenbus_watch hotplug_status_watch;
 	int have_hotplug_status_watch:1;
-
+	int have_rate_watch:1;
 	/* State relating to the netback accelerator */
 	void *netback_accel_priv;
 	/* The accelerator that this backend is currently using */
 	struct netback_accelerator *accelerator;
+	/**/
+	struct xenbus_watch rate_watch;
 };
 
 #define NETBACK_ACCEL_VERSION 0x00010001
diff --git a/drivers/xen/netback/interface.c b/drivers/xen/netback/interface.c
index b23b14d..139360b 100644
--- a/drivers/xen/netback/interface.c
+++ b/drivers/xen/netback/interface.c
@@ -218,7 +218,8 @@ struct xen_netif *netif_alloc(struct device *parent, domid_t domid, unsigned int
 
 	netback_carrier_off(netif);
 
-	netif->credit_bytes = netif->remaining_credit = ~0UL;
+	atomic64_set(&netif->remaining_credit,~0UL);
+	netif->credit_bytes = ~0UL;
 	netif->credit_usec  = 0UL;
 	init_timer(&netif->credit_timeout);
 	/* Initialize 'expires' now: it's used to track the credit window. */
diff --git a/drivers/xen/netback/netback.c b/drivers/xen/netback/netback.c
index ddc701f..3a7b048 100644
--- a/drivers/xen/netback/netback.c
+++ b/drivers/xen/netback/netback.c
@@ -733,11 +733,11 @@ static void tx_add_credit(struct xen_netif *netif)
 	max_burst = max(max_burst, netif->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = netif->remaining_credit + netif->credit_bytes;
-	if (max_credit < netif->remaining_credit)
+	max_credit = atomic64_read(&netif->remaining_credit) + netif->credit_bytes;
+	if (max_credit < atomic64_read(&netif->remaining_credit))
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	netif->remaining_credit = min(max_credit, max_burst);
+	atomic64_set(&netif->remaining_credit,min(max_credit,max_burst));
 }
 
 static void tx_credit_callback(unsigned long data)
@@ -1147,7 +1147,7 @@ static bool tx_credit_exceeded(struct xen_netif *netif, unsigned size)
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > netif->remaining_credit) {
+	if (size > atomic64_read(&netif->remaining_credit)) {
 		netif->credit_timeout.data     =
 			(unsigned long)netif;
 		netif->credit_timeout.function =
@@ -1195,13 +1195,13 @@ static unsigned net_tx_build_mops(void)
 		memcpy(&txreq, RING_GET_REQUEST(&netif->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > netif->remaining_credit &&
+		if (txreq.size > atomic64_read(&netif->remaining_credit) &&
 		    tx_credit_exceeded(netif, txreq.size)) {
 			netif_put(netif);
 			continue;
 		}
 
-		netif->remaining_credit -= txreq.size;
+		atomic64_sub(txreq.size,&netif->remaining_credit);
 
 		work_to_do--;
 		netif->tx.req_cons = ++idx;
diff --git a/drivers/xen/netback/xenbus.c b/drivers/xen/netback/xenbus.c
index 70636d0..318c5a1 100644
--- a/drivers/xen/netback/xenbus.c
+++ b/drivers/xen/netback/xenbus.c
@@ -33,6 +33,7 @@ static int connect_rings(struct backend_info *);
 static void connect(struct backend_info *);
 static void backend_create_netif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
+static void unregister_rate_watch(struct backend_info *be);
 
 static int netback_remove(struct xenbus_device *dev)
 {
@@ -41,6 +42,7 @@ static int netback_remove(struct xenbus_device *dev)
 	//netback_remove_accelerators(be, dev);
 
 	unregister_hotplug_status_watch(be);
+	unregister_rate_watch(be);
 	if (be->netif) {
 		kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
 		xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
@@ -350,6 +352,15 @@ static void unregister_hotplug_status_watch(struct backend_info *be)
 	be->have_hotplug_status_watch = 0;
 }
 
+static void unregister_rate_watch(struct backend_info *be)
+{
+	if (be->have_rate_watch) {
+		unregister_xenbus_watch(&be->rate_watch);
+		kfree(be->rate_watch.node);
+	}
+	be->have_rate_watch = 0;
+}
+
 static void hotplug_status_changed(struct xenbus_watch *watch,
 				   const char **vec,
 				   unsigned int vec_size)
@@ -371,6 +382,19 @@ static void hotplug_status_changed(struct xenbus_watch *watch,
 	kfree(str);
 }
 
+static void rate_changed(struct xenbus_watch *watch,
+			     const char **vec, unsigned int len)
+{
+
+	struct backend_info *be=container_of(watch,struct backend_info, rate_watch);
+
+	IPRINTK("rate changed\n");
+	xen_net_read_rate(be->dev, &be->netif->credit_bytes,
+			  &be->netif->credit_usec);
+	atomic64_set(&be->netif->remaining_credit,be->netif->credit_bytes);
+	xenbus_write(XBT_NIL, be->dev->nodename, "rate_status", "changed");
+}
+
 static void connect(struct backend_info *be)
 {
 	int err;
@@ -388,7 +412,7 @@ static void connect(struct backend_info *be)
 
 	xen_net_read_rate(dev, &be->netif->credit_bytes,
 			  &be->netif->credit_usec);
-	be->netif->remaining_credit = be->netif->credit_bytes;
+	atomic64_set(&be->netif->remaining_credit,be->netif->credit_bytes);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -401,7 +425,16 @@ static void connect(struct backend_info *be)
 		be->have_hotplug_status_watch = 1;
 	}
 
-	netif_wake_queue(be->netif->dev);
+	unregister_rate_watch(be);
+	err=xenbus_watch_pathfmt(dev, &be->rate_watch,
+				   rate_changed,"%s/%s", dev->nodename, "rate");
+
+	if(!err){
+		be->have_rate_watch=1;
+	}
+	
+	netif_wake_queue(be->netif->dev);	
+	
 }
 
 

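Taken together with the toolstack patch that follows, the diff above implements a
small xenstore handshake: xend writes the new value to the backend's "rate" node
and watches "rate_status"; the rate_changed() callback re-reads the rate, resets
remaining_credit, and writes rate_status = "changed"; xend's watch sees that and
reports success, giving up after 50 seconds. Here is a hedged model of that
write-then-wait pattern in plain C, with pthreads standing in for the xenstore
watch machinery; the function names are illustrative, and this sketches the
protocol, not Xen code.

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ack  = PTHREAD_COND_INITIALIZER;
static bool acked; /* models rate_status == "changed" */

/* Backend side: what rate_changed() does when the watch on .../rate
 * fires: re-read the rate, reset the credit, then acknowledge. */
void backend_watch_fired(void)
{
	/* xen_net_read_rate(...) and the credit reset would go here. */
	pthread_mutex_lock(&lock);
	acked = true;
	pthread_cond_signal(&ack);
	pthread_mutex_unlock(&lock);
}

/* Toolstack side: what set_xen_rate() does: write the new rate,
 * then block with a timeout until the backend acknowledges. */
int toolstack_set_rate(void)
{
	struct timespec deadline;
	bool ok;
	int rc = 0;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 50; /* the patch waits up to 50 seconds */

	/* Write .../rate here, then wait for the acknowledgement. */
	pthread_mutex_lock(&lock);
	while (!acked && rc == 0)
		rc = pthread_cond_timedwait(&ack, &lock, &deadline);
	ok = acked;
	pthread_mutex_unlock(&lock);

	return ok ? 0 : -1; /* mirrors result['status'] in the xend patch */
}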
[-- Attachment #3: 15.add_xen-rate-set_interface.patch --]
[-- Type: application/octet-stream, Size: 5855 bytes --]

diff --git a/xend/XendDomain.py b/xend/XendDomain.py
index 55e8380..17e0d85 100644
--- a/xend/XendDomain.py
+++ b/xend/XendDomain.py
@@ -48,6 +48,7 @@ from xen.xend.XendConstants import DOM_STATE_CRASHED, HVM_PARAM_ACPI_S_STATE
 from xen.xend.XendConstants import TRIGGER_TYPE, TRIGGER_S3RESUME
 from xen.xend.XendDevices import XendDevices
 from xen.xend.XendAPIConstants import *
+from xen.xend.server.netif import parseRate
 
 from xen.xend.xenstore.xstransact import xstransact
 from xen.xend.xenstore.xswatch import xswatch
@@ -1541,7 +1542,6 @@ class XendDomain:
         else:
             log.debug("error: Domain is not running!")
 
-
     def domain_usb_del(self, domid, dev_id):
         dominfo = self.domain_lookup_nr(domid)
         if not dominfo:
@@ -1561,6 +1561,27 @@ class XendDomain:
         else:
             log.debug("error: Domain is not running!")
 
+
+    def domain_set_xen_rate(self, domid, vif_name,xen_rate):
+        log.debug("resetting xen rate limit...")
+	dominfo = self.domain_lookup_nr(domid)
+	vifs= [x for x in dominfo.info['vif_refs']
+                     if dominfo.info['devices'][x][1]['vifname'] == vif_name]
+	if not vifs:
+	    msg="vif %s doesn't exist"% vif_name
+	    log.error(msg)
+	    raise VmError(msg)
+	
+	devinfo = dominfo.info['devices'][vifs[0]]
+	try:
+	    status=dominfo.getDeviceController(devinfo[0]).set_xen_rate(domid,devinfo[1],xen_rate)
+	    if status ==0 :
+		log.info("succeed to reset xen rate")
+	    else:
+		raise VmError("failed to reset xen rate" )
+	except Exception, e:
+	    raise VmError("failed to reset xen rate exception")
+
     def domain_pincpu(self, domid, vcpu, cpumap):
         """Set which cpus vcpu can use
 
diff --git a/xend/server/SrvDomain.py b/xend/server/SrvDomain.py
index 77df314..1b0dfb2 100644
--- a/xend/server/SrvDomain.py
+++ b/xend/server/SrvDomain.py
@@ -225,6 +225,13 @@ class SrvDomain(SrvDir):
         self.acceptCommand(req)
         return self.xd.domain_reset(self.dom.getName())
 
+    def op_set_xen_rate(self, op, req):
+        self.acceptCommand(req)
+        return req.threadRequest(self.do_set_xen_rate, op, req)
+
+    def do_set_xen_rate(self, _, req):
+        return self.xd.domain_set_xen_rate(self.dom.getName(), req)
+ 
     def op_usb_add(self, op, req):
         self.acceptCommand(req)
         return req.threadRequest(self.do_usb_add, op, req)
diff --git a/xend/server/netif.py b/xend/server/netif.py
index 8eb62b4..ca672a3 100644
--- a/xend/server/netif.py
+++ b/xend/server/netif.py
@@ -23,13 +23,17 @@
 import os
 import random
 import re
+import xen.xend.XendDomain
 
+from xen.xend.xenstore.xswatch import xswatch
+from threading import Event
 from xen.xend import XendOptions, sxp
 from xen.xend.server.DevController import DevController
 from xen.xend.XendError import VmError
 from xen.xend.XendXSPolicyAdmin import XSPolicyAdminInstance
 import xen.util.xsm.xsm as security
 from xen.util import xsconstants
+from xen.xend.xenstore.xstransact import xstransact
 
 from xen.xend.XendLogging import log
 
@@ -279,4 +283,38 @@ class NetifController(DevController):
         log.debug("delete tx rate (dev : %s, cmd : %s)", dev, cmd)
         os.system(cmd)
         self.removeBackend(dev, 'tx_rate')
-
+    
+    def set_xen_rate(self,domid, config, limit):
+        xen_rate = parseRate(limit);
+        devid = self.convertToDeviceNumber(config['devid']) 
+
+	self.writeBackend(devid, 'rate_status','unchanged')
+
+	ev=Event()
+	
+	result={'status':-1}
+	
+	backdom_name = xen.xend.XendDomain.instance().privilegedDomain()
+	statusPath=self.backendPath(backdom_name,devid)+'/rate_status'	    
+	
+	xswatch(statusPath, rateChangedCallback,devid,ev, result)
+        self.writeBackend(devid, 'rate', xen_rate)
+	ev.wait(50)
+      
+	self.removeBackend(devid,'rate_status')	
+	return result['status']
+
+def rateChangedCallback(statusPath,devid,ev,result):
+    status=xstransact.Read(statusPath)
+    if status is not None:
+	if status == 'changed':
+	    result['status']=0
+	else:
+	    result['status']=-1
+	    return 1
+    else:
+	result['status']=-1
+	return 1
+	
+    ev.set()
+    return 0	
diff --git a/xm/main.py b/xm/main.py
index 9cb7b0e..cb9550d 100644
--- a/xm/main.py
+++ b/xm/main.py
@@ -117,6 +117,8 @@ SUBCOMMAND_HELP = {
                      'Set the rx rate for a vif.'),
     'tx-rate-set' : ('<Domain> <vifname> <tx_rate_limit>',
                      'Set the tx rate for a vif.'),
+    'xen-rate-set' : ('<Domain> <vifname> <xen_rate_limit>',
+                     'Set the xen rate for a vif.'),
     'migrate'     : ('<Domain> <Host>',
                      'Migrate a domain to another machine.'),
     'pause'       : ('<Domain>', 'Pause execution of a domain.'),
@@ -360,6 +362,7 @@ common_commands = [
     "mem-set",
     "rx-rate-set",
     "tx-rate-set",
+    "xen-rate-set",
     "migrate",
     "pause",
     "reboot",
@@ -394,6 +397,7 @@ domain_commands = [
     "mem-set",
     "rx-rate-set",
     "tx-rate-set",
+    "xen-rate-set",
     "migrate",
     "pause",
     "reboot",
@@ -1545,6 +1549,19 @@ def xm_tx_rate_set(args):
     else:
         server.xend.domain.setTxRate(dom, vifname, tx_rate)
 
+def xm_xen_rate_set(args):
+    arg_check(args, "xen-rate-set", 3)
+
+    dom = args[0]
+    vifname = args[1]
+    xen_rate = args[2]
+
+    if serverType == SERVER_XEN_API:
+        #TODO: add tx rate support
+        print "SERVER_XEN_API"
+    else:
+        server.xend.domain.set_xen_rate(dom, vifname, xen_rate)
+
 def xm_usb_add(args):
     arg_check(args, "usb-add", 2)
     server.xend.domain.usb_add(args[0],args[1])
@@ -3509,6 +3526,7 @@ commands = {
     # rate
     "rx-rate-set": xm_rx_rate_set,
     "tx-rate-set": xm_tx_rate_set,
+    "xen-rate-set": xm_xen_rate_set,
     # cpu commands
     "vcpu-pin": xm_vcpu_pin,
     "vcpu-list": xm_vcpu_list,

[-- Attachment #4: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* RE: A Patch for modify DomU network transmit rate dynamically
  2010-11-04  3:09               ` A Patch for modify DomU network transmit rate dynamically MaoXiaoyun
@ 2010-11-04  3:43                 ` MaoXiaoyun
  0 siblings, 0 replies; 46+ messages in thread
From: MaoXiaoyun @ 2010-11-04  3:43 UTC (permalink / raw)
  To: xen devel; +Cc: jeremy, keir, daniel.stodden


[-- Attachment #1.1: Type: text/plain, Size: 1158 bytes --]



 Well, I tested it: it is necessary to modify netif->remaining_credit.
So the questions that remain are:
1. Currently I use atomic64_t; is it necessary?
2. Can I rmmod and insmod netback when applying this patch?
 
thx
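
On question 1: whether atomic64_t is strictly necessary comes down to which
contexts can touch remaining_credit at the same time, since the transmit path
subtracts from it while the new rate watch callback resets it. If the two can
overlap, a plain unsigned long read-modify-write can lose the reset. A minimal
sketch of the race-free pattern in C11 atomics follows; the names are
illustrative, the kernel's atomic64_t plays the same role here, and whether
netback's contexts can in fact overlap is something to confirm rather than
assume.

#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t remaining_credit;

/* Transmit path: consume credit for one request. With a plain variable
 * this load/subtract/store could interleave with a concurrent reset
 * and silently discard it. */
void tx_consume(uint64_t size)
{
	atomic_fetch_sub(&remaining_credit, size);
}

/* Rate watch callback: apply the new limit immediately instead of
 * waiting for the next timer-driven top-up. */
void rate_watch_reset(uint64_t new_credit_bytes)
{
	atomic_store(&remaining_credit, new_credit_bytes);
}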


From: tinnycloud@hotmail.com
To: xen-devel@lists.xensource.com
CC: keir@xen.org; daniel.stodden@citrix.com; jeremy@goop.org
Subject: A Patch for modify DomU network transmit rate dynamically
Date: Thu, 4 Nov 2010 11:09:55 +0800




Hi:
 
     I've written a patch which supports dynamically updating the domU netif transmit rate.
     Now that it is finished, I have some questions about the patch itself.
 
     1. It seems I don't need to update netif->remaining_credit, since netback.c:tx_add_credit()
         updates it automatically on every transmit, so all I need to do is update netif->credit_bytes
         and netif->credit_usec, right? Also, am I doing this the right way?
     2. I notice that netback is also a module, so I think it can be rmmod'ed and insmod'ed, right? If so,
         I can apply this patch online (with no effect on the running VMs).
 
       Could someone help me confirm this? Many thanks.

[-- Attachment #1.2: Type: text/html, Size: 1897 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2010-11-04  3:43 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <BAY121-W45A47AC73BDA1A9E7474A2DA720@phx.gbl>
     [not found] ` <C8ACD97B.1256D%keir.fraser@eu.citrix.com>
2010-09-10 11:01   ` VM hung after running sometime MaoXiaoyun
2010-09-19 10:37     ` MaoXiaoyun
2010-09-19 11:49       ` Keir Fraser
2010-09-19 12:21         ` Zhang, Yang Z
2010-09-20  6:00         ` MaoXiaoyun
2010-09-20  7:45           ` Keir Fraser
2010-09-20  8:23             ` MaoXiaoyun
2010-09-20  9:15             ` MaoXiaoyun
2010-09-20  9:35               ` Keir Fraser
2010-09-21  5:02                 ` MaoXiaoyun
2010-09-21  7:53                   ` Keir Fraser
2010-09-21  9:24                     ` wei song
2010-09-21  9:49                       ` wei song
2010-09-21 17:28                     ` Jeremy Fitzhardinge
2010-09-22  0:02                       ` MaoXiaoyun
2010-09-22  0:17                         ` Jeremy Fitzhardinge
2010-09-22  1:19                           ` MaoXiaoyun
2010-09-22 18:31                             ` Jeremy Fitzhardinge
2010-09-23  0:55                               ` MaoXiaoyun
2010-09-23 23:20                                 ` Jeremy Fitzhardinge
2010-09-24  4:29                                   ` MaoXiaoyun
2010-09-25  9:33                                   ` MaoXiaoyun
2010-09-25 10:40                                     ` wei song
2010-09-27 18:02                                       ` Jeremy Fitzhardinge
2010-09-27 11:56                                     ` MaoXiaoyun
2010-09-28  5:43                                   ` MaoXiaoyun
2010-09-28 11:23                                     ` MaoXiaoyun
2010-09-28 17:07                                       ` Jeremy Fitzhardinge
2010-09-29  6:01                                         ` MaoXiaoyun
2010-09-29 16:12                                           ` Jeremy Fitzhardinge
2010-10-15 12:43     ` Domain 0 stop response on frequently reboot VMS MaoXiaoyun
2010-10-15 12:57       ` Keir Fraser
2010-10-16  5:39         ` MaoXiaoyun
2010-10-16  7:16           ` Keir Fraser
2010-10-18 21:17           ` Daniel Stodden
2010-10-24  5:48             ` MaoXiaoyun
2010-10-24  5:56               ` Daniel Stodden
2010-10-26  8:16                 ` MaoXiaoyun
2010-10-26  9:09                   ` Daniel Stodden
2010-10-26 10:54                     ` MaoXiaoyun
2010-10-26  9:20                   ` Ian Campbell
2010-10-26 10:59                     ` MaoXiaoyun
2010-10-26 11:54                       ` Domain 0 stop response on frequently reboot VMS, fix xen/master link? Pasi Kärkkäinen
2010-10-26 17:08                         ` Jeremy Fitzhardinge
2010-11-04  3:09               ` A Patch for modify DomU network transmit rate dynamically MaoXiaoyun
2010-11-04  3:43                 ` MaoXiaoyun

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.