* Xenalyze questions
@ 2013-06-06 11:35 Simon Graham
  2013-06-06 13:46 ` George Dunlap
From: Simon Graham @ 2013-06-06 11:35 UTC
  To: xen-devel

I've been trying to use xenalyze to investigate some problems on our Xen 4.2.2-based product and have run into a couple of issues:

First, I ran into a problem running xenalyze on 64-bit Linux because some of the trace data structures were not properly packed. I worked around this by adding the packed attribute where needed, but I suspect there is a more elegant solution (see attached patch).
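
For illustration, a quick sketch (not part of the attached patch) of why this bites on 64-bit: the record structs are overlaid directly on the raw bytes of the trace file, and an 8-byte field followed by a 4-byte field gets padded out to 16 bytes unless packed, so the overlay stops lining up. On 32-bit x86 a long long is only 4-byte aligned inside a struct, which is presumably why this never showed up there.

#include <stdio.h>

/* Same shape as the gpl3 record in the attached patch:
 * an 8-byte field followed by a 4-byte field. */
struct unpacked {
    unsigned long long gfn;
    unsigned int va;
};

struct packed_rec {
    unsigned long long gfn;
    unsigned int va;
} __attribute__((packed));

int main(void)
{
    printf("unpacked: %zu bytes\n", sizeof(struct unpacked));   /* 16 on x86-64 */
    printf("packed:   %zu bytes\n", sizeof(struct packed_rec)); /* 12, matching the record */
    return 0;
}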

With that out of the way, I am now running it on a trace file and it craps out in the following code because e->pf_case has a large negative value (presumably uninitialized):
 
void shadow_mmio_postprocess(struct hvm_data *h)
{
    struct pf_xen_extra *e = &h->inflight.pf_xen;
    if ( opt.summary_info )
    {
        if(e->pf_case)
=>          update_summary(&h->summary.pf_xen[e->pf_case],
                           h->arc_cycles);
        else
            fprintf(warn, "Strange, pf_case 0!\n");
 
        hvm_update_short_summary(h, HVM_SHORT_SUMMARY_MMIO);
    }
 
    if(opt.with_mmio_enumeration)
        enumerate_mmio(h);
}
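
For what it's worth, a guard along these lines would dodge the segfault (PF_XEN_MAX is a hypothetical stand-in; I haven't checked what actually bounds h->summary.pf_xen), though it only masks whatever is leaving pf_case uninitialized:

    /* Hypothetical range check; does not fix the root cause. */
    if ( e->pf_case > 0 && e->pf_case < PF_XEN_MAX )
        update_summary(&h->summary.pf_xen[e->pf_case],
                       h->arc_cycles);
    else
        fprintf(warn, "Strange, pf_case %d out of range!\n", e->pf_case);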
 
The output from running the program is:
 
Starting program: /home/sgraham/sandbox/Community/xenalyze/xenalyze ~/tmp/e.evt
No output defined, using summary.
Using VMX hardware-assisted virtualization.
scan_for_new_pcpu: Activating pcpu 0 at offset 0
Creating vcpu 0 for dom 32768
scan_for_new_pcpu: Activating pcpu 1 at offset 1560
Creating vcpu 1 for dom 32768
scan_for_new_pcpu: Activating pcpu 2 at offset 1772
Creating vcpu 2 for dom 32768
scan_for_new_pcpu: Activating pcpu 3 at offset 68428
Creating vcpu 3 for dom 32768
scan_for_new_pcpu: Activating pcpu 5 at offset 68520
Creating vcpu 5 for dom 32768
scan_for_new_pcpu: Activating pcpu 10 at offset 73632
Creating vcpu 10 for dom 32768
scan_for_new_pcpu: Activating pcpu 11 at offset 90588
Creating vcpu 11 for dom 32768
init_pcpus: through first trace write, done for now.
hvm_generic_postprocess_init: Strange, h->postprocess set!
 
Program received signal SIGSEGV, Segmentation fault.
0x0000000000402888 in update_summary (s=0x7fe8aa769ca8, c=23368) at xenalyze.c:2244
 
This is using Xen 4.2.2 with several VMs running (32-bit Win7, 64-bit HVM Linux, 64-bit Win8). The trace file was generated by a 10s run of 'xentrace -e all' and is 200MB, so I haven't attached it!
 
I suspect the problem is related to the specific set of trace records being output, but I'm having a hard time grokking the code. Any suggestions welcome!

Simon Graham




[-- Attachment #2: xenalyze-packed.patch --]
[-- Type: application/octet-stream, Size: 1416 bytes --]

diff -r 71eeac989efc Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,4 +1,4 @@
-CFLAGS += -g -O2
+CFLAGS += -g -O0
 CFLAGS += -fno-strict-aliasing
 CFLAGS += -std=gnu99
 CFLAGS += -Wall -Wstrict-prototypes
diff -r 71eeac989efc xenalyze.c
--- a/xenalyze.c
+++ b/xenalyze.c
@@ -5628,7 +5628,7 @@
             unsigned long long gl1e, write_val;
             unsigned long long va;
             unsigned flags:29, emulation_count:3;
-        } gpl4;
+        } __attribute__((packed)) gpl4;
     } *r = (typeof(r))ri->d;
  
     union shadow_event sevt = { .event = ri->event };
@@ -5725,7 +5725,7 @@
         struct {
             unsigned long long gfn;
             unsigned int va;
-        } gpl3;
+        } __attribute__((packed)) gpl3;
         struct {
             unsigned long long gfn, va;
         } gpl4;
@@ -5932,7 +5932,7 @@
         struct {
             unsigned long long gl1e, va;
             unsigned int flags;
-        } gpl4;
+        } __attribute__((packed)) gpl4;
     } *r = (typeof(r))ri->d;
     union shadow_event sevt = { .event = ri->event };
     int rec_gpl = sevt.paging_levels + 2;
@@ -6120,7 +6120,7 @@
         struct {
             unsigned long long gl1e, va;
             unsigned int flags;
-        } gpl4;
+        } __attribute__((packed)) gpl4;
     } *r = (typeof(r))ri->d;
     union shadow_event sevt = { .event = ri->event };
     int rec_gpl = sevt.paging_levels + 2;


* Re: Xenalyze questions
  2013-06-06 11:35 Xenalyze questions Simon Graham
@ 2013-06-06 13:46 ` George Dunlap
  2013-06-07  9:14   ` Simon Graham
From: George Dunlap @ 2013-06-06 13:46 UTC
  To: Simon Graham; +Cc: xen-devel

On Thu, Jun 6, 2013 at 12:35 PM, Simon Graham <simon.graham@citrix.com> wrote:
> I've been trying to use xenalyze to investigate some problems on our Xen 4.2.2-based product and have run into a couple of issues:
>
> First, I ran into a problem running xenalyze on 64-bit Linux because some of the trace data structures were not properly packed. I worked around this by adding the packed attribute where needed, but I suspect there is a more elegant solution (see attached patch).
>
> With that out of the way, I am now running it on a trace file and it craps out in the following code because e->pf_case has a large negative value (presumably uninitialized):
>
> void shadow_mmio_postprocess(struct hvm_data *h)
> {
>     struct pf_xen_extra *e = &h->inflight.pf_xen;
>     if ( opt.summary_info )
>     {
>         if(e->pf_case)
> =>          update_summary(&h->summary.pf_xen[e->pf_case],
>                            h->arc_cycles);
>         else
>             fprintf(warn, "Strange, pf_case 0!\n");
>
>         hvm_update_short_summary(h, HVM_SHORT_SUMMARY_MMIO);
>     }
>
>     if(opt.with_mmio_enumeration)
>         enumerate_mmio(h);
> }



>
> The output from running the program is:
>
> Starting program: /home/sgraham/sandbox/Community/xenalyze/xenalyze ~/tmp/e.evt
> No output defined, using summary.
> Using VMX hardware-assisted virtualization.
> scan_for_new_pcpu: Activating pcpu 0 at offset 0
> Creating vcpu 0 for dom 32768
> scan_for_new_pcpu: Activating pcpu 1 at offset 1560
> Creating vcpu 1 for dom 32768
> scan_for_new_pcpu: Activating pcpu 2 at offset 1772
> Creating vcpu 2 for dom 32768
> scan_for_new_pcpu: Activating pcpu 3 at offset 68428
> Creating vcpu 3 for dom 32768
> scan_for_new_pcpu: Activating pcpu 5 at offset 68520
> Creating vcpu 5 for dom 32768
> scan_for_new_pcpu: Activating pcpu 10 at offset 73632
> Creating vcpu 10 for dom 32768
> scan_for_new_pcpu: Activating pcpu 11 at offset 90588
> Creating vcpu 11 for dom 32768
> init_pcpus: through first trace write, done for now.
> hvm_generic_postprocess_init: Strange, h->postprocess set!

Hmm, it looks like this is the problem here.

Can you try adding this patch, and then running the following?

xenalyze -a -s --tolerance=1 [filename] > [filename.dump]

The resulting file may be pretty big; I'll just need the last 50 or so lines.

 -George

[-- Attachment #2: xenalyze-warn-on-postprocess.diff --]
[-- Type: application/octet-stream, Size: 645 bytes --]

# HG changeset patch
# Parent e9dc6fd0b1be1a3b1bc77efca93f71ec2deb2ea3
diff --git a/xenalyze.c b/xenalyze.c
--- a/xenalyze.c
+++ b/xenalyze.c
@@ -4748,9 +4748,11 @@ void hvm_generic_summary(struct hvm_data
 
 void hvm_generic_postprocess_init(struct record_info *ri, struct hvm_data *h)
 {
-    if ( h->post_process != hvm_generic_postprocess )
+    if ( h->post_process != hvm_generic_postprocess ) {
         fprintf(warn, "%s: Strange, h->postprocess set!\n",
                 __func__);
+        error(ERR_WARN, NULL);
+    }
     h->inflight.generic.event = ri->event;
     bcopy(h->d, h->inflight.generic.d, sizeof(unsigned int) * 4); 
 }


* Re: Xenalyze questions
  2013-06-06 13:46 ` George Dunlap
@ 2013-06-07  9:14   ` Simon Graham
  2013-06-07 11:07     ` George Dunlap
From: Simon Graham @ 2013-06-07  9:14 UTC
  To: George Dunlap; +Cc: xen-devel

> > init_pcpus: through first trace write, done for now.
> > hvm_generic_postprocess_init: Strange, h->postprocess set!
> 
> Hmm, it looks like this is the problem here.
> 
> Can you try adding this patch, and then running the following?
> 
> xenalyze -a -s --tolerance=1 [filename] > [filename.dump]
> 
> The resulting file may be pretty big; I'll just need the last 50 or so lines.
> 

Actually it was pretty small - must be the 1st record that is bad!

Tolerating errors at or below 1
scan_for_new_pcpu: Activating pcpu 0 at offset 0
Creating vcpu 0 for dom 32768
scan_for_new_pcpu: Activating pcpu 1 at offset 1560
Creating vcpu 1 for dom 32768
scan_for_new_pcpu: Activating pcpu 2 at offset 1772
Creating vcpu 2 for dom 32768
scan_for_new_pcpu: Activating pcpu 3 at offset 68428
Creating vcpu 3 for dom 32768
scan_for_new_pcpu: Activating pcpu 5 at offset 68520
Creating vcpu 5 for dom 32768
scan_for_new_pcpu: Activating pcpu 10 at offset 73632
Creating vcpu 10 for dom 32768
scan_for_new_pcpu: Activating pcpu 11 at offset 90588
Creating vcpu 11 for dom 32768
init_pcpus: through first trace write, done for now.
]               .... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]               .... .    x. d32768v10 fast mmio va fffff88008808df0
]               .... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
hvm_generic_postprocess_init: Strange, h->postprocess set!

Simon

* Re: Xenalyze questions
  2013-06-07  9:14   ` Simon Graham
@ 2013-06-07 11:07     ` George Dunlap
  2013-06-07 11:46       ` Simon Graham
From: George Dunlap @ 2013-06-07 11:07 UTC
  To: Simon Graham; +Cc: xen-devel

On Fri, Jun 7, 2013 at 10:14 AM, Simon Graham <simon.graham@citrix.com> wrote:
>> > init_pcpus: through first trace write, done for now.
>> > hvm_generic_postprocess_init: Strange, h->postprocess set!
>>
>> Hmm, it looks like this is the problem here.
>>
>> Can you try adding this patch, and then running the following?
>>
>> xenalyze -a -s --tolerance=1 [filename] > [filename.dump]
>>
>> The resulting file may be pretty big; I'll just need the last 50 or so lines.
>>
>
> Actually it was pretty small - must be the 1st record that is bad!
>
> Tolerating errors at or below 1
> scan_for_new_pcpu: Activating pcpu 0 at offset 0
> Creating vcpu 0 for dom 32768
> scan_for_new_pcpu: Activating pcpu 1 at offset 1560
> Creating vcpu 1 for dom 32768
> scan_for_new_pcpu: Activating pcpu 2 at offset 1772
> Creating vcpu 2 for dom 32768
> scan_for_new_pcpu: Activating pcpu 3 at offset 68428
> Creating vcpu 3 for dom 32768
> scan_for_new_pcpu: Activating pcpu 5 at offset 68520
> Creating vcpu 5 for dom 32768
> scan_for_new_pcpu: Activating pcpu 10 at offset 73632
> Creating vcpu 10 for dom 32768
> scan_for_new_pcpu: Activating pcpu 11 at offset 90588
> Creating vcpu 11 for dom 32768
> init_pcpus: through first trace write, done for now.
> ]               .... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
> ]               .... .    x. d32768v10 fast mmio va fffff88008808df0
> ]               .... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
> hvm_generic_postprocess_init: Strange, h->postprocess set!

...I'm trying to figure out how on earth you got a trace like that.
It looks like:
1. The VM took a page fault (EXCEPTION_NMI)
2. Xen determined that it was an MMIO, which would mean that the guest
PT was valid, but pointed to PFN space that Xen didn't recognize
3. It somehow emulated a CPUID instruction???

CPUID doesn't do any memory accesses, so it shouldn't be able to
trigger an MMIO fault like this.

Do you have any way of telling what instruction was at the EIP
(fffff88005a25ad0)?

Also, could you run xenalyze as follows, and attach the first couple
hundred lines:

xenalyze -a [filename] > [filename.dump]

(Not having the -s will remove the path that is crashing, to allow us
to see what happened after this trace.)

 -George

* Re: Xenalyze questions
  2013-06-07 11:07     ` George Dunlap
@ 2013-06-07 11:46       ` Simon Graham
From: Simon Graham @ 2013-06-07 11:46 UTC
  To: George Dunlap; +Cc: xen-devel

> > init_pcpus: through first trace write, done for now.
> > ]               .... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip
> fffff88005a25ad0
> > ]               .... .    x. d32768v10 fast mmio va fffff88008808df0
> > ]               .... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
> > hvm_generic_postprocess_init: Strange, h->postprocess set!
> 
> ...I'm trying to figure out how on earth you got a trace like that.
> It looks like:
> 1. The VM took a page fault (EXCEPTION_NMI)
> 2. Xen determined that it was an MMIO, which would mean that the guest
> PT was valid, but pointed to PFN space that Xen didn't recognize
> 3. It somehow emulated a CPUID instruction???
> 
> CPUID doesn't do any memory accesses, so it shouldn't be able to
> trigger an MMIO fault like this.

According to xentrace, the first VMEXIT was a TRAP_nodevice, followed indeed by a page fault that triggered emulation of a CPUID, which is odd...

CPU2  243316272356568 (+   26064)  VMENTRY
CPU2  243316272362080 (+    5512)  VMEXIT      [ exitcode = 0x00000000, rIP  = 0xfffff80188e57e2e ]
CPU2  243316272362080 (+       0)  TRAP        [ vector = 0x00000007 ]
CPU2  243316272366728 (+    4648)  VMENTRY
CPU2  243316272377592 (+   10864)  VMEXIT      [ exitcode = 0x00000000, rIP  = 0xfffff88005a25ad0 ]
CPU2  243316272377592 (+       0)  shadow_fast_mmio                  [ va = 0xfffff88008808df0 ]
CPU2  243316272377592 (+       0)  CPUID       [ func = 0x00000001, eax = 0x000206c2, ebx = 0x00400800, ecx=0x83b82203, edx = 0x178bfbff ]
CPU2  243316272377592 (+       0)  PF_INJECT   [ errorcode = 0x0b, virt = 0xfffff88008808df0 ]

followed by many repeats of the same CPUID... this might explain my performance problem!!!

BTW: I should say that this system is set up to use shadow for VRAM tracking and does NOT have HAP enabled (because that slows down VRAM tracking too much).
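
(For reference, a guest config fragment along these lines is what selects that mode - just a sketch, since the actual config isn't shown in this thread:)

# Hypothetical xl/xm HVM guest config fragment:
# hap=0 forces the domain onto shadow paging.
builder = "hvm"
hap = 0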

I've seen problems in the past where the emulation setup in the shadow code is woefully limited, leading to just this sort of repeated loop...

> 
> Do you have any way of telling what instruction was at the EIP
> (fffff88005a25ad0)?
> 

Probably, with some effort - I think this was a 64-bit Win8 VM and I can use windbg to figure this out; will get back to you on that!

> Also, could you run xenalyze as follows, and attach the first couple
> hundred lines:
> 
> xenalyze -a [filename] > [filename.dump]
> 

Attached.

Simon

[-- Attachment #2: e-extr.log --]
[-- Type: application/octet-stream, Size: 9119 bytes --]

scan_for_new_pcpu: Activating pcpu 0 at offset 0
Creating vcpu 0 for dom 32768
scan_for_new_pcpu: Activating pcpu 1 at offset 1560
Creating vcpu 1 for dom 32768
scan_for_new_pcpu: Activating pcpu 2 at offset 1772
Creating vcpu 2 for dom 32768
scan_for_new_pcpu: Activating pcpu 3 at offset 68428
Creating vcpu 3 for dom 32768
scan_for_new_pcpu: Activating pcpu 5 at offset 68520
Creating vcpu 5 for dom 32768
scan_for_new_pcpu: Activating pcpu 10 at offset 73632
Creating vcpu 10 for dom 32768
scan_for_new_pcpu: Activating pcpu 11 at offset 90588
Creating vcpu 11 for dom 32768
init_pcpus: through first trace write, done for now.
]               .... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]               .... .    x. d32768v10 fast mmio va fffff88008808df0
]               .... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000000000 .... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000009739 .... .    x. d32768v10 vmentry cycles 23368 !
]  0.000012039 .... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000012039 .... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000012039 .... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000012039 .... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
   0.000019107 x... .    .. d32768v0 hypercall  d (multicall) eip ffffffff810011aa
]  0.000021561 .... .    x. d32768v10 vmentry cycles 22848 !
   0.000023962 x... .    .. d32768v0 hypercall 18 (vcpu_op) eip ffffffff8100130a
]  0.000023995 .... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000023995 .... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000023995 .... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000023995 .... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
   0.000026966 x... .    .. d32768v0 hypercall 1d (sched_op) eip ffffffff810013aa
]  0.000027463 x... .    .. d32768v0   28006(2:8:6) 2 [ 0 0 ]
]  0.000029163 x... .    .. d32768v0   2800e(2:8:e) 2 [ 0 1b7432 ]
]  0.000029440 x... .    .. d32768v0   2800f(2:8:f) 3 [ 7fff 1b7432 ffffffff ]
]  0.000029690 x... .    .. d32768v0   2800a(2:8:a) 4 [ 0 0 7fff 0 ]
   0.000029937 x... .    .. d32768v0 runstate_change d0v0 running->blocked
Creating domain 0
Creating vcpu 0 for dom 0
Using first_tsc for d0v0 (25984 cycles)
   0.000030234 x... .    .. d?v? runstate_change d32767v0 runnable->running
Creating domain 32767
Creating vcpu 0 for dom 32767
]  0.000033488 -... .    x. d32768v10 vmentry cycles 22776 !
   0.000034805 x... .    .. d32767v0 pm_idle_start c3
]  0.000035815 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000035815 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000035815 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000035815 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000045264 -... .    x. d32768v10 vmentry cycles 22672 !
]  0.000047608 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000047608 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000047608 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000047608 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000057033 -... .    x. d32768v10 vmentry cycles 22616 !
]  0.000059347 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000059347 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000059347 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000059347 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000068776 -... .    x. d32768v10 vmentry cycles 22624 !
]  0.000071087 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000071087 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000071087 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000071087 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000080512 -... .    x. d32768v10 vmentry cycles 22616 !
]  0.000082890 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000082890 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000082890 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000082890 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000092329 -... .    x. d32768v10 vmentry cycles 22648 !
]  0.000094643 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000094643 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000094643 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000094643 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000104062 -... .    x. d32768v10 vmentry cycles 22600 !
]  0.000106375 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000106375 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000106375 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000106375 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000115801 -... .    x. d32768v10 vmentry cycles 22616 !
]  0.000118152 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000118152 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000118152 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000118152 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000127664 -... .    x. d32768v10 vmentry cycles 22824 !
]  0.000129971 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000129971 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000129971 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000129971 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000139640 -... .    x. d32768v10 vmentry cycles 23200 !
]  0.000141951 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000141951 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000141951 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000141951 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000151390 -... .    x. d32768v10 vmentry cycles 22648 !
]  0.000153694 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000153694 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000153694 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000153694 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000163133 -... .    x. d32768v10 vmentry cycles 22648 !
]  0.000165437 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000165437 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000165437 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000165437 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000174792 -... .    x. d32768v10 vmentry cycles 22448 !
]  0.000177186 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000177186 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000177186 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000177186 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000186552 -... .    x. d32768v10 vmentry cycles 22472 !
]  0.000188856 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000188856 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000188856 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000188856 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000198251 -... .    x. d32768v10 vmentry cycles 22544 !
]  0.000200555 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000200555 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000200555 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000200555 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000209964 -... .    x. d32768v10 vmentry cycles 22576 !
]  0.000212611 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000212611 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000212611 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000212611 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000221944 -... .    x. d32768v10 vmentry cycles 22392 !
]  0.000224248 -... .    x. d32768v10 vmexit exit_reason EXCEPTION_NMI eip fffff88005a25ad0
]  0.000224248 -... .    x. d32768v10 fast mmio va fffff88008808df0
]  0.000224248 -... .    x. d32768v10 cpuid [ 1 206c2 400800 83b82203 178bfbff ]
   0.000224248 -... .    x. pf_inject64 guest_cr2 fffff88008808df0  guest_ec b
]  0.000233653 -... .    x. d32768v10 vmentry cycles 22568 !

