* about the memory paging
@ 2010-09-03 0:32 linqaingmin
2010-09-03 15:08 ` Patrick Colp
0 siblings, 1 reply; 6+ messages in thread
From: linqaingmin @ 2010-09-03 0:32 UTC (permalink / raw)
To: xen-devel
hi all
An EPT violation is delivered to ept_handle_violation(), which calls hvm_hap_nested_page_fault(); that function checks the p2m page type and, for a paged-out page, calls p2m_mem_paging_populate().
At that point an event is sent to notify the user-space "xenpaging" process to page the frame back in, but the page-in is not yet complete when the faulting instruction is next executed.
I think the sequence p2m_mem_paging_populate() --> p2m_mem_paging_prep() --> p2m_mem_paging_resume() must complete before the guest retries the instruction above.
Is that right?
Thanks
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
* Re: about the memory paging
2010-09-03 0:32 about the memory paging linqaingmin
@ 2010-09-03 15:08 ` Patrick Colp
2010-09-04 8:55 ` linqaingmin
0 siblings, 1 reply; 6+ messages in thread
From: Patrick Colp @ 2010-09-03 15:08 UTC (permalink / raw)
To: linqaingmin; +Cc: xen-devel
Hi,
Sorry, I'm not quite sure what you're asking and/or if you ran into a
problem? Are you just asking how the xenpaging mechanism works?
Patrick
* Re: about the memory paging
2010-09-03 15:08 ` Patrick Colp
@ 2010-09-04 8:55 ` linqaingmin
2010-09-14 14:44 ` Jiang, Yunhong
0 siblings, 1 reply; 6+ messages in thread
From: linqaingmin @ 2010-09-04 8:55 UTC (permalink / raw)
To: Patrick Colp; +Cc: xen-devel
hi
We run a Windows 2003 HVM guest on Xen 4.0.1; the VM has 2048 MB of memory and 2 VCPUs.
We run the command "xenpaging domID 260000" to page out 260,000 pages (roughly 1 GB) of the guest.
It causes two different crashes.
1) The first crash; the xm dmesg output follows:
(XEN) vmx.c:2150:d6 EPT violation 0x1 (r--/---), gpa 0x0000007fbba020, mfn 0xffffffffff, type 10.
(XEN) p2m-ept.c:533:d6 Walking EPT tables for domain 6 gfn 7fbba
(XEN) p2m-ept.c:552:d6 epte 435b38007
(XEN) p2m-ept.c:552:d6 epte 4395b3007
(XEN) p2m-ept.c:552:d6 epte 433f7f007
(XEN) p2m-ept.c:552:d6 epte ffffffffffa00
(XEN) domain_crash called from vmx.c:2160
(XEN) Domain 6 (vcpu#1) crashed on cpu#14:
(XEN) ----[ Xen-4.0.1 x86_64 debug=n Not tainted ]----
(XEN) CPU: 14
(XEN) RIP: 0008:[<000000008088dc37>]
(XEN) RFLAGS: 0000000000010246 CONTEXT: hvm guest
(XEN) rax: 000000007fbba020 rbx: 00000000f7727000 rcx: 0000000000000000
(XEN) rdx: 0000000080010031 rsi: 000000008996a418 rdi: 00000000f772a090
(XEN) rbp: 0000000089d88648 rsp: 00000000baf2ace0 r8: 0000000000000000
(XEN) r9: 0000000000000000 r10: 0000000000000000 r11: 0000000000000000
(XEN) r12: 0000000000000000 r13: 0000000000000000 r14: 0000000000000000
(XEN) r15: 0000000000000000 cr0: 000000008001003b cr4: 00000000000006f9
(XEN) cr3: 0000000000790000 cr2: 00000000c3cfb008
(XEN) ds: 0023 es: 0023 fs: 0030 gs: 0000 ss: 0010 cs: 0008
Here the guest linear-address field is invalid (GLA_VALID is clear); the faulting GPA is 0x7fbba020.
2) The second crash is a qemu-dm segmentation fault; the "s" pointer is NULL.
The gdb log follows:
Program terminated with signal 11, Segmentation fault.
#0 0x00000000004451d2 in ide_read_dma_cb (opaque=0xb79028, ret=0) at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/hw/ide.c:1232
1232 if (!s->bs) return; /* ouch! (see ide_flush_cb) */
(gdb) bt
#0 0x00000000004451d2 in ide_read_dma_cb (opaque=0xb79028, ret=0) at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/hw/ide.c:1232
#1 0x000000000041745d in dma_bdrv_cb (opaque=0xbbb1f0, ret=0) at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/dma-helpers.c:97
#2 0x00000000004172f2 in reschedule_dma (opaque=0xbbb1f0) at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/dma-helpers.c:63
#3 0x000000000040c48a in qemu_bh_poll () at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/vl.c:3427
#4 0x000000000040cfe2 in main_loop_wait (timeout=10) at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/vl.c:3831
#5 0x00000000004c2daf in main_loop () at helper2.c:577
#6 0x000000000041056e in main (argc=28, argv=0x7fff9eeee288, envp=0x7fff9eeee370) at /home/Lucifer/xen-4.0.1/tools/ioemu-dir/vl.c:6153
(gdb)
When the guest OS has the PV drivers installed, only the first crash occurs.
lin
* RE: about the memory paging
2010-09-04 8:55 ` linqaingmin
@ 2010-09-14 14:44 ` Jiang, Yunhong
2010-09-15 2:57 ` linqaingmin
0 siblings, 1 reply; 6+ messages in thread
From: Jiang, Yunhong @ 2010-09-14 14:44 UTC (permalink / raw)
To: linqaingmin, Patrick Colp; +Cc: xen-devel, Li, Xin
Do you see this issue with a Linux guest as well?
Is your Windows 2003 guest a PAE guest? If so, can you check whether the faulting instruction is a "mov cr3"? You can either add a hook in the Xen hypervisor to dump the guest memory around the RIP (0x8088dc37), or connect to the guest through windbg with paging off and disassemble it.
I suspect the page used as the guest CR3 has been paged out; for a PAE guest, that causes an EPT violation with the GLA_VALID bit cleared, and then hvm_hap_nested_page_fault() is in fact not called at all.
The easiest experiment is to remove the check for GLA_VALID and see the result.
CC Xin who knows EPT better than me.
I didn't check log 2, so no idea of the reason.
Thanks
--jyh
* Re: about the memory paging
2010-09-14 14:44 ` Jiang, Yunhong
@ 2010-09-15 2:57 ` linqaingmin
2010-09-15 5:41 ` Jiang, Yunhong
0 siblings, 1 reply; 6+ messages in thread
From: linqaingmin @ 2010-09-15 2:57 UTC (permalink / raw)
To: Jiang, Yunhong, Patrick Colp; +Cc: xen-devel, Li, Xin
Thanks, you are right.
The RIP of the Linux guest crash is (XEN) RIP: 0060:[<00000000c03482d0>]
Using objdump we located the instruction: it is "mov %eax,%cr3".
I think the EPT violation is raised by this "mov %eax,%cr3" instruction itself.
The Linux guest objdump output:
c0347e32 <schedule>:
........
c03482cb: 05 00 00 00 40 add $0x40000000,%eax
c03482d0: 0f 22 d8 mov %eax,%cr3
c03482d3: 8b 45 cc mov -0x34(%ebp),%eax
c03482d6: 8b b1 5c 01 00 00 mov 0x15c(%ecx),%esi
lin
* RE: about the memory paging
2010-09-15 2:57 ` linqaingmin
@ 2010-09-15 5:41 ` Jiang, Yunhong
0 siblings, 0 replies; 6+ messages in thread
From: Jiang, Yunhong @ 2010-09-15 5:41 UTC (permalink / raw)
To: linqaingmin, Patrick Colp; +Cc: xen-devel, Li, Xin
Does simply removing the check for EPT_GLA_VALID not work? The check in question:
    if ( (qualification & EPT_GLA_VALID) &&
         hvm_hap_nested_page_fault(gfn) )
        return;
Thanks
--jyh