* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
       [not found] <E88DD564E9DC5446A76B2B47C3BCCA1540A67C2A@pdsmsx503.ccr.corp.intel.com>
@ 2008-12-07  8:41 ` Keir Fraser
  2008-12-08  3:00   ` Cui, Dexuan
  2008-12-12 20:37   ` Gianluca Guida
  0 siblings, 2 replies; 19+ messages in thread
From: Keir Fraser @ 2008-12-07  8:41 UTC (permalink / raw)
  To: Li, Xin, Li, Haicheng, 'xen-devel@lists.xensource.com'
  Cc: Gianluca Guida

On 07/12/2008 02:23, "Li, Xin" <xin.li@intel.com> wrote:

>>> There's a good chance that at least bug #1 is fixed on current tip
>>> (c/s 18881).
>> 
>> OK, we will check it with c/s 18881, thanks.
> 
> The root cause of the crash when booting a 64bit Solaris 10u5 guest is that
> Xen hypervisor has turned off NX as guest AP has not turned on NX, but shadow
> already has NX set...
> So can't understand why c/s 18881 can fix the crash.

It's just a guess. There were shadow bug fixes between the tested c/s and
tip c/s.

 -- Keir

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-07  8:41 ` Weekly VMX status report. Xen: #18846 & Xen0: #749 Keir Fraser
@ 2008-12-08  3:00   ` Cui, Dexuan
  2008-12-12 20:37   ` Gianluca Guida
  1 sibling, 0 replies; 19+ messages in thread
From: Cui, Dexuan @ 2008-12-08  3:00 UTC (permalink / raw)
  To: Keir Fraser, Li, Xin, Li, Haicheng,
	'xen-devel@lists.xensource.com'
  Cc: Gianluca Guida

[-- Attachment #1: Type: text/plain, Size: 1227 bytes --]

> 1. SMP 64bit Solaris10u5 causes Xen crash.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1393 

I tried to reproduce the bug just now. It's still there with the latest tip, c/s 18881.

-- Dexuan

-----Original Message-----
From: xen-devel-bounces@lists.xensource.com [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir Fraser
Sent: 7 December 2008 16:41
To: Li, Xin; Li, Haicheng; 'xen-devel@lists.xensource.com'
Cc: Gianluca Guida
Subject: Re: [Xen-devel] Weekly VMX status report. Xen: #18846 & Xen0: #749

On 07/12/2008 02:23, "Li, Xin" <xin.li@intel.com> wrote:

>>> There's a good chance that at least bug #1 is fixed on current tip
>>> (c/s 18881).
>> 
>> OK, we will check it with c/s 18881, thanks.
> 
> The root cause of the crash when booting a 64bit Solaris 10u5 guest is that
> Xen hypervisor has turned off NX as guest AP has not turned on NX, but shadow
> already has NX set...
> So can't understand why c/s 18881 can fix the crash.

It's just a guess. There were shadow bug fixes between the tested c/s and
tip c/s.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-07  8:41 ` Weekly VMX status report. Xen: #18846 & Xen0: #749 Keir Fraser
  2008-12-08  3:00   ` Cui, Dexuan
@ 2008-12-12 20:37   ` Gianluca Guida
  2008-12-12 23:22     ` Keir Fraser
  1 sibling, 1 reply; 19+ messages in thread
From: Gianluca Guida @ 2008-12-12 20:37 UTC (permalink / raw)
  To: Keir Fraser
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

Hello,

Keir Fraser wrote:
> On 07/12/2008 02:23, "Li, Xin" <xin.li@intel.com> wrote:
> 
>>>> There's a good chance that at least bug #1 is fixed on current tip
>>>> (c/s 18881).
>>> OK, we will check it with c/s 18881, thanks.
>> The root cause of the crash when booting a 64bit Solaris 10u5 guest is that
>> Xen hypervisor has turned off NX as guest AP has not turned on NX, but shadow
>> already has NX set...

This is what I think is going on:

The BSP has finished its bootstrap phase, has enabled EFER's NX bit, and 
has marked the kernel mappings of the pages that are about to be used as 
pagetables non-executable.

An AP enables long mode, but not EFER's NX bit. It then accesses an 
address whose guest walk involves pages that are not yet shadowed, and 
the shadow code steps in, trying to remove writable mappings of the 
guest page in question.

And here -- I think -- is the bug: when we update the MSR (on context 
switch), my understanding is that we update it based on the guest 
vcpu's state. So when the shadow code tries to read the shadow mapping 
of the soon-to-be-promoted page, it accesses a shadow mapping with the 
NX bit set and gets a reserved-bit pagefault, because the host's EFER 
has the NX feature disabled.

I see two ways to fix this:

- Disable NX support in shadows until all vcpus have enabled EFER's NX 
bit. This would mean the guest believes it has NX protection on at 
least one vcpu when in reality it doesn't. Also, to properly support 
execute-disable protection, we would need to blow away the shadows once 
we can finally enable the NX bit in them.

- Always enable EFER's NX in host mode. We could also avoid changing 
EFER between vmentry and vmexit, but this would cause some issues in 
reserved-bit handling for page faults. That could easily be fixed in 
the shadow code, but in HAP it would make the whole thing more 
complicated.

Do the people who know the actual VMX code better than I do have any 
opinion on the best way to fix this?

Thanks,
Gianluca


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-12 20:37   ` Gianluca Guida
@ 2008-12-12 23:22     ` Keir Fraser
  2008-12-12 23:30       ` Gianluca Guida
  0 siblings, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2008-12-12 23:22 UTC (permalink / raw)
  To: Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

On 12/12/2008 20:37, "Gianluca Guida" <gianluca.guida@eu.citrix.com> wrote:

> - Disable NX support in shadows until all vcpus have EFER's NX enabled.
> This would means that the guest thinks it has NX bit protection in at
> least one vcpus but in reality it doesn't. Also, to properly support
> execute-disable protection, we would need to blow the shadows when we
> can finally enable NX bit in shadows.
> 
> - Always enable EFER's NX in host mode. We could also avoid changing
> EFER's status between vmentry and vmexits, but this would cause some
> issue in reserved bit handling in page faults. This could be easily
> fixed in shadow code, but in HAP would make the whole thing more
> complicated.
> 
> Do the people that know better than me the actual VMX code have any
> opinion about the best way to fix this?

Is there any guest that actually cares about having EFER_NX really cleared?
Presumably the only way of detecting this would be reserved-bit page faults,
which no OS is likely to want to deliberately cause?

There's been some talk of NX'ing up Xen's data areas. In that case we
*would* need NX enabled always in host mode. Would it actually be worth
enabling/disabling on vmexit/vmentry?

SVM actually does automatically save/restore EFER on vmentry/vmexit. Could
we use VMX's MSR load/save support for the same effect? Would it be slow, or
interact badly with the existing support for switching EFER.LME?

 -- Keir


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-12 23:22     ` Keir Fraser
@ 2008-12-12 23:30       ` Gianluca Guida
  2008-12-13 14:06         ` Keir Fraser
  0 siblings, 1 reply; 19+ messages in thread
From: Gianluca Guida @ 2008-12-12 23:30 UTC (permalink / raw)
  To: Keir Fraser
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin



Keir Fraser wrote:
> Is there any guest that actually cares about having EFER_NX really cleared?
> Presumably the only way of detecting this would be reserved-bit page faults,
> which no OS is likely to want to deliberately cause?

Yes -- no OS we've encountered so far relies on reserved-bit faults 
(with the most notable exception of Tim's fast path for MMIO and 
non-present pages in Xen's shadow entries).
I am sure of this for a very simple reason -- a little secret I'd like 
to share with you and xen-devel: the shadow code doesn't check for 
reserved bits at all when propagating changes from guest to shadows, so 
we never propagate reserved-bit faults to guests. [I'm working on this.]

> There's been some talk of NX'ing up Xen's data areas. In that case we
> *would* need NX enabled always in host mode. Would it actually be worth
> enabling/disabling on vmexit/vmentry?
> 
> SVM actually does automatically save/restore EFER on vmentry/vmexit. Could
> we use VMX's MSR load/save support for the same effect? Would it be slow, or
> interact badly with the existing support for switching EFER.LME?

AFAIK, this would be slow.

Thanks,
Gianluca


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-12 23:30       ` Gianluca Guida
@ 2008-12-13 14:06         ` Keir Fraser
  2008-12-13 15:14           ` Nakajima, Jun
  0 siblings, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2008-12-13 14:06 UTC (permalink / raw)
  To: Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

On 12/12/2008 23:30, "Gianluca Guida" <gianluca.guida@eu.citrix.com> wrote:

> Keir Fraser wrote:
>> Is there any guest that actually cares about having EFER_NX really cleared?
>> Presumably the only way of detecting this would be reserved-bit page faults,
>> which no OS is likely to want to deliberately cause?
> 
> Yes, no OS we've actually experienced at the moment rely on reserved bit
> faults (with the most notable exception of Tim's fast path for MMIO and
> non present pages in Xen's shadow entries).
> I am sure about this for a very simple reason: -- some kind of secret I
> would like to share with you and xen-devel -- shadow code doesn't check
> at all for reserved bits when propagating changes from guest to shadows,
> so we never propagate reserved bit faults to guests. [working on this]

Well, I vote for leaving EFER_NX always on then. It makes the code simpler
too. Anyone against this?

 -- Keir


* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-13 14:06         ` Keir Fraser
@ 2008-12-13 15:14           ` Nakajima, Jun
  2008-12-13 15:40             ` Keir Fraser
  0 siblings, 1 reply; 19+ messages in thread
From: Nakajima, Jun @ 2008-12-13 15:14 UTC (permalink / raw)
  To: Keir Fraser, Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

On 12/13/2008 6:06:18 AM, Keir Fraser wrote:
> On 12/12/2008 23:30, "Gianluca Guida" <gianluca.guida@eu.citrix.com>
> wrote:
>
> > Keir Fraser wrote:
> > > Is there any guest that actually cares about having EFER_NX really
> > > cleared? Presumably the only way of detecting this would be
> > > reserved-bit page faults, which no OS is likely to want to
> > > deliberately cause?
> >
> > Yes, no OS we've actually experienced at the moment rely on reserved
> > bit faults (with the most notable exception of Tim's fast path for
> > MMIO and non present pages in Xen's shadow entries).
> > I am sure about this for a very simple reason: -- some kind of
> > secret I would like to share with you and xen-devel -- shadow code
> > doesn't check at all for reserved bits when propagating changes from
> > guest to shadows, so we never propagate reserved bit faults to
> > guests. [working on this]
>
> Well, I vote for leaving EFER_NX always on then. It makes the code
> simpler too. Anyone against this?

Agreed. Modern VMX-capable processors can save/restore the guest/host IA32_EFER in the VMCS at VM exit/entry time, and I don't expect additional overhead from that.

So the options are:
1. Enable that feature (though it does not help old processors), or
2. If the guest does not enable NX but the processor supports it, set/reset NX at VM entry/exit. We already handle other bits (e.g. SCE) this way.


Thanks,
Jun Nakajima | Intel Open Source Technology Center


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-13 15:14           ` Nakajima, Jun
@ 2008-12-13 15:40             ` Keir Fraser
  2008-12-13 22:43               ` Nakajima, Jun
  0 siblings, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2008-12-13 15:40 UTC (permalink / raw)
  To: Nakajima, Jun, Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

On 13/12/2008 15:14, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:

>> Well, I vote for leaving EFER_NX always on then. It makes the code
>> simpler too. Anyone against this?
> 
> Agree. Modern VMX-capable processors can save/restore Guest/Host IA32_EFER in
> the VMCS at VM exit/entry time, and I don't expect additional overheads from
> that.
> 
> So the options are:
> 1. Enable that feature (does not help old processors, though), or
> 2. If the guest does not enable NX but the processor does, set/reset NX at VM
> entry/exit. We are already handling other bits (e.g. SCE).

I'm not clear what your position is from the above. I should point out that
we don't mess with EFER on vmentry/vmexit at all right now. We fix up
EFER.SCE and other bits on context switch, but not on every entry/exit.

I think you agree that we don't need to keep guest 'actual' EFER.NX in sync
with its 'shadow' EFER.NX?

 -- Keir


* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-13 15:40             ` Keir Fraser
@ 2008-12-13 22:43               ` Nakajima, Jun
  2008-12-13 23:21                 ` Keir Fraser
  2008-12-15 13:02                 ` Keir Fraser
  0 siblings, 2 replies; 19+ messages in thread
From: Nakajima, Jun @ 2008-12-13 22:43 UTC (permalink / raw)
  To: Keir Fraser, Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

On 12/13/2008 7:40:51 AM, Keir Fraser wrote:
> On 13/12/2008 15:14, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:
>
> > > Well, I vote for leaving EFER_NX always on then. It makes the code
> > > simpler too. Anyone against this?
> >
> > Agree. Modern VMX-capable processors can save/restore Guest/Host
> > IA32_EFER in the VMCS at VM exit/entry time, and I don't expect
> > additional overheads from that.
> >
> > So the options are:
> > 1. Enable that feature (does not help old processors, though), or 2.
> > If the guest does not enable NX but the processor does, set/reset NX
> > at VM entry/exit. We are already handling other bits (e.g. SCE).
>
> I'm not clear what your position is from the above. I should point out
> that we don't mess with EFER on vmentry/vmexit at all right now. We
> fix up EFER.SCE and other bits on context switch, but not on every entry/exit.

I misunderstood what you wanted to do; I thought you wanted to leave EFER_NX always on in _Xen_.

>
> I think you agree that we don't need to keep guest 'actual' EFER.NX in
> sync with its 'shadow' EFER.NX?
>

That should be okay. The fact that we see the NX bit in the shadow page tables means at least the BSP has enabled NX, and I don't expect the other processors to do otherwise. In other words, such out-of-sync situations should be transient anyway.
Jun Nakajima | Intel Open Source Technology Center


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-13 22:43               ` Nakajima, Jun
@ 2008-12-13 23:21                 ` Keir Fraser
  2008-12-15 13:02                 ` Keir Fraser
  1 sibling, 0 replies; 19+ messages in thread
From: Keir Fraser @ 2008-12-13 23:21 UTC (permalink / raw)
  To: Nakajima, Jun, Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

On 13/12/2008 22:43, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:

>> I think you agree that we don't need to keep guest 'actual' EFER.NX in
>> sync with its 'shadow' EFER.NX?
>> 
> 
> That should be okay. The fact we see the NX bit in the shadow page tables
> means at least the BSP enabled NX. And I don't expect other processors would
> do otherwise. In other words, such out-of-sync situations be transient anyway.

It only matters if we think any guest depends on correct behaviour (i.e.,
reserved-bit #PF) when EFER.NX=0. Which I doubt.

 -- Keir


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-13 22:43               ` Nakajima, Jun
  2008-12-13 23:21                 ` Keir Fraser
@ 2008-12-15 13:02                 ` Keir Fraser
  2008-12-16  5:54                   ` Li, Haicheng
  1 sibling, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2008-12-15 13:02 UTC (permalink / raw)
  To: Nakajima, Jun, Gianluca Guida
  Cc: Li, Haicheng, 'xen-devel@lists.xensource.com', Li, Xin

[-- Attachment #1: Type: text/plain, Size: 568 bytes --]

On 13/12/2008 22:43, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:

>> I think you agree that we don't need to keep guest 'actual' EFER.NX in
>> sync with its 'shadow' EFER.NX?
>> 
> 
> That should be okay. The fact we see the NX bit in the shadow page tables
> means at least the BSP enabled NX. And I don't expect other processors would
> do otherwise. In other words, such out-of-sync situations be transient anyway.

Attached is my proposed patch. Does it look okay to everyone? Haicheng:
could you test if it gets rid of the HVM Solaris crash?

 Thanks,
 Keir


[-- Attachment #2: nx-patch --]
[-- Type: application/octet-stream, Size: 2383 bytes --]

diff -r f827181eadd4 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c	Mon Dec 15 11:37:14 2008 +0000
+++ b/xen/arch/x86/hvm/vmx/vmx.c	Mon Dec 15 12:59:57 2008 +0000
@@ -306,9 +306,6 @@ static void vmx_restore_host_msrs(void)
         wrmsrl(msr_index[i], host_msr_state->msrs[i]);
         clear_bit(i, &host_msr_state->flags);
     }
-
-    if ( cpu_has_nx && !(read_efer() & EFER_NX) )
-        write_efer(read_efer() | EFER_NX);
 }
 
 static void vmx_save_guest_msrs(struct vcpu *v)
@@ -342,39 +339,23 @@ static void vmx_restore_guest_msrs(struc
         clear_bit(i, &guest_flags);
     }
 
-    if ( (v->arch.hvm_vcpu.guest_efer ^ read_efer()) & (EFER_NX | EFER_SCE) )
+    if ( (v->arch.hvm_vcpu.guest_efer ^ read_efer()) & EFER_SCE )
     {
         HVM_DBG_LOG(DBG_LEVEL_2,
                     "restore guest's EFER with value %lx",
                     v->arch.hvm_vcpu.guest_efer);
-        write_efer((read_efer() & ~(EFER_NX | EFER_SCE)) |
-                   (v->arch.hvm_vcpu.guest_efer & (EFER_NX | EFER_SCE)));
+        write_efer((read_efer() & ~EFER_SCE) |
+                   (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
     }
 }
 
 #else  /* __i386__ */
 
 #define vmx_save_host_msrs()        ((void)0)
-
-static void vmx_restore_host_msrs(void)
-{
-    if ( cpu_has_nx && !(read_efer() & EFER_NX) )
-        write_efer(read_efer() | EFER_NX);
-}
+#define vmx_restore_host_msrs()     ((void)0)
 
 #define vmx_save_guest_msrs(v)      ((void)0)
-
-static void vmx_restore_guest_msrs(struct vcpu *v)
-{
-    if ( (v->arch.hvm_vcpu.guest_efer ^ read_efer()) & EFER_NX )
-    {
-        HVM_DBG_LOG(DBG_LEVEL_2,
-                    "restore guest's EFER with value %lx",
-                    v->arch.hvm_vcpu.guest_efer);
-        write_efer((read_efer() & ~EFER_NX) |
-                   (v->arch.hvm_vcpu.guest_efer & EFER_NX));
-    }
-}
+#define vmx_restore_guest_msrs(v)   ((void)0)
 
 static enum handler_return long_mode_do_msr_read(struct cpu_user_regs *regs)
 {
@@ -1190,8 +1171,8 @@ static void vmx_update_guest_efer(struct
 #endif
 
     if ( v == current )
-        write_efer((read_efer() & ~(EFER_NX|EFER_SCE)) |
-                   (v->arch.hvm_vcpu.guest_efer & (EFER_NX|EFER_SCE)));
+        write_efer((read_efer() & ~EFER_SCE) |
+                   (v->arch.hvm_vcpu.guest_efer & EFER_SCE));
 }
 
 static void vmx_flush_guest_tlbs(void)



* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-15 13:02                 ` Keir Fraser
@ 2008-12-16  5:54                   ` Li, Haicheng
  2008-12-16  7:24                     ` Li, Haicheng
  0 siblings, 1 reply; 19+ messages in thread
From: Li, Haicheng @ 2008-12-16  5:54 UTC (permalink / raw)
  To: Keir Fraser, Nakajima, Jun, Gianluca Guida
  Cc: 'xen-devel@lists.xensource.com', Li, Xin

Keir Fraser wrote:
> On 13/12/2008 22:43, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:
> 
>>> I think you agree that we don't need to keep guest 'actual' EFER.NX
>>> in sync with its 'shadow' EFER.NX?
>>> 
>> 
>> That should be okay. The fact we see the NX bit in the shadow page
>> tables means at least the BSP enabled NX. And I don't expect other
>> processors would do otherwise. In other words, such out-of-sync
>> situations be transient anyway. 
> 
> Attached is my proposed patch. Does it look okay to everyone?
> Haicheng: could you test if it gets rid of the HVM Solaris crash?
> 
>  Thanks,
>  Keir

Yes, we will test your patch and keep you posted with the result.

-haicheng


* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-16  5:54                   ` Li, Haicheng
@ 2008-12-16  7:24                     ` Li, Haicheng
  2008-12-16 11:55                       ` Keir Fraser
  0 siblings, 1 reply; 19+ messages in thread
From: Li, Haicheng @ 2008-12-16  7:24 UTC (permalink / raw)
  To: Li, Haicheng, Keir Fraser, Nakajima, Jun, Gianluca Guida
  Cc: 'xen-devel@lists.xensource.com', Li, Xin

Li, Haicheng wrote:
> Keir Fraser wrote:
>> On 13/12/2008 22:43, "Nakajima, Jun" <jun.nakajima@intel.com> wrote:
>> 
>>>> I think you agree that we don't need to keep guest 'actual' EFER.NX
>>>> in sync with its 'shadow' EFER.NX?
>>>> 
>>> 
>>> That should be okay. The fact we see the NX bit in the shadow page
>>> tables means at least the BSP enabled NX. And I don't expect other
>>> processors would do otherwise. In other words, such out-of-sync
>>> situations be transient anyway.
>> 
>> Attached is my proposed patch. Does it look okay to everyone?
>> Haicheng: could you test if it gets rid of the HVM Solaris crash?
>> 
>>  Thanks,
>>  Keir
> 
> Yes, we will test your patch and keep you posted with result.
> 
> -haicheng

Hi Keir,

We tested your patch, and it does fix the HVM Solaris crash: the SMP 64-bit Solaris 10u5 HVM guest boots up successfully with your patch applied. Thanks.



-haicheng


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-16  7:24                     ` Li, Haicheng
@ 2008-12-16 11:55                       ` Keir Fraser
  0 siblings, 0 replies; 19+ messages in thread
From: Keir Fraser @ 2008-12-16 11:55 UTC (permalink / raw)
  To: Li, Haicheng, Nakajima, Jun, Gianluca Guida
  Cc: 'xen-devel@lists.xensource.com', Li, Xin

On 16/12/2008 07:24, "Li, Haicheng" <haicheng.li@intel.com> wrote:

>> 
>> Yes, we will test your patch and keep you posted with result.
>> 
>> -haicheng
> 
> Hi Keir,
> 
> We tested your patch, it does fix the bug of HVM Solaris crash, viz. SMP 64bit
> Solaris10u5 HVM can boot up successfully with your patch applied. Thanks.

Applied to xen-unstable as c/s 18922.

Do we need this for 3.3 as well? And what about 3.2?

 -- Keir


* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-16 12:29 ` Keir Fraser
@ 2008-12-16 12:33   ` Li, Xin
  0 siblings, 0 replies; 19+ messages in thread
From: Li, Xin @ 2008-12-16 12:33 UTC (permalink / raw)
  To: Keir Fraser, Li, Haicheng, Nakajima, Jun, Gianluca Guida
  Cc: 'xen-devel@lists.xensource.com'

>>> Applied to Xen-unstable as c/s 18922.
>>>
>>> Do we need this for 3.3 as well? And what about 3.2?
>>>
>>
>> 3.3 needs it, but 3.2 may not, as this crash is introduced by 17781. But an
>> evil guest may crash a Xen 3.2 hypervisor...
>
>Yes, I think it is probably needed really.

Agree!
-Xin


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
       [not found] <E88DD564E9DC5446A76B2B47C3BCCA15432C0FC2@pdsmsx503.ccr.corp.intel.com>
@ 2008-12-16 12:29 ` Keir Fraser
  2008-12-16 12:33   ` Li, Xin
  0 siblings, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2008-12-16 12:29 UTC (permalink / raw)
  To: Li, Xin, Li, Haicheng, Nakajima, Jun, Gianluca Guida
  Cc: 'xen-devel@lists.xensource.com'

On 16/12/2008 12:18, "Li, Xin" <xin.li@intel.com> wrote:

>> Applied to Xen-unstable as c/s 18922.
>> 
>> Do we need this for 3.3 as well? And what about 3.2?
>> 
> 
> 3.3 needs it, but 3.2 may not, as this crash is introduced by 17781. But an
> evil guest may crash a Xen 3.2 hypervisor...

Yes, I think it is probably needed really.

 -- Keir


* RE: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-06 12:16 ` Keir Fraser
@ 2008-12-06 12:21   ` Li, Haicheng
  0 siblings, 0 replies; 19+ messages in thread
From: Li, Haicheng @ 2008-12-06 12:21 UTC (permalink / raw)
  To: Keir Fraser, 'xen-devel@lists.xensource.com'

Keir Fraser wrote:
> On 06/12/2008 11:45, "Li, Haicheng" <haicheng.li@intel.com> wrote:
> 
>> Hi all,
>> 
>> This is our weekly test report for Xen-unstable tree. Two new issues
>> were found; P1 bug #1393 should be always there, and was just
>> exposed by a new test case. 
>> 
>> New Bugs:
>> =====================================================================
>> 1. SMP 64bit Solaris10u5 causes Xen crash.
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1393
>> 2. cpu idle time / cpufreq residency becomes extremely large.
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1394
> 
> There's a good chance that at least bug #1 is fixed on current tip
> (c/s 18881).
> 

OK, we will check it with c/s 18881, thanks.



-haicheng


* Re: Weekly VMX status report. Xen: #18846 & Xen0: #749
  2008-12-06 11:45 Li, Haicheng
@ 2008-12-06 12:16 ` Keir Fraser
  2008-12-06 12:21   ` Li, Haicheng
  0 siblings, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2008-12-06 12:16 UTC (permalink / raw)
  To: Li, Haicheng, 'xen-devel@lists.xensource.com'

On 06/12/2008 11:45, "Li, Haicheng" <haicheng.li@intel.com> wrote:

> Hi all,
> 
> This is our weekly test report for Xen-unstable tree. Two new issues were
> found; P1 bug #1393 should be always there, and was just exposed by a new test
> case.
> 
> New Bugs:
> =====================================================================
> 1. SMP 64bit Solaris10u5 causes Xen crash.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1393
> 2. cpu idle time / cpufreq residency becomes extremely large.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1394

There's a good chance that at least bug #1 is fixed on current tip (c/s
18881).

 Thanks,
 Keir


* Weekly VMX status report. Xen: #18846 & Xen0: #749
@ 2008-12-06 11:45 Li, Haicheng
  2008-12-06 12:16 ` Keir Fraser
  0 siblings, 1 reply; 19+ messages in thread
From: Li, Haicheng @ 2008-12-06 11:45 UTC (permalink / raw)
  To: 'xen-devel@lists.xensource.com'

Hi all,

This is our weekly test report for the Xen-unstable tree. Two new issues were found; P1 bug #1393 has probably always been there, and was just exposed by a new test case.

New Bugs:
=====================================================================
1. SMP 64bit Solaris10u5 causes Xen crash.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1393
2. cpu idle time / cpufreq residency becomes extremely large.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1394

Old Bugs:
=====================================================================
1. Stubdom-based guest crashes at the creation stage.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1382
2. With an AHCI disk drive, dom S3 resume fails.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1374
3. Stubdom-based guest hangs when hdc is assigned to it.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1373
4. [stubdom] The xm save command hangs while saving <Domain-dm>.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1377
5. [stubdom] Cannot restore a stubdom-based domain.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1378
6. [VT-d] Failed to reassign some PCI-e NICs.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1379

Xen Info:
============================================================================
xen-changeset:   18846:a00eb6595d3c
dom0-changeset:   749:cdc6729dc702

ioemu git: 
commit b4d410a1c28fcd1ea528d94eb8b94b79286c25ed
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Oct 23 10:26:02 2008 +0100

Testing Environment:
=====================================================================
IA32E
CPU         :  Nehalem
Dom0 OS     :  RHEL5.1
Memory size :  4G

PAE
CPU         :  Nehalem
Dom0 OS     :  RHEL5.1
Memory size :  4G

Details:
=====================================================================
Platform : x86_64
Service OS : Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Hardware : Nehalem
Xen package: 18846:a00eb6595d3c
Date: Wed Nov 26 21:01:09 EST 2008

               Summary Test Report of Last Session
=====================================================================
                            Total   Pass    Fail    NoResult   Crash
=====================================================================
device_model_ept            2       2       0         0        0
stubdom_ept                 2       0       2         0        0
ras_ept                     1       1       0         0        0
vtd_ept                     16      13      3         0        0
control_panel_ept           19      18      1         0        0
gtest_ept                   20      20      0         0        0
=====================================================================
device_model_ept            2       2       0         0        0
 :pv_on_up_64_g32e          1       1       0         0        0
 :pv_on_smp_64_g32e         1       1       0         0        0
stubdom_ept                 2       0       2         0        0
 :boot_stubdom_no_qcow_64   1       0       1         0        0
 :boot_stubdom_qcow_64_g3   1       0       1         0        0
ras_ept                     1       1       0         0        0
 :cpu_online_offline_64_g   1       1       0         0        0
vtd_ept                     16      13      3         0        0
 :two_dev_up_xp_nomsi_64_   1       1       0         0        0
 :hp_pci_up_nomsi_64_g32e   1       1       0         0        0
 :hp_pci_smp_xp_nomsi_64_   1       1       0         0        0
 :two_dev_up_64_g32e        1       1       0         0        0
 :hp_pci_up_xp_nomsi_64_g   1       1       0         0        0
 :two_dev_up_nomsi_64_g32   1       1       0         0        0
 :hp_pcie_up_xp_nomsi_64_   1       1       0         0        0
 :two_dev_scp_nomsi_64_g3   1       1       0         0        0
 :hp_pcie_smp_64_g32e       1       0       1         0        0
 :two_dev_smp_nomsi_64_g3   1       1       0         0        0
 :hp_pcie_smp_xp_nomsi_64   1       1       0         0        0
 :two_dev_scp_64_g32e       1       1       0         0        0
 :two_dev_smp_xp_nomsi_64   1       0       1         0        0
 :two_dev_smp_64_g32e       1       1       0         0        0
 :hp_pci_smp_nomsi_64_g32   1       1       0         0        0
 :hp_pcie_up_64_g32e        1       0       1         0        0
control_panel_ept           19      18      1         0        0
 :XEN_1500M_guest_64_g32e   1       1       0         0        0
 :XEN_LM_Continuity_64_g3   1       1       0         0        0
 :XEN_256M_xenu_64_gPAE     1       1       0         0        0
 :XEN_four_vmx_xenu_seq_6   1       1       0         0        0
 :XEN_vmx_vcpu_pin_64_g32   1       1       0         0        0
 :XEN_SR_Continuity_64_g3   1       1       0         0        0
 :XEN_linux_win_64_g32e     1       1       0         0        0
 :XEN_vmx_2vcpu_64_g32e     1       1       0         0        0
 :XEN_1500M_guest_64_gPAE   1       1       0         0        0
 :XEN_four_dguest_co_64_g   1       1       0         0        0
 :XEN_two_winxp_64_g32e     1       1       0         0        0
 :XEN_4G_guest_64_gPAE      1       0       1         0        0
 :XEN_four_sguest_seq_64_   1       1       0         0        0
 :XEN_256M_guest_64_gPAE    1       1       0         0        0
 :XEN_LM_SMP_64_g32e        1       1       0         0        0
 :XEN_Nevada_xenu_64_g32e   1       1       0         0        0
 :XEN_256M_guest_64_g32e    1       1       0         0        0
 :XEN_SR_SMP_64_g32e        1       1       0         0        0
 :XEN_four_sguest_seq_64_   1       1       0         0        0
gtest_ept                   20      20      0         0        0
 :boot_up_acpi_win2k_64_g   1       1       0         0        0
 :boot_up_noacpi_win2k_64   1       1       0         0        0
 :reboot_xp_64_g32e         1       1       0         0        0
 :boot_up_vista_64_g32e     1       1       0         0        0
 :boot_up_acpi_xp_64_g32e   1       1       0         0        0
 :boot_smp_acpi_xp_64_g32   1       1       0         0        0
 :boot_up_acpi_64_g32e      1       1       0         0        0
 :boot_base_kernel_64_g32   1       1       0         0        0
 :boot_up_win2008_64_g32e   1       1       0         0        0
 :kb_nightly_64_g32e        1       1       0         0        0
 :boot_up_acpi_win2k3_64_   1       1       0         0        0
 :boot_nevada_64_g32e       1       1       0         0        0
 :boot_smp_vista_64_g32e    1       1       0         0        0
 :ltp_nightly_64_g32e       1       1       0         0        0
 :boot_fc9_64_g32e          1       1       0         0        0
 :boot_smp_win2008_64_g32   1       1       0         0        0
 :boot_smp_acpi_win2k3_64   1       1       0         0        0
 :boot_rhel5u1_64_g32e      1       1       0         0        0
 :reboot_fc6_64_g32e        1       1       0         0        0
 :boot_smp_acpi_win2k_64_   1       1       0         0        0
=====================================================================
Total                       60      54      6         0        0

Platform : PAE
Service OS : Red Hat Enterprise Linux Server release 5.2 (Tikanga)
Hardware : Nehalem
Xen package: 18846:a00eb6595d3c
Date: Thu Nov 27 13:07:55 CST 2008

               Summary Test Report of Last Session
=====================================================================
                            Total   Pass    Fail    NoResult   Crash
=====================================================================
device_model_ept            2       0       0         2        0
stubdom_ept                 2       0       2         0        0
ras_ept                     1       1       0         0        0
vtd_ept                     16      13      3         0        0
control_panel_ept           14      14      0         0        0
gtest_ept                   22      22      0         0        0
=====================================================================
device_model_ept            2       0       0         2        0
 :pv_on_up_PAE_gPAE         1       0       0         1        0
 :pv_on_smp_PAE_gPAE        1       0       0         1        0
stubdom_ept                 2       0       2         0        0
 :boot_stubdom_no_qcow_PA   1       0       1         0        0
 :boot_stubdom_qcow_PAE_g   1       0       1         0        0
ras_ept                     1       1       0         0        0
 :cpu_online_offline_PAE_   1       1       0         0        0
vtd_ept                     16      13      3         0        0
 :hp_pcie_up_PAE_gPAE       1       0       1         0        0
 :two_dev_scp_nomsi_PAE_g   1       1       0         0        0
 :hp_pcie_smp_xp_nomsi_PA   1       1       0         0        0
 :two_dev_up_xp_nomsi_PAE   1       1       0         0        0
 :two_dev_smp_xp_nomsi_PA   1       0       1         0        0
 :two_dev_smp_PAE_gPAE      1       1       0         0        0
 :two_dev_up_nomsi_PAE_gP   1       1       0         0        0
 :two_dev_scp_PAE_gPAE      1       1       0         0        0
 :hp_pcie_up_xp_nomsi_PAE   1       1       0         0        0
 :hp_pci_up_nomsi_PAE_gPA   1       1       0         0        0
 :hp_pcie_smp_PAE_gPAE      1       0       1         0        0
 :hp_pci_smp_xp_nomsi_PAE   1       1       0         0        0
 :two_dev_up_PAE_gPAE       1       1       0         0        0
 :hp_pci_smp_nomsi_PAE_gP   1       1       0         0        0
 :two_dev_smp_nomsi_PAE_g   1       1       0         0        0
 :hp_pci_up_xp_nomsi_PAE_   1       1       0         0        0
control_panel_ept           14      14      0         0        0
 :XEN_four_vmx_xenu_seq_P   1       1       0         0        0
 :XEN_four_dguest_co_PAE_   1       1       0         0        0
 :XEN_SR_SMP_PAE_gPAE       1       1       0         0        0
 :XEN_linux_win_PAE_gPAE    1       1       0         0        0
 :XEN_Nevada_xenu_PAE_gPA   1       1       0         0        0
 :XEN_LM_SMP_PAE_gPAE       1       1       0         0        0
 :XEN_SR_Continuity_PAE_g   1       1       0         0        0
 :XEN_vmx_vcpu_pin_PAE_gP   1       1       0         0        0
 :XEN_LM_Continuity_PAE_g   1       1       0         0        0
 :XEN_256M_guest_PAE_gPAE   1       1       0         0        0
 :XEN_1500M_guest_PAE_gPA   1       1       0         0        0
 :XEN_two_winxp_PAE_gPAE    1       1       0         0        0
 :XEN_four_sguest_seq_PAE   1       1       0         0        0
 :XEN_vmx_2vcpu_PAE_gPAE    1       1       0         0        0
gtest_ept                   22      22      0         0        0
 :boot_up_acpi_PAE_gPAE     1       1       0         0        0
 :ltp_nightly_PAE_gPAE      1       1       0         0        0
 :boot_fc9_PAE_gPAE         1       1       0         0        0
 :reboot_xp_PAE_gPAE        1       1       0         0        0
 :boot_up_acpi_xp_PAE_gPA   1       1       0         0        0
 :boot_up_vista_PAE_gPAE    1       1       0         0        0
 :boot_up_acpi_win2k3_PAE   1       1       0         0        0
 :boot_smp_acpi_win2k3_PA   1       1       0         0        0
 :boot_smp_acpi_win2k_PAE   1       1       0         0        0
 :boot_up_acpi_win2k_PAE_   1       1       0         0        0
 :boot_smp_acpi_xp_PAE_gP   1       1       0         0        0
 :boot_up_noacpi_win2k_PA   1       1       0         0        0
 :boot_smp_vista_PAE_gPAE   1       1       0         0        0
 :boot_up_noacpi_win2k3_P   1       1       0         0        0
 :boot_nevada_PAE_gPAE      1       1       0         0        0
 :boot_rhel5u1_PAE_gPAE     1       1       0         0        0
 :boot_base_kernel_PAE_gP   1       1       0         0        0
 :boot_up_win2008_PAE_gPA   1       1       0         0        0
 :boot_up_noacpi_xp_PAE_g   1       1       0         0        0
 :boot_smp_win2008_PAE_gP   1       1       0         0        0
 :reboot_fc6_PAE_gPAE       1       1       0         0        0
 :kb_nightly_PAE_gPAE       1       1       0         0        0
=====================================================================
Total                       57      50      5         2        0


-haicheng
