* [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
@ 2018-05-24 13:41 Jan Beulich
  2018-05-24 13:48 ` Andrew Cooper
  2018-05-24 14:00 ` Simon Gaiser
  0 siblings, 2 replies; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 13:41 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Simon Gaiser, Juergen Gross

In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
I've failed to remember the fact that multiple CPUs share a stub
mapping page. Therefore it is wrong to unconditionally zap the mapping
when bringing down a CPU; it may only be unmapped when no other online
CPU uses that same page.

Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
 
     free_xen_pagetable(rpt);
 
-    /* Also zap the stub mapping for this CPU. */
+    /*
+     * Also zap the stub mapping for this CPU, if no other online one uses
+     * the same page.
+     */
+    if ( stub_linear )
+    {
+        unsigned int other;
+
+        for_each_online_cpu(other)
+            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
+            {
+                stub_linear = 0;
+                break;
+            }
+    }
     if ( stub_linear )
     {
         l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 13:41 [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general) Jan Beulich
@ 2018-05-24 13:48 ` Andrew Cooper
  2018-05-24 14:05   ` Jan Beulich
  2018-05-24 14:00 ` Simon Gaiser
  1 sibling, 1 reply; 20+ messages in thread
From: Andrew Cooper @ 2018-05-24 13:48 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Simon Gaiser, Juergen Gross

On 24/05/18 14:41, Jan Beulich wrote:
> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
> I've failed to remember the fact that multiple CPUs share a stub
> mapping page. Therefore it is wrong to unconditionally zap the mapping
> when bringing down a CPU; it may only be unmapped when no other online
> CPU uses that same page.
>
> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>  
>      free_xen_pagetable(rpt);
>  
> -    /* Also zap the stub mapping for this CPU. */
> +    /*
> +     * Also zap the stub mapping for this CPU, if no other online one uses
> +     * the same page.
> +     */
> +    if ( stub_linear )
> +    {
> +        unsigned int other;
> +
> +        for_each_online_cpu(other)

Looking over the code, it seems that the style with spaces is the more
common one, but it is admittedly fairly mixed.

Either way (as that's trivial to fix), Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>

> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
> +            {
> +                stub_linear = 0;
> +                break;
> +            }
> +    }
>      if ( stub_linear )
>      {
>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);



* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 13:41 [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general) Jan Beulich
  2018-05-24 13:48 ` Andrew Cooper
@ 2018-05-24 14:00 ` Simon Gaiser
  2018-05-24 14:08   ` Jan Beulich
  1 sibling, 1 reply; 20+ messages in thread
From: Simon Gaiser @ 2018-05-24 14:00 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Andrew Cooper, Juergen Gross



Jan Beulich:
> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
> I've failed to remember the fact that multiple CPUs share a stub
> mapping page. Therefore it is wrong to unconditionally zap the mapping
> when bringing down a CPU; it may only be unmapped when no other online
> CPU uses that same page.
> 
> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>  
>      free_xen_pagetable(rpt);
>  
> -    /* Also zap the stub mapping for this CPU. */
> +    /*
> +     * Also zap the stub mapping for this CPU, if no other online one uses
> +     * the same page.
> +     */
> +    if ( stub_linear )
> +    {
> +        unsigned int other;
> +
> +        for_each_online_cpu(other)
> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
> +            {
> +                stub_linear = 0;
> +                break;
> +            }
> +    }
>      if ( stub_linear )
>      {
>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);

Tried this on top of staging (fc5805daef) and I still get the same
double fault.



* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 13:48 ` Andrew Cooper
@ 2018-05-24 14:05   ` Jan Beulich
  0 siblings, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 14:05 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: George Dunlap, Simon Gaiser, Juergen Gross, xen-devel

>>> On 24.05.18 at 15:48, <andrew.cooper3@citrix.com> wrote:
> On 24/05/18 14:41, Jan Beulich wrote:
>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>> I've failed to remember the fact that multiple CPUs share a stub
>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>> when bringing down a CPU; it may only be unmapped when no other online
>> CPU uses that same page.
>>
>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/smpboot.c
>> +++ b/xen/arch/x86/smpboot.c
>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>  
>>      free_xen_pagetable(rpt);
>>  
>> -    /* Also zap the stub mapping for this CPU. */
>> +    /*
>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>> +     * the same page.
>> +     */
>> +    if ( stub_linear )
>> +    {
>> +        unsigned int other;
>> +
>> +        for_each_online_cpu(other)
> 
> Look over the code, it seems that with spaces is the more common style,
> but it is admittedly fairly mixed.

I'd prefer to leave it as is - personally I don't consider "for_each_online_cpu"
and the like to be keywords, which is what ./CODING_STYLE talks about. I accept
others taking a different position, i.e. I don't normally demand a particular
style to be used there, but in code I write I prefer to only apply spaces to
real keywords.

> Either way (as that's trivial to fix), Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks, Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:00 ` Simon Gaiser
@ 2018-05-24 14:08   ` Jan Beulich
  2018-05-24 14:14     ` Simon Gaiser
  0 siblings, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 14:08 UTC (permalink / raw)
  To: Simon Gaiser; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel

>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
> Jan Beulich:
>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>> I've failed to remember the fact that multiple CPUs share a stub
>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>> when bringing down a CPU; it may only be unmapped when no other online
>> CPU uses that same page.
>> 
>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> --- a/xen/arch/x86/smpboot.c
>> +++ b/xen/arch/x86/smpboot.c
>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>  
>>      free_xen_pagetable(rpt);
>>  
>> -    /* Also zap the stub mapping for this CPU. */
>> +    /*
>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>> +     * the same page.
>> +     */
>> +    if ( stub_linear )
>> +    {
>> +        unsigned int other;
>> +
>> +        for_each_online_cpu(other)
>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>> +            {
>> +                stub_linear = 0;
>> +                break;
>> +            }
>> +    }
>>      if ( stub_linear )
>>      {
>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
> 
> Tried this on-top of staging (fc5805daef) and I still get the same
> double fault.

Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
size of system are you testing on? Mine has only 12 CPUs, i.e. all stubs
are in the same page (so I'd never unmap anything here at all).

Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:08   ` Jan Beulich
@ 2018-05-24 14:14     ` Simon Gaiser
  2018-05-24 14:18       ` Andrew Cooper
  2018-05-24 14:28       ` Jan Beulich
  0 siblings, 2 replies; 20+ messages in thread
From: Simon Gaiser @ 2018-05-24 14:14 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel



Jan Beulich:
>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>> Jan Beulich:
>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>> I've failed to remember the fact that multiple CPUs share a stub
>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>> when bringing down a CPU; it may only be unmapped when no other online
>>> CPU uses that same page.
>>>
>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/smpboot.c
>>> +++ b/xen/arch/x86/smpboot.c
>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>  
>>>      free_xen_pagetable(rpt);
>>>  
>>> -    /* Also zap the stub mapping for this CPU. */
>>> +    /*
>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>> +     * the same page.
>>> +     */
>>> +    if ( stub_linear )
>>> +    {
>>> +        unsigned int other;
>>> +
>>> +        for_each_online_cpu(other)
>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>> +            {
>>> +                stub_linear = 0;
>>> +                break;
>>> +            }
>>> +    }
>>>      if ( stub_linear )
>>>      {
>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>
>> Tried this on-top of staging (fc5805daef) and I still get the same
>> double fault.
> 
> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
> are in the same page (and I'd never unmap anything here at all).

4 cores + HT, so 8 CPUs from Xen's PoV.



* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:14     ` Simon Gaiser
@ 2018-05-24 14:18       ` Andrew Cooper
  2018-05-24 14:22         ` Jan Beulich
  2018-05-24 14:35         ` Simon Gaiser
  2018-05-24 14:28       ` Jan Beulich
  1 sibling, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-05-24 14:18 UTC (permalink / raw)
  To: Simon Gaiser, Jan Beulich; +Cc: George Dunlap, Juergen Gross, xen-devel

On 24/05/18 15:14, Simon Gaiser wrote:
> Jan Beulich:
>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>> Jan Beulich:
>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>> CPU uses that same page.
>>>>
>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/smpboot.c
>>>> +++ b/xen/arch/x86/smpboot.c
>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>  
>>>>      free_xen_pagetable(rpt);
>>>>  
>>>> -    /* Also zap the stub mapping for this CPU. */
>>>> +    /*
>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>> +     * the same page.
>>>> +     */
>>>> +    if ( stub_linear )
>>>> +    {
>>>> +        unsigned int other;
>>>> +
>>>> +        for_each_online_cpu(other)
>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>> +            {
>>>> +                stub_linear = 0;
>>>> +                break;
>>>> +            }
>>>> +    }
>>>>      if ( stub_linear )
>>>>      {
>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>> double fault.
>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>> are in the same page (and I'd never unmap anything here at all).
> 4 cores + HT, so 8 CPUs from Xen's PoV.

Can you try with the "x86/traps: Dump the instruction stream even for
double faults" patch I've just posted, and show the full #DF panic log
please?  (It's conceivable that there are multiple different issues here.)

~Andrew


* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:18       ` Andrew Cooper
@ 2018-05-24 14:22         ` Jan Beulich
  2018-05-24 14:24           ` Andrew Cooper
  2018-05-24 14:35         ` Simon Gaiser
  1 sibling, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 14:22 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: George Dunlap, Simon Gaiser, Juergen Gross, xen-devel

>>> On 24.05.18 at 16:18, <andrew.cooper3@citrix.com> wrote:
> Can you try with the "x86/traps: Dump the instruction stream even for
> double faults" patch I've just posted, and show the full #DF panic log
> please?  (Its conceivable that there are multiple different issues here.)

Well, as long as we're on a guest kernel stack rather than our own, I
don't think the exact insn causing the #DF really matters. See the earlier
mails I sent in this regard.

Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:22         ` Jan Beulich
@ 2018-05-24 14:24           ` Andrew Cooper
  2018-05-24 14:31             ` Jan Beulich
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Cooper @ 2018-05-24 14:24 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Simon Gaiser, Juergen Gross, xen-devel

On 24/05/18 15:22, Jan Beulich wrote:
>>>> On 24.05.18 at 16:18, <andrew.cooper3@citrix.com> wrote:
>> Can you try with the "x86/traps: Dump the instruction stream even for
>> double faults" patch I've just posted, and show the full #DF panic log
>> please?  (Its conceivable that there are multiple different issues here.)
> Well, as long as we're on a guest kernel stack rather than our own, I
> don't think the exact insn causing the #DF really matters. See earlier
> mails I have sent in this regard.

In George's crash, we were in a weird place on the hypervisor stack, not
a guest stack...

~Andrew


* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:14     ` Simon Gaiser
  2018-05-24 14:18       ` Andrew Cooper
@ 2018-05-24 14:28       ` Jan Beulich
  2018-05-24 15:10         ` Simon Gaiser
  1 sibling, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 14:28 UTC (permalink / raw)
  To: Simon Gaiser; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel

>>> On 24.05.18 at 16:14, <simon@invisiblethingslab.com> wrote:
> Jan Beulich:
>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>> Jan Beulich:
>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>> CPU uses that same page.
>>>>
>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/smpboot.c
>>>> +++ b/xen/arch/x86/smpboot.c
>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>  
>>>>      free_xen_pagetable(rpt);
>>>>  
>>>> -    /* Also zap the stub mapping for this CPU. */
>>>> +    /*
>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>> +     * the same page.
>>>> +     */
>>>> +    if ( stub_linear )
>>>> +    {
>>>> +        unsigned int other;
>>>> +
>>>> +        for_each_online_cpu(other)
>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>> +            {
>>>> +                stub_linear = 0;
>>>> +                break;
>>>> +            }
>>>> +    }
>>>>      if ( stub_linear )
>>>>      {
>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>
>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>> double fault.
>> 
>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>> are in the same page (and I'd never unmap anything here at all).
> 
> 4 cores + HT, so 8 CPUs from Xen's PoV.

May I ask you to do two things:
1) confirm that you can offline CPUs successfully using xen-hptool,
2) add a printk() to the code above making clear whether/when any
of the mappings actually get zapped?

Thanks, Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:24           ` Andrew Cooper
@ 2018-05-24 14:31             ` Jan Beulich
  0 siblings, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 14:31 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: George Dunlap, Simon Gaiser, Juergen Gross, xen-devel

>>> On 24.05.18 at 16:24, <andrew.cooper3@citrix.com> wrote:
> On 24/05/18 15:22, Jan Beulich wrote:
>>>>> On 24.05.18 at 16:18, <andrew.cooper3@citrix.com> wrote:
>>> Can you try with the "x86/traps: Dump the instruction stream even for
>>> double faults" patch I've just posted, and show the full #DF panic log
>>> please?  (Its conceivable that there are multiple different issues here.)
>> Well, as long as we're on a guest kernel stack rather than our own, I
>> don't think the exact insn causing the #DF really matters. See earlier
>> mails I have sent in this regard.
> 
> In George's crash, we were in a weird place on the hypervisor stack, not
> a guest stack...

Go look again - %rsp pointed outside of hypervisor space in all cases that
I had looked at. And that's explained by the unmapping of the stubs: We'd
#PF right after first SYSCALL, and the handler would then run on the stack
that's still active from guest context.

Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:18       ` Andrew Cooper
  2018-05-24 14:22         ` Jan Beulich
@ 2018-05-24 14:35         ` Simon Gaiser
  2018-05-24 14:53           ` Andrew Cooper
  1 sibling, 1 reply; 20+ messages in thread
From: Simon Gaiser @ 2018-05-24 14:35 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich; +Cc: George Dunlap, Juergen Gross, xen-devel



Andrew Cooper:
> On 24/05/18 15:14, Simon Gaiser wrote:
>> Jan Beulich:
>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>> Jan Beulich:
>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>> CPU uses that same page.
>>>>>
>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/xen/arch/x86/smpboot.c
>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>  
>>>>>      free_xen_pagetable(rpt);
>>>>>  
>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>> +    /*
>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>> +     * the same page.
>>>>> +     */
>>>>> +    if ( stub_linear )
>>>>> +    {
>>>>> +        unsigned int other;
>>>>> +
>>>>> +        for_each_online_cpu(other)
>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>>> +            {
>>>>> +                stub_linear = 0;
>>>>> +                break;
>>>>> +            }
>>>>> +    }
>>>>>      if ( stub_linear )
>>>>>      {
>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>> double fault.
>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>> are in the same page (and I'd never unmap anything here at all).
>> 4 cores + HT, so 8 CPUs from Xen's PoV.
> 
> Can you try with the "x86/traps: Dump the instruction stream even for
> double faults" patch I've just posted, and show the full #DF panic log
> please?  (Its conceivable that there are multiple different issues here.)

With Jan's and your patch:

(XEN) mce_intel.c:782: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
(XEN) CPU0 CMCI LVT vector (0xf2) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
(XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
(XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
(XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
(XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
(XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
(XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
(XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
(XEN) *** DOUBLE FAULT ***
(XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
(XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
(XEN) rax: ffffc90040ce40d8   rbx: 0000000000000000   rcx: 0000000000000003
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) rbp: 000036ffbf31bf07   rsp: ffffc90040ce4000   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040ce7fff
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
(XEN) cr3: 000000022200a000   cr2: ffffc90040ce3ff8
(XEN) fsb: 00007fa9e7909740   gsb: ffff88021e740000   gss: 0000000000000000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
(XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
(XEN) Current stack base ffffc90040ce0000 differs from expected ffff8300cec88000
(XEN) Valid stack range: ffffc90040ce6000-ffffc90040ce8000, sp=ffffc90040ce4000, tss.rsp0=ffff8300cec8ffa0
(XEN) No stack overflow detected. Skipping stack trace.
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) DOUBLE FAULT -- system shutdown
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...



* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:35         ` Simon Gaiser
@ 2018-05-24 14:53           ` Andrew Cooper
  2018-05-24 15:10             ` George Dunlap
  2018-05-24 15:16             ` Simon Gaiser
  0 siblings, 2 replies; 20+ messages in thread
From: Andrew Cooper @ 2018-05-24 14:53 UTC (permalink / raw)
  To: Simon Gaiser, Jan Beulich; +Cc: George Dunlap, Juergen Gross, xen-devel

On 24/05/18 15:35, Simon Gaiser wrote:
> Andrew Cooper:
>> On 24/05/18 15:14, Simon Gaiser wrote:
>>> Jan Beulich:
>>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>>> Jan Beulich:
>>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>>> CPU uses that same page.
>>>>>>
>>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>>  
>>>>>>      free_xen_pagetable(rpt);
>>>>>>  
>>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>>> +    /*
>>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>>> +     * the same page.
>>>>>> +     */
>>>>>> +    if ( stub_linear )
>>>>>> +    {
>>>>>> +        unsigned int other;
>>>>>> +
>>>>>> +        for_each_online_cpu(other)
>>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>>>> +            {
>>>>>> +                stub_linear = 0;
>>>>>> +                break;
>>>>>> +            }
>>>>>> +    }
>>>>>>      if ( stub_linear )
>>>>>>      {
>>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>>> double fault.
>>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>>> are in the same page (and I'd never unmap anything here at all).
>>> 4 cores + HT, so 8 CPUs from Xen's PoV.
>> Can you try with the "x86/traps: Dump the instruction stream even for
>> double faults" patch I've just posted, and show the full #DF panic log
>> please?  (Its conceivable that there are multiple different issues here.)
> With Jan's and your patch:
>
> (XEN) mce_intel.c:782: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
> (XEN) CPU0 CMCI LVT vector (0xf2) already installed
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
> (XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
> (XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
> (XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
> (XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
> (XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
> (XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
> (XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800

/sigh - Naughty Linux.  The PVOps really ought to know that they don't
have an APIC to play with, not that this is related to the crash.

> (XEN) *** DOUBLE FAULT ***
> (XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
> (XEN) rax: ffffc90040ce40d8   rbx: 0000000000000000   rcx: 0000000000000003
> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
> (XEN) rbp: 000036ffbf31bf07   rsp: ffffc90040ce4000   r8:  0000000000000000
> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040ce7fff
> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
> (XEN) cr3: 000000022200a000   cr2: ffffc90040ce3ff8
> (XEN) fsb: 00007fa9e7909740   gsb: ffff88021e740000   gss: 0000000000000000
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
> (XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
> (XEN) Current stack base ffffc90040ce0000 differs from expected ffff8300cec88000
> (XEN) Valid stack range: ffffc90040ce6000-ffffc90040ce8000, sp=ffffc90040ce4000, tss.rsp0=ffff8300cec8ffa0
> (XEN) No stack overflow detected. Skipping stack trace.

Ok - this is the same as George's crash, and yes - I did misdiagnose the
stack we were on.  I presume this hardware doesn't have SMAP? (or we'd
have expected to take a #DF immediately at the head of the syscall handler.)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:28       ` Jan Beulich
@ 2018-05-24 15:10         ` Simon Gaiser
  2018-05-24 15:31           ` Jan Beulich
  2018-05-24 15:46           ` Jan Beulich
  0 siblings, 2 replies; 20+ messages in thread
From: Simon Gaiser @ 2018-05-24 15:10 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel



Jan Beulich:
>>>> On 24.05.18 at 16:14, <simon@invisiblethingslab.com> wrote:
>> Jan Beulich:
>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>> Jan Beulich:
>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>> CPU uses that same page.
>>>>>
>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/xen/arch/x86/smpboot.c
>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>  
>>>>>      free_xen_pagetable(rpt);
>>>>>  
>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>> +    /*
>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>> +     * the same page.
>>>>> +     */
>>>>> +    if ( stub_linear )
>>>>> +    {
>>>>> +        unsigned int other;
>>>>> +
>>>>> +        for_each_online_cpu(other)
>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>>> +            {
>>>>> +                stub_linear = 0;
>>>>> +                break;
>>>>> +            }
>>>>> +    }
>>>>>      if ( stub_linear )
>>>>>      {
>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>
>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>> double fault.
>>>
>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>> are in the same page (and I'd never unmap anything here at all).
>>
>> 4 cores + HT, so 8 CPUs from Xen's PoV.
> 
> May I ask you to do two things:
> 1) confirm that you can offline CPUs successfully using xen-hptool,
> 2) add a printk() to the code above making clear whether/when any
> of the mappings actually get zapped?

There seem to be two failure modes now. It seems that both can be
triggered either by offlining a CPU or by suspend. I'm using CPU offlining
below, since during suspend I often lose part of the serial output.

Failure mode 1, the double fault as before:

root@localhost:~# xen-hptool cpu-offline 3
Prepare to offline CPU 3
(XEN) Broke affinity for irq 9
(XEN) Broke affinity for irq 29
(XEN) dbg: stub_linear't1 = 18446606431818858880
(XEN) dbg: first stub_linear if
(XEN) dbg: stub_linear't2 = 18446606431818858880
(XEN) dbg: second stub_linear if
CPU 3 offlined successfully
root@localhost:~# (XEN) *** DOUBLE FAULT ***
(XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
(XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
(XEN) rax: ffffc90040cdc0a8   rbx: 0000000000000000   rcx: 0000000000000006
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) rbp: 000036ffbf323f37   rsp: ffffc90040cdc000   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040cdffff
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
(XEN) cr3: 0000000128109000   cr2: ffffc90040cdbff8
(XEN) fsb: 00007fc01c3c6dc0   gsb: ffff88021e700000   gss: 0000000000000000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
(XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
(XEN) Current stack base ffffc90040cd8000 differs from expected ffff8300cec88000
(XEN) Valid stack range: ffffc90040cde000-ffffc90040ce0000, sp=ffffc90040cdc000, tss.rsp0=ffff8300cec8ffa0
(XEN) No stack overflow detected. Skipping stack trace.
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) DOUBLE FAULT -- system shutdown
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...

Failure mode 2, dom0 kernel panic (Debian unstable default kernel):

root@localhost:~# xen-hptool cpu-offline 3
Prepare to offline CPU 3
(XEN) Broke affinity for irq 9
(XEN) Broke affinity for irq 26
(XEN) Broke affinity for irq 28
(XEN) dbg: stub_linear't1 = 18446606431818858880
(XEN) dbg: first stub_linear if
(XEN) dbg: stub_linear't2 = 18446606431818858880
(XEN) dbg: second stub_linear if
CPU 3 offlined successfully
root@localhost:~# [   42.976030] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[   42.976178] IP: __evtchn_fifo_handle_events+0x43/0x1a0
[   42.976256] PGD 0 P4D 0 
[   42.976305] Oops: 0002 [#1] SMP NOPTI
[   42.976367] Modules linked in: ctr ccm bnep ip6t_REJECT nf_reject_ipv6 ip6table_filter ip6_tables ipt_REJECT nf_reject_ipv4 xt_tcpudp iptable_filter arc4 snd_hda_codec_hdmi dell_rbtn iwldvm nouveau intel_rapl intel_powerclamp dell_laptop dell_wmi btusb btrtl dell_smbios iTCO_wdt btbcm mxm_wmi btintel crct10dif_pclmul sparse_keymap wmi_bmof dell_wmi_descriptor iTCO_vendor_support ppdev mac80211 snd_hda_codec_idt dcdbas crc32_pclmul snd_hda_codec_generic ttm dell_smm_hwmon ghash_clmulni_intel bluetooth drm_kms_helper iwlwifi snd_hda_intel joydev evdev intel_rapl_perf snd_hda_codec serio_raw drm drbg pcspkr snd_hda_core i2c_algo_bit cfg80211 ansi_cprng snd_hwdep snd_pcm snd_timer mei_me snd ecdh_generic rfkill sg soundcore mei lpc_ich shpchp wmi tpm_tis parport_pc tpm_tis_core tpm parport rng_core video
[   42.977349]  dell_smo8800 ac battery button xen_acpi_processor xen_pciback xenfs xen_privcmd xen_netback xen_blkback xen_gntalloc xen_gntdev xen_evtchn ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 crc32c_generic fscrypto ecb sr_mod cdrom sd_mod crc32c_intel ahci libahci aesni_intel sdhci_pci aes_x86_64 crypto_simd cqhci ehci_pci libata cryptd firewire_ohci sdhci ehci_hcd glue_helper firewire_core psmouse i2c_i801 scsi_mod mmc_core crc_itu_t usbcore e1000e usb_common thermal
[   42.977953] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.16.0-1-amd64 #1 Debian 4.16.5-1
[   42.978061] Hardware name: Dell Inc. Latitude E6520/0692FT, BIOS A20 05/12/2017
[   42.978167] RIP: e030:__evtchn_fifo_handle_events+0x43/0x1a0
[   42.978250] RSP: e02b:ffff88021e643f60 EFLAGS: 00010046
[   42.978327] RAX: 0000000000000000 RBX: ffff88021e6540e0 RCX: 0000000000000000
[   42.978427] RDX: ffff88021e640000 RSI: 0000000000000000 RDI: 0000000000000001
[   42.978527] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[   42.978626] R10: ffffc90040cc7e28 R11: 0000000000000000 R12: 0000000000000001
[   42.978725] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[   42.978857] FS:  00007fb6cddee8c0(0000) GS:ffff88021e640000(0000) knlGS:0000000000000000
[   42.978969] CS:  e033 DS: 002b ES: 002b CR0: 0000000080050033
[   42.979052] CR2: 0000000000000000 CR3: 00000002126e2000 CR4: 0000000000042660
[   42.979165] Call Trace:
[   42.979210]  <IRQ>
[   42.979256]  ? __tick_nohz_idle_enter+0xee/0x440
[   42.979331]  __xen_evtchn_do_upcall+0x42/0x80
[   42.979402]  xen_evtchn_do_upcall+0x27/0x40
[   42.979471]  xen_do_hypervisor_callback+0x29/0x40

As you can see, in both cases it branches into the ifs. The logged
stub_linear values come from directly before the ifs.





* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:53           ` Andrew Cooper
@ 2018-05-24 15:10             ` George Dunlap
  2018-05-24 15:16             ` Simon Gaiser
  1 sibling, 0 replies; 20+ messages in thread
From: George Dunlap @ 2018-05-24 15:10 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Simon Gaiser, Juergen Gross, George Dunlap, Jan Beulich, xen-devel



> On May 24, 2018, at 3:53 PM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> 
> On 24/05/18 15:35, Simon Gaiser wrote:
>> Andrew Cooper:
>>> On 24/05/18 15:14, Simon Gaiser wrote:
>>>> Jan Beulich:
>>>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>>>> Jan Beulich:
>>>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>>>> CPU uses that same page.
>>>>>>> 
>>>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>> 
>>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>>> 
>>>>>>>     free_xen_pagetable(rpt);
>>>>>>> 
>>>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>>>> +    /*
>>>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>>>> +     * the same page.
>>>>>>> +     */
>>>>>>> +    if ( stub_linear )
>>>>>>> +    {
>>>>>>> +        unsigned int other;
>>>>>>> +
>>>>>>> +        for_each_online_cpu(other)
>>>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>>>>> +            {
>>>>>>> +                stub_linear = 0;
>>>>>>> +                break;
>>>>>>> +            }
>>>>>>> +    }
>>>>>>>     if ( stub_linear )
>>>>>>>     {
>>>>>>>         l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>>>> double fault.
>>>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>>>> are in the same page (and I'd never unmap anything here at all).
>>>> 4 cores + HT, so 8 CPUs from Xen's PoV.
>>> Can you try with the "x86/traps: Dump the instruction stream even for
>>> double faults" patch I've just posted, and show the full #DF panic log
>>> please?  (Its conceivable that there are multiple different issues here.)
>> With Jan's and your patch:
>> 
>> (XEN) mce_intel.c:782: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
>> (XEN) CPU0 CMCI LVT vector (0xf2) already installed
>> (XEN) Finishing wakeup from ACPI S3 state.
>> (XEN) Enabling non-boot CPUs  ...
>> (XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
>> (XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
>> (XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
>> (XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
> 
> /sigh - Naughty Linux.  The PVOps really ought to know that they don't
> have an APIC to play with, not that this related to the crash.
> 
>> (XEN) *** DOUBLE FAULT ***
>> (XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
>> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
>> (XEN) rax: ffffc90040ce40d8   rbx: 0000000000000000   rcx: 0000000000000003
>> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
>> (XEN) rbp: 000036ffbf31bf07   rsp: ffffc90040ce4000   r8:  0000000000000000
>> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
>> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040ce7fff
>> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
>> (XEN) cr3: 000000022200a000   cr2: ffffc90040ce3ff8
>> (XEN) fsb: 00007fa9e7909740   gsb: ffff88021e740000   gss: 0000000000000000
>> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
>> (XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
>> (XEN) Current stack base ffffc90040ce0000 differs from expected ffff8300cec88000
>> (XEN) Valid stack range: ffffc90040ce6000-ffffc90040ce8000, sp=ffffc90040ce4000, tss.rsp0=ffff8300cec8ffa0
>> (XEN) No stack overflow detected. Skipping stack trace.
> 
> Ok - this is the same as George's crash, and yes - I did misdiagnose the
> stack we were on.

FWIW I just tried the patch and got a similar result.  (Let me know if you want an actual stack trace.)

 -George


* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 14:53           ` Andrew Cooper
  2018-05-24 15:10             ` George Dunlap
@ 2018-05-24 15:16             ` Simon Gaiser
  1 sibling, 0 replies; 20+ messages in thread
From: Simon Gaiser @ 2018-05-24 15:16 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich; +Cc: George Dunlap, Juergen Gross, xen-devel



Andrew Cooper:
> On 24/05/18 15:35, Simon Gaiser wrote:
>> Andrew Cooper:
>>> On 24/05/18 15:14, Simon Gaiser wrote:
>>>> Jan Beulich:
>>>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>>>> Jan Beulich:
>>>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>>>> CPU uses that same page.
>>>>>>>
>>>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>>
>>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>>>  
>>>>>>>      free_xen_pagetable(rpt);
>>>>>>>  
>>>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>>>> +    /*
>>>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>>>> +     * the same page.
>>>>>>> +     */
>>>>>>> +    if ( stub_linear )
>>>>>>> +    {
>>>>>>> +        unsigned int other;
>>>>>>> +
>>>>>>> +        for_each_online_cpu(other)
>>>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>>>>> +            {
>>>>>>> +                stub_linear = 0;
>>>>>>> +                break;
>>>>>>> +            }
>>>>>>> +    }
>>>>>>>      if ( stub_linear )
>>>>>>>      {
>>>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>>>> double fault.
>>>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>>>> are in the same page (and I'd never unmap anything here at all).
>>>> 4 cores + HT, so 8 CPUs from Xen's PoV.
>>> Can you try with the "x86/traps: Dump the instruction stream even for
>>> double faults" patch I've just posted, and show the full #DF panic log
>>> please?  (Its conceivable that there are multiple different issues here.)
>> With Jan's and your patch:
>>
>> (XEN) mce_intel.c:782: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
>> (XEN) CPU0 CMCI LVT vector (0xf2) already installed
>> (XEN) Finishing wakeup from ACPI S3 state.
>> (XEN) Enabling non-boot CPUs  ...
>> (XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v1 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
>> (XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v2 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
>> (XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v3 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
>> (XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00000
>> (XEN) emul-priv-op.c:1166:d0v4 Domain attempted WRMSR 0000001b from 0x00000000fee00c00 to 0x00000000fee00800
> 
> /sigh - Naughty Linux.  The PVOps really ought to know that they don't
> have an APIC to play with, not that this related to the crash.
> 
>> (XEN) *** DOUBLE FAULT ***
>> (XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
>> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
>> (XEN) rax: ffffc90040ce40d8   rbx: 0000000000000000   rcx: 0000000000000003
>> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
>> (XEN) rbp: 000036ffbf31bf07   rsp: ffffc90040ce4000   r8:  0000000000000000
>> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
>> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040ce7fff
>> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
>> (XEN) cr3: 000000022200a000   cr2: ffffc90040ce3ff8
>> (XEN) fsb: 00007fa9e7909740   gsb: ffff88021e740000   gss: 0000000000000000
>> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
>> (XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
>> (XEN) Current stack base ffffc90040ce0000 differs from expected ffff8300cec88000
>> (XEN) Valid stack range: ffffc90040ce6000-ffffc90040ce8000, sp=ffffc90040ce4000, tss.rsp0=ffff8300cec8ffa0
>> (XEN) No stack overflow detected. Skipping stack trace.
> 
> Ok - this is the same as George's crash, and yes - I did misdiagnose the
> stack we were on.  I presume this hardware doesn't have SMAP? (or we've
> expected to take a #DF immediately at the head of the syscall hander.)

Yes, it's too old for SMAP. It's an i7-2760QM.





* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 15:10         ` Simon Gaiser
@ 2018-05-24 15:31           ` Jan Beulich
  2018-05-24 15:46           ` Jan Beulich
  1 sibling, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 15:31 UTC (permalink / raw)
  To: Simon Gaiser; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel

>>> On 24.05.18 at 17:10, <simon@invisiblethingslab.com> wrote:
> Jan Beulich:
>>>>> On 24.05.18 at 16:14, <simon@invisiblethingslab.com> wrote:
>>> Jan Beulich:
>>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>>> Jan Beulich:
>>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>>> CPU uses that same page.
>>>>>>
>>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>>  
>>>>>>      free_xen_pagetable(rpt);
>>>>>>  
>>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>>> +    /*
>>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>>> +     * the same page.
>>>>>> +     */
>>>>>> +    if ( stub_linear )
>>>>>> +    {
>>>>>> +        unsigned int other;
>>>>>> +
>>>>>> +        for_each_online_cpu(other)
>>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) 
> )
>>>>>> +            {
>>>>>> +                stub_linear = 0;
>>>>>> +                break;
>>>>>> +            }
>>>>>> +    }
>>>>>>      if ( stub_linear )
>>>>>>      {
>>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>>
>>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>>> double fault.
>>>>
>>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>>> are in the same page (and I'd never unmap anything here at all).
>>>
>>> 4 cores + HT, so 8 CPUs from Xen's PoV.
>> 
>> May I ask you to do two things:
>> 1) confirm that you can offline CPUs successfully using xen-hptool,
>> 2) add a printk() to the code above making clear whether/when any
>> of the mappings actually get zapped?
> 
> There seem to be two failure modes now. It seems that both can be
> triggered either by offlining a cpu or by suspend. Using cpu offlining
> below since during suspend I often loose part of the serial output.
> 
> Failure mode 1, the double fault as before:
> 
> root@localhost:~# xen-hptool cpu-offline 3
> Prepare to offline CPU 3
> (XEN) Broke affinity for irq 9
> (XEN) Broke affinity for irq 29
> (XEN) dbg: stub_linear't1 = 18446606431818858880
> (XEN) dbg: first stub_linear if
> (XEN) dbg: stub_linear't2 = 18446606431818858880
> (XEN) dbg: second stub_linear if
> CPU 3 offlined successfully
> root@localhost:~# (XEN) *** DOUBLE FAULT ***
> (XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
> (XEN) rax: ffffc90040cdc0a8   rbx: 0000000000000000   rcx: 0000000000000006
> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
> (XEN) rbp: 000036ffbf323f37   rsp: ffffc90040cdc000   r8:  0000000000000000
> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040cdffff
> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
> (XEN) cr3: 0000000128109000   cr2: ffffc90040cdbff8
> (XEN) fsb: 00007fc01c3c6dc0   gsb: ffff88021e700000   gss: 0000000000000000
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
> (XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
> (XEN) Current stack base ffffc90040cd8000 differs from expected ffff8300cec88000
> (XEN) Valid stack range: ffffc90040cde000-ffffc90040ce0000, 
> sp=ffffc90040cdc000, tss.rsp0=ffff8300cec8ffa0
> (XEN) No stack overflow detected. Skipping stack trace.
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) DOUBLE FAULT -- system shutdown
> (XEN) ****************************************
> (XEN) 
> (XEN) Reboot in five seconds...

And I see now why I thought the patch worked - I should have removed
"xpti=no" from the command line. The diagnosis was wrong altogether:
While we share physical pages for stubs, we don't share virtual space.
See alloc_stub_page().

But I'm pretty sure it has something to do with setting up stub space.
Looking around again ...

Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 15:10         ` Simon Gaiser
  2018-05-24 15:31           ` Jan Beulich
@ 2018-05-24 15:46           ` Jan Beulich
  2018-05-24 16:12             ` Simon Gaiser
  1 sibling, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2018-05-24 15:46 UTC (permalink / raw)
  To: Simon Gaiser; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel

>>> On 24.05.18 at 17:10, <simon@invisiblethingslab.com> wrote:
> Jan Beulich:
>>>>> On 24.05.18 at 16:14, <simon@invisiblethingslab.com> wrote:
>>> Jan Beulich:
>>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>>> Jan Beulich:
>>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>>> CPU uses that same page.
>>>>>>
>>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>>  
>>>>>>      free_xen_pagetable(rpt);
>>>>>>  
>>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>>> +    /*
>>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>>> +     * the same page.
>>>>>> +     */
>>>>>> +    if ( stub_linear )
>>>>>> +    {
>>>>>> +        unsigned int other;
>>>>>> +
>>>>>> +        for_each_online_cpu(other)
>>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) 
> )
>>>>>> +            {
>>>>>> +                stub_linear = 0;
>>>>>> +                break;
>>>>>> +            }
>>>>>> +    }
>>>>>>      if ( stub_linear )
>>>>>>      {
>>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>>
>>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>>> double fault.
>>>>
>>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>>> size a system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>>> are in the same page (and I'd never unmap anything here at all).
>>>
>>> 4 cores + HT, so 8 CPUs from Xen's PoV.
>> 
>> May I ask you to do two things:
>> 1) confirm that you can offline CPUs successfully using xen-hptool,
>> 2) add a printk() to the code above making clear whether/when any
>> of the mappings actually get zapped?
> 
> There seem to be two failure modes now. It seems that both can be
> triggered either by offlining a cpu or by suspend. Using cpu offlining
> below since during suspend I often loose part of the serial output.
> 
> Failure mode 1, the double fault as before:
> 
> root@localhost:~# xen-hptool cpu-offline 3
> Prepare to offline CPU 3
> (XEN) Broke affinity for irq 9
> (XEN) Broke affinity for irq 29
> (XEN) dbg: stub_linear't1 = 18446606431818858880
> (XEN) dbg: first stub_linear if
> (XEN) dbg: stub_linear't2 = 18446606431818858880
> (XEN) dbg: second stub_linear if
> CPU 3 offlined successfully
> root@localhost:~# (XEN) *** DOUBLE FAULT ***
> (XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
> (XEN) rax: ffffc90040cdc0a8   rbx: 0000000000000000   rcx: 0000000000000006
> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
> (XEN) rbp: 000036ffbf323f37   rsp: ffffc90040cdc000   r8:  0000000000000000
> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040cdffff
> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
> (XEN) cr3: 0000000128109000   cr2: ffffc90040cdbff8
> (XEN) fsb: 00007fc01c3c6dc0   gsb: ffff88021e700000   gss: 0000000000000000
> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
> (XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 
> 75
> (XEN) Current stack base ffffc90040cd8000 differs from expected 
> ffff8300cec88000
> (XEN) Valid stack range: ffffc90040cde000-ffffc90040ce0000, 
> sp=ffffc90040cdc000, tss.rsp0=ffff8300cec8ffa0
> (XEN) No stack overflow detected. Skipping stack trace.
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) DOUBLE FAULT -- system shutdown
> (XEN) ****************************************
> (XEN) 
> (XEN) Reboot in five seconds...

Oh, so CPU 0 gets screwed by offlining CPU 3. How about this alternative
(but so far untested) patch:

--- unstable.orig/xen/arch/x86/smpboot.c
+++ unstable/xen/arch/x86/smpboot.c
@@ -874,7 +874,7 @@ static void cleanup_cpu_root_pgt(unsigne
         l2_pgentry_t *l2t = l3e_to_l2e(l3t[l3_table_offset(stub_linear)]);
         l1_pgentry_t *l1t = l2e_to_l1e(l2t[l2_table_offset(stub_linear)]);
 
-        l1t[l2_table_offset(stub_linear)] = l1e_empty();
+        l1t[l1_table_offset(stub_linear)] = l1e_empty();
     }
 }
 

Jan




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
  2018-05-24 15:46           ` Jan Beulich
@ 2018-05-24 16:12             ` Simon Gaiser
  0 siblings, 0 replies; 20+ messages in thread
From: Simon Gaiser @ 2018-05-24 16:12 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, Juergen Gross, xen-devel



Jan Beulich:
>>>> On 24.05.18 at 17:10, <simon@invisiblethingslab.com> wrote:
>> Jan Beulich:
>>>>>> On 24.05.18 at 16:14, <simon@invisiblethingslab.com> wrote:
>>>> Jan Beulich:
>>>>>>>> On 24.05.18 at 16:00, <simon@invisiblethingslab.com> wrote:
>>>>>> Jan Beulich:
>>>>>>> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
>>>>>>> I've failed to remember the fact that multiple CPUs share a stub
>>>>>>> mapping page. Therefore it is wrong to unconditionally zap the mapping
>>>>>>> when bringing down a CPU; it may only be unmapped when no other online
>>>>>>> CPU uses that same page.
>>>>>>>
>>>>>>> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>>
>>>>>>> --- a/xen/arch/x86/smpboot.c
>>>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>>>> @@ -876,7 +876,21 @@ static void cleanup_cpu_root_pgt(unsigne
>>>>>>>  
>>>>>>>      free_xen_pagetable(rpt);
>>>>>>>  
>>>>>>> -    /* Also zap the stub mapping for this CPU. */
>>>>>>> +    /*
>>>>>>> +     * Also zap the stub mapping for this CPU, if no other online one uses
>>>>>>> +     * the same page.
>>>>>>> +     */
>>>>>>> +    if ( stub_linear )
>>>>>>> +    {
>>>>>>> +        unsigned int other;
>>>>>>> +
>>>>>>> +        for_each_online_cpu(other)
>>>>>>> +            if ( !((per_cpu(stubs.addr, other) ^ stub_linear) >> PAGE_SHIFT) )
>>>>>>> +            {
>>>>>>> +                stub_linear = 0;
>>>>>>> +                break;
>>>>>>> +            }
>>>>>>> +    }
>>>>>>>      if ( stub_linear )
>>>>>>>      {
>>>>>>>          l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
>>>>>>
>>>>>> Tried this on-top of staging (fc5805daef) and I still get the same
>>>>>> double fault.
>>>>>
>>>>> Hmm, it worked for me offlining (and later re-onlining) several pCPU-s. What
>>>>> size of system are you testing on? Mine has got only 12 CPUs, i.e. all stubs
>>>>> are in the same page (and I'd never unmap anything here at all).
>>>>
>>>> 4 cores + HT, so 8 CPUs from Xen's PoV.
>>>
>>> May I ask you to do two things:
>>> 1) confirm that you can offline CPUs successfully using xen-hptool,
>>> 2) add a printk() to the code above making clear whether/when any
>>> of the mappings actually get zapped?
>>
>> There seem to be two failure modes now. It seems that both can be
>> triggered either by offlining a cpu or by suspend. Using cpu offlining
>> below since during suspend I often lose part of the serial output.
>>
>> Failure mode 1, the double fault as before:
>>
>> root@localhost:~# xen-hptool cpu-offline 3
>> Prepare to offline CPU 3
>> (XEN) Broke affinity for irq 9
>> (XEN) Broke affinity for irq 29
>> (XEN) dbg: stub_linear't1 = 18446606431818858880
>> (XEN) dbg: first stub_linear if
>> (XEN) dbg: stub_linear't2 = 18446606431818858880
>> (XEN) dbg: second stub_linear if
>> CPU 3 offlined successfully
>> root@localhost:~# (XEN) *** DOUBLE FAULT ***
>> (XEN) ----[ Xen-4.11-rc  x86_64  debug=y   Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) RIP:    e008:[<ffff82d08037b964>] handle_exception+0x9c/0xff
>> (XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor
>> (XEN) rax: ffffc90040cdc0a8   rbx: 0000000000000000   rcx: 0000000000000006
>> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
>> (XEN) rbp: 000036ffbf323f37   rsp: ffffc90040cdc000   r8:  0000000000000000
>> (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
>> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffffc90040cdffff
>> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 0000000000042660
>> (XEN) cr3: 0000000128109000   cr2: ffffc90040cdbff8
>> (XEN) fsb: 00007fc01c3c6dc0   gsb: ffff88021e700000   gss: 0000000000000000
>> (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen code around <ffff82d08037b964> (handle_exception+0x9c/0xff):
>> (XEN)  00 f3 90 0f ae e8 eb f9 <e8> 07 00 00 00 f3 90 0f ae e8 eb f9 83 e9 01 75
>> (XEN) Current stack base ffffc90040cd8000 differs from expected ffff8300cec88000
>> (XEN) Valid stack range: ffffc90040cde000-ffffc90040ce0000, sp=ffffc90040cdc000, tss.rsp0=ffff8300cec8ffa0
>> (XEN) No stack overflow detected. Skipping stack trace.
>> (XEN) 
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) DOUBLE FAULT -- system shutdown
>> (XEN) ****************************************
>> (XEN) 
>> (XEN) Reboot in five seconds...
> 
> Oh, so CPU 0 gets screwed by offlining CPU 3. How about this alternative
> (but so far untested) patch:
> 
> --- unstable.orig/xen/arch/x86/smpboot.c
> +++ unstable/xen/arch/x86/smpboot.c
> @@ -874,7 +874,7 @@ static void cleanup_cpu_root_pgt(unsigne
>          l2_pgentry_t *l2t = l3e_to_l2e(l3t[l3_table_offset(stub_linear)]);
>          l1_pgentry_t *l1t = l2e_to_l1e(l2t[l2_table_offset(stub_linear)]);
>  
> -        l1t[l2_table_offset(stub_linear)] = l1e_empty();
> +        l1t[l1_table_offset(stub_linear)] = l1e_empty();
>      }
>  }
>  

Yes, this fixes cpu on-/offlining and suspend for me on staging.




* Re: [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general)
       [not found] <5B06C0F902000078001C5925@suse.com>
@ 2018-05-28  4:26 ` Juergen Gross
  0 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2018-05-28  4:26 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: George Dunlap, Andrew Cooper, Simon Gaiser

On 24/05/18 15:41, Jan Beulich wrote:
> In commit d1d6fc97d6 ("x86/xpti: really hide almost all of Xen image")
> I've failed to remember the fact that multiple CPUs share a stub
> mapping page. Therefore it is wrong to unconditionally zap the mapping
> when bringing down a CPU; it may only be unmapped when no other online
> CPU uses that same page.
> 
> Reported-by: Simon Gaiser <simon@invisiblethingslab.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Release-acked-by: Juergen Gross <jgross@suse.com>


Juergen


end of thread, other threads:[~2018-05-28 15:53 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-24 13:41 [PATCH] x86/XPTI: fix S3 resume (and CPU offlining in general) Jan Beulich
2018-05-24 13:48 ` Andrew Cooper
2018-05-24 14:05   ` Jan Beulich
2018-05-24 14:00 ` Simon Gaiser
2018-05-24 14:08   ` Jan Beulich
2018-05-24 14:14     ` Simon Gaiser
2018-05-24 14:18       ` Andrew Cooper
2018-05-24 14:22         ` Jan Beulich
2018-05-24 14:24           ` Andrew Cooper
2018-05-24 14:31             ` Jan Beulich
2018-05-24 14:35         ` Simon Gaiser
2018-05-24 14:53           ` Andrew Cooper
2018-05-24 15:10             ` George Dunlap
2018-05-24 15:16             ` Simon Gaiser
2018-05-24 14:28       ` Jan Beulich
2018-05-24 15:10         ` Simon Gaiser
2018-05-24 15:31           ` Jan Beulich
2018-05-24 15:46           ` Jan Beulich
2018-05-24 16:12             ` Simon Gaiser
     [not found] <5B06C0F902000078001C5925@suse.com>
2018-05-28  4:26 ` Juergen Gross
