From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Andre Przywara <andre.przywara@amd.com>
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"Diestelhorst, Stephan" <Stephan.Diestelhorst@amd.com>
Subject: Re: Hypervisor crash(!) on xl cpupool-numa-split
Date: Mon, 14 Feb 2011 17:57:09 +0000
Message-ID: <AANLkTimkRAHtM4CoTskQ7w6B-8Pis4B2+k7=frxM3oyW@mail.gmail.com>
In-Reply-To: <4D54E79E.3000800@amd.com>


The good news is, I've managed to reproduce this on my local test
hardware with 1x4x2 (1 socket, 4 cores, 2 threads per core) using the
attached script.  It's time to go home now, but I should be able to
dig something up tomorrow.

To use the script:
* Rename cpupool0 to "p0", and create an empty second pool, "p1"
* You can adjust its behavior by passing "arg=val" arguments.
* Arguments are:
 + dryrun={true,false}: Do the work, but don't actually execute any
xl commands.  Default false.
 + left: Number of commands to execute.  Default 10.
 + maxcpus: Highest numerical value for a cpu.  Default 7 (i.e., 0-7
is 8 cpus).
 + verbose={true,false}: Print what you're doing.  Default true.
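
For concreteness, setup plus a first run might look like this (a
sketch only; the exact cpupool-rename/cpupool-create syntax depends
on your xl version):

  xl cpupool-rename Pool-0 p0        # "Pool-0" is xl's default name
  echo 'name="p1"' > /tmp/p1.cfg     # minimal pool config, no cpus yet
  xl cpupool-create /tmp/p1.cfg
  ./cpupool-test.sh dryrun=true left=20    # preview the commands
  ./cpupool-test.sh left=100               # then do it for real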

The script sometimes attempts to remove the last cpu from cpupool0,
which libxl rejects with an error.  The script ignores the error in
that case; on any other failure it prints diagnostic information.
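
The core of the script is a loop along these lines (a simplified
reconstruction for illustration, not the attached script itself; the
xl option spellings are from memory):

  #!/bin/bash
  # Simplified sketch of cpupool-test.sh's main loop.
  verbose=true; dryrun=false; left=10; maxcpus=7
  for arg in "$@"; do eval "$arg"; done    # accept arg=val overrides

  while [ "$left" -gt 0 ]; do
      cpu=$((RANDOM % (maxcpus + 1)))
      if [ $((RANDOM % 2)) -eq 0 ]; then
          cmd="xl cpupool-cpu-remove p0 $cpu"  # may be p0's last cpu
      else
          cmd="xl cpupool-cpu-add p1 $cpu"     # fails if $cpu isn't free
      fi
      [ "$verbose" = "true" ] && echo "$cmd"
      if [ "$dryrun" != "true" ] && ! $cmd 2>/dev/null; then
          # The real script ignores only the expected "last cpu in
          # cpupool0" failure; this sketch just dumps the pool state
          # on any failure.
          xl cpupool-list -c
      fi
      left=$((left - 1))
  done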

What finally crashed it for me was this command:
# ./cpupool-test.sh verbose=false left=1000

 -George

On Fri, Feb 11, 2011 at 7:39 AM, Andre Przywara <andre.przywara@amd.com> wrote:
> Juergen Gross wrote:
>>
>> On 02/10/11 15:18, Andre Przywara wrote:
>>>
>>> Andre Przywara wrote:
>>>>
>>>> On 02/10/2011 07:42 AM, Juergen Gross wrote:
>>>>>
>>>>> On 02/09/11 15:21, Juergen Gross wrote:
>>>>>>
>>>>>> Andre, George,
>>>>>>
>>>>>>
>>>>>> What seems to be interesting: I think the problem always occurred
>>>>>> when a new cpupool was created and the first cpu was moved to it.
>>>>>>
>>>>>> I think my previous assumption regarding the master_ticker was not
>>>>>> too bad.  I think somehow the master_ticker of the new cpupool is
>>>>>> becoming active before the scheduler is really initialized properly.
>>>>>> This could happen if enough time is spent between alloc_pdata for the
>>>>>> cpu to be moved and the critical section in schedule_cpu_switch().
>>>>>>
>>>>>> The solution should be to activate the timers only if the scheduler is
>>>>>> ready for them.
>>>>>>
>>>>>> George, do you think the master_ticker should be stopped in
>>>>>> suspend_ticker as well?  I still see potential problems for entering
>>>>>> deep C-states.  I think I'll prepare a patch which will keep the
>>>>>> master_ticker active for the C-state case and migrate it for the
>>>>>> schedule_cpu_switch() case.
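
For illustration, the idea Juergen describes amounts to something
like this (a sketch, not his actual patch; the "ready" flag is
hypothetical and the 30ms period is a placeholder):

  /* Sketch only: arm the pool's master ticker only once the
   * scheduler's private data is fully set up.  "ready" is a
   * hypothetical flag, not a real csched_private field. */
  static void csched_start_master_ticker(struct csched_private *prv)
  {
      if ( prv->ready )
          set_timer(&prv->master_ticker, NOW() + MILLISECS(30));
  }
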
>>>>>
>>>>> Okay, here is a patch for this. It ran on my 4-core machine without any
>>>>> problems.
>>>>> Andre, could you give it a try?
>>>>
>>>> Did, but unfortunately it crashed as always.  I tried twice and made
>>>> sure I booted the right kernel.  Sorry.
>>>> The idea of a race between the timer and the state change sounded very
>>>> appealing; in fact, that interaction had seemed suspicious to me from
>>>> the beginning.
>>>>
>>>> I will add some code to dump the state of all cpupools at the BUG_ON
>>>> to see what situation we are in when the bug triggers.
>>>
>>> OK, here is a first try at this: the patch iterates over all CPU pools
>>> and outputs some data when the
>>> BUG_ON((sdom->weight * sdom->active_vcpu_count) > weight_left)
>>> condition triggers:
>>> (XEN) CPU pool #0: 1 domains (SMP Credit Scheduler), mask: fffffffc003f
>>> (XEN) CPU pool #1: 0 domains (SMP Credit Scheduler), mask: fc0
>>> (XEN) CPU pool #2: 0 domains (SMP Credit Scheduler), mask: 1000
>>> (XEN) Xen BUG at sched_credit.c:1010
>>> ....
>>> The masks look proper (6 cores per node); the bug triggers when the
>>> first CPU is about to be(?) inserted.
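
(For reference, a dump producing output like the above is roughly the
following; the cpupool field names are my guesses for this changeset,
not Andre's actual patch:)

  /* Sketch only: walk the cpupool list and print each pool's state.
   * cpupool_list, cpupool_id, n_dom, sched->name, and cpu_valid are
   * assumptions about this changeset's structures. */
  static void dump_cpupools(void)
  {
      struct cpupool *c;

      for ( c = cpupool_list; c != NULL; c = c->next )
          printk("CPU pool #%d: %u domains (%s), mask: %lx\n",
                 c->cpupool_id, c->n_dom, c->sched->name,
                 cpus_addr(c->cpu_valid)[0]);
  }
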
>>
>> Sure? I'm missing the cpu with mask 2000.
>> I'll try to reproduce the problem on a larger machine here (24 cores,
>> 4 numa nodes).
>> Andre, can you give me your xen boot parameters?  Which xen changeset
>> are you running, and do you have any additional patches in use?
>
> The grub lines:
> kernel (hd1,0)/boot/xen-22858_debug_04.gz console=com1,vga com1=115200
> module (hd1,0)/boot/vmlinuz-2.6.32.27_pvops console=tty0 console=ttyS0,115200 ro root=/dev/sdb1 xencons=hvc0
>
> All of my experiments use c/s 22858 as a base.
> If you use an AMD Magny-Cours box for your experiments (socket C32 or G34),
> you should apply the following patch (which removes one line):
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -803,7 +803,6 @@ static void pv_cpuid(struct cpu_user_regs *regs)
>         __clear_bit(X86_FEATURE_SKINIT % 32, &c);
>         __clear_bit(X86_FEATURE_WDT % 32, &c);
>         __clear_bit(X86_FEATURE_LWP % 32, &c);
> -        __clear_bit(X86_FEATURE_NODEID_MSR % 32, &c);
>         __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
>         break;
>     case 5: /* MONITOR/MWAIT */
>
> This is not necessary (in fact it reverts my patch c/s 22815), but it raises
> the probability of triggering the bug, probably because it increases the
> pressure on the Dom0 scheduler.  If you cannot trigger it with Dom0, try
> creating a guest with many VCPUs and squeezing it into a small CPU pool.
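
For that last suggestion, a guest config along these lines should do
(all values illustrative; pool= names the target cpupool):

  name   = "stress"
  memory = 1024
  vcpus  = 16        # many vcpus...
  pool   = "p1"      # ...confined to a small cpupool
  # plus the usual kernel/disk settings
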
>
> Good luck ;-)
> Andre.
>
> --
> Andre Przywara
> AMD-OSRC (Dresden)
> Tel: x29712

[-- Attachment #2: cpupool-test.sh --]
[-- Type: application/x-sh, Size: 3043 bytes --]

