From: Andre Przywara <andre.przywara@amd.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Keir Fraser <keir@xen.org>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stephan Diestelhorst <stephan.diestelhorst@amd.com>
Subject: Re: Hypervisor crash(!) on xl cpupool-numa-split
Date: Tue, 1 Feb 2011 17:32:25 +0100
Message-ID: <4D483599.1060807@amd.com>
In-Reply-To: <AANLkTi=ppBtb1nhdfbhGZa0Rt6kVyopdS3iJPr5fVA1x@mail.gmail.com>

Hi folks,

I asked Stephan Diestelhorst for help, and after I convinced him that 
removing credit and making SEDF the default again is not an option, he 
worked on this together with me ;-) Many thanks for that!
We haven't come to a final solution yet, but we could gather some debug 
data. I will simply dump some of it here; maybe somebody has a clue. We 
will work further on this tomorrow.

First I replaced the BUG_ON with some printks to get some insight:
(XEN) sdom->active_vcpu_count: 18
(XEN) sdom->weight: 256
(XEN) weight_left: 4096, weight_total: 4096
(XEN) credit_balance: 0, credit_xtra: 0, credit_cap: 0
(XEN) Xen BUG at sched_credit.c:591
(XEN) ----[ Xen-4.1.0-rc2-pre  x86_64  debug=y  Not tainted ]----

So this shows that the number of active VCPUs is out of sync with the 
computed weight sum; we have seen a difference of one or two VCPUs (in 
this case the weight sum corresponds to 16 VCPUs, while 
active_vcpu_count is 18). It also shows that the assertion kicks in 
during the first iteration of the loop, where weight_left and 
weight_total are still equal.
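
For reference, the instrumentation was essentially the following (just a 
sketch; it assumes the BUG_ON at sched_credit.c:591 is the weight 
consistency check in csched_acct(), and the variable names are simply 
those from the printk output above):

    /* Sketch: dump the accounting state instead of (or right before) the
     * weight consistency BUG_ON in csched_acct(); keeping or dropping the
     * BUG_ON decides whether the hypervisor survives for further tracing. */
    if ( (sdom->weight * sdom->active_vcpu_count) > weight_left )
    {
        printk("sdom->active_vcpu_count: %d\n", sdom->active_vcpu_count);
        printk("sdom->weight: %d\n", sdom->weight);
        printk("weight_left: %d, weight_total: %d\n",
               weight_left, weight_total);
        printk("credit_balance: %d, credit_xtra: %d, credit_cap: %d\n",
               credit_balance, credit_xtra, credit_cap);
    }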

So I additionally instrumented alloc_pdata and free_pdata; the 
unprefixed lines below come from a shell script mimicking the 
functionality of cpupool-numa-split.
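Roughly, that instrumentation looks like this (again just a sketch; the 
hooks are the credit scheduler's pdata callbacks, and I assume the 
per-scheduler private data keeps a CPU counter, called prv->ncpus here):

    /* Sketch: trace pCPUs entering and leaving a scheduler instance, i.e.
     * a cpupool; prv->ncpus is assumed to be that instance's CPU counter. */

    /* in csched_alloc_pdata(), after the counter has been incremented: */
    printk("adding CPU %d, now %d CPUs\n", cpu, prv->ncpus);

    /* in csched_free_pdata(), after the counter has been decremented: */
    printk("removing CPU %d, remaining: %d\n", cpu, prv->ncpus);
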
------------
Removing CPUs from Pool 0
Creating new pool
Using config file "cpupool.test"
cpupool name:   Pool-node6
scheduler:      credit
number of cpus: 1
(XEN) adding CPU 36, now 1 CPUs
(XEN) removing CPU 36, remaining: 17
Populating new pool
(XEN) sdom->active_vcpu_count: 9
(XEN) sdom->weight: 256
(XEN) weight_left: 2048, weight_total: 2048
(XEN) credit_balance: 0, credit_xtra: 0, credit_cap: 0
(XEN) adding CPU 37, now 2 CPUs
(XEN) removing CPU 37, remaining: 16
(XEN) adding CPU 38, now 3 CPUs
(XEN) removing CPU 38, remaining: 15
(XEN) adding CPU 39, now 4 CPUs
(XEN) removing CPU 39, remaining: 14
(XEN) adding CPU 40, now 5 CPUs
(XEN) removing CPU 40, remaining: 13
(XEN) sdom->active_vcpu_count: 17
(XEN) sdom->weight: 256
(XEN) weight_left: 4096, weight_total: 4096
(XEN) credit_balance: 0, credit_xtra: 0, credit_cap: 0
(XEN) adding CPU 41, now 6 CPUs
(XEN) removing CPU 41, remaining: 12
...
Two things startled me:
1) There is quite some delay between the "Removing CPUs" message from 
the script and the actual HV printk showing that it has happened. Why 
is that not synchronous? Looking at the code, __csched_vcpu_acct_start() 
is eventually triggered by a timer; shouldn't it be triggered 
synchronously by the add/remove events?
2) It clearly shows that each CPU gets added to the new pool _before_ it 
gets removed from the old one (Pool-0). Isn't that violating the "only 
one pool per CPU" rule? Even if that is fine for a short period of time, 
maybe the timer kicks in at this very moment, resulting in violated 
invariants? A sketch of an assertion that could catch this follows.
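
This is only a sketch along the lines of George's ASSERT suggestion quoted 
below; the names used here (per_cpu(cpupool, ...), cpu_affinity) are my 
assumptions about the 4.1 data structures, not verified against the tree:

    /* Sketch: sanity checks to run wherever the credit scheduler does
     * per-vcpu accounting on a pCPU; field names are assumptions. */
    ASSERT( cpu_isset(smp_processor_id(), current->cpu_affinity) );
    ASSERT( per_cpu(cpupool, smp_processor_id()) == current->domain->cpupool );

If either of these fires, a vcpu is being run on a pCPU outside its pool 
or affinity mask, which would explain the broken accounting.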

Yours confused,
Andre.

George Dunlap wrote:
> On Mon, Jan 31, 2011 at 2:59 PM, Andre Przywara <andre.przywara@amd.com> wrote:
>> Right, that was also my impression.
>>
>> I seemed to get a bit further, though:
>> By accident I found that in c/s 22846 the issue is fixed, it works now
>> without crashing. I bisected it down to my own patch, which disables the
>> NODEID_MSR in Dom0. I could confirm this theory by a) applying this single
>> line (clear_bit(NODEID_MSR)) to 22799 and _not_ seeing it crash and b) by
>> removing this line from 22846 and seeing it crash.
>>
>> So my theory is that Dom0 sees different nodes on its virtual CPUs via the
>> physical NodeID MSR, but this association can (and will) be changed at any
>> moment by the Xen scheduler. So Dom0 will build a bogus topology based upon
>> these values. As soon as all vCPUs of Dom0 are confined to one node (node
>> 0, which is caused by the cpupool-numa-split call), the Xen scheduler somehow
>> hiccups.
>> So it seems to be a bad combination of the NodeID MSR (on newer AMD
>> platforms: sockets C32 and G34) and a NodeID-MSR-aware Dom0 (2.6.32.27).
>> Since this is a hypervisor crash, I assume that the bug is still there; the
>> current tip just makes it much less likely to be triggered.
>>
>> Hope that helps; I will dig deeper now.
> 
> Thanks.  The crashes you're getting are in fact very strange.  They
> have to do with assumptions that the credit scheduler makes as part of
> its accounting process.  It would only make sense for those to be
> triggered if a vcpu was moved from one pool to another pool without
> the proper accounting being done.  (Specifically, each vcpu is
> classified as either "active" or "inactive"; and each scheduler
> instance keeps track of the total weight of all "active" vcpus.  The
> BUGs you're tripping over are saying that this invariant has been
> violated.)  However, I've looked at the cpupools vcpu-migrate code,
> and it looks like it does everything right.  So I'm a bit mystified.
> My only thought is that possibly a cpumask somewhere wasn't getting
> set properly, such that a vcpu was being run on a cpu from another
> pool.
> 
> Unfortunately I can't take a good look at this right now; hopefully
> I'll be able to take a look next week.
> 
> Andre, if you were keen, you might go through the credit code and put
> in a bunch of ASSERTs that the current pcpu is in the mask of the
> current vcpu; and that the current vcpu is assigned to the pool of the
> current pcpu, and so on.
> 
>  -George
> 


-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany

Thread overview: 53+ messages
2011-01-27 23:18 Hypervisor crash(!) on xl cpupool-numa-split Andre Przywara
2011-01-28  6:47 ` Juergen Gross
2011-01-28 11:07   ` Andre Przywara
2011-01-28 11:44     ` Juergen Gross
2011-01-28 13:14       ` Andre Przywara
2011-01-31  7:04         ` Juergen Gross
2011-01-31 14:59           ` Andre Przywara
2011-01-31 15:28             ` George Dunlap
2011-02-01 16:32               ` Andre Przywara [this message]
2011-02-02  6:27                 ` Juergen Gross
2011-02-02  8:49                   ` Juergen Gross
2011-02-02 10:05                     ` Juergen Gross
2011-02-02 10:59                       ` Andre Przywara
2011-02-02 14:39                 ` Stephan Diestelhorst
2011-02-02 15:14                   ` Juergen Gross
2011-02-02 16:01                     ` Stephan Diestelhorst
2011-02-03  5:57                       ` Juergen Gross
2011-02-03  9:18                         ` Juergen Gross
2011-02-04 14:09                           ` Andre Przywara
2011-02-07 12:38                             ` Andre Przywara
2011-02-07 13:32                               ` Juergen Gross
2011-02-07 15:55                                 ` George Dunlap
2011-02-08  5:43                                   ` Juergen Gross
2011-02-08 12:08                                     ` George Dunlap
2011-02-08 12:14                                       ` George Dunlap
2011-02-08 16:33                                         ` Andre Przywara
2011-02-09 12:27                                           ` George Dunlap
2011-02-09 12:27                                             ` George Dunlap
2011-02-09 13:04                                               ` Juergen Gross
2011-02-09 13:39                                                 ` Andre Przywara
2011-02-09 13:51                                               ` Andre Przywara
2011-02-09 14:21                                                 ` Juergen Gross
2011-02-10  6:42                                                   ` Juergen Gross
2011-02-10  9:25                                                     ` Andre Przywara
2011-02-10 14:18                                                       ` Andre Przywara
2011-02-11  6:17                                                         ` Juergen Gross
2011-02-11  7:39                                                           ` Andre Przywara
2011-02-14 17:57                                                             ` George Dunlap
2011-02-15  7:22                                                               ` Juergen Gross
2011-02-16  9:47                                                                 ` Juergen Gross
2011-02-16 13:54                                                                   ` George Dunlap
     [not found]                                                                     ` <4D6237C6.1050206@amd.com>
2011-02-16 14:11                                                                     ` Juergen Gross
2011-02-16 14:28                                                                       ` Juergen Gross
2011-02-17  0:05                                                                       ` André Przywara
2011-02-17  7:05                                                                     ` Juergen Gross
2011-02-17  9:11                                                                       ` Juergen Gross
2011-02-21 10:00                                                                     ` Andre Przywara
2011-02-21 13:19                                                                       ` Juergen Gross
2011-02-21 14:45                                                                         ` Andre Przywara
2011-02-21 14:50                                                                           ` Juergen Gross
2011-02-08 12:23                                       ` Juergen Gross
2011-01-28 11:13   ` George Dunlap
2011-01-28 13:05     ` Andre Przywara
