From: George Dunlap <George.Dunlap@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <jgross@suse.com>,
Charles Arnold <carnold@suse.com>,
Tomas Mozes <hydrapolic@gmail.com>, Glen <glenbarney@gmail.com>,
Jan Beulich <jbeulich@suse.com>, Sarah Newman <srn@prgmr.com>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/2] xen: credit2: avoid vCPUs to ever reach lower credits than idle
Date: Thu, 12 Mar 2020 14:45:53 +0000 [thread overview]
Message-ID: <2B668743-662D-4A34-9ADE-F699A7BABF8A@citrix.com> (raw)
In-Reply-To: <158402065414.753.15785539969715690913.stgit@Palanthas>
> On Mar 12, 2020, at 1:44 PM, Dario Faggioli <dfaggioli@suse.com> wrote:
>
> There have been reports of stalls of guest vCPUs when Credit2 was used.
> It seemed like these vCPUs were not getting scheduled for a very long
> time, even under light load conditions (e.g., during dom0 boot).
>
> Investigations led to the discovery that --although rarely-- it can
> happen that a vCPU manages to run for very long timeslices. In Credit2,
> this means that, when runtime accounting happens, the vCPU will lose a
> large quantity of credits. This in turn may lead to the vCPU having
> fewer credits than the idle vCPUs (-2^30). At this point, the scheduler
> will pick the idle vCPU instead of the ready-to-run vCPU for a few
> "epochs", which is often enough for the guest kernel to decide that the
> vCPU is not responding, and to crash.
>
> An example of this situation is shown here. In fact, we can see d0v1
> sitting in the runqueue while all the CPUs are idle, as it has
> -1254238270 credits, which is smaller than -2^30 = -1073741824:
>
> (XEN) Runqueue 0:
> (XEN) ncpus = 28
> (XEN) cpus = 0-27
> (XEN) max_weight = 256
> (XEN) pick_bias = 22
> (XEN) instload = 1
> (XEN) aveload = 293391 (~111%)
> (XEN) idlers: 00,00000000,00000000,00000000,00000000,00000000,0fffffff
> (XEN) tickled: 00,00000000,00000000,00000000,00000000,00000000,00000000
> (XEN) fully idle cores: 00,00000000,00000000,00000000,00000000,00000000,0fffffff
> [...]
> (XEN) Runqueue 0:
> (XEN) CPU[00] runq=0, sibling=00,..., core=00,...
> (XEN) CPU[01] runq=0, sibling=00,..., core=00,...
> [...]
> (XEN) CPU[26] runq=0, sibling=00,..., core=00,...
> (XEN) CPU[27] runq=0, sibling=00,..., core=00,...
> (XEN) RUNQ:
> (XEN) 0: [0.1] flags=0 cpu=5 credit=-1254238270 [w=256] load=262144 (~100%)
>
> We certainly don't want, under any circumstance, this to happen.
> Therefore, let's use INT_MIN for the credits of the idle vCPU, in
> Credit2, to be sure that no vCPU can get below that value.
>
> NOTE: investigations have been done into _how_ it is possible for a
> vCPU to execute for so long that its credits become so low. While still
> not completely clear, there is evidence that:
> - it only happens very rarely;
> - it appears to be both machine and workload specific;
> - it does not look to be a Credit2-specific issue (it happens when
> running with Credit1 as well), nor a scheduler issue at all.
>
> This patch makes Credit2 more robust to events like this, whatever
> the cause is, and should hence be backported (as far as possible).
>
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> Reported-by: Glen <glenbarney@gmail.com>
> Reported-by: Tomas Mozes <hydrapolic@gmail.com>
Nit: the Reported-by tags should come before the SoB (i.e., tags roughly in chronological order).
I think this is a good change to make the algorithm more robust, so:
Acked-by: George Dunlap <george.dunlap@citrix.com>
But it seems like allowing a guest to rack up credits all the way down to INT_MIN is still a bad thing, and it would be nice to have some other backstop / reset mechanism. But I guess, to have an effective mechanism of that sort, we'd want to understand how it happened in the first place.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 21+ messages (as of ~2020-03-12 14:46 UTC)
2020-03-12 13:44 [Xen-devel] [PATCH 0/2] xen: credit2: fix vcpu starvation due to too few credits Dario Faggioli
2020-03-12 13:44 ` [Xen-devel] [PATCH 1/2] xen: credit2: avoid vCPUs to ever reach lower credits than idle Dario Faggioli
2020-03-12 13:55 ` Andrew Cooper
2020-03-12 14:40 ` George Dunlap
2020-03-12 15:10 ` Dario Faggioli
2020-03-12 14:58 ` Dario Faggioli
2020-03-12 14:45 ` George Dunlap [this message]
2020-03-12 17:03 ` Dario Faggioli
2020-03-12 15:26 ` Jan Beulich
2020-03-12 16:00 ` Jürgen Groß
2020-03-12 16:59 ` Dario Faggioli
2020-03-12 16:11 ` Dario Faggioli
2020-03-12 16:36 ` Jan Beulich
2020-03-12 13:44 ` [Xen-devel] [PATCH 2/2] xen: credit2: fix credit reset happening too few times Dario Faggioli
2020-03-12 15:08 ` [Xen-devel] [PATCH 0/2] xen: credit2: fix vcpu starvation due to too few credits Roger Pau Monné
2020-03-12 17:02 ` Dario Faggioli
2020-03-12 17:59 ` Roger Pau Monné
2020-03-13 6:19 ` Dario Faggioli
2020-03-12 15:51 ` Jürgen Groß
2020-03-12 16:27 ` Andrew Cooper
2020-03-13 7:26 ` Dario Faggioli