From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [Xen-devel] [PATCH] xen/sched: fix get_cpu_idle_time() with core scheduling
Date: Thu, 13 Feb 2020 15:35:04 +0100	[thread overview]
Message-ID: <20200213143504.23777-1-jgross@suse.com> (raw)

get_cpu_idle_time() calls vcpu_runstate_get() for an idle vcpu. With
core scheduling active this is fragile: idle vcpus are temporarily
assigned to other scheduling units, and in some cases that assignment
is changed without holding the scheduling lock. As vcpu_runstate_get()
passes v->sched_unit to unit_schedule_[un]lock_irq(), the ASSERT() in
the unlock path triggers whenever v->sched_unit has changed in between.

Fix that by latching the correct unit (for an idle vcpu, the idle unit
of its processor) in a local variable and using it for both the lock
and the unlock.
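
For illustration, a minimal sketch of the fragile pattern and of the
fixed one, using only the helpers touched by the hunk below (a code
fragment, not meant to build on its own):

    /* Fragile: lock and unlock each read v->sched_unit on their own. */
    lock = unit_schedule_lock_irq(v->sched_unit);
    /* ... the idle vcpu may be moved to another unit here ... */
    unit_schedule_unlock_irq(lock, v->sched_unit);  /* ASSERT() can fire */

    /* Fixed: latch a stable unit once and use it for both calls. */
    unit = is_idle_vcpu(v) ? get_sched_res(v->processor)->sched_unit_idle
                           : v->sched_unit;
    lock = unit_schedule_lock_irq(unit);
    /* ... access v->runstate under the lock ... */
    unit_schedule_unlock_irq(lock, unit);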

Signed-off-by: Juergen Gross <jgross@suse.com>
---
I have verified that all other uses of v->sched_unit are not
problematic: they either apply to non-idle vcpus, or they are in
scheduling paths which are aware of the potential problem or are not
vulnerable to it.
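
For context, the affected call path is roughly the following
(paraphrased from memory rather than copied from the tree, so the
exact body may differ): get_cpu_idle_time() hands the per-cpu idle
vcpu to vcpu_runstate_get(), which is where the unit locking discussed
above happens.

    /* Paraphrased sketch of the caller; the in-tree body may differ. */
    uint64_t get_cpu_idle_time(unsigned int cpu)
    {
        struct vcpu_runstate_info state = { 0 };
        const struct vcpu *v = idle_vcpu[cpu];

        if ( cpu_online(cpu) && v )
            vcpu_runstate_get(v, &state);  /* locks the idle vcpu's unit */

        return state.time[RUNSTATE_running];
    }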
---
 xen/common/sched/core.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2e43f8029f..de5a6b1a57 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -308,17 +308,26 @@ void vcpu_runstate_get(const struct vcpu *v,
 {
     spinlock_t *lock;
     s_time_t delta;
+    struct sched_unit *unit;
 
     rcu_read_lock(&sched_res_rculock);
 
-    lock = likely(v == current) ? NULL : unit_schedule_lock_irq(v->sched_unit);
+    /*
+     * Be careful in case of an idle vcpu: the assignment to a unit might
+     * change even with the scheduling lock held, so be sure to use the
+     * correct unit for locking in order to avoid triggering an ASSERT() in
+     * the unlock function.
+     */
+    unit = is_idle_vcpu(v) ? get_sched_res(v->processor)->sched_unit_idle
+                           : v->sched_unit;
+    lock = likely(v == current) ? NULL : unit_schedule_lock_irq(unit);
     memcpy(runstate, &v->runstate, sizeof(*runstate));
     delta = NOW() - runstate->state_entry_time;
     if ( delta > 0 )
         runstate->time[runstate->state] += delta;
 
     if ( unlikely(lock != NULL) )
-        unit_schedule_unlock_irq(lock, v->sched_unit);
+        unit_schedule_unlock_irq(lock, unit);
 
     rcu_read_unlock(&sched_res_rculock);
 }
-- 
2.16.4

