From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755930AbcANPgR (ORCPT );
	Thu, 14 Jan 2016 10:36:17 -0500
Received: from mail-wm0-f53.google.com ([74.125.82.53]:36432 "EHLO
	mail-wm0-f53.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755369AbcANPZe (ORCPT );
	Thu, 14 Jan 2016 10:25:34 -0500
From: Luca Abeni
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Luca Abeni
Subject: [RFC 2/8] Correctly track the active utilisation for migrating tasks
Date: Thu, 14 Jan 2016 16:24:47 +0100
Message-Id: <1452785094-3086-3-git-send-email-luca.abeni@unitn.it>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1452785094-3086-1-git-send-email-luca.abeni@unitn.it>
References: <1452785094-3086-1-git-send-email-luca.abeni@unitn.it>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Fix active utilisation accounting on migration: when a task is migrated
from CPUi to CPUj, immediately subtract the task's utilisation from CPUi
and add it to CPUj. This mechanism is implemented by modifying the push
and pull functions.
Note: this is not fully correct from the theoretical point of view
(the utilisation should be removed from CPUi only at the 0 lag time),
but doing the right thing would be _MUCH_ more complex (leaving the
timer armed when the task is on a different CPU... Inactive timers
should be moved from per-task timers to per-runqueue lists of timers!
Bah...)
---
 kernel/sched/deadline.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e779cce..8d7ee79 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1541,7 +1541,9 @@ retry:
 	}
 
 	deactivate_task(rq, next_task, 0);
+	clear_running_bw(&next_task->dl, &rq->dl);
 	set_task_cpu(next_task, later_rq->cpu);
+	add_running_bw(&next_task->dl, &later_rq->dl);
 	activate_task(later_rq, next_task, 0);
 	ret = 1;
 
@@ -1629,7 +1631,9 @@ static void pull_dl_task(struct rq *this_rq)
 			resched = true;
 
 			deactivate_task(src_rq, p, 0);
+			clear_running_bw(&p->dl, &src_rq->dl);
 			set_task_cpu(p, this_cpu);
+			add_running_bw(&p->dl, &this_rq->dl);
 			activate_task(this_rq, p, 0);
 			dmin = p->dl.deadline;
 
-- 
1.9.1
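
For context, a minimal sketch of what the two helpers used in the hunks
above are assumed to look like; they are introduced earlier in this
series and are not part of this patch. The sketch assumes struct dl_rq
gains a running_bw field that accumulates the dl_bw of the -deadline
tasks currently accounted as active on that runqueue:

	/*
	 * Sketch only, assuming an earlier patch in this series adds a
	 * running_bw accumulator to struct dl_rq: the helpers move the
	 * task's dl_bw contribution between per-runqueue accumulators.
	 */
	static inline void add_running_bw(struct sched_dl_entity *dl_se,
					  struct dl_rq *dl_rq)
	{
		dl_rq->running_bw += dl_se->dl_bw;	/* start accounting the task here */
	}

	static inline void clear_running_bw(struct sched_dl_entity *dl_se,
					    struct dl_rq *dl_rq)
	{
		dl_rq->running_bw -= dl_se->dl_bw;	/* stop accounting the task here */
	}

With these, the push/pull hunks keep each runqueue's running_bw in sync
with the tasks currently queued on it, at the cost of releasing the
bandwidth on the source CPU earlier than the 0 lag time would allow, as
noted in the changelog.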