From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <54CF601C.2080202@arm.com>
Date: Mon, 02 Feb 2015 11:31:40 +0000
From: Juri Lelli
To: Peter Zijlstra
CC: Luca Abeni, Kirill Tkhai, Ingo Molnar, "linux-kernel@vger.kernel.org"
Subject: Re: Another SCHED_DEADLINE bug (with bisection and possible fix)
References: <1420633741.12772.10.camel@yandex.ru>
 <54B4D2DF.9010308@arm.com>
 <4500351421141200@web2m.yandex.ru>
 <20150113140436.GI25256@twins.programming.kicks-ass.net>
 <4632021421239387@web25g.yandex.ru>
 <54B7A33F.20904@unitn.it>
 <20150115122323.GU23965@worktop.programming.kicks-ass.net>
 <54B7C232.8060806@unitn.it>
 <20150128140803.GF23038@twins.programming.kicks-ass.net>
 <54CB5E56.9080506@arm.com>
 <20150131095659.GD32343@twins.programming.kicks-ass.net>
In-Reply-To: <20150131095659.GD32343@twins.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On 31/01/2015 09:56, Peter Zijlstra wrote:
> On Fri, Jan 30, 2015 at 10:35:02AM +0000, Juri Lelli wrote:
>> So, we do the safe thing only in case of throttling.
>
> No, even for the !throttle aka running tasks. We only use
> dl_{runtime,deadline,period} for replenishment; until that time we
> observe the old runtime/deadline set by the previous replenishment.
>

Oh, right.
We set dl_new in __dl_clear_params(), nice.

Thanks,

- Juri

>> I guess it's more than
>> ok for now, while we hopefully find some spare cycle to implement a
>> complete solution :/.
>
> Yeah, I bet the fun part is computing the 0-lag across the entire root
> domain; per-cpu 0-lag isn't correct afaict.
>