Date: Mon, 27 Mar 2017 16:03:41 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: luca abeni <luca.abeni@santannapisa.it>
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Juri Lelli, Claudio Scordino, Steven Rostedt, Tommaso Cucinotta, Daniel Bristot de Oliveira, Joel Fernandes, Mathieu Poirier
Subject: Re: [RFC v5 9/9] sched/deadline: also reclaim bandwidth not used by dl tasks
Message-ID: <20170327140341.yvjjr6hbow2jug3t@hirez.programming.kicks-ass.net>
References: <1490327582-4376-1-git-send-email-luca.abeni@santannapisa.it> <1490327582-4376-10-git-send-email-luca.abeni@santannapisa.it>
In-Reply-To: <1490327582-4376-10-git-send-email-luca.abeni@santannapisa.it>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 24, 2017 at 04:53:02AM +0100, luca abeni wrote:
> +static inline
> +void __dl_update(struct dl_bw *dl_b, s64 bw)
> +{
> +	struct root_domain *rd = container_of(dl_b, struct root_domain, dl_bw);
> +	int i;
> +
> +	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
> +			 "sched RCU must be held");
> +	for_each_cpu_and(i, rd->span, cpu_active_mask) {
> +		struct rq *rq = cpu_rq(i);
> +
> +		rq->dl.extra_bw += bw;
> +	}

So this is unfortunate (and we already have one such instance). It
effectively does a for_each_online_cpu() with IRQs disabled, and on SGI
class hardware that takes _forever_.

This is also what I got stuck on trying to rewrite AC to use Tommaso's
recoverable thing. In the end I had to do a 2-stage try/commit variant,
which ended up being a pain and I didn't finish.

I'm not saying this patch is bad, but this is something we need to
think about.