From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751432AbdEAQYR (ORCPT );
	Mon, 1 May 2017 12:24:17 -0400
Received: from merlin.infradead.org ([205.233.59.134]:50688 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750771AbdEAQYM (ORCPT );
	Mon, 1 May 2017 12:24:12 -0400
Date: Mon, 1 May 2017 18:24:05 +0200
From: Peter Zijlstra
To: Tejun Heo
Cc: Ingo Molnar, linux-kernel@vger.kernel.org, Linus Torvalds,
	Mike Galbraith, Paul Turner, Chris Mason, kernel-team@fb.com
Subject: Re: [PATCH 1/2] sched/fair: Use task_groups instead of
	leaf_cfs_rq_list to walk all cfs_rqs
Message-ID: <20170501162405.gws4fyib67q4iezx@hirez.programming.kicks-ass.net>
References: <20170426004039.GA3222@wtj.duckdns.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170426004039.GA3222@wtj.duckdns.org>
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 25, 2017 at 05:40:39PM -0700, Tejun Heo wrote:
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4644,23 +4644,32 @@ static void destroy_cfs_bandwidth(struct
>  
>  static void __maybe_unused update_runtime_enabled(struct rq *rq)
>  {
> -	struct cfs_rq *cfs_rq;
> +	struct task_group *tg;
>  
> -	for_each_leaf_cfs_rq(rq, cfs_rq) {
> -		struct cfs_bandwidth *cfs_b = &cfs_rq->tg->cfs_bandwidth;
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(tg, &task_groups, list) {
> +		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
> +		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> +
> +		if (!cfs_rq->online)
> +			continue;
>  
>  		raw_spin_lock(&cfs_b->lock);
>  		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
>  		raw_spin_unlock(&cfs_b->lock);
>  	}
> +	rcu_read_unlock();
>  }
>  
>  static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>  {
> -	struct cfs_rq *cfs_rq;
> +	struct task_group *tg;
>  
> -	for_each_leaf_cfs_rq(rq, cfs_rq) {
> -		if (!cfs_rq->runtime_enabled)
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(tg, &task_groups, list) {
> +		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> +
> +		if (!cfs_rq->online || !cfs_rq->runtime_enabled)
>  			continue;
>  
>  		/*
> @@ -4677,6 +4686,7 @@ static void __maybe_unused unthrottle_of
>  		if (cfs_rq_throttled(cfs_rq))
>  			unthrottle_cfs_rq(cfs_rq);
>  	}
> +	rcu_read_unlock();
>  }

Note that both of these are called with rq->lock held. I don't think we
need to actually add rcu_read_lock() here, since the rq->lock we hold
fully serializes us against your ->online = 0 store, which is likewise
done under rq->lock.

Also, arguably you can keep using for_each_leaf_cfs_rq() for the
unthrottle, because I don't think we can (should?) remove a throttled
group from the leaf list -- it's not empty after all.

Then again, this is CPU hotplug code and nobody much cares about its
performance, and it's better to be safe than sorry, so yes, use the
task_groups list to make sure we reach all groups.
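
For concreteness, the first point would reduce to something like the
below. This is a completely untested sketch: the lockdep_assert_held()
annotation is my addition, it assumes the ->online flag from your
series, and PROVE_RCU might still want an explicit rcu_read_lock()
around the list walk itself (groups going away) even though rq->lock
pins everything we dereference:

static void __maybe_unused update_runtime_enabled(struct rq *rq)
{
	struct task_group *tg;

	/* Both callers hold rq->lock; document that instead of RCU. */
	lockdep_assert_held(&rq->lock);

	/*
	 * The ->online = 0 store in the offline path is also done
	 * under rq->lock, so no cfs_rq can change online state
	 * underneath us while we walk the list.
	 */
	list_for_each_entry_rcu(tg, &task_groups, list) {
		struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

		if (!cfs_rq->online)
			continue;

		raw_spin_lock(&cfs_b->lock);
		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
		raw_spin_unlock(&cfs_b->lock);
	}
}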
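
And the leaf-list variant of the unthrottle path I talked myself out of
above would look roughly like so -- again untested, and the two
assignments in the middle are reconstructed from the existing code that
your diff elides:

static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct cfs_rq *cfs_rq;

	lockdep_assert_held(&rq->lock);

	/*
	 * A throttled cfs_rq still has entities queued on it, so it
	 * cannot have been removed from the leaf list; walking the
	 * leaf list therefore reaches every group that can possibly
	 * need unthrottling.
	 */
	for_each_leaf_cfs_rq(rq, cfs_rq) {
		if (!cfs_rq->runtime_enabled)
			continue;

		/* clock_task is frozen; just provide some valid quota. */
		cfs_rq->runtime_remaining = 1;

		/*
		 * The rq stays schedulable until take_cpu_down(); block
		 * new throttling from here on.
		 */
		cfs_rq->runtime_enabled = 0;

		if (cfs_rq_throttled(cfs_rq))
			unthrottle_cfs_rq(cfs_rq);
	}
}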