Date: Tue, 19 Jan 2016 16:01:55 +0000
From: Juri Lelli
To: Peter Zijlstra
Cc: Michael Turquette, Viresh Kumar, linux-kernel@vger.kernel.org,
	linux-pm@vger.kernel.org, rjw@rjwysocki.net, steve.muckle@linaro.org,
	vincent.guittot@linaro.org, morten.rasmussen@arm.com,
	dietmar.eggemann@arm.com
Subject: Re: [RFC PATCH 18/19] cpufreq: remove transition_lock
Message-ID: <20160119160155.GH8573@e106622-lin>
References: <1452533760-13787-1-git-send-email-juri.lelli@arm.com>
 <1452533760-13787-19-git-send-email-juri.lelli@arm.com>
 <20160112112409.GJ1084@ubuntu>
 <20160113005452.10884.77606@quark.deferred.io>
 <20160113063148.GJ6050@ubuntu>
 <20160113182131.1168.45753@quark.deferred.io>
 <20160119140036.GG6344@twins.programming.kicks-ass.net>
 <20160119144233.GG8573@e106622-lin>
 <20160119153007.GZ6357@twins.programming.kicks-ass.net>
In-Reply-To: <20160119153007.GZ6357@twins.programming.kicks-ass.net>

On 19/01/16 16:30, Peter Zijlstra wrote:
> On Tue, Jan 19, 2016 at 02:42:33PM +0000, Juri Lelli wrote:
> > On 19/01/16 15:00, Peter Zijlstra wrote:
> > > On Wed, Jan 13, 2016 at 10:21:31AM -0800, Michael Turquette wrote:
> > > > RCU is absolutely not a magic bullet or elixir that lets us kick off
> > > > DVFS transitions from the schedule() context. The frequency transitions
> > > > are write-side operations, as we invariably touch struct cpufreq_policy.
> > > > This means that the read-side stuff can live in the schedule() context,
> > > > but write-side needs to be kicked out to a thread.
> > >
> > > Why? If the state is per-cpu and acquired by RCU, updates should be no
> > > problem at all.
> > >
> > > If you need inter-cpu state, then things get to be a little tricky
> > > though, but you can actually nest a raw_spinlock_t in there if you
> > > absolutely have to.
> >
> > We have at least two problems. First one is that state is per frequency
> > domain (struct cpufreq_policy) and this usually spans more than one cpu.
> > Second one is that we might need to sleep while servicing the frequency
> > transition, both because platform needs to sleep and because some paths
> > of cpufreq core use sleeping locks (yes, that might be changed as well I
> > guess). A solution based on spinlocks only might not be usable on
> > platforms that needs to sleep, also.
>
> Sure, if you need to actually sleep to poke the hardware you've lost and
> you do indeed need the kthread thingy.

Yeah, cpufreq also relies on blocking notifiers (to name one thing), so
it seems to me that quite a few things would need to change to make it
fully non-sleeping.

> > Another thing that I was thinking of actually is that since struct
> > cpufreq_policy is updated a lot (more or less at every frequency
> > transition), is it actually suitable for RCU?
>
> That entirely depends on how 'hard' it is to 'replace/change' the
> cpufreq policy.
>
> Typically I envision that to be very hard and require mutexes and the
> like, in which case RCU can provide a cheap lookup and existence.

Right, the read path is fast, but the write path still requires some
sort of locking (allocate, copy and update). So I'm wondering whether
this still pays off for a structure that gets written a lot.
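Just to make sure we're talking about the same pattern, below is a rough
sketch of what I understand the proposal to be. All names in it
(freq_dom_policy, dom_cur_freq(), dom_set_freq()) are made up for
illustration, this is not the actual cpufreq code: the read side stays
lock-free and non-sleeping so it could run from schedule() context,
while the write side takes a mutex, copies and republishes the policy,
and has to run from process context (the kthread above) because it can
sleep.

/*
 * Sketch only: a per frequency domain policy published through an
 * RCU-protected pointer. All names are hypothetical.
 */
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/string.h>

struct freq_dom_policy {
	unsigned int cur_freq;
	unsigned int min_freq, max_freq;
	struct rcu_head rcu;
};

static struct freq_dom_policy __rcu *dom_policy;
static DEFINE_MUTEX(dom_policy_lock);	/* serializes writers only */

/* Read side, e.g. from schedule() context: cheap, lock-free, no sleeping. */
static unsigned int dom_cur_freq(void)
{
	struct freq_dom_policy *p;
	unsigned int freq = 0;

	rcu_read_lock();
	p = rcu_dereference(dom_policy);
	if (p)
		freq = p->cur_freq;
	rcu_read_unlock();

	return freq;
}

/*
 * Write side, runs from process context (the kthread): may sleep in
 * the allocation, the mutex and the actual hardware programming.
 */
static int dom_set_freq(unsigned int new_freq)
{
	struct freq_dom_policy *old, *new;
	int ret = 0;

	mutex_lock(&dom_policy_lock);

	old = rcu_dereference_protected(dom_policy,
					lockdep_is_held(&dom_policy_lock));
	if (!old) {
		ret = -EINVAL;
		goto out;
	}

	new = kmemdup(old, sizeof(*old), GFP_KERNEL);
	if (!new) {
		ret = -ENOMEM;
		goto out;
	}
	new->cur_freq = new_freq;

	/* ... poke the hardware here; this is the part that may sleep ... */

	rcu_assign_pointer(dom_policy, new);
	kfree_rcu(old, rcu);	/* old copy goes away after a grace period */
out:
	mutex_unlock(&dom_policy_lock);
	return ret;
}

The allocate+copy+publish on every frequency transition is exactly the
part I'm not sure pays off, given how often we would be writing this
thing compared to a structure that changes rarely.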
> So on 'sane' hardware with per logical cpu hints you can get away
> without any locks.

But maybe you are saying that there are ways we can make that work :).

Thanks,

- Juri