Date: Fri, 5 Feb 2016 08:29:48 +0530
From: Viresh Kumar
To: "Rafael J. Wysocki"
Cc: "Rafael J. Wysocki", Linux PM list, Linux Kernel Mailing List,
	Srinivas Pandruvada, Juri Lelli, Steve Muckle, Saravana Kannan
Subject: Re: [PATCH 2/11] cpufreq: governor: Use common mutex for dbs_data protection
Message-ID: <20160205025948.GE3068@vireshk>
References: <3705929.bslqXH980s@vostro.rjw.lan>
	<1529283.0IedZktI9q@vostro.rjw.lan>
	<20160204050954.GU3469@vireshk>
User-Agent: Mutt/1.5.23 (2014-03-12)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 04-02-16, 17:46, Rafael J. Wysocki wrote:
> On Thu, Feb 4, 2016 at 6:09 AM, Viresh Kumar wrote:
> > On 04-02-16, 00:16, Rafael J. Wysocki wrote:
> >> From: Rafael J. Wysocki
> >>
> >> Every governor relying on the common code in cpufreq_governor.c
> >> has to provide its own mutex in struct common_dbs_data. However,
> >> those mutexes are never used at the same time
> >
> > Why do you think so? I thought they can always be used in parallel.
> >
> > Consider 2 or more policies: one can have ondemand as the governor,
> > whereas the other one can have conservative.
> >
> > If CPUs go online/offline or if governors are switching in parallel,
> > then cpufreq_governor_dbs() can very much run in parallel for ondemand
> > and conservative.
> >
> > Or am I missing something here?
>
> Well, so perhaps the changelog is inaccurate.
>
> However, what's wrong with using a single mutex then?

You are killing the possibility of running this code in parallel.
Consider this:

- A 16-policy system with N CPUs in every policy (IBM has something
  similar :)).

- 4 policies using ondemand, 4 using conservative, 4 using powersave
  and 4 using performance.

- Now, if we try to change the governors for all of them in parallel,
  only one switch will be done at a time and the others will have to
  wait on this BIG-kernel lock.

- Ideally the lock shouldn't have been in cdata at all, but only in
  dbs_data. However, there was a specific race because of which we
  were required to move it to a higher level, i.e. cdata. And so we
  already killed the possibility of parallelism between multiple
  policies using the same governor type (of course, only for
  update-sampling-rate and cpufreq_governor_dbs()).

So, it makes things much slower.

--
viresh
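A minimal, standalone sketch of the serialization being described above,
assuming simplified userspace stand-ins: pthread mutexes instead of the
kernel mutexes, and hypothetical names such as global_dbs_mutex,
per_gov_mutex and governor_switch() rather than the real cpufreq symbols
(in the actual code the lock in question sits in struct common_dbs_data /
struct dbs_data and protects cpufreq_governor_dbs()). With a single
shared lock, governor switches for all four policies run one at a time;
with one lock per governor type, only switches of the same governor type
serialize among themselves.

/*
 * Userspace sketch only -- not kernel code. Models one global mutex
 * vs. one mutex per governor type for the "governor switch" path.
 * All names here (global_dbs_mutex, per_gov_mutex, governor_switch)
 * are illustrative, not the real cpufreq identifiers.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_GOV_TYPES 2                  /* e.g. ondemand and conservative */

static pthread_mutex_t global_dbs_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t per_gov_mutex[NR_GOV_TYPES] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

struct policy {
	int id;
	int gov;                        /* index into per_gov_mutex[] */
};

static int use_global_lock = 1;         /* 1: single mutex, 0: per-governor */

/* Stand-in for the work done under the lock on a governor switch. */
static void governor_switch(struct policy *p)
{
	pthread_mutex_t *lock = use_global_lock ?
		&global_dbs_mutex : &per_gov_mutex[p->gov];

	pthread_mutex_lock(lock);
	printf("policy %d: switching governor of type %d\n", p->id, p->gov);
	usleep(100 * 1000);             /* pretend init/exit takes a while */
	pthread_mutex_unlock(lock);
}

static void *worker(void *arg)
{
	governor_switch(arg);
	return NULL;
}

int main(void)
{
	/* Four policies: two on governor type 0, two on governor type 1. */
	struct policy policies[4] = { {0, 0}, {1, 0}, {2, 1}, {3, 1} };
	pthread_t tid[4];
	int i;

	for (i = 0; i < 4; i++)
		pthread_create(&tid[i], NULL, worker, &policies[i]);
	for (i = 0; i < 4; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Compile with "gcc -pthread sketch.c" (the file name is just an example)
and flip use_global_lock to 0 to see switches for different governor
types overlap instead of running strictly one after another.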