From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751996AbdATMYU (ORCPT); Fri, 20 Jan 2017 07:24:20 -0500
Received: from bombadil.infradead.org ([65.50.211.133]:53582 "EHLO bombadil.infradead.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751885AbdATMYT (ORCPT);
	Fri, 20 Jan 2017 07:24:19 -0500
Date: Fri, 20 Jan 2017 10:20:33 +0100
From: Peter Zijlstra
To: David Carrillo-Cisneros
Cc: linux-kernel@vger.kernel.org, "x86@kernel.org", Ingo Molnar,
	Thomas Gleixner, Andi Kleen, Kan Liang, Borislav Petkov,
	Srinivas Pandruvada, Dave Hansen, Vikas Shivappa, Mark Rutland,
	Arnaldo Carvalho de Melo, Vince Weaver, Paul Turner,
	Stephane Eranian
Subject: Re: [PATCH v2 2/2] perf/core: Remove perf_cpu_context::unique_pmu
Message-ID: <20170120092033.GK6515@twins.programming.kicks-ass.net>
References: <20170118192454.58008-1-davidcc@google.com> <20170118192454.58008-3-davidcc@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170118192454.58008-3-davidcc@google.com>
User-Agent: Mutt/1.5.23.1 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 18, 2017 at 11:24:54AM -0800, David Carrillo-Cisneros wrote:
> cpuctx->unique_pmu was originally introduced as a way to identify cpuctxs
> with shared pmus in order to avoid visiting the same cpuctx more than once
> in a for_each_pmu loop.
>
> cpuctx->unique_pmu == cpuctx->pmu in non-software task contexts since they
> have only one pmu per cpuctx. Since perf_pmu_sched_task is only called in
> hw contexts, this patch replaces cpuctx->unique_pmu by cpuctx->pmu in it.
>
> The change above, together with the previous patch in this series, removed
> the remaining uses of cpuctx->unique_pmu, so we remove it altogether.
>
> Signed-off-by: David Carrillo-Cisneros
> Acked-by: Mark Rutland
>
> @@ -8572,37 +8572,10 @@ static struct perf_cpu_context __percpu *find_pmu_context(int ctxn)
>  	return NULL;
>  }
>
> -static void update_pmu_context(struct pmu *pmu, struct pmu *old_pmu)
> -{
> -	int cpu;
> -
> -	for_each_possible_cpu(cpu) {
> -		struct perf_cpu_context *cpuctx;
> -
> -		cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
> -
> -		if (cpuctx->unique_pmu == old_pmu)
> -			cpuctx->unique_pmu = pmu;
> -	}
> -}
> -
>  static void free_pmu_context(struct pmu *pmu)
>  {
> -	struct pmu *i;
> -
>  	mutex_lock(&pmus_lock);
> -	/*
> -	 * Like a real lame refcount.
> -	 */
> -	list_for_each_entry(i, &pmus, entry) {
> -		if (i->pmu_cpu_context == pmu->pmu_cpu_context) {
> -			update_pmu_context(i, pmu);
> -			goto out;
> -		}
> -	}
> -
>  	free_percpu(pmu->pmu_cpu_context);
> -out:
>  	mutex_unlock(&pmus_lock);
>  }

This very much relies on us never calling perf_pmu_unregister() on the
software PMUs afaict. A condition not mentioned in the Changelog.
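
For illustration only, a minimal user-space sketch (not kernel code; the
toy_* names are made up) of the hazard: software PMUs share one
pmu_cpu_context handed out by find_pmu_context(), so the now-unconditional
free in free_pmu_context() is only safe if none of the sharing PMUs is
ever unregistered.

	/*
	 * Toy model of the shared-context hazard described above.
	 * All identifiers are invented for this sketch.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	struct toy_cpu_context {
		int dummy;		/* stand-in for the real per-cpu state */
	};

	struct toy_pmu {
		const char *name;
		struct toy_cpu_context *cpu_context;	/* possibly shared */
	};

	/* Mimics the simplified free_pmu_context(): free unconditionally. */
	static void toy_free_pmu_context(struct toy_pmu *pmu)
	{
		free(pmu->cpu_context);
	}

	int main(void)
	{
		/* Two "software PMUs" sharing the same context. */
		struct toy_cpu_context *shared = calloc(1, sizeof(*shared));
		struct toy_pmu sw_a = { "software-A", shared };
		struct toy_pmu sw_b = { "software-B", shared };

		/*
		 * "Unregistering" sw_a frees the context sw_b still points
		 * at; without the old lame-refcount walk over the pmus list,
		 * nothing prevents this.
		 */
		toy_free_pmu_context(&sw_a);

		/* sw_b.cpu_context now dangles; dereferencing it would be a UAF. */
		printf("%s still points at freed context %p\n",
		       sw_b.name, (void *)sw_b.cpu_context);
		return 0;
	}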