From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, alexander.shishkin@linux.intel.com, eranian@google.com
Cc: linux-kernel@vger.kernel.org, vince@deater.net, dvyukov@google.com,
	andi@firstfloor.org, jolsa@redhat.com, peterz@infradead.org
Subject: [RFC][PATCH 08/12] perf: Optimize perf_sched_events usage
Date: Mon, 11 Jan 2016 17:25:06 +0100
Message-Id: <20160111163229.175028123@infradead.org>
References: <20160111162458.427203780@infradead.org>

It doesn't make sense to take up to _4_ references on perf_sched_events
per event; avoid this by folding the individual conditions into a single
bool and taking at most one reference per event.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/events/core.c |   22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3493,11 +3493,13 @@ static void unaccount_event_cpu(struct p
 
 static void unaccount_event(struct perf_event *event)
 {
+	bool dec = false;
+
 	if (event->parent)
 		return;
 
 	if (event->attach_state & PERF_ATTACH_TASK)
-		static_key_slow_dec_deferred(&perf_sched_events);
+		dec = true;
 	if (event->attr.mmap || event->attr.mmap_data)
 		atomic_dec(&nr_mmap_events);
 	if (event->attr.comm)
@@ -3507,12 +3509,15 @@ static void unaccount_event(struct perf_
 	if (event->attr.freq)
 		atomic_dec(&nr_freq_events);
 	if (event->attr.context_switch) {
-		static_key_slow_dec_deferred(&perf_sched_events);
+		dec = true;
 		atomic_dec(&nr_switch_events);
 	}
 	if (is_cgroup_event(event))
-		static_key_slow_dec_deferred(&perf_sched_events);
+		dec = true;
 	if (has_branch_stack(event))
+		dec = true;
+
+	if (dec)
 		static_key_slow_dec_deferred(&perf_sched_events);
 
 	unaccount_event_cpu(event, event->cpu);
@@ -7728,11 +7733,13 @@ static void account_event_cpu(struct per
 
 static void account_event(struct perf_event *event)
 {
+	bool inc = false;
+
 	if (event->parent)
 		return;
 
 	if (event->attach_state & PERF_ATTACH_TASK)
-		static_key_slow_inc(&perf_sched_events.key);
+		inc = true;
 	if (event->attr.mmap || event->attr.mmap_data)
 		atomic_inc(&nr_mmap_events);
 	if (event->attr.comm)
@@ -7745,11 +7752,14 @@ static void account_event(struct perf_ev
 	}
 	if (event->attr.context_switch) {
 		atomic_inc(&nr_switch_events);
-		static_key_slow_inc(&perf_sched_events.key);
+		inc = true;
 	}
 	if (has_branch_stack(event))
-		static_key_slow_inc(&perf_sched_events.key);
+		inc = true;
 	if (is_cgroup_event(event))
+		inc = true;
+
+	if (inc)
 		static_key_slow_inc(&perf_sched_events.key);
 
 	account_event_cpu(event, event->cpu);
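
A minimal userspace sketch of the pattern above, for readers outside the
kernel tree: several conditions that each used to take a reference on the
expensive static key now merely set a flag, and the reference is taken at
most once per event. All names below (expensive_ref_get(), struct ev) are
hypothetical stand-ins for static_key_slow_inc() /
static_key_slow_dec_deferred() and struct perf_event; the real operation
is expensive because enabling a static key patches kernel code at runtime.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcount;		/* stand-in for perf_sched_events */

static void expensive_ref_get(void)	/* stand-in for static_key_slow_inc() */
{
	atomic_fetch_add(&refcount, 1);
}

struct ev {				/* trimmed stand-in for struct perf_event */
	bool attach_task;
	bool context_switch;
	bool branch_stack;
	bool cgroup;
};

/* Before the patch: up to four references taken per event. */
static void account_naive(const struct ev *e)
{
	if (e->attach_task)
		expensive_ref_get();
	if (e->context_switch)
		expensive_ref_get();
	if (e->branch_stack)
		expensive_ref_get();
	if (e->cgroup)
		expensive_ref_get();
}

/* After the patch: each condition only sets a flag; one reference at most. */
static void account_coalesced(const struct ev *e)
{
	bool inc = false;

	if (e->attach_task)
		inc = true;
	if (e->context_switch)
		inc = true;
	if (e->branch_stack)
		inc = true;
	if (e->cgroup)
		inc = true;

	if (inc)
		expensive_ref_get();
}

int main(void)
{
	struct ev e = { .attach_task = true, .cgroup = true };

	account_naive(&e);
	printf("naive:     refcount = %d\n", atomic_load(&refcount));	/* 2 */

	atomic_store(&refcount, 0);
	account_coalesced(&e);
	printf("coalesced: refcount = %d\n", atomic_load(&refcount));	/* 1 */
	return 0;
}

Built with, say, "gcc -std=c11 -o sketch sketch.c", an event matching two
conditions leaves the naive path holding two references but the coalesced
path holding exactly one, which is the point of the patch.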