From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932995AbbCDIgw (ORCPT );
	Wed, 4 Mar 2015 03:36:52 -0500
Received: from e9.ny.us.ibm.com ([32.97.182.139]:46009 "EHLO e9.ny.us.ibm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1760451AbbCDIfy (ORCPT );
	Wed, 4 Mar 2015 03:35:54 -0500
From: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
To: Michael Ellerman, Paul Mackerras, peterz@infradead.org
Cc: dev@codyps.com, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 2/4] perf: Split perf_event_read() and perf_event_count()
Date: Wed, 4 Mar 2015 00:35:06 -0800
Message-Id: <1425458108-3341-3-git-send-email-sukadev@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1425458108-3341-1-git-send-email-sukadev@linux.vnet.ibm.com>
References: <1425458108-3341-1-git-send-email-sukadev@linux.vnet.ibm.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 15030408-0033-0000-0000-00000202890D
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

perf_event_read() does two things:

	- call the PMU to read/update the counter value, and
	- compute the total count of the event and its children.

perf_event_reset() needs the first piece but doesn't need the second.
Similarly, when we implement the ability to read a group of events
using the transaction interface, we will sometimes need one but not
both.

Break up perf_event_read() so that it only reads/updates the counter,
and have the callers compute the total count when necessary.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 kernel/events/core.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index dbc12bf..11c4154 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3223,7 +3223,7 @@ static inline u64 perf_event_count(struct perf_event *event)
 	return local64_read(&event->count) + atomic64_read(&event->child_count);
 }
 
-static u64 perf_event_read(struct perf_event *event)
+static void perf_event_read(struct perf_event *event)
 {
 	/*
 	 * If event is enabled and currently active on a CPU, update the
@@ -3249,8 +3249,6 @@ static u64 perf_event_read(struct perf_event *event)
 		update_event_times(event);
 		raw_spin_unlock_irqrestore(&ctx->lock, flags);
 	}
-
-	return perf_event_count(event);
 }
 
 /*
@@ -3654,14 +3652,18 @@ u64 perf_event_read_value(struct perf_event *event, u64 *enabled, u64 *running)
 	*running = 0;
 
 	mutex_lock(&event->child_mutex);
-	total += perf_event_read(event);
+
+	perf_event_read(event);
+	total += perf_event_count(event);
+
 	*enabled += event->total_time_enabled +
 			atomic64_read(&event->child_total_time_enabled);
 	*running += event->total_time_running +
 			atomic64_read(&event->child_total_time_running);
 
 	list_for_each_entry(child, &event->child_list, child_list) {
-		total += perf_event_read(child);
+		perf_event_read(child);
+		total += perf_event_count(child);
 		*enabled += child->total_time_enabled;
 		*running += child->total_time_running;
 	}
@@ -3821,7 +3823,7 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 
 static void _perf_event_reset(struct perf_event *event)
 {
-	(void)perf_event_read(event);
+	perf_event_read(event);
 	local64_set(&event->count, 0);
 	perf_event_update_userpage(event);
 }
-- 
1.8.3.1
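
For readers without the tree at hand, here is a minimal standalone sketch
of the calling convention this patch establishes. It is plain C with toy
stand-ins, not the kernel code itself: the toy_event type and the toy_*
helpers are invented for illustration.

#include <stdio.h>

/* Toy stand-in for struct perf_event (fields invented for illustration). */
struct toy_event {
	long long count;	/* the event's own counter value */
	long long child_count;	/* aggregated counts of its children */
};

/* Post-patch perf_event_read() analogue: only refresh the counter. */
static void toy_event_read(struct toy_event *event)
{
	event->count += 42;	/* stand-in for reading the hardware PMU */
}

/* perf_event_count() analogue: total = own count plus children's. */
static long long toy_event_count(struct toy_event *event)
{
	return event->count + event->child_count;
}

int main(void)
{
	struct toy_event ev = { .count = 0, .child_count = 7 };

	/*
	 * A caller like perf_event_read_value() wants the total,
	 * so it performs both steps explicitly.
	 */
	toy_event_read(&ev);
	printf("total = %lld\n", toy_event_count(&ev));

	/*
	 * A caller like _perf_event_reset() only needs the refresh
	 * before zeroing the counter; no total is computed.
	 */
	toy_event_read(&ev);
	ev.count = 0;

	return 0;
}

The second call site is the motivation for the split: before it, the
reset path computed a total only to discard it, hence the old
"(void)perf_event_read(event)" cast removed by the last hunk.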