Date: Mon, 28 Aug 2017 21:23:59 +0200
From: Peter Zijlstra
To: Jiri Olsa
Cc: Arnaldo Carvalho de Melo, lkml, Ingo Molnar, Alexander Shishkin,
	Namhyung Kim, David Ahern, Andi Kleen, Mark Rutland
Subject: Re: [PATCH 03/10] perf: Make sure we read only scheduled events
Message-ID: <20170828192359.jfcb55q5remlhfbw@hirez.programming.kicks-ass.net>
References: <20170824162737.7813-1-jolsa@kernel.org> <20170824162737.7813-4-jolsa@kernel.org>
In-Reply-To: <20170824162737.7813-4-jolsa@kernel.org>

On Thu, Aug 24, 2017 at 06:27:30PM +0200, Jiri Olsa wrote:
> Add a check of the leader's state into perf_output_read_group
> to ensure we read the leader only when it is scheduled in.
>
> A similar check is already there for siblings.
>
> Signed-off-by: Jiri Olsa
> ---
>  kernel/events/core.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 30e30e94ea32..9a2791afe051 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5760,6 +5760,11 @@ void perf_event__output_id_sample(struct perf_event *event,
>  	__perf_event__output_id_sample(handle, sample);
>  }
>  
> +static bool can_read(struct perf_event *event)
> +{
> +	return event->state == PERF_EVENT_STATE_ACTIVE;
> +}
> +
>  static void perf_output_read_one(struct perf_output_handle *handle,
>  				 struct perf_event *event,
>  				 u64 enabled, u64 running)
> @@ -5800,7 +5805,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
>  	if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
>  		values[n++] = running;
>  
> -	if (leader != event)
> +	if ((leader != event) && can_read(leader))
>  		leader->pmu->read(leader);
>  
>  	values[n++] = perf_event_count(leader);
> @@ -5812,8 +5817,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
>  	list_for_each_entry(sub, &leader->sibling_list, group_entry) {
>  		n = 0;
>  
> -		if ((sub != event) &&
> -		    (sub->state == PERF_EVENT_STATE_ACTIVE))
> +		if ((sub != event) && can_read(sub))
>  			sub->pmu->read(sub);
>  
>  		values[n++] = perf_event_count(sub);

I'm not seeing how this makes sense. Groups should either _all_ be
scheduled or not at all. Please explain.