From: Jiri Olsa <jolsa@kernel.org>
To: Arnaldo Carvalho de Melo, Peter Zijlstra
Cc: lkml, Ingo Molnar, Alexander Shishkin, Namhyung Kim, David Ahern, Andi Kleen, Mark Rutland
Subject: [PATCH 03/10] perf: Make sure we read only scheduled events
Date: Thu, 24 Aug 2017 18:27:30 +0200
Message-Id: <20170824162737.7813-4-jolsa@kernel.org>
In-Reply-To: <20170824162737.7813-1-jolsa@kernel.org>
References: <20170824162737.7813-1-jolsa@kernel.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Adding a state check for the leader into perf_output_read_group() to
ensure we read the leader only when it is scheduled in. A similar check
is already in place for the siblings.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/events/core.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 30e30e94ea32..9a2791afe051 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5760,6 +5760,11 @@ void perf_event__output_id_sample(struct perf_event *event,
 	__perf_event__output_id_sample(handle, sample);
 }
 
+static bool can_read(struct perf_event *event)
+{
+	return event->state == PERF_EVENT_STATE_ACTIVE;
+}
+
 static void perf_output_read_one(struct perf_output_handle *handle,
 				 struct perf_event *event,
 				 u64 enabled, u64 running)
@@ -5800,7 +5805,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
 	if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
 		values[n++] = running;
 
-	if (leader != event)
+	if ((leader != event) && can_read(leader))
 		leader->pmu->read(leader);
 
 	values[n++] = perf_event_count(leader);
@@ -5812,8 +5817,7 @@ static void perf_output_read_group(struct perf_output_handle *handle,
 	list_for_each_entry(sub, &leader->sibling_list, group_entry) {
 		n = 0;
 
-		if ((sub != event) &&
-		    (sub->state == PERF_EVENT_STATE_ACTIVE))
+		if ((sub != event) && can_read(sub))
 			sub->pmu->read(sub);
 
 		values[n++] = perf_event_count(sub);
-- 
2.9.5