From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757185AbcLBVWy (ORCPT );
	Fri, 2 Dec 2016 16:22:54 -0500
Received: from mga05.intel.com ([192.55.52.43]:20536 "EHLO mga05.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750900AbcLBVUl (ORCPT );
	Fri, 2 Dec 2016 16:20:41 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.33,288,1477983600"; d="scan'208";a="1093847488"
From: kan.liang@intel.com
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
	linux-kernel@vger.kernel.org
Cc: alexander.shishkin@linux.intel.com, tglx@linutronix.de,
	namhyung@kernel.org, jolsa@kernel.org, adrian.hunter@intel.com,
	wangnan0@huawei.com, mark.rutland@arm.com, andi@firstfloor.org,
	Kan Liang
Subject: [PATCH V2 02/13] perf/core: output overhead when sched out from context
Date: Fri, 2 Dec 2016 16:19:10 -0500
Message-Id: <1480713561-6617-3-git-send-email-kan.liang@intel.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1480713561-6617-1-git-send-email-kan.liang@intel.com>
References: <1480713561-6617-1-git-send-email-kan.liang@intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kan Liang

Outputting the overhead every time it occurs is very costly; the
accumulated time is more meaningful. The overhead information should
therefore be output only at the very end: when the task is scheduled
out of the context, or when the event is about to be disabled. The
arch-specific overhead is output from the event's pmu->del() callback
when the PERF_EF_LOG flag is set.
Signed-off-by: Kan Liang
---
 include/linux/perf_event.h | 2 ++
 kernel/events/core.c       | 9 ++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 5bc8156..ebd356e 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -313,6 +313,7 @@ struct pmu {
 #define PERF_EF_START	0x01	/* start the counter when adding */
 #define PERF_EF_RELOAD	0x02	/* reload the counter when starting */
 #define PERF_EF_UPDATE	0x04	/* update the counter when stopping */
+#define PERF_EF_LOG	0x08	/* log overhead information */
 
 	/*
 	 * Adds/Removes a counter to/from the PMU, can be done inside a
@@ -741,6 +742,7 @@ struct perf_event_context {
 	int				nr_stat;
 	int				nr_freq;
 	int				rotate_disable;
+	int				log_overhead;
 	atomic_t			refcount;
 	struct task_struct		*task;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5312744..306bc92 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1787,6 +1787,7 @@ event_sched_out(struct perf_event *event,
 		  struct perf_cpu_context *cpuctx,
 		  struct perf_event_context *ctx)
 {
+	bool log_overhead = needs_log_overhead(event) && ctx->log_overhead;
 	u64 tstamp = perf_event_time(event);
 	u64 delta;
 
@@ -1812,7 +1813,7 @@ event_sched_out(struct perf_event *event,
 	perf_pmu_disable(event->pmu);
 
 	event->tstamp_stopped = tstamp;
-	event->pmu->del(event, 0);
+	event->pmu->del(event, log_overhead ? PERF_EF_LOG : 0);
 	event->oncpu = -1;
 	event->state = PERF_EVENT_STATE_INACTIVE;
 	if (event->pending_disable) {
@@ -1914,6 +1915,9 @@ static void __perf_event_disable(struct perf_event *event,
 	if (event->state < PERF_EVENT_STATE_INACTIVE)
 		return;
 
+	/* log overhead when disabling the event */
+	ctx->log_overhead = true;
+
 	update_context_time(ctx);
 	update_cgrp_time_from_event(event);
 	update_group_times(event);
@@ -10177,6 +10181,9 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	if (!child_ctx)
 		return;
 
+	/* log overhead when exiting the task context */
+	child_ctx->log_overhead = true;
+
 	/*
 	 * In order to reduce the amount of tricky in ctx tear-down, we hold
 	 * ctx::mutex over the entire thing. This serializes against almost
-- 
2.5.5