From: Andi Kleen
To: mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, a.p.zijlstra@chello.nl,
    akpm@linux-foundation.org, acme@redhat.com, eranian@google.com,
    jolsa@redhat.com, namhyung@kernel.org, Andi Kleen
Subject: [PATCH 4/5] perf, x86: Support full width counting
Date: Mon, 4 Feb 2013 17:49:13 -0800
Message-Id: <1360028954-16946-5-git-send-email-andi@firstfloor.org>
In-Reply-To: <1360028954-16946-1-git-send-email-andi@firstfloor.org>
References: <1360028954-16946-1-git-send-email-andi@firstfloor.org>

From: Andi Kleen

Recent Intel CPUs have a new alternative MSR range for perfctrs that
allows writing the full counter width. Enable this range if the
hardware reports it using a new capability bit.

This slightly lowers the overhead of perf stat, because it has to take
fewer interrupts to accumulate the counter value. On Haswell it also
avoids some problems with TSX aborting when the end of the counter
range is reached.

Signed-off-by: Andi Kleen
---
 arch/x86/include/uapi/asm/msr-index.h  |    3 +++
 arch/x86/kernel/cpu/perf_event.h       |    1 +
 arch/x86/kernel/cpu/perf_event_intel.c |    6 ++++++
 3 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h
index 433a59f..af41a77 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -163,6 +163,9 @@
 #define MSR_KNC_EVNTSEL0               0x00000028
 #define MSR_KNC_EVNTSEL1               0x00000029
 
+/* Alternative perfctr range with full access. */
+#define MSR_IA32_PMC0                  0x000004c1
+
 /* AMD64 MSRs. Not complete. See the architecture manual
  * for a more complete list. */
 
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 1567b0d..ce2a863 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -278,6 +278,7 @@ union perf_capabilities {
 		u64	pebs_arch_reg:1;
 		u64	pebs_format:4;
 		u64	smm_freeze:1;
+		u64	fw_write:1;
 	};
 	u64	capabilities;
 };
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index aa48048..d96010a 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2228,5 +2228,11 @@ __init int intel_pmu_init(void)
 		}
 	}
 
+	/* Support full width counters using alternative MSR range */
+	if (x86_pmu.intel_cap.fw_write) {
+		x86_pmu.max_period = x86_pmu.cntval_mask;
+		x86_pmu.perfctr = MSR_IA32_PMC0;
+	}
+
	return 0;
 }
-- 
1.7.7.6
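
Background on the max_period change: writes through the legacy perfctr
MSRs (0xc1 onwards) are sign-extended from bit 31, which is why the
kernel normally caps max_period below 2^31; the alternative 0x4c1 range
accepts writes of the full counter width, so the patch can raise
max_period to the full counter mask (cntval_mask).

Below is a minimal userspace sketch (not part of the patch) that probes
the same capability bit the patch keys off. It assumes the msr driver is
loaded (modprobe msr) and root access; the bit position 13 follows from
the layout of union perf_capabilities above and the SDM description of
IA32_PERF_CAPABILITIES (MSR 0x345).

/* check_fw_write.c: probe the full-width counter write capability.
 * Sketch only; assumes /dev/cpu/0/msr exists (modprobe msr) and that
 * the CPU exposes IA32_PERF_CAPABILITIES. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_PERF_CAPABILITIES	0x345
#define PERF_CAP_FW_WRITE_BIT		13	/* fw_write, per the union layout above */

int main(void)
{
	uint64_t caps;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}
	/* The msr device maps the MSR number to the file offset. */
	if (pread(fd, &caps, sizeof(caps), MSR_IA32_PERF_CAPABILITIES)
	    != sizeof(caps)) {
		perror("pread");
		close(fd);
		return 1;
	}
	close(fd);
	printf("full-width counter writes: %s\n",
	       (caps >> PERF_CAP_FW_WRITE_BIT) & 1 ? "supported" : "not supported");
	return 0;
}

Build and run as root: cc -o check_fw_write check_fw_write.c && ./check_fw_write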