From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752891AbcFBCnP (ORCPT );
	Wed, 1 Jun 2016 22:43:15 -0400
Received: from mail-pa0-f45.google.com ([209.85.220.45]:34455 "EHLO
	mail-pa0-f45.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752777AbcFBCmp (ORCPT );
	Wed, 1 Jun 2016 22:42:45 -0400
From: David Carrillo-Cisneros <davidcc@google.com>
To: linux-kernel@vger.kernel.org
Cc: "x86@kernel.org" <x86@kernel.org>, Ingo Molnar, "Yan, Zheng",
	Andi Kleen, Kan Liang, Peter Zijlstra, David Carrillo-Cisneros,
	Stephane Eranian
Subject: [PATCH 2/3] perf/x86/intel: fix for MSR_LAST_BRANCH_FROM_x quirk when no TSX
Date: Wed, 1 Jun 2016 19:42:02 -0700
Message-Id: <1464835323-33872-3-git-send-email-davidcc@google.com>
X-Mailer: git-send-email 2.8.0.rc3.226.g39d4020
In-Reply-To: <1464835323-33872-1-git-send-email-davidcc@google.com>
References: <1464835323-33872-1-git-send-email-davidcc@google.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Intel's SDM states that bits 61:62 in MSR_LAST_BRANCH_FROM_x are the
TSX flags for formats with LBR_TSX flags (i.e. LBR_FORMAT_EIP_FLAGS2).
However, when the CPU has TSX support deactivated, bits 61:62 actually
behave as follows:

  - For wrmsr, bits 61:62 are considered part of the sign-extension.
  - LBR hw updates to the MSR (not through wrmsr) will clear bits 61:62,
    regardless of the sign of the bit at position 47, i.e. bits 61:62
    are not part of the sign-extension.

Therefore, if the following conditions are all true:

  1) LBR has TSX format.
  2) CPU has no TSX support enabled.
  3) data in MSR (bits 0:48) is negative.

then any value passed to wrmsr must be sign-extended to 63 bits and any
value from rdmsr must be converted to 61 bits, ignoring the TSX flags.

We also need the quirk at context switch, when saving/restoring the
value of MSR_LAST_BRANCH_FROM_x.

This bug was masked by the work-around to the bug:

  "LBR May Contain Incorrect Information When Using FREEZE_LBRS_ON_PMI"

The work-around works by forbidding the capture of kernel memory
addresses: all kernel branches are filtered out in hw, therefore all
branches in MSR_LAST_BRANCH_FROM_x are user addresses. User addresses
have bits 61:62 clear and do not trigger the wrmsr bug in
MSR_LAST_BRANCH_FROM_x when saved/restored at context switch.

To verify the hw bug:

  $ perf record -b -e cycles sleep 1
  $ iotools/rdmsr 0 0x680
  0x1fffffffb0b9b0cc
  $ iotools/wrmsr 0 0x680 0x1fffffffb0b9b0cc
  write(): Input/output error

To test the work-around, use a perf tool and kernel with the next patch
in this series applied. That patch removes the work-around that masked
the hw bug:

  $ ./lbr_perf record --call-graph lbr -e cycles:k ./cqm_easy

where lbr_perf is the patched perf tool that allows specifying :k on
lbr mode.
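For reference, here is a user-space sketch of the two conversions
described above (illustrative only, not part of the patch; quirk_wr and
quirk_rd mirror the lbr_from_signext_quirk_wr/_rd helpers added below,
fed with the value from the rdmsr example):

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  #define LBR_FROM_FLAG_IN_TX	(1ULL << 62)
  #define LBR_FROM_FLAG_ABORT	(1ULL << 61)
  #define LBR_FROM_SIGNEXT_MSB	(1ULL << 60)

  /* wrmsr direction: extend the sign through bits 61:62 (63 bits total). */
  static uint64_t quirk_wr(uint64_t val)
  {
  	if (val & LBR_FROM_SIGNEXT_MSB)
  		val |= LBR_FROM_FLAG_IN_TX | LBR_FROM_FLAG_ABORT;
  	return val;
  }

  /* rdmsr direction: strip bits 61:62, matching what LBR hw writes. */
  static uint64_t quirk_rd(uint64_t val)
  {
  	if (val & LBR_FROM_SIGNEXT_MSB)
  		val &= ~(LBR_FROM_FLAG_IN_TX | LBR_FROM_FLAG_ABORT);
  	return val;
  }

  int main(void)
  {
  	uint64_t v = 0x1fffffffb0b9b0ccULL;	/* value rdmsr returned above */

  	printf("as read by rdmsr: 0x%016" PRIx64 "\n", v);
  	printf("passed to wrmsr : 0x%016" PRIx64 "\n", quirk_wr(v));
  	printf("after round trip: 0x%016" PRIx64 "\n", quirk_rd(quirk_wr(v)));
  	return 0;
  }

With the quirk, the value written becomes 0x7fffffffb0b9b0cc, which the
MSR accepts; applying quirk_rd to the read-back value restores the
61-bit-extended form that LBR hw itself produces.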
The lbr_perf command above will trigger a #GP fault:

[ 411.191445] ------------[ cut here ]------------
[ 411.196015] WARNING: CPU: 28 PID: 14096 at arch/x86/mm/extable.c:65 ex_handler_wrmsr_unsafe+0x70/0x80
[ 411.205123] unchecked MSR access error: WRMSR to 0x681 (tried to write 0x1fffffff81010794)
[ 411.213286] Modules linked in: bonding w1_therm wire mpt3sas scsi_transport_sas raid_class cdc_acm ehci_pci ehci_hcd mlx4_en ib_uverbs mlx4_ib ib_core mlx4_core
[ 411.227611] CPU: 28 PID: 14096 Comm: cqm_easy Not tainted 4.7.0-smp-unfixed_lbr_from_bug #675
[ 411.236033] Hardware name: Intel Grantley,Wellsburg/Ixion_QC_15, BIOS 2.50.0 01/21/2016
[ 411.243940]  0000000000000000 ffff883ff0453b48 ffffffff8167af49 ffff883ff0453ba8
[ 411.251279]  0000000000000000 ffff883ff0453b98 ffffffff810b9b15 ffff883ff0453b78
[ 411.258619]  0000004100000001 ffff883ffee9c8f8 ffffffff8168c21c ffff883ff0453c78
[ 411.265962] Call Trace:
[ 411.268384]  [] dump_stack+0x4d/0x63
[ 411.273462]  [] __warn+0xe5/0x100
[ 411.278278]  [] warn_slowpath_fmt+0x49/0x50
[ 411.283955]  [] ex_handler_wrmsr_unsafe+0x70/0x80
[ 411.290144]  [] fixup_exception+0x42/0x50
[ 411.295658]  [] do_general_protection+0x8a/0x160
[ 411.301764]  [] general_protection+0x22/0x30
[ 411.307527]  [] ? intel_pmu_lbr_sched_task+0xc9/0x380
[ 411.314063]  [] intel_pmu_sched_task+0x3c/0x60
[ 411.319996]  [] x86_pmu_sched_task+0x1b/0x20
[ 411.325762]  [] perf_pmu_sched_task+0x6b/0xb0
[ 411.331610]  [] __perf_event_task_sched_in+0x7d/0x150
[ 411.338145]  [] finish_task_switch+0x15c/0x200
[ 411.344078]  [] __schedule+0x274/0x6cc
[ 411.349325]  [] schedule+0x39/0x90
[ 411.354229]  [] exit_to_usermode_loop+0x39/0x89
[ 411.360246]  [] prepare_exit_to_usermode+0x2e/0x30
[ 411.366524]  [] retint_user+0x8/0x10
[ 411.371599] ---[ end trace 1ed61b8a551e95d3 ]---

Reviewed-by: Stephane Eranian
Signed-off-by: David Carrillo-Cisneros <davidcc@google.com>
---
 arch/x86/events/intel/core.c | 18 +++++++++++
 arch/x86/events/intel/lbr.c  | 73 +++++++++++++++++++++++++++++++++++++++++---
 arch/x86/events/perf_event.h |  2 ++
 3 files changed, 89 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index a5e52ad4..1ce172d 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3309,6 +3309,7 @@ static void intel_snb_check_microcode(void)
 static bool check_msr(unsigned long msr, u64 mask)
 {
 	u64 val_old, val_new, val_tmp;
+	u64 (*wr_quirk)(u64) = NULL;
 
 	/*
 	 * Read the current value, change it and read it back to see if it
@@ -3322,13 +3323,30 @@ static bool check_msr(unsigned long msr, u64 mask)
 	 * Only change the bits which can be updated by wrmsrl.
 	 */
 	val_tmp = val_old ^ mask;
+
+	/* Use the wr quirk for LBR MSRs. */
+	if ((x86_pmu.lbr_from <= msr &&
+	     msr < x86_pmu.lbr_from + x86_pmu.lbr_nr) ||
+	    (x86_pmu.lbr_to <= msr &&
+	     msr < x86_pmu.lbr_to + x86_pmu.lbr_nr))
+		wr_quirk = lbr_from_signext_quirk_wr;
+
+	if (wr_quirk)
+		val_tmp = wr_quirk(val_tmp);
+
 	if (wrmsrl_safe(msr, val_tmp) ||
 	    rdmsrl_safe(msr, &val_new))
 		return false;
 
+	/* The quirk only affects validation in wrmsr, so wrmsrl's value
+	 * should equal rdmsrl's even with the quirk.
+	 */
 	if (val_new != val_tmp)
 		return false;
 
+	if (wr_quirk)
+		val_old = wr_quirk(val_old);
+
 	/*
 	 * Here it's sure that the MSR can be safely accessed.
 	 * Restore the old value and return.
 	 */
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 2dca66c..6aa2d8a 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -80,6 +80,7 @@ static enum {
 #define LBR_FROM_FLAG_MISPRED	(1ULL << 63)
 #define LBR_FROM_FLAG_IN_TX	(1ULL << 62)
 #define LBR_FROM_FLAG_ABORT	(1ULL << 61)
+#define LBR_FROM_SIGNEXT_MSB	(1ULL << 60)
 
 /*
  * x86control flow change classification
@@ -235,6 +236,62 @@ enum {
 	LBR_VALID,
 };
 
+/*
+ * For formats with LBR_TSX flags (e.g. LBR_FORMAT_EIP_FLAGS2), bits 61:62 in
+ * MSR_LAST_BRANCH_FROM_x are the TSX flags when TSX is supported.
+ * When TSX support is disabled the behavior differs as follows:
+ * - For wrmsr, bits 61:62 are considered part of the sign-extension.
+ * - HW updates to the MSR (not through wrmsr) will clear bits 61:62,
+ *   regardless of the sign of the bit at position 47, i.e. bits 61:62
+ *   are not part of the sign-extension.
+ *
+ * Therefore, if the conditions:
+ * 1) LBR has TSX format.
+ * 2) CPU has no TSX support enabled.
+ * 3) data in MSR (bits 0:48) is negative.
+ * are all true, then any value passed to wrmsr must be sign-extended to
+ * 63 bits and any value from rdmsr must be converted to 61 bits, ignoring
+ * the TSX flags.
+ */
+
+static inline bool lbr_from_signext_quirk_on(void)
+{
+	int lbr_format = x86_pmu.intel_cap.lbr_format;
+	bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
+			   boot_cpu_has(X86_FEATURE_RTM);
+
+	return !tsx_support && (lbr_desc[lbr_format] & LBR_TSX);
+}
+
+DEFINE_STATIC_KEY_FALSE(lbr_from_quirk_key);
+
+static inline bool lbr_from_signext_quirk_test(u64 val)
+{
+	return static_branch_unlikely(&lbr_from_quirk_key) &&
+	       (val & LBR_FROM_SIGNEXT_MSB);
+}
+
+/*
+ * If quirk is needed, do sign extension to 63 bits.
+ */
+inline u64 lbr_from_signext_quirk_wr(u64 val)
+{
+	if (lbr_from_signext_quirk_test(val))
+		val |= (LBR_FROM_FLAG_IN_TX | LBR_FROM_FLAG_ABORT);
+	return val;
+}
+
+/*
+ * If quirk is needed, ensure sign extension is 61 bits.
+ */
+
+u64 lbr_from_signext_quirk_rd(u64 val)
+{
+	if (lbr_from_signext_quirk_test(val))
+		val &= ~(LBR_FROM_FLAG_IN_TX | LBR_FROM_FLAG_ABORT);
+	return val;
+}
+
 static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
 {
 	int i;
@@ -251,7 +308,8 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
 	tos = task_ctx->tos;
 	for (i = 0; i < tos; i++) {
 		lbr_idx = (tos - i) & mask;
-		wrmsrl(x86_pmu.lbr_from + lbr_idx, task_ctx->lbr_from[i]);
+		wrmsrl(x86_pmu.lbr_from + lbr_idx,
+		       lbr_from_signext_quirk_wr(task_ctx->lbr_from[i]));
 		wrmsrl(x86_pmu.lbr_to + lbr_idx, task_ctx->lbr_to[i]);
 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
 			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
@@ -264,7 +322,7 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
 {
 	int i;
 	unsigned lbr_idx, mask;
-	u64 tos;
+	u64 tos, val;
 
 	if (task_ctx->lbr_callstack_users == 0) {
 		task_ctx->lbr_stack_state = LBR_NONE;
@@ -275,7 +333,8 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
 	tos = intel_pmu_lbr_tos();
 	for (i = 0; i < tos; i++) {
 		lbr_idx = (tos - i) & mask;
-		rdmsrl(x86_pmu.lbr_from + lbr_idx, task_ctx->lbr_from[i]);
+		rdmsrl(x86_pmu.lbr_from + lbr_idx, val);
+		task_ctx->lbr_from[i] = lbr_from_signext_quirk_rd(val);
 		rdmsrl(x86_pmu.lbr_to + lbr_idx, task_ctx->lbr_to[i]);
 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
 			rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
@@ -316,7 +375,7 @@ void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
 	 * level (which could be a useful measurement in system-wide
 	 * mode). In that case, the risk is high of having a branch
 	 * stack with branch from multiple tasks.
-	 */
+	 */
 	if (sched_in) {
 		intel_pmu_lbr_reset();
 		cpuc->lbr_context = ctx;
@@ -453,6 +512,8 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 		int lbr_flags = lbr_desc[lbr_format];
 
 		rdmsrl(x86_pmu.lbr_from + lbr_idx, from);
+		from = lbr_from_signext_quirk_rd(from);
+
 		rdmsrl(x86_pmu.lbr_to + lbr_idx, to);
 
 		if (lbr_format == LBR_FORMAT_INFO && need_info) {
@@ -536,6 +597,7 @@ static int intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
 	u64 br_type = event->attr.branch_sample_type;
 	int mask = 0;
 
+
 	if (br_type & PERF_SAMPLE_BRANCH_USER)
 		mask |= X86_BR_USER;
 
@@ -1007,6 +1069,9 @@ void intel_pmu_lbr_init_hsw(void)
 
 	x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
 	x86_pmu.lbr_sel_map  = hsw_lbr_sel_map;
+
+	if (lbr_from_signext_quirk_on())
+		static_branch_enable(&lbr_from_quirk_key);
 }
 
 /* skylake */
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 8bd764d..c934931 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -892,6 +892,8 @@ void intel_ds_init(void);
 
 void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in);
 
+u64 lbr_from_signext_quirk_wr(u64 val);
+
 void intel_pmu_lbr_reset(void);
 
 void intel_pmu_lbr_enable(struct perf_event *event);
-- 
2.8.0.rc3.226.g39d4020
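
Aside: the quirk test above is gated on a static key so that unaffected
CPUs pay (nearly) nothing on the hot path. A minimal sketch of that
pattern (editor's illustration, not part of the submission; names
prefixed example_ are hypothetical):

  #include <linux/bitops.h>
  #include <linux/jump_label.h>
  #include <linux/types.h>

  /* Defaults to false: the branch below compiles to a NOP until enabled. */
  DEFINE_STATIC_KEY_FALSE(example_quirk_key);

  static u64 example_quirk_wr(u64 val)
  {
  	/* Patched to a jump at runtime only on affected CPUs. */
  	if (static_branch_unlikely(&example_quirk_key) &&
  	    (val & BIT_ULL(60)))
  		val |= BIT_ULL(62) | BIT_ULL(61);
  	return val;
  }

  static void example_init(bool cpu_is_affected)
  {
  	/* One-time detection, as intel_pmu_lbr_init_hsw() does above. */
  	if (cpu_is_affected)
  		static_branch_enable(&example_quirk_key);
  }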