From: Jiri Olsa
To: Peter Zijlstra
Cc: lkml, Ingo Molnar, Andi Kleen, Alexander Shishkin,
 Arnaldo Carvalho de Melo, Vince Weaver
Subject: [PATCH 2/4] perf/x86: Fix period for non sampling events
Date: Wed, 28 Dec 2016 14:31:04 +0100
Message-Id: <1482931866-6018-3-git-send-email-jolsa@kernel.org>
In-Reply-To: <1482931866-6018-1-git-send-email-jolsa@kernel.org>
References: <1482931866-6018-1-git-send-email-jolsa@kernel.org>

When in counting mode we set up the counter with the longest
possible period and read its value with the read syscall. We
still set up the PMI to be triggered when such a counter
overflows, so it can be reconfigured. We also get a PEBS
interrupt if such a counter has precise_ip set (which makes
no sense, but it's possible).

Having such a counter with:
  - counting mode
  - precise_ip set

I watched my server get stuck serving the PEBS interrupt over
and over, because of the following (AFAICS):

  - the PEBS interrupt is triggered before the PMI
  - when the PEBS handling path reconfigured the counter, it had
    a remaining value of -256
  - x86_perf_event_set_period does not consider this an extreme
    value, so it is written back as the new counter value
  - this makes the PEBS interrupt trigger right away again
  - and because it's a non-sampling event, this irq storm is
    never throttled

Forcing non-sampling events to be reconfigured from scratch is
probably not the best solution, but it seems to work.

Signed-off-by: Jiri Olsa
---
 arch/x86/events/core.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index f1c22584a46f..657486be9780 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1116,6 +1116,13 @@ int x86_perf_event_set_period(struct perf_event *event)
 		return 0;
 
 	/*
+	 * For non sampling event, we are not interested
+	 * in leftover, force the count from beginning.
+	 */
+	if (left && !is_sampling_event(event))
+		left = 0;
+
+	/*
 	 * If we are way outside a reasonable range then just skip forward:
 	 */
 	if (unlikely(left <= -period)) {
-- 
2.7.4
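
For reference, a minimal userspace sketch of the event configuration
described above: a counting (non-sampling) event that nevertheless has
precise_ip set. This is illustrative only and not taken from the
original report; the event choice (CPU cycles) and precise_ip level
are assumptions, and triggering the described storm also requires a
PEBS-capable event on the machine.

/*
 * Sketch: open a counting-mode hardware event with precise_ip set.
 * No sample_period/sample_freq is configured, so this is a pure
 * counting event read via read(2), yet precise_ip requests PEBS.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;	/* illustrative event */
	attr.precise_ip = 2;	/* requests PEBS although we never sample */
	attr.disabled = 1;

	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	sleep(1);		/* let the counter run */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("cycles: %llu\n", (unsigned long long)count);

	close(fd);
	return 0;
}

With the patch applied, reprogramming such a counter discards any
leftover period, so a stray PEBS interrupt can no longer re-arm it
with a tiny remaining count and storm the CPU.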