From: Kyle Huey
Date: Wed, 28 Jun 2017 10:36:20 -0700
Subject: Re: [REGRESSION] perf/core: PMU interrupts dropped if we entered the kernel in the "skid" region
To: Mark Rutland
Cc: "Jin, Yao", Ingo Molnar, "Peter Zijlstra (Intel)", stable@vger.kernel.org,
    Alexander Shishkin, Arnaldo Carvalho de Melo, Jiri Olsa, Linus Torvalds,
    Namhyung Kim, Stephane Eranian, Thomas Gleixner, Vince Weaver,
    acme@kernel.org, jolsa@kernel.org, kan.liang@intel.com, Will Deacon,
    gregkh@linuxfoundation.org, "Robert O'Callahan", open list

On Wed, Jun 28, 2017 at 10:19 AM, Mark Rutland wrote:
> On Wed, Jun 28, 2017 at 09:46:43AM -0700, Kyle Huey wrote:
>> On Wed, Jun 28, 2017 at 3:12 AM, Mark Rutland wrote:
>> > On Tue, Jun 27, 2017 at 09:51:00PM -0700, Kyle Huey wrote:
>> >> My understanding of the situation is as follows:
>> >>
>> >> There is some time, call it t_0, where the hardware counter
>> >> overflows.  The PMU triggers an interrupt, but this is not
>> >> instantaneous.  Call the time when the interrupt is actually
>> >> delivered t_1.  Then t_1 - t_0 is the "skid".
>> >>
>> >> Note that if the counter is `exclude_kernel`, then at t_0 the CPU
>> >> *must* be running a userspace program.  But by t_1, the CPU may be
>> >> doing something else.  Your patch changed things so that if at t_1
>> >> the CPU is in the kernel, then the interrupt is discarded.  But rr
>> >> has programmed the counter to deliver a signal on overflow (via
>> >> F_SETSIG on the fd returned by perf_event_open).  This change
>> >> results in the signal never being delivered, because the interrupt
>> >> was ignored.  (More accurately, the signal is delivered the *next*
>> >> time the counter overflows, which is far past where we wanted to
>> >> inject our asynchronous event into our tracee.)
>> >
>> > Yes, this is a bug.
>> >
>> > As we're trying to avoid sampling state, I think we can move the
>> > check into perf_prepare_sample() or __perf_event_output(), where
>> > that state is actually sampled.  I'll take a look at that
>> > momentarily.
>> >
>> > Just to clarify, you don't care about the sample state at all? i.e.
>> > you don't need the user program counter?
>>
>> Right.  `sample_regs_user`, `sample_stack_user`, `branch_sample_type`,
>> etc. are all 0.
>> https://github.com/mozilla/rr/blob/cf594dd01f07d96a61409e9f41a29f78c8c51693/src/PerfCounters.cc#L194
>> is what we do use.
>
> Given that, I must be missing something.
>
> In __perf_event_overflow(), we already bail out early if
> !is_sampling_event(event), i.e. when the sample_period is 0.
>
> Your attr has a sample_period of zero, so something must be
> initialising that.
>
> Do you always call PERF_EVENT_IOC_PERIOD, or is something in the core
> fiddling with the sample period behind your back?
We always either set sample_period or call PERF_EVENT_IOC_PERIOD (with
an enormous number if we don't actually want an interrupt).  See
`PerfCounters::reset`, line 446.

> It seems odd that an event without any samples to take has a sample
> period.  I'm surprised that there's not *some* sample_type set.

Perhaps sample_period is misleadingly named :)  Alternatively, you
could think of it as sampling where we're only interested in whether
the counter passed the sampling value or not.

>> > Is that signal delivered to the tracee, or to a different process
>> > that traces it?  If the latter, what ensures that the task is
>> > stopped sufficiently quickly?
>>
>> It's delivered to the tracee (via an F_SETOWN_EX with the tracee
>> tid).  In practice we've found that on modern Intel hardware the
>> interrupt and resulting signal delivery delay is bounded by a
>> relatively small number of counter events.
>
> Ok.
>
> Thanks,
> Mark.

- Kyle
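
P.S. In case it helps to see the whole mechanism in one place, here is
a rough, self-contained sketch of the setup being discussed.  It is not
rr's actual code (that lives in PerfCounters.cc at the link earlier in
the thread); the event choice, signal number, and period below are
purely illustrative.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/perf_event.h>

/* Open a counter on `tracee_tid` that counts user-space events only and
 * raises `sig` on that thread each time `period` events have elapsed. */
static int open_interrupt_counter(pid_t tracee_tid, uint64_t period, int sig)
{
    struct perf_event_attr attr;
    struct f_owner_ex own;
    int fd;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;  /* illustrative */
    attr.sample_period = period;   /* no sample_type set: we only care
                                      that the counter crossed the period */
    attr.exclude_kernel = 1;       /* count user space only */
    attr.exclude_guest = 1;
    attr.disabled = 1;

    fd = syscall(SYS_perf_event_open, &attr, tracee_tid, -1, -1, 0);
    if (fd < 0)
        return -1;

    /* Route counter overflow to a signal on the tracee thread. */
    fcntl(fd, F_SETFL, O_ASYNC);
    fcntl(fd, F_SETSIG, sig);
    own.type = F_OWNER_TID;
    own.pid = tracee_tid;
    fcntl(fd, F_SETOWN_EX, &own);

    /* The period can later be changed without reopening the fd, e.g.
     * bumped to an enormous value when no interrupt is wanted:
     *
     *     ioctl(fd, PERF_EVENT_IOC_PERIOD, &period);
     */

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    return fd;
}

With exclude_kernel set, the overflow itself can only happen while the
tracee is in user space; whether the interrupt then lands while the CPU
is already in the kernel is exactly the skid question above.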