Date: Tue, 23 May 2017 08:39:13 +0200
From: Peter Zijlstra
To: Stephane Eranian
Cc: "Liang, Kan", "mingo@redhat.com", "linux-kernel@vger.kernel.org",
	"alexander.shishkin@linux.intel.com", "acme@redhat.com",
	"jolsa@redhat.com", "torvalds@linux-foundation.org",
	"tglx@linutronix.de", "vincent.weaver@maine.edu", "ak@linux.intel.com"
Subject: Re: [PATCH 1/2] perf/x86/intel: enable CPU ref_cycles for GP counter
Message-ID: <20170523063913.363ssgcy7kmeesye@hirez.programming.kicks-ass.net>
References: <1495213582-3635-1-git-send-email-kan.liang@intel.com>
	<20170522091916.3gydvflk4fnqkzw5@hirez.programming.kicks-ass.net>
	<37D7C6CF3E00A74B8858931C1DB2F077536F079F@SHSMSX103.ccr.corp.intel.com>
	<20170522192335.v4gvhz24ix2jeihg@hirez.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, May 22, 2017 at 12:28:26PM -0700, Stephane Eranian wrote:
> On Mon, May 22, 2017 at 12:23 PM, Peter Zijlstra wrote:
> > On Mon, May 22, 2017 at 04:55:47PM +0000, Liang, Kan wrote:
> >>
> >> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.liang@intel.com wrote:
> >> > > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> >> > > index 580b60f..e8b2326 100644
> >> > > --- a/arch/x86/events/core.c
> >> > > +++ b/arch/x86/events/core.c
> >> > > @@ -101,6 +101,10 @@ u64 x86_perf_event_update(struct perf_event *event)
> >> > >  	delta = (new_raw_count << shift) -
> >> > >  		(prev_raw_count << shift);
> >> > >  	delta >>= shift;
> >> > >
> >> > > +	/* Correct the count number if applying ref_cycles replacement */
> >> > > +	if (!is_sampling_event(event) &&
> >> > > +	    (hwc->flags & PERF_X86_EVENT_REF_CYCLES_REP))
> >> > > +		delta *= x86_pmu.ref_cycles_factor;
> >> >
> >> > That condition seems wrong; why only correct for !sampling events?
> >>
> >> For sampling, it's either fixed-frequency mode or fixed-period mode.
> >> - In fixed-frequency mode, we should do nothing, because the adaptive
> >>   frequency algorithm will handle it.
> >> - In fixed-period mode, we have already adjusted the period in
> >>   ref_cycles_rep().
> >> Therefore, we should only handle !sampling events here.
> >
> > How so? For sampling events the actual event count should also be
> > accurate.
>
> Yes, it must be, because you can reconstruct the total number of
> occurrences of the event by adding up the periods recorded in each
> sample. So the period in each sample must reflect the user event and
> not the kernel event.

Well, that, but you can equally use read() or the mmap()'ed rdpmc stuff
on a sampling event. The fact that it also generates samples does not
mean it should not also function as a non-sampling event.