From: Peter Zijlstra <peterz@infradead.org>
To: Andi Kleen <ak@linux.intel.com>
Cc: Stephane Eranian <eranian@google.com>,
	Vince Weaver <vincent.weaver@maine.edu>,
	"Liang, Kan" <kan.liang@intel.com>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"alexander.shishkin@linux.intel.com" 
	<alexander.shishkin@linux.intel.com>,
	"acme@redhat.com" <acme@redhat.com>,
	"jolsa@redhat.com" <jolsa@redhat.com>,
	"torvalds@linux-foundation.org" <torvalds@linux-foundation.org>,
	"tglx@linutronix.de" <tglx@linutronix.de>
Subject: Re: [PATCH 1/2] perf/x86/intel: enable CPU ref_cycles for GP counter
Date: Tue, 30 May 2017 18:28:38 +0200
Message-ID: <20170530162838.h5tzdnrxpy6upbka@hirez.programming.kicks-ass.net>
In-Reply-To: <20170530135128.GI24144@tassilo.jf.intel.com>

On Tue, May 30, 2017 at 06:51:28AM -0700, Andi Kleen wrote:
> On Tue, May 30, 2017 at 11:25:23AM +0200, Peter Zijlstra wrote:
> > On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
> > > Ultimately, I would like to see the watchdog move out of the PMU. That
> > > is the only sensible solution. You just need a resource able to
> > > interrupt on NMI or you handle interrupt masking in software as has
> > > been proposed on LKML.
> > 
> > So even if we do the soft masking, we still need to deal with regions
> > where the interrupts are disabled. Once an interrupt hits the soft mask
> > we still hardware mask.
> > 
> > So to get full and reliable coverage we still need an NMI source.
> 
> You would only need a single one per system however, not one per CPU.
> RCU already tracks all the CPUs; all we need is a single NMI watchdog
> that makes sure RCU itself does not get stuck.
> 
> So we just have to find a single watchdog somewhere that can trigger
> NMI.

But then you have to IPI broadcast the NMI, which is less than ideal.

RCU doesn't have that problem because the quiescent state is a global
thing. CPU progress, which is what the NMI watchdog tests, is very much
per logical CPU though.
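
(For reference, the per-CPU progress test is roughly the sketch below; a
simplified version of what kernel/watchdog_hld.c does from the perf NMI,
with illustrative names rather than the real ones.)

/*
 * Simplified sketch of the per-CPU hard lockup test, loosely based on
 * kernel/watchdog_hld.c.  Names are illustrative, not the real ones.
 */
static DEFINE_PER_CPU(unsigned long, timer_irq_count);	/* bumped by the hrtimer */
static DEFINE_PER_CPU(unsigned long, timer_irq_saved);

/* runs in NMI context on the CPU being checked */
static bool cpu_is_hardlocked(void)
{
	unsigned long now = __this_cpu_read(timer_irq_count);

	if (now == __this_cpu_read(timer_irq_saved))
		return true;	/* no timer ticks since the last NMI */

	__this_cpu_write(timer_irq_saved, now);
	return false;
}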

> > I agree that it would be lovely to free up the one counter though.
> 
> One option is to use the TCO watchdog in the chipset instead. 
> Unfortunately it's not a universal solution because some BIOSes lock
> the TCO watchdog for their own use. But if you have a BIOS that
> doesn't do that, it should work.

I suppose you could also route the HPET to the NMI vector and other
similar things. Still, you're then stuck with IPI broadcasts, which
suck.
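
(A sketch of why a single external source ends up as a broadcast: only one
CPU takes the HPET/TCO NMI, so its handler has to fan the NMI out before
each CPU can run its own check. register_nmi_handler() and
apic->send_IPI_allbutself() are the existing x86 interfaces; the helper
check_this_cpu_progress(), the watchdog_source_cpu variable, and the actual
source wiring are assumed.)

/*
 * Sketch only: a single external NMI source (HPET/TCO) arrives on one
 * CPU, which then has to NMI everybody else so each CPU can run its
 * per-CPU progress check.
 */
static int watchdog_nmi_handler(unsigned int type, struct pt_regs *regs)
{
	check_this_cpu_progress();	/* hypothetical per-CPU check */

	/*
	 * Only the CPU wired to the external source rebroadcasts,
	 * otherwise every receiving CPU would fan it out again.
	 */
	if (smp_processor_id() == watchdog_source_cpu)
		apic->send_IPI_allbutself(NMI_VECTOR);

	return NMI_HANDLED;
}

static int __init watchdog_broadcast_init(void)
{
	return register_nmi_handler(NMI_LOCAL, watchdog_nmi_handler, 0, "watchdog");
}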

> > One other approach is running the watchdog off of _any_ PMI; then all we
> > need to ensure is that PMIs happen semi-regularly. There are two cases
> > where this becomes 'interesting':
> 
> Seems fairly complex.

Yes.. :/
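
(Very roughly, the idea is to treat every perf NMI as evidence the CPU is
alive and run the same per-CPU check from there, so a dedicated counter is
only needed when nothing else generates PMIs. A minimal sketch with
hypothetical names follows; it is not an actual patch.)

/*
 * Sketch of the "run the watchdog off any PMI" idea: every perf NMI on
 * this CPU runs the per-CPU progress check, so a dedicated watchdog
 * counter would only be needed when no other event fires.  This would
 * be called from the PMI handler; all names here are hypothetical.
 */
static DEFINE_PER_CPU(u64, last_pmi_ns);

static void watchdog_check_from_pmi(void)
{
	/*
	 * Record that *some* PMI fired; something else would have to
	 * arm a fallback counter when this goes stale for too long.
	 */
	__this_cpu_write(last_pmi_ns, sched_clock());

	if (cpu_is_hardlocked())	/* the per-CPU test sketched earlier */
		pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",
			 smp_processor_id());
}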


Thread overview: 37+ messages
2017-05-19 17:06 [PATCH 1/2] perf/x86/intel: enable CPU ref_cycles for GP counter kan.liang
2017-05-19 17:06 ` [PATCH 2/2] perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86 kan.liang
2017-05-22 12:03   ` Peter Zijlstra
2017-05-22 12:04     ` Peter Zijlstra
2017-05-22 16:58     ` Liang, Kan
2017-05-22 19:24       ` Peter Zijlstra
2017-05-22 18:20   ` Stephane Eranian
2017-05-22 20:01     ` Andi Kleen
2017-05-22  8:30 ` [PATCH 1/2] perf/x86/intel: enable CPU ref_cycles for GP counter Peter Zijlstra
2017-05-22 18:15   ` Stephane Eranian
2017-05-22  9:19 ` Peter Zijlstra
2017-05-22 12:22   ` Peter Zijlstra
2017-05-22 16:59     ` Liang, Kan
2017-05-22 16:55   ` Liang, Kan
2017-05-22 19:23     ` Peter Zijlstra
2017-05-22 19:28       ` Stephane Eranian
2017-05-22 21:51         ` Liang, Kan
2017-05-23  6:39         ` Peter Zijlstra
2017-05-23  6:42           ` Stephane Eranian
2017-05-24 15:45             ` Andi Kleen
2017-05-24 16:01               ` Vince Weaver
2017-05-24 16:55                 ` Andi Kleen
2017-05-28 20:31                 ` Stephane Eranian
2017-05-30  9:25                   ` Peter Zijlstra
2017-05-30 13:51                     ` Andi Kleen
2017-05-30 16:28                       ` Peter Zijlstra [this message]
2017-05-30 16:41                         ` Stephane Eranian
2017-05-30 17:22                         ` Andi Kleen
2017-05-30 17:40                           ` Peter Zijlstra
2017-05-30 17:51                             ` Andi Kleen
2017-05-30 18:59                               ` Peter Zijlstra
2017-05-30 19:40                                 ` Andi Kleen
2017-05-30 16:39                     ` Stephane Eranian
2017-05-30 16:55                       ` Thomas Gleixner
2017-05-30 17:25                 ` Peter Zijlstra
2017-05-31 20:57                   ` Vince Weaver
2017-05-28  2:56 ` kbuild test robot
