From: "Luck, Tony" <tony.luck@intel.com>
To: David Carrillo-Cisneros <davidcc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Vikas Shivappa <vikas.shivappa@linux.intel.com>,
	"Shivappa, Vikas" <vikas.shivappa@intel.com>,
	Stephane Eranian <eranian@google.com>,
	linux-kernel <linux-kernel@vger.kernel.org>, x86 <x86@kernel.org>,
	"hpa@zytor.com" <hpa@zytor.com>, Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	"Shankar, Ravi V" <ravi.v.shankar@intel.com>,
	"Yu, Fenghua" <fenghua.yu@intel.com>,
	"Kleen, Andi" <andi.kleen@intel.com>,
	"Anvin, H Peter" <h.peter.anvin@intel.com>
Subject: Re: [PATCH 00/12] Cqm2: Intel Cache quality monitoring fixes
Date: Thu, 2 Feb 2017 15:41:01 -0800
Message-ID: <20170202234100.GA17145@intel.com>
In-Reply-To: <CALcN6mhLz3B6w+o2ZFtG5PV1M-5Uz7o=34DN2U8uEW6xco+KSQ@mail.gmail.com>

On Thu, Feb 02, 2017 at 12:22:42PM -0800, David Carrillo-Cisneros wrote:
> There is no need to change perf(1) to support
>  # perf stat -I 1000 -e intel_cqm/llc_occupancy {command}
> 
> the PMU can work with resctrl to provide the support through
> perf_event_open, with the advantage that tools other than perf could
> also use it.

I agree it would be better to expose the counters through
a standard perf_event_open() interface ... but we don't seem
to have had much luck doing that so far.
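
For reference, a rough sketch (not perf's actual code; the config
encoding assumes the usual "event=0x01" string exported in
/sys/bus/event_source/devices/intel_cqm/events/llc_occupancy) of what
any tool would do to get llc_occupancy for a task through
perf_event_open():

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int open_llc_occupancy(pid_t pid)
{
	struct perf_event_attr attr;
	unsigned int type;
	FILE *f;

	f = fopen("/sys/bus/event_source/devices/intel_cqm/type", "r");
	if (!f)
		return -1;
	if (fscanf(f, "%u", &type) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;	/* dynamic PMU type number from sysfs */
	attr.config = 0x01;	/* assumed llc_occupancy encoding, see above */

	/* pid > 0, cpu = -1: follow this task wherever it runs;
	 * the occupancy is then read() from the returned fd */
	return syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);
}

All of the weirdness below is about what the kernel would have to do
behind that one call.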

That would mean re-writing the requirements to focus on what resctrl
needs to do to support each of the perf(1) command line modes of
operation.  The fact that these counters work rather differently from
normal h/w counters has resulted in massively complex volumes of code
trying to map them onto what perf_event_open() expects.

The key points of weirdness seem to be:

1) We need to allocate an RMID for the duration of monitoring. While
   there are quite a lot of RMIDs, it is easy to envision scenarios
   where there are not enough.

2) We need to load that RMID into PQR_ASSOC on a logical CPU whenever a process
   of interest is running (see the MSR sketch after this list).

3) An RMID is shared by llc_occupancy, local_bytes and total_bytes events

4) For llc_occupancy the count can change even when none of the processes
   are running because cache lines are evicted

5) llc_occupancy measures the delta, not the absolute occupancy. To
   get a good result requires monitoring from process creation (or
   lots of patience, or the nuclear option "wbinvd").

6) RMID counters are package scoped
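
To make (2), (3) and (6) concrete, a hand-wavy sketch of the MSR
plumbing.  This is not a copy of the cqm/resctrl code (the helper names
are made up); the MSR numbers and bit layout are per the SDM:
PQR_ASSOC (0xc8f) carries the RMID in bits 9:0 and the CLOSID in bits
63:32, QM_EVTSEL (0xc8d) takes the event id in bits 7:0 and the RMID in
bits 41:32, and QM_CTR (0xc8e) returns the count with bit 63 = error
and bit 62 = unavailable.

#include <linux/types.h>
#include <asm/msr.h>	/* wrmsr/rdmsrl and the MSR_IA32_* numbers */

#define QOS_L3_OCCUP_EVENT_ID	0x01	/* llc_occupancy \                  */
#define QOS_MBM_TOTAL_EVENT_ID	0x02	/* total_bytes    > all on one RMID */
#define QOS_MBM_LOCAL_EVENT_ID	0x03	/* local_bytes   /                  */

/* (2): run on every context switch into a monitored task */
static void cqm_sched_in(u32 rmid, u32 closid)
{
	wrmsr(MSR_IA32_PQR_ASSOC, rmid, closid);
}

/* (3)+(6): one RMID feeds all three events, but a read only covers the
 * package of the CPU executing the rdmsr */
static u64 cqm_read_event(u32 rmid, u32 evtid)
{
	u64 val;

	wrmsr(MSR_IA32_QM_EVTSEL, evtid, rmid);
	rdmsrl(MSR_IA32_QM_CTR, val);
	if (val & (3ULL << 62))
		return 0;	/* error or count not (yet) available */
	return val;		/* raw units, scale by the CPUID-enumerated factor */
}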


These result in all sorts of hard to resolve situations. E.g. you are
monitoring local bandwidth coming from logical CPU2 using RMID=22. I'm
looking at the cache occupancy of PID=234 using RMID=45. The scheduler
decides to run my process on your CPU.  We can only load one RMID, so
one of us will be disappointed (unless we have some crazy complex code
where your instance of perf borrows RMID=45 and reads out the local
byte count on sched_in() and sched_out() to add to the running count
you were keeping against RMID=22).
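
Just to show how much bookkeeping even that "crazy complex" option
needs, pseudo-code for the borrowing, reusing the made-up helpers from
the sketch above (overflow/wrap handling omitted):

struct cpu_mon {
	u32 cpu_rmid;		/* RMID=22: CPU-wide local bandwidth */
	u64 local_bytes;	/* running total kept in software */
	u32 borrowed_rmid;	/* RMID=45 while the monitored task runs here */
	u64 borrowed_start;	/* its local-byte count at sched_in */
};

static void sched_in_borrow(struct cpu_mon *m, u32 task_rmid, u32 closid)
{
	m->borrowed_rmid  = task_rmid;
	m->borrowed_start = cqm_read_event(task_rmid, QOS_MBM_LOCAL_EVENT_ID);
	cqm_sched_in(task_rmid, closid);	/* h/w now charges RMID=45 */
}

static void sched_out_borrow(struct cpu_mon *m, u32 closid)
{
	u64 now = cqm_read_event(m->borrowed_rmid, QOS_MBM_LOCAL_EVENT_ID);

	/* credit the bytes generated under the borrowed RMID back to the
	 * CPU-wide total that RMID=22 was supposed to be capturing */
	m->local_bytes += now - m->borrowed_start;
	cqm_sched_in(m->cpu_rmid, closid);	/* back to charging RMID=22 */
}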

How can we document such restrictions for people who haven't been
digging in this code for over a year?

I think a perf_event_open() interface would make some simple cases
work, but result in some swearing once people start running multiple
complex monitors at the same time.

-Tony
