linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/12] Cqm2: Intel Cache quality monitoring fixes
@ 2017-01-06 21:59 Vikas Shivappa
  2017-01-06 21:59 ` [PATCH 01/12] Documentation, x86/cqm: Intel Resource Monitoring Documentation Vikas Shivappa
                   ` (12 more replies)
  0 siblings, 13 replies; 92+ messages in thread
From: Vikas Shivappa @ 2017-01-06 21:59 UTC (permalink / raw)
  To: vikas.shivappa, vikas.shivappa
  Cc: davidcc, eranian, linux-kernel, x86, hpa, tglx, mingo, peterz,
	ravi.v.shankar, tony.luck, fenghua.yu, andi.kleen, h.peter.anvin

Resending version 5 with updated send list. Sorry for the spam.

Cqm (cache quality monitoring) is part of Intel RDT (Resource Director
Technology), which enables monitoring and control of shared processor
resources via an MSR interface.
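
For reference, the occupancy counter for a given RMID is read through a
pair of MSRs. The sketch below is modeled on the upstream __rmid_read()
and the MSR layout in the SDM; the exact names in this series may
differ:

    #define MSR_IA32_QM_EVTSEL      0x0c8d
    #define MSR_IA32_QM_CTR         0x0c8e
    #define QOS_L3_OCCUP_EVENT_ID   0x01
    #define RMID_VAL_ERROR          (1ULL << 63)
    #define RMID_VAL_UNAVAIL        (1ULL << 62)

    static u64 __rmid_read(u32 rmid)
    {
            u64 val;

            /* select <event id, RMID>: event in EAX, RMID in EDX */
            wrmsr(MSR_IA32_QM_EVTSEL, QOS_L3_OCCUP_EVENT_ID, rmid);
            rdmsrl(MSR_IA32_QM_CTR, val);

            /* bits 63/62 flag an erroneous/unavailable reading */
            return val;
    }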

The current upstream cqm (cache monitoring) code has major issues that
make the feature almost unusable. This series tries to fix them, and
also addresses Thomas's comments on previous versions of the cqm2 patch
series by better documenting/organizing what we are trying to fix.

	Changes in V5
- Based on Peterz's feedback, removed the file interface in the
perf_event cgroup for starting and stopping continuous monitoring.
- Based on Andi's feedback and references, David has sent a separate
patch optimizing the perf overhead; it is generic and not cqm specific.

This is a continuation of the patch series David (davidcc@google.com)
previously posted, hence it is based on his patches and tries to fix the
same issues. The patches apply on 4.10-rc2.

Below are the issues and the fixes we attempt:

- Issue(1): Inaccurate data for per-package and system-wide monitoring;
perf just prints zeros or arbitrary numbers.

Fix: The patches fix this by throwing an error if the mode is not
supported. The supported modes are task monitoring and cgroup
monitoring. The per-package data for, say, socket x is returned with the
-C <cpu on socket x> -G cgrpy option. The system-wide data can be
obtained by monitoring the root cgroup (example below).
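
For example, a hypothetical invocation following the options above
(assuming the upstream intel_cqm event name):

    # occupancy of cgroup cgrpy on the socket containing CPU 0
    perf stat -a -e intel_cqm/llc_occupancy/ -C 0 -G cgrpy -- sleep 1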

- Issue(2): RMIDs are global, do not scale with more packages, and hence
run out very soon.

Fix: Support per-package RMIDs, which scale better with more packages
and yield more RMIDs to use; allocate them only when needed, i.e. when
tasks are actually scheduled on the package (sketch below).
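
A minimal sketch of per-package allocation, assuming a bitmap of free
RMIDs per package (all names here are illustrative, not the actual
patch code):

    struct pkg_rmid_pool {
            unsigned long   *free_map;      /* set bit == free RMID */
            raw_spinlock_t  lock;
    };
    static struct pkg_rmid_pool rmid_pools[PKG_MAX];  /* hypothetical */

    static int alloc_rmid(int pkg, unsigned int max_rmid)
    {
            struct pkg_rmid_pool *p = &rmid_pools[pkg];
            int rmid;

            raw_spin_lock(&p->lock);
            rmid = find_first_bit(p->free_map, max_rmid + 1);
            if (rmid > max_rmid)
                    rmid = -ENOSPC; /* this package is out of RMIDs */
            else
                    clear_bit(rmid, p->free_map);
            raw_spin_unlock(&p->lock);

            return rmid;
    }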

- Issue(3): Cgroup monitoring is incomplete: there is no hierarchical
monitoring support, and inconsistent or wrong data is seen when
monitoring a cgroup.

Fix: Full cgroup monitoring support is added. Different cgroups in the
same hierarchy can be monitored together or separately, and a task can
be monitored together with the cgroup it belongs to.
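
Conceptually, the value reported for a monitored cgroup covers the
cgroup and all its descendants. One way to picture the accounting
(purely illustrative, not the patch's implementation):

    static u64 read_hier_occupancy(struct cqm_cgroup *cg)
    {
            struct cqm_cgroup *child;
            u64 total = __rmid_read(cg->rmid);      /* own RMID */

            /* add the occupancy of every descendant */
            list_for_each_entry(child, &cg->children, sibling)
                    total += read_hier_occupancy(child);

            return total;
    }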

- Issue(4): A lot of inconsistent data is currently seen when we monitor
different kinds of events, like cgroup and task events, *together*.

Fix: The patches add support to monitor a cgroup x and a task p1 within
cgroup x at the same time, and also to monitor different cgroups and
tasks together.

- Issue(5): CAT and cqm/mbm write the same PQR_ASSOC MSR separately.

Fix: Integrate the sched_in code and write the PQR MSR only once per
switch_to (sketch below).
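
IA32_PQR_ASSOC carries the monitoring RMID in its low bits and the CAT
CLOSID in its upper 32 bits, so both can be updated with a single wrmsr
at context switch. A rough sketch of the combined hook (names
approximate the existing pqr_state code, not necessarily this series):

    #define MSR_IA32_PQR_ASSOC      0x0c8f

    struct intel_pqr_state {
            u32 rmid;       /* cached low word of PQR_ASSOC */
            u32 closid;     /* cached high word of PQR_ASSOC */
    };
    static DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);

    static inline void pqr_update(u32 rmid, u32 closid)
    {
            struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

            /* touch the MSR only when something actually changed */
            if (state->rmid != rmid || state->closid != closid) {
                    state->rmid = rmid;
                    state->closid = closid;
                    wrmsr(MSR_IA32_PQR_ASSOC, rmid, closid);
            }
    }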

- Issue(6): RMID recycling leads to inaccurate data, complicates the
code and increases the code footprint. Currently it almost makes the
feature *unusable*, as we only see zeros and inconsistent data once we
run out of RMIDs within the lifetime of a system boot. The only way to
get correct numbers is to reboot the system once we run out of RMIDs.

Root cause: Recycling steals an RMID from an existing event x and gives
it to another event y. However, due to the nature of monitoring
llc_occupancy, we may miss tracking an unknown (possibly large) part of
the cache fills during the time an event has no RMID. Hence the user
ends up with inaccurate data for both events x and y, and the inaccuracy
is arbitrary and cannot be measured. Even if an event x gets another
RMID very soon after losing the previous one, we still miss all the
occupancy data that was tied to the previous RMID, which means we cannot
get accurate data even when the event has an RMID most of the time.
There is no way to guarantee accurate results with recycling, and the
data is inaccurate to an arbitrary degree. The fact that an event can
lose its RMID at any time complicates a lot of code in sched_in, init,
count and read. It also complicates mbm, as we may lose the RMID at any
time and hence need to keep a history of all the old counts.

Fix: Recycling is removed, based on Tony's original observation that it
introduces a lot of code while failing to provide accurate data, and
hence has questionable benefits: in spite of several attempts to improve
the recycling, there is no way to guarantee accurate data as explained
above, and the incorrectness is of arbitrary degree (we cannot say, for
example, that the data is off by x%). As a fix we introduce per-package
RMIDs, which mitigate the scarcity of RMIDs to a large extent. RMIDs are
plentiful, about 2 to 4 per logical processor/SMT thread on each
package, so on a 2-socket BDW system with, say, 44 logical
processors/SMT threads we have 176 RMIDs on each package (a total of
2 x 176 = 352 RMIDs). Also, since cgroups are fully supported, many
threads, e.g. all the threads in one VM/container, can be grouped to use
just one RMID. The RMIDs scale with the number of sockets. If we still
run out of RMIDs, a perf read throws an error, because we cannot monitor
once we run out of this limited hardware resource.
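
(The per-package RMID count is enumerated by the hardware; a sketch of
where these numbers come from, per the Intel SDM:)

    /* CPUID.(EAX=0xF, ECX=1):ECX is the highest valid RMID for
     * L3 monitoring on this package. */
    unsigned int eax, ebx, ecx, edx, max_rmid;

    cpuid_count(0x0f, 1, &eax, &ebx, &ecx, &edx);
    max_rmid = ecx;         /* RMIDs 0..max_rmid are usable */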

This is better than recycling (even a better version than the one
upstream), where the user thinks events are being monitored while they
are actually not monitored for arbitrary amounts of time, resulting in
inaccurate data of arbitrary degree. Such inaccurate data defeats the
purpose of RDT, whose goal is to provide consistent system behaviour by
giving the ability to monitor and control processor resources in an
accurate and reliable fashion. The fix instead provides accurate data
and mitigates the RMID scarcity to a large extent.

What's working now (unit tested):
Task monitoring, cgroup hierarchical monitoring, monitoring multiple
cgroups, cgroup and task in the same cgroup, per-package RMIDs, and
error on read.

TBD:
- Most of MBM is working, but it will need updates for hierarchical
monitoring and the other new feature-related changes we introduce.

Below is a list of the patches and what each one fixes. Each commit
message also gives details on what the patch actually fixes among the
bunch:

[PATCH 02/12] x86/cqm: Remove cqm recycling/conflict handling

Before the patch: Users see only zeros or wrong data once we run out of
RMIDs.
After: Users see either correct data or an error indicating that we have
run out of RMIDs.

[PATCH 03/12] x86/rdt: Add rdt common/cqm compile option
[PATCH 04/12] x86/cqm: Add Per pkg rmid support

Before the patches: RMIDs are global.
Tests: Available RMIDs increase by x times, where x is the number of
packages.
Adds LAZY RMID allocation: RMIDs are allocated during the first sched_in
(see the sketch below).
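
A rough illustration of the LAZY scheme (hypothetical names, reusing the
alloc_rmid() sketch from above; the event only pins down an RMID on a
package the first time it schedules in there):

    static void cqm_event_sched_in(struct cqm_event *ev, int cpu)
    {
            int pkg = topology_physical_package_id(cpu);

            /* LAZY: allocate this package's RMID on first sched_in */
            if (ev->rmid[pkg] == INVALID_RMID)
                    ev->rmid[pkg] = alloc_rmid(pkg, max_rmid);
    }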

[PATCH 05/12] x86/cqm,perf/core: Cgroup support prepare
[PATCH 06/12] x86/cqm: Add cgroup hierarchical monitoring support
[PATCH 07/12] x86/rdt,cqm: Scheduling support update

Before the patches: cgroup monitoring was not fully supported.
After: cgroup monitoring is fully supported, including hierarchical
monitoring.

[PATCH 08/12] x86/cqm: Add support for monitoring task and cgroup

Before the patch: a cgroup and a task could not be monitored together,
which resulted in a lot of inconsistent data.
After: A task and a cgroup can be monitored together; a task within a
cgroup can also be monitored together with the cgroup itself.

[PATCH 9/12] x86/cqm: Add RMID reuse

Before the patch: Once an RMID is used, it is never used again.
After: We reuse the RMIDs which are freed. The user can specify NOLAZY
RMID allocation, in which case open fails if we cannot get all the RMIDs
at open time.

[PATCH 10/12] perf/core,x86/cqm: Add read for Cgroup events,per pkg
[PATCH 11/12] perf/stat: fix bug in handling events in error state
[PATCH 12/12] perf/stat: revamp read error handling, snapshot and
per_pkg events

Patches 1/12 - 9/12 add all the features, but the data is not visible to
perf/core or the perf user mode tool. Patches 11-12 fix this and make
the data available to the perf user mode tool.


Thread overview: 92+ messages
2017-01-06 21:59 [PATCH 00/12] Cqm2: Intel Cache quality monitoring fixes Vikas Shivappa
2017-01-06 21:59 ` [PATCH 01/12] Documentation, x86/cqm: Intel Resource Monitoring Documentation Vikas Shivappa
2017-01-06 21:59 ` [PATCH 02/12] x86/cqm: Remove cqm recycling/conflict handling Vikas Shivappa
2017-01-06 21:59 ` [PATCH 03/12] x86/rdt: Add rdt common/cqm compile option Vikas Shivappa
2017-01-16 18:05   ` Thomas Gleixner
2017-01-17 17:25     ` Shivappa Vikas
2017-01-06 21:59 ` [PATCH 04/12] x86/cqm: Add Per pkg rmid support Vikas Shivappa
2017-01-16 18:15   ` [PATCH 04/12] x86/cqm: Add Per pkg rmid support Thomas Gleixner
2017-01-17 19:11     ` Shivappa Vikas
2017-01-06 21:59 ` [PATCH 05/12] x86/cqm,perf/core: Cgroup support prepare Vikas Shivappa
2017-01-17 12:11   ` Thomas Gleixner
2017-01-17 12:31     ` Peter Zijlstra
2017-01-18  2:14     ` Shivappa Vikas
2017-01-17 13:46   ` Thomas Gleixner
2017-01-17 20:22     ` Shivappa Vikas
2017-01-17 21:31       ` Thomas Gleixner
2017-01-17 15:26   ` Peter Zijlstra
2017-01-17 20:27     ` Shivappa Vikas
2017-01-06 21:59 ` [PATCH 06/12] x86/cqm: Add cgroup hierarchical monitoring support Vikas Shivappa
2017-01-17 14:07   ` Thomas Gleixner
2017-01-06 22:00 ` [PATCH 07/12] x86/rdt,cqm: Scheduling support update Vikas Shivappa
2017-01-17 21:58   ` Thomas Gleixner
2017-01-17 22:30     ` Shivappa Vikas
2017-01-06 22:00 ` [PATCH 08/12] x86/cqm: Add support for monitoring task and cgroup together Vikas Shivappa
2017-01-17 16:11   ` Thomas Gleixner
2017-01-06 22:00 ` [PATCH 09/12] x86/cqm: Add RMID reuse Vikas Shivappa
2017-01-17 16:59   ` Thomas Gleixner
2017-01-18  0:26     ` Shivappa Vikas
2017-01-06 22:00 ` [PATCH 10/12] perf/core,x86/cqm: Add read for Cgroup events,per pkg reads Vikas Shivappa
2017-01-06 22:00 ` [PATCH 11/12] perf/stat: fix bug in handling events in error state Vikas Shivappa
2017-01-06 22:00 ` [PATCH 12/12] perf/stat: revamp read error handling, snapshot and per_pkg events Vikas Shivappa
2017-01-17 17:31 ` [PATCH 00/12] Cqm2: Intel Cache quality monitoring fixes Thomas Gleixner
2017-01-18  2:38   ` Shivappa Vikas
2017-01-18  8:53     ` Thomas Gleixner
2017-01-18  9:56       ` Peter Zijlstra
2017-01-19 19:59         ` Shivappa Vikas
2017-01-18 19:41       ` Shivappa Vikas
2017-01-18 21:03       ` David Carrillo-Cisneros
2017-01-19 17:41         ` Thomas Gleixner
2017-01-20  7:37           ` David Carrillo-Cisneros
2017-01-20  8:30             ` Thomas Gleixner
2017-01-20 20:27               ` David Carrillo-Cisneros
2017-01-18 21:16       ` Yu, Fenghua
2017-01-19  2:09       ` David Carrillo-Cisneros
2017-01-19 16:58         ` David Carrillo-Cisneros
2017-01-19 17:54           ` Thomas Gleixner
2017-01-19  2:21       ` Vikas Shivappa
2017-01-19  6:45       ` Stephane Eranian
2017-01-19 18:03         ` Thomas Gleixner
2017-01-20  2:32       ` Vikas Shivappa
2017-01-20  7:58         ` David Carrillo-Cisneros
2017-01-20 13:28           ` Thomas Gleixner
2017-01-20 20:11             ` David Carrillo-Cisneros
2017-01-20 21:08               ` Shivappa Vikas
2017-01-20 21:44                 ` David Carrillo-Cisneros
2017-01-20 23:51                   ` Shivappa Vikas
2017-02-08 10:13                     ` Peter Zijlstra
2017-01-23  9:47               ` Thomas Gleixner
2017-01-23 11:30                 ` Peter Zijlstra
2017-02-01 20:08                 ` Luck, Tony
2017-02-01 23:12                   ` David Carrillo-Cisneros
2017-02-02 17:39                     ` Luck, Tony
2017-02-02 19:33                     ` Luck, Tony
2017-02-02 20:20                       ` Shivappa Vikas
2017-02-02 20:22                       ` David Carrillo-Cisneros
2017-02-02 23:41                         ` Luck, Tony
2017-02-03  1:40                           ` David Carrillo-Cisneros
2017-02-03  2:14                             ` David Carrillo-Cisneros
2017-02-03 17:52                               ` Luck, Tony
2017-02-03 21:08                                 ` David Carrillo-Cisneros
2017-02-03 22:24                                   ` Luck, Tony
2017-02-07  8:08                                 ` Stephane Eranian
2017-02-07 18:52                                   ` Luck, Tony
2017-02-08 19:31                                     ` Stephane Eranian
2017-02-07 20:10                                   ` Shivappa Vikas
2017-02-17 13:41                                   ` Thomas Gleixner
2017-02-06 18:54                     ` Luck, Tony
2017-02-06 21:22                     ` Luck, Tony
2017-02-06 21:36                       ` Shivappa Vikas
2017-02-06 21:46                         ` David Carrillo-Cisneros
2017-02-06 22:16                       ` David Carrillo-Cisneros
2017-02-06 23:27                         ` Luck, Tony
2017-02-07  0:33                           ` David Carrillo-Cisneros
2017-02-02  0:35                   ` Andi Kleen
2017-02-02  1:12                     ` David Carrillo-Cisneros
2017-02-02  1:19                       ` Andi Kleen
2017-02-02  1:22                     ` Yu, Fenghua
2017-02-02 17:51                       ` Shivappa Vikas
2017-02-08 10:11               ` Peter Zijlstra
2017-01-20 20:40           ` Shivappa Vikas
2017-01-20 19:31         ` Stephane Eranian
  -- strict thread matches above, loose matches on Subject: below --
2017-01-06 21:56 [PATCH 00/12 V5] Cqm2: Intel Cache quality of " Vikas Shivappa
2017-01-06 21:56 ` [PATCH 03/12] x86/rdt: Add rdt common/cqm compile option Vikas Shivappa
