* [PATCH V3 1/2] perf/x86: Move cpuc->running into P4 specific code
@ 2021-04-13 18:57 kan.liang
  2021-04-13 18:57 ` [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
  2021-04-14 12:22 ` [PATCH V3 1/2] perf/x86: Move cpuc->running into P4 specific code Peter Zijlstra
  0 siblings, 2 replies; 10+ messages in thread
From: kan.liang @ 2021-04-13 18:57 UTC (permalink / raw)
  To: peterz, mingo, linux-kernel
  Cc: acme, ak, mark.rutland, luto, eranian, namhyung, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The 'running' variable is only used in the P4 PMU. Current perf sets the
variable in the critical function x86_pmu_start(), which wastes cycles
for everybody not running on P4.

Move cpuc->running into the P4 specific p4_pmu_enable_event().

Add a static per-CPU 'p4_running' variable to replace the 'running'
variable in the struct cpu_hw_events. This saves space in the generic
structure.

The p4_pmu_enable_all() also invokes the p4_pmu_enable_event(), but it
should not set cpuc->running. Factor out __p4_pmu_enable_event() for
p4_pmu_enable_all().

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---

New patch for V3
- Address the suggestion from Peter.
https://lore.kernel.org/lkml/20200908155840.GC35926@hirez.programming.kicks-ass.net/

 arch/x86/events/core.c       |  1 -
 arch/x86/events/intel/p4.c   | 16 +++++++++++++---
 arch/x86/events/perf_event.h |  1 -
 3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 18df171..dd9f3c2 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1480,7 +1480,6 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 
 	cpuc->events[idx] = event;
 	__set_bit(idx, cpuc->active_mask);
-	__set_bit(idx, cpuc->running);
 	static_call(x86_pmu_enable)(event);
 	perf_event_update_userpage(event);
 }
diff --git a/arch/x86/events/intel/p4.c b/arch/x86/events/intel/p4.c
index a4cc660..9c10cbb 100644
--- a/arch/x86/events/intel/p4.c
+++ b/arch/x86/events/intel/p4.c
@@ -947,7 +947,7 @@ static void p4_pmu_enable_pebs(u64 config)
 	(void)wrmsrl_safe(MSR_P4_PEBS_MATRIX_VERT,	(u64)bind->metric_vert);
 }
 
-static void p4_pmu_enable_event(struct perf_event *event)
+static void __p4_pmu_enable_event(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
 	int thread = p4_ht_config_thread(hwc->config);
@@ -983,6 +983,16 @@ static void p4_pmu_enable_event(struct perf_event *event)
 				(cccr & ~P4_CCCR_RESERVED) | P4_CCCR_ENABLE);
 }
 
+static DEFINE_PER_CPU(unsigned long [BITS_TO_LONGS(X86_PMC_IDX_MAX)], p4_running);
+
+static void p4_pmu_enable_event(struct perf_event *event)
+{
+	int idx = event->hw.idx;
+
+	__set_bit(idx, per_cpu(p4_running, smp_processor_id()));
+	__p4_pmu_enable_event(event);
+}
+
 static void p4_pmu_enable_all(int added)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -992,7 +1002,7 @@ static void p4_pmu_enable_all(int added)
 		struct perf_event *event = cpuc->events[idx];
 		if (!test_bit(idx, cpuc->active_mask))
 			continue;
-		p4_pmu_enable_event(event);
+		__p4_pmu_enable_event(event);
 	}
 }
 
@@ -1012,7 +1022,7 @@ static int p4_pmu_handle_irq(struct pt_regs *regs)
 
 		if (!test_bit(idx, cpuc->active_mask)) {
 			/* catch in-flight IRQs */
-			if (__test_and_clear_bit(idx, cpuc->running))
+			if (__test_and_clear_bit(idx, per_cpu(p4_running, smp_processor_id())))
 				handled++;
 			continue;
 		}
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 53b2b5f..54a340e 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -228,7 +228,6 @@ struct cpu_hw_events {
 	 */
 	struct perf_event	*events[X86_PMC_IDX_MAX]; /* in counter order */
 	unsigned long		active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
-	unsigned long		running[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	int			enabled;
 
 	int			n_events; /* the # of events in the below arrays */
-- 
2.7.4



* [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-13 18:57 [PATCH V3 1/2] perf/x86: Move cpuc->running into P4 specific code kan.liang
@ 2021-04-13 18:57 ` kan.liang
  2021-04-13 20:33   ` kernel test robot
                     ` (3 more replies)
  2021-04-14 12:22 ` [PATCH V3 1/2] perf/x86: Move cpuc->running into P4 specific code Peter Zijlstra
  1 sibling, 4 replies; 10+ messages in thread
From: kan.liang @ 2021-04-13 18:57 UTC (permalink / raw)
  To: peterz, mingo, linux-kernel
  Cc: acme, ak, mark.rutland, luto, eranian, namhyung, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The counter value of a perf task may leak to another RDPMC task.
For example, a perf stat task as below is running on CPU 0.

    perf stat -e 'branches,cycles' -- taskset -c 0 ./workload

In the meantime, an RDPMC task, which is also running on CPU 0, may read
the GP counters periodically. (The RDPMC task creates a fixed event,
but reads four GP counters.)

    $ taskset -c 0 ./rdpmc_read_all_counters
    index 0x0 value 0x8001e5970f99
    index 0x1 value 0x8005d750edb6
    index 0x2 value 0x0
    index 0x3 value 0x0

    index 0x0 value 0x8002358e48a5
    index 0x1 value 0x8006bd1e3bc9
    index 0x2 value 0x0
    index 0x3 value 0x0
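
For reference, a minimal user-space sketch of what such an RDPMC reader
might look like. This is only an illustration, not the actual
rdpmc_read_all_counters tool; the event choice and the direct read of GP
counter indices 0-3 are assumptions.

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    static inline uint64_t rdpmc(uint32_t idx)
    {
        uint32_t lo, hi;

        asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (idx));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        /* A fixed event is enough; mapping it is what enables RDPMC. */
        struct perf_event_attr attr = {
            .type = PERF_TYPE_HARDWARE,
            .size = sizeof(attr),
            .config = PERF_COUNT_HW_INSTRUCTIONS,
            .exclude_kernel = 1,
        };
        int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        /* mmap()-ing the event sets CR4.PCE for this mm (event_mapped). */
        void *pg = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);

        if (fd < 0 || pg == MAP_FAILED)
            return 1;

        for (;;) {
            uint32_t i;

            for (i = 0; i < 4; i++) /* read four GP counters */
                printf("index 0x%x value 0x%llx\n", i,
                       (unsigned long long)rdpmc(i));
            printf("\n");
            sleep(1);
        }
    }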

This is a potential security issue. Once an attacker knows what the
other thread is counting, the PerfMon counter can be used as a
side channel to attack cryptosystems.

The counter value of the perf stat task leaks to the RDPMC task because
perf never clears the counter when it's stopped.

Two methods were considered to address the issue.
- Unconditionally reset the counter in x86_pmu_del(). It can bring extra
  overhead even when there is no RDPMC task running.
- Only reset the un-assigned dirty counters when an RDPMC task is
  scheduled in. This is the method implemented here.

A dirty counter is a counter whose assigned event has been deleted but
which has not been reset. To track the dirty counters, add a 'dirty'
bitmap to the struct cpu_hw_events.

The current code doesn't reset the counter when the assigned event is
deleted. Set the corresponding bit in the 'dirty' bitmap in
x86_pmu_del().

The security issue can only be triggered by an RDPMC task. The event
for an RDPMC task requires an mmap buffer, which can be used to detect
an RDPMC task. Once the event is detected in event_mapped(), enable
sched_task(), which is invoked on each context switch. Add a check in
sched_task() to clear the dirty counters when the RDPMC task is
scheduled in. Only the currently un-assigned dirty counters are reset,
because the RDPMC-assigned dirty counters will be updated soon.

The RDPMC instruction is also supported on older platforms, so add
sched_task() for the core_pmu as well. Since the core_pmu doesn't
support large PEBS and LBR call stack, the intel_pmu_pebs/lbr_sched_task()
will be ignored.

RDPMC is not an Intel-only feature, so add the dirty-counter clearing
code to the generic x86 code.

After applying the patch,

        $ taskset -c 0 ./rdpmc_read_all_counters
        index 0x0 value 0x0
        index 0x1 value 0x0
        index 0x2 value 0x0
        index 0x3 value 0x0

        index 0x0 value 0x0
        index 0x1 value 0x0
        index 0x2 value 0x0
        index 0x3 value 0x0

Performance

Context switch performance is only impacted when there are two or more
perf users and at least one of them is an RDPMC user. In other cases,
there is no performance impact.

The worst case occurs when there are two users: the RDPMC user uses
only one counter, while the other user uses all available counters.
When the RDPMC task is scheduled in, all the counters other than the
RDPMC-assigned one have to be reset.

Here is the test result for the worst case.

The test was run on an Ice Lake platform, which has 8 GP counters and
3 fixed counters (not including the SLOTS counter).

The lat_ctx is used to measure the context switching time.

    lat_ctx -s 128K -N 1000 processes 2

I instrumented lat_ctx to open all 8 GP counters and 3 fixed counters
for one task. The other task opens a fixed counter and enables RDPMC.
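
A rough sketch of what the counter-hog side of that instrumentation
could look like. The actual lat_ctx modification is not included in this
posting, and the specific events used to fill the counters below are an
assumption.

    /* Needs <linux/perf_event.h>, <sys/syscall.h>, <unistd.h>. */
    static int open_hw_event(unsigned long long config)
    {
        struct perf_event_attr attr = {
            .type = PERF_TYPE_HARDWARE,
            .size = sizeof(attr),
            .config = config,
        };

        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }

    static void open_all_counters(void)
    {
        int i;

        /* These are typically scheduled on the three fixed counters. */
        open_hw_event(PERF_COUNT_HW_INSTRUCTIONS);
        open_hw_event(PERF_COUNT_HW_CPU_CYCLES);
        open_hw_event(PERF_COUNT_HW_REF_CPU_CYCLES);

        /* Eight copies of a generic event to occupy the GP counters. */
        for (i = 0; i < 8; i++)
            open_hw_event(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
    }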

Without the patch:
The context switch time is 4.97 us

With the patch:
The context switch time is 5.16 us

There is a ~4% context switch time regression in the worst case.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
The V2 can be found here.
https://lore.kernel.org/lkml/20200821195754.20159-3-kan.liang@linux.intel.com/

Changes since V2:
- Unconditionally set cpuc->dirty. The worst case for an RDPMC task is
  that we may have to clear all counters the first time in
  x86_pmu_event_mapped(). After that, sched_task() will clear/update the
  'dirty' bitmap, and only the really dirty counters are cleared. For a
  non-RDPMC task, it's harmless to unconditionally set cpuc->dirty.
- Remove the !is_sampling_event() check
- Move the code into the generic x86 file, because RDPMC is not an
  Intel-only feature.

Changes since V1:
- Drop the old method, which unconditionally reset the counter in
  x86_pmu_del().
  Only reset the dirty counters when an RDPMC task is scheduled in.

 arch/x86/events/core.c       | 47 ++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/events/perf_event.h |  1 +
 2 files changed, 48 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index dd9f3c2..0d4a1a3 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1585,6 +1585,8 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
 		goto do_del;
 
+	__set_bit(event->hw.idx, cpuc->dirty);
+
 	/*
 	 * Not a TXN, therefore cleanup properly.
 	 */
@@ -2304,12 +2306,46 @@ static int x86_pmu_event_init(struct perf_event *event)
 	return err;
 }
 
+void x86_pmu_clear_dirty_counters(void)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int i;
+
+	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
+		return;
+
+	 /* Don't need to clear the assigned counter. */
+	for (i = 0; i < cpuc->n_events; i++)
+		__clear_bit(cpuc->assign[i], cpuc->dirty);
+
+	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
+		/* Metrics and fake events don't have corresponding HW counters. */
+		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
+			continue;
+		else if (i >= INTEL_PMC_IDX_FIXED)
+			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
+		else
+			wrmsrl(x86_pmu_event_addr(i), 0);
+	}
+
+	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
+}
+
 static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
 
 	/*
+	 * Enable sched_task() for the RDPMC task,
+	 * and clear the existing dirty counters.
+	 */
+	if (x86_pmu.sched_task && event->hw.target) {
+		perf_sched_cb_inc(event->ctx->pmu);
+		x86_pmu_clear_dirty_counters();
+	}
+
+	/*
 	 * This function relies on not being called concurrently in two
 	 * tasks in the same mm.  Otherwise one task could observe
 	 * perf_rdpmc_allowed > 1 and return all the way back to
@@ -2331,6 +2367,9 @@ static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *m
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
 
+	if (x86_pmu.sched_task && event->hw.target)
+		perf_sched_cb_dec(event->ctx->pmu);
+
 	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
 		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
 }
@@ -2436,6 +2475,14 @@ static const struct attribute_group *x86_pmu_attr_groups[] = {
 static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
 {
 	static_call_cond(x86_pmu_sched_task)(ctx, sched_in);
+
+	/*
+	 * If a new task has the RDPMC enabled, clear the dirty counters
+	 * to prevent the potential leak.
+	 */
+	if (sched_in && ctx && READ_ONCE(x86_pmu.attr_rdpmc) &&
+	    current->mm && atomic_read(&current->mm->context.perf_rdpmc_allowed))
+		x86_pmu_clear_dirty_counters();
 }
 
 static void x86_pmu_swap_task_ctx(struct perf_event_context *prev,
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 54a340e..e855f20 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -228,6 +228,7 @@ struct cpu_hw_events {
 	 */
 	struct perf_event	*events[X86_PMC_IDX_MAX]; /* in counter order */
 	unsigned long		active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+	unsigned long		dirty[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	int			enabled;
 
 	int			n_events; /* the # of events in the below arrays */
-- 
2.7.4



* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-13 18:57 ` [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
@ 2021-04-13 20:33   ` kernel test robot
  2021-04-14 13:47     ` Liang, Kan
  2021-04-13 23:41   ` kernel test robot
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: kernel test robot @ 2021-04-13 20:33 UTC (permalink / raw)
  To: kan.liang, peterz, mingo, linux-kernel
  Cc: kbuild-all, acme, ak, mark.rutland, luto, eranian, namhyung, Kan Liang


Hi,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tip/perf/core]
[also build test WARNING on tip/master linux/master linus/master v5.12-rc7 next-20210413]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/kan-liang-linux-intel-com/perf-x86-Move-cpuc-running-into-P4-specific-code/20210414-030649
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git cface0326a6c2ae5c8f47bd466f07624b3e348a7
config: i386-tinyconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/83f02393e1b5a2723294d8697f4fd5473d70602c
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review kan-liang-linux-intel-com/perf-x86-Move-cpuc-running-into-P4-specific-code/20210414-030649
        git checkout 83f02393e1b5a2723294d8697f4fd5473d70602c
        # save the attached .config to linux build tree
        make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> arch/x86/events/core.c:2309:6: warning: no previous prototype for 'x86_pmu_clear_dirty_counters' [-Wmissing-prototypes]
    2309 | void x86_pmu_clear_dirty_counters(void)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~


vim +/x86_pmu_clear_dirty_counters +2309 arch/x86/events/core.c

  2308	
> 2309	void x86_pmu_clear_dirty_counters(void)
  2310	{
  2311		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
  2312		int i;
  2313	
  2314		if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
  2315			return;
  2316	
  2317		 /* Don't need to clear the assigned counter. */
  2318		for (i = 0; i < cpuc->n_events; i++)
  2319			__clear_bit(cpuc->assign[i], cpuc->dirty);
  2320	
  2321		for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
  2322			/* Metrics and fake events don't have corresponding HW counters. */
  2323			if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
  2324				continue;
  2325			else if (i >= INTEL_PMC_IDX_FIXED)
  2326				wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
  2327			else
  2328				wrmsrl(x86_pmu_event_addr(i), 0);
  2329		}
  2330	
  2331		bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
  2332	}
  2333	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org



* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-13 18:57 ` [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
  2021-04-13 20:33   ` kernel test robot
@ 2021-04-13 23:41   ` kernel test robot
  2021-04-14  0:34   ` Andy Lutomirski
  2021-04-14 13:51   ` Namhyung Kim
  3 siblings, 0 replies; 10+ messages in thread
From: kernel test robot @ 2021-04-13 23:41 UTC (permalink / raw)
  To: kan.liang, peterz, mingo, linux-kernel
  Cc: kbuild-all, clang-built-linux, acme, ak, mark.rutland, luto,
	eranian, namhyung, Kan Liang


Hi,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tip/perf/core]
[also build test WARNING on tip/master linux/master linus/master v5.12-rc7 next-20210413]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/kan-liang-linux-intel-com/perf-x86-Move-cpuc-running-into-P4-specific-code/20210414-030649
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git cface0326a6c2ae5c8f47bd466f07624b3e348a7
config: x86_64-randconfig-a014-20210413 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project 9829f5e6b1bca9b61efc629770d28bb9014dec45)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/83f02393e1b5a2723294d8697f4fd5473d70602c
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review kan-liang-linux-intel-com/perf-x86-Move-cpuc-running-into-P4-specific-code/20210414-030649
        git checkout 83f02393e1b5a2723294d8697f4fd5473d70602c
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> arch/x86/events/core.c:2309:6: warning: no previous prototype for function 'x86_pmu_clear_dirty_counters' [-Wmissing-prototypes]
   void x86_pmu_clear_dirty_counters(void)
        ^
   arch/x86/events/core.c:2309:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   void x86_pmu_clear_dirty_counters(void)
   ^
   static 
   1 warning generated.


vim +/x86_pmu_clear_dirty_counters +2309 arch/x86/events/core.c

  2308	
> 2309	void x86_pmu_clear_dirty_counters(void)
  2310	{
  2311		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
  2312		int i;
  2313	
  2314		if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
  2315			return;
  2316	
  2317		 /* Don't need to clear the assigned counter. */
  2318		for (i = 0; i < cpuc->n_events; i++)
  2319			__clear_bit(cpuc->assign[i], cpuc->dirty);
  2320	
  2321		for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
  2322			/* Metrics and fake events don't have corresponding HW counters. */
  2323			if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
  2324				continue;
  2325			else if (i >= INTEL_PMC_IDX_FIXED)
  2326				wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
  2327			else
  2328				wrmsrl(x86_pmu_event_addr(i), 0);
  2329		}
  2330	
  2331		bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
  2332	}
  2333	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org



* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-13 18:57 ` [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
  2021-04-13 20:33   ` kernel test robot
  2021-04-13 23:41   ` kernel test robot
@ 2021-04-14  0:34   ` Andy Lutomirski
  2021-04-14  1:30     ` Liang, Kan
  2021-04-14 13:51   ` Namhyung Kim
  3 siblings, 1 reply; 10+ messages in thread
From: Andy Lutomirski @ 2021-04-14  0:34 UTC (permalink / raw)
  To: kan.liang
  Cc: Peter Zijlstra, Ingo Molnar, LKML, Arnaldo Carvalho de Melo,
	Andi Kleen, Mark Rutland, Stephane Eranian, Namhyung Kim

On Tue, Apr 13, 2021 at 12:05 PM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The counter value of a perf task may leak to another RDPMC task.
> For example, a perf stat task as below is running on CPU 0.
>
>     perf stat -e 'branches,cycles' -- taskset -c 0 ./workload

I assume this doesn't fix the leak if the sensitive counter is systemwide?

Could Intel please add proper security and ideally virtualization for
this?  Ideally RDPMC permission would be a bitmask for all RDPMC-able
counters, not just a single on/off switch.

-Andy


* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-14  0:34   ` Andy Lutomirski
@ 2021-04-14  1:30     ` Liang, Kan
  0 siblings, 0 replies; 10+ messages in thread
From: Liang, Kan @ 2021-04-14  1:30 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Peter Zijlstra, Ingo Molnar, LKML, Arnaldo Carvalho de Melo,
	Andi Kleen, Mark Rutland, Stephane Eranian, Namhyung Kim



On 4/13/2021 8:34 PM, Andy Lutomirski wrote:
> On Tue, Apr 13, 2021 at 12:05 PM <kan.liang@linux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> The counter value of a perf task may leak to another RDPMC task.
>> For example, a perf stat task as below is running on CPU 0.
>>
>>      perf stat -e 'branches,cycles' -- taskset -c 0 ./workload
> 
> I assume this doesn't fix the leak if the sensitive counter is systemwide?
>

Right.

> Could Intel please add proper security and ideally virtualization for
> this?  Ideally RDPMC permission would be a bitmask for all RDPMC-able
> counters, not just a single on/off switch.
> 

Yes, we are working on it.

For now, I think this patch is what we can do so far.

Thanks,
Kan



* Re: [PATCH V3 1/2] perf/x86: Move cpuc->running into P4 specific code
  2021-04-13 18:57 [PATCH V3 1/2] perf/x86: Move cpuc->running into P4 specific code kan.liang
  2021-04-13 18:57 ` [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
@ 2021-04-14 12:22 ` Peter Zijlstra
  1 sibling, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2021-04-14 12:22 UTC (permalink / raw)
  To: kan.liang
  Cc: mingo, linux-kernel, acme, ak, mark.rutland, luto, eranian, namhyung



Thanks for these Kan!


* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-13 20:33   ` kernel test robot
@ 2021-04-14 13:47     ` Liang, Kan
  0 siblings, 0 replies; 10+ messages in thread
From: Liang, Kan @ 2021-04-14 13:47 UTC (permalink / raw)
  To: kernel test robot, peterz, mingo, linux-kernel
  Cc: kbuild-all, acme, ak, mark.rutland, luto, eranian, namhyung



On 4/13/2021 4:33 PM, kernel test robot wrote:
> Hi,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on tip/perf/core]
> [also build test WARNING on tip/master linux/master linus/master v5.12-rc7 next-20210413]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
> 
> url:    https://github.com/0day-ci/linux/commits/kan-liang-linux-intel-com/perf-x86-Move-cpuc-running-into-P4-specific-code/20210414-030649
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git cface0326a6c2ae5c8f47bd466f07624b3e348a7
> config: i386-tinyconfig (attached as .config)
> compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
> reproduce (this is a W=1 build):
>          # https://github.com/0day-ci/linux/commit/83f02393e1b5a2723294d8697f4fd5473d70602c
>          git remote add linux-review https://github.com/0day-ci/linux
>          git fetch --no-tags linux-review kan-liang-linux-intel-com/perf-x86-Move-cpuc-running-into-P4-specific-code/20210414-030649
>          git checkout 83f02393e1b5a2723294d8697f4fd5473d70602c
>          # save the attached .config to linux build tree
>          make W=1 ARCH=i386
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
> 
> All warnings (new ones prefixed by >>):
> 
>>> arch/x86/events/core.c:2309:6: warning: no previous prototype for 'x86_pmu_clear_dirty_counters' [-Wmissing-prototypes]
>      2309 | void x86_pmu_clear_dirty_counters(void)
>           |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> 
> vim +/x86_pmu_clear_dirty_counters +2309 arch/x86/events/core.c
> 
>    2308	
>> 2309	void x86_pmu_clear_dirty_counters(void)


Should be "static void x86_pmu_clear_dirty_counters(void)".

I will send V4 shortly to fix it.

Thanks,
Kan

>    2310	{
>    2311		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>    2312		int i;
>    2313	
>    2314		if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
>    2315			return;
>    2316	
>    2317		 /* Don't need to clear the assigned counter. */
>    2318		for (i = 0; i < cpuc->n_events; i++)
>    2319			__clear_bit(cpuc->assign[i], cpuc->dirty);
>    2320	
>    2321		for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
>    2322			/* Metrics and fake events don't have corresponding HW counters. */
>    2323			if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
>    2324				continue;
>    2325			else if (i >= INTEL_PMC_IDX_FIXED)
>    2326				wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
>    2327			else
>    2328				wrmsrl(x86_pmu_event_addr(i), 0);
>    2329		}
>    2330	
>    2331		bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
>    2332	}
>    2333	
> 
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
> 


* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-13 18:57 ` [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
                     ` (2 preceding siblings ...)
  2021-04-14  0:34   ` Andy Lutomirski
@ 2021-04-14 13:51   ` Namhyung Kim
  2021-04-14 14:27     ` Liang, Kan
  3 siblings, 1 reply; 10+ messages in thread
From: Namhyung Kim @ 2021-04-14 13:51 UTC (permalink / raw)
  To: Kan Liang
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel,
	Arnaldo Carvalho de Melo, Andi Kleen, Mark Rutland, luto,
	Stephane Eranian

Hi Kan,

On Wed, Apr 14, 2021 at 4:04 AM <kan.liang@linux.intel.com> wrote:
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index dd9f3c2..0d4a1a3 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -1585,6 +1585,8 @@ static void x86_pmu_del(struct perf_event *event, int flags)
>         if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
>                 goto do_del;
>
> +       __set_bit(event->hw.idx, cpuc->dirty);
> +
>         /*
>          * Not a TXN, therefore cleanup properly.
>          */
> @@ -2304,12 +2306,46 @@ static int x86_pmu_event_init(struct perf_event *event)
>         return err;
>  }
>
> +void x86_pmu_clear_dirty_counters(void)
> +{
> +       struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +       int i;
> +
> +       if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
> +               return;

Maybe you can check it after clearing assigned counters.

Thanks,
Namhyung

> +
> +        /* Don't need to clear the assigned counter. */
> +       for (i = 0; i < cpuc->n_events; i++)
> +               __clear_bit(cpuc->assign[i], cpuc->dirty);
> +
> +       for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
> +               /* Metrics and fake events don't have corresponding HW counters. */
> +               if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
> +                       continue;
> +               else if (i >= INTEL_PMC_IDX_FIXED)
> +                       wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
> +               else
> +                       wrmsrl(x86_pmu_event_addr(i), 0);
> +       }
> +
> +       bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
> +}


* Re: [PATCH V3 2/2] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
  2021-04-14 13:51   ` Namhyung Kim
@ 2021-04-14 14:27     ` Liang, Kan
  0 siblings, 0 replies; 10+ messages in thread
From: Liang, Kan @ 2021-04-14 14:27 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel,
	Arnaldo Carvalho de Melo, Andi Kleen, Mark Rutland, luto,
	Stephane Eranian



On 4/14/2021 9:51 AM, Namhyung Kim wrote:
> Hi Kan,
> 
> On Wed, Apr 14, 2021 at 4:04 AM <kan.liang@linux.intel.com> wrote:
>> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
>> index dd9f3c2..0d4a1a3 100644
>> --- a/arch/x86/events/core.c
>> +++ b/arch/x86/events/core.c
>> @@ -1585,6 +1585,8 @@ static void x86_pmu_del(struct perf_event *event, int flags)
>>          if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
>>                  goto do_del;
>>
>> +       __set_bit(event->hw.idx, cpuc->dirty);
>> +
>>          /*
>>           * Not a TXN, therefore cleanup properly.
>>           */
>> @@ -2304,12 +2306,46 @@ static int x86_pmu_event_init(struct perf_event *event)
>>          return err;
>>   }
>>
>> +void x86_pmu_clear_dirty_counters(void)
>> +{
>> +       struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> +       int i;
>> +
>> +       if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
>> +               return;
> 
> Maybe you can check it after clearing assigned counters.
> 

It should be very likely that cpuc->dirty is non-empty.
Moving it after the clearing can skip the for_each_set_bit() and
bitmap_zero().
OK. I will change it in V4.
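
For reference, the reordered check would look roughly like this. This is
an assumption of what V4 might carry (it also folds in the 'static' fix
from the build robot report); the actual V4 patch is not in this thread.

static void x86_pmu_clear_dirty_counters(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	/* Don't need to clear the assigned counter. */
	for (i = 0; i < cpuc->n_events; i++)
		__clear_bit(cpuc->assign[i], cpuc->dirty);

	/* Checking after the clearing also covers the all-assigned case. */
	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
		return;

	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
		/* Metrics and fake events don't have corresponding HW counters. */
		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
			continue;
		else if (i >= INTEL_PMC_IDX_FIXED)
			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
		else
			wrmsrl(x86_pmu_event_addr(i), 0);
	}

	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
}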

Thanks,
Kan




