[tip:,perf/core] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task

Message ID 161858530850.29796.5870405530830102241.tip-bot2@tip-bot2
State New, archived

Commit Message

tip-bot2 for Kan Liang April 16, 2021, 3:01 p.m. UTC
The following commit has been merged into the perf/core branch of tip:

Commit-ID:     01fd9661e168de7cfc4f947e7220fca0e6791999
Gitweb:        https://git.kernel.org/tip/01fd9661e168de7cfc4f947e7220fca0e6791999
Author:        Kan Liang <kan.liang@linux.intel.com>
AuthorDate:    Wed, 14 Apr 2021 07:36:30 -07:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 16 Apr 2021 16:32:43 +02:00

perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task

The counter value of a perf task may leak to another RDPMC task.
For example, a perf stat task as below is running on CPU 0.

    perf stat -e 'branches,cycles' -- taskset -c 0 ./workload

In the meantime, an RDPMC task, which is also running on CPU 0, may read
the GP counters periodically. (The RDPMC task creates a fixed event but
reads four GP counters; a minimal sketch of such a reader is shown after
the output below.)

    $ taskset -c 0 ./rdpmc_read_all_counters
    index 0x0 value 0x8001e5970f99
    index 0x1 value 0x8005d750edb6
    index 0x2 value 0x0
    index 0x3 value 0x0

    index 0x0 value 0x8002358e48a5
    index 0x1 value 0x8006bd1e3bc9
    index 0x2 value 0x0
    index 0x3 value 0x0
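
For reference, a minimal user-space reader with the same shape might look
like the sketch below. This is an assumption about the test, not the
actual rdpmc_read_all_counters source, and it relies on RDPMC already
being usable from user space (e.g. via an mmap'ed perf event); otherwise
the instruction faults.

    #include <stdint.h>
    #include <stdio.h>

    /* Read a GP counter directly with the RDPMC instruction. */
    static inline uint64_t rdpmc(unsigned int counter)
    {
            unsigned int lo, hi;

            asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
            return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
            unsigned int i;

            for (i = 0; i < 4; i++)   /* the four GP counters shown above */
                    printf("index %#x value %#llx\n", i,
                           (unsigned long long)rdpmc(i));
            return 0;
    }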

It is a potential security issue: once the attacker knows what the other
thread is counting, the PerfMon counter can be used as a side-channel to
attack cryptosystems.

The counter value of the perf stat task leaks to the RDPMC task because
perf never clears the counter when it's stopped.

Two methods were considered to address the issue.
- Unconditionally reset the counter in x86_pmu_del(). It can bring extra
  overhead even when there is no RDPMC task running.
- Only reset the un-assigned dirty counters when the RDPMC task is
  scheduled in. This is the method implemented here.

A dirty counter is a counter whose assigned event has been deleted but
which has not been reset. To track the dirty counters, add a 'dirty'
variable in the struct cpu_hw_events.

The current code doesn't reset the counter when the assigned event is
deleted. Set the corresponding bit in the 'dirty' variable in
x86_pmu_del(), if the RDPMC feature is available on the system.

The security issue can only be found with an RDPMC task. The event for
an RDPMC task requires the mmap buffer. This can be used to detect an
RDPMC task. Once the event is detected in the event_mapped(), enable
sched_task(), which is invoked in each context switch. Add a check in
the sched_task() to clear the dirty counters, when the RDPMC task is
scheduled in. Only the current un-assigned dirty counters are reset,
because the RDPMC-assigned dirty counters will be updated soon.
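
For illustration, the user-space side of such an RDPMC task might look
like the sketch below (the event choice and helper are assumptions for
illustration, not part of this patch). It is the mmap() of the event's
ring buffer that reaches x86_pmu_event_mapped():

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Open a hardware event and mmap its ring buffer; the mmap() is what
     * the kernel uses to detect an RDPMC task. Error handling omitted. */
    static struct perf_event_mmap_page *open_rdpmc_event(void)
    {
            struct perf_event_attr attr;
            struct perf_event_mmap_page *pc;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_INSTRUCTIONS; /* usually a fixed counter */
            attr.exclude_kernel = 1;

            fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

            pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);

            /* On success, pc->cap_user_rdpmc is set and pc->index - 1 is the
             * counter index to pass to the rdpmc instruction. */
            return pc;
    }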

The RDPMC instruction is also supported on older platforms, so add
sched_task() for the core_pmu as well. Since the core_pmu doesn't support
large PEBS and LBR callstack, intel_pmu_pebs/lbr_sched_task() will be
ignored.

RDPMC is not an Intel-only feature, so add the dirty-counter clearing
code to the generic x86 code.

After applying the patch,

        $ taskset -c 0 ./rdpmc_read_all_counters
        index 0x0 value 0x0
        index 0x1 value 0x0
        index 0x2 value 0x0
        index 0x3 value 0x0

        index 0x0 value 0x0
        index 0x1 value 0x0
        index 0x2 value 0x0
        index 0x3 value 0x0

Performance

Context-switch performance is only impacted when there are two or more
perf users and at least one of them is an RDPMC user. In other cases,
there is no performance impact.

The worst case occurs with two users: the RDPMC user uses only one
counter, while the other user uses all available counters. When the
RDPMC task is scheduled in, every counter other than the RDPMC-assigned
one has to be reset.

Here is the test result for the worst case.

The test is run on an Ice Lake platform, which has 8 GP counters and 3
fixed counters (not including the SLOTS counter).

The lat_ctx is used to measure the context switching time.

    lat_ctx -s 128K -N 1000 processes 2

I instrumented lat_ctx so that one task opens all 8 GP counters and 3
fixed counters. The other task opens a fixed counter and enables RDPMC.
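
A rough sketch of that instrumentation is below; the event choices are
assumptions (the instrumented lat_ctx itself is not part of this patch),
picked so that the 3 fixed and 8 GP counters are all occupied:

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Open one hardware event counting the current task on any CPU.
     * Error handling omitted. */
    static int open_hw_event(__u64 config)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = config;

            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }

    static void occupy_all_counters(void)
    {
            int i;

            /* These typically land on the three fixed counters. */
            open_hw_event(PERF_COUNT_HW_INSTRUCTIONS);
            open_hw_event(PERF_COUNT_HW_CPU_CYCLES);
            open_hw_event(PERF_COUNT_HW_REF_CPU_CYCLES);

            /* Eight more events to fill the GP counters. */
            for (i = 0; i < 8; i++)
                    open_hw_event(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
    }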

Without the patch:
The context switch time is 4.97 us

With the patch:
The context switch time is 5.16 us

There is a ~4% drop in context-switch time in the worst case
((5.16 - 4.97) / 4.97 ≈ 3.8%).

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/1618410990-21383-2-git-send-email-kan.liang@linux.intel.com
---
 arch/x86/events/core.c       | 47 +++++++++++++++++++++++++++++++++++-
 arch/x86/events/perf_event.h |  1 +-
 2 files changed, 48 insertions(+)

Comments

Peter Zijlstra April 16, 2021, 4:45 p.m. UTC | #1
On Fri, Apr 16, 2021 at 03:01:48PM -0000, tip-bot2 for Kan Liang wrote:
> @@ -2331,6 +2367,9 @@ static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *m
>  	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
>  		return;
>  
> +	if (x86_pmu.sched_task && event->hw.target)
> +		perf_sched_cb_dec(event->ctx->pmu);
> +
>  	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
>  		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
>  }

'perf test' on a kernel with CONFIG_DEBUG_PREEMPT=y gives:

[  244.439538] BUG: using smp_processor_id() in preemptible [00000000] code: perf/1771
[  244.448144] caller is perf_sched_cb_dec+0xa/0x70
[  244.453314] CPU: 28 PID: 1771 Comm: perf Not tainted 5.12.0-rc3-00026-gb04c0cddff6d #595
[  244.462347] Hardware name: Intel Corporation S2600GZ/S2600GZ, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[  244.473804] Call Trace:
[  244.476535]  dump_stack+0x6d/0x89
[  244.480237]  check_preemption_disabled+0xc8/0xd0
[  244.485394]  perf_sched_cb_dec+0xa/0x70
[  244.489677]  x86_pmu_event_unmapped+0x35/0x60
[  244.494541]  perf_mmap_close+0x76/0x580
[  244.498833]  remove_vma+0x31/0x70
[  244.502535]  __do_munmap+0x2e8/0x4e0
[  244.506531]  __vm_munmap+0x7e/0x120
[  244.510429]  __x64_sys_munmap+0x28/0x30
[  244.514712]  do_syscall_64+0x33/0x80
[  244.518701]  entry_SYSCALL_64_after_hwframe+0x44/0xae


Obviously I tested that after I pushed it out :/
Liang, Kan April 16, 2021, 7:50 p.m. UTC | #2
On 4/16/2021 12:45 PM, Peter Zijlstra wrote:
> On Fri, Apr 16, 2021 at 03:01:48PM -0000, tip-bot2 for Kan Liang wrote:
>> @@ -2331,6 +2367,9 @@ static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *m
>>   	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
>>   		return;
>>   
>> +	if (x86_pmu.sched_task && event->hw.target)
>> +		perf_sched_cb_dec(event->ctx->pmu);
>> +
>>   	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
>>   		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
>>   }
> 
> 'perf test' on a kernel with CONFIG_DEBUG_PREEMPT=y gives:
> 
> [  244.439538] BUG: using smp_processor_id() in preemptible [00000000] code: perf/1771

If it's a preemptible env, I think we should disable the interrupts and 
preemption to protect the sched_cb_list.

Seems we don't need perf_ctx_lock() here. I don't think we touch the 
area in NMI. I think disabling the interrupts should be good enough to 
protect the cpuctx.

How about the below patch?

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index e34eb72..45630beed 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2333,6 +2333,8 @@ static void x86_pmu_clear_dirty_counters(void)

  static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
  {
+	unsigned long flags;
+
  	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
  		return;

@@ -2341,8 +2343,10 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
  	 * and clear the existing dirty counters.
  	 */
  	if (x86_pmu.sched_task && event->hw.target) {
+		local_irq_save(flags);
  		perf_sched_cb_inc(event->ctx->pmu);
  		x86_pmu_clear_dirty_counters();
+		local_irq_restore(flags);
  	}

  	/*
@@ -2363,12 +2367,16 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)

  static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
  {
+	unsigned long flags;

  	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
  		return;

-	if (x86_pmu.sched_task && event->hw.target)
+	if (x86_pmu.sched_task && event->hw.target) {
+		local_irq_save(flags);
  		perf_sched_cb_dec(event->ctx->pmu);
+		local_irq_restore(flags);
+	}

  	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
  		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);

Thanks,
Kan

Patch

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index dd9f3c2..e34eb72 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1585,6 +1585,8 @@  static void x86_pmu_del(struct perf_event *event, int flags)
 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
 		goto do_del;
 
+	__set_bit(event->hw.idx, cpuc->dirty);
+
 	/*
 	 * Not a TXN, therefore cleanup properly.
 	 */
@@ -2304,12 +2306,46 @@  static int x86_pmu_event_init(struct perf_event *event)
 	return err;
 }
 
+static void x86_pmu_clear_dirty_counters(void)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int i;
+
+	 /* Don't need to clear the assigned counter. */
+	for (i = 0; i < cpuc->n_events; i++)
+		__clear_bit(cpuc->assign[i], cpuc->dirty);
+
+	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
+		return;
+
+	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
+		/* Metrics and fake events don't have corresponding HW counters. */
+		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
+			continue;
+		else if (i >= INTEL_PMC_IDX_FIXED)
+			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
+		else
+			wrmsrl(x86_pmu_event_addr(i), 0);
+	}
+
+	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
+}
+
 static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
 
 	/*
+	 * Enable sched_task() for the RDPMC task,
+	 * and clear the existing dirty counters.
+	 */
+	if (x86_pmu.sched_task && event->hw.target) {
+		perf_sched_cb_inc(event->ctx->pmu);
+		x86_pmu_clear_dirty_counters();
+	}
+
+	/*
 	 * This function relies on not being called concurrently in two
 	 * tasks in the same mm.  Otherwise one task could observe
 	 * perf_rdpmc_allowed > 1 and return all the way back to
@@ -2331,6 +2367,9 @@  static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *m
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
 
+	if (x86_pmu.sched_task && event->hw.target)
+		perf_sched_cb_dec(event->ctx->pmu);
+
 	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
 		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
 }
@@ -2436,6 +2475,14 @@  static const struct attribute_group *x86_pmu_attr_groups[] = {
 static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
 {
 	static_call_cond(x86_pmu_sched_task)(ctx, sched_in);
+
+	/*
+	 * If a new task has the RDPMC enabled, clear the dirty counters
+	 * to prevent the potential leak.
+	 */
+	if (sched_in && ctx && READ_ONCE(x86_pmu.attr_rdpmc) &&
+	    current->mm && atomic_read(&current->mm->context.perf_rdpmc_allowed))
+		x86_pmu_clear_dirty_counters();
 }
 
 static void x86_pmu_swap_task_ctx(struct perf_event_context *prev,
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 54a340e..e855f20 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -228,6 +228,7 @@  struct cpu_hw_events {
 	 */
 	struct perf_event	*events[X86_PMC_IDX_MAX]; /* in counter order */
 	unsigned long		active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+	unsigned long		dirty[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
 	int			enabled;
 
 	int			n_events; /* the # of events in the below arrays */