From: Peter Zijlstra <peterz@infradead.org>
To: kan.liang@linux.intel.com
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org,
	ak@linux.intel.com, acme@kernel.org, mark.rutland@arm.com,
	luto@amacapital.net, eranian@google.com, namhyung@kernel.org
Subject: Re: [PATCH V5] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
Date: Wed, 21 Apr 2021 10:11:03 +0200
Message-ID: <YH/eF4YWg73Lkcrr@hirez.programming.kicks-ass.net>
In-Reply-To: <1618957842-103858-1-git-send-email-kan.liang@linux.intel.com>

On Tue, Apr 20, 2021 at 03:30:42PM -0700, kan.liang@linux.intel.com wrote:
>  static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
>  {
> +	unsigned long flags;
> +
>  	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
>  		return;
>  
>  	/*
> +	 * Enable sched_task() for the RDPMC task,
> +	 * and clear the existing dirty counters.
> +	 */
> +	if (x86_pmu.sched_task && event->hw.target) {
> +		local_irq_save(flags);
> +		perf_sched_cb_inc(event->ctx->pmu);
> +		x86_pmu_clear_dirty_counters();
> +		local_irq_restore(flags);
> +	}
> +
> +	/*
>  	 * This function relies on not being called concurrently in two
>  	 * tasks in the same mm.  Otherwise one task could observe
>  	 * perf_rdpmc_allowed > 1 and return all the way back to
> @@ -2327,10 +2367,17 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
>  
>  static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
>  {
> +	unsigned long flags;
>  
>  	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
>  		return;
>  
> +	if (x86_pmu.sched_task && event->hw.target) {
> +		local_irq_save(flags);
> +		perf_sched_cb_dec(event->ctx->pmu);
> +		local_irq_restore(flags);
> +	}
> +
>  	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
>  		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
>  }

I don't understand how this can possibly be correct. Both
perf_sched_cb_{inc,dec} modify strict per-cpu state, but the mapped
functions happen on whatever random CPU of the moment whenever the task
memory map changes.
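
For reference, both helpers are built entirely on this_cpu_*() state;
roughly (paraphrased from memory of kernel/events/core.c around this
time; a sketch, not an exact quote):

	void perf_sched_cb_inc(struct pmu *pmu)
	{
		struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);

		this_cpu_inc(perf_sched_cb_usages);

		if (!cpuctx->sched_cb_usage++)
			list_add(&cpuctx->sched_cb_entry, this_cpu_ptr(&sched_cb_list));
	}

	void perf_sched_cb_dec(struct pmu *pmu)
	{
		struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);

		this_cpu_dec(perf_sched_cb_usages);

		if (!--cpuctx->sched_cb_usage)
			list_del(&cpuctx->sched_cb_entry);
	}

Everything there is this_cpu_ptr()/this_cpu_inc() on whichever CPU the
call happens to run on, so an inc on one CPU and a dec on another do
not pair up.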

Suppose we mmap() on CPU0 and then exit on CPU1. Or suppose the task
does mmap() on CPU0 but then creates threads and runs on CPU1-4
concurrently before exiting on CPU5.
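
A toy user-space model of that second case (not kernel code, just to
make the imbalance concrete):

	/* Toy model: per-CPU refcount touched on whatever CPU we happen to run on. */
	#include <stdio.h>

	#define NR_CPUS 8

	static int sched_cb_usage[NR_CPUS];

	int main(void)
	{
		sched_cb_usage[0]++;	/* event_mapped() happens to run on CPU0 */
		sched_cb_usage[5]--;	/* event_unmapped() happens to run on CPU5 */

		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			if (sched_cb_usage[cpu])
				printf("cpu%d: %d\n", cpu, sched_cb_usage[cpu]);
		return 0;
	}

This prints cpu0: 1 and cpu5: -1: CPU0 is left with sched_task()
enabled forever, CPU5 underflows, and CPU1-4, where the threads
actually ran and could leak dirty counters, never see the callback at
all.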

Could be I'm not seeing it due to having a snot-brain, please explain.

Thread overview: 5+ messages
2021-04-20 22:30 [PATCH V5] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task kan.liang
2021-04-21  8:11 ` Peter Zijlstra [this message]
2021-04-21 15:12   ` Liang, Kan
2021-04-22 17:52     ` Rob Herring
2021-04-22 18:47       ` Liang, Kan
