linux-kernel.vger.kernel.org archive mirror
From: peterz@infradead.org
To: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: mingo@redhat.com, acme@kernel.org, linux-kernel@vger.kernel.org,
	ak@linux.intel.com, Mark Rutland <mark.rutland@arm.com>,
	Andy Lutomirski <luto@amacapital.net>
Subject: Re: [PATCH] perf/x86: Reset the counter to prevent the leak for a RDPMC task
Date: Thu, 30 Jul 2020 18:44:25 +0200	[thread overview]
Message-ID: <20200730164425.GO2655@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <cd65635b-d226-3089-cb4a-8f60ae408db5@linux.intel.com>

On Thu, Jul 30, 2020 at 11:54:35AM -0400, Liang, Kan wrote:
> On 7/30/2020 8:58 AM, peterz@infradead.org wrote:
> > On Thu, Jul 30, 2020 at 05:38:15AM -0700, kan.liang@linux.intel.com wrote:
> > > From: Kan Liang <kan.liang@linux.intel.com>
> > > 
> > > The counter value of a perf task may leak to another RDPMC task.
> > 
> > Sure, but nowhere did you explain why that is a problem.
> > 
> > > The RDPMC instruction is only available for the X86 platform. Only apply
> > > the fix for the X86 platform.
> > 
> > ARM64 can also do it, although I'm not sure what the current state of
> > things is here.
> > 
> > > After applying the patch,
> > > 
> > >      $ taskset -c 0 ./rdpmc_read_all_counters
> > >      index 0x0 value 0x0
> > >      index 0x1 value 0x0
> > >      index 0x2 value 0x0
> > >      index 0x3 value 0x0
> > > 
> > >      index 0x0 value 0x0
> > >      index 0x1 value 0x0
> > >      index 0x2 value 0x0
> > >      index 0x3 value 0x0
> > 
> > You forgot about:
> > 
> >   - telling us why it's a problem,
> 
> A non-privileged RDPMC user can read counter values left over from other
> perf users. It is a security issue. I will add it in the next version.

You don't know what it counted and you don't know the offset, what can
you do with it?

> >   - telling us how badly it affects performance.
> 
> I once did a performance test on an HSX machine. There is no notable
> slowdown with the patch. I will add the performance data in the next version.

It's still up to [4..8]+[3,4] (general-purpose plus fixed counters) extra WRMSRs per context switch, that's pretty naff.

> > I would feel much better if we only did this on context switches to
> > tasks that have RDPMC enabled.
> 
> AFAIK, at least for x86, we can only enable/disable RDPMC globally.
> How can we know whether a specific task has RDPMC enabled or disabled?

It has mm->context.perf_rdpmc_allowed non-zero; go read x86_pmu_event_{,un}mapped().
Without that, CR4.PCE is 0 and RDPMC won't work, which covers most
actual tasks.

Arguably we should have perf_mmap_open() check if 'event->hw.target ==
current', because without that RDPMC is still pointless.

> > So on del() mark the counter dirty (if we don't already have state that
> > implies this), but don't WRMSR. And then on
> > __perf_event_task_sched_in(), _after_ programming the new tasks'
> > counters, check for inactive dirty counters and wipe those -- IFF RDPMC
> > is on for that task.
> > 
> 
> The generic code doesn't have the counters' information. It looks like we
> need to add a new callback to clean up the dirty counters, as below.
> 
> In the specific implementation of pmu_cleanup(), we can check and wipe all
> inactive dirty counters.

What about pmu::sched_task(), can't we rejig that a little?

The way I'm reading it now, it's like we iterate the task context for
calling perf_event_context_sched_*(), and then iterate a cpuctx list to
find cpuctx->task_ctx, which would be the exact same contexts we've just
iterated.

So can't we pull the pmu::sched_task() call into
perf_event_context_sched_*() ? That would save a round of
pmu_disable/enable() too afaict.


Thread overview: 6+ messages
2020-07-30 12:38 [PATCH] perf/x86: Reset the counter to prevent the leak for a RDPMC task kan.liang
2020-07-30 12:58 ` peterz
2020-07-30 15:54   ` Liang, Kan
2020-07-30 16:44     ` peterz [this message]
2020-07-30 16:50       ` peterz
2020-07-31 18:08       ` Liang, Kan
