From: "Liang, Kan" <kan.liang@linux.intel.com>
To: peterz@infradead.org
Cc: mingo@redhat.com, acme@kernel.org, linux-kernel@vger.kernel.org,
	ak@linux.intel.com, mark.rutland@arm.com, luto@amacapital.net
Subject: Re: [PATCH V2 3/3] perf/x86: Reset the dirty counter to prevent the leak for an RDPMC task
Date: Wed, 9 Sep 2020 09:24:47 -0400
Message-ID: <8157c3b0-e22c-fff1-0a3c-b2ad57a7abcb@linux.intel.com>
In-Reply-To: <20200907160115.GS2674@hirez.programming.kicks-ass.net>



On 9/8/2020 11:58 AM, peterz@infradead.org wrote:
 > On Mon, Sep 07, 2020 at 06:01:15PM +0200, peterz@infradead.org wrote:
 >> On Fri, Aug 21, 2020 at 12:57:54PM -0700, kan.liang@linux.intel.com wrote:
 >>> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
 >>> index 0f3d01562ded..fa08d810dcd2 100644
 >>> --- a/arch/x86/events/core.c
 >>> +++ b/arch/x86/events/core.c
 >>> @@ -1440,7 +1440,10 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 >>>
 >>>   	cpuc->events[idx] = event;
 >>>   	__set_bit(idx, cpuc->active_mask);
 >>> -	__set_bit(idx, cpuc->running);
 >>> +	/* The cpuc->running is only used by the P4 PMU */
 >>> +	if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON) &&
 >>> +	    (boot_cpu_data.x86 == 0xf))
 >>> +		__set_bit(idx, cpuc->running);
 >>>   	x86_pmu.enable(event);
 >>>   	perf_event_update_userpage(event);
 >>>   }
 >>
 >> Yuck! Use a static_branch() or something. This is a gnarly nest of
 >> code that runs 99.9% of the time to conclude a negative. IOW a
 >> complete waste of cycles for everybody not running a P4 space heater.
 >
 > Better still, move it into p4_pmu_enable_event().
 >

Sure, I will move the "cpuc->running" into P4 specific code.

On 9/7/2020 12:01 PM, peterz@infradead.org wrote:
>> @@ -1544,6 +1547,9 @@ static void x86_pmu_del(struct perf_event *event, int flags)
>>   	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
>>   		goto do_del;
>>   
>> +	if (READ_ONCE(x86_pmu.attr_rdpmc) && x86_pmu.sched_task &&
>> +	    test_bit(event->hw.idx, cpuc->active_mask))
>> +		__set_bit(event->hw.idx, cpuc->dirty);
> 
> And that too seems like an overly complicated set of tests and branches.
> This should be effectively true for the 99% common case.

I made a mistake in V2 regarding whether the P4 PMU supports RDPMC. The
SDM explicitly documents that the RDPMC instruction was introduced with
the P6. I once thought the P4 was older than the P6, but I was wrong:
the P4 uses the NetBurst microarchitecture, which is the successor to
the P6, so the P4 also supports the RDPMC instruction.

We should not share memory space between the 'dirty' and the 'running'
variables. I will modify it in V3.
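
I.e. 'dirty' gets its own bitmap in struct cpu_hw_events next to
'active_mask', roughly (untested):

	unsigned long	active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
	unsigned long	dirty[BITS_TO_LONGS(X86_PMC_IDX_MAX)];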

I will also unconditionally set cpuc->dirty here. The worst case for an
RDPMC task is that we may have to clear all counters the first time in
x86_pmu_event_mapped(). After that, sched_task() will clear/update the
'dirty' bitmap, so only the truly dirty counters are cleared. For a
non-RDPMC task, unconditionally setting cpuc->dirty is harmless.
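
So the x86_pmu_del() hunk above would shrink to a single unconditional
line, roughly:

	/*
	 * Harmless for a non-RDPMC task; sched_task() narrows the set
	 * of dirty counters down again on the next context switch.
	 */
	__set_bit(event->hw.idx, cpuc->dirty);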


>>   static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
>>   {
>>   	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
>>   		return;
>>   
>> +	/*
>> +	 * Enable sched_task() for the RDPMC task,
>> +	 * and clear the existing dirty counters.
>> +	 */
>> +	if (x86_pmu.sched_task && event->hw.target && !is_sampling_event(event)) {
>> +		perf_sched_cb_inc(event->ctx->pmu);
>> +		x86_pmu_clear_dirty_counters();
>> +	}
> 
> I'm failing to see the correctness of the !is_sampling_event() part
> there.

It looks like an RDPMC task can do sampling as well? I once thought it
only did counting. I will fix it in V3.
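
So in V3 the condition in x86_pmu_event_mapped() will simply drop the
sampling check, roughly:

	/* Enable sched_task() for any RDPMC task, counting or sampling. */
	if (x86_pmu.sched_task && event->hw.target) {
		perf_sched_cb_inc(event->ctx->pmu);
		x86_pmu_clear_dirty_counters();
	}

with the matching perf_sched_cb_dec() condition in
x86_pmu_event_unmapped() updated the same way.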


> 
>>   	/*
>>   	 * This function relies on not being called concurrently in two
>>   	 * tasks in the same mm.  Otherwise one task could observe
>> @@ -2246,6 +2286,9 @@ static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
>>   	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
>>   		return;
>>   
>> +	if (x86_pmu.sched_task && event->hw.target && !is_sampling_event(event))
>> +		perf_sched_cb_dec(event->ctx->pmu);
>> +
> 
> Idem.
> 
>>   	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
>>   		on_each_cpu_mask(mm_cpumask(mm), cr4_update_pce, NULL, 1);
>>   }
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index c72e4904e056..e67713bfa33a 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -4166,11 +4166,39 @@ static void intel_pmu_cpu_dead(int cpu)
>>   	intel_cpuc_finish(&per_cpu(cpu_hw_events, cpu));
>>   }
>>   
>> +static void intel_pmu_rdpmc_sched_task(struct perf_event_context *ctx)
>> +{
>> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> +	struct perf_event *event;
>> +
>> +	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
>> +		return;
>> +
>> +	/*
>> +	 * If the new task has the RDPMC enabled, clear the dirty counters to
>> +	 * prevent the potential leak. If the new task doesn't have the RDPMC
>> +	 * enabled, do nothing.
>> +	 */
>> +	list_for_each_entry(event, &ctx->event_list, event_entry) {
>> +		if (event->hw.target &&
>> +		    (event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED) &&
>> +		    !is_sampling_event(event) &&
>> +		    atomic_read(&event->mmap_count))
>> +			break;
>> +	}
>> +	if (&event->event_entry == &ctx->event_list)
>> +		return;
> 
> That's horrific, what's wrong with something like:
> 
> 	if (!atomic_read(&current->mm->context.perf_rdpmc_allowed))
> 		return;
>

After removing the !is_sampling_event() part, I think we can use this
(plus a check that current->mm is non-NULL, since we may be switching
to a kernel thread). I will use it in V3.

>> +
>> +	x86_pmu_clear_dirty_counters();
>> +}
> 
> How is this Intel specific code? IIRC AMD has RDPMC too.
> 

Sure, I will move the code into the generic x86 code.
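
A rough, untested sketch of what I have in mind: extend the existing
x86_pmu_sched_task() wrapper in arch/x86/events/core.c, so Intel and
AMD share the same path (exact placement may still change in V3):

	static void x86_pmu_sched_task(struct perf_event_context *ctx,
				       bool sched_in)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

		if (x86_pmu.sched_task)
			x86_pmu.sched_task(ctx, sched_in);

		if (!sched_in || bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
			return;

		/*
		 * current->mm is NULL for kernel threads; only clear
		 * the counters when the incoming task actually uses
		 * RDPMC.
		 */
		if (current->mm &&
		    atomic_read(&current->mm->context.perf_rdpmc_allowed))
			x86_pmu_clear_dirty_counters();
	}

intel_pmu_rdpmc_sched_task() then goes away.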

Thanks,
Kan
