From: Peter Zijlstra <peterz@infradead.org>
To: Reinette Chatre <reinette.chatre@intel.com>
Cc: tglx@linutronix.de, fenghua.yu@intel.com, tony.luck@intel.com,
	mingo@redhat.com, acme@kernel.org,
	vikas.shivappa@linux.intel.com, gavin.hindman@intel.com,
	jithu.joseph@intel.com, dave.hansen@intel.com, hpa@zytor.com,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 5/6] x86/intel_rdt: Use perf infrastructure for measurements
Date: Thu, 6 Sep 2018 16:15:24 +0200
Message-ID: <20180906141524.GF24106@hirez.programming.kicks-ass.net>
In-Reply-To: <30b32ebd826023ab88f3ab3122e4c414ea532722.1534450299.git.reinette.chatre@intel.com>

On Thu, Aug 16, 2018 at 01:16:08PM -0700, Reinette Chatre wrote:
> +	l2_miss_event = perf_event_create_kernel_counter(&perf_miss_attr,
> +							 plr->cpu,
> +							 NULL, NULL, NULL);
> +	if (IS_ERR(l2_miss_event))
> +		goto out;
> +
> +	l2_hit_event = perf_event_create_kernel_counter(&perf_hit_attr,
> +							plr->cpu,
> +							NULL, NULL, NULL);
> +	if (IS_ERR(l2_hit_event))
> +		goto out_l2_miss;
> +
> +	local_irq_disable();
> +	/*
> +	 * Check the error state of the events by performing one local
> +	 * read of each.
> +	 */
> +	if (perf_event_read_local(l2_miss_event, &tmp, NULL, NULL)) {
> +		local_irq_enable();
> +		goto out_l2_hit;
> +	}
> +	if (perf_event_read_local(l2_hit_event, &tmp, NULL, NULL)) {
> +		local_irq_enable();
> +		goto out_l2_hit;
> +	}
> +
> +	/*
> +	 * Disable hardware prefetchers.
>  	 *
> +	 * Call wrmsr directly to prevent the local register variables
> +	 * from being overwritten due to reordering of their assignment
> +	 * with the wrmsr calls.
> +	 */
> +	__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
> +
> +	/*
> +	 * Initialize rest of local variables.
> +	 *
> +	 * The performance events were validated right before this with
> +	 * interrupts disabled - it is thus safe to read the counter index.
> +	 */
> +	l2_miss_pmcnum = x86_perf_rdpmc_ctr_get(l2_miss_event);
> +	l2_hit_pmcnum = x86_perf_rdpmc_ctr_get(l2_hit_event);
> +	line_size = plr->line_size;
> +	mem_r = plr->kmem;
> +	size = plr->size;
> +
> +	/*
> +	 * Read counter variables twice - first to load the instructions
> +	 * used into the L1 cache, second to capture an accurate value that
> +	 * does not include cache misses incurred by the instruction loads.
> +	 */
> +	rdpmcl(l2_hit_pmcnum, l2_hits_before);
> +	rdpmcl(l2_miss_pmcnum, l2_miss_before);
> +	/*
> +	 * From SDM: Performing back-to-back fast reads is not guaranteed
> +	 * to be monotonic. To guarantee monotonicity on back-to-back
> +	 * reads, a serializing instruction must be placed between the
> +	 * two RDPMC instructions.
> +	 */
> +	rmb();
> +	rdpmcl(l2_hit_pmcnum, l2_hits_before);
> +	rdpmcl(l2_miss_pmcnum, l2_miss_before);
> +	/*
> +	 * rdpmc is not a serializing instruction. Add barrier to prevent
> +	 * instructions that follow from executing before the counter
> +	 * value has been read.
> +	 */
> +	rmb();
> +	for (i = 0; i < size; i += line_size) {
> +		/*
> +		 * Add a barrier to prevent speculative execution of this
> +		 * loop from reading beyond the end of the buffer.
> +		 */
> +		rmb();
> +		asm volatile("mov (%0,%1,1), %%eax\n\t"
> +			     :
> +			     : "r" (mem_r), "r" (i)
> +			     : "%eax", "memory");
> +	}
> +	rdpmcl(l2_hit_pmcnum, l2_hits_after);
> +	rdpmcl(l2_miss_pmcnum, l2_miss_after);
> +	/*
> +	 * rdpmc is not a serializing instruction. Add barrier to ensure
> +	 * the measured events have completed and to prevent instructions
> +	 * that follow from executing before the counter value is read.
> +	 */
> +	rmb();
> +	/* Re-enable hardware prefetchers */
> +	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
> +	local_irq_enable();
> +	trace_pseudo_lock_l2(l2_hits_after - l2_hits_before,
> +			     l2_miss_after - l2_miss_before);
> +
> +out_l2_hit:
> +	perf_event_release_kernel(l2_hit_event);
> +out_l2_miss:
> +	perf_event_release_kernel(l2_miss_event);
> +out:
> +	plr->thread_done = 1;
> +	wake_up_interruptible(&plr->lock_thread_wq);
> +	return 0;
> +}
> +

The above looks a _LOT_ like the below. And while C does suck a little,
I'm sure there's something we can do about this.
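
For illustration, a minimal sketch of what the shared part could look
like (helper name, signature and exact carving are made up here, and it
is untested), leaving event creation and the trace/fixup policy to the
two callers:

static int pseudo_lock_measure_residency(struct pseudo_lock_region *plr,
					 struct perf_event *hit_event,
					 struct perf_event *miss_event,
					 u64 *hits, u64 *miss)
{
	u64 hits_before = 0, hits_after = 0;
	u64 miss_before = 0, miss_after = 0;
	unsigned int line_size, size, i;
	int hit_pmcnum, miss_pmcnum;
	void *mem_r;
	u64 tmp;

	local_irq_disable();
	/* One local read of each event to check its error state. */
	if (perf_event_read_local(miss_event, &tmp, NULL, NULL) ||
	    perf_event_read_local(hit_event, &tmp, NULL, NULL)) {
		local_irq_enable();
		return -EIO;
	}

	/* Disable hardware prefetchers. */
	__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);

	/* Events were just validated with interrupts disabled. */
	hit_pmcnum = x86_perf_rdpmc_ctr_get(hit_event);
	miss_pmcnum = x86_perf_rdpmc_ctr_get(miss_event);
	line_size = plr->line_size;
	mem_r = plr->kmem;
	size = plr->size;

	/* Read twice: first load warms the I-cache, second one counts. */
	rdpmcl(hit_pmcnum, hits_before);
	rdpmcl(miss_pmcnum, miss_before);
	rmb();
	rdpmcl(hit_pmcnum, hits_before);
	rdpmcl(miss_pmcnum, miss_before);
	rmb();
	for (i = 0; i < size; i += line_size) {
		/* Prevent speculation past the end of the buffer. */
		rmb();
		asm volatile("mov (%0,%1,1), %%eax\n\t"
			     :
			     : "r" (mem_r), "r" (i)
			     : "%eax", "memory");
	}
	rdpmcl(hit_pmcnum, hits_after);
	rdpmcl(miss_pmcnum, miss_after);
	rmb();

	/* Re-enable hardware prefetchers. */
	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
	local_irq_enable();

	*hits = hits_after - hits_before;
	*miss = miss_after - miss_before;
	return 0;
}

The L2 caller then creates its two events, calls this, and traces the
deltas; the L3 caller does the same plus the BDW fixup. Whether the
extra pointer dereferences and the call itself perturb the measured
loop would of course need checking.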

> +	l3_miss_event = perf_event_create_kernel_counter(&perf_miss_attr,
> +							 plr->cpu,
> +							 NULL, NULL,
> +							 NULL);
> +	if (IS_ERR(l3_miss_event))
> +		goto out;
> +
> +	l3_hit_event = perf_event_create_kernel_counter(&perf_hit_attr,
> +							plr->cpu,
> +							NULL, NULL,
> +							NULL);
> +	if (IS_ERR(l3_hit_event))
> +		goto out_l3_miss;
> +
>  	local_irq_disable();
>  	/*
> +	 * Check the error state of the events by performing one local
> +	 * read of each.
> +	 */
> +	if (perf_event_read_local(l3_miss_event, &tmp, NULL, NULL)) {
> +		local_irq_enable();
> +		goto out_l3_hit;
> +	}
> +	if (perf_event_read_local(l3_hit_event, &tmp, NULL, NULL)) {
> +		local_irq_enable();
> +		goto out_l3_hit;
> +	}
> +
> +	/*
> +	 * Disable hardware prefetchers.
> +	 *
>  	 * Call wrmsr directly to prevent the local register variables
>  	 * from being overwritten due to reordering of their assignment
>  	 * with the wrmsr calls.
>  	 */
>  	__wrmsr(MSR_MISC_FEATURE_CONTROL, prefetch_disable_bits, 0x0);
> +
> +	/*
> +	 * Initialize rest of local variables.
> +	 *
> +	 * The performance events were validated right before this with
> +	 * interrupts disabled - it is thus safe to read the counter index.
> +	 */
> +	l3_hit_pmcnum = x86_perf_rdpmc_ctr_get(l3_hit_event);
> +	l3_miss_pmcnum = x86_perf_rdpmc_ctr_get(l3_miss_event);
> +	line_size = plr->line_size;
>  	mem_r = plr->kmem;
>  	size = plr->size;
> +
> +	/*
> +	 * Read counter variables twice - first to load the instructions
> +	 * used into the L1 cache, second to capture an accurate value that
> +	 * does not include cache misses incurred by the instruction loads.
> +	 */
> +	rdpmcl(l3_hit_pmcnum, l3_hits_before);
> +	rdpmcl(l3_miss_pmcnum, l3_miss_before);
> +	/*
> +	 * From SDM: Performing back-to-back fast reads is not guaranteed
> +	 * to be monotonic. To guarantee monotonicity on back-to-back
> +	 * reads, a serializing instruction must be placed between the
> +	 * two RDPMC instructions.
> +	 */
> +	rmb();
> +	rdpmcl(l3_hit_pmcnum, l3_hits_before);
> +	rdpmcl(l3_miss_pmcnum, l3_miss_before);
> +	/*
> +	 * rdpmc is not a serializing instruction. Add barrier to prevent
> +	 * instructions that follow from executing before the counter
> +	 * value has been read.
> +	 */
> +	rmb();
>  	for (i = 0; i < size; i += line_size) {
> +		/*
> +		 * Add a barrier to prevent speculative execution of this
> +		 * loop from reading beyond the end of the buffer.
> +		 */
> +		rmb();
>  		asm volatile("mov (%0,%1,1), %%eax\n\t"
>  			     :
>  			     : "r" (mem_r), "r" (i)
>  			     : "%eax", "memory");
>  	}
> +	rdpmcl(l3_hit_pmcnum, l3_hits_after);
> +	rdpmcl(l3_miss_pmcnum, l3_miss_after);
>  	/*
> +	 * rdpmc is not a serializing instruction. Add barrier to ensure
> +	 * the measured events have completed and to prevent instructions
> +	 * that follow from executing before the counter value is read.
>  	 */
> +	rmb();
> +	/* Re-enable hardware prefetchers */
>  	wrmsr(MSR_MISC_FEATURE_CONTROL, 0x0, 0x0);
>  	local_irq_enable();
> +	l3_miss_after -= l3_miss_before;
> +	if (boot_cpu_data.x86_model == INTEL_FAM6_BROADWELL_X) {
> +		/*
> +		 * On BDW references and misses are counted; adjust to
> +		 * derive hits. Sometimes the misses counter reads a bit
> +		 * more than the references, for example, x references
> +		 * but x + 1 misses. To avoid reporting an invalid hit
> +		 * count in that case we treat the misses as equal to
> +		 * the references.
> +		 */
> +		/* First compute the number of cache references measured */
> +		l3_hits_after -= l3_hits_before;
> +		/* Next convert references to cache hits */
> +		l3_hits_after -= l3_miss_after > l3_hits_after ?
> +					l3_hits_after : l3_miss_after;
> +	} else {
> +		l3_hits_after -= l3_hits_before;
>  	}
> +	trace_pseudo_lock_l3(l3_hits_after, l3_miss_after);
>  
> +out_l3_hit:
> +	perf_event_release_kernel(l3_hit_event);
> +out_l3_miss:
> +	perf_event_release_kernel(l3_miss_event);
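
As that fixup reads: with 100 references and 30 misses it reports 70
hits, and with 30 references and 31 misses it reports 0 hits rather
than an underflowed value.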
