From: Don Zickus <dzickus@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: dave.hansen@linux.intel.com, eranian@google.com,
	ak@linux.intel.com, jmario@redhat.com,
	linux-kernel@vger.kernel.org, acme@infradead.org,
	mingo@kernel.org
Subject: Re: [PATCH] perf, x86: Optimize intel_pmu_pebs_fixup_ip()
Date: Wed, 16 Oct 2013 08:46:49 -0400
Message-ID: <20131016124649.GG227855@redhat.com>
In-Reply-To: <20131016105755.GX10651@twins.programming.kicks-ass.net>

On Wed, Oct 16, 2013 at 12:57:55PM +0200, Peter Zijlstra wrote:
> A prettier patch below. The main difference is on-demand allocation of
> the scratch buffer.

I'll see if I can sanity-test this in the next couple of hours.

Further testing yesterday showed that intel_pmu_drain_pebs_nhm still
has long latencies somewhere.  With 15-minute reboots, isolating it
goes slowly.

Thanks!

Cheers,
Don

> 
> ---
> Subject: perf, x86: Optimize intel_pmu_pebs_fixup_ip()
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Tue, 15 Oct 2013 12:14:04 +0200
> 
> On Mon, Oct 14, 2013 at 04:35:49PM -0400, Don Zickus wrote:
> > While there are a few places causing latencies, for now I focused on
> > the longest one first.  It seems to be 'copy_from_user_nmi':
> >
> > intel_pmu_handle_irq ->
> > 	intel_pmu_drain_pebs_nhm ->
> > 		__intel_pmu_drain_pebs_nhm ->
> > 			__intel_pmu_pebs_event ->
> > 				intel_pmu_pebs_fixup_ip ->
> > 					copy_from_user_nmi
> >
> > In intel_pmu_pebs_fixup_ip(), if the while-loop iterates more than 50
> > times, the sum of all the copy_from_user_nmi() latencies seems to go
> > over 1,000,000 cycles (there are some cases where only 10 iterations
> > are needed to get that high too, but in general it takes 50 or so).  At
> > that point copy_from_user_nmi() seems to account for over 90% of the
> > NMI latency.
> 
> So avoid having to call copy_from_user_nmi() for every instruction.
> Since we already limit the max basic block size, we can easily
> pre-allocate a piece of memory to copy the entire thing into in one
> go.
> 
> Don reports (for a previous version):
> > Your patch made a huge improvement.  The copy_from_user_nmi() path
> > no longer hits a million cycles.  I still see a batch of
> > 100,000-300,000 cycle events.  My longest NMI paths used to be
> > dominated by copy_from_user_nmi(); now they are not (I have to dig
> > up the new hot path).
> 
> Cc: eranian@google.com
> Cc: ak@linux.intel.com
> Cc: jmario@redhat.com
> Cc: acme@infradead.org
> Cc: dave.hansen@linux.intel.com
> Reported-by: Don Zickus <dzickus@redhat.com>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  arch/x86/kernel/cpu/perf_event_intel_ds.c |   48 +++++++++++++++++++++---------
>  1 file changed, 34 insertions(+), 14 deletions(-)
> 
> --- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
> @@ -12,6 +12,7 @@
>  
>  #define BTS_BUFFER_SIZE		(PAGE_SIZE << 4)
>  #define PEBS_BUFFER_SIZE	PAGE_SIZE
> +#define PEBS_FIXUP_SIZE		PAGE_SIZE
>  
>  /*
>   * pebs_record_32 for p4 and core not supported
> @@ -228,12 +229,14 @@ void fini_debug_store_on_cpu(int cpu)
>  	wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0, 0);
>  }
>  
> +static DEFINE_PER_CPU(void *, insn_buffer);
> +
>  static int alloc_pebs_buffer(int cpu)
>  {
>  	struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
>  	int node = cpu_to_node(cpu);
>  	int max, thresh = 1; /* always use a single PEBS record */
> -	void *buffer;
> +	void *buffer, *ibuffer;
>  
>  	if (!x86_pmu.pebs)
>  		return 0;
> @@ -242,6 +245,15 @@ static int alloc_pebs_buffer(int cpu)
>  	if (unlikely(!buffer))
>  		return -ENOMEM;
>  
> +	if (x86_pmu.intel_cap.pebs_format < 2) {
> +		ibuffer = kzalloc_node(PEBS_FIXUP_SIZE, GFP_KERNEL, node);
> +		if (!ibuffer) {
> +			kfree(buffer);
> +			return -ENOMEM;
> +		}
> +		per_cpu(insn_buffer, cpu) = ibuffer;
> +	}
> +
>  	max = PEBS_BUFFER_SIZE / x86_pmu.pebs_record_size;
>  
>  	ds->pebs_buffer_base = (u64)(unsigned long)buffer;
> @@ -262,6 +274,9 @@ static void release_pebs_buffer(int cpu)
>  	if (!ds || !x86_pmu.pebs)
>  		return;
>  
> +	kfree(per_cpu(insn_buffer, cpu));
> +	per_cpu(insn_buffer, cpu) = NULL;
> +
>  	kfree((void *)(unsigned long)ds->pebs_buffer_base);
>  	ds->pebs_buffer_base = 0;
>  }
> @@ -729,6 +744,7 @@ static int intel_pmu_pebs_fixup_ip(struc
>  	unsigned long old_to, to = cpuc->lbr_entries[0].to;
>  	unsigned long ip = regs->ip;
>  	int is_64bit = 0;
> +	void *kaddr;
>  
>  	/*
>  	 * We don't need to fixup if the PEBS assist is fault like
> @@ -752,7 +768,7 @@ static int intel_pmu_pebs_fixup_ip(struc
>  	 * unsigned math, either ip is before the start (impossible) or
>  	 * the basic block is larger than 1 page (sanity)
>  	 */
> -	if ((ip - to) > PAGE_SIZE)
> +	if ((ip - to) > PEBS_FIXUP_SIZE)
>  		return 0;
>  
>  	/*
> @@ -763,29 +779,33 @@ static int intel_pmu_pebs_fixup_ip(struc
>  		return 1;
>  	}
>  
> +	if (!kernel_ip(ip)) {
> +		int size, bytes;
> +		u8 *buf = this_cpu_ptr(insn_buffer);
> +
> +		size = ip - to; /* Must fit our buffer, see above */
> +		bytes = copy_from_user_nmi(buf, (void __user *)to, size);
> +		if (bytes != size)
> +			return 0;
> +
> +		kaddr = buf;
> +	} else {
> +		kaddr = (void *)to;
> +	}
> +
>  	do {
>  		struct insn insn;
> -		u8 buf[MAX_INSN_SIZE];
> -		void *kaddr;
>  
>  		old_to = to;
> -		if (!kernel_ip(ip)) {
> -			int bytes, size = MAX_INSN_SIZE;
> -
> -			bytes = copy_from_user_nmi(buf, (void __user *)to, size);
> -			if (bytes != size)
> -				return 0;
> -
> -			kaddr = buf;
> -		} else
> -			kaddr = (void *)to;
>  
>  #ifdef CONFIG_X86_64
>  		is_64bit = kernel_ip(to) || !test_thread_flag(TIF_IA32);
>  #endif
>  		insn_init(&insn, kaddr, is_64bit);
>  		insn_get_length(&insn);
> +
>  		to += insn.length;
> +		kaddr += insn.length;
>  	} while (to < ip);
>  
>  	if (to == ip) {

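For readers skimming the diff, here is a minimal userspace sketch of the
pattern the patch changes -- not the kernel code itself.  In this sketch,
memcpy() stands in for copy_from_user_nmi(), and decode_len() is a
hypothetical fixed-length decoder standing in for the kernel's
insn_init()/insn_get_length().  The old loop paid one guarded user-memory
copy per decoded instruction; the new code copies the whole basic block
[to, ip) once into a local buffer and decodes from there.

#include <stdio.h>
#include <string.h>

#define FIXUP_SIZE	4096	/* mirrors PEBS_FIXUP_SIZE: one page */
#define MAX_INSN_SIZE	16	/* as in the kernel's asm/insn.h */

/*
 * Hypothetical stand-in for insn_init()/insn_get_length(): pretend
 * every instruction is 4 bytes so the sketch is self-contained.
 */
static int decode_len(const unsigned char *kaddr)
{
	(void)kaddr;
	return 4;
}

/* Old pattern: one copy from user memory per decoded instruction. */
static int fixup_old(unsigned long to, unsigned long ip,
		     const unsigned char *user)
{
	unsigned char buf[MAX_INSN_SIZE];

	while (to < ip) {
		/* one copy_from_user_nmi() per iteration -- the hot spot */
		memcpy(buf, user + to, MAX_INSN_SIZE);
		to += decode_len(buf);
	}
	return to == ip;
}

/* New pattern: one bulk copy up front, then decode from the local buffer. */
static int fixup_new(unsigned long to, unsigned long ip,
		     const unsigned char *user)
{
	static unsigned char ibuf[FIXUP_SIZE];	/* per-CPU in the kernel */
	const unsigned char *kaddr = ibuf;
	unsigned long size = ip - to;
	int len;

	if (size > FIXUP_SIZE)
		return 0;	/* basic block larger than the buffer */

	memcpy(ibuf, user + to, size);	/* the single bulk copy */

	while (to < ip) {
		len = decode_len(kaddr);
		to += len;
		kaddr += len;
	}
	return to == ip;
}

int main(void)
{
	static unsigned char user[FIXUP_SIZE + MAX_INSN_SIZE];

	/* 64 bytes of "instructions": 16 copies before, 1 copy after */
	printf("old ok=%d new ok=%d\n",
	       fixup_old(0, 64, user),
	       fixup_new(0, 64, user));
	return 0;
}

The win is purely in the number of user-memory round trips: the copy
count drops from one per decoded instruction to exactly one, bounded by
PEBS_FIXUP_SIZE (one page), which matches the existing sanity check that
the basic block fits in a page.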