* 3.14-rt ARM performance regression?
@ 2015-01-24  2:03 Josh Cartwright
  2015-01-26 22:28 ` Gratian Crisan
  2015-01-28  4:08 ` Steven Rostedt
  0 siblings, 2 replies; 4+ messages in thread
From: Josh Cartwright @ 2015-01-24  2:03 UTC (permalink / raw)
  To: Steven Rostedt, linux-rt-users; +Cc: Thomas Gleixner, Sebastian Andrzej Siewior

Hey folks-

We've recently undertaken an upgrade of our kernel from 3.2-rt to
3.14-rt, and have run into a performance regression on our ARM boards.
We're still in the process of trying to isolate what we can, but
hopefully someone's already run into this and has a solution or might
have some useful debugging ideas.

The first test we did was to run cyclictest[1] for comparison:

   3.2.35-rt52
   # Total: 312028761 312028761 624057522
   # Min Latencies: 00010 00011
   # Avg Latencies: 00018 00020
   # Max Latencies: 00062 00066 00066
   # Histogram Overflows: 00000 00000 00000

   3.14.25-rt22
   # Total: 304735655 304735657 609471312
   # Min Latencies: 00013 00013
   # Avg Latencies: 00023 00024
   # Max Latencies: 00086 00083 00086
   # Histogram Overflows: 00000 00000 00000

As you can see, we're seeing a 30-40% degradation not just in the max
latencies, but also in the minimum and average latencies.  The above numbers
are with the system under a network throughput load (iperf), but changing the
load seems to have little impact (in fact, we see a general slowdown even
when the system is otherwise idle).

The ARM SoC used for testing is the dual core Xilinx Zynq.

We've observed no such degradation on our x86 boards.

Many things have changed in the ARM world between these releases, and
unfortunately bisection is difficult for us.  We were, however, able to give
3.10-rt a try; it shows the same performance degradation.

We suspected something was up with time accounting, as since 3.2, Zynq gained
a clock driver and shifted to using the arm_global_timer driver as its
clocksource.  We've compared register dumps of the clocks, cache, and timers
between kernels, and the hardware appears to be configured the same.
Identical code paths also appear to run slower in 3.14-rt, as observed by the
function tracer and the local ftrace clock; we're looking to better
characterize this.

We did, however, construct a test to validate via an external clock that
clock_nanosleep() sleeps for as long as it claims: toggle a GPIO, sleep for a
short period, toggle again, and verify on a scope that the duration matches.
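
Roughly, the test looks like this (a sketch only, not the exact code we ran;
the GPIO sysfs path is a placeholder for whichever pin is wired to the scope,
and error handling is omitted):

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        /* placeholder path; export the pin and set its direction first */
        int fd = open("/sys/class/gpio/gpio0/value", O_WRONLY);
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 500000 };  /* 500us */

        for (;;) {
                write(fd, "1", 1);  /* rising edge */
                clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);
                write(fd, "0", 1);  /* falling edge; pulse should be ~500us */
                clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);
        }
}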

The toolchain is the same for both kernels (gcc 4.7.2).

We also brought up 3.14-rt on a BeagleBone Black (also ARM) and compared its
performance to a 3.8-rt build (bringing up 3.2-rt would require a bit more
effort).  We observed a ~30% degradation on this platform as well.

If anyone has any ideas, please let us know!  Otherwise, we'll follow up with
anything else we discover.

Thanks!
  Josh

1: cyclictest -H 500 -m -S -i 237 -p 98
   (latency histogram up to 500us, locked memory, one thread per core,
   237us base interval, SCHED_FIFO priority 98)


* Re: 3.14-rt ARM performance regression?
  2015-01-24  2:03 3.14-rt ARM performance regression? Josh Cartwright
@ 2015-01-26 22:28 ` Gratian Crisan
  2015-01-28  4:08 ` Steven Rostedt
  1 sibling, 0 replies; 4+ messages in thread
From: Gratian Crisan @ 2015-01-26 22:28 UTC (permalink / raw)
  To: linux-rt-users
  Cc: Sebastian Andrzej Siewior, Steven Rostedt, Thomas Gleixner,
	Josh Cartwright

To add to Josh's post, I have published some of the data captured during the
investigation at: https://github.com/gratian/tests

More details are available inline below.

linux-rt-users-owner@vger.kernel.org wrote on 01/23/2015 08:03:41 PM:
> Subject: 3.14-rt ARM performance regression?
> 
> Hey folks-
> 
> We've recently undertaken an upgrade of our kernel from 3.2-rt to
> 3.14-rt, and have run into a performance regression on our ARM boards.
> We're still in the process of trying to isolate what we can, but
> hopefully someone's already run into this and has a solution or might
> have some useful debugging ideas.
> 
<snip>
> We suspected something was up with time accounting, as since 3.2, Zynq
> gained a clock driver and shifted to using the arm_global_timer driver
> as its clocksource.  We've compared register dumps of the clocks,
> cache, and timers between kernels, and the hardware appears to be
> configured the same.

The register dumps from the 3.2-rt and 3.14-rt kernel runs are available 
at: https://github.com/gratian/tests/tree/master/register-dumps
In order to make sense of them you will need the Xilinx Zynq-7000 technical 
reference manual, available at: 
http://www.xilinx.com/support/documentation/user_guides/ug585-Zynq-7000-TRM.pdf

> Identical code paths also appear to run slower in 3.14-rt, as observed
> by the function tracer and the local ftrace clock; we're looking to
> better characterize this.
> 
> We did, however, construct a test to validate via an external clock
> that clock_nanosleep() sleeps for as long as it claims: toggle a GPIO,
> sleep for a short period, toggle again, and verify on a scope that the
> duration matches.

Test and results available at: 
https://github.com/gratian/tests/tree/master/clock-validation
 
> The toolchain is the same for both kernels (gcc 4.7.2).
> 
> We also brought up 3.14-rt on a BeagleBone Black (also ARM) and
> compared its performance to a 3.8-rt build (bringing up 3.2-rt would
> require a bit more effort).  We observed a ~30% degradation on this
> platform as well.
> 
> If anyone has any ideas, please let us know!  Otherwise, we'll follow
> up with anything else we discover.
> 

One of the investigation paths we took was profiling hrtimer_interrupt().

In order to provide a load, a simple timer stress test was used: 
https://github.com/gratian/tests/blob/master/timer-stress/timer-stress.c
In essence it starts a large number of non-RT threads doing
clock_nanosleep() calls with random intervals of up to 1ms.
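
In spirit (the actual test is at the URL above; this is just a sketch of the
load), each thread does roughly:

#define _POSIX_C_SOURCE 200112L
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static void *stress(void *arg)
{
        unsigned int seed = (unsigned int)(uintptr_t)arg;
        struct timespec ts = { 0, 0 };

        for (;;) {
                ts.tv_nsec = rand_r(&seed) % 1000000;  /* up to 1ms */
                clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);
        }
        return NULL;
}

int main(void)
{
        pthread_t t;
        uintptr_t i;

        for (i = 0; i < 100; i++)  /* "a large number" of non-RT threads */
                pthread_create(&t, NULL, stress, (void *)(i + 1));
        pause();  /* let the stress threads run */
        return 0;
}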

Plotting the CPU cycle counts for hrtimer_interrupt() in 3.14-rt vs. 
3.2-rt appears to show a slowdown of ~12us.
See screenshots under: 
https://github.com/gratian/tests/tree/master/hrtimer_interrupt-profiling

Digging deeper, the worst offender when the max is reached seems to be one
of the callbacks invoked from hrtimer_interrupt().  More specifically, the
code path appears to be
hrtimer_interrupt()->tick_sched_timer()->tick_sched_handle()->update_process_times().
I am still profiling this path, trying to pinpoint the source of the
3.14-rt slowdown in update_process_times().

Ideas/suggestions welcomed.

Thanks,
        Gratian



* Re: 3.14-rt ARM performance regression?
  2015-01-24  2:03 3.14-rt ARM performance regression? Josh Cartwright
  2015-01-26 22:28 ` Gratian Crisan
@ 2015-01-28  4:08 ` Steven Rostedt
  2015-01-28 23:28   ` Josh Cartwright
  1 sibling, 1 reply; 4+ messages in thread
From: Steven Rostedt @ 2015-01-28  4:08 UTC (permalink / raw)
  To: Josh Cartwright
  Cc: linux-rt-users, Thomas Gleixner, Sebastian Andrzej Siewior

On Fri, 23 Jan 2015 20:03:41 -0600
Josh Cartwright <joshc@ni.com> wrote:

> Hey folks-
> 
> We've recently undertaken an upgrade of our kernel from 3.2-rt to
> 3.14-rt, and have run into a performance regression on our ARM boards.
> We're still in the process of trying to isolate what we can, but
> hopefully someone's already run into this and has a solution or might
> have some useful debugging ideas.
> 
> The first test we did was to run cyclictest[1] for comparison:
> 
>    3.2.35-rt52
>    # Total: 312028761 312028761 624057522
>    # Min Latencies: 00010 00011
>    # Avg Latencies: 00018 00020
>    # Max Latencies: 00062 00066 00066
>    # Histogram Overflows: 00000 00000 00000
> 
>    3.14.25-rt22
>    # Total: 304735655 304735657 609471312
>    # Min Latencies: 00013 00013
>    # Avg Latencies: 00023 00024
>    # Max Latencies: 00086 00083 00086
>    # Histogram Overflows: 00000 00000 00000
> 

I'm curious if the vanilla kernels (non-rt) show the same regression.

Max latencies for vanilla kernels will probably go through the roof,
but the min and average should give you some hint.

-- Steve


* Re: 3.14-rt ARM performance regression?
  2015-01-28  4:08 ` Steven Rostedt
@ 2015-01-28 23:28   ` Josh Cartwright
  0 siblings, 0 replies; 4+ messages in thread
From: Josh Cartwright @ 2015-01-28 23:28 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: linux-rt-users, Thomas Gleixner, Sebastian Andrzej Siewior

On Tue, Jan 27, 2015 at 11:08:46PM -0500, Steven Rostedt wrote:
> On Fri, 23 Jan 2015 20:03:41 -0600
> Josh Cartwright <joshc@ni.com> wrote:
> 
> > Hey folks-
> > 
> > We've recently undertaken an upgrade of our kernel from 3.2-rt to
> > 3.14-rt, and have run into a performance regression on our ARM boards.
> > We're still in the process of trying to isolate what we can, but
> > hopefully someone's already run into this and has a solution or might
> > have some useful debugging ideas.
> > 
> > The first test we did was to run cyclictest[1] for comparison:
> > 
> >    3.2.35-rt52
> >    # Total: 312028761 312028761 624057522
> >    # Min Latencies: 00010 00011
> >    # Avg Latencies: 00018 00020
> >    # Max Latencies: 00062 00066 00066
> >    # Histogram Overflows: 00000 00000 00000
> > 
> >    3.14.25-rt22
> >    # Total: 304735655 304735657 609471312
> >    # Min Latencies: 00013 00013
> >    # Avg Latencies: 00023 00024
> >    # Max Latencies: 00086 00083 00086
> >    # Histogram Overflows: 00000 00000 00000
> > 
> 
> I'm curious if the vanilla kernels (non-rt) show the same regression.
> 
> Max latencies for vanilla kernels will probably go through the roof,
> but the min and average should give you some hint.

Yes, it's likely a non-rt-related problem.  I'll be running a test
overnight comparing min/avg latencies on 3.2-rt vs 3.14-rt, both built
without PREEMPT_RT_FULL.  We'll get a test going on stable (non-rt) over
the next couple of days.

In parallel we're working on a bisection, but this is a difficult task
given a dependence on an ARM vendor tree.  We'll see where this gets us.

Thanks!

  Josh
