linux-kernel.vger.kernel.org archive mirror
From: Jason Low <jason.low2@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>, Ingo Molnar <mingo@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Mike Galbraith <efault@gmx.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Paul Turner <pjt@google.com>, Alex Shi <alex.shi@intel.com>,
	Preeti U Murthy <preeti@linux.vnet.ibm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Namhyung Kim <namhyung@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Kees Cook <keescook@chromium.org>, Mel Gorman <mgorman@suse.de>,
	aswin@hp.com, scott.norton@hp.com, chegu_vinod@hp.com
Subject: Re: [RFC] sched: Limit idle_balance() when it is being used too frequently
Date: Fri, 19 Jul 2013 12:15:18 -0700	[thread overview]
Message-ID: <1374261318.1830.6.camel@j-VirtualBox> (raw)
In-Reply-To: <20130719183717.GP27075@twins.programming.kicks-ass.net>

On Fri, 2013-07-19 at 20:37 +0200, Peter Zijlstra wrote:
> On Thu, Jul 18, 2013 at 12:06:39PM -0700, Jason Low wrote:
> 
> > N = 1
> > -----
> > 19.21%  reaim  [k] __read_lock_failed                     
> > 14.79%  reaim  [k] mspin_lock                             
> > 12.19%  reaim  [k] __write_lock_failed                    
> > 7.87%   reaim  [k] _raw_spin_lock                          
> > 2.03%   reaim  [k] start_this_handle                       
> > 1.98%   reaim  [k] update_sd_lb_stats                      
> > 1.92%   reaim  [k] mutex_spin_on_owner                     
> > 1.86%   reaim  [k] update_cfs_rq_blocked_load              
> > 1.14%   swapper  [k] intel_idle                              
> > 1.10%   reaim  [.] add_long                                
> > 1.09%   reaim  [.] add_int                                 
> > 1.08%   reaim  [k] load_balance                            
> 
> But but but but.. wth is causing this? The only thing we do more of with
> N=1 is idle_balance(); where would that cause __{read,write}_lock_failed
> and or mspin_lock() contention like that.
> 
> There shouldn't be a rwlock_t in the entire scheduler; those things suck
> worse than quicksand.
> 
> If, as Rik thought, we'd have more rq->lock contention, then I'd
> expected _raw_spin_lock to be up highest.

For this particular fserver workload, that mutex was acquired in the
function calls from ext4_orphan_add() and ext4_orphan_del(). Those read
and write lock calls were from start_this_handle().

Although these functions are not called within the idle_balance() code
path, the extra time spent in update_sd_lb_stats(), tg_load_down(),
idle_cpu(), spin_lock(), etc. increases the time spent in the kernel,
and that appears to be indirectly causing more time to be spent
acquiring those other kernel locks.

Thanks,
Jason




Thread overview: 24+ messages
2013-07-16 19:21 [RFC] sched: Limit idle_balance() when it is being used too frequently Jason Low
2013-07-16 19:27 ` Rik van Riel
2013-07-16 20:20 ` Peter Zijlstra
2013-07-16 22:48   ` Jason Low
2013-07-17  7:25     ` Peter Zijlstra
2013-07-17  7:48       ` Peter Zijlstra
2013-07-17  8:11       ` Jason Low
2013-07-17  9:39         ` Peter Zijlstra
2013-07-17 15:59           ` Jason Low
2013-07-17 16:18             ` Peter Zijlstra
2013-07-17 17:51               ` Rik van Riel
2013-07-17 18:01                 ` Peter Zijlstra
2013-07-17 18:48                   ` Jason Low
2013-07-18  4:02                   ` Jason Low
2013-07-18  9:32                     ` Peter Zijlstra
2013-07-18 11:59                       ` Rik van Riel
2013-07-18 12:15                         ` Srikar Dronamraju
2013-07-18 12:35                           ` Peter Zijlstra
2013-07-18 13:06                             ` Srikar Dronamraju
2013-07-18 19:06                         ` Jason Low
2013-07-19 18:37                           ` Peter Zijlstra
2013-07-19 19:15                             ` Jason Low [this message]
2013-07-18 12:12                     ` Srikar Dronamraju
2013-07-18 19:03                       ` Jason Low
