From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751990AbcFNL63 (ORCPT );
	Tue, 14 Jun 2016 07:58:29 -0400
Received: from bombadil.infradead.org ([198.137.202.9]:36544 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751194AbcFNL61 (ORCPT );
	Tue, 14 Jun 2016 07:58:27 -0400
Date: Tue, 14 Jun 2016 13:58:20 +0200
From: Peter Zijlstra
To: Steven Rostedt
Cc: LKML, Ingo Molnar, Thomas Gleixner, Clark Williams, Andrew Morton,
	Nick Piggin
Subject: Re: [PATCH] sched: Do not release current rq lock on non contended
	double_lock_balance()
Message-ID: <20160614115820.GD30921@twins.programming.kicks-ass.net>
References: <20160613123732.3a8ccc57@gandalf.local.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160613123732.3a8ccc57@gandalf.local.home>
User-Agent: Mutt/1.5.23.1 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jun 13, 2016 at 12:37:32PM -0400, Steven Rostedt wrote:
> The solution was to simply release the current (this_rq) lock and then
> take both locks.
>
>	spin_unlock(&this_rq->lock);
>	double_rq_lock(this_rq, busiest);

> What I could not understand about Gregory's patch is that regardless of
> contention, the currently held lock is always released, opening up a
> window for this ping ponging to occur. When I changed the code to only
> release on contention of the second lock, things improved tremendously.

It's simpler to reason about, and there wasn't a problem with it at the
time. The above puts a strict limit on hold time and is fair because of
the queueing.

> +++ b/kernel/sched/sched.h
> @@ -1548,10 +1548,15 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
>  	__acquires(busiest->lock)
>  	__acquires(this_rq->lock)
>  {
> +	int ret = 0;
> +
> +	if (unlikely(!raw_spin_trylock(&busiest->lock))) {
> +		raw_spin_unlock(&this_rq->lock);
> +		double_rq_lock(this_rq, busiest);
> +		ret = 1;
> +	}
>
> +	return ret;
>  }

This relies on trylock not being allowed to steal the lock, which I
think is true for all fair spinlocks (for ticket locks this must be
true, but lock stealing is possible with qspinlock, for example).

And it does indeed make the hold time harder to analyze. For instance,
pull_rt_task() does:

	for_each_cpu() {
		double_lock_balance(this, that);
		...
		double_unlock_balance(this, that);
	}

Which, with the trylock, ends up with a max possible hold time of
O(nr_cpus). Unlikely, sure, but RT is a game of upper bounds etc.

So should we maybe do something like:

	if (unlikely(raw_spin_is_contended(&this_rq->lock) ||
		     !raw_spin_trylock(&busiest->lock))) {
		raw_spin_unlock(&this_rq->lock);
		double_rq_lock(this_rq, busiest);
		ret = 1;
	}

?

>	CPU 0				CPU 1
>	-----				-----
>				[ wake up ]
>				spin_lock(cpu1_rq->lock);
> spin_lock(cpu1_rq->lock)
>				double_lock_balance()
>				[ release cpu1_rq->lock ]
>				spin_lock(cpu1_rq->lock)
> [due to ticket, now acquires
>  cpu1_rq->lock ]
>
> [goes to push task]
> double_lock_balance()
> [ release cpu1_rq->lock ]
>				[ acquires lock ]
> spin_lock(cpu2_rq->lock)
> [ blocks as cpu2 is using it ]

Also, it's not entirely clear this scenario helps illustrate how your
change is better, because here the lock _is_ contended, so we'll fail
the trylock, no?
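
FWIW, for anyone reading along: the reason _double_lock_balance() must
drop this_rq->lock at all is the fixed lock order in double_rq_lock().
Quoting from memory, so only roughly, it does:

	static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
		__acquires(rq1->lock)
		__acquires(rq2->lock)
	{
		BUG_ON(!irqs_disabled());
		if (rq1 == rq2) {
			raw_spin_lock(&rq1->lock);
			__acquire(rq2->lock);	/* Fake it out ;) */
		} else {
			/*
			 * Always lock in address order; this global
			 * order is what makes ABBA deadlock between
			 * two CPUs locking the same pair impossible.
			 */
			if (rq1 < rq2) {
				raw_spin_lock(&rq1->lock);
				raw_spin_lock_nested(&rq2->lock,
						     SINGLE_DEPTH_NESTING);
			} else {
				raw_spin_lock(&rq2->lock);
				raw_spin_lock_nested(&rq1->lock,
						     SINGLE_DEPTH_NESTING);
			}
		}
	}

So when busiest has the lower address we simply cannot take
busiest->lock while still holding this_rq->lock without risking ABBA
against someone locking the same pair the other way around; the trylock
is the only safe way to sometimes skip the drop.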
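
And to make the O(nr_cpus) worry concrete, pull_rt_task() is shaped
roughly like below (condensed from memory; the priority checks and
actual task selection elided):

	static void pull_rt_task(struct rq *this_rq)
	{
		int cpu;

		/* rd->rto_mask: CPUs that may have RT tasks to pull */
		for_each_cpu(cpu, this_rq->rd->rto_mask) {
			struct rq *src_rq = cpu_rq(cpu);

			if (cpu == this_rq->cpu)
				continue;

			/*
			 * With the trylock variant, every iteration
			 * that wins the trylock keeps this_rq->lock
			 * held across the critical section, so a
			 * waiter on this_rq->lock can accumulate up to
			 * nr_cpus critical sections of delay.
			 */
			double_lock_balance(this_rq, src_rq);
			/* ... maybe pull a task from src_rq ... */
			double_unlock_balance(this_rq, src_rq);
		}
	}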
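
Finally, completely untested, the whole helper with the contention
check folded in would read something like:

	static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
		__releases(this_rq->lock)
		__acquires(busiest->lock)
		__acquires(this_rq->lock)
	{
		int ret = 0;

		/*
		 * Only keep this_rq->lock when nobody is waiting for
		 * it AND busiest->lock is free; otherwise drop it and
		 * queue fairly on both via double_rq_lock(), which
		 * preserves the hold-time bound for our waiters.
		 */
		if (unlikely(raw_spin_is_contended(&this_rq->lock) ||
			     !raw_spin_trylock(&busiest->lock))) {
			raw_spin_unlock(&this_rq->lock);
			double_rq_lock(this_rq, busiest);
			ret = 1;	/* this_rq->lock was dropped */
		}

		return ret;
	}

The return value keeps the existing contract: when it is 1 the caller
knows this_rq->lock was dropped and must revalidate anything it read
under it.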