From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S966487AbbBDOjY (ORCPT );
	Wed, 4 Feb 2015 09:39:24 -0500
Received: from terminus.zytor.com ([198.137.202.10]:38858 "EHLO
	terminus.zytor.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S966445AbbBDOjT (ORCPT );
	Wed, 4 Feb 2015 09:39:19 -0500
Date: Wed, 4 Feb 2015 06:38:44 -0800
From: tip-bot for Davidlohr Bueso
Message-ID: 
Cc: peterz@infradead.org, linux-kernel@vger.kernel.org, dbueso@suse.de,
	tglx@linutronix.de, torvalds@linux-foundation.org,
	dave@stgolabs.net, mingo@kernel.org, hpa@zytor.com
Reply-To: dave@stgolabs.net, torvalds@linux-foundation.org,
	hpa@zytor.com, mingo@kernel.org, tglx@linutronix.de,
	dbueso@suse.de, peterz@infradead.org, linux-kernel@vger.kernel.org
In-Reply-To: <1422857784.18096.1.camel@stgolabs.net>
References: <1422857784.18096.1.camel@stgolabs.net>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:locking/core] locking/rtmutex: Optimize setting task running after being blocked
Git-Commit-ID: afffc6c1805d98e08e778cddb644a666e78cfcfd
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  afffc6c1805d98e08e778cddb644a666e78cfcfd
Gitweb:     http://git.kernel.org/tip/afffc6c1805d98e08e778cddb644a666e78cfcfd
Author:     Davidlohr Bueso
AuthorDate: Sun, 1 Feb 2015 22:16:24 -0800
Committer:  Ingo Molnar
CommitDate: Wed, 4 Feb 2015 07:57:42 +0100

locking/rtmutex: Optimize setting task running after being blocked

We explicitly mark the task running after returning from
a __rt_mutex_slowlock() call, which does the actual sleeping
via wait-wake-trylocking. As such, this patch does two things:

(1) refactors the code so that setting current to TASK_RUNNING
    is done by __rt_mutex_slowlock(), and not by the callers. The
    downside is that it becomes a bit unclear at what point we
    block. As such, I've added a comment noting that the task
    blocks when calling __rt_mutex_slowlock(), so readers can
    figure out when it is running again.

(2) relaxes setting current's state through __set_current_state(),
    instead of its more expensive barrier alternative. There was
    no need for the implied barrier, as we're obviously not
    planning on blocking.
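As a minimal illustration of the barrier distinction in (2), the
following userspace sketch models the two store flavors with C11
atomics. The helper names mirror the kernel API, but these
definitions are illustrative stand-ins, not the kernel's actual
<linux/sched.h> implementations:

	#include <stdatomic.h>

	#define TASK_RUNNING		0
	#define TASK_UNINTERRUPTIBLE	2

	static _Atomic int task_state = TASK_RUNNING;

	/* Barriered variant: orders the state store against the
	 * condition re-check that follows it, which matters when
	 * we may actually go to sleep afterwards. */
	static inline void set_current_state(int state)
	{
		atomic_store_explicit(&task_state, state,
				      memory_order_seq_cst);
	}

	/* Plain variant: no barrier. Sufficient when, as in this
	 * patch, we only mark ourselves TASK_RUNNING and will not
	 * block afterwards. */
	static inline void __set_current_state(int state)
	{
		atomic_store_explicit(&task_state, state,
				      memory_order_relaxed);
	}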
Signed-off-by: Davidlohr Bueso
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Link: http://lkml.kernel.org/r/1422857784.18096.1.camel@stgolabs.net
Signed-off-by: Ingo Molnar
---
 kernel/locking/rtmutex.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 7c98873..3059bc2f 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1130,6 +1130,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
 		set_current_state(state);
 	}
 
+	__set_current_state(TASK_RUNNING);
 	return ret;
 }
 
@@ -1188,10 +1189,9 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
 	ret = task_blocks_on_rt_mutex(lock, &waiter, current, chwalk);
 
 	if (likely(!ret))
+		/* sleep on the mutex */
 		ret = __rt_mutex_slowlock(lock, state, timeout, &waiter);
 
-	set_current_state(TASK_RUNNING);
-
 	if (unlikely(ret)) {
 		remove_waiter(lock, &waiter);
 		rt_mutex_handle_deadlock(ret, chwalk, &waiter);
@@ -1626,10 +1626,9 @@ int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
 
 	set_current_state(TASK_INTERRUPTIBLE);
 
+	/* sleep on the mutex */
 	ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
 
-	set_current_state(TASK_RUNNING);
-
 	if (unlikely(ret))
 		remove_waiter(lock, waiter);
 
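As an aside for readers unfamiliar with the slowpath shape the
message describes, here is a hypothetical userspace analogue of the
wait-wake-trylock loop, built on pthreads instead of the kernel's
wait_lock/schedule() machinery. slow_lock(), slow_unlock(),
lock_free and the task_state stores are all illustrative, not
kernel code:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	#define TASK_RUNNING		0
	#define TASK_UNINTERRUPTIBLE	2

	static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  wake	 = PTHREAD_COND_INITIALIZER;
	static _Atomic int task_state	 = TASK_RUNNING;
	static bool lock_free		 = true;

	static void slow_lock(void)
	{
		pthread_mutex_lock(&wait_lock);
		/* Barriered store: we may sleep after this. */
		atomic_store(&task_state, TASK_UNINTERRUPTIBLE);
		for (;;) {
			if (lock_free) {	/* the "trylock" step */
				lock_free = false;
				break;
			}
			/* the "wait"/"wake" step */
			pthread_cond_wait(&wake, &wait_lock);
			/* Re-mark ourselves before possibly sleeping
			 * again on the next loop iteration: */
			atomic_store(&task_state, TASK_UNINTERRUPTIBLE);
		}
		/*
		 * Mirror of the patch: the callee, not its callers,
		 * restores TASK_RUNNING, and does so with a plain
		 * (relaxed) store since we are obviously not going
		 * to block again.
		 */
		atomic_store_explicit(&task_state, TASK_RUNNING,
				      memory_order_relaxed);
		pthread_mutex_unlock(&wait_lock);
	}

	static void slow_unlock(void)
	{
		pthread_mutex_lock(&wait_lock);
		lock_free = true;
		pthread_cond_signal(&wake);	/* wake one waiter */
		pthread_mutex_unlock(&wait_lock);
	}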