Date: Wed, 14 Sep 2016 19:22:24 +0200
From: Sebastian Andrzej Siewior
To: linux-rt-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, tglx@linutronix.de, Steven Rostedt
Subject: [PATCH RT] x86/preempt-lazy: fixup should_resched()
Message-ID: <20160914172224.2giblohltenb73ol@linutronix.de>

should_resched() returns true if NEED_RESCHED is set and the preempt_count
is 0, _or_ if NEED_RESCHED_LAZY is set, in which case the preempt counter
is ignored entirely. Ignoring the preempt counter is wrong; this patch
takes it into account. While at it, __preempt_count_dec_and_test() ignores
preempt_lazy_count while checking TIF_NEED_RESCHED_LAZY, so add that check
there, too.

Signed-off-by: Sebastian Andrzej Siewior
---
 arch/x86/include/asm/preempt.h | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 190af4271b5c..58fd4ff3f53a 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -89,6 +89,8 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 	if (____preempt_count_dec_and_test())
 		return true;
 #ifdef CONFIG_PREEMPT_LAZY
+	if (current_thread_info()->preempt_lazy_count)
+		return false;
 	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
 #else
 	return false;
@@ -101,8 +103,19 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 static __always_inline bool should_resched(int preempt_offset)
 {
 #ifdef CONFIG_PREEMPT_LAZY
-	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset ||
-			test_thread_flag(TIF_NEED_RESCHED_LAZY));
+	u32 tmp;
+
+	tmp = raw_cpu_read_4(__preempt_count);
+	if (tmp == preempt_offset)
+		return true;
+
+	/* preempt count == 0 ? */
+	tmp &= ~PREEMPT_NEED_RESCHED;
+	if (tmp)
+		return false;
+	if (current_thread_info()->preempt_lazy_count)
+		return false;
+	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
 #else
 	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
 #endif
-- 
2.9.3
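
For anyone reading along without a kernel tree at hand, here is a minimal
stand-alone sketch of the check order the patch establishes. It is an
illustration of the logic only, not kernel code: the per-CPU
__preempt_count, the inverted PREEMPT_NEED_RESCHED bit, preempt_lazy_count
and TIF_NEED_RESCHED_LAZY are modelled as plain local names (pcnt,
PREEMPT_NEED_RESCHED_BIT, lazy_count, need_resched_lazy).

/*
 * Illustrative only: pcnt, lazy_count, need_resched_lazy and
 * PREEMPT_NEED_RESCHED_BIT are local stand-ins for the real per-CPU and
 * per-thread state, not kernel interfaces.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* stand-in for the inverted NEED_RESCHED bit folded into __preempt_count */
#define PREEMPT_NEED_RESCHED_BIT	0x80000000u

struct fake_state {
	uint32_t pcnt;			/* models raw_cpu_read_4(__preempt_count) */
	unsigned int lazy_count;	/* models current_thread_info()->preempt_lazy_count */
	bool need_resched_lazy;		/* models TIF_NEED_RESCHED_LAZY */
};

static bool sketch_should_resched(const struct fake_state *s, uint32_t preempt_offset)
{
	uint32_t tmp = s->pcnt;

	if (tmp == preempt_offset)		/* count at offset and NEED_RESCHED set */
		return true;

	tmp &= ~PREEMPT_NEED_RESCHED_BIT;	/* preempt count == 0 ? */
	if (tmp)
		return false;			/* preemption disabled: no lazy resched either */
	if (s->lazy_count)
		return false;			/* lazy preemption disabled */
	return s->need_resched_lazy;		/* only now honour the lazy flag */
}

int main(void)
{
	/* lazy resched requested, but preempt_lazy_count is non-zero: no resched */
	struct fake_state s = {
		.pcnt = PREEMPT_NEED_RESCHED_BIT,	/* count 0, NEED_RESCHED not set */
		.lazy_count = 1,
		.need_resched_lazy = true,
	};

	printf("%d\n", sketch_should_resched(&s, 0));	/* prints 0 */
	s.lazy_count = 0;
	printf("%d\n", sketch_should_resched(&s, 0));	/* prints 1 */
	return 0;
}

The point of the extra masking step is that a non-zero preempt count, or a
non-zero preempt_lazy_count, must veto the lazy flag just as a non-zero
count already vetoes an ordinary NEED_RESCHED.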