From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755488AbbCaRAq (ORCPT );
	Tue, 31 Mar 2015 13:00:46 -0400
Received: from mx1.redhat.com ([209.132.183.28]:41082 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752250AbbCaRAn (ORCPT );
	Tue, 31 Mar 2015 13:00:43 -0400
From: Denys Vlasenko
To: Ingo Molnar
Cc: Denys Vlasenko, Linus Torvalds, Steven Rostedt, Borislav Petkov,
	"H. Peter Anvin", Andy Lutomirski, Oleg Nesterov,
	Frederic Weisbecker, Alexei Starovoitov, Will Drewry, Kees Cook,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/9] x86/asm/entry/64: simplify looping around preempt_schedule_irq
Date: Tue, 31 Mar 2015 19:00:07 +0200
Message-Id: <1427821211-25099-5-git-send-email-dvlasenk@redhat.com>
In-Reply-To: <1427821211-25099-1-git-send-email-dvlasenk@redhat.com>
References: <1427821211-25099-1-git-send-email-dvlasenk@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

At the exit_intr label we test whether the interrupt/exception came from
the kernel. If it did, we jump to the preemption check. If preemption
does happen (IOW, if we call preempt_schedule_irq), we jump back to
exit_intr.

But that is pointless: we already know the test succeeded last time, and
preemption does not change the fact that the interrupt/exception came
from the kernel. We can jump back directly to the
PER_CPU_VAR(__preempt_count) check instead.

This leaves the exit_intr label unused, so drop it.

Signed-off-by: Denys Vlasenko
CC: Linus Torvalds
CC: Steven Rostedt
CC: Ingo Molnar
CC: Borislav Petkov
CC: "H. Peter Anvin"
CC: Andy Lutomirski
CC: Oleg Nesterov
CC: Frederic Weisbecker
CC: Alexei Starovoitov
CC: Will Drewry
CC: Kees Cook
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
---
 arch/x86/kernel/entry_64.S | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 9f8d01f..bad285d 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -654,7 +654,6 @@ ret_from_intr:
 	CFI_DEF_CFA_REGISTER	rsp
 	CFI_ADJUST_CFA_OFFSET	RBP
 
-exit_intr:
 	testl $3,CS(%rsp)
 	je retint_kernel
 	/* Interrupt came from user space */
@@ -741,12 +740,12 @@ retint_kernel:
 #ifdef CONFIG_PREEMPT
 	/* Interrupts are off */
 	/* Check if we need preemption */
-	cmpl	$0,PER_CPU_VAR(__preempt_count)
-	jnz	1f
 	bt	$9,EFLAGS(%rsp)	/* interrupts were off? */
 	jnc	1f
+0:	cmpl	$0,PER_CPU_VAR(__preempt_count)
+	jnz	1f
 	call	preempt_schedule_irq
-	jmp	exit_intr
+	jmp	0b
 1:
 #endif
 	/*
-- 
1.8.1.4
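
For readers who want to follow the control-flow argument without reading
the assembly, here is a rough, self-contained C sketch of the
return-from-interrupt path before and after this patch. It is an
illustration only, not kernel code: came_from_kernel(), irqs_were_on(),
preempt_count_is_zero() and the other helpers are hypothetical stand-ins
for the corresponding tests and exit paths in entry_64.S.

/*
 * Illustrative, standalone C sketch (not kernel code) of the control-flow
 * change above.  Every helper here is a made-up stand-in for the
 * corresponding assembly-level test or exit path in entry_64.S.
 */
#include <stdbool.h>
#include <stdio.h>

static int fake_preempt_count;	/* stands in for PER_CPU_VAR(__preempt_count) */

static bool came_from_kernel(void)      { return true; }  /* testl $3,CS(%rsp)   */
static bool irqs_were_on(void)          { return true; }  /* bt $9,EFLAGS(%rsp)  */
static bool preempt_count_is_zero(void) { return fake_preempt_count == 0; }

static void preempt_schedule_irq(void)
{
	/* Pretend the scheduler ran and rescheduling is no longer needed. */
	puts("preempt_schedule_irq()");
	fake_preempt_count = 1;
}

static void return_to_user(void)   { puts("return to user space"); }
static void restore_and_iret(void) { puts("restore registers + iretq"); }

/* Before: after preempt_schedule_irq() we jumped back to exit_intr and
 * needlessly redid the "came from kernel?" test. */
static void ret_from_intr_old(void)
{
exit_intr:
	if (!came_from_kernel()) {
		return_to_user();
		return;
	}
	/* retint_kernel, CONFIG_PREEMPT */
	if (preempt_count_is_zero() && irqs_were_on()) {
		preempt_schedule_irq();
		goto exit_intr;		/* redundant re-test of CS(%rsp) */
	}
	restore_and_iret();
}

/* After: the loop (label 0: / jmp 0b) re-checks only __preempt_count;
 * the CS and EFLAGS tests are done once. */
static void ret_from_intr_new(void)
{
	if (!came_from_kernel()) {
		return_to_user();
		return;
	}
	if (irqs_were_on()) {
		while (preempt_count_is_zero())
			preempt_schedule_irq();
	}
	restore_and_iret();
}

int main(void)
{
	ret_from_intr_old();
	fake_preempt_count = 0;
	ret_from_intr_new();
	return 0;
}

The only behavioural difference between the two sketches is where the
loop re-enters after preempt_schedule_irq(), which is exactly what the
patch changes.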