From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751743AbdH1HCL (ORCPT );
	Mon, 28 Aug 2017 03:02:11 -0400
Received: from Galois.linutronix.de ([146.0.238.70]:45747 "EHLO
	Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751371AbdH1GxV (ORCPT );
	Mon, 28 Aug 2017 02:53:21 -0400
Message-Id: <20170828064957.222101344@linutronix.de>
User-Agent: quilt/0.63-1
Date: Mon, 28 Aug 2017 08:47:29 +0200
From: Thomas Gleixner
To: LKML
Cc: Ingo Molnar, Peter Anvin, Peter Zijlstra, Andy Lutomirski,
	Borislav Petkov, Steven Rostedt
Subject: [patch V3 14/44] x86/smp: Remove pointless duplicated interrupt code
References: <20170828064715.802987421@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Content-Disposition: inline; filename=x86-smp--Remove-pointless-duplicated-interrupt-code.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Two NOP5 are really a good tradeoff vs. the unholy IDT switching mess,
which duplicates code all over the place. The rescheduling interrupt gets
optimized in a later step.

Make the ordering of function call and statistics increment the same as in
other places: stats first, then function call.

Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/hw_irq.h |    4 +--
 arch/x86/kernel/smp.c         |   43 ++++++------------------------------------
 2 files changed, 9 insertions(+), 38 deletions(-)

--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -49,8 +49,8 @@ extern asmlinkage void call_function_sin
 #ifdef CONFIG_TRACING
 /* Interrupt handlers registered during init_IRQ */
 extern void trace_reschedule_interrupt(void);
-extern void trace_call_function_interrupt(void);
-extern void trace_call_function_single_interrupt(void);
+#define trace_call_function_interrupt call_function_interrupt
+#define trace_call_function_single_interrupt call_function_single_interrupt
 #define trace_thermal_interrupt thermal_interrupt
 #define trace_threshold_interrupt threshold_interrupt
 #define trace_deferred_error_interrupt deferred_error_interrupt
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -281,57 +281,28 @@ static inline void __smp_reschedule_inte
 	 */
 	ipi_entering_ack_irq();
 	trace_reschedule_entry(RESCHEDULE_VECTOR);
-	__smp_reschedule_interrupt();
+	inc_irq_stat(irq_resched_count);
+	scheduler_ipi();
 	trace_reschedule_exit(RESCHEDULE_VECTOR);
 	exiting_irq();
-	/*
-	 * KVM uses this interrupt to force a cpu out of guest mode
-	 */
-}
-
-static inline void __smp_call_function_interrupt(void)
-{
-	generic_smp_call_function_interrupt();
-	inc_irq_stat(irq_call_count);
 }
 
 __visible void __irq_entry smp_call_function_interrupt(struct pt_regs *regs)
 {
 	ipi_entering_ack_irq();
-	__smp_call_function_interrupt();
-	exiting_irq();
-}
-
-__visible void __irq_entry
-smp_trace_call_function_interrupt(struct pt_regs *regs)
-{
-	ipi_entering_ack_irq();
 	trace_call_function_entry(CALL_FUNCTION_VECTOR);
-	__smp_call_function_interrupt();
-	trace_call_function_exit(CALL_FUNCTION_VECTOR);
-	exiting_irq();
-}
-
-static inline void __smp_call_function_single_interrupt(void)
-{
-	generic_smp_call_function_single_interrupt();
 	inc_irq_stat(irq_call_count);
-}
-
-__visible void __irq_entry
-smp_call_function_single_interrupt(struct pt_regs *regs)
-{
-	ipi_entering_ack_irq();
-	__smp_call_function_single_interrupt();
+	generic_smp_call_function_interrupt();
+	trace_call_function_exit(CALL_FUNCTION_VECTOR);
 	exiting_irq();
 }
 
-__visible void __irq_entry
-smp_trace_call_function_single_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_call_function_single_interrupt(struct pt_regs *r)
 {
 	ipi_entering_ack_irq();
 	trace_call_function_single_entry(CALL_FUNCTION_SINGLE_VECTOR);
-	__smp_call_function_single_interrupt();
+	inc_irq_stat(irq_call_count);
+	generic_smp_call_function_single_interrupt();
 	trace_call_function_single_exit(CALL_FUNCTION_SINGLE_VECTOR);
 	exiting_irq();
 }
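
As a readability aid, this is roughly how the consolidated call-function
handler reads with the hunks above applied (a sketch assembled from this
patch, not part of the diff itself; the single-IPI handler ends up with the
same shape). The two trace_* calls are the tracepoints whose disabled-case
cost is the two NOP5s mentioned in the changelog:

__visible void __irq_entry smp_call_function_interrupt(struct pt_regs *regs)
{
	ipi_entering_ack_irq();
	/* Tracepoint: a NOP5 patch site while tracing is disabled */
	trace_call_function_entry(CALL_FUNCTION_VECTOR);
	/* Stats first, then the function call, as in other places */
	inc_irq_stat(irq_call_count);
	generic_smp_call_function_interrupt();
	trace_call_function_exit(CALL_FUNCTION_VECTOR);
	exiting_irq();
}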