From: Frederic Weisbecker
To: Ingo Molnar
Cc: LKML, Frederic Weisbecker, Sebastian Andrzej Siewior, Tony Luck,
	Peter Zijlstra, "David S. Miller", Yoshinori Sato, Michael Ellerman,
	Helge Deller, Benjamin Herrenschmidt, Paul Mackerras, Thomas Gleixner,
	Martin Schwidefsky, Rich Felker, Fenghua Yu, Heiko Carstens,
	"James E.J. Bottomley"
Subject: [PATCH 05/11] softirq: Consolidate default local_softirq_pending() implementations
Date: Tue, 8 May 2018 15:38:20 +0200
Message-Id: <1525786706-22846-6-git-send-email-frederic@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1525786706-22846-1-git-send-email-frederic@kernel.org>
References: <1525786706-22846-1-git-send-email-frederic@kernel.org>

Consolidate and optimize default softirq mask API implementations.
Per-CPU operations are expected to be faster and a few architectures
already rely on them to implement local_softirq_pending() and related
accessors/mutators. Those will be migrated to the new generic code.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: David S. Miller
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: James E.J. Bottomley
Cc: Helge Deller
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Cc: Yoshinori Sato
Cc: Rich Felker
---
 include/linux/interrupt.h   | 14 ++++++++++++++
 include/linux/irq_cpustat.h |  6 +-----
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 5426627..7a11f73 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -432,11 +432,25 @@ extern bool force_irqthreads;
 #define force_irqthreads	(0)
 #endif
 
+#ifndef local_softirq_pending
+
+#ifndef local_softirq_pending_ref
+#define local_softirq_pending_ref irq_stat.__softirq_pending
+#endif
+
+#define local_softirq_pending()	(__this_cpu_read(local_softirq_pending_ref))
+#define set_softirq_pending(x)	(__this_cpu_write(local_softirq_pending_ref, (x)))
+#define or_softirq_pending(x)	(__this_cpu_or(local_softirq_pending_ref, (x)))
+
+#else /* local_softirq_pending */
+
 #ifndef __ARCH_SET_SOFTIRQ_PENDING
 #define set_softirq_pending(x)	(local_softirq_pending() = (x))
 #define or_softirq_pending(x)	(local_softirq_pending() |= (x))
 #endif
 
+#endif /* local_softirq_pending */
+
 /* Some architectures might implement lazy enabling/disabling of
  * interrupts. In some cases, such as stop_machine, we might want
  * to ensure that after a local_irq_disable(), interrupts have
diff --git a/include/linux/irq_cpustat.h b/include/linux/irq_cpustat.h
index ddea03c..6e8895c 100644
--- a/include/linux/irq_cpustat.h
+++ b/include/linux/irq_cpustat.h
@@ -22,11 +22,7 @@ DECLARE_PER_CPU_ALIGNED(irq_cpustat_t, irq_stat);	/* defined in asm/hardirq.h */
 #define __IRQ_STAT(cpu, member)	(per_cpu(irq_stat.member, cpu))
 #endif
 
-  /* arch independent irq_stat fields */
-#define local_softirq_pending() \
-	__IRQ_STAT(smp_processor_id(), __softirq_pending)
-
-  /* arch dependent irq_stat fields */
+/* arch dependent irq_stat fields */
 #define nmi_count(cpu)		__IRQ_STAT((cpu), __nmi_count)	/* i386 */
 
 #endif	/* __irq_cpustat_h */
-- 
2.7.4
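
For illustration, here is a minimal userspace sketch of the fallback/override
pattern this patch adds to include/linux/interrupt.h: an architecture that
defines local_softirq_pending() keeps its own accessors, otherwise the generic
per-CPU-backed ones are picked up. This is not kernel code; the per-CPU softirq
mask is modeled with a plain _Thread_local variable, the __this_cpu_*()
accessors are modeled as direct accesses, and TIMER_SOFTIRQ_BIT is an
illustrative stand-in for a real softirq bit.

/*
 * Userspace model of the generic fallback in include/linux/interrupt.h.
 * _Thread_local stands in for per-CPU storage; the direct accesses stand
 * in for __this_cpu_read()/__this_cpu_write()/__this_cpu_or().
 */
#include <stdio.h>

#ifndef local_softirq_pending		/* no arch override provided */

static _Thread_local unsigned int softirq_pending_ref;

#define local_softirq_pending()	(softirq_pending_ref)
#define set_softirq_pending(x)	(softirq_pending_ref = (x))
#define or_softirq_pending(x)	(softirq_pending_ref |= (x))

#endif /* local_softirq_pending */

/* Illustrative stand-in for a pending bit such as 1 << TIMER_SOFTIRQ. */
#define TIMER_SOFTIRQ_BIT	(1U << 1)

int main(void)
{
	set_softirq_pending(0);
	/* Roughly what raising a softirq does to the pending mask. */
	or_softirq_pending(TIMER_SOFTIRQ_BIT);
	printf("pending: %#x\n", local_softirq_pending());
	return 0;
}

The point of the consolidation is that __this_cpu_read() on the per-CPU field
is expected to be cheaper than the old __IRQ_STAT(smp_processor_id(),
__softirq_pending) indirection removed from irq_cpustat.h above.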