From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753541AbdFVRLE (ORCPT );
	Thu, 22 Jun 2017 13:11:04 -0400
Received: from terminus.zytor.com ([65.50.211.136]:49771 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752066AbdFVRLC (ORCPT );
	Thu, 22 Jun 2017 13:11:02 -0400
Date: Thu, 22 Jun 2017 10:06:23 -0700
From: tip-bot for Thomas Gleixner
Message-ID:
Cc: axboe@kernel.dk, marc.zyngier@arm.com, keith.busch@intel.com,
	linux-kernel@vger.kernel.org, hch@lst.de, mingo@kernel.org,
	hpa@zytor.com, mpe@ellerman.id.au, peterz@infradead.org,
	tglx@linutronix.de
Reply-To: tglx@linutronix.de, keith.busch@intel.com, axboe@kernel.dk,
	marc.zyngier@arm.com, mingo@kernel.org, mpe@ellerman.id.au,
	peterz@infradead.org, hpa@zytor.com, linux-kernel@vger.kernel.org,
	hch@lst.de
In-Reply-To: <20170619235446.954523476@linutronix.de>
References: <20170619235446.954523476@linutronix.de>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:irq/core] genirq: Introduce IRQD_MANAGED_SHUTDOWN
Git-Commit-ID: 54fdf6a0875ca380647ac1cc9b5b8f2dbbbfa131
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  54fdf6a0875ca380647ac1cc9b5b8f2dbbbfa131
Gitweb:     http://git.kernel.org/tip/54fdf6a0875ca380647ac1cc9b5b8f2dbbbfa131
Author:     Thomas Gleixner
AuthorDate: Tue, 20 Jun 2017 01:37:47 +0200
Committer:  Thomas Gleixner
CommitDate: Thu, 22 Jun 2017 18:21:23 +0200

genirq: Introduce IRQD_MANAGED_SHUTDOWN

Affinity managed interrupts should keep their assigned affinity across
CPU hotplug. To avoid magic hackery in device drivers, the core code
shall manage them transparently: such interrupts are put into a managed
shutdown state when the last CPU of the assigned affinity mask goes
offline, and restarted when one of the CPUs in the assigned affinity
mask comes back online.

Introduce the necessary state flag and the accessor functions.

Signed-off-by: Thomas Gleixner
Cc: Jens Axboe
Cc: Marc Zyngier
Cc: Michael Ellerman
Cc: Keith Busch
Cc: Peter Zijlstra
Cc: Christoph Hellwig
Link: http://lkml.kernel.org/r/20170619235446.954523476@linutronix.de
---
 include/linux/irq.h    |  8 ++++++++
 kernel/irq/internals.h | 10 ++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 4087ef2..0e37276 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -207,6 +207,8 @@ struct irq_data {
  * IRQD_FORWARDED_TO_VCPU	- The interrupt is forwarded to a VCPU
  * IRQD_AFFINITY_MANAGED	- Affinity is auto-managed by the kernel
  * IRQD_IRQ_STARTED		- Startup state of the interrupt
+ * IRQD_MANAGED_SHUTDOWN	- Interrupt was shutdown due to empty affinity
+ *				  mask. Applies only to affinity managed irqs.
  */
 enum {
 	IRQD_TRIGGER_MASK		= 0xf,
@@ -225,6 +227,7 @@ enum {
 	IRQD_FORWARDED_TO_VCPU		= (1 << 20),
 	IRQD_AFFINITY_MANAGED		= (1 << 21),
 	IRQD_IRQ_STARTED		= (1 << 22),
+	IRQD_MANAGED_SHUTDOWN		= (1 << 23),
 };
 
 #define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors)
@@ -343,6 +346,11 @@ static inline bool irqd_is_started(struct irq_data *d)
 	return __irqd_to_state(d) & IRQD_IRQ_STARTED;
 }
 
+static inline bool irqd_is_managed_shutdown(struct irq_data *d)
+{
+	return __irqd_to_state(d) & IRQD_MANAGED_SHUTDOWN;
+}
+
 #undef __irqd_to_state
 
 static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 040806f..ca4666b 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -193,6 +193,16 @@ static inline void irqd_clr_move_pending(struct irq_data *d)
 	__irqd_to_state(d) &= ~IRQD_SETAFFINITY_PENDING;
 }
 
+static inline void irqd_set_managed_shutdown(struct irq_data *d)
+{
+	__irqd_to_state(d) |= IRQD_MANAGED_SHUTDOWN;
+}
+
+static inline void irqd_clr_managed_shutdown(struct irq_data *d)
+{
+	__irqd_to_state(d) &= ~IRQD_MANAGED_SHUTDOWN;
+}
+
 static inline void irqd_clear(struct irq_data *d, unsigned int mask)
 {
 	__irqd_to_state(d) &= ~mask;
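
For illustration only, here is a minimal user-space sketch of how the new
flag and its accessors fit together: a per-interrupt state word with
set/clear/test helpers for one bit. The struct and main() below are
simplified stand-ins, not kernel code; in the kernel the state word lives
in irq_common_data behind the ACCESS_PRIVATE() wrapper, and this patch only
introduces the flag and accessors, with the hotplug/startup consumers
presumably wired up by later patches in this series.

/*
 * Hedged sketch (not kernel code): the managed-shutdown bit pattern.
 * Names mirror the patch; struct irq_data here is a hypothetical
 * stand-in that drops the irq_common_data indirection.
 */
#include <stdbool.h>
#include <stdio.h>

#define IRQD_MANAGED_SHUTDOWN	(1 << 23)

struct irq_data {
	unsigned int state_use_accessors;	/* per-irq state bits */
};

#define __irqd_to_state(d)	((d)->state_use_accessors)

static inline void irqd_set_managed_shutdown(struct irq_data *d)
{
	__irqd_to_state(d) |= IRQD_MANAGED_SHUTDOWN;
}

static inline void irqd_clr_managed_shutdown(struct irq_data *d)
{
	__irqd_to_state(d) &= ~IRQD_MANAGED_SHUTDOWN;
}

static inline bool irqd_is_managed_shutdown(struct irq_data *d)
{
	return __irqd_to_state(d) & IRQD_MANAGED_SHUTDOWN;
}

int main(void)
{
	struct irq_data d = { .state_use_accessors = 0 };

	/* Last CPU in the assigned affinity mask went offline. */
	irqd_set_managed_shutdown(&d);
	printf("managed shutdown: %d\n", irqd_is_managed_shutdown(&d));

	/* A CPU of the affinity mask came back online; restart. */
	irqd_clr_managed_shutdown(&d);
	printf("managed shutdown: %d\n", irqd_is_managed_shutdown(&d));

	return 0;
}

The design point the accessors capture: callers never touch the state word
directly, so the core code can test irqd_is_managed_shutdown() on hotplug
and decide transparently whether an interrupt needs to be restarted,
without any driver involvement.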