From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anup Patel
To: Palmer Dabbelt, Albert Ou, Daniel Lezcano, Thomas Gleixner,
	Jason Cooper, Marc Zyngier
Cc: Atish Patra, Christoph Hellwig, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 1/4] irqchip: sifive-plic: Pre-compute context hart base and enable base
Date: Tue, 27 Nov 2018 15:33:14 +0530
Message-Id: <20181127100317.12809-2-anup@brainfault.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181127100317.12809-1-anup@brainfault.org>
References: <20181127100317.12809-1-anup@brainfault.org>
X-Mailing-List: linux-kernel@vger.kernel.org

This patch does the following optimizations:
1. Pre-compute hart base for each context handler
2. Pre-compute enable base for each context handler
3. Have an enable lock for each context handler instead of the global
   plic_toggle_lock

Signed-off-by: Anup Patel
---
 drivers/irqchip/irq-sifive-plic.c | 41 +++++++++++++------------------
 1 file changed, 17 insertions(+), 24 deletions(-)

diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
index 357e9daf94ae..56fce648a901 100644
--- a/drivers/irqchip/irq-sifive-plic.c
+++ b/drivers/irqchip/irq-sifive-plic.c
@@ -60,36 +60,24 @@ static void __iomem *plic_regs;
 struct plic_handler {
 	bool			present;
 	int			ctxid;
+	void __iomem		*hart_base;
+	raw_spinlock_t		enable_lock;
+	void __iomem		*enable_base;
 };
 static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
 
-static inline void __iomem *plic_hart_offset(int ctxid)
+static inline void plic_toggle(struct plic_handler *handler,
+				int hwirq, int enable)
 {
-	return plic_regs + CONTEXT_BASE + ctxid * CONTEXT_PER_HART;
-}
-
-static inline u32 __iomem *plic_enable_base(int ctxid)
-{
-	return plic_regs + ENABLE_BASE + ctxid * ENABLE_PER_HART;
-}
-
-/*
- * Protect mask operations on the registers given that we can't assume that
- * atomic memory operations work on them.
- */
-static DEFINE_RAW_SPINLOCK(plic_toggle_lock);
-
-static inline void plic_toggle(int ctxid, int hwirq, int enable)
-{
-	u32 __iomem *reg = plic_enable_base(ctxid) + (hwirq / 32);
+	u32 __iomem *reg = handler->enable_base + (hwirq / 32);
 	u32 hwirq_mask = 1 << (hwirq % 32);
 
-	raw_spin_lock(&plic_toggle_lock);
+	raw_spin_lock(&handler->enable_lock);
 	if (enable)
 		writel(readl(reg) | hwirq_mask, reg);
 	else
 		writel(readl(reg) & ~hwirq_mask, reg);
-	raw_spin_unlock(&plic_toggle_lock);
+	raw_spin_unlock(&handler->enable_lock);
 }
 
 static inline void plic_irq_toggle(struct irq_data *d, int enable)
@@ -101,7 +89,7 @@ static inline void plic_irq_toggle(struct irq_data *d, int enable)
 		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);
 
 		if (handler->present)
-			plic_toggle(handler->ctxid, d->hwirq, enable);
+			plic_toggle(handler, d->hwirq, enable);
 	}
 }
 
@@ -150,7 +138,7 @@ static struct irq_domain *plic_irqdomain;
 static void plic_handle_irq(struct pt_regs *regs)
 {
 	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
-	void __iomem *claim = plic_hart_offset(handler->ctxid) + CONTEXT_CLAIM;
+	void __iomem *claim = handler->hart_base + CONTEXT_CLAIM;
 	irq_hw_number_t hwirq;
 
 	WARN_ON_ONCE(!handler->present);
@@ -240,11 +228,16 @@ static int __init plic_init(struct device_node *node,
 		handler = per_cpu_ptr(&plic_handlers, cpu);
 		handler->present = true;
 		handler->ctxid = i;
+		handler->hart_base =
+			plic_regs + CONTEXT_BASE + i * CONTEXT_PER_HART;
+		raw_spin_lock_init(&handler->enable_lock);
+		handler->enable_base =
+			plic_regs + ENABLE_BASE + i * ENABLE_PER_HART;
 
 		/* priority must be > threshold to trigger an interrupt */
-		writel(0, plic_hart_offset(i) + CONTEXT_THRESHOLD);
+		writel(0, handler->hart_base + CONTEXT_THRESHOLD);
 		for (hwirq = 1; hwirq <= nr_irqs; hwirq++)
-			plic_toggle(i, hwirq, 0);
+			plic_toggle(handler, hwirq, 0);
 		nr_mapped++;
 	}
-- 
2.17.1
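
A note on the shape of the change, since it is spread over several hunks:
the driver-global plic_toggle_lock and the per-call address arithmetic
(plic_hart_offset()/plic_enable_base()) go away, and each per-CPU
plic_handler instead carries its own enable_lock plus cached
hart_base/enable_base pointers computed once at init time. The standalone
C program below is only an illustrative sketch of that pattern, not kernel
code: fake_handler, fake_toggle, the pthread mutexes, and the plain array
standing in for the MMIO enable registers are all stand-ins chosen here,
not anything from the driver.

/*
 * Illustrative userspace sketch only -- not driver code. Each context
 * handler caches a pre-computed enable base and carries its own lock, so
 * toggling an interrupt on one context never contends with another.
 */
#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CONTEXTS			2
#define ENABLE_WORDS_PER_CONTEXT	32	/* 32 x 32 = 1024 irq bits */

static uint32_t fake_enable_regs[NR_CONTEXTS][ENABLE_WORDS_PER_CONTEXT];

struct fake_handler {
	uint32_t	*enable_base;	/* pre-computed, not derived from ctxid */
	pthread_mutex_t	enable_lock;	/* per-context, not one global lock */
};

static struct fake_handler handlers[NR_CONTEXTS];

static void fake_toggle(struct fake_handler *h, int hwirq, int enable)
{
	uint32_t *reg = h->enable_base + (hwirq / 32);
	uint32_t mask = 1U << (hwirq % 32);

	pthread_mutex_lock(&h->enable_lock);
	if (enable)
		*reg |= mask;
	else
		*reg &= ~mask;
	pthread_mutex_unlock(&h->enable_lock);
}

int main(void)
{
	for (int i = 0; i < NR_CONTEXTS; i++) {
		handlers[i].enable_base = fake_enable_regs[i];
		pthread_mutex_init(&handlers[i].enable_lock, NULL);
	}

	fake_toggle(&handlers[0], 5, 1);	/* enable hwirq 5 on context 0 */
	fake_toggle(&handlers[1], 5, 0);	/* independent of context 0 */

	printf("ctx0 word0 = 0x%08" PRIx32 "\n", fake_enable_regs[0][0]);
	return 0;
}

Build with cc -std=c99 -pthread to try it; the point of the pattern is
simply that per-context locks plus pre-computed bases remove both the
shared-lock contention and the repeated offset math from the toggle path.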