From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier,
	Daniel Lezcano
Cc: Atish Patra, Alistair Francis, Anup Patel,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	Anup Patel
Subject: [PATCH v5 6/7] RISC-V: Use IPIs for remote TLB flush when possible
Date: Thu, 24 Mar 2022 20:42:57 +0530
Message-Id: <20220324151258.943896-7-apatel@ventanamicro.com>
In-Reply-To: <20220324151258.943896-1-apatel@ventanamicro.com>
References: <20220324151258.943896-1-apatel@ventanamicro.com>

If we have a specialized interrupt controller (such as the
AIA IMSIC) which allows supervisor mode to directly inject IPIs
without any assistance from M-mode or HS-mode, then we can use it
to do remote TLB flushes directly from supervisor mode instead of
going through the SBI RFENCE calls.

This patch extends the remote TLB flush functions to use supervisor
mode IPIs whenever direct supervisor mode IPIs are supported by the
interrupt controller.

Signed-off-by: Anup Patel
---
 arch/riscv/mm/tlbflush.c | 93 +++++++++++++++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 37ed760d007c..27a7db8eb2c4 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -23,14 +23,62 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
 		: "memory");
 }
 
+static inline void local_flush_tlb_range(unsigned long start,
+		unsigned long size, unsigned long stride)
+{
+	if (size <= stride)
+		local_flush_tlb_page(start);
+	else
+		local_flush_tlb_all();
+}
+
+static inline void local_flush_tlb_range_asid(unsigned long start,
+		unsigned long size, unsigned long stride, unsigned long asid)
+{
+	if (size <= stride)
+		local_flush_tlb_page_asid(start, asid);
+	else
+		local_flush_tlb_all_asid(asid);
+}
+
+static void __ipi_flush_tlb_all(void *info)
+{
+	local_flush_tlb_all();
+}
+
 void flush_tlb_all(void)
 {
-	sbi_remote_sfence_vma(NULL, 0, -1);
+	if (riscv_use_ipi_for_rfence())
+		on_each_cpu(__ipi_flush_tlb_all, NULL, 1);
+	else
+		sbi_remote_sfence_vma(NULL, 0, -1);
+}
+
+struct flush_tlb_range_data {
+	unsigned long asid;
+	unsigned long start;
+	unsigned long size;
+	unsigned long stride;
+};
+
+static void __ipi_flush_tlb_range_asid(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+}
+
+static void __ipi_flush_tlb_range(void *info)
+{
+	struct flush_tlb_range_data *d = info;
+
+	local_flush_tlb_range(d->start, d->size, d->stride);
 }
 
-static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
-				  unsigned long size, unsigned long stride)
+static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
+			      unsigned long size, unsigned long stride)
 {
+	struct flush_tlb_range_data ftd;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -45,19 +93,34 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
 		if (broadcast) {
-			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
-		} else if (size <= stride) {
-			local_flush_tlb_page_asid(start, asid);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = asid;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range_asid,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma_asid(cmask,
+							   start, size, asid);
 		} else {
-			local_flush_tlb_all_asid(asid);
+			local_flush_tlb_range_asid(start, size, stride, asid);
 		}
 	} else {
 		if (broadcast) {
-			sbi_remote_sfence_vma(cmask, start, size);
-		} else if (size <= stride) {
-			local_flush_tlb_page(start);
+			if (riscv_use_ipi_for_rfence()) {
+				ftd.asid = 0;
+				ftd.start = start;
+				ftd.size = size;
+				ftd.stride = stride;
+				on_each_cpu_mask(cmask,
+						 __ipi_flush_tlb_range,
+						 &ftd, 1);
+			} else
+				sbi_remote_sfence_vma(cmask, start, size);
 		} else {
-			local_flush_tlb_all();
+			local_flush_tlb_range(start, size, stride);
 		}
 	}
 
@@ -66,23 +129,23 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	__sbi_tlb_flush_range(mm, 0, -1, PAGE_SIZE);
+	__flush_tlb_range(mm, 0, -1, PAGE_SIZE);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
 }
 
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 		     unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PAGE_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			 unsigned long end)
 {
-	__sbi_tlb_flush_range(vma->vm_mm, start, end - start, PMD_SIZE);
+	__flush_tlb_range(vma->vm_mm, start, end - start, PMD_SIZE);
 }
 #endif
-- 
2.25.1
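
For reference, the reason the patch packs every flush parameter into
struct flush_tlb_range_data is that cross-CPU helpers such as
on_each_cpu_mask() hand the callback (an smp_call_func_t) a single
void *info argument, so multiple parameters must travel in one struct.
Below is a minimal user-space sketch of that dispatch pattern; it is an
illustration only, not kernel code. The names use_ipi, remote_call(),
flush_args, and the CPU count of 4 are hypothetical stand-ins for
riscv_use_ipi_for_rfence(), on_each_cpu_mask(), flush_tlb_range_data,
and the real CPU mask.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct flush_tlb_range_data: one struct carries all
 * parameters, because the callback only receives a single void *. */
struct flush_args {
	unsigned long asid;
	unsigned long start;
	unsigned long size;
	unsigned long stride;
};

/* Callback signature mirrors smp_call_func_t: void (*)(void *). */
static void ipi_flush_cb(void *info)
{
	struct flush_args *d = info;

	printf("flush: start=%#lx size=%#lx stride=%#lx asid=%lu\n",
	       d->start, d->size, d->stride, d->asid);
}

/* Hypothetical stand-in for on_each_cpu_mask(): invoke the callback
 * once per "CPU" with the shared info pointer. */
static void remote_call(int ncpus, void (*cb)(void *), void *info)
{
	for (int cpu = 0; cpu < ncpus; cpu++)
		cb(info);
}

/* Stand-in for riscv_use_ipi_for_rfence(). */
static bool use_ipi = true;

static void flush_range(unsigned long start, unsigned long size,
			unsigned long stride, unsigned long asid)
{
	struct flush_args a = {
		.asid = asid, .start = start, .size = size, .stride = stride,
	};

	if (use_ipi)
		remote_call(4, ipi_flush_cb, &a);	/* direct S-mode IPIs */
	else
		printf("fallback: SBI RFENCE call\n");	/* sbi_remote_sfence_vma_asid() path */
}

int main(void)
{
	flush_range(0x1000, 0x1000, 0x1000, 1);
	return 0;
}

The same shape explains why ftd is filled field by field before each
on_each_cpu_mask() call in the patch: the struct lives on the caller's
stack and the call waits (last argument 1) until all CPUs have run the
callback, so the pointer stays valid for the duration of the flush.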