From: Mayuresh Chitale <mchitale@ventanamicro.com>
To: Palmer Dabbelt <palmer@dabbelt.com>,
Paul Walmsley <paul.walmsley@sifive.com>,
Albert Ou <aou@eecs.berkeley.edu>
Cc: Mayuresh Chitale <mchitale@ventanamicro.com>,
Atish Patra <atishp@atishpatra.org>,
Anup Patel <anup@brainfault.org>,
linux-riscv@lists.infradead.org
Subject: [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma
Date: Fri, 23 Jun 2023 18:08:49 +0530 [thread overview]
Message-ID: <20230623123849.1425805-2-mchitale@ventanamicro.com> (raw)
In-Reply-To: <20230623123849.1425805-1-mchitale@ventanamicro.com>
When the Svinval extension is supported, the local_flush_tlb_page*
functions use the following sequence instead of one sfence.vma per
page, to optimize the TLB flushes:
sfence.w.inval
sinval.vma
.
.
sinval.vma
sfence.inval.ir
The number of consecutive sinval.vma instructions executed in the
local_flush_tlb_page* functions is limited to 64; larger ranges fall
back to a full TLB flush. This is required to avoid soft lockups and
the approach is similar to that used in arm64.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
---
arch/riscv/include/asm/tlbflush.h | 1 +
arch/riscv/mm/tlbflush.c | 66 +++++++++++++++++++++++++++----
2 files changed, 59 insertions(+), 8 deletions(-)
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index a09196f8de68..56490c04b0bd 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -30,6 +30,7 @@ static inline void local_flush_tlb_page(unsigned long addr)
#endif /* CONFIG_MMU */
#if defined(CONFIG_SMP) && defined(CONFIG_MMU)
+extern unsigned long tlb_flush_all_threshold;
void flush_tlb_all(void);
void flush_tlb_mm(struct mm_struct *mm);
void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 77be59aadc73..f63cdf8644f3 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -5,6 +5,17 @@
#include <linux/sched.h>
#include <asm/sbi.h>
#include <asm/mmu_context.h>
+#include <asm/hwcap.h>
+#include <asm/insn-def.h>
+
+#define has_svinval() riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
+
+/*
+ * Flush entire TLB if number of entries to be flushed is greater
+ * than the threshold below. Platforms may override the threshold
+ * value based on marchid, mvendorid, and mimpid.
+ */
+unsigned long tlb_flush_all_threshold __read_mostly = 64;
static inline void local_flush_tlb_all_asid(unsigned long asid)
{
@@ -24,21 +35,60 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
}
static inline void local_flush_tlb_range(unsigned long start,
- unsigned long size, unsigned long stride)
+ unsigned long size,
+ unsigned long stride)
{
- if (size <= stride)
- local_flush_tlb_page(start);
- else
+ unsigned long end = start + size;
+ unsigned long num_entries = DIV_ROUND_UP(size, stride);
+
+ if (!num_entries || num_entries > tlb_flush_all_threshold) {
local_flush_tlb_all();
+ return;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_W_INVAL() ::: "memory");
+
+ while (start < end) {
+ if (has_svinval())
+ asm volatile(SINVAL_VMA(%0, zero)
+ : : "r" (start) : "memory");
+ else
+ local_flush_tlb_page(start);
+ start += stride;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_INVAL_IR() ::: "memory");
}
static inline void local_flush_tlb_range_asid(unsigned long start,
- unsigned long size, unsigned long stride, unsigned long asid)
+ unsigned long size,
+ unsigned long stride,
+ unsigned long asid)
{
- if (size <= stride)
- local_flush_tlb_page_asid(start, asid);
- else
+ unsigned long end = start + size;
+ unsigned long num_entries = DIV_ROUND_UP(size, stride);
+
+ if (!num_entries || num_entries > tlb_flush_all_threshold) {
local_flush_tlb_all_asid(asid);
+ return;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_W_INVAL() ::: "memory");
+
+ while (start < end) {
+ if (has_svinval())
+ asm volatile(SINVAL_VMA(%0, %1) : : "r" (start),
+ "r" (asid) : "memory");
+ else
+ local_flush_tlb_page_asid(start, asid);
+ start += stride;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_INVAL_IR() ::: "memory");
}
static void __ipi_flush_tlb_all(void *info)
--
2.34.1
Thread overview: 4+ messages
2023-06-23 12:38 [PATCH v5 0/1] Risc-V Svinval support Mayuresh Chitale
2023-06-23 12:38 ` Mayuresh Chitale [this message]
2023-06-24 11:04 ` [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma Andrew Jones
2023-09-25 15:12 ` [PATCH v5 0/1] Risc-V Svinval support Palmer Dabbelt