From: "Christopher M. Riedl" <cmr@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: tglx@linutronix.de, x86@kernel.org,
linux-hardening@vger.kernel.org, keescook@chromium.org
Subject: [RESEND PATCH v4 05/11] powerpc/64s: Add ability to skip SLB preload
Date: Wed, 5 May 2021 23:34:46 -0500
Message-ID: <20210506043452.9674-6-cmr@linux.ibm.com>
In-Reply-To: <20210506043452.9674-1-cmr@linux.ibm.com>
With Hash translation, switching to a different mm causes SLB entries to
be preloaded from the current thread_info. This reduces SLB faults, for
example, when threads share a common mm but operate on different address
ranges.
Preloading entries from the thread_info struct may not always be
appropriate - such as when switching to a temporary mm. Introduce a new
boolean in mm_context_t to skip the SLB preload entirely. Also move the
SLB preload code into a separate function since switch_slb() is already
quite long. The default behavior (preloading SLB entries from the
current thread_info struct) remains unchanged.
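To make the intended use concrete, a minimal sketch follows. It is purely
illustrative and not part of this patch: the setup_patching_mm() function
and the patching_mm variable are hypothetical names. A later patch that
creates a temporary mm for code patching could mark that mm before ever
switching to it:

  #include <linux/init.h>
  #include <linux/mm_types.h>
  #include <asm/mmu_context.h>

  /* Hypothetical illustration only -- names below are not from this series. */
  static struct mm_struct *patching_mm;

  static void __init setup_patching_mm(struct mm_struct *mm)
  {
          /*
           * The SLB preload cache in thread_info tracks the owning task's
           * user mm, not this temporary mm, so tell switch_slb() not to
           * preload from it when this mm is activated.
           */
          skip_slb_preload_mm(mm);
          patching_mm = mm;
  }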
Signed-off-by: Christopher M. Riedl <cmr@linux.ibm.com>
---
v4: * New to series.
---
arch/powerpc/include/asm/book3s/64/mmu.h | 3 ++
arch/powerpc/include/asm/mmu_context.h | 13 ++++++
arch/powerpc/mm/book3s64/mmu_context.c | 2 +
arch/powerpc/mm/book3s64/slb.c | 56 ++++++++++++++----------
4 files changed, 50 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index eace8c3f7b0a1..b23a9dcdee5af 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -130,6 +130,9 @@ typedef struct {
u32 pkey_allocation_map;
s16 execute_only_pkey; /* key holding execute-only protection */
#endif
+
+ /* Do not preload SLB entries from thread_info during switch_slb() */
+ bool skip_slb_preload;
} mm_context_t;
static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 4bc45d3ed8b0e..264787e90b1a1 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -298,6 +298,19 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
return 0;
}
+#ifdef CONFIG_PPC_BOOK3S_64
+
+static inline void skip_slb_preload_mm(struct mm_struct *mm)
+{
+ mm->context.skip_slb_preload = true;
+}
+
+#else
+
+static inline void skip_slb_preload_mm(struct mm_struct *mm) {}
+
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
#include <asm-generic/mmu_context.h>
#endif /* __KERNEL__ */
diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index c10fc8a72fb37..3479910264c59 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -202,6 +202,8 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
atomic_set(&mm->context.active_cpus, 0);
atomic_set(&mm->context.copros, 0);
+ mm->context.skip_slb_preload = false;
+
return 0;
}
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index c91bd85eb90e3..da0836cb855af 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -441,10 +441,39 @@ static void slb_cache_slbie_user(unsigned int index)
asm volatile("slbie %0" : : "r" (slbie_data));
}
+static void preload_slb_entries(struct task_struct *tsk, struct mm_struct *mm)
+{
+ struct thread_info *ti = task_thread_info(tsk);
+ unsigned char i;
+
+ /*
+ * We gradually age out SLBs after a number of context switches to
+ * reduce reload overhead of unused entries (like we do with FP/VEC
+ * reload). Each time we wrap 256 switches, take an entry out of the
+ * SLB preload cache.
+ */
+ tsk->thread.load_slb++;
+ if (!tsk->thread.load_slb) {
+ unsigned long pc = KSTK_EIP(tsk);
+
+ preload_age(ti);
+ preload_add(ti, pc);
+ }
+
+ for (i = 0; i < ti->slb_preload_nr; i++) {
+ unsigned char idx;
+ unsigned long ea;
+
+ idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
+ ea = (unsigned long)ti->slb_preload_esid[idx] << SID_SHIFT;
+
+ slb_allocate_user(mm, ea);
+ }
+}
+
/* Flush all user entries from the segment table of the current processor. */
void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
{
- struct thread_info *ti = task_thread_info(tsk);
unsigned char i;
/*
@@ -502,29 +531,8 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
copy_mm_to_paca(mm);
- /*
- * We gradually age out SLBs after a number of context switches to
- * reduce reload overhead of unused entries (like we do with FP/VEC
- * reload). Each time we wrap 256 switches, take an entry out of the
- * SLB preload cache.
- */
- tsk->thread.load_slb++;
- if (!tsk->thread.load_slb) {
- unsigned long pc = KSTK_EIP(tsk);
-
- preload_age(ti);
- preload_add(ti, pc);
- }
-
- for (i = 0; i < ti->slb_preload_nr; i++) {
- unsigned char idx;
- unsigned long ea;
-
- idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
- ea = (unsigned long)ti->slb_preload_esid[idx] << SID_SHIFT;
-
- slb_allocate_user(mm, ea);
- }
+ if (!mm->context.skip_slb_preload)
+ preload_slb_entries(tsk, mm);
/*
* Synchronize slbmte preloads with possible subsequent user memory
--
2.26.1
Thread overview: 30+ messages
2021-05-06 4:34 [RESEND PATCH v4 00/11] Use per-CPU temporary mappings for patching Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 01/11] powerpc: Add LKDTM accessor for patching addr Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 02/11] lkdtm/powerpc: Add test to hijack a patch mapping Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 03/11] x86_64: Add LKDTM accessor for patching addr Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 04/11] lkdtm/x86_64: Add test to hijack a patch mapping Christopher M. Riedl
2021-05-06 4:34 ` Christopher M. Riedl [this message]
2021-06-21 3:13 ` [RESEND PATCH v4 05/11] powerpc/64s: Add ability to skip SLB preload Daniel Axtens
2021-07-01 3:48 ` Christopher M. Riedl
2021-07-01 4:15 ` Nicholas Piggin
2021-07-01 5:28 ` Christopher M. Riedl
2021-07-01 6:04 ` Nicholas Piggin
2021-07-01 6:53 ` Christopher M. Riedl
2021-07-01 7:37 ` Nicholas Piggin
2021-07-01 11:30 ` Nicholas Piggin
2021-07-09 4:55 ` Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 06/11] powerpc: Introduce temporary mm Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 07/11] powerpc/64s: Make slb_allocate_user() non-static Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 08/11] powerpc: Initialize and use a temporary mm for patching Christopher M. Riedl
2021-06-21 3:19 ` Daniel Axtens
2021-07-01 5:11 ` Christopher M. Riedl
2021-07-01 6:12 ` Nicholas Piggin
2021-07-01 7:02 ` Christopher M. Riedl
2021-07-01 7:51 ` Nicholas Piggin
2021-07-09 5:03 ` Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 09/11] lkdtm/powerpc: Fix code patching hijack test Christopher M. Riedl
2021-05-06 4:34 ` [RESEND PATCH v4 10/11] powerpc: Protect patching_mm with a lock Christopher M. Riedl
2021-05-06 10:51 ` Peter Zijlstra
2021-05-07 20:03 ` Christopher M. Riedl
2021-05-07 22:26 ` Peter Zijlstra
2021-05-06 4:34 ` [RESEND PATCH v4 11/11] powerpc: Use patch_instruction_unlocked() in loops Christopher M. Riedl