From: Nicholas Piggin <npiggin@gmail.com>
To: "Christopher M. Riedl" <cmr@linux.ibm.com>,
	Daniel Axtens <dja@axtens.net>,
	linuxppc-dev@lists.ozlabs.org
Cc: keescook@chromium.org, linux-hardening@vger.kernel.org,
	tglx@linutronix.de, x86@kernel.org
Subject: Re: [RESEND PATCH v4 05/11] powerpc/64s: Add ability to skip SLB preload
Date: Thu, 01 Jul 2021 14:15:38 +1000
Message-ID: <1625112841.77uceah4w9.astroid@bobo.none>
In-Reply-To: <CCHHVUNV216M.1825LSMNZ1XG7@oc8246131445.ibm.com>

Excerpts from Christopher M. Riedl's message of July 1, 2021 1:48 pm:
> On Sun Jun 20, 2021 at 10:13 PM CDT, Daniel Axtens wrote:
>> "Christopher M. Riedl" <cmr@linux.ibm.com> writes:
>>
>> > Switching to a different mm with Hash translation causes SLB entries to
>> > be preloaded from the current thread_info. This reduces SLB faults, for
>> > example when threads share a common mm but operate on different address
>> > ranges.
>> >
>> > Preloading entries from the thread_info struct may not always be
>> > appropriate - such as when switching to a temporary mm. Introduce a new
>> > boolean in mm_context_t to skip the SLB preload entirely. Also move the
>> > SLB preload code into a separate function since switch_slb() is already
>> > quite long. The default behavior (preloading SLB entries from the
>> > current thread_info struct) remains unchanged.
>> >
>> > Signed-off-by: Christopher M. Riedl <cmr@linux.ibm.com>
>> >
>> > ---
>> >
>> > v4:  * New to series.
>> > ---
>> >  arch/powerpc/include/asm/book3s/64/mmu.h |  3 ++
>> >  arch/powerpc/include/asm/mmu_context.h   | 13 ++++++
>> >  arch/powerpc/mm/book3s64/mmu_context.c   |  2 +
>> >  arch/powerpc/mm/book3s64/slb.c           | 56 ++++++++++++++----------
>> >  4 files changed, 50 insertions(+), 24 deletions(-)
>> >
>> > diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
>> > index eace8c3f7b0a1..b23a9dcdee5af 100644
>> > --- a/arch/powerpc/include/asm/book3s/64/mmu.h
>> > +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
>> > @@ -130,6 +130,9 @@ typedef struct {
>> >  	u32 pkey_allocation_map;
>> >  	s16 execute_only_pkey; /* key holding execute-only protection */
>> >  #endif
>> > +
>> > +	/* Do not preload SLB entries from thread_info during switch_slb() */
>> > +	bool skip_slb_preload;
>> >  } mm_context_t;
>> >  
>> >  static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
>> > diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
>> > index 4bc45d3ed8b0e..264787e90b1a1 100644
>> > --- a/arch/powerpc/include/asm/mmu_context.h
>> > +++ b/arch/powerpc/include/asm/mmu_context.h
>> > @@ -298,6 +298,19 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
>> >  	return 0;
>> >  }
>> >  
>> > +#ifdef CONFIG_PPC_BOOK3S_64
>> > +
>> > +static inline void skip_slb_preload_mm(struct mm_struct *mm)
>> > +{
>> > +	mm->context.skip_slb_preload = true;
>> > +}
>> > +
>> > +#else
>> > +
>> > +static inline void skip_slb_preload_mm(struct mm_struct *mm) {}
>> > +
>> > +#endif /* CONFIG_PPC_BOOK3S_64 */
>> > +
>> >  #include <asm-generic/mmu_context.h>
>> >  
>> >  #endif /* __KERNEL__ */
>> > diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
>> > index c10fc8a72fb37..3479910264c59 100644
>> > --- a/arch/powerpc/mm/book3s64/mmu_context.c
>> > +++ b/arch/powerpc/mm/book3s64/mmu_context.c
>> > @@ -202,6 +202,8 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>> >  	atomic_set(&mm->context.active_cpus, 0);
>> >  	atomic_set(&mm->context.copros, 0);
>> >  
>> > +	mm->context.skip_slb_preload = false;
>> > +
>> >  	return 0;
>> >  }
>> >  
>> > diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
>> > index c91bd85eb90e3..da0836cb855af 100644
>> > --- a/arch/powerpc/mm/book3s64/slb.c
>> > +++ b/arch/powerpc/mm/book3s64/slb.c
>> > @@ -441,10 +441,39 @@ static void slb_cache_slbie_user(unsigned int index)
>> >  	asm volatile("slbie %0" : : "r" (slbie_data));
>> >  }
>> >  
>> > +static void preload_slb_entries(struct task_struct *tsk, struct mm_struct *mm)
>> Should this be explicitly inline or even __always_inline? I'm thinking
>> switch_slb() is probably a fairly hot path on hash.
> 
> Yes, absolutely. I'll make this change in v5.
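> 
> Something like this for v5, I think (sketch only, untested):
> 
> -static void preload_slb_entries(struct task_struct *tsk, struct mm_struct *mm)
> +static __always_inline void preload_slb_entries(struct task_struct *tsk,
> +						struct mm_struct *mm)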
> 
>>
>> > +{
>> > +	struct thread_info *ti = task_thread_info(tsk);
>> > +	unsigned char i;
>> > +
>> > +	/*
>> > +	 * We gradually age out SLBs after a number of context switches to
>> > +	 * reduce reload overhead of unused entries (like we do with FP/VEC
>> > +	 * reload). Each time we wrap 256 switches, take an entry out of the
>> > +	 * SLB preload cache.
>> > +	 */
>> > +	tsk->thread.load_slb++;
>> > +	if (!tsk->thread.load_slb) {
>> > +		unsigned long pc = KSTK_EIP(tsk);
>> > +
>> > +		preload_age(ti);
>> > +		preload_add(ti, pc);
>> > +	}
>> > +
>> > +	for (i = 0; i < ti->slb_preload_nr; i++) {
>> > +		unsigned char idx;
>> > +		unsigned long ea;
>> > +
>> > +		idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
>> > +		ea = (unsigned long)ti->slb_preload_esid[idx] << SID_SHIFT;
>> > +
>> > +		slb_allocate_user(mm, ea);
>> > +	}
>> > +}
>> > +
>> >  /* Flush all user entries from the segment table of the current processor. */
>> >  void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>> >  {
>> > -	struct thread_info *ti = task_thread_info(tsk);
>> >  	unsigned char i;
>> >  
>> >  	/*
>> > @@ -502,29 +531,8 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>> >  
>> >  	copy_mm_to_paca(mm);
>> >  
>> > -	/*
>> > -	 * We gradually age out SLBs after a number of context switches to
>> > -	 * reduce reload overhead of unused entries (like we do with FP/VEC
>> > -	 * reload). Each time we wrap 256 switches, take an entry out of the
>> > -	 * SLB preload cache.
>> > -	 */
>> > -	tsk->thread.load_slb++;
>> > -	if (!tsk->thread.load_slb) {
>> > -		unsigned long pc = KSTK_EIP(tsk);
>> > -
>> > -		preload_age(ti);
>> > -		preload_add(ti, pc);
>> > -	}
>> > -
>> > -	for (i = 0; i < ti->slb_preload_nr; i++) {
>> > -		unsigned char idx;
>> > -		unsigned long ea;
>> > -
>> > -		idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
>> > -		ea = (unsigned long)ti->slb_preload_esid[idx] << SID_SHIFT;
>> > -
>> > -		slb_allocate_user(mm, ea);
>> > -	}
>> > +	if (!mm->context.skip_slb_preload)
>> > +		preload_slb_entries(tsk, mm);
>>
>> Should this be wrapped in likely()?
> 
> Seems like a good idea - yes.
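> 
> i.e. for v5 (sketch):
> 
> -	if (!mm->context.skip_slb_preload)
> -		preload_slb_entries(tsk, mm);
> +	if (likely(!mm->context.skip_slb_preload))
> +		preload_slb_entries(tsk, mm);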
> 
>>
>> >  
>> >  	/*
>> >  	 * Synchronize slbmte preloads with possible subsequent user memory
>>
>> Right below this comment is the isync. It seems to be specifically
>> concerned with synchronising preloaded slbs. Do you need it if you are
>> skipping SLB preloads?
>>
>> It's probably not a big deal to have an extra isync in the fairly rare
>> path when we're skipping preloads, but I thought I'd check.
> 
> I don't _think_ we need the `isync` if we are skipping the SLB preloads,
> but then again it was always in this code path before. If someone can
> make a compelling argument for dropping it when not preloading SLBs I
> will; otherwise (considering some of the other non-obvious things I
> stepped into with the Hash code) I will keep it here for now.

The ISA says slbia wants an isync afterward, so we probably should keep 
it. The comment is a bit misleading in that case.
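
In other words, the tail of switch_slb() wants to stay roughly like this
even when preloads are skipped (a sketch of the ordering, not the exact
code):

	if (likely(!mm->context.skip_slb_preload))
		preload_slb_entries(tsk, mm);

	/*
	 * Keep the isync unconditional: it orders any slbmte preloads
	 * against subsequent user memory accesses, and the ISA wants a
	 * context synchronizing instruction after slbia regardless.
	 */
	asm volatile("isync" : : : "memory");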

Why isn't preloading appropriate for a temporary mm? 

Thanks,
Nick
