On Sat, 2018-01-20 at 20:22 +0100, KarimAllah Ahmed wrote:
> From: Tim Chen

I think this is probably From: Andi now rather than From: Tim?

We do need the series this far in order to have a full retpoline-based
mitigation, and I'd like to see that go in sooner rather than later.
There's a little more discussion to be had about the IBRS parts which
come later in the series (and the final one or two which weren't posted
yet).

I think this is the one patch of the "we want this now" IBPB set that
we expect serious debate on, which is why it's still a separate
"optimisation" patch on top of the previous one which just does IBPB
unconditionally.

> Flush indirect branches when switching into a process that marked
> itself non dumpable.  This protects high value processes like gpg
> better, without having too high performance overhead.
>
> Signed-off-by: Andi Kleen
> Signed-off-by: David Woodhouse
> Signed-off-by: KarimAllah Ahmed
> ---
>  arch/x86/mm/tlb.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 304de7d..f64e80c 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -225,8 +225,19 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	 * Avoid user/user BTB poisoning by flushing the branch predictor
>  	 * when switching between processes. This stops one process from
>  	 * doing Spectre-v2 attacks on another.
> +	 *
> +	 * As an optimization: Flush indirect branches only when
> +	 * switching into processes that disable dumping.
> +	 *
> +	 * This will not flush when switching into kernel threads.
> +	 * But it would flush when switching into idle and back
> +	 *
> +	 * It might be useful to have a one-off cache here
> +	 * to also not flush the idle case, but we would need some
> +	 * kind of stable sequence number to remember the previous mm.
>  	 */
> -	indirect_branch_prediction_barrier();
> +	if (tsk && tsk->mm && get_dumpable(tsk->mm) != SUID_DUMP_USER)
> +		indirect_branch_prediction_barrier();
>  
>  	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
>  		/*
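
[Editor's note, not part of the original thread: the condition added by the
patch keys off the task's "dumpable" flag, which userspace clears with
prctl(PR_SET_DUMPABLE, 0). The sketch below is a minimal, self-contained
illustration of how a security-sensitive program opts itself into the
conditional IBPB; it is not from the patch, and the error handling and
comments are illustrative assumptions only.]

	/*
	 * Illustrative userspace sketch: a process holding secrets (as gpg
	 * does) clears its dumpable flag, which also makes get_dumpable()
	 * return a value != SUID_DUMP_USER in the kernel-side check above.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/prctl.h>

	int main(void)
	{
		/*
		 * Setting dumpable to 0 disables core dumps and restricts
		 * ptrace attach by unprivileged processes; with the patch
		 * above, switches into this mm also get an IBPB.
		 */
		if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) != 0) {
			perror("prctl(PR_SET_DUMPABLE)");
			return EXIT_FAILURE;
		}

		printf("dumpable flag is now %d\n",
		       prctl(PR_GET_DUMPABLE, 0, 0, 0, 0));

		/* ... handle sensitive material here ... */
		return EXIT_SUCCESS;
	}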