From: Borislav Petkov <bp@alien8.de>
To: Andy Lutomirski <luto@kernel.org>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Nadav Amit <nadav.amit@gmail.com>, Rik van Riel <riel@redhat.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Arjan van de Ven <arjan@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Andrew Banman <abanman@sgi.com>, Mike Travis <travis@sgi.com>,
	Dimitri Sivanich <sivanich@sgi.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v3 06/11] x86/mm: Rework lazy TLB mode and TLB freshness tracking
Date: Thu, 22 Jun 2017 16:50:13 +0200	[thread overview]
Message-ID: <20170622145013.n3slk7ip6wpany5d@pd.tnic> (raw)
In-Reply-To: <70f3a61658aa7c1c89f4db6a4f81d8df9e396ade.1498022414.git.luto@kernel.org>

On Tue, Jun 20, 2017 at 10:22:12PM -0700, Andy Lutomirski wrote:
> x86's lazy TLB mode used to be fairly weak -- it would switch to
> init_mm the first time it tried to flush a lazy TLB.  This meant an
> unnecessary CR3 write and, if the flush was remote, an unnecessary
> IPI.
> 
> Rewrite it entirely.  When we enter lazy mode, we simply remove the
> cpu from mm_cpumask.  This means that we need a way to figure out

s/cpu/CPU/

> whether we've missed a flush when we switch back out of lazy mode.
> I use the tlb_gen machinery to track whether a context is up to
> date.
> 
> Note to reviewers: this patch, by itself, looks a bit odd.  I'm
> using an array of length 1 containing (ctx_id, tlb_gen) rather than
> just storing tlb_gen, and making it an array isn't necessary yet.
> I'm doing this because the next few patches add PCID support, and,
> with PCID, we need ctx_id, and the array will end up with a length
> greater than 1.  Making it an array now means that there will be
> less churn and therefore less stress on your eyeballs.
> 
> NB: This is dubious but, AFAICT, still correct on Xen and UV.
> xen_exit_mmap() uses mm_cpumask() for nefarious purposes and this
> patch changes the way that mm_cpumask() works.  This should be okay,
> since Xen *also* iterates all online CPUs to find all the CPUs it
> needs to twiddle.

This whole text should be under the "---" line below if we don't want it
in the commit message.
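
(git-am keeps everything above the "---" but drops everything between
it and the diffstat, so notes placed there reach reviewers without
ending up in git history.)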

> 
> The UV tlbflush code is rather dated and should be changed.
> 
> Cc: Andrew Banman <abanman@sgi.com>
> Cc: Mike Travis <travis@sgi.com>
> Cc: Dimitri Sivanich <sivanich@sgi.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  arch/x86/include/asm/mmu_context.h |   6 +-
>  arch/x86/include/asm/tlbflush.h    |   4 -
>  arch/x86/mm/init.c                 |   1 -
>  arch/x86/mm/tlb.c                  | 227 +++++++++++++++++++------------------
>  arch/x86/xen/mmu_pv.c              |   3 +-
>  5 files changed, 119 insertions(+), 122 deletions(-)
> 
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index e5295d485899..69a4f1ee86ac 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -125,8 +125,10 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
>  
>  static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>  {
> -	if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
> -		this_cpu_write(cpu_tlbstate.state, TLBSTATE_LAZY);
> +	int cpu = smp_processor_id();
> +
> +	if (cpumask_test_cpu(cpu, mm_cpumask(mm)))
> +		cpumask_clear_cpu(cpu, mm_cpumask(mm));

It seems we haz a helper for that: cpumask_test_and_clear_cpu(), which
does BTR straightaway.
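
IOW, something like this (just a sketch):

	static inline void enter_lazy_tlb(struct mm_struct *mm,
					  struct task_struct *tsk)
	{
		int cpu = smp_processor_id();

		/* One LOCK BTR, no separate test needed. */
		cpumask_test_and_clear_cpu(cpu, mm_cpumask(mm));
	}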

>  extern atomic64_t last_mm_ctx_id;
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 4f6c30d6ec39..87b13e51e867 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -95,7 +95,6 @@ struct tlb_state {
>  	 * mode even if we've already switched back to swapper_pg_dir.
>  	 */
>  	struct mm_struct *loaded_mm;
> -	int state;
>  
>  	/*
>  	 * Access to this CR4 shadow and to H/W CR4 is protected by
> @@ -310,9 +309,6 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>  void native_flush_tlb_others(const struct cpumask *cpumask,
>  			     const struct flush_tlb_info *info);
>  
> -#define TLBSTATE_OK	1
> -#define TLBSTATE_LAZY	2
> -
>  static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>  					struct mm_struct *mm)
>  {
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 88ee942cb47d..7d6fa4676af9 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -812,7 +812,6 @@ void __init zone_sizes_init(void)
>  
>  DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = {
>  	.loaded_mm = &init_mm,
> -	.state = 0,
>  	.cr4 = ~0UL,	/* fail hard if we screw up cr4 shadow initialization */
>  };
>  EXPORT_SYMBOL_GPL(cpu_tlbstate);
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 9f5ef7a5e74a..fea2b07ac7d8 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -45,8 +45,8 @@ void leave_mm(int cpu)
>  	if (loaded_mm == &init_mm)
>  		return;
>  
> -	if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
> -		BUG();
> +	/* Warn if we're not lazy. */
> +	WARN_ON(cpumask_test_cpu(smp_processor_id(), mm_cpumask(loaded_mm)));

We don't BUG() anymore?

>  
>  	switch_mm(NULL, &init_mm, NULL);
>  }
> @@ -67,133 +67,118 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  {
>  	unsigned cpu = smp_processor_id();
>  	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
> +	u64 next_tlb_gen;

Please sort the function-local variable declarations in reverse
Christmas tree order:

	<type> longest_variable_name;
	<type> shorter_var_name;
	<type> even_shorter;
	<type> i;
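
For the locals above, that'd be something like:

	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
	unsigned cpu = smp_processor_id();
	u64 next_tlb_gen;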

>  
>  	/*
> -	 * NB: The scheduler will call us with prev == next when
> -	 * switching from lazy TLB mode to normal mode if active_mm
> -	 * isn't changing.  When this happens, there is no guarantee
> -	 * that CR3 (and hence cpu_tlbstate.loaded_mm) matches next.
> +	 * NB: The scheduler will call us with prev == next when switching
> +	 * from lazy TLB mode to normal mode if active_mm isn't changing.
> +	 * When this happens, we don't assume that CR3 (and hence
> +	 * cpu_tlbstate.loaded_mm) matches next.
>  	 *
>  	 * NB: leave_mm() calls us with prev == NULL and tsk == NULL.
>  	 */
>  
> -	this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
> +	/* We don't want flush_tlb_func_* to run concurrently with us. */
> +	if (IS_ENABLED(CONFIG_PROVE_LOCKING))
> +		WARN_ON_ONCE(!irqs_disabled());
> +
> +	VM_BUG_ON(read_cr3_pa() != __pa(real_prev->pgd));

Why do we need that check? Can that ever happen?

>  	if (real_prev == next) {
> -		/*
> -		 * There's nothing to do: we always keep the per-mm control
> -		 * regs in sync with cpu_tlbstate.loaded_mm.  Just
> -		 * sanity-check mm_cpumask.
> -		 */
> -		if (WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(next))))
> -			cpumask_set_cpu(cpu, mm_cpumask(next));
> -		return;
> -	}
> +		if (cpumask_test_cpu(cpu, mm_cpumask(next))) {
> +			/*
> +			 * There's nothing to do: we weren't lazy, and we
> +			 * aren't changing our mm.  We don't need to flush
> +			 * anything, nor do we need to update CR3, CR4, or
> +			 * LDTR.
> +			 */

Nice comment.

> +			return;
> +		}
> +
> +		/* Resume remote flushes and then read tlb_gen. */
> +		cpumask_set_cpu(cpu, mm_cpumask(next));
> +		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
> +
> +		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[0].ctx_id) !=
> +			  next->context.ctx_id);

I guess this check should be right under the if (real_prev == next), right?
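
I.e. (sketch):

	if (real_prev == next) {
		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[0].ctx_id) !=
			  next->context.ctx_id);

		if (cpumask_test_cpu(cpu, mm_cpumask(next))) {
			...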

> +
> +		if (this_cpu_read(cpu_tlbstate.ctxs[0].tlb_gen) <
> +		    next_tlb_gen) {

Yeah, let it stick out - that trailing '<' doesn't make it any nicer.
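
I.e.:

		if (this_cpu_read(cpu_tlbstate.ctxs[0].tlb_gen) < next_tlb_gen) {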

> +			/*
> +			 * Ideally, we'd have a flush_tlb() variant that
> +			 * takes the known CR3 value as input.  This would
> +			 * be faster on Xen PV and on hypothetical CPUs
> +			 * on which INVPCID is fast.
> +			 */
> +			this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen,
> +				       next_tlb_gen);
> +			write_cr3(__pa(next->pgd));

<---- newline here.

> +			/*
> +			 * This gets called via leave_mm() in the idle path
> +			 * where RCU functions differently.  Tracing normally
> +			 * uses RCU, so we have to call the tracepoint
> +			 * specially here.
> +			 */
> +			trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH,
> +						TLB_FLUSH_ALL);
> +		}
>  
> -	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
>  		/*
> -		 * If our current stack is in vmalloc space and isn't
> -		 * mapped in the new pgd, we'll double-fault.  Forcibly
> -		 * map it.
> +		 * We just exited lazy mode, which means that CR4 and/or LDTR
> +		 * may be stale.  (Changes to the required CR4 and LDTR states
> +		 * are not reflected in tlb_gen.)

We need that comment because... ? I mean, we do update CR4/LDTR at the
end of the function.

>  		 */
> -		unsigned int stack_pgd_index = pgd_index(current_stack_pointer());
> +	} else {
> +		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
> +			/*
> +			 * If our current stack is in vmalloc space and isn't
> +			 * mapped in the new pgd, we'll double-fault.  Forcibly
> +			 * map it.
> +			 */
> +			unsigned int stack_pgd_index =

Shorten that var name and make it fit into 80ish cols so that the
linebreak is gone.
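
E.g. something like this ("idx" being just a placeholder name):

			unsigned int idx = pgd_index(current_stack_pointer());
			pgd_t *pgd = next->pgd + idx;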

> +				pgd_index(current_stack_pointer());
> +
> +			pgd_t *pgd = next->pgd + stack_pgd_index;
> +
> +			if (unlikely(pgd_none(*pgd)))
> +				set_pgd(pgd, init_mm.pgd[stack_pgd_index]);
> +		}
>  
> -		pgd_t *pgd = next->pgd + stack_pgd_index;
> +		/* Stop remote flushes for the previous mm */
> +		if (cpumask_test_cpu(cpu, mm_cpumask(real_prev)))
> +			cpumask_clear_cpu(cpu, mm_cpumask(real_prev));

cpumask_test_and_clear_cpu()
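
Same deal as above:

		/* Stop remote flushes for the previous mm */
		cpumask_test_and_clear_cpu(cpu, mm_cpumask(real_prev));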

>  
> -		if (unlikely(pgd_none(*pgd)))
> -			set_pgd(pgd, init_mm.pgd[stack_pgd_index]);
> -	}
> +		WARN_ON_ONCE(cpumask_test_cpu(cpu, mm_cpumask(next)));

We warn if the next task is not lazy because...?

> -	this_cpu_write(cpu_tlbstate.loaded_mm, next);
> -	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, next->context.ctx_id);
> -	this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen,
> -		       atomic64_read(&next->context.tlb_gen));
> +		/*
> +		 * Start remote flushes and then read tlb_gen.
> +		 */
> +		cpumask_set_cpu(cpu, mm_cpumask(next));
> +		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
>  
> -	WARN_ON_ONCE(cpumask_test_cpu(cpu, mm_cpumask(next)));
> -	cpumask_set_cpu(cpu, mm_cpumask(next));
> +		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[0].ctx_id) ==
> +			  next->context.ctx_id);

Also put it as the first statement after the else { ?

> -	/*
> -	 * Re-load page tables.
> -	 *
> -	 * This logic has an ordering constraint:
> -	 *
> -	 *  CPU 0: Write to a PTE for 'next'
> -	 *  CPU 0: load bit 1 in mm_cpumask.  if nonzero, send IPI.
> -	 *  CPU 1: set bit 1 in next's mm_cpumask
> -	 *  CPU 1: load from the PTE that CPU 0 writes (implicit)
> -	 *
> -	 * We need to prevent an outcome in which CPU 1 observes
> -	 * the new PTE value and CPU 0 observes bit 1 clear in
> -	 * mm_cpumask.  (If that occurs, then the IPI will never
> -	 * be sent, and CPU 0's TLB will contain a stale entry.)
> -	 *
> -	 * The bad outcome can occur if either CPU's load is
> -	 * reordered before that CPU's store, so both CPUs must
> -	 * execute full barriers to prevent this from happening.
> -	 *
> -	 * Thus, switch_mm needs a full barrier between the
> -	 * store to mm_cpumask and any operation that could load
> -	 * from next->pgd.  TLB fills are special and can happen
> -	 * due to instruction fetches or for no reason at all,
> -	 * and neither LOCK nor MFENCE orders them.
> -	 * Fortunately, load_cr3() is serializing and gives the
> -	 * ordering guarantee we need.
> -	 */
> -	load_cr3(next->pgd);
> -
> -	/*
> -	 * This gets called via leave_mm() in the idle path where RCU
> -	 * functions differently.  Tracing normally uses RCU, so we have to
> -	 * call the tracepoint specially here.
> -	 */
> -	trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
> +		this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id,
> +			       next->context.ctx_id);
> +		this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen,
> +			       next_tlb_gen);

Yeah, just let all those three stick out.
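
I.e.:

		this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, next->context.ctx_id);
		this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen, next_tlb_gen);
		this_cpu_write(cpu_tlbstate.loaded_mm, next);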

Also, no:

                if (this_cpu_read(cpu_tlbstate.ctxs[0].tlb_gen) <
                    next_tlb_gen) {

check?

> +		this_cpu_write(cpu_tlbstate.loaded_mm, next);
> +		write_cr3(__pa(next->pgd));
>  
> -	/* Stop flush ipis for the previous mm */
> -	WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(real_prev)) &&
> -		     real_prev != &init_mm);
> -	cpumask_clear_cpu(cpu, mm_cpumask(real_prev));
> +		/*
> +		 * This gets called via leave_mm() in the idle path where RCU
> +		 * functions differently.  Tracing normally uses RCU, so we
> +		 * have to call the tracepoint specially here.
> +		 */
> +		trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH,
> +					TLB_FLUSH_ALL);
> +	}

That's the same as in the if-branch above. Move it out, I guess.
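
I.e., something like this at the end (a sketch - note the if-branch
above only traces when it actually flushed, so hoisting the call out
unconditionally would be a slight change in behavior there):

	}

	/*
	 * This gets called via leave_mm() in the idle path where RCU
	 * functions differently.  Tracing normally uses RCU, so we have to
	 * call the tracepoint specially here.
	 */
	trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);

	load_mm_cr4(next);
	switch_ldt(real_prev, next);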

>  
> -	/* Load per-mm CR4 and LDTR state */
>  	load_mm_cr4(next);
>  	switch_ldt(real_prev, next);
>  }
-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

