From: Will Deacon <will.deacon@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org,
	npiggin@gmail.com, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux@armlinux.org.uk, heiko.carstens@de.ibm.com
Subject: Re: [RFC][PATCH 01/11] asm-generic/tlb: Provide a comment
Date: Fri, 14 Sep 2018 17:48:57 +0100	[thread overview]
Message-ID: <20180914164857.GG6236@arm.com> (raw)
In-Reply-To: <20180913092811.894806629@infradead.org>

Hi Peter,

On Thu, Sep 13, 2018 at 11:21:11AM +0200, Peter Zijlstra wrote:
> Write a comment explaining some of this..

This comment is much-needed, thanks! Some comments inline.

> + * The mmu_gather API consists of:
> + *
> + *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
> + *
> + *    Finish in particular will issue a (final) TLB invalidate and free
> + *    all (remaining) queued pages.
> + *
> + *  - tlb_start_vma() / tlb_end_vma(); marks the start / end of a VMA
> + *
> + *    Defaults to flushing at tlb_end_vma() to reset the range; helps when
> + *    there are large holes between the VMAs.
> + *
> + *  - tlb_remove_page() / __tlb_remove_page()
> + *  - tlb_remove_page_size() / __tlb_remove_page_size()
> + *
> + *    __tlb_remove_page_size() is the basic primitive that queues a page for
> + *    freeing. __tlb_remove_page() assumes PAGE_SIZE. Both will return a
> + *    boolean indicating if the queue is (now) full and a call to
> + *    tlb_flush_mmu() is required.
> + *
> + *    tlb_remove_page() and tlb_remove_page_size() imply the call to
> + *    tlb_flush_mmu() when required and have no return value.
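
To make the queue-full contract concrete, here is a toy user-space model (not kernel code; the names, the batch size, and the flush counter are all made up for illustration) of how __tlb_remove_page() relates to tlb_remove_page():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BATCH_MAX 4 /* illustrative; the kernel batches far more pages */

struct toy_gather {
	void *pages[BATCH_MAX];
	size_t nr;
	int flushes; /* count tlb_flush_mmu() calls, for demonstration only */
};

/* Model of tlb_flush_mmu(): invalidate, then free the queued pages. */
static void toy_flush_mmu(struct toy_gather *tlb)
{
	/* tlb_flush_mmu_tlbonly() then tlb_flush_mmu_free() would go here */
	tlb->nr = 0;
	tlb->flushes++;
}

/* Model of __tlb_remove_page(): queue a page and report whether the
 * queue is now full, i.e. whether the caller must issue tlb_flush_mmu(). */
static bool toy___remove_page(struct toy_gather *tlb, void *page)
{
	tlb->pages[tlb->nr++] = page;
	return tlb->nr == BATCH_MAX;
}

/* Model of tlb_remove_page(): same, but flushes on the caller's behalf. */
static void toy_remove_page(struct toy_gather *tlb, void *page)
{
	if (toy___remove_page(tlb, page))
		toy_flush_mmu(tlb);
}
```

The __ variant exists for callers that want to react to the full queue themselves (e.g. to drop a lock first) rather than have the flush issued implicitly.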
> + *
> + *  - tlb_change_page_size()

This doesn't seem to exist in my tree.
[I've since realised you rename to this in the next patch]

> + *
> + *    call before __tlb_remove_page*() to set the current page-size; implies a
> + *    possible tlb_flush_mmu() call.
> + *
> + *  - tlb_flush_mmu() / tlb_flush_mmu_tlbonly() / tlb_flush_mmu_free()
> + *
> + *    tlb_flush_mmu_tlbonly() - does the TLB invalidate (and resets
> + *                              related state, like the range)
> + *
> + *    tlb_flush_mmu_free() - frees the queued pages; make absolutely
> + *			     sure no additional tlb_remove_page()
> + *			     calls happen between _tlbonly() and this.
> + *
> + *    tlb_flush_mmu() - the above two calls.
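
The ordering constraint can be sketched like so (again an illustrative user-space model, not the kernel code): a page queued after the invalidate but before the free would be handed back to the allocator while stale TLB entries could still map it.

```c
#include <assert.h>
#include <stddef.h>

#define QMAX 8

struct toy_gather {
	void *queue[QMAX];
	size_t nr;
	unsigned long start, end; /* range accumulated for invalidation */
};

/* Model of tlb_flush_mmu_tlbonly(): TLB invalidate, then reset the
 * range state ready for the next batch. */
static void toy_flush_tlbonly(struct toy_gather *tlb)
{
	/* hardware invalidate of [start, end) would happen here */
	tlb->start = ~0UL;
	tlb->end = 0;
}

/* Model of tlb_flush_mmu_free(): free everything queued so far.
 * Nothing may be queued between the invalidate above and this. */
static void toy_flush_free(struct toy_gather *tlb)
{
	tlb->nr = 0; /* pages handed back to the allocator here */
}

/* tlb_flush_mmu() is simply both, in this order. */
static void toy_flush_mmu(struct toy_gather *tlb)
{
	toy_flush_tlbonly(tlb);
	toy_flush_free(tlb);
}
```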
> + *
> + *  - mmu_gather::fullmm
> + *
> + *    A flag set by tlb_gather_mmu() to indicate we're going to free
> + *    the entire mm; this allows a number of optimizations.
> + *
> + *    XXX list optimizations

On arm64, we can elide the invalidation altogether because we won't
re-allocate the ASID. We also have an invalidate-by-ASID (mm) instruction,
which we could use if we needed to.
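
For illustration, the elision could look roughly like this (a sketch only, not the actual arm64 code; the helper name and the counter are made up):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_gather {
	bool fullmm;
	unsigned long start, end;
};

static int invalidates; /* count hardware invalidations issued */

/* Sketch of an arch tlb_flush(): if the whole mm is going away and its
 * ASID will never be re-allocated, no invalidation is needed at all --
 * any stale entries tagged with that ASID can never be hit again. */
static void toy_arch_tlb_flush(struct toy_gather *tlb)
{
	if (tlb->fullmm)
		return; /* ASID retires with the mm; nothing to do */
	invalidates++; /* range invalidate of [start, end) otherwise */
}
```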

> + *
> + *  - mmu_gather::need_flush_all
> + *
> + *    A flag that can be set by the arch code if it wants to force
> + *    flush the entire TLB irrespective of the range. For instance
> + *    x86-PAE needs this when changing top-level entries.
> + *
> + * And requires the architecture to provide and implement tlb_flush().
> + *
> + * tlb_flush() may, in addition to the above mentioned mmu_gather fields, make
> + * use of:
> + *
> + *  - mmu_gather::start / mmu_gather::end
> + *
> + *    which (when !need_flush_all; fullmm will have start = end = ~0UL) provides
> + *    the range that needs to be flushed to cover the pages to be freed.

I don't understand the mention of need_flush_all here -- I didn't think it
was used by the core code at all.
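
As a sketch of how an arch tlb_flush() might consume these fields (illustrative only, and whether need_flush_all is really consulted here is exactly the question above):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_gather {
	bool need_flush_all;
	unsigned long start, end;
};

enum flush_kind { FLUSH_NONE, FLUSH_ALL, FLUSH_RANGE };

/* Sketch: a full flush when the arch demanded it, a range flush for a
 * non-empty range, and nothing otherwise. */
static enum flush_kind toy_tlb_flush(const struct toy_gather *tlb)
{
	if (tlb->need_flush_all)
		return FLUSH_ALL;   /* ignore start/end entirely */
	if (tlb->start >= tlb->end)
		return FLUSH_NONE;  /* nothing was accumulated */
	return FLUSH_RANGE;         /* invalidate [start, end) */
}
```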

> + *
> + *  - mmu_gather::freed_tables
> + *
> + *    set when we freed page table pages
> + *
> + *  - tlb_get_unmap_shift() / tlb_get_unmap_size()
> + *
> + *    returns the smallest TLB entry size unmapped in this range
> + *
> + * Additionally there are a few opt-in features:
> + *
> + *  HAVE_MMU_GATHER_PAGE_SIZE
> + *
> + *  This ensures we call tlb_flush() every time tlb_change_page_size() actually
> + *  changes the size and provides mmu_gather::page_size to tlb_flush().

Ah, you add this later in the series. I think Nick reckoned we could get rid
of this (the page_size field) eventually...

Will
