From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, benh@au1.ibm.com,
torvalds@linux-foundation.org, npiggin@gmail.com,
catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org,
Will Deacon <will.deacon@arm.com>
Subject: [PATCH 12/12] arm64: tlb: Rewrite stale comment in asm/tlbflush.h
Date: Thu, 30 Aug 2018 17:15:46 +0100
Message-ID: <1535645747-9823-13-git-send-email-will.deacon@arm.com>
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>

Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but
actually that whole block comment needs to be rewritten.
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 80 +++++++++++++++++++++++++++------------
1 file changed, 55 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index c98ed8871030..c3c0387aee18 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -70,43 +70,73 @@
})
/*
- * TLB Management
- * ==============
+ * TLB Invalidation
+ * ================
*
- * The TLB specific code is expected to perform whatever tests it needs
- * to determine if it should invalidate the TLB for each call. Start
- * addresses are inclusive and end addresses are exclusive; it is safe to
- * round these addresses down.
+ * This header file implements the low-level TLB invalidation routines
+ * (sometimes referred to as "flushing" in the kernel) for arm64.
*
- * flush_tlb_all()
+ * Every invalidation operation uses the following template:
+ *
+ * DSB ISHST // Ensure prior page-table updates have completed
+ * TLBI ... // Invalidate the TLB
+ * DSB ISH // Ensure the TLB invalidation has completed
+ * if (invalidated kernel mappings)
+ * ISB // Discard any instructions fetched from the old mapping
+ *
+ *
+ * The following functions form part of the "core" TLB invalidation API,
+ * as documented in Documentation/core-api/cachetlb.rst:
*
- * Invalidate the entire TLB.
+ * flush_tlb_all()
+ * Invalidate the entire TLB (kernel + user) on all CPUs
*
* flush_tlb_mm(mm)
+ * Invalidate an entire user address space on all CPUs.
+ * The 'mm' argument identifies the ASID to invalidate.
+ *
+ * flush_tlb_range(vma, start, end)
+ * Invalidate the virtual-address range '[start, end)' on all
+ * CPUs for the user address space corresponding to 'vma->mm'.
+ * Note that this operation also invalidates any walk-cache
+ * entries associated with translations for the specified address
+ * range.
+ *
+ * flush_tlb_kernel_range(start, end)
+ * Same as flush_tlb_range(..., start, end), but applies to
+ * kernel mappings rather than a particular user address space.
+ * Whilst not explicitly documented, this function is used when
+ * unmapping pages from vmalloc/io space.
+ *
+ * flush_tlb_page(vma, addr)
+ * Invalidate a single user mapping for address 'addr' in the
+ * address space corresponding to 'vma->mm'. Note that this
+ * operation only invalidates a single, last-level page-table
+ * entry and therefore does not affect any walk-caches.
*
- * Invalidate all TLB entries in a particular address space.
- * - mm - mm_struct describing address space
*
- * flush_tlb_range(mm,start,end)
+ * Next, we have some undocumented invalidation routines that you probably
+ * don't want to call unless you know what you're doing:
*
- * Invalidate a range of TLB entries in the specified address
- * space.
- * - mm - mm_struct describing address space
- * - start - start address (may not be aligned)
- * - end - end address (exclusive, may not be aligned)
+ * local_flush_tlb_all()
+ * Same as flush_tlb_all(), but only applies to the calling CPU.
*
- * flush_tlb_page(vaddr,vma)
+ * __flush_tlb_kernel_pgtable(addr)
+ * Invalidate a single kernel mapping for address 'addr' on all
+ * CPUs, ensuring that any walk-cache entries associated with the
+ * translation are also invalidated.
*
- * Invalidate the specified page in the specified address range.
- * - vaddr - virtual address (may not be aligned)
- * - vma - vma_struct describing address range
+ * __flush_tlb_range(vma, start, end, stride, last_level)
+ * Invalidate the virtual-address range '[start, end)' on all
+ * CPUs for the user address space corresponding to 'vma->mm'.
+ * The invalidation operations are issued at a granularity
+ * determined by 'stride' and only affect any walk-cache entries
+ * if 'last_level' is false.
*
- * flush_kern_tlb_page(kaddr)
*
- * Invalidate the TLB entry for the specified page. The address
- * will be in the kernels virtual memory space. Current uses
- * only require the D-TLB to be invalidated.
- * - kaddr - Kernel virtual memory address
+ * Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
+ * on top of these routines, since that is our interface to the mmu_gather
+ * API as used by munmap() and friends.
*/
static inline void local_flush_tlb_all(void)
{
--
2.1.4
Thread overview: 26+ messages
2018-08-30 16:15 [PATCH 00/12] Avoid synchronous TLB invalidation for intermediate page-table entries on arm64 Will Deacon
2018-08-30 16:15 ` [PATCH 01/12] arm64: tlb: Use last-level invalidation in flush_tlb_kernel_range() Will Deacon
2018-08-30 16:15 ` [PATCH 02/12] arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable() Will Deacon
2018-08-30 16:15 ` [PATCH 03/12] arm64: pgtable: Implement p[mu]d_valid() and check in set_p[mu]d() Will Deacon
2018-08-30 16:15 ` [PATCH 04/12] arm64: tlb: Justify non-leaf invalidation in flush_tlb_range() Will Deacon
2018-08-30 16:15 ` [PATCH 05/12] arm64: tlbflush: Allow stride to be specified for __flush_tlb_range() Will Deacon
2018-08-30 16:15 ` [PATCH 06/12] arm64: tlb: Remove redundant !CONFIG_HAVE_RCU_TABLE_FREE code Will Deacon
2018-08-30 16:15 ` [PATCH 07/12] asm-generic/tlb: Guard with #ifdef CONFIG_MMU Will Deacon
2018-08-30 16:15 ` [PATCH 08/12] asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather Will Deacon
2018-08-30 16:15 ` [PATCH 09/12] asm-generic/tlb: Track which levels of the page tables have been cleared Will Deacon
2018-08-31 1:23 ` Nicholas Piggin
2018-08-30 16:15 ` [PATCH 10/12] arm64: tlb: Adjust stride and type of TLBI according to mmu_gather Will Deacon
2018-08-30 16:15 ` [PATCH 11/12] arm64: tlb: Avoid synchronous TLBIs when freeing page tables Will Deacon
2018-08-30 16:15 ` Will Deacon [this message]
2018-08-30 16:39 ` [PATCH 00/12] Avoid synchronous TLB invalidation for intermediate page-table entries on arm64 Linus Torvalds
2018-08-31 1:00 ` Nicholas Piggin
2018-08-31 1:04 ` Linus Torvalds
2018-08-31 9:54 ` Will Deacon
2018-08-31 10:10 ` Peter Zijlstra
2018-08-31 10:32 ` Nicholas Piggin
2018-08-31 10:49 ` Peter Zijlstra
2018-08-31 11:12 ` Will Deacon
2018-08-31 11:20 ` Peter Zijlstra
2018-08-31 11:50 ` Nicholas Piggin
2018-09-03 12:52 ` Will Deacon
2018-08-30 17:11 ` Peter Zijlstra