* [PATCH v10 0/9] X86 TLB flush optimization
@ 2012-06-28  1:02 Alex Shi
  2012-06-28  1:02 ` [PATCH v10 1/9] x86/tlb_info: get last level TLB entry number of CPU Alex Shi
                   ` (8 more replies)
  0 siblings, 9 replies; 25+ messages in thread
From: Alex Shi @ 2012-06-28  1:02 UTC (permalink / raw)
  To: tglx, mingo, hpa, arnd, rostedt, fweisbec
  Cc: jeremy, alex.shi, luto, yinghai, riel, avi, len.brown, tj, akpm,
	cl, borislav.petkov, ak, jbeulich, eric.dumazet, akinobu.mita,
	vapier, cpw, steiner, viro, kamezawa.hiroyu, rientjes, aarcange,
	linux-kernel

Thanks to Fengguang's 0-day build system, which found two build errors
in the 1st and 7th patches.

This version fixes them by introducing a c_detect_tlb() member into
struct cpu_dev, so TLB entry detection can be done per CPU vendor.
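
Roughly, the idea is a per-vendor hook (a minimal sketch for
illustration only; names and fields here are abbreviated, see patch
1/9 for the real code):

struct cpu_dev {
	const char	*c_vendor;
	/* ... existing per-vendor callbacks ... */
	void		(*c_detect_tlb)(struct cpuinfo_x86 *c);
};

/* Vendor probe fills in the last level TLB entry counts. */
static void intel_detect_tlb(struct cpuinfo_x86 *c)
{
	/* decode CPUID leaf 2 descriptors, record ITLB/DTLB sizes */
}

static const struct cpu_dev intel_cpu_dev = {
	.c_vendor	= "Intel",
	/* ... */
	.c_detect_tlb	= intel_detect_tlb,
};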

Thanks for all the comments and testing on this patchset!

Alex

[PATCH v10 1/9] x86/tlb_info: get last level TLB entry number of CPU
[PATCH v10 2/9] x86/flush_tlb: try flush_tlb_single one by one in
[PATCH v10 3/9] x86/tlb: fall back to flush all when meet a THP
[PATCH v10 4/9] x86/tlb: add tlb_flushall_shift for specific CPU
[PATCH v10 5/9] x86/tlb: add tlb_flushall_shift knob into debugfs
[PATCH v10 6/9] mm/mmu_gather: enable tlb flush range in generic
[PATCH v10 7/9] x86/tlb: enable tlb flush range support for x86
[PATCH v10 8/9] x86/tlb: replace INVALIDATE_TLB_VECTOR by
[PATCH v10 9/9] x86/tlb: do flush_tlb_kernel_range by 'invlpg'

* [PATCH v9 3/9] x86/tlb: fall back to flush all when meet a THP large page
@ 2012-06-25  6:08 Alex Shi
  2012-06-26 15:14 ` [tip:x86/mm] " tip-bot for Alex Shi
  0 siblings, 1 reply; 25+ messages in thread
From: Alex Shi @ 2012-06-25  6:08 UTC (permalink / raw)
  To: tglx, mingo, hpa, arnd, rostedt, fweisbec
  Cc: jeremy, alex.shi, luto, yinghai, riel, avi, len.brown, tj, akpm,
	cl, borislav.petkov, ak, jbeulich, eric.dumazet, akinobu.mita,
	vapier, cpw, steiner, viro, kamezawa.hiroyu, rientjes, aarcange,
	linux-kernel, yongjie.ren

We don't need to flush large pages in PAGE_SIZE steps; that just wastes
time. And according to our macro benchmark, large pages don't actually
benefit from the 'invlpg' optimization. So just flushing the whole TLB
is enough for them.

The following results were measured on a 2-socket * 4-core * 2-HT NHM
(Nehalem) EP machine, with THP set to 'always'.

Multi-threaded testing; the '-t' parameter is the thread count (an
illustrative sketch of the test loop follows the table):
                       without this patch 	with this patch
./mprotect -t 1         14ns                       13ns
./mprotect -t 2         13ns                       13ns
./mprotect -t 4         12ns                       11ns
./mprotect -t 8         14ns                       10ns
./mprotect -t 16        28ns                       28ns
./mprotect -t 32        54ns                       52ns
./mprotect -t 128       200ns                      200ns
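
The benchmark program itself is not part of this series; the sketch
below only illustrates the kind of loop being timed (assumed test
shape, error handling omitted):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>

#define LEN	(16 * 4096)
#define LOOPS	1000000L

/* Each thread toggles protection on a private mapping and reports
 * the average cost of one mprotect() call. */
static void *worker(void *arg)
{
	char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct timespec t0, t1;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < LOOPS; i++) {
		mprotect(buf, LEN, PROT_READ);
		mprotect(buf, LEN, PROT_READ | PROT_WRITE);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.1f ns per mprotect\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / (2.0 * LOOPS));
	return arg;
}

int main(int argc, char **argv)
{
	int i, nthreads = argc > 1 ? atoi(argv[1]) : 1;	/* like '-t' */
	pthread_t *tids = calloc(nthreads, sizeof(*tids));

	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (i = 0; i < nthreads; i++)
		pthread_join(tids[i], NULL);
	return 0;
}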

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 arch/x86/mm/tlb.c |   34 ++++++++++++++++++++++++++++++++++
 1 files changed, 34 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 3b91c98..184a02a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -318,12 +318,42 @@ void flush_tlb_mm(struct mm_struct *mm)
 
 #define FLUSHALL_BAR	16
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline unsigned long has_large_page(struct mm_struct *mm,
+				 unsigned long start, unsigned long end)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	unsigned long addr = ALIGN(start, HPAGE_SIZE);
+	for (; addr < end; addr += HPAGE_SIZE) {
+		pgd = pgd_offset(mm, addr);
+		if (likely(!pgd_none(*pgd))) {
+			pud = pud_offset(pgd, addr);
+			if (likely(!pud_none(*pud))) {
+				pmd = pmd_offset(pud, addr);
+				if (likely(!pmd_none(*pmd)))
+					if (pmd_large(*pmd))
+						return addr;
+			}
+		}
+	}
+	return 0;
+}
+#else
+static inline unsigned long has_large_page(struct mm_struct *mm,
+				 unsigned long start, unsigned long end)
+{
+	return 0;
+}
+#endif
 void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm;
 
 	if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB) {
+flush_all:
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
@@ -346,6 +376,10 @@ void flush_tlb_range(struct vm_area_struct *vma,
 			if ((end - start)/PAGE_SIZE > act_entries/FLUSHALL_BAR)
 				local_flush_tlb();
 			else {
+				if (has_large_page(mm, start, end)) {
+					preempt_enable();
+					goto flush_all;
+				}
 				for (addr = start; addr < end;
 						addr += PAGE_SIZE)
 					__flush_tlb_single(addr);
-- 
1.7.5.4
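
To make the threshold above concrete: with FLUSHALL_BAR = 16 and,
purely for illustration, a last level DTLB of 512 entries
(act_entries = 512), flush_tlb_range() behaves as follows:

	(end - start)/PAGE_SIZE > act_entries/FLUSHALL_BAR
	more than 512/16 = 32 pages  -> local_flush_tlb()
	32 pages or fewer            -> __flush_tlb_single() per page,
	                                unless has_large_page() finds a
	                                THP mapping in the range, in
	                                which case it falls back to
	                                flush_tlb_mm().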




Thread overview: 25+ messages
2012-06-28  1:02 [PATCH v10 0/9] X86 TLB flush optimization Alex Shi
2012-06-28  1:02 ` [PATCH v10 1/9] x86/tlb_info: get last level TLB entry number of CPU Alex Shi
2012-06-28 15:37   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 2/9] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range Alex Shi
2012-06-28 15:38   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 3/9] x86/tlb: fall back to flush all when meet a THP large page Alex Shi
2012-06-28 15:39   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 4/9] x86/tlb: add tlb_flushall_shift for specific CPU Alex Shi
2012-06-28 15:40   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 5/9] x86/tlb: add tlb_flushall_shift knob into debugfs Alex Shi
2012-06-28 15:41   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 6/9] mm/mmu_gather: enable tlb flush range in generic mmu_gather Alex Shi
2012-06-28 15:42   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 7/9] x86/tlb: enable tlb flush range support for x86 Alex Shi
2012-06-28 15:42   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-07-19 12:20   ` [PATCH v10 7/9] " Borislav Petkov
2012-07-19 23:52     ` Alex Shi
2012-07-19 23:56       ` H. Peter Anvin
2012-07-20  0:06         ` Alex Shi
2012-07-20  0:44           ` H. Peter Anvin
2012-06-28  1:02 ` [PATCH v10 8/9] x86/tlb: replace INVALIDATE_TLB_VECTOR by CALL_FUNCTION_VECTOR Alex Shi
2012-06-28 15:43   ` [tip:x86/mm] " tip-bot for Alex Shi
2012-06-28  1:02 ` [PATCH v10 9/9] x86/tlb: do flush_tlb_kernel_range by 'invlpg' Alex Shi
2012-06-28 15:44   ` [tip:x86/mm] " tip-bot for Alex Shi
  -- strict thread matches above, loose matches on Subject: below --
2012-06-25  6:08 [PATCH v9 3/9] x86/tlb: fall back to flush all when meet a THP large page Alex Shi
2012-06-26 15:14 ` [tip:x86/mm] " tip-bot for Alex Shi
