From: Alex Shi <alex.shi@intel.com>
To: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com,
	arnd@arndb.de, rostedt@goodmis.org, fweisbec@gmail.com
Cc: jeremy@goop.org, riel@redhat.com, luto@mit.edu,
	alex.shi@intel.com, avi@redhat.com, len.brown@intel.com,
	dhowells@redhat.com, fenghua.yu@intel.com,
	borislav.petkov@amd.com, yinghai@kernel.org, ak@linux.intel.com,
	cpw@sgi.com, steiner@sgi.com, akpm@linux-foundation.org,
	penberg@kernel.org, hughd@google.com, rientjes@google.com,
	kosaki.motohiro@jp.fujitsu.com, n-horiguchi@ah.jp.nec.com,
	tj@kernel.org, oleg@redhat.com, axboe@kernel.dk,
	jmorris@namei.org, a.p.zijlstra@chello.nl,
	kamezawa.hiroyu@jp.fujitsu.com, viro@zeniv.linux.org.uk,
	linux-kernel@vger.kernel.org, yongjie.ren@intel.com
Subject: [PATCH v5 4/7] x86/tlb: fall back to flush all when meeting a THP large page
Date: Tue, 15 May 2012 16:55:35 +0800	[thread overview]
Message-ID: <1337072138-8323-5-git-send-email-alex.shi@intel.com> (raw)
In-Reply-To: <1337072138-8323-1-git-send-email-alex.shi@intel.com>

We don't need to flush a large page in PAGE_SIZE steps; that just wastes
time, since a single 2MB huge page would otherwise take 512 'invlpg'
operations. And according to our macro benchmark, large pages don't
benefit from the 'invlpg' optimization anyway, so simply flushing the
whole TLB is enough for them.

The following results were measured on a 2-CPU * 4-core * 2-HT NHM
(Nehalem) EP machine, with THP set to 'always'.

Multi-threaded testing; the '-t' parameter is the number of threads (an
illustrative sketch of such a test loop follows the table):
                        without this patch    with this patch
./mprotect -t 1         14ns                  13ns
./mprotect -t 2         13ns                  13ns
./mprotect -t 4         12ns                  11ns
./mprotect -t 8         14ns                  10ns
./mprotect -t 16        28ns                  28ns
./mprotect -t 32        54ns                  52ns
./mprotect -t 128       200ns                 200ns
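
The mprotect test program itself is not included in this mail; below is a
minimal sketch of what such a multi-threaded mprotect() timing loop might
look like. The mapping size, iteration count and the simplistic '-t'
option handling are illustrative assumptions, not the actual benchmark.

/* gcc -O2 -pthread -o mprotect mprotect.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define MAP_LEN		(4UL << 20)	/* 4MB per thread, THP-eligible */
#define LOOPS		100000UL
#define MAX_THREADS	128

static void *worker(void *arg)
{
	char *buf = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct timespec t0, t1;
	unsigned long i;

	if (buf == MAP_FAILED)
		return NULL;
	memset(buf, 1, MAP_LEN);	/* fault pages in, let THP collapse them */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < LOOPS; i++) {
		/* toggling protection forces TLB range flushes */
		mprotect(buf, MAP_LEN, PROT_READ);
		mprotect(buf, MAP_LEN, PROT_READ | PROT_WRITE);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	/* average nanoseconds per mprotect() call for this thread */
	*(unsigned long *)arg = ((t1.tv_sec - t0.tv_sec) * 1000000000UL +
				 t1.tv_nsec - t0.tv_nsec) / (2 * LOOPS);
	munmap(buf, MAP_LEN);
	return NULL;
}

int main(int argc, char **argv)
{
	int i, nr = (argc > 2) ? atoi(argv[2]) : 1;	/* "-t <threads>" */
	pthread_t tid[MAX_THREADS];
	unsigned long ns[MAX_THREADS] = { 0 }, sum = 0;

	if (nr < 1)
		nr = 1;
	if (nr > MAX_THREADS)
		nr = MAX_THREADS;
	for (i = 0; i < nr; i++)
		pthread_create(&tid[i], NULL, worker, &ns[i]);
	for (i = 0; i < nr; i++) {
		pthread_join(tid[i], NULL);
		sum += ns[i];
	}
	printf("%lu ns per mprotect() call\n", sum / nr);
	return 0;
}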

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 arch/x86/mm/tlb.c |   34 ++++++++++++++++++++++++++++++++++
 1 files changed, 34 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 7d92079..22e5bb1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -316,12 +316,42 @@ void flush_tlb_mm(struct mm_struct *mm)
 
 #define FLUSHALL_BAR	16
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline int has_large_page(struct mm_struct *mm,
+				 unsigned long start, unsigned long end)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	unsigned long addr = ALIGN(start, HPAGE_SIZE);
+	for (; addr < end; addr += HPAGE_SIZE) {
+		pgd = pgd_offset(mm, addr);
+		if (likely(!pgd_none(*pgd))) {
+			pud = pud_offset(pgd, addr);
+			if (likely(!pud_none(*pud))) {
+				pmd = pmd_offset(pud, addr);
+				if (likely(!pmd_none(*pmd)))
+					if (pmd_large(*pmd))
+						return 1;
+			}
+		}
+	}
+	return 0;
+}
+#else
+static inline int has_large_page(struct mm_struct *mm,
+				 unsigned long start, unsigned long end)
+{
+	return 0;
+}
+#endif
 void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm;
 
 	if (!cpu_has_invlpg || vma->vm_flags & VM_HUGETLB) {
+flush_all:
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
@@ -344,6 +374,10 @@ void flush_tlb_range(struct vm_area_struct *vma,
 			if ((end - start)/PAGE_SIZE > act_entries/FLUSHALL_BAR)
 				local_flush_tlb();
 			else {
+				if (has_large_page(mm, start, end)) {
+					preempt_enable();
+					goto flush_all;
+				}
 				for (addr = start; addr <= end;
 						addr += PAGE_SIZE)
 					__flush_tlb_single(addr);
-- 
1.7.5.4


Thread overview: 50+ messages
2012-05-15  8:55 [PATCH v5 0/7] tlb flush optimization for x86 Alex Shi
2012-05-15  8:55 ` [PATCH v5 1/7] x86/tlb: unify TLB_FLUSH_ALL definition Alex Shi
2012-05-15  8:55 ` [PATCH v5 2/7] x86/tlb_info: get last level TLB entry number of CPU Alex Shi
2012-05-15  8:55 ` [PATCH v5 3/7] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range Alex Shi
2012-05-15  8:55 ` Alex Shi [this message]
2012-05-15  8:55 ` [PATCH v5 5/7] x86/tlb: add tlb_flushall_shift for specific CPU Alex Shi
2012-05-16  6:49   ` Alex Shi
2012-05-16 17:55     ` H. Peter Anvin
2012-05-17  1:46       ` Alex Shi
2012-05-15  8:55 ` [PATCH v5 6/7] x86/tlb: optimizing flush_tlb_mm Alex Shi
2012-05-15  9:11   ` Peter Zijlstra
2012-05-15  9:15   ` Nick Piggin
2012-05-15  9:17     ` Nick Piggin
2012-05-15 12:58       ` Luming Yu
2012-05-15 13:06         ` Peter Zijlstra
2012-05-15 13:27           ` Luming Yu
2012-05-15 13:28             ` Alex Shi
2012-05-15 13:33           ` Alex Shi
2012-05-15 13:39           ` Steven Rostedt
2012-05-15 14:04             ` Borislav Petkov
2012-05-15 13:08         ` Luming Yu
2012-05-15 14:07       ` Alex Shi
2012-05-15  9:18     ` Peter Zijlstra
2012-05-15  9:52       ` Nick Piggin
2012-05-15 10:00         ` Peter Zijlstra
2012-05-15 10:06           ` Nick Piggin
2012-05-15 10:13             ` Peter Zijlstra
2012-05-15 14:04       ` Alex Shi
2012-05-15 13:24     ` Alex Shi
2012-05-15 14:36       ` Peter Zijlstra
2012-05-15 14:57         ` Peter Zijlstra
2012-05-15 15:01           ` Alex Shi
2012-05-16  6:46           ` Alex Shi
2012-05-16  6:46             ` Alex Shi
2012-05-16  8:00             ` Peter Zijlstra
2012-05-16  8:04               ` Peter Zijlstra
2012-05-16  8:53                 ` Alex Shi
2012-05-16  8:58                   ` Peter Zijlstra
2012-05-16 10:58                     ` Alex Shi
2012-05-16 11:04                       ` Peter Zijlstra
2012-05-16 12:57                         ` Alex Shi
2012-05-16 13:34               ` Alex Shi
2012-05-16 13:34                 ` Alex Shi
2012-05-16 21:09                 ` Peter Zijlstra
2012-05-17  0:43                   ` Alex Shi
2012-05-17  2:07                     ` Steven Rostedt
2012-05-17  8:04                       ` Alex Shi
2012-05-17  2:14                   ` Paul Mundt
2012-05-16 13:44               ` Alex Shi
2012-05-15  8:55 ` [PATCH v5 7/7] x86/tlb: add tlb_flushall_shift knob into debugfs Alex Shi
