* [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-13 20:01 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
Changelog since v1
o Drop a pagetable walk that seems redundant
o Account for TLB flushes only when debugging
o Drop the patch that took number of CPUs to flush into account
ebizzy regressed between 3.4 and 3.10 while testing on a new
machine. Bisection initially found at least three problems of which the
first was commit 611ae8e3 (x86/tlb: enable tlb flush range support for
x86). Second was related to TLB flush accounting. The third was related
to ACPI cpufreq and so it was disabled for the purposes of this series.
The intent of the TLB range flush series was to preserve existing TLB
entries by flushing a range one page at a time instead of flushing the
address space. This makes a certain amount of sense if the address space
being flushed was known to have existing hot entries. The decision on
whether to do a full mm flush or a number of single page flushes depends
on the size of the relevant TLB and how many of these hot entries would
be preserved by a targeted flush. This implicitly assumes a lot including
the following examples
o That the full TLB is in use by the task being flushed
o The TLB has hot entries that are going to be used in the near future
o The TLB has entries for the range being cached
o The cost of the per-page flushes is similar to a single mm flush
o Large pages are unimportant and can always be globally flushed
o Small flushes from workloads are very common
The first three are completely unknowable but unfortunately it is
something that is probably true of micro benchmarks designed to exercise
these paths. The fourth one depends completely on the hardware. I've no
idea what the logic behind the large page decision was but it's certainly
wrong if automatic NUMA balancing is enabled as it frequently flushes a
single THP page. The last one is the strangest because generally only a
process that was mapping/unmapping very small regions would hit this. It's
possible it is the common case for virtualised workloads that is managing
the address space of its guests. Maybe this was the real original motivation
of the TLB range flush support for x86.
Whatever the reason, Ebizzy sees very little benefit as it discards newly
allocated memory very quickly and regressed badly on Ivybridge where
it constantly flushes ranges of 128 pages one page at a time. Earlier
machines may not have seen this problem as the balance point was at a
different location. While I'm wary of optimising for such a benchmark,
it's commonly tested and it's apparent that the worst case defaults for
Ivybridge need to be re-examined.
The following small series restores ebizzy to 3.4-era performance for the
very limited set of machines tested.
ebizzy
3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
thread vanilla vanilla altershift-v2r1 nowalk-v2r7
Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
Stddev 1 50.55 ( 0.00%) 78.05 (-54.41%) 44.70 ( 11.58%) 39.22 ( 22.41%)
Stddev 2 37.98 ( 0.00%) 176.92 (-365.76%) 92.40 (-143.26%) 184.32 (-385.24%)
Stddev 3 55.76 ( 0.00%) 126.02 (-126.00%) 99.79 (-78.95%) 32.97 ( 40.87%)
Stddev 4 64.64 ( 0.00%) 117.09 (-81.13%) 124.23 (-92.17%) 212.67 (-229.00%)
Stddev 5 131.53 ( 0.00%) 92.86 ( 29.39%) 108.07 ( 17.83%) 101.05 ( 23.17%)
Stddev 6 109.92 ( 0.00%) 74.87 ( 31.89%) 179.26 (-63.08%) 202.56 (-84.28%)
Stddev 7 124.32 ( 0.00%) 72.25 ( 41.88%) 124.46 ( -0.12%) 128.52 ( -3.38%)
Stddev 8 60.98 ( 0.00%) 60.98 ( -0.00%) 62.31 ( -2.19%) 63.73 ( -4.51%)
Machine was a single socket machine with number of threads tested ranging
from 1 to NR_CPUS. For each thread, there were 100 iterations and the
reported mean and stddev was based on those iterations. The results are
unfortunately noisy but many of the gains are well outside 1 standard
deviation. The test is dominated by the address space allocation, page
allocation and zeroing of the pages with the flush being a relatively
small component of the workload.
It was suggested to remove the per-family TLB shifts entirely but the
figures must have been based on some testing by someone somewhere using a
representative workload. Details on that would be nice but in the meantime
I only altered IvyBridge as the balance point happens to be where ebizzy
becomes an adverse workload.
arch/x86/include/asm/tlbflush.h | 6 ++---
arch/x86/kernel/cpu/intel.c | 2 +-
arch/x86/kernel/cpu/mtrr/generic.c | 4 +--
arch/x86/mm/tlb.c | 52 ++++++++++----------------------------
include/linux/vm_event_item.h | 4 +--
include/linux/vmstat.h | 8 ++++++
6 files changed, 29 insertions(+), 47 deletions(-)
--
1.8.4
^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-13 20:01 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
Changelog since v1
o Drop a pagetable walk that seems redundant
o Account for TLB flushes only when debugging
o Drop the patch that took number of CPUs to flush into account
ebizzy regressed between 3.4 and 3.10 while testing on a new
machine. Bisection initially found at least three problems of which the
first was commit 611ae8e3 (x86/tlb: enable tlb flush range support for
x86). Second was related to TLB flush accounting. The third was related
to ACPI cpufreq and so it was disabled for the purposes of this series.
The intent of the TLB range flush series was to preserve existing TLB
entries by flushing a range one page at a time instead of flushing the
address space. This makes a certain amount of sense if the address space
being flushed was known to have existing hot entries. The decision on
whether to do a full mm flush or a number of single page flushes depends
on the size of the relevant TLB and how many of these hot entries would
be preserved by a targeted flush. This implicitly assumes a lot including
the following examples
o That the full TLB is in use by the task being flushed
o The TLB has hot entries that are going to be used in the near future
o The TLB has entries for the range being cached
o The cost of the per-page flushes is similar to a single mm flush
o Large pages are unimportant and can always be globally flushed
o Small flushes from workloads are very common
The first three are completely unknowable but unfortunately it is
something that is probably true of micro benchmarks designed to exercise
these paths. The fourth one depends completely on the hardware. I've no
idea what the logic behind the large page decision was but it's certainly
wrong if automatic NUMA balancing is enabled as it frequently flushes a
single THP page. The last one is the strangest because generally only a
process that was mapping/unmapping very small regions would hit this. It's
possible it is the common case for virtualised workloads that is managing
the address space of its guests. Maybe this was the real original motivation
of the TLB range flush support for x86.
Whatever the reason, Ebizzy sees very little benefit as it discards newly
allocated memory very quickly and regressed badly on Ivybridge where
it constantly flushes ranges of 128 pages one page at a time. Earlier
machines may not have seen this problem as the balance point was at a
different location. While I'm wary of optimising for such a benchmark,
it's commonly tested and it's apparent that the worst case defaults for
Ivybridge need to be re-examined.
The following small series restores ebizzy to 3.4-era performance for the
very limited set of machines tested.
ebizzy
3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
thread vanilla vanilla altershift-v2r1 nowalk-v2r7
Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
Stddev 1 50.55 ( 0.00%) 78.05 (-54.41%) 44.70 ( 11.58%) 39.22 ( 22.41%)
Stddev 2 37.98 ( 0.00%) 176.92 (-365.76%) 92.40 (-143.26%) 184.32 (-385.24%)
Stddev 3 55.76 ( 0.00%) 126.02 (-126.00%) 99.79 (-78.95%) 32.97 ( 40.87%)
Stddev 4 64.64 ( 0.00%) 117.09 (-81.13%) 124.23 (-92.17%) 212.67 (-229.00%)
Stddev 5 131.53 ( 0.00%) 92.86 ( 29.39%) 108.07 ( 17.83%) 101.05 ( 23.17%)
Stddev 6 109.92 ( 0.00%) 74.87 ( 31.89%) 179.26 (-63.08%) 202.56 (-84.28%)
Stddev 7 124.32 ( 0.00%) 72.25 ( 41.88%) 124.46 ( -0.12%) 128.52 ( -3.38%)
Stddev 8 60.98 ( 0.00%) 60.98 ( -0.00%) 62.31 ( -2.19%) 63.73 ( -4.51%)
Machine was a single socket machine with number of threads tested ranging
from 1 to NR_CPUS. For each thread, there were 100 iterations and the
reported mean and stddev was based on those iterations. The results are
unfortunately noisy but many of the gains are well outside 1 standard
deviation. The test is dominated by the address space allocation, page
allocation and zeroing of the pages with the flush being a relatively
small component of the workload.
It was suggested to remove the per-family TLB shifts entirely but the
figures must have been based on some testing by someone somewhere using a
representative workload. Details on that would be nice but in the meantime
I only altered IvyBridge as the balance point happens to be where ebizzy
becomes an adverse workload.
arch/x86/include/asm/tlbflush.h | 6 ++---
arch/x86/kernel/cpu/intel.c | 2 +-
arch/x86/kernel/cpu/mtrr/generic.c | 4 +--
arch/x86/mm/tlb.c | 52 ++++++++++----------------------------
include/linux/vm_event_item.h | 4 +--
include/linux/vmstat.h | 8 ++++++
6 files changed, 29 insertions(+), 47 deletions(-)
--
1.8.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH 1/4] x86: mm: Clean up inconsistencies when flushing TLB ranges
2013-12-13 20:01 ` Mel Gorman
@ 2013-12-13 20:01 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
NR_TLB_LOCAL_FLUSH_ALL is not always accounted for correctly and the
comparison with total_vm is done before taking tlb_flushall_shift into
account. Clean it up.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Alex Shi <alex.shi@linaro.org>
---
arch/x86/mm/tlb.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ae699b3..09b8cb8 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -189,6 +189,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
{
unsigned long addr;
unsigned act_entries, tlb_entries = 0;
+ unsigned long nr_base_pages;
preempt_disable();
if (current->active_mm != mm)
@@ -210,18 +211,17 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
tlb_entries = tlb_lli_4k[ENTRIES];
else
tlb_entries = tlb_lld_4k[ENTRIES];
+
/* Assume all of TLB entries was occupied by this task */
- act_entries = mm->total_vm > tlb_entries ? tlb_entries : mm->total_vm;
+ act_entries = tlb_entries >> tlb_flushall_shift;
+ act_entries = mm->total_vm > act_entries ? act_entries : mm->total_vm;
+ nr_base_pages = (end - start) >> PAGE_SHIFT;
/* tlb_flushall_shift is on balance point, details in commit log */
- if ((end - start) >> PAGE_SHIFT > act_entries >> tlb_flushall_shift) {
+ if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
} else {
- if (has_large_page(mm, start, end)) {
- local_flush_tlb();
- goto flush_all;
- }
/* flush range by one by one 'invlpg' */
for (addr = start; addr < end; addr += PAGE_SIZE) {
count_vm_event(NR_TLB_LOCAL_FLUSH_ONE);
--
1.8.4
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 1/4] x86: mm: Clean up inconsistencies when flushing TLB ranges
@ 2013-12-13 20:01 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
NR_TLB_LOCAL_FLUSH_ALL is not always accounted for correctly and the
comparison with total_vm is done before taking tlb_flushall_shift into
account. Clean it up.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Alex Shi <alex.shi@linaro.org>
---
arch/x86/mm/tlb.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ae699b3..09b8cb8 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -189,6 +189,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
{
unsigned long addr;
unsigned act_entries, tlb_entries = 0;
+ unsigned long nr_base_pages;
preempt_disable();
if (current->active_mm != mm)
@@ -210,18 +211,17 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
tlb_entries = tlb_lli_4k[ENTRIES];
else
tlb_entries = tlb_lld_4k[ENTRIES];
+
/* Assume all of TLB entries was occupied by this task */
- act_entries = mm->total_vm > tlb_entries ? tlb_entries : mm->total_vm;
+ act_entries = tlb_entries >> tlb_flushall_shift;
+ act_entries = mm->total_vm > act_entries ? act_entries : mm->total_vm;
+ nr_base_pages = (end - start) >> PAGE_SHIFT;
/* tlb_flushall_shift is on balance point, details in commit log */
- if ((end - start) >> PAGE_SHIFT > act_entries >> tlb_flushall_shift) {
+ if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
} else {
- if (has_large_page(mm, start, end)) {
- local_flush_tlb();
- goto flush_all;
- }
/* flush range by one by one 'invlpg' */
for (addr = start; addr < end; addr += PAGE_SIZE) {
count_vm_event(NR_TLB_LOCAL_FLUSH_ONE);
--
1.8.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 2/4] x86: mm: Account for TLB flushes only when debugging
2013-12-13 20:01 ` Mel Gorman
@ 2013-12-13 20:01 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
Bisection between 3.11 and 3.12 fingered commit 9824cf97 (mm: vmstats:
tlb flush counters). The counters are undeniably useful but how often
do we really need to debug TLB flush related issues? It does not justify
taking the penalty everywhere so make it a debugging option.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
arch/x86/include/asm/tlbflush.h | 6 +++---
arch/x86/kernel/cpu/mtrr/generic.c | 4 ++--
arch/x86/mm/tlb.c | 14 +++++++-------
include/linux/vm_event_item.h | 4 ++--
include/linux/vmstat.h | 8 ++++++++
5 files changed, 22 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e6d90ba..04905bf 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -62,7 +62,7 @@ static inline void __flush_tlb_all(void)
static inline void __flush_tlb_one(unsigned long addr)
{
- count_vm_event(NR_TLB_LOCAL_FLUSH_ONE);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
__flush_tlb_single(addr);
}
@@ -93,13 +93,13 @@ static inline void __flush_tlb_one(unsigned long addr)
*/
static inline void __flush_tlb_up(void)
{
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb();
}
static inline void flush_tlb_all(void)
{
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb_all();
}
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index ce2d0a2..0e25a1b 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -683,7 +683,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
}
/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb();
/* Save MTRR state */
@@ -697,7 +697,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
static void post_set(void) __releases(set_atomicity_lock)
{
/* Flush TLBs (no need to flush caches - they are disabled) */
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb();
/* Intel (P6) standard MTRRs */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 09b8cb8..5176526 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -103,7 +103,7 @@ static void flush_tlb_func(void *info)
if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
return;
- count_vm_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
if (f->flush_end == TLB_FLUSH_ALL)
local_flush_tlb();
@@ -131,7 +131,7 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
info.flush_start = start;
info.flush_end = end;
- count_vm_event(NR_TLB_REMOTE_FLUSH);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
if (is_uv_system()) {
unsigned int cpu;
@@ -151,7 +151,7 @@ void flush_tlb_current_task(void)
preempt_disable();
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
@@ -219,12 +219,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
/* tlb_flushall_shift is on balance point, details in commit log */
if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
} else {
/* flush range by one by one 'invlpg' */
for (addr = start; addr < end; addr += PAGE_SIZE) {
- count_vm_event(NR_TLB_LOCAL_FLUSH_ONE);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
__flush_tlb_single(addr);
}
@@ -262,7 +262,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
static void do_flush_tlb_all(void *info)
{
- count_vm_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
__flush_tlb_all();
if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_LAZY)
leave_mm(smp_processor_id());
@@ -270,7 +270,7 @@ static void do_flush_tlb_all(void *info)
void flush_tlb_all(void)
{
- count_vm_event(NR_TLB_REMOTE_FLUSH);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
on_each_cpu(do_flush_tlb_all, NULL, 1);
}
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index c557c6d..070de3d 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -71,12 +71,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
THP_ZERO_PAGE_ALLOC,
THP_ZERO_PAGE_ALLOC_FAILED,
#endif
-#ifdef CONFIG_SMP
+#ifdef CONFIG_DEBUG_TLBFLUSH
NR_TLB_REMOTE_FLUSH, /* cpu tried to flush others' tlbs */
NR_TLB_REMOTE_FLUSH_RECEIVED,/* cpu received ipi for flush */
-#endif
NR_TLB_LOCAL_FLUSH_ALL,
NR_TLB_LOCAL_FLUSH_ONE,
+#endif
NR_VM_EVENT_ITEMS
};
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index e4b9480..80ebba9 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -83,6 +83,14 @@ static inline void vm_events_fold_cpu(int cpu)
#define count_vm_numa_events(x, y) do { (void)(y); } while (0)
#endif /* CONFIG_NUMA_BALANCING */
+#ifdef CONFIG_DEBUG_TLBFLUSH
+#define count_vm_tlb_event(x) count_vm_event(x)
+#define count_vm_tlb_events(x, y) count_vm_events(x, y)
+#else
+#define count_vm_tlb_event(x) do {} while (0)
+#define count_vm_tlb_events(x, y) do { (void)(y); } while (0)
+#endif
+
#define __count_zone_vm_events(item, zone, delta) \
__count_vm_events(item##_NORMAL - ZONE_NORMAL + \
zone_idx(zone), delta)
--
1.8.4
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 2/4] x86: mm: Account for TLB flushes only when debugging
@ 2013-12-13 20:01 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
Bisection between 3.11 and 3.12 fingered commit 9824cf97 (mm: vmstats:
tlb flush counters). The counters are undeniably useful but how often
do we really need to debug TLB flush related issues? It does not justify
taking the penalty everywhere so make it a debugging option.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
arch/x86/include/asm/tlbflush.h | 6 +++---
arch/x86/kernel/cpu/mtrr/generic.c | 4 ++--
arch/x86/mm/tlb.c | 14 +++++++-------
include/linux/vm_event_item.h | 4 ++--
include/linux/vmstat.h | 8 ++++++++
5 files changed, 22 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e6d90ba..04905bf 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -62,7 +62,7 @@ static inline void __flush_tlb_all(void)
static inline void __flush_tlb_one(unsigned long addr)
{
- count_vm_event(NR_TLB_LOCAL_FLUSH_ONE);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
__flush_tlb_single(addr);
}
@@ -93,13 +93,13 @@ static inline void __flush_tlb_one(unsigned long addr)
*/
static inline void __flush_tlb_up(void)
{
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb();
}
static inline void flush_tlb_all(void)
{
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb_all();
}
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index ce2d0a2..0e25a1b 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -683,7 +683,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
}
/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb();
/* Save MTRR state */
@@ -697,7 +697,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
static void post_set(void) __releases(set_atomicity_lock)
{
/* Flush TLBs (no need to flush caches - they are disabled) */
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
__flush_tlb();
/* Intel (P6) standard MTRRs */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 09b8cb8..5176526 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -103,7 +103,7 @@ static void flush_tlb_func(void *info)
if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
return;
- count_vm_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
if (f->flush_end == TLB_FLUSH_ALL)
local_flush_tlb();
@@ -131,7 +131,7 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
info.flush_start = start;
info.flush_end = end;
- count_vm_event(NR_TLB_REMOTE_FLUSH);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
if (is_uv_system()) {
unsigned int cpu;
@@ -151,7 +151,7 @@ void flush_tlb_current_task(void)
preempt_disable();
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
@@ -219,12 +219,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
/* tlb_flushall_shift is on balance point, details in commit log */
if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
- count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
} else {
/* flush range by one by one 'invlpg' */
for (addr = start; addr < end; addr += PAGE_SIZE) {
- count_vm_event(NR_TLB_LOCAL_FLUSH_ONE);
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
__flush_tlb_single(addr);
}
@@ -262,7 +262,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
static void do_flush_tlb_all(void *info)
{
- count_vm_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
__flush_tlb_all();
if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_LAZY)
leave_mm(smp_processor_id());
@@ -270,7 +270,7 @@ static void do_flush_tlb_all(void *info)
void flush_tlb_all(void)
{
- count_vm_event(NR_TLB_REMOTE_FLUSH);
+ count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
on_each_cpu(do_flush_tlb_all, NULL, 1);
}
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index c557c6d..070de3d 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -71,12 +71,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
THP_ZERO_PAGE_ALLOC,
THP_ZERO_PAGE_ALLOC_FAILED,
#endif
-#ifdef CONFIG_SMP
+#ifdef CONFIG_DEBUG_TLBFLUSH
NR_TLB_REMOTE_FLUSH, /* cpu tried to flush others' tlbs */
NR_TLB_REMOTE_FLUSH_RECEIVED,/* cpu received ipi for flush */
-#endif
NR_TLB_LOCAL_FLUSH_ALL,
NR_TLB_LOCAL_FLUSH_ONE,
+#endif
NR_VM_EVENT_ITEMS
};
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index e4b9480..80ebba9 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -83,6 +83,14 @@ static inline void vm_events_fold_cpu(int cpu)
#define count_vm_numa_events(x, y) do { (void)(y); } while (0)
#endif /* CONFIG_NUMA_BALANCING */
+#ifdef CONFIG_DEBUG_TLBFLUSH
+#define count_vm_tlb_event(x) count_vm_event(x)
+#define count_vm_tlb_events(x, y) count_vm_events(x, y)
+#else
+#define count_vm_tlb_event(x) do {} while (0)
+#define count_vm_tlb_events(x, y) do { (void)(y); } while (0)
+#endif
+
#define __count_zone_vm_events(item, zone, delta) \
__count_vm_events(item##_NORMAL - ZONE_NORMAL + \
zone_idx(zone), delta)
--
1.8.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 3/4] x86: mm: Change tlb_flushall_shift for IvyBridge
2013-12-13 20:01 ` Mel Gorman
@ 2013-12-13 20:01 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
There was a large performance regression that was bisected to commit 611ae8e3
(x86/tlb: enable tlb flush range support for x86). This patch simply changes
the default balance point between a local and global flush for IvyBridge.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
arch/x86/kernel/cpu/intel.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index dc1ec0d..2d93753 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -627,7 +627,7 @@ static void intel_tlb_flushall_shift_set(struct cpuinfo_x86 *c)
tlb_flushall_shift = 5;
break;
case 0x63a: /* Ivybridge */
- tlb_flushall_shift = 1;
+ tlb_flushall_shift = 2;
break;
default:
tlb_flushall_shift = 6;
--
1.8.4
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 3/4] x86: mm: Change tlb_flushall_shift for IvyBridge
@ 2013-12-13 20:01 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
There was a large performance regression that was bisected to commit 611ae8e3
(x86/tlb: enable tlb flush range support for x86). This patch simply changes
the default balance point between a local and global flush for IvyBridge.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
arch/x86/kernel/cpu/intel.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index dc1ec0d..2d93753 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -627,7 +627,7 @@ static void intel_tlb_flushall_shift_set(struct cpuinfo_x86 *c)
tlb_flushall_shift = 5;
break;
case 0x63a: /* Ivybridge */
- tlb_flushall_shift = 1;
+ tlb_flushall_shift = 2;
break;
default:
tlb_flushall_shift = 6;
--
1.8.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 4/4] x86: mm: Eliminate redundant page table walk during TLB range flushing
2013-12-13 20:01 ` Mel Gorman
@ 2013-12-13 20:01 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
When choosing between doing an address space or ranged flush, the x86
implementation of flush_tlb_mm_range takes into account whether there are
any large pages in the range. A per-page flush typically requires fewer
entries than would covered by a single large page and the check is redundant.
There is one potential exception. THP migration flushes single THP entries
and it conceivably would benefit from flushing a single entry instead
of the mm. However, this flush is after a THP allocation, copy and page
table update potentially with any other threads serialised behind it. In
comparison to that, the flush is noise. It makes more sense to optimise
balancing to require fewer flushes than to optimise the flush itself.
This patch deletes the huge page check.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
arch/x86/mm/tlb.c | 28 +---------------------------
1 file changed, 1 insertion(+), 27 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5176526..dd8dda1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -158,32 +158,6 @@ void flush_tlb_current_task(void)
preempt_enable();
}
-/*
- * It can find out the THP large page, or
- * HUGETLB page in tlb_flush when THP disabled
- */
-static inline unsigned long has_large_page(struct mm_struct *mm,
- unsigned long start, unsigned long end)
-{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- unsigned long addr = ALIGN(start, HPAGE_SIZE);
- for (; addr < end; addr += HPAGE_SIZE) {
- pgd = pgd_offset(mm, addr);
- if (likely(!pgd_none(*pgd))) {
- pud = pud_offset(pgd, addr);
- if (likely(!pud_none(*pud))) {
- pmd = pmd_offset(pud, addr);
- if (likely(!pmd_none(*pmd)))
- if (pmd_large(*pmd))
- return addr;
- }
- }
- }
- return 0;
-}
-
void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
unsigned long end, unsigned long vmflag)
{
@@ -218,7 +192,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
nr_base_pages = (end - start) >> PAGE_SHIFT;
/* tlb_flushall_shift is on balance point, details in commit log */
- if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
+ if (nr_base_pages > act_entries) {
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
} else {
--
1.8.4
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH 4/4] x86: mm: Eliminate redundant page table walk during TLB range flushing
@ 2013-12-13 20:01 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-13 20:01 UTC (permalink / raw)
To: Alex Shi, Ingo Molnar
Cc: Linus Torvalds, Thomas Gleixner, Andrew Morton, Fengguang Wu,
H Peter Anvin, Linux-X86, Linux-MM, LKML, Mel Gorman
When choosing between doing an address space or ranged flush, the x86
implementation of flush_tlb_mm_range takes into account whether there are
any large pages in the range. A per-page flush typically requires fewer
entries than would covered by a single large page and the check is redundant.
There is one potential exception. THP migration flushes single THP entries
and it conceivably would benefit from flushing a single entry instead
of the mm. However, this flush is after a THP allocation, copy and page
table update potentially with any other threads serialised behind it. In
comparison to that, the flush is noise. It makes more sense to optimise
balancing to require fewer flushes than to optimise the flush itself.
This patch deletes the huge page check.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
arch/x86/mm/tlb.c | 28 +---------------------------
1 file changed, 1 insertion(+), 27 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5176526..dd8dda1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -158,32 +158,6 @@ void flush_tlb_current_task(void)
preempt_enable();
}
-/*
- * It can find out the THP large page, or
- * HUGETLB page in tlb_flush when THP disabled
- */
-static inline unsigned long has_large_page(struct mm_struct *mm,
- unsigned long start, unsigned long end)
-{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- unsigned long addr = ALIGN(start, HPAGE_SIZE);
- for (; addr < end; addr += HPAGE_SIZE) {
- pgd = pgd_offset(mm, addr);
- if (likely(!pgd_none(*pgd))) {
- pud = pud_offset(pgd, addr);
- if (likely(!pud_none(*pud))) {
- pmd = pmd_offset(pud, addr);
- if (likely(!pmd_none(*pmd)))
- if (pmd_large(*pmd))
- return addr;
- }
- }
- }
- return 0;
-}
-
void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
unsigned long end, unsigned long vmflag)
{
@@ -218,7 +192,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
nr_base_pages = (end - start) >> PAGE_SHIFT;
/* tlb_flushall_shift is on balance point, details in commit log */
- if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
+ if (nr_base_pages > act_entries) {
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
local_flush_tlb();
} else {
--
1.8.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-13 20:01 ` Mel Gorman
@ 2013-12-13 21:16 ` Linus Torvalds
-1 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2013-12-13 21:16 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
>
> ebizzy
> 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
> thread vanilla vanilla altershift-v2r1 nowalk-v2r7
> Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
> Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
> Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
> Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
> Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
> Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
> Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
> Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
Hmm. Do you have any idea why 3.4.69 still seems to do better at
higher thread counts?
No complaints about this patch-series, just wondering..
Linus
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-13 21:16 ` Linus Torvalds
0 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2013-12-13 21:16 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
>
> ebizzy
> 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
> thread vanilla vanilla altershift-v2r1 nowalk-v2r7
> Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
> Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
> Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
> Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
> Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
> Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
> Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
> Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
Hmm. Do you have any idea why 3.4.69 still seems to do better at
higher thread counts?
No complaints about this patch-series, just wondering..
Linus
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-13 21:16 ` Linus Torvalds
@ 2013-12-13 22:38 ` H. Peter Anvin
-1 siblings, 0 replies; 71+ messages in thread
From: H. Peter Anvin @ 2013-12-13 22:38 UTC (permalink / raw)
To: Linus Torvalds, Mel Gorman
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, Linux-X86, Linux-MM, LKML
On 12/13/2013 01:16 PM, Linus Torvalds wrote:
> On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
>>
>> ebizzy
>> 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
>> thread vanilla vanilla altershift-v2r1 nowalk-v2r7
>> Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
>> Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
>> Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
>> Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
>> Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
>> Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
>> Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
>> Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
>
> Hmm. Do you have any idea why 3.4.69 still seems to do better at
> higher thread counts?
>
> No complaints about this patch-series, just wondering..
>
It would be really great to get some performance numbers on something
other than ebizzy, though...
-hpa
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-13 22:38 ` H. Peter Anvin
0 siblings, 0 replies; 71+ messages in thread
From: H. Peter Anvin @ 2013-12-13 22:38 UTC (permalink / raw)
To: Linus Torvalds, Mel Gorman
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, Linux-X86, Linux-MM, LKML
On 12/13/2013 01:16 PM, Linus Torvalds wrote:
> On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
>>
>> ebizzy
>> 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
>> thread vanilla vanilla altershift-v2r1 nowalk-v2r7
>> Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
>> Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
>> Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
>> Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
>> Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
>> Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
>> Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
>> Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
>
> Hmm. Do you have any idea why 3.4.69 still seems to do better at
> higher thread counts?
>
> No complaints about this patch-series, just wondering..
>
It would be really great to get some performance numbers on something
other than ebizzy, though...
-hpa
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-13 21:16 ` Linus Torvalds
@ 2013-12-15 15:55 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-15 15:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Fri, Dec 13, 2013 at 01:16:41PM -0800, Linus Torvalds wrote:
> On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
> >
> > ebizzy
> > 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
> > thread vanilla vanilla altershift-v2r1 nowalk-v2r7
> > Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
> > Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
> > Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
> > Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
> > Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
> > Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
> > Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
> > Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
>
> Hmm. Do you have any idea why 3.4.69 still seems to do better at
> higher thread counts?
>
> No complaints about this patch-series, just wondering..
>
Good question. I had insufficient data to answer that quickly and test
modifications were required to even start answering it. The following is
based on tests from a different machine that happened to complete first.
Short answer -- There appears to be a second bug where 3.13-rc3 is less
fair to threads getting time on the CPU. Sometimes this means it can
produce better benchmark results and other times worse. Which is better
depends on the workload and a bit of luck.
The long answer is incomplete and dull.
First, the cost of the affected paths *appear* to be higher in 3.13-rc3,
even with the series applied but 3.4.69 was not necessarily better. The
following is test results based on Alex Shi's microbenchmark that was
posted around the time of the original series. It has been slightly patched
to work around a bug where a global variable is accessed improperly by
threads and hangs. It's reporting the cost of accessing memory for each
thread. Presumably the cost would be higher if we were flushing TLB entries
that are currently hot. Lower values are better.
tlbflush micro benchmark
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Min 1 7.00 ( 0.00%) 6.00 ( 14.29%) 5.00 ( 28.57%)
Min 2 8.00 ( 0.00%) 6.00 ( 25.00%) 4.00 ( 50.00%)
Min 3 13.00 ( 0.00%) 11.00 ( 15.38%) 9.00 ( 30.77%)
Min 4 17.00 ( 0.00%) 19.00 (-11.76%) 15.00 ( 11.76%)
Mean 1 11.28 ( 0.00%) 10.66 ( 5.48%) 5.17 ( 54.13%)
Mean 2 11.42 ( 0.00%) 11.52 ( -0.85%) 9.04 ( 20.82%)
Mean 3 23.43 ( 0.00%) 21.64 ( 7.64%) 10.92 ( 53.39%)
Mean 4 35.33 ( 0.00%) 34.17 ( 3.28%) 19.55 ( 44.67%)
Range 1 6.00 ( 0.00%) 7.00 (-16.67%) 4.00 ( 33.33%)
Range 2 23.00 ( 0.00%) 36.00 (-56.52%) 19.00 ( 17.39%)
Range 3 15.00 ( 0.00%) 17.00 (-13.33%) 10.00 ( 33.33%)
Range 4 29.00 ( 0.00%) 26.00 ( 10.34%) 9.00 ( 68.97%)
Stddev 1 1.01 ( 0.00%) 1.12 ( 10.53%) 0.57 (-43.70%)
Stddev 2 1.83 ( 0.00%) 3.03 ( 66.06%) 6.83 (274.00%)
Stddev 3 2.82 ( 0.00%) 3.28 ( 16.44%) 1.21 (-57.14%)
Stddev 4 6.65 ( 0.00%) 6.32 ( -5.00%) 1.58 (-76.24%)
Max 1 13.00 ( 0.00%) 13.00 ( 0.00%) 9.00 ( 30.77%)
Max 2 31.00 ( 0.00%) 42.00 (-35.48%) 23.00 ( 25.81%)
Max 3 28.00 ( 0.00%) 28.00 ( 0.00%) 19.00 ( 32.14%)
Max 4 46.00 ( 0.00%) 45.00 ( 2.17%) 24.00 ( 47.83%)
It runs the benchmark for a number of threads up to the number of CPUs
in the system (4 in this case). For each number of threads it runs 320
iterations. Each iteration uses a random range of entries between 0 and 256
is selected to be unmapped and flushed. Care is taken so there is a good
spread of sizes selected between 0 and 256. It's meant to guess roughly
what the average performance is.
Access times were simply much better with 3.4.69 but I do not have profiles
that might tell us why. What is very interesting is the CPU time and
elapsed time for the test
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User 179.36 165.25 97.29
System 153.59 155.07 128.32
Elapsed 1439.52 1437.69 2802.01
Note that 3.4.69 took much longer to complete the test. The duration of
the test depends on how long it takes for a thread to do the unmapping.
If the unmapping thread gets more time on the CPU, it completes the test
faster and interferes more with the other threads performance (hence the
higher access cost) but this is not necessarily a good result. It could
indicate a fairness issue where the accessing threads are being starved
by the unmapping thread. That is not necessarily the case, it's just
one possibility.
To see what thread fairness looked like, I looked again at ebizzy. This
is the overall performance
ebizzy
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 6366.88 ( 0.00%) 6741.00 ( 5.88%) 6658.32 ( 4.58%)
Mean 2 6917.56 ( 0.00%) 7952.29 ( 14.96%) 8120.79 ( 17.39%)
Mean 3 6231.78 ( 0.00%) 6846.08 ( 9.86%) 7174.98 ( 15.14%)
Mean 4 5887.91 ( 0.00%) 6503.12 ( 10.45%) 6903.05 ( 17.24%)
Mean 5 5680.77 ( 0.00%) 6185.83 ( 8.89%) 6549.15 ( 15.29%)
Mean 6 5692.87 ( 0.00%) 6249.48 ( 9.78%) 6442.21 ( 13.16%)
Mean 7 5846.76 ( 0.00%) 6344.94 ( 8.52%) 6279.13 ( 7.40%)
Mean 8 5974.57 ( 0.00%) 6406.28 ( 7.23%) 6265.29 ( 4.87%)
Range 1 174.00 ( 0.00%) 202.00 (-16.09%) 806.00 (-363.22%)
Range 2 286.00 ( 0.00%) 979.00 (-242.31%) 1255.00 (-338.81%)
Range 3 530.00 ( 0.00%) 583.00 (-10.00%) 626.00 (-18.11%)
Range 4 592.00 ( 0.00%) 691.00 (-16.72%) 630.00 ( -6.42%)
Range 5 567.00 ( 0.00%) 417.00 ( 26.46%) 584.00 ( -3.00%)
Range 6 588.00 ( 0.00%) 353.00 ( 39.97%) 439.00 ( 25.34%)
Range 7 477.00 ( 0.00%) 284.00 ( 40.46%) 343.00 ( 28.09%)
Range 8 408.00 ( 0.00%) 182.00 ( 55.39%) 237.00 ( 41.91%)
Stddev 1 31.59 ( 0.00%) 32.94 ( -4.27%) 154.26 (-388.34%)
Stddev 2 56.95 ( 0.00%) 136.79 (-140.19%) 194.45 (-241.43%)
Stddev 3 132.28 ( 0.00%) 101.02 ( 23.63%) 106.60 ( 19.41%)
Stddev 4 140.93 ( 0.00%) 136.11 ( 3.42%) 138.26 ( 1.90%)
Stddev 5 118.58 ( 0.00%) 86.74 ( 26.85%) 111.73 ( 5.77%)
Stddev 6 109.64 ( 0.00%) 77.49 ( 29.32%) 95.52 ( 12.87%)
Stddev 7 103.91 ( 0.00%) 51.44 ( 50.50%) 54.43 ( 47.62%)
Stddev 8 67.79 ( 0.00%) 31.34 ( 53.76%) 53.08 ( 21.69%)
3.4.69 is still kicking a lot of ass there even though it's slower at
higher number of threads in this particular test.
I had hacked ebizzy to report on the performance of each thread, not just
the overall result and worked out the difference in performance of each
thread. In a complete fair test you would expect the performance of each
thread to be identical and so the spread would be 0
ebizzy thread spread
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.30 (-11.76%) 0.07 (-79.41%)
Mean 3 1.29 ( 0.00%) 0.92 (-28.68%) 0.29 (-77.52%)
Mean 4 7.08 ( 0.00%) 42.38 (498.59%) 0.22 (-96.89%)
Mean 5 193.54 ( 0.00%) 483.41 (149.77%) 0.41 (-99.79%)
Mean 6 151.12 ( 0.00%) 198.22 ( 31.17%) 0.42 (-99.72%)
Mean 7 115.38 ( 0.00%) 160.29 ( 38.92%) 0.58 (-99.50%)
Mean 8 108.65 ( 0.00%) 138.96 ( 27.90%) 0.44 (-99.60%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 5.00 ( 0.00%) 6.00 ( 20.00%) 2.00 (-60.00%)
Range 3 10.00 ( 0.00%) 17.00 ( 70.00%) 9.00 (-10.00%)
Range 4 256.00 ( 0.00%) 1001.00 (291.02%) 5.00 (-98.05%)
Range 5 456.00 ( 0.00%) 1226.00 (168.86%) 6.00 (-98.68%)
Range 6 298.00 ( 0.00%) 294.00 ( -1.34%) 8.00 (-97.32%)
Range 7 192.00 ( 0.00%) 220.00 ( 14.58%) 7.00 (-96.35%)
Range 8 171.00 ( 0.00%) 163.00 ( -4.68%) 8.00 (-95.32%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.72 ( 0.00%) 0.85 (-17.99%) 0.29 ( 59.72%)
Stddev 3 1.42 ( 0.00%) 1.90 (-34.22%) 1.12 ( 21.19%)
Stddev 4 33.83 ( 0.00%) 127.26 (-276.15%) 0.79 ( 97.65%)
Stddev 5 92.08 ( 0.00%) 225.01 (-144.35%) 1.06 ( 98.85%)
Stddev 6 64.82 ( 0.00%) 69.43 ( -7.11%) 1.28 ( 98.02%)
Stddev 7 36.66 ( 0.00%) 49.19 (-34.20%) 1.18 ( 96.79%)
Stddev 8 30.79 ( 0.00%) 36.23 (-17.64%) 1.06 ( 96.55%)
For example, this is saying that with 8 threads on 3.13-rc3 that the
difference between the slowest and fastest thread was 171 records/second.
Note how in 3.13 that there are major differences between the performance
of each particular thread once there are more threads than CPus. The series
actually makes it worse but then again the series does alter what happens
when IPIs get sent. In comparison, 3.4.69's spreads are very low even
when there are more threads than CPUs.
So I think there is a separate bug here that was introduced some time after
3.4.69 that has hurt scheduler fairness. It's not necessarily a scheduler
bug but it does make a test like ebizzy noisy. Because of this bug, I'd
be wary about drawing too many conclusions about ebizzy performance when
the number of threads exceed the number of CPUs.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-15 15:55 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-15 15:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Fri, Dec 13, 2013 at 01:16:41PM -0800, Linus Torvalds wrote:
> On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
> >
> > ebizzy
> > 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
> > thread vanilla vanilla altershift-v2r1 nowalk-v2r7
> > Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
> > Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
> > Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
> > Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
> > Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
> > Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
> > Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
> > Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
>
> Hmm. Do you have any idea why 3.4.69 still seems to do better at
> higher thread counts?
>
> No complaints about this patch-series, just wondering..
>
Good question. I had insufficient data to answer that quickly and test
modifications were required to even start answering it. The following is
based on tests from a different machine that happened to complete first.
Short answer -- There appears to be a second bug where 3.13-rc3 is less
fair to threads getting time on the CPU. Sometimes this means it can
produce better benchmark results and other times worse. Which is better
depends on the workload and a bit of luck.
The long answer is incomplete and dull.
First, the cost of the affected paths *appear* to be higher in 3.13-rc3,
even with the series applied but 3.4.69 was not necessarily better. The
following is test results based on Alex Shi's microbenchmark that was
posted around the time of the original series. It has been slightly patched
to work around a bug where a global variable is accessed improperly by
threads and hangs. It's reporting the cost of accessing memory for each
thread. Presumably the cost would be higher if we were flushing TLB entries
that are currently hot. Lower values are better.
tlbflush micro benchmark
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Min 1 7.00 ( 0.00%) 6.00 ( 14.29%) 5.00 ( 28.57%)
Min 2 8.00 ( 0.00%) 6.00 ( 25.00%) 4.00 ( 50.00%)
Min 3 13.00 ( 0.00%) 11.00 ( 15.38%) 9.00 ( 30.77%)
Min 4 17.00 ( 0.00%) 19.00 (-11.76%) 15.00 ( 11.76%)
Mean 1 11.28 ( 0.00%) 10.66 ( 5.48%) 5.17 ( 54.13%)
Mean 2 11.42 ( 0.00%) 11.52 ( -0.85%) 9.04 ( 20.82%)
Mean 3 23.43 ( 0.00%) 21.64 ( 7.64%) 10.92 ( 53.39%)
Mean 4 35.33 ( 0.00%) 34.17 ( 3.28%) 19.55 ( 44.67%)
Range 1 6.00 ( 0.00%) 7.00 (-16.67%) 4.00 ( 33.33%)
Range 2 23.00 ( 0.00%) 36.00 (-56.52%) 19.00 ( 17.39%)
Range 3 15.00 ( 0.00%) 17.00 (-13.33%) 10.00 ( 33.33%)
Range 4 29.00 ( 0.00%) 26.00 ( 10.34%) 9.00 ( 68.97%)
Stddev 1 1.01 ( 0.00%) 1.12 ( 10.53%) 0.57 (-43.70%)
Stddev 2 1.83 ( 0.00%) 3.03 ( 66.06%) 6.83 (274.00%)
Stddev 3 2.82 ( 0.00%) 3.28 ( 16.44%) 1.21 (-57.14%)
Stddev 4 6.65 ( 0.00%) 6.32 ( -5.00%) 1.58 (-76.24%)
Max 1 13.00 ( 0.00%) 13.00 ( 0.00%) 9.00 ( 30.77%)
Max 2 31.00 ( 0.00%) 42.00 (-35.48%) 23.00 ( 25.81%)
Max 3 28.00 ( 0.00%) 28.00 ( 0.00%) 19.00 ( 32.14%)
Max 4 46.00 ( 0.00%) 45.00 ( 2.17%) 24.00 ( 47.83%)
It runs the benchmark for a number of threads up to the number of CPUs
in the system (4 in this case). For each number of threads it runs 320
iterations. Each iteration uses a random range of entries between 0 and 256
is selected to be unmapped and flushed. Care is taken so there is a good
spread of sizes selected between 0 and 256. It's meant to guess roughly
what the average performance is.
Access times were simply much better with 3.4.69 but I do not have profiles
that might tell us why. What is very interesting is the CPU time and
elapsed time for the test
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User 179.36 165.25 97.29
System 153.59 155.07 128.32
Elapsed 1439.52 1437.69 2802.01
Note that 3.4.69 took much longer to complete the test. The duration of
the test depends on how long it takes for a thread to do the unmapping.
If the unmapping thread gets more time on the CPU, it completes the test
faster and interferes more with the other threads performance (hence the
higher access cost) but this is not necessarily a good result. It could
indicate a fairness issue where the accessing threads are being starved
by the unmapping thread. That is not necessarily the case, it's just
one possibility.
To see what thread fairness looked like, I looked again at ebizzy. This
is the overall performance
ebizzy
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 6366.88 ( 0.00%) 6741.00 ( 5.88%) 6658.32 ( 4.58%)
Mean 2 6917.56 ( 0.00%) 7952.29 ( 14.96%) 8120.79 ( 17.39%)
Mean 3 6231.78 ( 0.00%) 6846.08 ( 9.86%) 7174.98 ( 15.14%)
Mean 4 5887.91 ( 0.00%) 6503.12 ( 10.45%) 6903.05 ( 17.24%)
Mean 5 5680.77 ( 0.00%) 6185.83 ( 8.89%) 6549.15 ( 15.29%)
Mean 6 5692.87 ( 0.00%) 6249.48 ( 9.78%) 6442.21 ( 13.16%)
Mean 7 5846.76 ( 0.00%) 6344.94 ( 8.52%) 6279.13 ( 7.40%)
Mean 8 5974.57 ( 0.00%) 6406.28 ( 7.23%) 6265.29 ( 4.87%)
Range 1 174.00 ( 0.00%) 202.00 (-16.09%) 806.00 (-363.22%)
Range 2 286.00 ( 0.00%) 979.00 (-242.31%) 1255.00 (-338.81%)
Range 3 530.00 ( 0.00%) 583.00 (-10.00%) 626.00 (-18.11%)
Range 4 592.00 ( 0.00%) 691.00 (-16.72%) 630.00 ( -6.42%)
Range 5 567.00 ( 0.00%) 417.00 ( 26.46%) 584.00 ( -3.00%)
Range 6 588.00 ( 0.00%) 353.00 ( 39.97%) 439.00 ( 25.34%)
Range 7 477.00 ( 0.00%) 284.00 ( 40.46%) 343.00 ( 28.09%)
Range 8 408.00 ( 0.00%) 182.00 ( 55.39%) 237.00 ( 41.91%)
Stddev 1 31.59 ( 0.00%) 32.94 ( -4.27%) 154.26 (-388.34%)
Stddev 2 56.95 ( 0.00%) 136.79 (-140.19%) 194.45 (-241.43%)
Stddev 3 132.28 ( 0.00%) 101.02 ( 23.63%) 106.60 ( 19.41%)
Stddev 4 140.93 ( 0.00%) 136.11 ( 3.42%) 138.26 ( 1.90%)
Stddev 5 118.58 ( 0.00%) 86.74 ( 26.85%) 111.73 ( 5.77%)
Stddev 6 109.64 ( 0.00%) 77.49 ( 29.32%) 95.52 ( 12.87%)
Stddev 7 103.91 ( 0.00%) 51.44 ( 50.50%) 54.43 ( 47.62%)
Stddev 8 67.79 ( 0.00%) 31.34 ( 53.76%) 53.08 ( 21.69%)
3.4.69 is still kicking a lot of ass there even though it's slower at
higher number of threads in this particular test.
I had hacked ebizzy to report on the performance of each thread, not just
the overall result and worked out the difference in performance of each
thread. In a complete fair test you would expect the performance of each
thread to be identical and so the spread would be 0
ebizzy thread spread
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.30 (-11.76%) 0.07 (-79.41%)
Mean 3 1.29 ( 0.00%) 0.92 (-28.68%) 0.29 (-77.52%)
Mean 4 7.08 ( 0.00%) 42.38 (498.59%) 0.22 (-96.89%)
Mean 5 193.54 ( 0.00%) 483.41 (149.77%) 0.41 (-99.79%)
Mean 6 151.12 ( 0.00%) 198.22 ( 31.17%) 0.42 (-99.72%)
Mean 7 115.38 ( 0.00%) 160.29 ( 38.92%) 0.58 (-99.50%)
Mean 8 108.65 ( 0.00%) 138.96 ( 27.90%) 0.44 (-99.60%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 5.00 ( 0.00%) 6.00 ( 20.00%) 2.00 (-60.00%)
Range 3 10.00 ( 0.00%) 17.00 ( 70.00%) 9.00 (-10.00%)
Range 4 256.00 ( 0.00%) 1001.00 (291.02%) 5.00 (-98.05%)
Range 5 456.00 ( 0.00%) 1226.00 (168.86%) 6.00 (-98.68%)
Range 6 298.00 ( 0.00%) 294.00 ( -1.34%) 8.00 (-97.32%)
Range 7 192.00 ( 0.00%) 220.00 ( 14.58%) 7.00 (-96.35%)
Range 8 171.00 ( 0.00%) 163.00 ( -4.68%) 8.00 (-95.32%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.72 ( 0.00%) 0.85 (-17.99%) 0.29 ( 59.72%)
Stddev 3 1.42 ( 0.00%) 1.90 (-34.22%) 1.12 ( 21.19%)
Stddev 4 33.83 ( 0.00%) 127.26 (-276.15%) 0.79 ( 97.65%)
Stddev 5 92.08 ( 0.00%) 225.01 (-144.35%) 1.06 ( 98.85%)
Stddev 6 64.82 ( 0.00%) 69.43 ( -7.11%) 1.28 ( 98.02%)
Stddev 7 36.66 ( 0.00%) 49.19 (-34.20%) 1.18 ( 96.79%)
Stddev 8 30.79 ( 0.00%) 36.23 (-17.64%) 1.06 ( 96.55%)
For example, this is saying that with 8 threads on 3.13-rc3 that the
difference between the slowest and fastest thread was 171 records/second.
Note how in 3.13 that there are major differences between the performance
of each particular thread once there are more threads than CPus. The series
actually makes it worse but then again the series does alter what happens
when IPIs get sent. In comparison, 3.4.69's spreads are very low even
when there are more threads than CPUs.
So I think there is a separate bug here that was introduced some time after
3.4.69 that has hurt scheduler fairness. It's not necessarily a scheduler
bug but it does make a test like ebizzy noisy. Because of this bug, I'd
be wary about drawing too many conclusions about ebizzy performance when
the number of threads exceed the number of CPUs.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-15 15:55 ` Mel Gorman
@ 2013-12-15 16:17 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-15 16:17 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Sun, Dec 15, 2013 at 03:55:39PM +0000, Mel Gorman wrote:
> <SNIP>
> tlbflush micro benchmark
> 3.13.0-rc3 3.13.0-rc3 3.4.69
> vanilla nowalk-v2r7 vanilla
> Min 1 7.00 ( 0.00%) 6.00 ( 14.29%) 5.00 ( 28.57%)
> Min 2 8.00 ( 0.00%) 6.00 ( 25.00%) 4.00 ( 50.00%)
> Min 3 13.00 ( 0.00%) 11.00 ( 15.38%) 9.00 ( 30.77%)
> Min 4 17.00 ( 0.00%) 19.00 (-11.76%) 15.00 ( 11.76%)
> Mean 1 11.28 ( 0.00%) 10.66 ( 5.48%) 5.17 ( 54.13%)
> Mean 2 11.42 ( 0.00%) 11.52 ( -0.85%) 9.04 ( 20.82%)
> Mean 3 23.43 ( 0.00%) 21.64 ( 7.64%) 10.92 ( 53.39%)
> Mean 4 35.33 ( 0.00%) 34.17 ( 3.28%) 19.55 ( 44.67%)
> Range 1 6.00 ( 0.00%) 7.00 (-16.67%) 4.00 ( 33.33%)
> Range 2 23.00 ( 0.00%) 36.00 (-56.52%) 19.00 ( 17.39%)
> Range 3 15.00 ( 0.00%) 17.00 (-13.33%) 10.00 ( 33.33%)
> Range 4 29.00 ( 0.00%) 26.00 ( 10.34%) 9.00 ( 68.97%)
> Stddev 1 1.01 ( 0.00%) 1.12 ( 10.53%) 0.57 (-43.70%)
> Stddev 2 1.83 ( 0.00%) 3.03 ( 66.06%) 6.83 (274.00%)
> Stddev 3 2.82 ( 0.00%) 3.28 ( 16.44%) 1.21 (-57.14%)
> Stddev 4 6.65 ( 0.00%) 6.32 ( -5.00%) 1.58 (-76.24%)
> Max 1 13.00 ( 0.00%) 13.00 ( 0.00%) 9.00 ( 30.77%)
> Max 2 31.00 ( 0.00%) 42.00 (-35.48%) 23.00 ( 25.81%)
> Max 3 28.00 ( 0.00%) 28.00 ( 0.00%) 19.00 ( 32.14%)
> Max 4 46.00 ( 0.00%) 45.00 ( 2.17%) 24.00 ( 47.83%)
>
> <SNIP>
>
> 3.13.0-rc3 3.13.0-rc3 3.4.69
> vanilla nowalk-v2r7 vanilla
> User 179.36 165.25 97.29
> System 153.59 155.07 128.32
> Elapsed 1439.52 1437.69 2802.01
>
After I ran the test, I looked closer at the elapsed times and it was
due to a bug in the test setup itself. The tlbflush tests will need to
be rerun but ebizzy still has the problem where threads see very
different performance.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-15 16:17 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-15 16:17 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Sun, Dec 15, 2013 at 03:55:39PM +0000, Mel Gorman wrote:
> <SNIP>
> tlbflush micro benchmark
> 3.13.0-rc3 3.13.0-rc3 3.4.69
> vanilla nowalk-v2r7 vanilla
> Min 1 7.00 ( 0.00%) 6.00 ( 14.29%) 5.00 ( 28.57%)
> Min 2 8.00 ( 0.00%) 6.00 ( 25.00%) 4.00 ( 50.00%)
> Min 3 13.00 ( 0.00%) 11.00 ( 15.38%) 9.00 ( 30.77%)
> Min 4 17.00 ( 0.00%) 19.00 (-11.76%) 15.00 ( 11.76%)
> Mean 1 11.28 ( 0.00%) 10.66 ( 5.48%) 5.17 ( 54.13%)
> Mean 2 11.42 ( 0.00%) 11.52 ( -0.85%) 9.04 ( 20.82%)
> Mean 3 23.43 ( 0.00%) 21.64 ( 7.64%) 10.92 ( 53.39%)
> Mean 4 35.33 ( 0.00%) 34.17 ( 3.28%) 19.55 ( 44.67%)
> Range 1 6.00 ( 0.00%) 7.00 (-16.67%) 4.00 ( 33.33%)
> Range 2 23.00 ( 0.00%) 36.00 (-56.52%) 19.00 ( 17.39%)
> Range 3 15.00 ( 0.00%) 17.00 (-13.33%) 10.00 ( 33.33%)
> Range 4 29.00 ( 0.00%) 26.00 ( 10.34%) 9.00 ( 68.97%)
> Stddev 1 1.01 ( 0.00%) 1.12 ( 10.53%) 0.57 (-43.70%)
> Stddev 2 1.83 ( 0.00%) 3.03 ( 66.06%) 6.83 (274.00%)
> Stddev 3 2.82 ( 0.00%) 3.28 ( 16.44%) 1.21 (-57.14%)
> Stddev 4 6.65 ( 0.00%) 6.32 ( -5.00%) 1.58 (-76.24%)
> Max 1 13.00 ( 0.00%) 13.00 ( 0.00%) 9.00 ( 30.77%)
> Max 2 31.00 ( 0.00%) 42.00 (-35.48%) 23.00 ( 25.81%)
> Max 3 28.00 ( 0.00%) 28.00 ( 0.00%) 19.00 ( 32.14%)
> Max 4 46.00 ( 0.00%) 45.00 ( 2.17%) 24.00 ( 47.83%)
>
> <SNIP>
>
> 3.13.0-rc3 3.13.0-rc3 3.4.69
> vanilla nowalk-v2r7 vanilla
> User 179.36 165.25 97.29
> System 153.59 155.07 128.32
> Elapsed 1439.52 1437.69 2802.01
>
After I ran the test, I looked closer at the elapsed times and it was
due to a bug in the test setup itself. The tlbflush tests will need to
be rerun but ebizzy still has the problem where threads see very
different performance.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-15 15:55 ` Mel Gorman
@ 2013-12-15 18:34 ` Linus Torvalds
-1 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2013-12-15 18:34 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Sun, Dec 15, 2013 at 7:55 AM, Mel Gorman <mgorman@suse.de> wrote:
>
> Short answer -- There appears to be a second bug where 3.13-rc3 is less
> fair to threads getting time on the CPU.
Hmm. Can you point me at the (fixed) microbenchmark you mention?
Linus
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-15 18:34 ` Linus Torvalds
0 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2013-12-15 18:34 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Sun, Dec 15, 2013 at 7:55 AM, Mel Gorman <mgorman@suse.de> wrote:
>
> Short answer -- There appears to be a second bug where 3.13-rc3 is less
> fair to threads getting time on the CPU.
Hmm. Can you point me at the (fixed) microbenchmark you mention?
Linus
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-15 15:55 ` Mel Gorman
@ 2013-12-16 10:24 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-16 10:24 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> I had hacked ebizzy to report on the performance of each thread, not
> just the overall result and worked out the difference in performance
> of each thread. In a complete fair test you would expect the
> performance of each thread to be identical and so the spread would
> be 0
>
> ebizzy thread spread
> 3.13.0-rc3 3.13.0-rc3 3.4.69
> vanilla nowalk-v2r7 vanilla
> Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> Mean 2 0.34 ( 0.00%) 0.30 (-11.76%) 0.07 (-79.41%)
> Mean 3 1.29 ( 0.00%) 0.92 (-28.68%) 0.29 (-77.52%)
> Mean 4 7.08 ( 0.00%) 42.38 (498.59%) 0.22 (-96.89%)
> Mean 5 193.54 ( 0.00%) 483.41 (149.77%) 0.41 (-99.79%)
> Mean 6 151.12 ( 0.00%) 198.22 ( 31.17%) 0.42 (-99.72%)
> Mean 7 115.38 ( 0.00%) 160.29 ( 38.92%) 0.58 (-99.50%)
> Mean 8 108.65 ( 0.00%) 138.96 ( 27.90%) 0.44 (-99.60%)
> Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> Range 2 5.00 ( 0.00%) 6.00 ( 20.00%) 2.00 (-60.00%)
> Range 3 10.00 ( 0.00%) 17.00 ( 70.00%) 9.00 (-10.00%)
> Range 4 256.00 ( 0.00%) 1001.00 (291.02%) 5.00 (-98.05%)
> Range 5 456.00 ( 0.00%) 1226.00 (168.86%) 6.00 (-98.68%)
> Range 6 298.00 ( 0.00%) 294.00 ( -1.34%) 8.00 (-97.32%)
> Range 7 192.00 ( 0.00%) 220.00 ( 14.58%) 7.00 (-96.35%)
> Range 8 171.00 ( 0.00%) 163.00 ( -4.68%) 8.00 (-95.32%)
> Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> Stddev 2 0.72 ( 0.00%) 0.85 (-17.99%) 0.29 ( 59.72%)
> Stddev 3 1.42 ( 0.00%) 1.90 (-34.22%) 1.12 ( 21.19%)
> Stddev 4 33.83 ( 0.00%) 127.26 (-276.15%) 0.79 ( 97.65%)
> Stddev 5 92.08 ( 0.00%) 225.01 (-144.35%) 1.06 ( 98.85%)
> Stddev 6 64.82 ( 0.00%) 69.43 ( -7.11%) 1.28 ( 98.02%)
> Stddev 7 36.66 ( 0.00%) 49.19 (-34.20%) 1.18 ( 96.79%)
> Stddev 8 30.79 ( 0.00%) 36.23 (-17.64%) 1.06 ( 96.55%)
>
> For example, this is saying that with 8 threads on 3.13-rc3 that the
> difference between the slowest and fastest thread was 171
> records/second.
We aren't blind fairness fetishists, but the noise difference between
v3.4 and v3.13 appears to be staggering, it's a serious anomaly in
itself.
Whatever we did right in v3.4 we want to do in v3.13 as well - or at
least understand it.
I agree that the absolute numbers would probably only be interesting
once v3.13 is fixed to not spread thread performance that wildly
again.
> [...] Because of this bug, I'd be wary about drawing too many
> conclusions about ebizzy performance when the number of threads
> exceed the number of CPUs.
Yes.
Could it be that the v3.13 workload context switches a lot more than
v3.4 workload? That would magnify any TLB range flushing costs and
would make it essentially a secondary symptom, not a primary cause of
the regression. (I'm only guessing blindly here though.)
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-16 10:24 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-16 10:24 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> I had hacked ebizzy to report on the performance of each thread, not
> just the overall result and worked out the difference in performance
> of each thread. In a complete fair test you would expect the
> performance of each thread to be identical and so the spread would
> be 0
>
> ebizzy thread spread
> 3.13.0-rc3 3.13.0-rc3 3.4.69
> vanilla nowalk-v2r7 vanilla
> Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> Mean 2 0.34 ( 0.00%) 0.30 (-11.76%) 0.07 (-79.41%)
> Mean 3 1.29 ( 0.00%) 0.92 (-28.68%) 0.29 (-77.52%)
> Mean 4 7.08 ( 0.00%) 42.38 (498.59%) 0.22 (-96.89%)
> Mean 5 193.54 ( 0.00%) 483.41 (149.77%) 0.41 (-99.79%)
> Mean 6 151.12 ( 0.00%) 198.22 ( 31.17%) 0.42 (-99.72%)
> Mean 7 115.38 ( 0.00%) 160.29 ( 38.92%) 0.58 (-99.50%)
> Mean 8 108.65 ( 0.00%) 138.96 ( 27.90%) 0.44 (-99.60%)
> Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> Range 2 5.00 ( 0.00%) 6.00 ( 20.00%) 2.00 (-60.00%)
> Range 3 10.00 ( 0.00%) 17.00 ( 70.00%) 9.00 (-10.00%)
> Range 4 256.00 ( 0.00%) 1001.00 (291.02%) 5.00 (-98.05%)
> Range 5 456.00 ( 0.00%) 1226.00 (168.86%) 6.00 (-98.68%)
> Range 6 298.00 ( 0.00%) 294.00 ( -1.34%) 8.00 (-97.32%)
> Range 7 192.00 ( 0.00%) 220.00 ( 14.58%) 7.00 (-96.35%)
> Range 8 171.00 ( 0.00%) 163.00 ( -4.68%) 8.00 (-95.32%)
> Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> Stddev 2 0.72 ( 0.00%) 0.85 (-17.99%) 0.29 ( 59.72%)
> Stddev 3 1.42 ( 0.00%) 1.90 (-34.22%) 1.12 ( 21.19%)
> Stddev 4 33.83 ( 0.00%) 127.26 (-276.15%) 0.79 ( 97.65%)
> Stddev 5 92.08 ( 0.00%) 225.01 (-144.35%) 1.06 ( 98.85%)
> Stddev 6 64.82 ( 0.00%) 69.43 ( -7.11%) 1.28 ( 98.02%)
> Stddev 7 36.66 ( 0.00%) 49.19 (-34.20%) 1.18 ( 96.79%)
> Stddev 8 30.79 ( 0.00%) 36.23 (-17.64%) 1.06 ( 96.55%)
>
> For example, this is saying that with 8 threads on 3.13-rc3 that the
> difference between the slowest and fastest thread was 171
> records/second.
We aren't blind fairness fetishists, but the noise difference between
v3.4 and v3.13 appears to be staggering, it's a serious anomaly in
itself.
Whatever we did right in v3.4 we want to do in v3.13 as well - or at
least understand it.
I agree that the absolute numbers would probably only be interesting
once v3.13 is fixed to not spread thread performance that wildly
again.
> [...] Because of this bug, I'd be wary about drawing too many
> conclusions about ebizzy performance when the number of threads
> exceed the number of CPUs.
Yes.
Could it be that the v3.13 workload context switches a lot more than
v3.4 workload? That would magnify any TLB range flushing costs and
would make it essentially a secondary symptom, not a primary cause of
the regression. (I'm only guessing blindly here though.)
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-13 22:38 ` H. Peter Anvin
@ 2013-12-16 10:39 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-16 10:39 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Linus Torvalds, Alex Shi, Ingo Molnar, Thomas Gleixner,
Andrew Morton, Fengguang Wu, Linux-X86, Linux-MM, LKML
On Fri, Dec 13, 2013 at 02:38:32PM -0800, H. Peter Anvin wrote:
> On 12/13/2013 01:16 PM, Linus Torvalds wrote:
> > On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
> >>
> >> ebizzy
> >> 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
> >> thread vanilla vanilla altershift-v2r1 nowalk-v2r7
> >> Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
> >> Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
> >> Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
> >> Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
> >> Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
> >> Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
> >> Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
> >> Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
> >
> > Hmm. Do you have any idea why 3.4.69 still seems to do better at
> > higher thread counts?
> >
> > No complaints about this patch-series, just wondering..
> >
>
> It would be really great to get some performance numbers on something
> other than ebizzy, though...
>
What do you suggest? I'd be interested in hearing what sort of tests
originally motivated the series. I picked a few different tests to see
what fell out. All of this was driven from mmtests so I can do a release
and point to the config files used if anyone wants to try reproducing it.
First was Alex's microbenchmark from https://lkml.org/lkml/2012/5/17/59
and ran it for a range of thread numbers, 320 iterations per thread with
random number of entires to flush. Results are from two machines
4 core: Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
8 core: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
Single socket in both cases, both ivybridge. Neither are high end but my
budget does not cover having high-end machines in my local test grid which
is bad but unavoidable.
On a 4 core machine
tlbflush
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 11.17 ( 0.00%) 10.52 ( 5.82%) 5.15 ( 53.93%)
Mean 2 11.70 ( 0.00%) 10.77 ( 7.99%) 10.30 ( 11.94%)
Mean 3 24.07 ( 0.00%) 22.42 ( 6.87%) 10.89 ( 54.74%)
Mean 4 40.48 ( 0.00%) 39.72 ( 1.88%) 19.51 ( 51.81%)
Range 1 7.00 ( 0.00%) 7.00 ( 0.00%) 5.00 ( 28.57%)
Range 2 44.00 ( 0.00%) 20.00 ( 54.55%) 23.00 ( 47.73%)
Range 3 13.00 ( 0.00%) 16.00 (-23.08%) 8.00 ( 38.46%)
Range 4 26.00 ( 0.00%) 32.00 (-23.08%) 11.00 ( 57.69%)
Stddev 1 1.49 ( 0.00%) 1.45 ( -2.83%) 0.52 (-65.22%)
Stddev 2 3.51 ( 0.00%) 2.20 (-37.20%) 7.46 (112.74%)
Stddev 3 1.84 ( 0.00%) 2.43 ( 32.46%) 1.34 (-26.96%)
Stddev 4 3.44 ( 0.00%) 4.61 ( 34.14%) 1.51 (-56.13%)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User 197.37 181.76 99.69
System 161.92 161.54 126.49
Elapsed 2741.19 2793.41 2749.12
Showing small gains on that machine but the variations are high enough
that we cannot be certain it's a real gain. The random number of entries
selection is what makes this noisy but picking a single number would
bias the test for the characteristics of a single machine.
Note that 3.4 is still just a lot better.
This was an 8-core machine
tlbflush
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 7.98 ( 0.00%) 8.54 ( -7.01%) 5.16 ( 35.36%)
Mean 2 7.82 ( 0.00%) 8.35 ( -6.84%) 5.81 ( 25.71%)
Mean 3 6.59 ( 0.00%) 7.80 (-18.36%) 5.58 ( 15.37%)
Mean 5 13.28 ( 0.00%) 12.85 ( 3.20%) 8.88 ( 33.15%)
Mean 8 32.50 ( 0.00%) 32.52 ( -0.04%) 19.92 ( 38.71%)
Range 1 7.00 ( 0.00%) 6.00 ( 14.29%) 3.00 ( 57.14%)
Range 2 8.00 ( 0.00%) 7.00 ( 12.50%) 18.00 (-125.00%)
Range 3 6.00 ( 0.00%) 7.00 (-16.67%) 7.00 (-16.67%)
Range 5 11.00 ( 0.00%) 20.00 (-81.82%) 9.00 ( 18.18%)
Range 8 35.00 ( 0.00%) 33.00 ( 5.71%) 8.00 ( 77.14%)
Stddev 1 1.31 ( 0.00%) 1.52 ( 15.75%) 0.48 (-63.66%)
Stddev 2 1.55 ( 0.00%) 1.52 ( -1.54%) 3.06 ( 98.14%)
Stddev 3 1.27 ( 0.00%) 1.61 ( 26.07%) 1.53 ( 20.16%)
Stddev 5 2.99 ( 0.00%) 2.63 (-11.97%) 2.56 (-14.38%)
Stddev 8 8.29 ( 0.00%) 6.51 (-21.46%) 1.23 (-85.15%)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User 316.01 341.55 205.00
System 249.25 273.16 203.79
Elapsed 3382.56 4398.20 3682.31
This is showing a mix of gains and losses with higher CPU usage to boot.
The figures are again within variations so difficult to be conclusive
about it. The system CPU usage is higher
The following is netperf running UDP_STREAM and TCP_STREAM on loopback on
the 4-core machine
netperf-udp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 179.14 ( 0.00%) 177.82 ( -0.74%) 207.16 ( 15.64%)
Tput 128 354.67 ( 0.00%) 350.04 ( -1.31%) 416.47 ( 17.42%)
Tput 256 712.01 ( 0.00%) 697.31 ( -2.06%) 828.11 ( 16.31%)
Tput 1024 2770.59 ( 0.00%) 2717.55 ( -1.91%) 3229.38 ( 16.56%)
Tput 2048 5328.83 ( 0.00%) 5255.81 ( -1.37%) 6183.69 ( 16.04%)
Tput 3312 8249.24 ( 0.00%) 8170.62 ( -0.95%) 9491.63 ( 15.06%)
Tput 4096 9865.98 ( 0.00%) 9760.41 ( -1.07%) 11348.02 ( 15.02%)
Tput 8192 17263.69 ( 0.00%) 17261.15 ( -0.01%) 19917.01 ( 15.37%)
Tput 16384 27274.61 ( 0.00%) 27283.01 ( 0.03%) 30785.56 ( 12.87%)
netperf-tcp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 1612.82 ( 0.00%) 1622.31 ( 0.59%) 1584.68 ( -1.74%)
Tput 128 3043.06 ( 0.00%) 3024.19 ( -0.62%) 2926.80 ( -3.82%)
Tput 256 5755.06 ( 0.00%) 5747.26 ( -0.14%) 5328.57 ( -7.41%)
Tput 1024 17662.03 ( 0.00%) 17778.94 ( 0.66%) 11963.09 (-32.27%)
Tput 2048 25382.69 ( 0.00%) 25464.23 ( 0.32%) 15043.90 (-40.73%)
Tput 3312 29990.79 ( 0.00%) 30135.56 ( 0.48%) 15731.78 (-47.54%)
Tput 4096 31612.33 ( 0.00%) 31775.74 ( 0.52%) 17626.10 (-44.24%)
Tput 8192 35366.99 ( 0.00%) 35425.15 ( 0.16%) 21060.61 (-40.45%)
Tput 16384 38547.25 ( 0.00%) 38441.09 ( -0.28%) 27925.43 (-27.56%)
Very marginal there. Something nuts happened with UDP and TCP processing
between 3.4 and 3.13 but this particular series' impact is marginal
8 core machine
netperf-udp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 328.25 ( 0.00%) 331.05 ( 0.85%) 383.97 ( 16.97%)
Tput 128 664.31 ( 0.00%) 659.58 ( -0.71%) 762.59 ( 14.79%)
Tput 256 1305.82 ( 0.00%) 1309.65 ( 0.29%) 1508.27 ( 15.50%)
Tput 1024 5110.17 ( 0.00%) 5081.82 ( -0.55%) 5775.96 ( 13.03%)
Tput 2048 9839.14 ( 0.00%) 10074.00 ( 2.39%) 11010.10 ( 11.90%)
Tput 3312 14787.70 ( 0.00%) 14850.59 ( 0.43%) 16821.29 ( 13.75%)
Tput 4096 17583.14 ( 0.00%) 17936.17 ( 2.01%) 20246.74 ( 15.15%)
Tput 8192 30165.48 ( 0.00%) 30386.78 ( 0.73%) 31904.81 ( 5.77%)
Tput 16384 48345.93 ( 0.00%) 48127.68 ( -0.45%) 48850.30 ( 1.04%)
netperf-tcp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 3064.32 ( 0.00%) 3149.22 ( 2.77%) 2701.19 (-11.85%)
Tput 128 5777.71 ( 0.00%) 5899.85 ( 2.11%) 4931.78 (-14.64%)
Tput 256 10330.00 ( 0.00%) 10567.97 ( 2.30%) 8388.28 (-18.80%)
Tput 1024 30744.90 ( 0.00%) 31084.37 ( 1.10%) 17496.95 (-43.09%)
Tput 2048 43064.86 ( 0.00%) 42916.90 ( -0.34%) 22227.42 (-48.39%)
Tput 3312 50473.85 ( 0.00%) 50388.37 ( -0.17%) 25154.14 (-50.16%)
Tput 4096 53909.70 ( 0.00%) 53965.40 ( 0.10%) 27328.49 (-49.31%)
Tput 8192 63303.83 ( 0.00%) 63152.88 ( -0.24%) 32078.71 (-49.33%)
Tput 16384 68632.11 ( 0.00%) 68063.05 ( -0.83%) 39758.01 (-42.07%)
Looks a bit more solid. I didn't post the figures but the elapsed times
are also lower implying that netperf is using fewer iterations to
measure results it is confident of
Next is a kernel build benchmark. I'd be very surprised if it was hitting
the relevant paths but I think people expect to see this benchmark so....
4 core machine
kernbench
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User min 714.10 ( 0.00%) 714.51 ( -0.06%) 706.83 ( 1.02%)
User mean 715.04 ( 0.00%) 714.75 ( 0.04%) 707.64 ( 1.04%)
User stddev 0.67 ( 0.00%) 0.25 ( 62.98%) 0.69 ( -3.40%)
User max 716.12 ( 0.00%) 715.22 ( 0.13%) 708.56 ( 1.06%)
User range 2.02 ( 0.00%) 0.71 ( 64.85%) 1.73 ( 14.36%)
System min 32.89 ( 0.00%) 32.50 ( 1.19%) 39.17 (-19.09%)
System mean 33.25 ( 0.00%) 32.75 ( 1.53%) 39.51 (-18.82%)
System stddev 0.25 ( 0.00%) 0.22 ( 14.73%) 0.28 (-11.29%)
System max 33.60 ( 0.00%) 33.12 ( 1.43%) 39.83 (-18.54%)
System range 0.71 ( 0.00%) 0.62 ( 12.68%) 0.66 ( 7.04%)
Elapsed min 195.70 ( 0.00%) 195.88 ( -0.09%) 195.84 ( -0.07%)
Elapsed mean 196.09 ( 0.00%) 195.97 ( 0.06%) 196.14 ( -0.03%)
Elapsed stddev 0.25 ( 0.00%) 0.06 ( 74.74%) 0.16 ( 33.94%)
Elapsed max 196.41 ( 0.00%) 196.07 ( 0.17%) 196.33 ( 0.04%)
Elapsed range 0.71 ( 0.00%) 0.19 ( 73.24%) 0.49 ( 30.99%)
CPU min 381.00 ( 0.00%) 381.00 ( 0.00%) 380.00 ( 0.26%)
CPU mean 381.00 ( 0.00%) 381.00 ( 0.00%) 380.40 ( 0.16%)
CPU stddev 0.00 ( 0.00%) 0.00 ( 0.00%) 0.49 (-99.00%)
CPU max 381.00 ( 0.00%) 381.00 ( 0.00%) 381.00 ( 0.00%)
CPU range 0.00 ( 0.00%) 0.00 ( 0.00%) 1.00 (-99.00%)
8 core machine
kernbench
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User min 632.94 ( 0.00%) 632.71 ( 0.04%) 681.00 ( -7.59%)
User mean 633.25 ( 0.00%) 633.41 ( -0.02%) 681.34 ( -7.59%)
User stddev 0.24 ( 0.00%) 0.55 (-124.00%) 0.34 (-39.88%)
User max 633.55 ( 0.00%) 634.14 ( -0.09%) 681.99 ( -7.65%)
User range 0.61 ( 0.00%) 1.43 (-134.43%) 0.99 (-62.30%)
System min 29.74 ( 0.00%) 29.76 ( -0.07%) 38.24 (-28.58%)
System mean 30.12 ( 0.00%) 30.22 ( -0.32%) 38.55 (-27.99%)
System stddev 0.22 ( 0.00%) 0.24 (-11.04%) 0.25 (-14.10%)
System max 30.39 ( 0.00%) 30.48 ( -0.30%) 38.87 (-27.90%)
System range 0.65 ( 0.00%) 0.72 (-10.77%) 0.63 ( 3.08%)
Elapsed min 88.40 ( 0.00%) 88.47 ( -0.08%) 95.81 ( -8.38%)
Elapsed mean 88.55 ( 0.00%) 88.72 ( -0.20%) 96.01 ( -8.43%)
Elapsed stddev 0.10 ( 0.00%) 0.15 (-46.20%) 0.23 (-125.69%)
Elapsed max 88.72 ( 0.00%) 88.88 ( -0.18%) 96.30 ( -8.54%)
Elapsed range 0.32 ( 0.00%) 0.41 (-28.13%) 0.49 (-53.13%)
CPU min 747.00 ( 0.00%) 746.00 ( 0.13%) 747.00 ( 0.00%)
CPU mean 748.80 ( 0.00%) 747.60 ( 0.16%) 749.20 ( -0.05%)
CPU stddev 0.98 ( 0.00%) 1.36 (-38.44%) 1.47 (-50.00%)
CPU max 750.00 ( 0.00%) 750.00 ( 0.00%) 751.00 ( -0.13%)
CPU range 3.00 ( 0.00%) 4.00 (-33.33%) 4.00 (-33.33%)
Yup, nothing there worth getting excited about although slightly amusing
to note that we've improved kernel build times since 3.4.69 if nothing
else. We're all over the performance of that!
This is a modified ebizzy benchmark to give a breakdown of per-thread
performance.
4 core machine
ebizzy total throughput (higher the better)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 6366.88 ( 0.00%) 6741.00 ( 5.88%) 6658.32 ( 4.58%)
Mean 2 6917.56 ( 0.00%) 7952.29 ( 14.96%) 8120.79 ( 17.39%)
Mean 3 6231.78 ( 0.00%) 6846.08 ( 9.86%) 7174.98 ( 15.14%)
Mean 4 5887.91 ( 0.00%) 6503.12 ( 10.45%) 6903.05 ( 17.24%)
Mean 5 5680.77 ( 0.00%) 6185.83 ( 8.89%) 6549.15 ( 15.29%)
Mean 6 5692.87 ( 0.00%) 6249.48 ( 9.78%) 6442.21 ( 13.16%)
Mean 7 5846.76 ( 0.00%) 6344.94 ( 8.52%) 6279.13 ( 7.40%)
Mean 8 5974.57 ( 0.00%) 6406.28 ( 7.23%) 6265.29 ( 4.87%)
Range 1 174.00 ( 0.00%) 202.00 (-16.09%) 806.00 (-363.22%)
Range 2 286.00 ( 0.00%) 979.00 (-242.31%) 1255.00 (-338.81%)
Range 3 530.00 ( 0.00%) 583.00 (-10.00%) 626.00 (-18.11%)
Range 4 592.00 ( 0.00%) 691.00 (-16.72%) 630.00 ( -6.42%)
Range 5 567.00 ( 0.00%) 417.00 ( 26.46%) 584.00 ( -3.00%)
Range 6 588.00 ( 0.00%) 353.00 ( 39.97%) 439.00 ( 25.34%)
Range 7 477.00 ( 0.00%) 284.00 ( 40.46%) 343.00 ( 28.09%)
Range 8 408.00 ( 0.00%) 182.00 ( 55.39%) 237.00 ( 41.91%)
Stddev 1 31.59 ( 0.00%) 32.94 ( -4.27%) 154.26 (-388.34%)
Stddev 2 56.95 ( 0.00%) 136.79 (-140.19%) 194.45 (-241.43%)
Stddev 3 132.28 ( 0.00%) 101.02 ( 23.63%) 106.60 ( 19.41%)
Stddev 4 140.93 ( 0.00%) 136.11 ( 3.42%) 138.26 ( 1.90%)
Stddev 5 118.58 ( 0.00%) 86.74 ( 26.85%) 111.73 ( 5.77%)
Stddev 6 109.64 ( 0.00%) 77.49 ( 29.32%) 95.52 ( 12.87%)
Stddev 7 103.91 ( 0.00%) 51.44 ( 50.50%) 54.43 ( 47.62%)
Stddev 8 67.79 ( 0.00%) 31.34 ( 53.76%) 53.08 ( 21.69%)
4 core machine
ebizzy Thread spread (closer to 0, the more fair it is)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.30 ( 11.76%) 0.07 ( 79.41%)
Mean 3 1.29 ( 0.00%) 0.92 ( 28.68%) 0.29 ( 77.52%)
Mean 4 7.08 ( 0.00%) 42.38 (-498.59%) 0.22 ( 96.89%)
Mean 5 193.54 ( 0.00%) 483.41 (-149.77%) 0.41 ( 99.79%)
Mean 6 151.12 ( 0.00%) 198.22 (-31.17%) 0.42 ( 99.72%)
Mean 7 115.38 ( 0.00%) 160.29 (-38.92%) 0.58 ( 99.50%)
Mean 8 108.65 ( 0.00%) 138.96 (-27.90%) 0.44 ( 99.60%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 5.00 ( 0.00%) 6.00 (-20.00%) 2.00 ( 60.00%)
Range 3 10.00 ( 0.00%) 17.00 (-70.00%) 9.00 ( 10.00%)
Range 4 256.00 ( 0.00%) 1001.00 (-291.02%) 5.00 ( 98.05%)
Range 5 456.00 ( 0.00%) 1226.00 (-168.86%) 6.00 ( 98.68%)
Range 6 298.00 ( 0.00%) 294.00 ( 1.34%) 8.00 ( 97.32%)
Range 7 192.00 ( 0.00%) 220.00 (-14.58%) 7.00 ( 96.35%)
Range 8 171.00 ( 0.00%) 163.00 ( 4.68%) 8.00 ( 95.32%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.72 ( 0.00%) 0.85 ( 17.99%) 0.29 (-59.72%)
Stddev 3 1.42 ( 0.00%) 1.90 ( 34.22%) 1.12 (-21.19%)
Stddev 4 33.83 ( 0.00%) 127.26 (276.15%) 0.79 (-97.65%)
Stddev 5 92.08 ( 0.00%) 225.01 (144.35%) 1.06 (-98.85%)
Stddev 6 64.82 ( 0.00%) 69.43 ( 7.11%) 1.28 (-98.02%)
Stddev 7 36.66 ( 0.00%) 49.19 ( 34.20%) 1.18 (-96.79%)
Stddev 8 30.79 ( 0.00%) 36.23 ( 17.64%) 1.06 (-96.55%)
Three things to note here. The spread goes to hell when there are more
workload threads than cores. Second, the patch is actually making the
spread and thread fairness worse. Third, the fact that there is spread at
all is bad because 3.4.69 experienced no such problem
8 core machine
ebizzy
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 7295.77 ( 0.00%) 7835.63 ( 7.40%) 6713.32 ( -7.98%)
Mean 2 8252.58 ( 0.00%) 9554.63 ( 15.78%) 8334.43 ( 0.99%)
Mean 3 8179.74 ( 0.00%) 9032.46 ( 10.42%) 8134.42 ( -0.55%)
Mean 4 7862.45 ( 0.00%) 8688.01 ( 10.50%) 7966.27 ( 1.32%)
Mean 5 7170.24 ( 0.00%) 8216.15 ( 14.59%) 7820.63 ( 9.07%)
Mean 6 6835.10 ( 0.00%) 7866.95 ( 15.10%) 7773.30 ( 13.73%)
Mean 7 6740.99 ( 0.00%) 7586.36 ( 12.54%) 7712.45 ( 14.41%)
Mean 8 6494.01 ( 0.00%) 6849.82 ( 5.48%) 7705.62 ( 18.66%)
Mean 12 6567.37 ( 0.00%) 6973.66 ( 6.19%) 7554.82 ( 15.04%)
Mean 16 6630.26 ( 0.00%) 7042.52 ( 6.22%) 7331.04 ( 10.57%)
Range 1 767.00 ( 0.00%) 194.00 ( 74.71%) 661.00 ( 13.82%)
Range 2 178.00 ( 0.00%) 185.00 ( -3.93%) 592.00 (-232.58%)
Range 3 175.00 ( 0.00%) 213.00 (-21.71%) 431.00 (-146.29%)
Range 4 806.00 ( 0.00%) 924.00 (-14.64%) 542.00 ( 32.75%)
Range 5 544.00 ( 0.00%) 438.00 ( 19.49%) 444.00 ( 18.38%)
Range 6 399.00 ( 0.00%) 1111.00 (-178.45%) 528.00 (-32.33%)
Range 7 629.00 ( 0.00%) 895.00 (-42.29%) 467.00 ( 25.76%)
Range 8 400.00 ( 0.00%) 255.00 ( 36.25%) 435.00 ( -8.75%)
Range 12 233.00 ( 0.00%) 108.00 ( 53.65%) 330.00 (-41.63%)
Range 16 141.00 ( 0.00%) 134.00 ( 4.96%) 496.00 (-251.77%)
Stddev 1 73.94 ( 0.00%) 52.33 ( 29.23%) 177.17 (-139.59%)
Stddev 2 23.47 ( 0.00%) 42.08 (-79.24%) 88.91 (-278.74%)
Stddev 3 36.48 ( 0.00%) 29.02 ( 20.45%) 101.07 (-177.05%)
Stddev 4 158.37 ( 0.00%) 133.99 ( 15.40%) 130.52 ( 17.59%)
Stddev 5 116.74 ( 0.00%) 76.76 ( 34.25%) 78.31 ( 32.92%)
Stddev 6 66.34 ( 0.00%) 273.87 (-312.83%) 87.79 (-32.33%)
Stddev 7 145.62 ( 0.00%) 174.99 (-20.16%) 90.52 ( 37.84%)
Stddev 8 68.51 ( 0.00%) 47.58 ( 30.54%) 81.11 (-18.39%)
Stddev 12 32.15 ( 0.00%) 20.18 ( 37.22%) 65.74 (-104.50%)
Stddev 16 21.59 ( 0.00%) 20.29 ( 6.01%) 86.42 (-300.25%)
Patch series shows the strongest performance gain here. Not surprising
considering this was the machine and test that first motivated the
series. 3.4.69 is still a lot better.
ebizzy Thread spread
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.35 ( 12.50%) 0.13 ( 67.50%)
Mean 3 23.73 ( 0.00%) 0.46 ( 98.06%) 0.26 ( 98.90%)
Mean 4 12.79 ( 0.00%) 1.40 ( 89.05%) 0.67 ( 94.76%)
Mean 5 13.08 ( 0.00%) 4.06 ( 68.96%) 0.36 ( 97.25%)
Mean 6 23.21 ( 0.00%) 136.62 (-488.63%) 1.13 ( 95.13%)
Mean 7 15.85 ( 0.00%) 203.46 (-1183.66%) 1.51 ( 90.47%)
Mean 8 109.37 ( 0.00%) 47.75 ( 56.34%) 1.05 ( 99.04%)
Mean 12 124.84 ( 0.00%) 120.55 ( 3.44%) 0.59 ( 99.53%)
Mean 16 113.50 ( 0.00%) 109.60 ( 3.44%) 0.49 ( 99.57%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 3.00 ( 0.00%) 11.00 (-266.67%) 1.00 ( 66.67%)
Range 3 80.00 ( 0.00%) 5.00 ( 93.75%) 1.00 ( 98.75%)
Range 4 38.00 ( 0.00%) 5.00 ( 86.84%) 2.00 ( 94.74%)
Range 5 37.00 ( 0.00%) 21.00 ( 43.24%) 1.00 ( 97.30%)
Range 6 46.00 ( 0.00%) 927.00 (-1915.22%) 8.00 ( 82.61%)
Range 7 28.00 ( 0.00%) 716.00 (-2457.14%) 36.00 (-28.57%)
Range 8 325.00 ( 0.00%) 315.00 ( 3.08%) 26.00 ( 92.00%)
Range 12 160.00 ( 0.00%) 151.00 ( 5.62%) 5.00 ( 96.88%)
Range 16 108.00 ( 0.00%) 123.00 (-13.89%) 1.00 ( 99.07%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.62 ( 0.00%) 1.18 ( 91.08%) 0.34 (-45.44%)
Stddev 3 17.40 ( 0.00%) 0.81 (-95.37%) 0.44 (-97.48%)
Stddev 4 8.52 ( 0.00%) 1.05 (-87.69%) 0.51 (-94.00%)
Stddev 5 7.91 ( 0.00%) 3.94 (-50.20%) 0.48 (-93.93%)
Stddev 6 7.11 ( 0.00%) 174.18 (2348.91%) 1.48 (-79.18%)
Stddev 7 5.90 ( 0.00%) 139.48 (2263.45%) 4.12 (-30.24%)
Stddev 8 80.95 ( 0.00%) 58.03 (-28.32%) 2.65 (-96.72%)
Stddev 12 31.48 ( 0.00%) 33.78 ( 7.30%) 0.66 (-97.89%)
Stddev 16 24.32 ( 0.00%) 26.22 ( 7.79%) 0.50 (-97.94%)
Again, while overall performance is better, the spread of performance
between threads is worse but the fact that there is spread at all is
bad.
So overall to me it looks like the series still stands. The clearest result
was from ebizzy which is an adverse workload in this specific case because
of the size of the TLBs involved. The performance of individual threads
is a big concern but I can bisect for that separately and see what falls out.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-16 10:39 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-16 10:39 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Linus Torvalds, Alex Shi, Ingo Molnar, Thomas Gleixner,
Andrew Morton, Fengguang Wu, Linux-X86, Linux-MM, LKML
On Fri, Dec 13, 2013 at 02:38:32PM -0800, H. Peter Anvin wrote:
> On 12/13/2013 01:16 PM, Linus Torvalds wrote:
> > On Fri, Dec 13, 2013 at 12:01 PM, Mel Gorman <mgorman@suse.de> wrote:
> >>
> >> ebizzy
> >> 3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
> >> thread vanilla vanilla altershift-v2r1 nowalk-v2r7
> >> Mean 1 7377.91 ( 0.00%) 6812.38 ( -7.67%) 7784.45 ( 5.51%) 7804.08 ( 5.78%)
> >> Mean 2 8262.07 ( 0.00%) 8276.75 ( 0.18%) 9437.49 ( 14.23%) 9450.88 ( 14.39%)
> >> Mean 3 7895.00 ( 0.00%) 8002.84 ( 1.37%) 8875.38 ( 12.42%) 8914.60 ( 12.91%)
> >> Mean 4 7658.74 ( 0.00%) 7824.83 ( 2.17%) 8509.10 ( 11.10%) 8399.43 ( 9.67%)
> >> Mean 5 7275.37 ( 0.00%) 7678.74 ( 5.54%) 8208.94 ( 12.83%) 8197.86 ( 12.68%)
> >> Mean 6 6875.50 ( 0.00%) 7597.18 ( 10.50%) 7755.66 ( 12.80%) 7807.51 ( 13.56%)
> >> Mean 7 6722.48 ( 0.00%) 7584.75 ( 12.83%) 7456.93 ( 10.93%) 7480.74 ( 11.28%)
> >> Mean 8 6559.55 ( 0.00%) 7591.51 ( 15.73%) 6879.01 ( 4.87%) 6881.86 ( 4.91%)
> >
> > Hmm. Do you have any idea why 3.4.69 still seems to do better at
> > higher thread counts?
> >
> > No complaints about this patch-series, just wondering..
> >
>
> It would be really great to get some performance numbers on something
> other than ebizzy, though...
>
What do you suggest? I'd be interested in hearing what sort of tests
originally motivated the series. I picked a few different tests to see
what fell out. All of this was driven from mmtests so I can do a release
and point to the config files used if anyone wants to try reproducing it.
First was Alex's microbenchmark from https://lkml.org/lkml/2012/5/17/59
and ran it for a range of thread numbers, 320 iterations per thread with
random number of entires to flush. Results are from two machines
4 core: Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
8 core: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz
Single socket in both cases, both ivybridge. Neither are high end but my
budget does not cover having high-end machines in my local test grid which
is bad but unavoidable.
On a 4 core machine
tlbflush
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 11.17 ( 0.00%) 10.52 ( 5.82%) 5.15 ( 53.93%)
Mean 2 11.70 ( 0.00%) 10.77 ( 7.99%) 10.30 ( 11.94%)
Mean 3 24.07 ( 0.00%) 22.42 ( 6.87%) 10.89 ( 54.74%)
Mean 4 40.48 ( 0.00%) 39.72 ( 1.88%) 19.51 ( 51.81%)
Range 1 7.00 ( 0.00%) 7.00 ( 0.00%) 5.00 ( 28.57%)
Range 2 44.00 ( 0.00%) 20.00 ( 54.55%) 23.00 ( 47.73%)
Range 3 13.00 ( 0.00%) 16.00 (-23.08%) 8.00 ( 38.46%)
Range 4 26.00 ( 0.00%) 32.00 (-23.08%) 11.00 ( 57.69%)
Stddev 1 1.49 ( 0.00%) 1.45 ( -2.83%) 0.52 (-65.22%)
Stddev 2 3.51 ( 0.00%) 2.20 (-37.20%) 7.46 (112.74%)
Stddev 3 1.84 ( 0.00%) 2.43 ( 32.46%) 1.34 (-26.96%)
Stddev 4 3.44 ( 0.00%) 4.61 ( 34.14%) 1.51 (-56.13%)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User 197.37 181.76 99.69
System 161.92 161.54 126.49
Elapsed 2741.19 2793.41 2749.12
Showing small gains on that machine but the variations are high enough
that we cannot be certain it's a real gain. The random number of entries
selection is what makes this noisy but picking a single number would
bias the test for the characteristics of a single machine.
Note that 3.4 is still just a lot better.
This was an 8-core machine
tlbflush
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 7.98 ( 0.00%) 8.54 ( -7.01%) 5.16 ( 35.36%)
Mean 2 7.82 ( 0.00%) 8.35 ( -6.84%) 5.81 ( 25.71%)
Mean 3 6.59 ( 0.00%) 7.80 (-18.36%) 5.58 ( 15.37%)
Mean 5 13.28 ( 0.00%) 12.85 ( 3.20%) 8.88 ( 33.15%)
Mean 8 32.50 ( 0.00%) 32.52 ( -0.04%) 19.92 ( 38.71%)
Range 1 7.00 ( 0.00%) 6.00 ( 14.29%) 3.00 ( 57.14%)
Range 2 8.00 ( 0.00%) 7.00 ( 12.50%) 18.00 (-125.00%)
Range 3 6.00 ( 0.00%) 7.00 (-16.67%) 7.00 (-16.67%)
Range 5 11.00 ( 0.00%) 20.00 (-81.82%) 9.00 ( 18.18%)
Range 8 35.00 ( 0.00%) 33.00 ( 5.71%) 8.00 ( 77.14%)
Stddev 1 1.31 ( 0.00%) 1.52 ( 15.75%) 0.48 (-63.66%)
Stddev 2 1.55 ( 0.00%) 1.52 ( -1.54%) 3.06 ( 98.14%)
Stddev 3 1.27 ( 0.00%) 1.61 ( 26.07%) 1.53 ( 20.16%)
Stddev 5 2.99 ( 0.00%) 2.63 (-11.97%) 2.56 (-14.38%)
Stddev 8 8.29 ( 0.00%) 6.51 (-21.46%) 1.23 (-85.15%)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User 316.01 341.55 205.00
System 249.25 273.16 203.79
Elapsed 3382.56 4398.20 3682.31
This is showing a mix of gains and losses with higher CPU usage to boot.
The figures are again within variations so difficult to be conclusive
about it. The system CPU usage is higher
The following is netperf running UDP_STREAM and TCP_STREAM on loopback on
the 4-core machine
netperf-udp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 179.14 ( 0.00%) 177.82 ( -0.74%) 207.16 ( 15.64%)
Tput 128 354.67 ( 0.00%) 350.04 ( -1.31%) 416.47 ( 17.42%)
Tput 256 712.01 ( 0.00%) 697.31 ( -2.06%) 828.11 ( 16.31%)
Tput 1024 2770.59 ( 0.00%) 2717.55 ( -1.91%) 3229.38 ( 16.56%)
Tput 2048 5328.83 ( 0.00%) 5255.81 ( -1.37%) 6183.69 ( 16.04%)
Tput 3312 8249.24 ( 0.00%) 8170.62 ( -0.95%) 9491.63 ( 15.06%)
Tput 4096 9865.98 ( 0.00%) 9760.41 ( -1.07%) 11348.02 ( 15.02%)
Tput 8192 17263.69 ( 0.00%) 17261.15 ( -0.01%) 19917.01 ( 15.37%)
Tput 16384 27274.61 ( 0.00%) 27283.01 ( 0.03%) 30785.56 ( 12.87%)
netperf-tcp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 1612.82 ( 0.00%) 1622.31 ( 0.59%) 1584.68 ( -1.74%)
Tput 128 3043.06 ( 0.00%) 3024.19 ( -0.62%) 2926.80 ( -3.82%)
Tput 256 5755.06 ( 0.00%) 5747.26 ( -0.14%) 5328.57 ( -7.41%)
Tput 1024 17662.03 ( 0.00%) 17778.94 ( 0.66%) 11963.09 (-32.27%)
Tput 2048 25382.69 ( 0.00%) 25464.23 ( 0.32%) 15043.90 (-40.73%)
Tput 3312 29990.79 ( 0.00%) 30135.56 ( 0.48%) 15731.78 (-47.54%)
Tput 4096 31612.33 ( 0.00%) 31775.74 ( 0.52%) 17626.10 (-44.24%)
Tput 8192 35366.99 ( 0.00%) 35425.15 ( 0.16%) 21060.61 (-40.45%)
Tput 16384 38547.25 ( 0.00%) 38441.09 ( -0.28%) 27925.43 (-27.56%)
Very marginal there. Something nuts happened with UDP and TCP processing
between 3.4 and 3.13 but this particular series' impact is marginal
8 core machine
netperf-udp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 328.25 ( 0.00%) 331.05 ( 0.85%) 383.97 ( 16.97%)
Tput 128 664.31 ( 0.00%) 659.58 ( -0.71%) 762.59 ( 14.79%)
Tput 256 1305.82 ( 0.00%) 1309.65 ( 0.29%) 1508.27 ( 15.50%)
Tput 1024 5110.17 ( 0.00%) 5081.82 ( -0.55%) 5775.96 ( 13.03%)
Tput 2048 9839.14 ( 0.00%) 10074.00 ( 2.39%) 11010.10 ( 11.90%)
Tput 3312 14787.70 ( 0.00%) 14850.59 ( 0.43%) 16821.29 ( 13.75%)
Tput 4096 17583.14 ( 0.00%) 17936.17 ( 2.01%) 20246.74 ( 15.15%)
Tput 8192 30165.48 ( 0.00%) 30386.78 ( 0.73%) 31904.81 ( 5.77%)
Tput 16384 48345.93 ( 0.00%) 48127.68 ( -0.45%) 48850.30 ( 1.04%)
netperf-tcp
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Tput 64 3064.32 ( 0.00%) 3149.22 ( 2.77%) 2701.19 (-11.85%)
Tput 128 5777.71 ( 0.00%) 5899.85 ( 2.11%) 4931.78 (-14.64%)
Tput 256 10330.00 ( 0.00%) 10567.97 ( 2.30%) 8388.28 (-18.80%)
Tput 1024 30744.90 ( 0.00%) 31084.37 ( 1.10%) 17496.95 (-43.09%)
Tput 2048 43064.86 ( 0.00%) 42916.90 ( -0.34%) 22227.42 (-48.39%)
Tput 3312 50473.85 ( 0.00%) 50388.37 ( -0.17%) 25154.14 (-50.16%)
Tput 4096 53909.70 ( 0.00%) 53965.40 ( 0.10%) 27328.49 (-49.31%)
Tput 8192 63303.83 ( 0.00%) 63152.88 ( -0.24%) 32078.71 (-49.33%)
Tput 16384 68632.11 ( 0.00%) 68063.05 ( -0.83%) 39758.01 (-42.07%)
Looks a bit more solid. I didn't post the figures but the elapsed times
are also lower implying that netperf is using fewer iterations to
measure results it is confident of
Next is a kernel build benchmark. I'd be very surprised if it was hitting
the relevant paths but I think people expect to see this benchmark so....
4 core machine
kernbench
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User min 714.10 ( 0.00%) 714.51 ( -0.06%) 706.83 ( 1.02%)
User mean 715.04 ( 0.00%) 714.75 ( 0.04%) 707.64 ( 1.04%)
User stddev 0.67 ( 0.00%) 0.25 ( 62.98%) 0.69 ( -3.40%)
User max 716.12 ( 0.00%) 715.22 ( 0.13%) 708.56 ( 1.06%)
User range 2.02 ( 0.00%) 0.71 ( 64.85%) 1.73 ( 14.36%)
System min 32.89 ( 0.00%) 32.50 ( 1.19%) 39.17 (-19.09%)
System mean 33.25 ( 0.00%) 32.75 ( 1.53%) 39.51 (-18.82%)
System stddev 0.25 ( 0.00%) 0.22 ( 14.73%) 0.28 (-11.29%)
System max 33.60 ( 0.00%) 33.12 ( 1.43%) 39.83 (-18.54%)
System range 0.71 ( 0.00%) 0.62 ( 12.68%) 0.66 ( 7.04%)
Elapsed min 195.70 ( 0.00%) 195.88 ( -0.09%) 195.84 ( -0.07%)
Elapsed mean 196.09 ( 0.00%) 195.97 ( 0.06%) 196.14 ( -0.03%)
Elapsed stddev 0.25 ( 0.00%) 0.06 ( 74.74%) 0.16 ( 33.94%)
Elapsed max 196.41 ( 0.00%) 196.07 ( 0.17%) 196.33 ( 0.04%)
Elapsed range 0.71 ( 0.00%) 0.19 ( 73.24%) 0.49 ( 30.99%)
CPU min 381.00 ( 0.00%) 381.00 ( 0.00%) 380.00 ( 0.26%)
CPU mean 381.00 ( 0.00%) 381.00 ( 0.00%) 380.40 ( 0.16%)
CPU stddev 0.00 ( 0.00%) 0.00 ( 0.00%) 0.49 (-99.00%)
CPU max 381.00 ( 0.00%) 381.00 ( 0.00%) 381.00 ( 0.00%)
CPU range 0.00 ( 0.00%) 0.00 ( 0.00%) 1.00 (-99.00%)
8 core machine
kernbench
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
User min 632.94 ( 0.00%) 632.71 ( 0.04%) 681.00 ( -7.59%)
User mean 633.25 ( 0.00%) 633.41 ( -0.02%) 681.34 ( -7.59%)
User stddev 0.24 ( 0.00%) 0.55 (-124.00%) 0.34 (-39.88%)
User max 633.55 ( 0.00%) 634.14 ( -0.09%) 681.99 ( -7.65%)
User range 0.61 ( 0.00%) 1.43 (-134.43%) 0.99 (-62.30%)
System min 29.74 ( 0.00%) 29.76 ( -0.07%) 38.24 (-28.58%)
System mean 30.12 ( 0.00%) 30.22 ( -0.32%) 38.55 (-27.99%)
System stddev 0.22 ( 0.00%) 0.24 (-11.04%) 0.25 (-14.10%)
System max 30.39 ( 0.00%) 30.48 ( -0.30%) 38.87 (-27.90%)
System range 0.65 ( 0.00%) 0.72 (-10.77%) 0.63 ( 3.08%)
Elapsed min 88.40 ( 0.00%) 88.47 ( -0.08%) 95.81 ( -8.38%)
Elapsed mean 88.55 ( 0.00%) 88.72 ( -0.20%) 96.01 ( -8.43%)
Elapsed stddev 0.10 ( 0.00%) 0.15 (-46.20%) 0.23 (-125.69%)
Elapsed max 88.72 ( 0.00%) 88.88 ( -0.18%) 96.30 ( -8.54%)
Elapsed range 0.32 ( 0.00%) 0.41 (-28.13%) 0.49 (-53.13%)
CPU min 747.00 ( 0.00%) 746.00 ( 0.13%) 747.00 ( 0.00%)
CPU mean 748.80 ( 0.00%) 747.60 ( 0.16%) 749.20 ( -0.05%)
CPU stddev 0.98 ( 0.00%) 1.36 (-38.44%) 1.47 (-50.00%)
CPU max 750.00 ( 0.00%) 750.00 ( 0.00%) 751.00 ( -0.13%)
CPU range 3.00 ( 0.00%) 4.00 (-33.33%) 4.00 (-33.33%)
Yup, nothing there worth getting excited about although slightly amusing
to note that we've improved kernel build times since 3.4.69 if nothing
else. We're all over the performance of that!
This is a modified ebizzy benchmark to give a breakdown of per-thread
performance.
4 core machine
ebizzy total throughput (higher the better)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 6366.88 ( 0.00%) 6741.00 ( 5.88%) 6658.32 ( 4.58%)
Mean 2 6917.56 ( 0.00%) 7952.29 ( 14.96%) 8120.79 ( 17.39%)
Mean 3 6231.78 ( 0.00%) 6846.08 ( 9.86%) 7174.98 ( 15.14%)
Mean 4 5887.91 ( 0.00%) 6503.12 ( 10.45%) 6903.05 ( 17.24%)
Mean 5 5680.77 ( 0.00%) 6185.83 ( 8.89%) 6549.15 ( 15.29%)
Mean 6 5692.87 ( 0.00%) 6249.48 ( 9.78%) 6442.21 ( 13.16%)
Mean 7 5846.76 ( 0.00%) 6344.94 ( 8.52%) 6279.13 ( 7.40%)
Mean 8 5974.57 ( 0.00%) 6406.28 ( 7.23%) 6265.29 ( 4.87%)
Range 1 174.00 ( 0.00%) 202.00 (-16.09%) 806.00 (-363.22%)
Range 2 286.00 ( 0.00%) 979.00 (-242.31%) 1255.00 (-338.81%)
Range 3 530.00 ( 0.00%) 583.00 (-10.00%) 626.00 (-18.11%)
Range 4 592.00 ( 0.00%) 691.00 (-16.72%) 630.00 ( -6.42%)
Range 5 567.00 ( 0.00%) 417.00 ( 26.46%) 584.00 ( -3.00%)
Range 6 588.00 ( 0.00%) 353.00 ( 39.97%) 439.00 ( 25.34%)
Range 7 477.00 ( 0.00%) 284.00 ( 40.46%) 343.00 ( 28.09%)
Range 8 408.00 ( 0.00%) 182.00 ( 55.39%) 237.00 ( 41.91%)
Stddev 1 31.59 ( 0.00%) 32.94 ( -4.27%) 154.26 (-388.34%)
Stddev 2 56.95 ( 0.00%) 136.79 (-140.19%) 194.45 (-241.43%)
Stddev 3 132.28 ( 0.00%) 101.02 ( 23.63%) 106.60 ( 19.41%)
Stddev 4 140.93 ( 0.00%) 136.11 ( 3.42%) 138.26 ( 1.90%)
Stddev 5 118.58 ( 0.00%) 86.74 ( 26.85%) 111.73 ( 5.77%)
Stddev 6 109.64 ( 0.00%) 77.49 ( 29.32%) 95.52 ( 12.87%)
Stddev 7 103.91 ( 0.00%) 51.44 ( 50.50%) 54.43 ( 47.62%)
Stddev 8 67.79 ( 0.00%) 31.34 ( 53.76%) 53.08 ( 21.69%)
4 core machine
ebizzy Thread spread (closer to 0, the more fair it is)
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.30 ( 11.76%) 0.07 ( 79.41%)
Mean 3 1.29 ( 0.00%) 0.92 ( 28.68%) 0.29 ( 77.52%)
Mean 4 7.08 ( 0.00%) 42.38 (-498.59%) 0.22 ( 96.89%)
Mean 5 193.54 ( 0.00%) 483.41 (-149.77%) 0.41 ( 99.79%)
Mean 6 151.12 ( 0.00%) 198.22 (-31.17%) 0.42 ( 99.72%)
Mean 7 115.38 ( 0.00%) 160.29 (-38.92%) 0.58 ( 99.50%)
Mean 8 108.65 ( 0.00%) 138.96 (-27.90%) 0.44 ( 99.60%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 5.00 ( 0.00%) 6.00 (-20.00%) 2.00 ( 60.00%)
Range 3 10.00 ( 0.00%) 17.00 (-70.00%) 9.00 ( 10.00%)
Range 4 256.00 ( 0.00%) 1001.00 (-291.02%) 5.00 ( 98.05%)
Range 5 456.00 ( 0.00%) 1226.00 (-168.86%) 6.00 ( 98.68%)
Range 6 298.00 ( 0.00%) 294.00 ( 1.34%) 8.00 ( 97.32%)
Range 7 192.00 ( 0.00%) 220.00 (-14.58%) 7.00 ( 96.35%)
Range 8 171.00 ( 0.00%) 163.00 ( 4.68%) 8.00 ( 95.32%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.72 ( 0.00%) 0.85 ( 17.99%) 0.29 (-59.72%)
Stddev 3 1.42 ( 0.00%) 1.90 ( 34.22%) 1.12 (-21.19%)
Stddev 4 33.83 ( 0.00%) 127.26 (276.15%) 0.79 (-97.65%)
Stddev 5 92.08 ( 0.00%) 225.01 (144.35%) 1.06 (-98.85%)
Stddev 6 64.82 ( 0.00%) 69.43 ( 7.11%) 1.28 (-98.02%)
Stddev 7 36.66 ( 0.00%) 49.19 ( 34.20%) 1.18 (-96.79%)
Stddev 8 30.79 ( 0.00%) 36.23 ( 17.64%) 1.06 (-96.55%)
Three things to note here. The spread goes to hell when there are more
workload threads than cores. Second, the patch is actually making the
spread and thread fairness worse. Third, the fact that there is spread at
all is bad because 3.4.69 experienced no such problem
8 core machine
ebizzy
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 7295.77 ( 0.00%) 7835.63 ( 7.40%) 6713.32 ( -7.98%)
Mean 2 8252.58 ( 0.00%) 9554.63 ( 15.78%) 8334.43 ( 0.99%)
Mean 3 8179.74 ( 0.00%) 9032.46 ( 10.42%) 8134.42 ( -0.55%)
Mean 4 7862.45 ( 0.00%) 8688.01 ( 10.50%) 7966.27 ( 1.32%)
Mean 5 7170.24 ( 0.00%) 8216.15 ( 14.59%) 7820.63 ( 9.07%)
Mean 6 6835.10 ( 0.00%) 7866.95 ( 15.10%) 7773.30 ( 13.73%)
Mean 7 6740.99 ( 0.00%) 7586.36 ( 12.54%) 7712.45 ( 14.41%)
Mean 8 6494.01 ( 0.00%) 6849.82 ( 5.48%) 7705.62 ( 18.66%)
Mean 12 6567.37 ( 0.00%) 6973.66 ( 6.19%) 7554.82 ( 15.04%)
Mean 16 6630.26 ( 0.00%) 7042.52 ( 6.22%) 7331.04 ( 10.57%)
Range 1 767.00 ( 0.00%) 194.00 ( 74.71%) 661.00 ( 13.82%)
Range 2 178.00 ( 0.00%) 185.00 ( -3.93%) 592.00 (-232.58%)
Range 3 175.00 ( 0.00%) 213.00 (-21.71%) 431.00 (-146.29%)
Range 4 806.00 ( 0.00%) 924.00 (-14.64%) 542.00 ( 32.75%)
Range 5 544.00 ( 0.00%) 438.00 ( 19.49%) 444.00 ( 18.38%)
Range 6 399.00 ( 0.00%) 1111.00 (-178.45%) 528.00 (-32.33%)
Range 7 629.00 ( 0.00%) 895.00 (-42.29%) 467.00 ( 25.76%)
Range 8 400.00 ( 0.00%) 255.00 ( 36.25%) 435.00 ( -8.75%)
Range 12 233.00 ( 0.00%) 108.00 ( 53.65%) 330.00 (-41.63%)
Range 16 141.00 ( 0.00%) 134.00 ( 4.96%) 496.00 (-251.77%)
Stddev 1 73.94 ( 0.00%) 52.33 ( 29.23%) 177.17 (-139.59%)
Stddev 2 23.47 ( 0.00%) 42.08 (-79.24%) 88.91 (-278.74%)
Stddev 3 36.48 ( 0.00%) 29.02 ( 20.45%) 101.07 (-177.05%)
Stddev 4 158.37 ( 0.00%) 133.99 ( 15.40%) 130.52 ( 17.59%)
Stddev 5 116.74 ( 0.00%) 76.76 ( 34.25%) 78.31 ( 32.92%)
Stddev 6 66.34 ( 0.00%) 273.87 (-312.83%) 87.79 (-32.33%)
Stddev 7 145.62 ( 0.00%) 174.99 (-20.16%) 90.52 ( 37.84%)
Stddev 8 68.51 ( 0.00%) 47.58 ( 30.54%) 81.11 (-18.39%)
Stddev 12 32.15 ( 0.00%) 20.18 ( 37.22%) 65.74 (-104.50%)
Stddev 16 21.59 ( 0.00%) 20.29 ( 6.01%) 86.42 (-300.25%)
Patch series shows the strongest performance gain here. Not surprising
considering this was the machine and test that first motivated the
series. 3.4.69 is still a lot better.
ebizzy Thread spread
3.13.0-rc3 3.13.0-rc3 3.4.69
vanilla nowalk-v2r7 vanilla
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.35 ( 12.50%) 0.13 ( 67.50%)
Mean 3 23.73 ( 0.00%) 0.46 ( 98.06%) 0.26 ( 98.90%)
Mean 4 12.79 ( 0.00%) 1.40 ( 89.05%) 0.67 ( 94.76%)
Mean 5 13.08 ( 0.00%) 4.06 ( 68.96%) 0.36 ( 97.25%)
Mean 6 23.21 ( 0.00%) 136.62 (-488.63%) 1.13 ( 95.13%)
Mean 7 15.85 ( 0.00%) 203.46 (-1183.66%) 1.51 ( 90.47%)
Mean 8 109.37 ( 0.00%) 47.75 ( 56.34%) 1.05 ( 99.04%)
Mean 12 124.84 ( 0.00%) 120.55 ( 3.44%) 0.59 ( 99.53%)
Mean 16 113.50 ( 0.00%) 109.60 ( 3.44%) 0.49 ( 99.57%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 3.00 ( 0.00%) 11.00 (-266.67%) 1.00 ( 66.67%)
Range 3 80.00 ( 0.00%) 5.00 ( 93.75%) 1.00 ( 98.75%)
Range 4 38.00 ( 0.00%) 5.00 ( 86.84%) 2.00 ( 94.74%)
Range 5 37.00 ( 0.00%) 21.00 ( 43.24%) 1.00 ( 97.30%)
Range 6 46.00 ( 0.00%) 927.00 (-1915.22%) 8.00 ( 82.61%)
Range 7 28.00 ( 0.00%) 716.00 (-2457.14%) 36.00 (-28.57%)
Range 8 325.00 ( 0.00%) 315.00 ( 3.08%) 26.00 ( 92.00%)
Range 12 160.00 ( 0.00%) 151.00 ( 5.62%) 5.00 ( 96.88%)
Range 16 108.00 ( 0.00%) 123.00 (-13.89%) 1.00 ( 99.07%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.62 ( 0.00%) 1.18 ( 91.08%) 0.34 (-45.44%)
Stddev 3 17.40 ( 0.00%) 0.81 (-95.37%) 0.44 (-97.48%)
Stddev 4 8.52 ( 0.00%) 1.05 (-87.69%) 0.51 (-94.00%)
Stddev 5 7.91 ( 0.00%) 3.94 (-50.20%) 0.48 (-93.93%)
Stddev 6 7.11 ( 0.00%) 174.18 (2348.91%) 1.48 (-79.18%)
Stddev 7 5.90 ( 0.00%) 139.48 (2263.45%) 4.12 (-30.24%)
Stddev 8 80.95 ( 0.00%) 58.03 (-28.32%) 2.65 (-96.72%)
Stddev 12 31.48 ( 0.00%) 33.78 ( 7.30%) 0.66 (-97.89%)
Stddev 16 24.32 ( 0.00%) 26.22 ( 7.79%) 0.50 (-97.94%)
Again, while overall performance is better, the spread of performance
between threads is worse but the fact that there is spread at all is
bad.
So overall to me it looks like the series still stands. The clearest result
was from ebizzy which is an adverse workload in this specific case because
of the size of the TLBs involved. The performance of individual threads
is a big concern but I can bisect for that separately and see what falls out.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-15 18:34 ` Linus Torvalds
@ 2013-12-16 11:16 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-16 11:16 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Sun, Dec 15, 2013 at 10:34:25AM -0800, Linus Torvalds wrote:
> On Sun, Dec 15, 2013 at 7:55 AM, Mel Gorman <mgorman@suse.de> wrote:
> >
> > Short answer -- There appears to be a second bug where 3.13-rc3 is less
> > fair to threads getting time on the CPU.
>
> Hmm. Can you point me at the (fixed) microbenchmark you mention?
>
ebizzy is what I was using to see the per-thread performance. It's at
http://sourceforge.net/projects/ebizzy/. It's patched with the patch below
to give per-thread stats.
You probably want to run it manually but FWIW, the results I posted were
using mmtests (https://github.com/gormanm/mmtests) to build, patch,
run ebizzy and generate the report. The configuration file I used was
configs/config-global-dhp__tlbflush-performance. I have not tried a manual
performance analysis yet as an automated bisection is in progress to see
can the thread spread problem be found the easy way.
diff --git a/ebizzy.c b/ebizzy.c
index 76c7492..3e7644f 100644
--- a/ebizzy.c
+++ b/ebizzy.c
@@ -83,7 +83,7 @@ static char **hole_mem;
static unsigned int page_size;
static time_t start_time;
static volatile int threads_go;
-static unsigned int records_read;
+static unsigned int *thread_records_read;
static void
usage(void)
@@ -436,6 +436,7 @@ search_mem(void)
static void *
thread_run(void *arg)
{
+ unsigned int *records = (unsigned int *)arg;
if (verbose > 1)
printf("Thread started\n");
@@ -444,7 +445,7 @@ thread_run(void *arg)
while (threads_go == 0);
- records_read += search_mem();
+ *records = search_mem();
if (verbose > 1)
printf("Thread finished, %f seconds\n",
@@ -471,12 +472,19 @@ start_threads(void)
struct rusage start_ru, end_ru;
struct timeval usr_time, sys_time;
int err;
+ unsigned int total_records = 0;
if (verbose)
printf("Threads starting\n");
+ thread_records_read = calloc(threads, sizeof(unsigned int));
+ if (!thread_records_read) {
+ fprintf(stderr, "Error allocating thread_records_read\n");
+ exit(1);
+ }
+
for (i = 0; i < threads; i++) {
- err = pthread_create(&thread_array[i], NULL, thread_run, NULL);
+ err = pthread_create(&thread_array[i], NULL, thread_run, &thread_records_read[i]);
if (err) {
fprintf(stderr, "Error creating thread %d\n", i);
exit(1);
@@ -505,13 +513,21 @@ start_threads(void)
fprintf(stderr, "Error joining thread %d\n", i);
exit(1);
}
+ total_records += thread_records_read[i];
}
if (verbose)
printf("Threads finished\n");
- printf("%u records/s\n",
- (unsigned int) (((double) records_read)/elapsed));
+ printf("%u records/s",
+ (unsigned int) (((double) total_records)/elapsed));
+
+ for (i = 0; i < threads; i++) {
+ printf(" %u", (unsigned int) (((double) thread_records_read[i])/elapsed));
+ }
+ printf("\n");
+
+ free(thread_records_read);
usr_time = difftimeval(&end_ru.ru_utime, &start_ru.ru_utime);
sys_time = difftimeval(&end_ru.ru_stime, &start_ru.ru_stime);
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-16 11:16 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-16 11:16 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alex Shi, Ingo Molnar, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Sun, Dec 15, 2013 at 10:34:25AM -0800, Linus Torvalds wrote:
> On Sun, Dec 15, 2013 at 7:55 AM, Mel Gorman <mgorman@suse.de> wrote:
> >
> > Short answer -- There appears to be a second bug where 3.13-rc3 is less
> > fair to threads getting time on the CPU.
>
> Hmm. Can you point me at the (fixed) microbenchmark you mention?
>
ebizzy is what I was using to see the per-thread performance. It's at
http://sourceforge.net/projects/ebizzy/. It's patched with the patch below
to give per-thread stats.
You probably want to run it manually but FWIW, the results I posted were
using mmtests (https://github.com/gormanm/mmtests) to build, patch,
run ebizzy and generate the report. The configuration file I used was
configs/config-global-dhp__tlbflush-performance. I have not tried a manual
performance analysis yet as an automated bisection is in progress to see
can the thread spread problem be found the easy way.
diff --git a/ebizzy.c b/ebizzy.c
index 76c7492..3e7644f 100644
--- a/ebizzy.c
+++ b/ebizzy.c
@@ -83,7 +83,7 @@ static char **hole_mem;
static unsigned int page_size;
static time_t start_time;
static volatile int threads_go;
-static unsigned int records_read;
+static unsigned int *thread_records_read;
static void
usage(void)
@@ -436,6 +436,7 @@ search_mem(void)
static void *
thread_run(void *arg)
{
+ unsigned int *records = (unsigned int *)arg;
if (verbose > 1)
printf("Thread started\n");
@@ -444,7 +445,7 @@ thread_run(void *arg)
while (threads_go == 0);
- records_read += search_mem();
+ *records = search_mem();
if (verbose > 1)
printf("Thread finished, %f seconds\n",
@@ -471,12 +472,19 @@ start_threads(void)
struct rusage start_ru, end_ru;
struct timeval usr_time, sys_time;
int err;
+ unsigned int total_records = 0;
if (verbose)
printf("Threads starting\n");
+ thread_records_read = calloc(threads, sizeof(unsigned int));
+ if (!thread_records_read) {
+ fprintf(stderr, "Error allocating thread_records_read\n");
+ exit(1);
+ }
+
for (i = 0; i < threads; i++) {
- err = pthread_create(&thread_array[i], NULL, thread_run, NULL);
+ err = pthread_create(&thread_array[i], NULL, thread_run, &thread_records_read[i]);
if (err) {
fprintf(stderr, "Error creating thread %d\n", i);
exit(1);
@@ -505,13 +513,21 @@ start_threads(void)
fprintf(stderr, "Error joining thread %d\n", i);
exit(1);
}
+ total_records += thread_records_read[i];
}
if (verbose)
printf("Threads finished\n");
- printf("%u records/s\n",
- (unsigned int) (((double) records_read)/elapsed));
+ printf("%u records/s",
+ (unsigned int) (((double) total_records)/elapsed));
+
+ for (i = 0; i < threads; i++) {
+ printf(" %u", (unsigned int) (((double) thread_records_read[i])/elapsed));
+ }
+ printf("\n");
+
+ free(thread_records_read);
usr_time = difftimeval(&end_ru.ru_utime, &start_ru.ru_utime);
sys_time = difftimeval(&end_ru.ru_stime, &start_ru.ru_stime);
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-16 10:24 ` Ingo Molnar
@ 2013-12-16 12:59 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-16 12:59 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Mon, Dec 16, 2013 at 11:24:39AM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > I had hacked ebizzy to report on the performance of each thread, not
> > just the overall result and worked out the difference in performance
> > of each thread. In a complete fair test you would expect the
> > performance of each thread to be identical and so the spread would
> > be 0
> >
> > ebizzy thread spread
> > 3.13.0-rc3 3.13.0-rc3 3.4.69
> > vanilla nowalk-v2r7 vanilla
> > Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> > Mean 2 0.34 ( 0.00%) 0.30 (-11.76%) 0.07 (-79.41%)
> > Mean 3 1.29 ( 0.00%) 0.92 (-28.68%) 0.29 (-77.52%)
> > Mean 4 7.08 ( 0.00%) 42.38 (498.59%) 0.22 (-96.89%)
> > Mean 5 193.54 ( 0.00%) 483.41 (149.77%) 0.41 (-99.79%)
> > Mean 6 151.12 ( 0.00%) 198.22 ( 31.17%) 0.42 (-99.72%)
> > Mean 7 115.38 ( 0.00%) 160.29 ( 38.92%) 0.58 (-99.50%)
> > Mean 8 108.65 ( 0.00%) 138.96 ( 27.90%) 0.44 (-99.60%)
> > Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> > Range 2 5.00 ( 0.00%) 6.00 ( 20.00%) 2.00 (-60.00%)
> > Range 3 10.00 ( 0.00%) 17.00 ( 70.00%) 9.00 (-10.00%)
> > Range 4 256.00 ( 0.00%) 1001.00 (291.02%) 5.00 (-98.05%)
> > Range 5 456.00 ( 0.00%) 1226.00 (168.86%) 6.00 (-98.68%)
> > Range 6 298.00 ( 0.00%) 294.00 ( -1.34%) 8.00 (-97.32%)
> > Range 7 192.00 ( 0.00%) 220.00 ( 14.58%) 7.00 (-96.35%)
> > Range 8 171.00 ( 0.00%) 163.00 ( -4.68%) 8.00 (-95.32%)
> > Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> > Stddev 2 0.72 ( 0.00%) 0.85 (-17.99%) 0.29 ( 59.72%)
> > Stddev 3 1.42 ( 0.00%) 1.90 (-34.22%) 1.12 ( 21.19%)
> > Stddev 4 33.83 ( 0.00%) 127.26 (-276.15%) 0.79 ( 97.65%)
> > Stddev 5 92.08 ( 0.00%) 225.01 (-144.35%) 1.06 ( 98.85%)
> > Stddev 6 64.82 ( 0.00%) 69.43 ( -7.11%) 1.28 ( 98.02%)
> > Stddev 7 36.66 ( 0.00%) 49.19 (-34.20%) 1.18 ( 96.79%)
> > Stddev 8 30.79 ( 0.00%) 36.23 (-17.64%) 1.06 ( 96.55%)
> >
> > For example, this is saying that with 8 threads on 3.13-rc3 that the
> > difference between the slowest and fastest thread was 171
> > records/second.
>
> We aren't blind fairness fetishists, but the noise difference between
> v3.4 and v3.13 appears to be staggering, it's a serious anomaly in
> itself.
>
Agreed.
> Whatever we did right in v3.4 we want to do in v3.13 as well - or at
> least understand it.
>
Also agreed. I started a bisection before answering this mail. It would
be cooler and potentially faster to figure it out from direct analysis
but bisection is reliable and less guesswork.
> I agree that the absolute numbers would probably only be interesting
> once v3.13 is fixed to not spread thread performance that wildly
> again.
>
> > [...] Because of this bug, I'd be wary about drawing too many
> > conclusions about ebizzy performance when the number of threads
> > exceed the number of CPUs.
>
> Yes.
>
> Could it be that the v3.13 workload context switches a lot more than
> v3.4 workload?
The opposite. 3.13 context switches and interrupts less.
> That would magnify any TLB range flushing costs and
> would make it essentially a secondary symptom, not a primary cause of
> the regression. (I'm only guessing blindly here though.)
>
Fortunately, I had collected data on context switches
4 core machine: http://www.csn.ul.ie/~mel/postings/spread-20131216/global-ebizzy/ivor/report.html
8 core machine: http://www.csn.ul.ie/~mel/postings/spread-20131216/global-ebizzy/ivy/report.html
The ebizzy results are at the end. One of the graphs are for context
switches as measured by vmstat running during the test.
In both cases you can see that context switches are higher for 3.4 as
are interrupts. The difference in context switches are why I thought this
might be scheduler related but the difference in interrupts was harder to
explain. I'm guessing they're IPIs but did not record /proc/interrupts
to answer that. I lack familiarity with scheduler changes between 3.4
and 3.13-rc4 and have no intuitive feeling for when this might have been
introduced. I'm also not sure if we used to do anything like send IPIs
to reschedule tasks or balance tasks between idle cores that changed
recently. There was also a truckload of nohz changes in that window that
I'm not familiar with that are potentially responsible. Should have
answers soon enough.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-16 12:59 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-16 12:59 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Mon, Dec 16, 2013 at 11:24:39AM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > I had hacked ebizzy to report on the performance of each thread, not
> > just the overall result and worked out the difference in performance
> > of each thread. In a complete fair test you would expect the
> > performance of each thread to be identical and so the spread would
> > be 0
> >
> > ebizzy thread spread
> > 3.13.0-rc3 3.13.0-rc3 3.4.69
> > vanilla nowalk-v2r7 vanilla
> > Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> > Mean 2 0.34 ( 0.00%) 0.30 (-11.76%) 0.07 (-79.41%)
> > Mean 3 1.29 ( 0.00%) 0.92 (-28.68%) 0.29 (-77.52%)
> > Mean 4 7.08 ( 0.00%) 42.38 (498.59%) 0.22 (-96.89%)
> > Mean 5 193.54 ( 0.00%) 483.41 (149.77%) 0.41 (-99.79%)
> > Mean 6 151.12 ( 0.00%) 198.22 ( 31.17%) 0.42 (-99.72%)
> > Mean 7 115.38 ( 0.00%) 160.29 ( 38.92%) 0.58 (-99.50%)
> > Mean 8 108.65 ( 0.00%) 138.96 ( 27.90%) 0.44 (-99.60%)
> > Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> > Range 2 5.00 ( 0.00%) 6.00 ( 20.00%) 2.00 (-60.00%)
> > Range 3 10.00 ( 0.00%) 17.00 ( 70.00%) 9.00 (-10.00%)
> > Range 4 256.00 ( 0.00%) 1001.00 (291.02%) 5.00 (-98.05%)
> > Range 5 456.00 ( 0.00%) 1226.00 (168.86%) 6.00 (-98.68%)
> > Range 6 298.00 ( 0.00%) 294.00 ( -1.34%) 8.00 (-97.32%)
> > Range 7 192.00 ( 0.00%) 220.00 ( 14.58%) 7.00 (-96.35%)
> > Range 8 171.00 ( 0.00%) 163.00 ( -4.68%) 8.00 (-95.32%)
> > Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
> > Stddev 2 0.72 ( 0.00%) 0.85 (-17.99%) 0.29 ( 59.72%)
> > Stddev 3 1.42 ( 0.00%) 1.90 (-34.22%) 1.12 ( 21.19%)
> > Stddev 4 33.83 ( 0.00%) 127.26 (-276.15%) 0.79 ( 97.65%)
> > Stddev 5 92.08 ( 0.00%) 225.01 (-144.35%) 1.06 ( 98.85%)
> > Stddev 6 64.82 ( 0.00%) 69.43 ( -7.11%) 1.28 ( 98.02%)
> > Stddev 7 36.66 ( 0.00%) 49.19 (-34.20%) 1.18 ( 96.79%)
> > Stddev 8 30.79 ( 0.00%) 36.23 (-17.64%) 1.06 ( 96.55%)
> >
> > For example, this is saying that with 8 threads on 3.13-rc3 that the
> > difference between the slowest and fastest thread was 171
> > records/second.
>
> We aren't blind fairness fetishists, but the noise difference between
> v3.4 and v3.13 appears to be staggering, it's a serious anomaly in
> itself.
>
Agreed.
> Whatever we did right in v3.4 we want to do in v3.13 as well - or at
> least understand it.
>
Also agreed. I started a bisection before answering this mail. It would
be cooler and potentially faster to figure it out from direct analysis
but bisection is reliable and less guesswork.
> I agree that the absolute numbers would probably only be interesting
> once v3.13 is fixed to not spread thread performance that wildly
> again.
>
> > [...] Because of this bug, I'd be wary about drawing too many
> > conclusions about ebizzy performance when the number of threads
> > exceed the number of CPUs.
>
> Yes.
>
> Could it be that the v3.13 workload context switches a lot more than
> v3.4 workload?
The opposite. 3.13 context switches and interrupts less.
> That would magnify any TLB range flushing costs and
> would make it essentially a secondary symptom, not a primary cause of
> the regression. (I'm only guessing blindly here though.)
>
Fortunately, I had collected data on context switches
4 core machine: http://www.csn.ul.ie/~mel/postings/spread-20131216/global-ebizzy/ivor/report.html
8 core machine: http://www.csn.ul.ie/~mel/postings/spread-20131216/global-ebizzy/ivy/report.html
The ebizzy results are at the end. One of the graphs are for context
switches as measured by vmstat running during the test.
In both cases you can see that context switches are higher for 3.4 as
are interrupts. The difference in context switches are why I thought this
might be scheduler related but the difference in interrupts was harder to
explain. I'm guessing they're IPIs but did not record /proc/interrupts
to answer that. I lack familiarity with scheduler changes between 3.4
and 3.13-rc4 and have no intuitive feeling for when this might have been
introduced. I'm also not sure if we used to do anything like send IPIs
to reschedule tasks or balance tasks between idle cores that changed
recently. There was also a truckload of nohz changes in that window that
I'm not familiar with that are potentially responsible. Should have
answers soon enough.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-16 12:59 ` Mel Gorman
@ 2013-12-16 13:44 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-16 13:44 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> > Whatever we did right in v3.4 we want to do in v3.13 as well - or
> > at least understand it.
>
> Also agreed. I started a bisection before answering this mail. It
> would be cooler and potentially faster to figure it out from direct
> analysis but bisection is reliable and less guesswork.
Trying to guess can potentially last a _lot_ longer than a generic,
no-assumptions bisection ...
The symptoms could point to anything: scheduler, locking details, some
stupid little change in a wakeup sequence somewhere, etc.
It might even be a non-deterministic effect of some timing change
causing the workload 'just' to avoid a common point of preemption and
not scheduling as much - and become more unfair and thus certain
threads lasting longer to finish.
Does the benchmark execute a fixed amount of transactions per thread?
That might artificially increase the numeric regression: with more
threads it 'magnifies' any unfairness effects because slower threads
will become slower, faster threads will become faster, as the thread
count increases.
[ That in itself is somewhat artificial, because real workloads tend
to balance between threads dynamically and don't insist on keeping
the fastest threads idle near the end of a run. It does not
invalidate the complaint about the unfairness itself, obviously. ]
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-16 13:44 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-16 13:44 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> > Whatever we did right in v3.4 we want to do in v3.13 as well - or
> > at least understand it.
>
> Also agreed. I started a bisection before answering this mail. It
> would be cooler and potentially faster to figure it out from direct
> analysis but bisection is reliable and less guesswork.
Trying to guess can potentially last a _lot_ longer than a generic,
no-assumptions bisection ...
The symptoms could point to anything: scheduler, locking details, some
stupid little change in a wakeup sequence somewhere, etc.
It might even be a non-deterministic effect of some timing change
causing the workload 'just' to avoid a common point of preemption and
not scheduling as much - and become more unfair and thus certain
threads lasting longer to finish.
Does the benchmark execute a fixed amount of transactions per thread?
That might artificially increase the numeric regression: with more
threads it 'magnifies' any unfairness effects because slower threads
will become slower, faster threads will become faster, as the thread
count increases.
[ That in itself is somewhat artificial, because real workloads tend
to balance between threads dynamically and don't insist on keeping
the fastest threads idle near the end of a run. It does not
invalidate the complaint about the unfairness itself, obviously. ]
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-16 10:39 ` Mel Gorman
@ 2013-12-16 17:17 ` Linus Torvalds
-1 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2013-12-16 17:17 UTC (permalink / raw)
To: Mel Gorman
Cc: H. Peter Anvin, Alex Shi, Ingo Molnar, Thomas Gleixner,
Andrew Morton, Fengguang Wu, Linux-X86, Linux-MM, LKML
On Mon, Dec 16, 2013 at 2:39 AM, Mel Gorman <mgorman@suse.de> wrote:
>
> First was Alex's microbenchmark from https://lkml.org/lkml/2012/5/17/59
> and ran it for a range of thread numbers, 320 iterations per thread with
> random number of entires to flush. Results are from two machines
There's something wrong with that benchmark, it sometimes gets stuck,
and the profile numbers are just random (and mostly in user space).
I think you mentioned fixing a bug in it, mind pointing at the fixed benchmark?
Looking at the kernel footprint, it seems to depend on what parameters
you ran that benchmark with. Under certain loads, it seems to spend
most of the time in clearing pages and in the page allocation ("-t 8
-n 320"). And in other loads, it hits smp_call_function_many() and the
TLB flushers ("-t 8 -n 8"). So exactly what parameters did you use?
Because we've had things that change those two things (and they are
totally independent).
And does anything stand out in the profiles of ebizzy? For example, in
between 3.4.x and 3.11, we've converted the anon_vma locking from a
mutex to a rwsem, and we know that caused several issues, possibly
causing unfairness. There are other potential sources of unfairness.
It would be good to perhaps bisect things at least *somewhat*, because
*so* much has changed in 3.4 to 3.11 that it's impossible to guess.
Linus
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-16 17:17 ` Linus Torvalds
0 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2013-12-16 17:17 UTC (permalink / raw)
To: Mel Gorman
Cc: H. Peter Anvin, Alex Shi, Ingo Molnar, Thomas Gleixner,
Andrew Morton, Fengguang Wu, Linux-X86, Linux-MM, LKML
On Mon, Dec 16, 2013 at 2:39 AM, Mel Gorman <mgorman@suse.de> wrote:
>
> First was Alex's microbenchmark from https://lkml.org/lkml/2012/5/17/59
> and ran it for a range of thread numbers, 320 iterations per thread with
> random number of entires to flush. Results are from two machines
There's something wrong with that benchmark, it sometimes gets stuck,
and the profile numbers are just random (and mostly in user space).
I think you mentioned fixing a bug in it, mind pointing at the fixed benchmark?
Looking at the kernel footprint, it seems to depend on what parameters
you ran that benchmark with. Under certain loads, it seems to spend
most of the time in clearing pages and in the page allocation ("-t 8
-n 320"). And in other loads, it hits smp_call_function_many() and the
TLB flushers ("-t 8 -n 8"). So exactly what parameters did you use?
Because we've had things that change those two things (and they are
totally independent).
And does anything stand out in the profiles of ebizzy? For example, in
between 3.4.x and 3.11, we've converted the anon_vma locking from a
mutex to a rwsem, and we know that caused several issues, possibly
causing unfairness. There are other potential sources of unfairness.
It would be good to perhaps bisect things at least *somewhat*, because
*so* much has changed in 3.4 to 3.11 that it's impossible to guess.
Linus
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-16 13:44 ` Ingo Molnar
@ 2013-12-17 9:21 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 9:21 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Mon, Dec 16, 2013 at 02:44:49PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > > Whatever we did right in v3.4 we want to do in v3.13 as well - or
> > > at least understand it.
> >
> > Also agreed. I started a bisection before answering this mail. It
> > would be cooler and potentially faster to figure it out from direct
> > analysis but bisection is reliable and less guesswork.
>
> Trying to guess can potentially last a _lot_ longer than a generic,
> no-assumptions bisection ...
>
Indeed. In this case, it would have taken me a while to find the correct
problem because I would consider the affected area to be relatively stable.
> <SNIP>
>
> Does the benchmark execute a fixed amount of transactions per thread?
>
Yes.
> That might artificially increase the numeric regression: with more
> threads it 'magnifies' any unfairness effects because slower threads
> will become slower, faster threads will become faster, as the thread
> count increases.
>
> [ That in itself is somewhat artificial, because real workloads tend
> to balance between threads dynamically and don't insist on keeping
> the fastest threads idle near the end of a run. It does not
> invalidate the complaint about the unfairness itself, obviously. ]
>
I was wrong about fairness. The first bisection found that cache hotness
was a more important factor due to a small mistake made in 3.13-rc1
---8<---
sched: Assign correct scheduling domain to sd_llc
Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
dereference on sd_busy but the fix also altered what scheduling domain it
used for sd_llc. One impact of this is that a task selecting a runqueue may
consider idle CPUs that are not cache siblings as candidates for running.
Tasks are then running on CPUs that are not cache hot.
This was found through bisection where ebizzy threads were not seeing equal
performance and it looked like a scheduling fairness issue. This patch
mitigates but does not completely fix the problem on all machines tested
implying there may be an additional bug or a common root cause. Here are
the average range of performance seen by individual ebizzy threads. It
was tested on top of candidate patches related to x86 TLB range flushing.
4-core machine
3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r3
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.10 ( 70.59%)
Mean 3 1.29 ( 0.00%) 0.93 ( 27.91%)
Mean 4 7.08 ( 0.00%) 0.77 ( 89.12%)
Mean 5 193.54 ( 0.00%) 2.14 ( 98.89%)
Mean 6 151.12 ( 0.00%) 2.06 ( 98.64%)
Mean 7 115.38 ( 0.00%) 2.04 ( 98.23%)
Mean 8 108.65 ( 0.00%) 1.92 ( 98.23%)
8-core machine
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.21 ( 47.50%)
Mean 3 23.73 ( 0.00%) 0.89 ( 96.25%)
Mean 4 12.79 ( 0.00%) 1.04 ( 91.87%)
Mean 5 13.08 ( 0.00%) 2.42 ( 81.50%)
Mean 6 23.21 ( 0.00%) 69.46 (-199.27%)
Mean 7 15.85 ( 0.00%) 101.72 (-541.77%)
Mean 8 109.37 ( 0.00%) 19.13 ( 82.51%)
Mean 12 124.84 ( 0.00%) 28.62 ( 77.07%)
Mean 16 113.50 ( 0.00%) 24.16 ( 78.71%)
It's eliminated for one machine and reduced for another.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
kernel/sched/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e85cda2..a848254 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
static void update_top_cache_domain(int cpu)
{
struct sched_domain *sd;
+ struct sched_domain *busy_sd = NULL;
int id = cpu;
int size = 1;
@@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
if (sd) {
id = cpumask_first(sched_domain_span(sd));
size = cpumask_weight(sched_domain_span(sd));
- sd = sd->parent; /* sd_busy */
+ busy_sd = sd->parent; /* sd_busy */
}
- rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
+ rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
per_cpu(sd_llc_size, cpu) = size;
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 9:21 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 9:21 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Mon, Dec 16, 2013 at 02:44:49PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > > Whatever we did right in v3.4 we want to do in v3.13 as well - or
> > > at least understand it.
> >
> > Also agreed. I started a bisection before answering this mail. It
> > would be cooler and potentially faster to figure it out from direct
> > analysis but bisection is reliable and less guesswork.
>
> Trying to guess can potentially last a _lot_ longer than a generic,
> no-assumptions bisection ...
>
Indeed. In this case, it would have taken me a while to find the correct
problem because I would consider the affected area to be relatively stable.
> <SNIP>
>
> Does the benchmark execute a fixed amount of transactions per thread?
>
Yes.
> That might artificially increase the numeric regression: with more
> threads it 'magnifies' any unfairness effects because slower threads
> will become slower, faster threads will become faster, as the thread
> count increases.
>
> [ That in itself is somewhat artificial, because real workloads tend
> to balance between threads dynamically and don't insist on keeping
> the fastest threads idle near the end of a run. It does not
> invalidate the complaint about the unfairness itself, obviously. ]
>
I was wrong about fairness. The first bisection found that cache hotness
was a more important factor due to a small mistake made in 3.13-rc1
---8<---
sched: Assign correct scheduling domain to sd_llc
Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
dereference on sd_busy but the fix also altered what scheduling domain it
used for sd_llc. One impact of this is that a task selecting a runqueue may
consider idle CPUs that are not cache siblings as candidates for running.
Tasks are then running on CPUs that are not cache hot.
This was found through bisection where ebizzy threads were not seeing equal
performance and it looked like a scheduling fairness issue. This patch
mitigates but does not completely fix the problem on all machines tested
implying there may be an additional bug or a common root cause. Here are
the average range of performance seen by individual ebizzy threads. It
was tested on top of candidate patches related to x86 TLB range flushing.
4-core machine
3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r3
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.10 ( 70.59%)
Mean 3 1.29 ( 0.00%) 0.93 ( 27.91%)
Mean 4 7.08 ( 0.00%) 0.77 ( 89.12%)
Mean 5 193.54 ( 0.00%) 2.14 ( 98.89%)
Mean 6 151.12 ( 0.00%) 2.06 ( 98.64%)
Mean 7 115.38 ( 0.00%) 2.04 ( 98.23%)
Mean 8 108.65 ( 0.00%) 1.92 ( 98.23%)
8-core machine
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.21 ( 47.50%)
Mean 3 23.73 ( 0.00%) 0.89 ( 96.25%)
Mean 4 12.79 ( 0.00%) 1.04 ( 91.87%)
Mean 5 13.08 ( 0.00%) 2.42 ( 81.50%)
Mean 6 23.21 ( 0.00%) 69.46 (-199.27%)
Mean 7 15.85 ( 0.00%) 101.72 (-541.77%)
Mean 8 109.37 ( 0.00%) 19.13 ( 82.51%)
Mean 12 124.84 ( 0.00%) 28.62 ( 77.07%)
Mean 16 113.50 ( 0.00%) 24.16 ( 78.71%)
It's eliminated for one machine and reduced for another.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
kernel/sched/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e85cda2..a848254 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
static void update_top_cache_domain(int cpu)
{
struct sched_domain *sd;
+ struct sched_domain *busy_sd = NULL;
int id = cpu;
int size = 1;
@@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
if (sd) {
id = cpumask_first(sched_domain_span(sd));
size = cpumask_weight(sched_domain_span(sd));
- sd = sd->parent; /* sd_busy */
+ busy_sd = sd->parent; /* sd_busy */
}
- rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
+ rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
per_cpu(sd_llc_size, cpu) = size;
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 9:21 ` Mel Gorman
@ 2013-12-17 9:26 ` Peter Zijlstra
-1 siblings, 0 replies; 71+ messages in thread
From: Peter Zijlstra @ 2013-12-17 9:26 UTC (permalink / raw)
To: Mel Gorman
Cc: Ingo Molnar, Linus Torvalds, Alex Shi, Thomas Gleixner,
Andrew Morton, Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM,
LKML
On Tue, Dec 17, 2013 at 09:21:25AM +0000, Mel Gorman wrote:
> if (sd) {
> id = cpumask_first(sched_domain_span(sd));
> size = cpumask_weight(sched_domain_span(sd));
> - sd = sd->parent; /* sd_busy */
> + busy_sd = sd->parent; /* sd_busy */
> }
> - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
> + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
Argh, so much for paying attention :/
Thanks!
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 9:26 ` Peter Zijlstra
0 siblings, 0 replies; 71+ messages in thread
From: Peter Zijlstra @ 2013-12-17 9:26 UTC (permalink / raw)
To: Mel Gorman
Cc: Ingo Molnar, Linus Torvalds, Alex Shi, Thomas Gleixner,
Andrew Morton, Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM,
LKML
On Tue, Dec 17, 2013 at 09:21:25AM +0000, Mel Gorman wrote:
> if (sd) {
> id = cpumask_first(sched_domain_span(sd));
> size = cpumask_weight(sched_domain_span(sd));
> - sd = sd->parent; /* sd_busy */
> + busy_sd = sd->parent; /* sd_busy */
> }
> - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
> + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
Argh, so much for paying attention :/
Thanks!
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-16 17:17 ` Linus Torvalds
@ 2013-12-17 9:55 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 9:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: H. Peter Anvin, Alex Shi, Ingo Molnar, Thomas Gleixner,
Andrew Morton, Fengguang Wu, Linux-X86, Linux-MM, LKML
On Mon, Dec 16, 2013 at 09:17:35AM -0800, Linus Torvalds wrote:
> On Mon, Dec 16, 2013 at 2:39 AM, Mel Gorman <mgorman@suse.de> wrote:
> >
> > First was Alex's microbenchmark from https://lkml.org/lkml/2012/5/17/59
> > and ran it for a range of thread numbers, 320 iterations per thread with
> > random number of entires to flush. Results are from two machines
>
> There's something wrong with that benchmark, it sometimes gets stuck,
It's not a thread-safe benchmark. The parent unmapping thread can finish
before the children start and it infinite loops.
> and the profile numbers are just random (and mostly in user space).
>
Yep, it's why when I used it I ran a large number of iterations with
semi-randomised number of entries trying to knock some sense out of it.
I was hoping that the Intel folk might come back with more details on
what their testing methodology was.
> I think you mentioned fixing a bug in it, mind pointing at the fixed benchmark?
>
Ugh, I'm embarassed by this. I did not properly fix the benchmark, just
bodged around the part that can lockup. Patch is below. Actual testing was
run using mmtests with the configs/config-global-dhp__tlbflush-performance
configuration file using something like this
# build boot kernel 1
./run-mmtests.sh --run-monitor --config configs/config-global-dhp__tlbflush-performance test-kernel-1
# build boot kernel 2
./run-mmtests.sh --run-monitor --config configs/config-global-dhp__tlbflush-performance test-kernel-2
cd work/log
../../compare-kernels.sh
> Looking at the kernel footprint, it seems to depend on what parameters
> you ran that benchmark with. Under certain loads, it seems to spend
> most of the time in clearing pages and in the page allocation ("-t 8
> -n 320"). And in other loads, it hits smp_call_function_many() and the
> TLB flushers ("-t 8 -n 8"). So exactly what parameters did you use?
>
A range of parameters. The test effectively does this
TLBFLUSH_MAX_ENTRIES=256
for_each_thread_count
for iteration in `seq 1 320`
# Select a range of entries to randomly select from. This is to ensure
# an evenish spread of entries to be tested
NR_SECTION=$((ITERATION%8))
RANGE=$((TLBFLUSH_MAX_ENTRIES/8))
THIS_MIN_ENTRIES=$((RANGE*NR_SECTION+1))
THIS_MAX_ENTRIES=$((THIS_MIN_ENTRIES+RANGE))
NR_ENTRIES=$((THIS_MIN_ENTRIES+(RANDOM%RANGE)))
if [ $NR_ENTRIES -gt $THIS_MAX_ENTRIES ]; then
NR_ENTRIES=$THIS_MAX_ENTRIES
fi
RESULT=`tlbflush -n $NR_ENTRIES -t $NR_THREADS 2>&1`
done
done
It splits the values for nr_entries (-n switch) into 8 segments and randomly
selects values within them. This results in noise but ensures the test hits
the best, average and worst cases for TLB range flushing. Writing this,
I realise I should have made MAX_ENTRIES 512 to hit the original shift
values. The original mail indicated that this test was run once for a very
limited number of threads and entries and I really hope this is not what
actually happened to tune that shift value.
> Because we've had things that change those two things (and they are
> totally independent).
>
Indeed and tuning on specifics would be a bad idea -- hence why my
testing took a randomised selection of ranges to test with and a large
number of iterations.
> And does anything stand out in the profiles of ebizzy? For example, in
> between 3.4.x and 3.11, we've converted the anon_vma locking from a
> mutex to a rwsem, and we know that caused several issues, possibly
> causing unfairness. There are other potential sources of unfairness.
> It would be good to perhaps bisect things at least *somewhat*, because
> *so* much has changed in 3.4 to 3.11 that it's impossible to guess.
>
I'll check. Right now, the machines are still occupied running bisections
which is still finding bugs. When that has found the obvious stuff, I'll use
profiles to identify what's left. FWIW, I would be surprised if ebizzy was
affected by the anon_vma locking. I do not think the threads are operating
within the same VMAs in a manner that would contend on those locks. If there
is a lock being contended, it's going to be on mmap_sem for creating mappings
just slightly larger than MMAP_THRESHOLD. Guessing though, not proven.
This is bodge that stops Alex's benchmark locking up. It's the wrong way to
fix a problem like this. I was not even convinced this benchmark was useful
to begin with and was unmotivated to spending time on fixing it up properly.
--- tlbflush.c.orig 2013-12-15 11:05:08.813821030 +0000
+++ tlbflush.c 2013-12-15 11:04:46.504926426 +0000
@@ -67,13 +67,17 @@
char x;
int i, k;
int randn[PAGE_SIZE];
+ int count = 0;
for (i=0;i<PAGE_SIZE; i++)
randn[i] = rand();
actimes = malloc(sizeof(long));
- while (*threadstart == 0 )
+ while (*threadstart == 0) {
+ if (++count > 1000000)
+ break;
usleep(1);
+ }
if (d->rw == 0)
@@ -180,6 +181,7 @@
threadstart = malloc(sizeof(int));
*threadstart = 0;
data.readp = &p; data.startaddr = startaddr; data.rw = rw; data.loop = l;
+ sleep(1);
for (i=0; i< t; i++)
if(pthread_create(&pid[i], NULL, accessmm, &data))
perror("pthread create");
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 9:55 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 9:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: H. Peter Anvin, Alex Shi, Ingo Molnar, Thomas Gleixner,
Andrew Morton, Fengguang Wu, Linux-X86, Linux-MM, LKML
On Mon, Dec 16, 2013 at 09:17:35AM -0800, Linus Torvalds wrote:
> On Mon, Dec 16, 2013 at 2:39 AM, Mel Gorman <mgorman@suse.de> wrote:
> >
> > First was Alex's microbenchmark from https://lkml.org/lkml/2012/5/17/59
> > and ran it for a range of thread numbers, 320 iterations per thread with
> > random number of entires to flush. Results are from two machines
>
> There's something wrong with that benchmark, it sometimes gets stuck,
It's not a thread-safe benchmark. The parent unmapping thread can finish
before the children start and it infinite loops.
> and the profile numbers are just random (and mostly in user space).
>
Yep, it's why when I used it I ran a large number of iterations with
semi-randomised number of entries trying to knock some sense out of it.
I was hoping that the Intel folk might come back with more details on
what their testing methodology was.
> I think you mentioned fixing a bug in it, mind pointing at the fixed benchmark?
>
Ugh, I'm embarassed by this. I did not properly fix the benchmark, just
bodged around the part that can lockup. Patch is below. Actual testing was
run using mmtests with the configs/config-global-dhp__tlbflush-performance
configuration file using something like this
# build boot kernel 1
./run-mmtests.sh --run-monitor --config configs/config-global-dhp__tlbflush-performance test-kernel-1
# build boot kernel 2
./run-mmtests.sh --run-monitor --config configs/config-global-dhp__tlbflush-performance test-kernel-2
cd work/log
../../compare-kernels.sh
> Looking at the kernel footprint, it seems to depend on what parameters
> you ran that benchmark with. Under certain loads, it seems to spend
> most of the time in clearing pages and in the page allocation ("-t 8
> -n 320"). And in other loads, it hits smp_call_function_many() and the
> TLB flushers ("-t 8 -n 8"). So exactly what parameters did you use?
>
A range of parameters. The test effectively does this
TLBFLUSH_MAX_ENTRIES=256
for_each_thread_count
for iteration in `seq 1 320`
# Select a range of entries to randomly select from. This is to ensure
# an evenish spread of entries to be tested
NR_SECTION=$((ITERATION%8))
RANGE=$((TLBFLUSH_MAX_ENTRIES/8))
THIS_MIN_ENTRIES=$((RANGE*NR_SECTION+1))
THIS_MAX_ENTRIES=$((THIS_MIN_ENTRIES+RANGE))
NR_ENTRIES=$((THIS_MIN_ENTRIES+(RANDOM%RANGE)))
if [ $NR_ENTRIES -gt $THIS_MAX_ENTRIES ]; then
NR_ENTRIES=$THIS_MAX_ENTRIES
fi
RESULT=`tlbflush -n $NR_ENTRIES -t $NR_THREADS 2>&1`
done
done
It splits the values for nr_entries (-n switch) into 8 segments and randomly
selects values within them. This results in noise but ensures the test hits
the best, average and worst cases for TLB range flushing. Writing this,
I realise I should have made MAX_ENTRIES 512 to hit the original shift
values. The original mail indicated that this test was run once for a very
limited number of threads and entries and I really hope this is not what
actually happened to tune that shift value.
> Because we've had things that change those two things (and they are
> totally independent).
>
Indeed and tuning on specifics would be a bad idea -- hence why my
testing took a randomised selection of ranges to test with and a large
number of iterations.
> And does anything stand out in the profiles of ebizzy? For example, in
> between 3.4.x and 3.11, we've converted the anon_vma locking from a
> mutex to a rwsem, and we know that caused several issues, possibly
> causing unfairness. There are other potential sources of unfairness.
> It would be good to perhaps bisect things at least *somewhat*, because
> *so* much has changed in 3.4 to 3.11 that it's impossible to guess.
>
I'll check. Right now, the machines are still occupied running bisections
which is still finding bugs. When that has found the obvious stuff, I'll use
profiles to identify what's left. FWIW, I would be surprised if ebizzy was
affected by the anon_vma locking. I do not think the threads are operating
within the same VMAs in a manner that would contend on those locks. If there
is a lock being contended, it's going to be on mmap_sem for creating mappings
just slightly larger than MMAP_THRESHOLD. Guessing though, not proven.
This is bodge that stops Alex's benchmark locking up. It's the wrong way to
fix a problem like this. I was not even convinced this benchmark was useful
to begin with and was unmotivated to spending time on fixing it up properly.
--- tlbflush.c.orig 2013-12-15 11:05:08.813821030 +0000
+++ tlbflush.c 2013-12-15 11:04:46.504926426 +0000
@@ -67,13 +67,17 @@
char x;
int i, k;
int randn[PAGE_SIZE];
+ int count = 0;
for (i=0;i<PAGE_SIZE; i++)
randn[i] = rand();
actimes = malloc(sizeof(long));
- while (*threadstart == 0 )
+ while (*threadstart == 0) {
+ if (++count > 1000000)
+ break;
usleep(1);
+ }
if (d->rw == 0)
@@ -180,6 +181,7 @@
threadstart = malloc(sizeof(int));
*threadstart = 0;
data.readp = &p; data.startaddr = startaddr; data.rw = rw; data.loop = l;
+ sleep(1);
for (i=0; i< t; i++)
if(pthread_create(&pid[i], NULL, accessmm, &data))
perror("pthread create");
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 9:21 ` Mel Gorman
@ 2013-12-17 11:00 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-17 11:00 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> On Mon, Dec 16, 2013 at 02:44:49PM +0100, Ingo Molnar wrote:
> >
> > * Mel Gorman <mgorman@suse.de> wrote:
> >
> > > > Whatever we did right in v3.4 we want to do in v3.13 as well - or
> > > > at least understand it.
> > >
> > > Also agreed. I started a bisection before answering this mail. It
> > > would be cooler and potentially faster to figure it out from direct
> > > analysis but bisection is reliable and less guesswork.
> >
> > Trying to guess can potentially last a _lot_ longer than a generic,
> > no-assumptions bisection ...
> >
>
> Indeed. In this case, it would have taken me a while to find the correct
> problem because I would consider the affected area to be relatively stable.
>
> > <SNIP>
> >
> > Does the benchmark execute a fixed amount of transactions per thread?
> >
>
> Yes.
>
> > That might artificially increase the numeric regression: with more
> > threads it 'magnifies' any unfairness effects because slower threads
> > will become slower, faster threads will become faster, as the thread
> > count increases.
> >
> > [ That in itself is somewhat artificial, because real workloads tend
> > to balance between threads dynamically and don't insist on keeping
> > the fastest threads idle near the end of a run. It does not
> > invalidate the complaint about the unfairness itself, obviously. ]
> >
>
> I was wrong about fairness. The first bisection found that cache hotness
> was a more important factor due to a small mistake made in 3.13-rc1
>
> ---8<---
> sched: Assign correct scheduling domain to sd_llc
>
> Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
> dereference on sd_busy but the fix also altered what scheduling domain it
> used for sd_llc. One impact of this is that a task selecting a runqueue may
> consider idle CPUs that are not cache siblings as candidates for running.
> Tasks are then running on CPUs that are not cache hot.
>
> This was found through bisection where ebizzy threads were not seeing equal
> performance and it looked like a scheduling fairness issue. This patch
> mitigates but does not completely fix the problem on all machines tested
> implying there may be an additional bug or a common root cause. Here are
> the average range of performance seen by individual ebizzy threads. It
> was tested on top of candidate patches related to x86 TLB range flushing.
>
> 4-core machine
> 3.13.0-rc3 3.13.0-rc3
> vanilla fixsd-v3r3
> Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
> Mean 2 0.34 ( 0.00%) 0.10 ( 70.59%)
> Mean 3 1.29 ( 0.00%) 0.93 ( 27.91%)
> Mean 4 7.08 ( 0.00%) 0.77 ( 89.12%)
> Mean 5 193.54 ( 0.00%) 2.14 ( 98.89%)
> Mean 6 151.12 ( 0.00%) 2.06 ( 98.64%)
> Mean 7 115.38 ( 0.00%) 2.04 ( 98.23%)
> Mean 8 108.65 ( 0.00%) 1.92 ( 98.23%)
>
> 8-core machine
> Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
> Mean 2 0.40 ( 0.00%) 0.21 ( 47.50%)
> Mean 3 23.73 ( 0.00%) 0.89 ( 96.25%)
> Mean 4 12.79 ( 0.00%) 1.04 ( 91.87%)
> Mean 5 13.08 ( 0.00%) 2.42 ( 81.50%)
> Mean 6 23.21 ( 0.00%) 69.46 (-199.27%)
> Mean 7 15.85 ( 0.00%) 101.72 (-541.77%)
> Mean 8 109.37 ( 0.00%) 19.13 ( 82.51%)
> Mean 12 124.84 ( 0.00%) 28.62 ( 77.07%)
> Mean 16 113.50 ( 0.00%) 24.16 ( 78.71%)
>
> It's eliminated for one machine and reduced for another.
>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> ---
> kernel/sched/core.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e85cda2..a848254 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
> static void update_top_cache_domain(int cpu)
> {
> struct sched_domain *sd;
> + struct sched_domain *busy_sd = NULL;
> int id = cpu;
> int size = 1;
>
> @@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
> if (sd) {
> id = cpumask_first(sched_domain_span(sd));
> size = cpumask_weight(sched_domain_span(sd));
> - sd = sd->parent; /* sd_busy */
> + busy_sd = sd->parent; /* sd_busy */
> }
> - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
> + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
>
> rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> per_cpu(sd_llc_size, cpu) = size;
Indeed that makes a lot of sense, thanks Mel for tracking down this
part of the puzzle! Will get your fix to Linus ASAP.
Does this fix also speed up Ebizzy's transaction performance, or is
its main effect a reduction in workload variation noise?
Also it appears the Ebizzy numbers ought to be stable enough now to
make the range-TLB-flush measurements more precise?
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 11:00 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-17 11:00 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> On Mon, Dec 16, 2013 at 02:44:49PM +0100, Ingo Molnar wrote:
> >
> > * Mel Gorman <mgorman@suse.de> wrote:
> >
> > > > Whatever we did right in v3.4 we want to do in v3.13 as well - or
> > > > at least understand it.
> > >
> > > Also agreed. I started a bisection before answering this mail. It
> > > would be cooler and potentially faster to figure it out from direct
> > > analysis but bisection is reliable and less guesswork.
> >
> > Trying to guess can potentially last a _lot_ longer than a generic,
> > no-assumptions bisection ...
> >
>
> Indeed. In this case, it would have taken me a while to find the correct
> problem because I would consider the affected area to be relatively stable.
>
> > <SNIP>
> >
> > Does the benchmark execute a fixed amount of transactions per thread?
> >
>
> Yes.
>
> > That might artificially increase the numeric regression: with more
> > threads it 'magnifies' any unfairness effects because slower threads
> > will become slower, faster threads will become faster, as the thread
> > count increases.
> >
> > [ That in itself is somewhat artificial, because real workloads tend
> > to balance between threads dynamically and don't insist on keeping
> > the fastest threads idle near the end of a run. It does not
> > invalidate the complaint about the unfairness itself, obviously. ]
> >
>
> I was wrong about fairness. The first bisection found that cache hotness
> was a more important factor due to a small mistake made in 3.13-rc1
>
> ---8<---
> sched: Assign correct scheduling domain to sd_llc
>
> Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
> dereference on sd_busy but the fix also altered what scheduling domain it
> used for sd_llc. One impact of this is that a task selecting a runqueue may
> consider idle CPUs that are not cache siblings as candidates for running.
> Tasks are then running on CPUs that are not cache hot.
>
> This was found through bisection where ebizzy threads were not seeing equal
> performance and it looked like a scheduling fairness issue. This patch
> mitigates but does not completely fix the problem on all machines tested
> implying there may be an additional bug or a common root cause. Here are
> the average range of performance seen by individual ebizzy threads. It
> was tested on top of candidate patches related to x86 TLB range flushing.
>
> 4-core machine
> 3.13.0-rc3 3.13.0-rc3
> vanilla fixsd-v3r3
> Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
> Mean 2 0.34 ( 0.00%) 0.10 ( 70.59%)
> Mean 3 1.29 ( 0.00%) 0.93 ( 27.91%)
> Mean 4 7.08 ( 0.00%) 0.77 ( 89.12%)
> Mean 5 193.54 ( 0.00%) 2.14 ( 98.89%)
> Mean 6 151.12 ( 0.00%) 2.06 ( 98.64%)
> Mean 7 115.38 ( 0.00%) 2.04 ( 98.23%)
> Mean 8 108.65 ( 0.00%) 1.92 ( 98.23%)
>
> 8-core machine
> Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
> Mean 2 0.40 ( 0.00%) 0.21 ( 47.50%)
> Mean 3 23.73 ( 0.00%) 0.89 ( 96.25%)
> Mean 4 12.79 ( 0.00%) 1.04 ( 91.87%)
> Mean 5 13.08 ( 0.00%) 2.42 ( 81.50%)
> Mean 6 23.21 ( 0.00%) 69.46 (-199.27%)
> Mean 7 15.85 ( 0.00%) 101.72 (-541.77%)
> Mean 8 109.37 ( 0.00%) 19.13 ( 82.51%)
> Mean 12 124.84 ( 0.00%) 28.62 ( 77.07%)
> Mean 16 113.50 ( 0.00%) 24.16 ( 78.71%)
>
> It's eliminated for one machine and reduced for another.
>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> ---
> kernel/sched/core.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e85cda2..a848254 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
> static void update_top_cache_domain(int cpu)
> {
> struct sched_domain *sd;
> + struct sched_domain *busy_sd = NULL;
> int id = cpu;
> int size = 1;
>
> @@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
> if (sd) {
> id = cpumask_first(sched_domain_span(sd));
> size = cpumask_weight(sched_domain_span(sd));
> - sd = sd->parent; /* sd_busy */
> + busy_sd = sd->parent; /* sd_busy */
> }
> - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
> + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
>
> rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> per_cpu(sd_llc_size, cpu) = size;
Indeed that makes a lot of sense, thanks Mel for tracking down this
part of the puzzle! Will get your fix to Linus ASAP.
Does this fix also speed up Ebizzy's transaction performance, or is
its main effect a reduction in workload variation noise?
Also it appears the Ebizzy numbers ought to be stable enough now to
make the range-TLB-flush measurements more precise?
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 11:00 ` Ingo Molnar
@ 2013-12-17 14:32 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 14:32 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Tue, Dec 17, 2013 at 12:00:51PM +0100, Ingo Molnar wrote:
>
> > sched: Assign correct scheduling domain to sd_llc
> >
> > Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
> > dereference on sd_busy but the fix also altered what scheduling domain it
> > used for sd_llc. One impact of this is that a task selecting a runqueue may
> > consider idle CPUs that are not cache siblings as candidates for running.
> > Tasks are then running on CPUs that are not cache hot.
> >
> > <PATCH SNIPPED>
>
> Indeed that makes a lot of sense, thanks Mel for tracking down this
> part of the puzzle! Will get your fix to Linus ASAP.
>
> Does this fix also speed up Ebizzy's transaction performance, or is
> its main effect a reduction in workload variation noise?
>
Mixed results, some gains and some losses.
3.13.0-rc3 3.13.0-rc3 3.4.69 3.13.0-rc3
vanilla nowalk-v2r7 vanilla fixsd-v3r3
Mean 1 7295.77 ( 0.00%) 7835.63 ( 7.40%) 6713.32 ( -7.98%) 7757.03 ( 6.32%)
Mean 2 8252.58 ( 0.00%) 9554.63 ( 15.78%) 8334.43 ( 0.99%) 9457.34 ( 14.60%)
Mean 3 8179.74 ( 0.00%) 9032.46 ( 10.42%) 8134.42 ( -0.55%) 8928.25 ( 9.15%)
Mean 4 7862.45 ( 0.00%) 8688.01 ( 10.50%) 7966.27 ( 1.32%) 8560.87 ( 8.88%)
Mean 5 7170.24 ( 0.00%) 8216.15 ( 14.59%) 7820.63 ( 9.07%) 8270.72 ( 15.35%)
Mean 6 6835.10 ( 0.00%) 7866.95 ( 15.10%) 7773.30 ( 13.73%) 7998.50 ( 17.02%)
Mean 7 6740.99 ( 0.00%) 7586.36 ( 12.54%) 7712.45 ( 14.41%) 7519.46 ( 11.55%)
Mean 8 6494.01 ( 0.00%) 6849.82 ( 5.48%) 7705.62 ( 18.66%) 6842.44 ( 5.37%)
Mean 12 6567.37 ( 0.00%) 6973.66 ( 6.19%) 7554.82 ( 15.04%) 6471.83 ( -1.45%)
Mean 16 6630.26 ( 0.00%) 7042.52 ( 6.22%) 7331.04 ( 10.57%) 6380.16 ( -3.77%)
Range 1 767.00 ( 0.00%) 194.00 ( 74.71%) 661.00 ( 13.82%) 217.00 ( 71.71%)
Range 2 178.00 ( 0.00%) 185.00 ( -3.93%) 592.00 (-232.58%) 240.00 (-34.83%)
Range 3 175.00 ( 0.00%) 213.00 (-21.71%) 431.00 (-146.29%) 511.00 (-192.00%)
Range 4 806.00 ( 0.00%) 924.00 (-14.64%) 542.00 ( 32.75%) 723.00 ( 10.30%)
Range 5 544.00 ( 0.00%) 438.00 ( 19.49%) 444.00 ( 18.38%) 663.00 (-21.88%)
Range 6 399.00 ( 0.00%) 1111.00 (-178.45%) 528.00 (-32.33%) 1031.00 (-158.40%)
Range 7 629.00 ( 0.00%) 895.00 (-42.29%) 467.00 ( 25.76%) 877.00 (-39.43%)
Range 8 400.00 ( 0.00%) 255.00 ( 36.25%) 435.00 ( -8.75%) 656.00 (-64.00%)
Range 12 233.00 ( 0.00%) 108.00 ( 53.65%) 330.00 (-41.63%) 343.00 (-47.21%)
Range 16 141.00 ( 0.00%) 134.00 ( 4.96%) 496.00 (-251.77%) 291.00 (-106.38%)
Stddev 1 73.94 ( 0.00%) 52.33 ( 29.23%) 177.17 (-139.59%) 37.34 ( 49.51%)
Stddev 2 23.47 ( 0.00%) 42.08 (-79.24%) 88.91 (-278.74%) 38.16 (-62.58%)
Stddev 3 36.48 ( 0.00%) 29.02 ( 20.45%) 101.07 (-177.05%) 134.62 (-269.01%)
Stddev 4 158.37 ( 0.00%) 133.99 ( 15.40%) 130.52 ( 17.59%) 150.61 ( 4.90%)
Stddev 5 116.74 ( 0.00%) 76.76 ( 34.25%) 78.31 ( 32.92%) 116.67 ( 0.06%)
Stddev 6 66.34 ( 0.00%) 273.87 (-312.83%) 87.79 (-32.33%) 235.11 (-254.40%)
Stddev 7 145.62 ( 0.00%) 174.99 (-20.16%) 90.52 ( 37.84%) 156.08 ( -7.18%)
Stddev 8 68.51 ( 0.00%) 47.58 ( 30.54%) 81.11 (-18.39%) 96.00 (-40.13%)
Stddev 12 32.15 ( 0.00%) 20.18 ( 37.22%) 65.74 (-104.50%) 45.00 (-39.99%)
Stddev 16 21.59 ( 0.00%) 20.29 ( 6.01%) 86.42 (-300.25%) 38.20 (-76.93%)
fixsd-v3r3 is all the patches discussed so far applied. Lost at higher
thread counts, won at lower ones. All the results still worse than 3.4.69
To complicate matters further, additional testing indicated that the
tlbflush shift change *may* have made the variation worse. I was preparing
to bisect to search for patches that increased "thread performance spread"
in ebizzy and tested a number of potential bisect points
Tue 17 Dec 11:11:08 GMT 2013 ivy ebizzyrange v3.12 mean-max:36 good
Tue 17 Dec 11:32:28 GMT 2013 ivy ebizzyrange v3.13-rc3 mean-max:80 bad
Tue 17 Dec 12:00:23 GMT 2013 ivy ebizzyrange v3.4 mean-max:0 good
Tue 17 Dec 12:21:58 GMT 2013 ivy ebizzyrange v3.10 mean-max:26 good
Tue 17 Dec 12:42:49 GMT 2013 ivy ebizzyrange v3.11 mean-max:7 good
Tue 17 Dec 13:32:14 GMT 2013 ivy ebizzyrange x86-tlb-range-flush-optimisation-v3r3 mean-max:110 bad
This is part of the log for an automated bisection script. mean-max is
the worst average spread recorded for all threads tested. It's telling
me that the worst thread spread seen by v3.13-rc3 is 80 and the worst
seen by the patch series (tlbflush shift change, fix to sd etc) is 110.
The bisection is doing very few iterations so it could just be co-incidence
but it makes sense. If the kernel is scheduling tasks on CPUs that are not
cache siblings then the cost of remote TLB flushes (range or otherwise)
changes. It's an important enough problem that I feel compelled to
retest with
x86: mm: Clean up inconsistencies when flushing TLB ranges
x86: mm: Account for TLB flushes only when debugging
x86: mm: Eliminate redundant page table walk during TLB range flushing
sched: Assign correct scheduling domain to sd_llc
I'll then re-evalate the tlbflush shift patch based on what falls out of
that test. It may turn out that tlbflush shifts on its own simply cannot
optimise for both the tlbflush microbenchmark and ebizzy as the former
deals with average cost and the latter hits the worst case every time.
At that point it'll be time to look at profiles and see where we are
actually spending time because the possibilities of finding things to fix
through bisection will be exhausted.
> Also it appears the Ebizzy numbers ought to be stable enough now to
> make the range-TLB-flush measurements more precise?
>
Right now, the tlbflush microbenchmark figures look awful on the 8-core
machine when the tlbflush shift patch and the schedule domain fix are
both applied.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 14:32 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 14:32 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Tue, Dec 17, 2013 at 12:00:51PM +0100, Ingo Molnar wrote:
>
> > sched: Assign correct scheduling domain to sd_llc
> >
> > Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
> > dereference on sd_busy but the fix also altered what scheduling domain it
> > used for sd_llc. One impact of this is that a task selecting a runqueue may
> > consider idle CPUs that are not cache siblings as candidates for running.
> > Tasks are then running on CPUs that are not cache hot.
> >
> > <PATCH SNIPPED>
>
> Indeed that makes a lot of sense, thanks Mel for tracking down this
> part of the puzzle! Will get your fix to Linus ASAP.
>
> Does this fix also speed up Ebizzy's transaction performance, or is
> its main effect a reduction in workload variation noise?
>
Mixed results, some gains and some losses.
3.13.0-rc3 3.13.0-rc3 3.4.69 3.13.0-rc3
vanilla nowalk-v2r7 vanilla fixsd-v3r3
Mean 1 7295.77 ( 0.00%) 7835.63 ( 7.40%) 6713.32 ( -7.98%) 7757.03 ( 6.32%)
Mean 2 8252.58 ( 0.00%) 9554.63 ( 15.78%) 8334.43 ( 0.99%) 9457.34 ( 14.60%)
Mean 3 8179.74 ( 0.00%) 9032.46 ( 10.42%) 8134.42 ( -0.55%) 8928.25 ( 9.15%)
Mean 4 7862.45 ( 0.00%) 8688.01 ( 10.50%) 7966.27 ( 1.32%) 8560.87 ( 8.88%)
Mean 5 7170.24 ( 0.00%) 8216.15 ( 14.59%) 7820.63 ( 9.07%) 8270.72 ( 15.35%)
Mean 6 6835.10 ( 0.00%) 7866.95 ( 15.10%) 7773.30 ( 13.73%) 7998.50 ( 17.02%)
Mean 7 6740.99 ( 0.00%) 7586.36 ( 12.54%) 7712.45 ( 14.41%) 7519.46 ( 11.55%)
Mean 8 6494.01 ( 0.00%) 6849.82 ( 5.48%) 7705.62 ( 18.66%) 6842.44 ( 5.37%)
Mean 12 6567.37 ( 0.00%) 6973.66 ( 6.19%) 7554.82 ( 15.04%) 6471.83 ( -1.45%)
Mean 16 6630.26 ( 0.00%) 7042.52 ( 6.22%) 7331.04 ( 10.57%) 6380.16 ( -3.77%)
Range 1 767.00 ( 0.00%) 194.00 ( 74.71%) 661.00 ( 13.82%) 217.00 ( 71.71%)
Range 2 178.00 ( 0.00%) 185.00 ( -3.93%) 592.00 (-232.58%) 240.00 (-34.83%)
Range 3 175.00 ( 0.00%) 213.00 (-21.71%) 431.00 (-146.29%) 511.00 (-192.00%)
Range 4 806.00 ( 0.00%) 924.00 (-14.64%) 542.00 ( 32.75%) 723.00 ( 10.30%)
Range 5 544.00 ( 0.00%) 438.00 ( 19.49%) 444.00 ( 18.38%) 663.00 (-21.88%)
Range 6 399.00 ( 0.00%) 1111.00 (-178.45%) 528.00 (-32.33%) 1031.00 (-158.40%)
Range 7 629.00 ( 0.00%) 895.00 (-42.29%) 467.00 ( 25.76%) 877.00 (-39.43%)
Range 8 400.00 ( 0.00%) 255.00 ( 36.25%) 435.00 ( -8.75%) 656.00 (-64.00%)
Range 12 233.00 ( 0.00%) 108.00 ( 53.65%) 330.00 (-41.63%) 343.00 (-47.21%)
Range 16 141.00 ( 0.00%) 134.00 ( 4.96%) 496.00 (-251.77%) 291.00 (-106.38%)
Stddev 1 73.94 ( 0.00%) 52.33 ( 29.23%) 177.17 (-139.59%) 37.34 ( 49.51%)
Stddev 2 23.47 ( 0.00%) 42.08 (-79.24%) 88.91 (-278.74%) 38.16 (-62.58%)
Stddev 3 36.48 ( 0.00%) 29.02 ( 20.45%) 101.07 (-177.05%) 134.62 (-269.01%)
Stddev 4 158.37 ( 0.00%) 133.99 ( 15.40%) 130.52 ( 17.59%) 150.61 ( 4.90%)
Stddev 5 116.74 ( 0.00%) 76.76 ( 34.25%) 78.31 ( 32.92%) 116.67 ( 0.06%)
Stddev 6 66.34 ( 0.00%) 273.87 (-312.83%) 87.79 (-32.33%) 235.11 (-254.40%)
Stddev 7 145.62 ( 0.00%) 174.99 (-20.16%) 90.52 ( 37.84%) 156.08 ( -7.18%)
Stddev 8 68.51 ( 0.00%) 47.58 ( 30.54%) 81.11 (-18.39%) 96.00 (-40.13%)
Stddev 12 32.15 ( 0.00%) 20.18 ( 37.22%) 65.74 (-104.50%) 45.00 (-39.99%)
Stddev 16 21.59 ( 0.00%) 20.29 ( 6.01%) 86.42 (-300.25%) 38.20 (-76.93%)
fixsd-v3r3 is all the patches discussed so far applied. Lost at higher
thread counts, won at lower ones. All the results still worse than 3.4.69
To complicate matters further, additional testing indicated that the
tlbflush shift change *may* have made the variation worse. I was preparing
to bisect to search for patches that increased "thread performance spread"
in ebizzy and tested a number of potential bisect points
Tue 17 Dec 11:11:08 GMT 2013 ivy ebizzyrange v3.12 mean-max:36 good
Tue 17 Dec 11:32:28 GMT 2013 ivy ebizzyrange v3.13-rc3 mean-max:80 bad
Tue 17 Dec 12:00:23 GMT 2013 ivy ebizzyrange v3.4 mean-max:0 good
Tue 17 Dec 12:21:58 GMT 2013 ivy ebizzyrange v3.10 mean-max:26 good
Tue 17 Dec 12:42:49 GMT 2013 ivy ebizzyrange v3.11 mean-max:7 good
Tue 17 Dec 13:32:14 GMT 2013 ivy ebizzyrange x86-tlb-range-flush-optimisation-v3r3 mean-max:110 bad
This is part of the log for an automated bisection script. mean-max is
the worst average spread recorded for all threads tested. It's telling
me that the worst thread spread seen by v3.13-rc3 is 80 and the worst
seen by the patch series (tlbflush shift change, fix to sd etc) is 110.
The bisection is doing very few iterations so it could just be co-incidence
but it makes sense. If the kernel is scheduling tasks on CPUs that are not
cache siblings then the cost of remote TLB flushes (range or otherwise)
changes. It's an important enough problem that I feel compelled to
retest with
x86: mm: Clean up inconsistencies when flushing TLB ranges
x86: mm: Account for TLB flushes only when debugging
x86: mm: Eliminate redundant page table walk during TLB range flushing
sched: Assign correct scheduling domain to sd_llc
I'll then re-evalate the tlbflush shift patch based on what falls out of
that test. It may turn out that tlbflush shifts on its own simply cannot
optimise for both the tlbflush microbenchmark and ebizzy as the former
deals with average cost and the latter hits the worst case every time.
At that point it'll be time to look at profiles and see where we are
actually spending time because the possibilities of finding things to fix
through bisection will be exhausted.
> Also it appears the Ebizzy numbers ought to be stable enough now to
> make the range-TLB-flush measurements more precise?
>
Right now, the tlbflush microbenchmark figures look awful on the 8-core
machine when the tlbflush shift patch and the schedule domain fix are
both applied.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 14:32 ` Mel Gorman
@ 2013-12-17 14:42 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-17 14:42 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> [...]
>
> At that point it'll be time to look at profiles and see where we are
> actually spending time because the possibilities of finding things
> to fix through bisection will be exhausted.
Yeah.
One (heavy handed but effective) trick that can be used in such a
situation is to just revert everything that is causing problems, and
continue reverting until we get back to a v3.4 baseline performance.
Once such a 'clean' tree (or queue of patches) is achived, that can be
used as a measurement base and the individual features can be
re-applied again, one by one, with measurement and analysis becoming a
lot easier.
> > Also it appears the Ebizzy numbers ought to be stable enough now
> > to make the range-TLB-flush measurements more precise?
>
> Right now, the tlbflush microbenchmark figures look awful on the
> 8-core machine when the tlbflush shift patch and the schedule domain
> fix are both applied.
I think that furthr strengthens the case for the 'clean base' approach
I outlined above - but it's your call obviously ...
Thanks again for going through all this. Tracking multi-commit
performance regressions across 1.5 years worth of commits is generally
very hard. Does your testing effort comes from enterprise Linux QA
testing, or did you ran into this problem accidentally?
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 14:42 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-17 14:42 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> [...]
>
> At that point it'll be time to look at profiles and see where we are
> actually spending time because the possibilities of finding things
> to fix through bisection will be exhausted.
Yeah.
One (heavy handed but effective) trick that can be used in such a
situation is to just revert everything that is causing problems, and
continue reverting until we get back to a v3.4 baseline performance.
Once such a 'clean' tree (or queue of patches) is achived, that can be
used as a measurement base and the individual features can be
re-applied again, one by one, with measurement and analysis becoming a
lot easier.
> > Also it appears the Ebizzy numbers ought to be stable enough now
> > to make the range-TLB-flush measurements more precise?
>
> Right now, the tlbflush microbenchmark figures look awful on the
> 8-core machine when the tlbflush shift patch and the schedule domain
> fix are both applied.
I think that furthr strengthens the case for the 'clean base' approach
I outlined above - but it's your call obviously ...
Thanks again for going through all this. Tracking multi-commit
performance regressions across 1.5 years worth of commits is generally
very hard. Does your testing effort comes from enterprise Linux QA
testing, or did you ran into this problem accidentally?
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 14:42 ` Ingo Molnar
@ 2013-12-17 17:54 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 17:54 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Tue, Dec 17, 2013 at 03:42:14PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > [...]
> >
> > At that point it'll be time to look at profiles and see where we are
> > actually spending time because the possibilities of finding things
> > to fix through bisection will be exhausted.
>
> Yeah.
>
> One (heavy handed but effective) trick that can be used in such a
> situation is to just revert everything that is causing problems, and
> continue reverting until we get back to a v3.4 baseline performance.
>
Very tempted but the potential timeframe here is very large and the number
of patches could be considerable. Some patches cause a lot of noise. For
example, one patch enabled ACPI cpufreq driver loading which looks like
a regression during that window but it's a side-effect that gets fixed
later. It'll take time to identify all the patches that potentially cause
problems.
> Once such a 'clean' tree (or queue of patches) is achived, that can be
> used as a measurement base and the individual features can be
> re-applied again, one by one, with measurement and analysis becoming a
> lot easier.
>
Ordinarily I would agree with you but would prefer a shorter window for
that type of strategy.
> > > Also it appears the Ebizzy numbers ought to be stable enough now
> > > to make the range-TLB-flush measurements more precise?
> >
> > Right now, the tlbflush microbenchmark figures look awful on the
> > 8-core machine when the tlbflush shift patch and the schedule domain
> > fix are both applied.
>
> I think that furthr strengthens the case for the 'clean base' approach
> I outlined above - but it's your call obviously ...
>
I'll keep it as plan b if it cannot be fixed with a direct approach.
> Thanks again for going through all this. Tracking multi-commit
> performance regressions across 1.5 years worth of commits is generally
> very hard. Does your testing effort comes from enterprise Linux QA
> testing, or did you ran into this problem accidentally?
>
It does not come from enterprise Linux QA testing but it's motivated by
it. I want to catch as many "obvious" performance bugs before they do as
it saves time and stress in the long run. To assist that, I setup continual
performance regression testing and ebizzy was included in the first report
I opened. It makes me worry what the rest of the reports contain.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-17 17:54 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-17 17:54 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Tue, Dec 17, 2013 at 03:42:14PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > [...]
> >
> > At that point it'll be time to look at profiles and see where we are
> > actually spending time because the possibilities of finding things
> > to fix through bisection will be exhausted.
>
> Yeah.
>
> One (heavy handed but effective) trick that can be used in such a
> situation is to just revert everything that is causing problems, and
> continue reverting until we get back to a v3.4 baseline performance.
>
Very tempted but the potential timeframe here is very large and the number
of patches could be considerable. Some patches cause a lot of noise. For
example, one patch enabled ACPI cpufreq driver loading which looks like
a regression during that window but it's a side-effect that gets fixed
later. It'll take time to identify all the patches that potentially cause
problems.
> Once such a 'clean' tree (or queue of patches) is achived, that can be
> used as a measurement base and the individual features can be
> re-applied again, one by one, with measurement and analysis becoming a
> lot easier.
>
Ordinarily I would agree with you but would prefer a shorter window for
that type of strategy.
> > > Also it appears the Ebizzy numbers ought to be stable enough now
> > > to make the range-TLB-flush measurements more precise?
> >
> > Right now, the tlbflush microbenchmark figures look awful on the
> > 8-core machine when the tlbflush shift patch and the schedule domain
> > fix are both applied.
>
> I think that furthr strengthens the case for the 'clean base' approach
> I outlined above - but it's your call obviously ...
>
I'll keep it as plan b if it cannot be fixed with a direct approach.
> Thanks again for going through all this. Tracking multi-commit
> performance regressions across 1.5 years worth of commits is generally
> very hard. Does your testing effort comes from enterprise Linux QA
> testing, or did you ran into this problem accidentally?
>
It does not come from enterprise Linux QA testing but it's motivated by
it. I want to catch as many "obvious" performance bugs before they do as
it saves time and stress in the long run. To assist that, I setup continual
performance regression testing and ebizzy was included in the first report
I opened. It makes me worry what the rest of the reports contain.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-13 20:01 ` Mel Gorman
@ 2013-12-18 7:28 ` Fengguang Wu
-1 siblings, 0 replies; 71+ messages in thread
From: Fengguang Wu @ 2013-12-18 7:28 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
Hi Mel,
I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
Basically there are
1) no big performance changes
76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
407038 +1.2% 412032 TOTAL hackbench.throughput
50307 -1.5% 49549 TOTAL ebizzy.throughput
2) huge proc-vmstat.nr_tlb_* increases
99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
Here are the detailed numbers. eabb1f89905a0c809d13 is the HEAD commit
with 4 patches applied. The "~ N%" notations are the stddev percent.
The "[+-] N%" notations are the increase/decrease percent. The
brickland2, lkp-snb01, lkp-ib03 etc. are testbox names.
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
3345155 ~ 0% -0.3% 3335172 ~ 0% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
33249939 ~ 0% +3.3% 34336155 ~ 1% brickland2/micro/vm-scalability/1T-shm-pread-seq
4669392 ~ 0% -0.2% 4660378 ~ 0% brickland2/micro/vm-scalability/300s-anon-r-rand
18822426 ~ 5% -10.2% 16911111 ~ 0% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
4993937 ~ 1% +4.6% 5221846 ~ 2% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
4010960 ~ 0% +0.4% 4025880 ~ 0% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
7536676 ~ 0% +1.1% 7617297 ~ 0% brickland2/micro/vm-scalability/300s-lru-file-readtwice
76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
88901 ~ 2% -3.1% 86131 ~ 0% brickland2/micro/hackbench/600%-process-pipe
153250 ~ 2% +3.1% 157931 ~ 1% brickland2/micro/hackbench/600%-process-socket
164886 ~ 1% +1.9% 167969 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
407038 +1.2% 412032 TOTAL hackbench.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
50307 ~ 1% -1.5% 49549 ~ 0% lkp-ib03/micro/ebizzy/400%-5-30
50307 -1.5% 49549 TOTAL ebizzy.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
270328 ~ 0% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
512691 ~ 0% +4.7e+14% 2.412e+18 ~51% brickland1/micro/will-it-scale/futex1
510718 ~ 1% +2.8e+14% 1.408e+18 ~83% brickland1/micro/will-it-scale/futex2
514847 ~ 0% +1.5e+14% 7.66e+17 ~44% brickland1/micro/will-it-scale/getppid1
512854 ~ 0% +1.4e+14% 7.159e+17 ~34% brickland1/micro/will-it-scale/lock1
516614 ~ 0% +8.1e+13% 4.189e+17 ~82% brickland1/micro/will-it-scale/lseek1
514457 ~ 1% +2.2e+14% 1.12e+18 ~71% brickland1/micro/will-it-scale/lseek2
533138 ~ 0% +4.8e+14% 2.561e+18 ~33% brickland1/micro/will-it-scale/malloc2
518503 ~ 0% +2.7e+14% 1.414e+18 ~74% brickland1/micro/will-it-scale/open1
512378 ~ 0% +2.4e+14% 1.232e+18 ~56% brickland1/micro/will-it-scale/open2
515078 ~ 0% +1.8e+14% 9.444e+17 ~23% brickland1/micro/will-it-scale/page_fault1
511034 ~ 0% +1.1e+14% 5.572e+17 ~43% brickland1/micro/will-it-scale/page_fault2
516217 ~ 0% +2.8e+14% 1.457e+18 ~57% brickland1/micro/will-it-scale/page_fault3
513735 ~ 0% +4.5e+13% 2.32e+17 ~75% brickland1/micro/will-it-scale/pipe1
513640 ~ 1% +7.3e+14% 3.766e+18 ~31% brickland1/micro/will-it-scale/poll1
515473 ~ 0% +6.1e+14% 3.138e+18 ~24% brickland1/micro/will-it-scale/poll2
517039 ~ 0% +2e+14% 1.032e+18 ~48% brickland1/micro/will-it-scale/posix_semaphore1
513686 ~ 0% +2e+14% 1.045e+18 ~107% brickland1/micro/will-it-scale/pread1
517218 ~ 1% +1.7e+14% 8.752e+17 ~57% brickland1/micro/will-it-scale/pread2
514904 ~ 0% +1.2e+14% 6.399e+17 ~46% brickland1/micro/will-it-scale/pthread_mutex1
512881 ~ 0% +2.6e+14% 1.314e+18 ~47% brickland1/micro/will-it-scale/pthread_mutex2
512844 ~ 0% +3.1e+14% 1.57e+18 ~91% brickland1/micro/will-it-scale/pwrite1
516859 ~ 0% +2.9e+14% 1.512e+18 ~37% brickland1/micro/will-it-scale/pwrite2
513227 ~ 0% +6.9e+13% 3.518e+17 ~90% brickland1/micro/will-it-scale/read1
518291 ~ 0% +3.6e+14% 1.875e+18 ~18% brickland1/micro/will-it-scale/read2
517795 ~ 0% +4.5e+14% 2.306e+18 ~53% brickland1/micro/will-it-scale/readseek
521558 ~ 0% +4.3e+14% 2.252e+18 ~41% brickland1/micro/will-it-scale/sched_yield
518017 ~ 1% +1.5e+14% 7.85e+17 ~42% brickland1/micro/will-it-scale/unlink2
514742 ~ 0% +4e+14% 2.046e+18 ~53% brickland1/micro/will-it-scale/write1
512803 ~ 0% +4.8e+14% 2.443e+18 ~22% brickland1/micro/will-it-scale/writeseek
1777511 ~ 0% +1.9e+13% 3.363e+17 ~33% brickland2/micro/hackbench/600%-process-pipe
2132721 ~ 6% +5.5e+13% 1.172e+18 ~24% brickland2/micro/hackbench/600%-process-socket
886153 ~ 1% +6.1e+13% 5.427e+17 ~38% brickland2/micro/hackbench/600%-threads-pipe
627654 ~ 2% +2.3e+14% 1.452e+18 ~ 8% brickland2/micro/hackbench/600%-threads-socket
5022448 ~ 7% +9.8e+12% 4.911e+17 ~70% brickland2/micro/vm-scalability/16G-msync
655929 ~ 2% +3.3e+13% 2.161e+17 ~43% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
645229 ~ 1% +1e+14% 6.675e+17 ~92% brickland2/micro/vm-scalability/16G-shm-pread-rand
511508 ~ 1% +4e+14% 2.054e+18 ~29% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
649861 ~ 0% +3.7e+13% 2.395e+17 ~62% brickland2/micro/vm-scalability/16G-shm-xread-rand
324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
511881 ~ 0% +9.4e+13% 4.792e+17 ~ 5% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
523080 ~ 0% +4e+14% 2.087e+18 ~17% brickland2/micro/vm-scalability/1T-shm-pread-seq
483125 ~ 1% +4.6e+14% 2.23e+18 ~13% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
527818 ~ 0% +3.6e+14% 1.898e+18 ~19% brickland2/micro/vm-scalability/1T-shm-xread-seq
449900 ~ 1% +2.1e+14% 9.422e+17 ~60% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
286569 ~ 0% +7.3e+14% 2.103e+18 ~83% brickland2/micro/vm-scalability/300s-anon-r-seq
458987 ~ 0% +5.7e+13% 2.601e+17 ~35% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
459891 ~ 1% +1.8e+14% 8.497e+17 ~33% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
1918575 ~ 0% +2.5e+13% 4.831e+17 ~17% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
1691758 ~ 0% +6.3e+13% 1.06e+18 ~30% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
500601 ~ 0% +7.3e+13% 3.678e+17 ~31% brickland2/micro/vm-scalability/300s-lru-file-readonce
471815 ~ 1% +9.5e+13% 4.485e+17 ~74% brickland2/micro/vm-scalability/300s-lru-file-readtwice
499281 ~ 1% +1.3e+14% 6.267e+17 ~10% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
541137 ~ 0% +7.4e+13% 4.026e+17 ~50% brickland2/micro/vm-scalability/300s-mmap-pread-rand
422058 ~ 1% +2.4e+14% 9.997e+17 ~16% brickland2/micro/vm-scalability/300s-mmap-pread-seq
486583 ~ 2% +1.3e+14% 6.117e+17 ~37% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
429204 ~ 2% +4.2e+14% 1.792e+18 ~ 6% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
358178 ~ 0% +4.4e+14% 1.58e+18 ~ 9% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
335104 ~ 0% +5.5e+14% 1.848e+18 ~16% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
331175 ~ 0% +4.4e+14% 1.471e+18 ~44% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
356821 ~ 0% +2.4e+14% 8.612e+17 ~63% fat/micro/dd-write/1HDD-cfq-xfs-100dd
336606 ~ 0% +2e+14% 6.822e+17 ~73% fat/micro/dd-write/1HDD-cfq-xfs-10dd
329511 ~ 0% +2.9e+14% 9.518e+17 ~63% fat/micro/dd-write/1HDD-cfq-xfs-1dd
335872 ~ 0% +4.6e+14% 1.55e+18 ~ 2% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
332429 ~ 0% +3.2e+14% 1.051e+18 ~61% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
359230 ~ 0% +1.8e+14% 6.545e+17 ~50% fat/micro/dd-write/1HDD-deadline-ext4-100dd
335957 ~ 0% +2.9e+14% 9.75e+17 ~25% fat/micro/dd-write/1HDD-deadline-ext4-10dd
333178 ~ 0% +1.1e+14% 3.511e+17 ~65% fat/micro/dd-write/1HDD-deadline-ext4-1dd
357406 ~ 0% +7.1e+14% 2.55e+18 ~22% fat/micro/dd-write/1HDD-deadline-xfs-100dd
332342 ~ 0% +4e+14% 1.319e+18 ~11% fat/micro/dd-write/1HDD-deadline-xfs-10dd
331823 ~ 0% +2.2e+14% 7.247e+17 ~58% fat/micro/dd-write/1HDD-deadline-xfs-1dd
103797 ~ 0% -100.0% 1 ~141% lkp-a04/micro/netperf/120s-200%-TCP_RR
29352723 ~ 0% +1.8e+12% 5.199e+17 ~68% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
253764 ~ 0% +1.5e+14% 3.723e+17 ~41% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
251460 ~ 1% +1.2e+14% 3.09e+17 ~66% lkp-ib03/micro/netperf/120s-200%-TCP_RR
252357 ~ 1% +1.8e+14% 4.643e+17 ~42% lkp-ib03/micro/netperf/120s-200%-UDP_RR
2802319 ~ 3% +8.8e+12% 2.476e+17 ~83% lkp-nex05/micro/hackbench/800%-process-pipe
2344699 ~ 0% +3.1e+13% 7.351e+17 ~24% lkp-nex05/micro/hackbench/800%-process-socket
944933 ~ 2% +4.3e+13% 4.06e+17 ~ 7% lkp-nex05/micro/hackbench/800%-threads-pipe
763122 ~ 0% +5.6e+13% 4.296e+17 ~61% lkp-nex05/micro/hackbench/800%-threads-socket
265113 ~ 0% -100.0% 0 lkp-nex05/micro/tlbflush/100%-8
1375290 ~ 3% +2.4e+13% 3.263e+17 ~51% lkp-snb01/micro/hackbench/1600%-threads-pipe
1141467 ~ 1% +1.7e+13% 1.977e+17 ~40% lkp-snb01/micro/hackbench/1600%-threads-socket
789789 ~ 0% +1.7e+15% 1.37e+19 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
559134 ~ 0% +2.2e+15% 1.211e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
533188 ~ 0% +2.1e+15% 1.105e+19 ~ 5% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
794948 ~ 0% +1.9e+15% 1.518e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-100dd
555237 ~ 0% +2.4e+15% 1.35e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
531695 ~ 0% +1.5e+15% 8.153e+18 ~11% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
778886 ~ 0% +1.9e+15% 1.517e+19 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
549300 ~ 0% +2.3e+15% 1.283e+19 ~ 0% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
527275 ~ 0% +1.2e+15% 6.59e+18 ~12% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
794872 ~ 0% +1.9e+15% 1.506e+19 ~ 0% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-100dd
553822 ~ 0% +2.4e+15% 1.306e+19 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
529079 ~ 0% +1.5e+15% 7.958e+18 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
776427 ~ 0% +2e+15% 1.552e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-100dd
546912 ~ 0% +2.3e+15% 1.263e+19 ~ 3% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
523882 ~ 0% +1.3e+15% 6.782e+18 ~ 7% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
466018 ~ 0% +7.2e+14% 3.362e+18 ~ 4% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
465694 ~ 0% +7.5e+14% 3.494e+18 ~20% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
636199 ~ 1% +1.4e+14% 8.6e+17 ~38% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
628230 ~ 1% +1.3e+14% 7.951e+17 ~14% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
624286 ~ 0% +9.9e+14% 6.187e+18 ~ 2% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
470666 ~ 1% +3.7e+14% 1.748e+18 ~ 5% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
465417 ~ 0% +5.1e+14% 2.354e+18 ~32% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
581600 ~ 0% +1.4e+14% 8.304e+17 ~15% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrd-sync
581818 ~ 0% +1.9e+14% 1.097e+18 ~57% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
467899 ~ 0% +2.3e+13% 1.061e+17 ~22% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
582271 ~ 0% +1.2e+15% 7.192e+18 ~ 5% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
471064 ~ 1% +2.8e+14% 1.305e+18 ~18% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
464862 ~ 0% +5.6e+14% 2.612e+18 ~13% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
393 ~ 1% -85.7% 56 ~28% avoton1/crypto/tcrypt/2s-505-509
15803 ~11% +1.2e+16% 1.965e+18 ~65% brickland1/micro/will-it-scale/futex1
4913 ~12% +3.2e+16% 1.554e+18 ~84% brickland1/micro/will-it-scale/futex2
12852 ~85% +3.4e+15% 4.376e+17 ~45% brickland1/micro/will-it-scale/futex4
14179 ~47% +6.3e+15% 8.988e+17 ~47% brickland1/micro/will-it-scale/getppid1
12671 ~27% +6.9e+15% 8.774e+17 ~20% brickland1/micro/will-it-scale/lock1
13765 ~10% +3.1e+15% 4.23e+17 ~80% brickland1/micro/will-it-scale/lseek1
9585 ~64% +1.4e+16% 1.334e+18 ~81% brickland1/micro/will-it-scale/lseek2
13775 ~43% +1.9e+16% 2.658e+18 ~36% brickland1/micro/will-it-scale/malloc2
8969 ~58% +1e+16% 9.329e+17 ~61% brickland1/micro/will-it-scale/open1
8056 ~30% +1.6e+16% 1.253e+18 ~57% brickland1/micro/will-it-scale/open2
12380 ~45% +8e+15% 9.92e+17 ~44% brickland1/micro/will-it-scale/page_fault1
15214 ~54% +3.9e+15% 5.92e+17 ~53% brickland1/micro/will-it-scale/page_fault2
10910 ~23% +1.1e+16% 1.19e+18 ~85% brickland1/micro/will-it-scale/page_fault3
20099 ~55% +1.9e+15% 3.798e+17 ~66% brickland1/micro/will-it-scale/pipe1
8468 ~54% +4.1e+16% 3.458e+18 ~39% brickland1/micro/will-it-scale/poll1
14578 ~28% +2.4e+16% 3.558e+18 ~ 8% brickland1/micro/will-it-scale/poll2
12628 ~16% +8.1e+15% 1.027e+18 ~50% brickland1/micro/will-it-scale/posix_semaphore1
5493 ~11% +2.5e+16% 1.349e+18 ~103% brickland1/micro/will-it-scale/pread1
12278 ~29% +5.4e+15% 6.626e+17 ~39% brickland1/micro/will-it-scale/pread2
12944 ~19% +6.7e+15% 8.7e+17 ~66% brickland1/micro/will-it-scale/pthread_mutex1
11687 ~66% +9.9e+15% 1.16e+18 ~64% brickland1/micro/will-it-scale/pthread_mutex2
20841 ~16% +9.1e+15% 1.907e+18 ~101% brickland1/micro/will-it-scale/pwrite1
16466 ~56% +8.8e+15% 1.441e+18 ~35% brickland1/micro/will-it-scale/pwrite2
12778 ~42% +2.7e+15% 3.469e+17 ~91% brickland1/micro/will-it-scale/read1
12599 ~34% +1.6e+16% 2.013e+18 ~22% brickland1/micro/will-it-scale/read2
10827 ~35% +1.9e+16% 2.047e+18 ~59% brickland1/micro/will-it-scale/readseek
12148 ~40% +1.9e+16% 2.274e+18 ~41% brickland1/micro/will-it-scale/sched_yield
15135 ~13% +2.4e+15% 3.685e+17 ~69% brickland1/micro/will-it-scale/unix1
10193 ~24% +5.5e+15% 5.606e+17 ~80% brickland1/micro/will-it-scale/unlink1
12863 ~10% +4.8e+15% 6.189e+17 ~29% brickland1/micro/will-it-scale/unlink2
13792 ~66% +1.3e+16% 1.8e+18 ~72% brickland1/micro/will-it-scale/write1
9516 ~64% +2.6e+16% 2.468e+18 ~21% brickland1/micro/will-it-scale/writeseek
10528 ~46% +3.5e+15% 3.672e+17 ~18% brickland2/micro/hackbench/600%-process-pipe
5690 ~31% +1.6e+16% 9.28e+17 ~45% brickland2/micro/hackbench/600%-process-socket
51573 ~27% +9.6e+14% 4.94e+17 ~53% brickland2/micro/hackbench/600%-threads-pipe
95291 ~44% +1.1e+15% 1.062e+18 ~ 6% brickland2/micro/hackbench/600%-threads-socket
51844 ~10% +5.5e+14% 2.86e+17 ~105% brickland2/micro/vm-scalability/16G-msync
13334 ~80% +1.6e+15% 2.094e+17 ~68% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
6719 ~49% +1e+16% 6.792e+17 ~89% brickland2/micro/vm-scalability/16G-shm-pread-rand
9280 ~57% +2e+16% 1.868e+18 ~15% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
13979 ~52% +1.7e+15% 2.309e+17 ~23% brickland2/micro/vm-scalability/16G-shm-xread-rand
17219 ~28% -100.0% 1 ~70% brickland2/micro/vm-scalability/16G-truncate
15478 ~ 6% +2.5e+15% 3.82e+17 ~14% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
9384 ~50% +2.1e+16% 1.927e+18 ~27% brickland2/micro/vm-scalability/1T-shm-pread-seq
4074 ~12% +5.1e+16% 2.073e+18 ~19% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
17303 ~57% +1e+16% 1.774e+18 ~20% brickland2/micro/vm-scalability/1T-shm-xread-seq
7018 ~10% +7.9e+15% 5.548e+17 ~45% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
25135 ~13% +8.2e+15% 2.071e+18 ~79% brickland2/micro/vm-scalability/300s-anon-r-seq
8835 ~36% +1.1e+16% 1.003e+18 ~109% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
4975 ~28% +1.2e+16% 5.832e+17 ~40% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
1.682e+08 ~ 1% +2.7e+11% 4.532e+17 ~ 5% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
1.578e+08 ~ 0% +6e+11% 9.516e+17 ~35% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
16968 ~26% +1.8e+15% 3.027e+17 ~52% brickland2/micro/vm-scalability/300s-lru-file-readonce
10641 ~50% +4e+15% 4.27e+17 ~50% brickland2/micro/vm-scalability/300s-lru-file-readtwice
12265 ~46% +5e+15% 6.188e+17 ~11% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
12728 ~45% +3.1e+15% 3.979e+17 ~35% brickland2/micro/vm-scalability/300s-mmap-pread-rand
21516 ~ 9% +4.4e+15% 9.517e+17 ~ 8% brickland2/micro/vm-scalability/300s-mmap-pread-seq
12009 ~83% +4.6e+15% 5.548e+17 ~45% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
13007 ~51% +1.4e+16% 1.792e+18 ~15% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
4428 ~12% +2e+16% 8.883e+17 ~ 7% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
769 ~21% +1.8e+17% 1.351e+18 ~ 9% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
420 ~ 3% +2.2e+17% 9.427e+17 ~24% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
4840 ~ 9% +1e+15% 4.839e+16 ~92% fat/micro/dd-write/1HDD-cfq-xfs-100dd
1447 ~ 2% +2e+16% 2.953e+17 ~56% fat/micro/dd-write/1HDD-cfq-xfs-10dd
378 ~25% +4.9e+16% 1.871e+17 ~75% fat/micro/dd-write/1HDD-cfq-xfs-1dd
751 ~27% +1.6e+17% 1.202e+18 ~ 3% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
424 ~13% +1.9e+17% 8.096e+17 ~44% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
4650 ~ 8% +1.2e+15% 5.675e+16 ~44% fat/micro/dd-write/1HDD-deadline-ext4-100dd
1179 ~21% +1.5e+16% 1.725e+17 ~116% fat/micro/dd-write/1HDD-deadline-ext4-10dd
327 ~27% +2.9e+16% 9.597e+16 ~86% fat/micro/dd-write/1HDD-deadline-ext4-1dd
4657 ~ 9% +1.6e+15% 7.341e+16 ~67% fat/micro/dd-write/1HDD-deadline-xfs-100dd
908 ~13% +2.9e+16% 2.589e+17 ~31% fat/micro/dd-write/1HDD-deadline-xfs-10dd
406 ~20% +3.5e+16% 1.43e+17 ~141% fat/micro/dd-write/1HDD-deadline-xfs-1dd
222 ~ 2% +7.2e+16% 1.597e+17 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_CRR
215 ~ 4% -99.4% 1 ~141% lkp-a04/micro/netperf/120s-200%-TCP_RR
1547 ~ 2% +3.3e+16% 5.041e+17 ~61% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
1535 ~ 0% +2.3e+16% 3.583e+17 ~48% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
1462 ~ 3% +1.6e+16% 2.332e+17 ~77% lkp-ib03/micro/netperf/120s-200%-TCP_RR
1419 ~17% +2.2e+16% 3.102e+17 ~20% lkp-ib03/micro/netperf/120s-200%-UDP_RR
52605367 ~ 5% +5e+11% 2.654e+17 ~50% lkp-nex04/micro/ebizzy/400%-5-30
1907 ~ 3% +1.2e+16% 2.253e+17 ~87% lkp-nex05/micro/hackbench/800%-process-pipe
1845 ~ 2% +2.4e+16% 4.353e+17 ~24% lkp-nex05/micro/hackbench/800%-process-socket
117908 ~15% +2.3e+14% 2.681e+17 ~21% lkp-nex05/micro/hackbench/800%-threads-pipe
183191 ~82% +2.1e+14% 3.871e+17 ~63% lkp-nex05/micro/hackbench/800%-threads-socket
678123 ~ 2% -100.0% 24 ~141% lkp-nex05/micro/tlbflush/100%-8
259357 ~ 4% +1e+14% 2.723e+17 ~32% lkp-snb01/micro/hackbench/1600%-threads-pipe
381071 ~22% +3.9e+13% 1.497e+17 ~33% lkp-snb01/micro/hackbench/1600%-threads-socket
15987 ~ 0% +3e+15% 4.763e+17 ~20% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
2759 ~ 2% +2.4e+16% 6.527e+17 ~25% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
847 ~ 5% +1.2e+17% 9.831e+17 ~30% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
14573 ~ 2% +1.3e+14% 1.943e+16 ~70% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-100dd
3509 ~ 8% +2e+15% 6.971e+16 ~40% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
783 ~ 1% +1.7e+16% 1.365e+17 ~54% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
15418 ~ 1% +3e+14% 4.676e+16 ~102% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
3521 ~ 8% +3.4e+15% 1.209e+17 ~37% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
750 ~ 0% +3.8e+16% 2.836e+17 ~59% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
15271 ~ 1% +6.1e+13% 9.373e+15 ~141% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-100dd
3663 ~ 3% +2.1e+15% 7.845e+16 ~40% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
811 ~ 4% +6.3e+16% 5.119e+17 ~33% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
15401 ~ 1% +2.3e+14% 3.542e+16 ~72% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-100dd
3601 ~12% +4.1e+15% 1.462e+17 ~51% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
830 ~ 5% +1.3e+16% 1.076e+17 ~53% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
1758 ~ 3% +1.1e+17% 1.901e+18 ~ 9% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
1729 ~ 2% +9.3e+16% 1.609e+18 ~ 3% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
984 ~ 8% +1.3e+07% 1.323e+08 ~39% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
1170 ~21% +1e+07% 1.225e+08 ~12% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
1024 ~14% +7.5e+05% 7730209 ~33% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndwr-sync
1512 ~ 4% +8.8e+14% 1.336e+16 ~141% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
2073 ~ 3% +1.2e+07% 2.403e+08 ~10% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
2213 ~ 3% +1.4e+07% 3.113e+08 ~33% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
805 ~13% +6.6e+15% 5.352e+16 ~92% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrd-sync
1048 ~ 3% +6.6e+15% 6.933e+16 ~40% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
1097 ~ 4% +6e+15% 6.557e+16 ~45% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
1531 ~ 3% +4.7e+15% 7.266e+16 ~19% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
1800 ~ 9% +1e+07% 1.852e+08 ~18% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
1962 ~ 2% +5.2e+14% 1.016e+16 ~141% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
136 ~ 4% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
215 ~ 7% +1e+18% 2.238e+18 ~47% brickland1/micro/will-it-scale/futex1
142 ~ 2% +1.1e+18% 1.55e+18 ~87% brickland1/micro/will-it-scale/futex2
186 ~18% +2.8e+17% 5.303e+17 ~82% brickland1/micro/will-it-scale/getppid1
198 ~16% +3.8e+17% 7.492e+17 ~30% brickland1/micro/will-it-scale/lock1
185 ~ 5% +2.3e+17% 4.223e+17 ~81% brickland1/micro/will-it-scale/lseek1
165 ~ 9% +7.9e+17% 1.307e+18 ~81% brickland1/micro/will-it-scale/lseek2
199 ~ 9% +1.2e+18% 2.462e+18 ~38% brickland1/micro/will-it-scale/malloc2
187 ~11% +5.9e+17% 1.095e+18 ~71% brickland1/micro/will-it-scale/open1
211 ~29% +6e+17% 1.263e+18 ~59% brickland1/micro/will-it-scale/open2
258 ~ 6% +2.8e+17% 7.292e+17 ~39% brickland1/micro/will-it-scale/page_fault1
310 ~18% +1.3e+17% 4.018e+17 ~28% brickland1/micro/will-it-scale/page_fault2
357 ~ 8% +3.3e+17% 1.161e+18 ~88% brickland1/micro/will-it-scale/page_fault3
232 ~31% +1.8e+17% 4.117e+17 ~64% brickland1/micro/will-it-scale/pipe1
250 ~26% +1.3e+18% 3.23e+18 ~33% brickland1/micro/will-it-scale/poll1
208 ~ 8% +1.5e+18% 3.172e+18 ~12% brickland1/micro/will-it-scale/poll2
198 ~13% +5.1e+17% 1.013e+18 ~51% brickland1/micro/will-it-scale/posix_semaphore1
179 ~ 9% +6.2e+17% 1.117e+18 ~112% brickland1/micro/will-it-scale/pread1
714 ~ 4% +1e+17% 7.243e+17 ~36% brickland1/micro/will-it-scale/pread2
259 ~ 8% +2.8e+17% 7.329e+17 ~62% brickland1/micro/will-it-scale/pthread_mutex1
190 ~ 5% +7.6e+17% 1.456e+18 ~36% brickland1/micro/will-it-scale/pthread_mutex2
281 ~41% +6.9e+17% 1.952e+18 ~102% brickland1/micro/will-it-scale/pwrite1
682 ~13% +2e+17% 1.362e+18 ~36% brickland1/micro/will-it-scale/pwrite2
224 ~45% +1.5e+17% 3.452e+17 ~92% brickland1/micro/will-it-scale/read1
279 ~11% +6.6e+17% 1.83e+18 ~14% brickland1/micro/will-it-scale/read2
187 ~ 9% +1.2e+18% 2.203e+18 ~55% brickland1/micro/will-it-scale/readseek
207 ~10% +1.2e+18% 2.535e+18 ~21% brickland1/micro/will-it-scale/sched_yield
198 ~ 8% +2.1e+17% 4.259e+17 ~36% brickland1/micro/will-it-scale/unlink2
219 ~22% +8.3e+17% 1.823e+18 ~76% brickland1/micro/will-it-scale/write1
183 ~23% +1.3e+18% 2.39e+18 ~26% brickland1/micro/will-it-scale/writeseek
256 ~22% +1.3e+17% 3.385e+17 ~21% brickland2/micro/hackbench/600%-process-pipe
237 ~11% +3.8e+17% 8.978e+17 ~36% brickland2/micro/hackbench/600%-process-socket
2000 ~30% +2.4e+16% 4.869e+17 ~42% brickland2/micro/hackbench/600%-threads-pipe
2742 ~10% +3.8e+16% 1.042e+18 ~12% brickland2/micro/hackbench/600%-threads-socket
46754 ~11% +1.1e+15% 5.134e+17 ~51% brickland2/micro/vm-scalability/16G-msync
1296 ~19% +1.8e+16% 2.275e+17 ~48% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
427 ~ 9% +1.5e+17% 6.322e+17 ~89% brickland2/micro/vm-scalability/16G-shm-pread-rand
469 ~11% +4.7e+17% 2.208e+18 ~29% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
429 ~22% +4.3e+16% 1.86e+17 ~19% brickland2/micro/vm-scalability/16G-shm-xread-rand
278 ~32% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
1044 ~12% +3.9e+16% 4.044e+17 ~21% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
1027 ~ 0% +1.9e+17% 1.989e+18 ~23% brickland2/micro/vm-scalability/1T-shm-pread-seq
334 ~25% +6e+17% 2.005e+18 ~10% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
1007 ~10% +1.6e+17% 1.61e+18 ~18% brickland2/micro/vm-scalability/1T-shm-xread-seq
191 ~ 9% +2e+17% 3.891e+17 ~88% brickland2/micro/vm-scalability/300s-anon-r-rand
204 ~10% +2.5e+17% 5.182e+17 ~49% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
263 ~23% +7.8e+17% 2.054e+18 ~88% brickland2/micro/vm-scalability/300s-anon-r-seq
189 ~33% +6.5e+17% 1.227e+18 ~115% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
158 ~38% +3.9e+17% 6.175e+17 ~45% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
1.683e+08 ~ 1% +2.4e+11% 4.035e+17 ~36% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
1.578e+08 ~ 0% +5.5e+11% 8.677e+17 ~34% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
429 ~ 5% +7.3e+16% 3.133e+17 ~39% brickland2/micro/vm-scalability/300s-lru-file-readonce
205 ~22% +2.5e+17% 5.1e+17 ~86% brickland2/micro/vm-scalability/300s-lru-file-readtwice
555 ~ 7% +1.1e+17% 6.182e+17 ~ 6% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
221 ~11% +1.7e+17% 3.722e+17 ~48% brickland2/micro/vm-scalability/300s-mmap-pread-rand
389 ~15% +2.3e+17% 8.909e+17 ~20% brickland2/micro/vm-scalability/300s-mmap-pread-seq
1130 ~ 7% +4.1e+16% 4.646e+17 ~35% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
654 ~ 8% +2.2e+17% 1.436e+18 ~15% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
4330 ~12% +1.1e+15% 4.7e+16 ~87% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
678 ~22% +4e+16% 2.689e+17 ~25% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
320 ~ 7% +3.4e+16% 1.098e+17 ~33% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
4749 ~ 9% +3.8e+14% 1.794e+16 ~122% fat/micro/dd-write/1HDD-cfq-xfs-100dd
1339 ~ 2% +6.1e+15% 8.145e+16 ~86% fat/micro/dd-write/1HDD-cfq-xfs-10dd
273 ~29% +2.4e+16% 6.472e+16 ~115% fat/micro/dd-write/1HDD-cfq-xfs-1dd
646 ~32% +7.6e+15% 4.926e+16 ~52% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
316 ~15% +2.5e+16% 7.789e+16 ~110% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
4548 ~ 8% +3.6e+14% 1.624e+16 ~141% fat/micro/dd-write/1HDD-deadline-ext4-100dd
1070 ~23% +3.8e+15% 4.059e+16 ~141% fat/micro/dd-write/1HDD-deadline-ext4-10dd
221 ~39% +1.1e+16% 2.45e+16 ~81% fat/micro/dd-write/1HDD-deadline-ext4-1dd
4563 ~ 9% +4.7e+13% 2.16e+15 ~140% fat/micro/dd-write/1HDD-deadline-xfs-100dd
811 ~15% +3e+15% 2.447e+16 ~81% fat/micro/dd-write/1HDD-deadline-xfs-10dd
295 ~27% +1.3e+12% 3.881e+12 ~63% fat/micro/dd-write/1HDD-deadline-xfs-1dd
156 ~ 2% +5.1e+16% 8.02e+16 ~99% lkp-a04/micro/netperf/120s-200%-TCP_CRR
148 ~ 3% -100.0% 0 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
3772540 ~ 0% +5.5e+12% 2.085e+17 ~27% lkp-ib03/micro/ebizzy/400%-5-30
221 ~ 5% +2e+17% 4.434e+17 ~92% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
176 ~ 7% +1.7e+17% 2.957e+17 ~87% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
214 ~12% +7e+16% 1.494e+17 ~62% lkp-ib03/micro/netperf/120s-200%-TCP_RR
169 ~ 5% +2.6e+17% 4.341e+17 ~33% lkp-ib03/micro/netperf/120s-200%-UDP_RR
513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
15165 ~ 0% +3.2e+15% 4.86e+17 ~16% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
2396 ~ 2% +2.6e+16% 6.187e+17 ~29% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
473 ~ 8% +1.9e+17% 8.989e+17 ~43% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
14021 ~ 2% +7.8e+13% 1.092e+16 ~141% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-100dd
3150 ~ 9% +4.3e+14% 1.359e+16 ~140% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
418 ~ 0% +2.3e+16% 9.474e+16 ~28% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
14661 ~ 0% +3.6e+14% 5.33e+16 ~97% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
3084 ~10% +4.2e+15% 1.295e+17 ~54% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
361 ~ 3% +6.6e+16% 2.403e+17 ~57% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
14473 ~ 1% +1.6e+13% 2.367e+15 ~140% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-100dd
3296 ~ 3% +1.1e+15% 3.58e+16 ~46% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
400 ~ 4% +5e+16% 2.014e+17 ~69% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
14638 ~ 1% +1.1e+14% 1.654e+16 ~141% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-100dd
3218 ~13% +4.9e+15% 1.592e+17 ~74% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
405 ~ 4% +2.4e+16% 9.656e+16 ~48% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
1686 ~ 3% +3e+16% 5.075e+17 ~32% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
1658 ~ 2% +2.1e+16% 3.512e+17 ~25% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
927 ~10% +5.1e+11% 4.73e+12 ~44% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
1110 ~23% +3.9e+11% 4.386e+12 ~21% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
1450 ~ 4% +7.1e+11% 1.03e+13 ~ 4% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
2003 ~ 3% +4.8e+11% 9.596e+12 ~12% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
2134 ~ 3% +6.2e+11% 1.317e+13 ~31% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
763 ~12% +7.2e+15% 5.504e+16 ~73% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrd-sync
971 ~ 3% +8.3e+15% 8.058e+16 ~45% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
1024 ~ 5% +1e+16% 1.073e+17 ~60% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
1464 ~ 3% +2.5e+15% 3.613e+16 ~24% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
1744 ~10% +4e+11% 6.932e+12 ~24% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
1894 ~ 2% +5.9e+11% 1.111e+13 ~18% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
36971 ~ 0% +1.5e+08% 5.564e+10 ~141% avoton1/crypto/tcrypt/2s-301-319
30210 ~ 0% -89.7% 3108 ~19% avoton1/crypto/tcrypt/2s-505-509
17804 ~ 0% +1e+16% 1.861e+18 ~76% brickland1/micro/will-it-scale/futex1
17813 ~ 0% +8.6e+15% 1.528e+18 ~83% brickland1/micro/will-it-scale/futex2
17880 ~ 0% +3.9e+15% 6.977e+17 ~55% brickland1/micro/will-it-scale/getppid1
17829 ~ 0% +4.7e+15% 8.331e+17 ~33% brickland1/micro/will-it-scale/lock1
17850 ~ 0% +2.3e+15% 4.164e+17 ~82% brickland1/micro/will-it-scale/lseek1
17850 ~ 0% +4.8e+15% 8.603e+17 ~61% brickland1/micro/will-it-scale/lseek2
17846 ~ 0% +1.1e+16% 2.025e+18 ~59% brickland1/micro/will-it-scale/malloc2
18172 ~ 0% -63.6% 6623 ~14% brickland1/micro/will-it-scale/mmap2
17899 ~ 0% +6.1e+15% 1.093e+18 ~69% brickland1/micro/will-it-scale/open1
17837 ~ 0% +7e+15% 1.255e+18 ~57% brickland1/micro/will-it-scale/open2
54199 ~ 0% +1.8e+15% 9.902e+17 ~13% brickland1/micro/will-it-scale/page_fault1
42510 ~ 0% +9.6e+14% 4.069e+17 ~45% brickland1/micro/will-it-scale/page_fault2
170171 ~ 0% +8.2e+14% 1.399e+18 ~61% brickland1/micro/will-it-scale/page_fault3
17855 ~ 0% +1e+15% 1.87e+17 ~49% brickland1/micro/will-it-scale/pipe1
17873 ~ 0% +1.8e+16% 3.161e+18 ~37% brickland1/micro/will-it-scale/poll1
17843 ~ 0% +1.9e+16% 3.335e+18 ~ 9% brickland1/micro/will-it-scale/poll2
17872 ~ 0% +5.7e+15% 1.024e+18 ~50% brickland1/micro/will-it-scale/posix_semaphore1
17827 ~ 0% +5.2e+15% 9.269e+17 ~107% brickland1/micro/will-it-scale/pread1
17982 ~ 0% +4e+15% 7.161e+17 ~42% brickland1/micro/will-it-scale/pread2
17865 ~ 0% +3.9e+15% 6.932e+17 ~48% brickland1/micro/will-it-scale/pthread_mutex1
17818 ~ 0% +6.2e+15% 1.109e+18 ~55% brickland1/micro/will-it-scale/pthread_mutex2
17819 ~ 0% +8.9e+15% 1.592e+18 ~93% brickland1/micro/will-it-scale/pwrite1
18000 ~ 0% +7.3e+15% 1.32e+18 ~39% brickland1/micro/will-it-scale/pwrite2
17874 ~ 0% +1.9e+15% 3.418e+17 ~94% brickland1/micro/will-it-scale/read1
17988 ~ 0% +1.1e+16% 1.964e+18 ~20% brickland1/micro/will-it-scale/read2
17897 ~ 0% +1.2e+16% 2.063e+18 ~53% brickland1/micro/will-it-scale/readseek
17978 ~ 0% +1.3e+16% 2.259e+18 ~41% brickland1/micro/will-it-scale/sched_yield
17855 ~ 0% +3.1e+15% 5.594e+17 ~40% brickland1/micro/will-it-scale/unlink2
17841 ~ 0% +1.1e+16% 1.942e+18 ~59% brickland1/micro/will-it-scale/write1
17840 ~ 0% +1.4e+16% 2.555e+18 ~15% brickland1/micro/will-it-scale/writeseek
27664 ~ 2% +1.1e+15% 3.078e+17 ~15% brickland2/micro/hackbench/600%-process-pipe
15925 ~ 5% +5.6e+15% 8.867e+17 ~24% brickland2/micro/hackbench/600%-process-socket
28749 ~ 2% +1.6e+15% 4.511e+17 ~47% brickland2/micro/hackbench/600%-threads-pipe
16005 ~ 9% +6.6e+15% 1.061e+18 ~10% brickland2/micro/hackbench/600%-threads-socket
25886 ~ 2% +8.7e+14% 2.26e+17 ~35% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
25203 ~ 0% +2.5e+15% 6.257e+17 ~95% brickland2/micro/vm-scalability/16G-shm-pread-rand
19097 ~ 0% +1e+16% 1.974e+18 ~16% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
25288 ~ 0% +7.2e+14% 1.812e+17 ~48% brickland2/micro/vm-scalability/16G-shm-xread-rand
10671 ~ 0% -71.1% 3086 ~15% brickland2/micro/vm-scalability/16G-truncate
19001 ~ 0% +2.3e+15% 4.431e+17 ~ 9% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
19721 ~ 0% +9.2e+15% 1.823e+18 ~24% brickland2/micro/vm-scalability/1T-shm-pread-seq
17867 ~ 0% +1.2e+16% 2.118e+18 ~ 9% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
19893 ~ 0% +9e+15% 1.788e+18 ~22% brickland2/micro/vm-scalability/1T-shm-xread-seq
16433 ~ 2% +3.2e+15% 5.303e+17 ~45% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
8837 ~ 0% +2.3e+16% 1.99e+18 ~94% brickland2/micro/vm-scalability/300s-anon-r-seq
16862 ~ 0% +7e+15% 1.176e+18 ~114% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
16808 ~ 0% +4.6e+15% 7.766e+17 ~33% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
20507 ~ 0% +1.7e+15% 3.41e+17 ~31% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
18674 ~ 0% +5.1e+15% 9.583e+17 ~31% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
18832 ~ 0% +1.8e+15% 3.443e+17 ~28% brickland2/micro/vm-scalability/300s-lru-file-readonce
17489 ~ 0% +2.4e+15% 4.206e+17 ~76% brickland2/micro/vm-scalability/300s-lru-file-readtwice
18790 ~ 2% +2.7e+15% 5.119e+17 ~ 5% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
20337 ~ 0% +2e+15% 4.009e+17 ~46% brickland2/micro/vm-scalability/300s-mmap-pread-rand
14994 ~ 0% +5.5e+15% 8.186e+17 ~20% brickland2/micro/vm-scalability/300s-mmap-pread-seq
17830 ~ 0% +2.6e+15% 4.586e+17 ~43% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
15556 ~ 2% +1.1e+16% 1.649e+18 ~ 7% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
15258 ~ 0% +4.6e+14% 6.963e+16 ~49% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
14293 ~ 0% +2.2e+15% 3.199e+17 ~17% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
14104 ~ 0% +6.2e+14% 8.718e+16 ~31% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
15176 ~ 0% +1.2e+14% 1.872e+16 ~113% fat/micro/dd-write/1HDD-cfq-xfs-100dd
14257 ~ 0% +5.7e+14% 8.144e+16 ~86% fat/micro/dd-write/1HDD-cfq-xfs-10dd
14065 ~ 0% +4.6e+14% 6.471e+16 ~115% fat/micro/dd-write/1HDD-cfq-xfs-1dd
14296 ~ 0% +3.3e+14% 4.72e+16 ~20% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
14163 ~ 0% +6.9e+14% 9.719e+16 ~79% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
15217 ~ 0% +1.1e+14% 1.623e+16 ~141% fat/micro/dd-write/1HDD-deadline-ext4-100dd
14180 ~ 0% +1.7e+14% 2.446e+16 ~81% fat/micro/dd-write/1HDD-deadline-xfs-10dd
10634 ~ 0% -43.9% 5971 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
3781807 ~ 0% +6.7e+12% 2.543e+17 ~42% lkp-ib03/micro/ebizzy/400%-5-30
9234 ~ 0% +2.7e+15% 2.489e+17 ~74% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
9079 ~ 0% +3e+15% 2.682e+17 ~103% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
9016 ~ 0% +3.1e+15% 2.775e+17 ~69% lkp-ib03/micro/netperf/120s-200%-TCP_RR
9099 ~ 0% +4.2e+15% 3.854e+17 ~25% lkp-ib03/micro/netperf/120s-200%-UDP_RR
22724 ~ 0% +1.1e+15% 2.508e+17 ~77% lkp-nex05/micro/hackbench/800%-process-pipe
15900 ~ 2% +2.8e+15% 4.396e+17 ~29% lkp-nex05/micro/hackbench/800%-process-socket
23757 ~ 2% +1.2e+15% 2.94e+17 ~18% lkp-nex05/micro/hackbench/800%-threads-pipe
14867 ~ 0% +2.6e+15% 3.863e+17 ~65% lkp-nex05/micro/hackbench/800%-threads-socket
5515 ~ 0% -42.3% 3184 ~42% lkp-nex05/micro/tlbflush/100%-8
18295 ~ 3% +1.3e+15% 2.39e+17 ~28% lkp-snb01/micro/hackbench/1600%-threads-pipe
9304 ~ 1% +1.6e+15% 1.483e+17 ~50% lkp-snb01/micro/hackbench/1600%-threads-socket
34259 ~ 0% +1.8e+15% 6.324e+17 ~39% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
24088 ~ 0% +2.8e+15% 6.708e+17 ~26% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
22923 ~ 0% +4.7e+15% 1.076e+18 ~27% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
23949 ~ 0% +3.6e+14% 8.725e+16 ~ 4% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
22852 ~ 0% +6.2e+14% 1.418e+17 ~54% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
33664 ~ 0% +1.3e+14% 4.488e+16 ~101% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
23679 ~ 0% +7.3e+14% 1.734e+17 ~72% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
22691 ~ 0% +1.2e+15% 2.759e+17 ~58% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
23989 ~ 0% +4.3e+14% 1.021e+17 ~22% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
22874 ~ 0% +2e+15% 4.529e+17 ~69% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
23682 ~ 0% +6.8e+14% 1.6e+17 ~56% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
22652 ~ 0% +4.3e+14% 9.848e+16 ~49% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
20029 ~ 0% +2.3e+15% 4.684e+17 ~41% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
20044 ~ 0% +1.5e+15% 2.936e+17 ~26% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
28205 ~ 1% -78.1% 6186 ~ 6% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
27802 ~ 1% -78.5% 5968 ~ 4% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
20016 ~ 0% -74.2% 5167 ~ 0% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndwr-sync
27596 ~ 0% -79.0% 5801 ~ 1% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
20198 ~ 1% -63.7% 7336 ~ 1% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
20032 ~ 0% -60.1% 7997 ~ 9% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
25640 ~ 0% +1.9e+14% 4.937e+16 ~51% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
20047 ~ 0% +9e+14% 1.798e+17 ~17% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
25624 ~ 0% +6.3e+13% 1.607e+16 ~53% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
20246 ~ 1% -66.7% 6734 ~ 7% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
20025 ~ 0% -63.1% 7395 ~ 5% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
Thanks,
Fengguang
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-18 7:28 ` Fengguang Wu
0 siblings, 0 replies; 71+ messages in thread
From: Fengguang Wu @ 2013-12-18 7:28 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
Hi Mel,
I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
Basically there are
1) no big performance changes
76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
407038 +1.2% 412032 TOTAL hackbench.throughput
50307 -1.5% 49549 TOTAL ebizzy.throughput
2) huge proc-vmstat.nr_tlb_* increases
99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
Here are the detailed numbers. eabb1f89905a0c809d13 is the HEAD commit
with 4 patches applied. The "~ N%" notations are the stddev percent.
The "[+-] N%" notations are the increase/decrease percent. The
brickland2, lkp-snb01, lkp-ib03 etc. are testbox names.
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
3345155 ~ 0% -0.3% 3335172 ~ 0% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
33249939 ~ 0% +3.3% 34336155 ~ 1% brickland2/micro/vm-scalability/1T-shm-pread-seq
4669392 ~ 0% -0.2% 4660378 ~ 0% brickland2/micro/vm-scalability/300s-anon-r-rand
18822426 ~ 5% -10.2% 16911111 ~ 0% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
4993937 ~ 1% +4.6% 5221846 ~ 2% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
4010960 ~ 0% +0.4% 4025880 ~ 0% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
7536676 ~ 0% +1.1% 7617297 ~ 0% brickland2/micro/vm-scalability/300s-lru-file-readtwice
76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
88901 ~ 2% -3.1% 86131 ~ 0% brickland2/micro/hackbench/600%-process-pipe
153250 ~ 2% +3.1% 157931 ~ 1% brickland2/micro/hackbench/600%-process-socket
164886 ~ 1% +1.9% 167969 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
407038 +1.2% 412032 TOTAL hackbench.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
50307 ~ 1% -1.5% 49549 ~ 0% lkp-ib03/micro/ebizzy/400%-5-30
50307 -1.5% 49549 TOTAL ebizzy.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
270328 ~ 0% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
512691 ~ 0% +4.7e+14% 2.412e+18 ~51% brickland1/micro/will-it-scale/futex1
510718 ~ 1% +2.8e+14% 1.408e+18 ~83% brickland1/micro/will-it-scale/futex2
514847 ~ 0% +1.5e+14% 7.66e+17 ~44% brickland1/micro/will-it-scale/getppid1
512854 ~ 0% +1.4e+14% 7.159e+17 ~34% brickland1/micro/will-it-scale/lock1
516614 ~ 0% +8.1e+13% 4.189e+17 ~82% brickland1/micro/will-it-scale/lseek1
514457 ~ 1% +2.2e+14% 1.12e+18 ~71% brickland1/micro/will-it-scale/lseek2
533138 ~ 0% +4.8e+14% 2.561e+18 ~33% brickland1/micro/will-it-scale/malloc2
518503 ~ 0% +2.7e+14% 1.414e+18 ~74% brickland1/micro/will-it-scale/open1
512378 ~ 0% +2.4e+14% 1.232e+18 ~56% brickland1/micro/will-it-scale/open2
515078 ~ 0% +1.8e+14% 9.444e+17 ~23% brickland1/micro/will-it-scale/page_fault1
511034 ~ 0% +1.1e+14% 5.572e+17 ~43% brickland1/micro/will-it-scale/page_fault2
516217 ~ 0% +2.8e+14% 1.457e+18 ~57% brickland1/micro/will-it-scale/page_fault3
513735 ~ 0% +4.5e+13% 2.32e+17 ~75% brickland1/micro/will-it-scale/pipe1
513640 ~ 1% +7.3e+14% 3.766e+18 ~31% brickland1/micro/will-it-scale/poll1
515473 ~ 0% +6.1e+14% 3.138e+18 ~24% brickland1/micro/will-it-scale/poll2
517039 ~ 0% +2e+14% 1.032e+18 ~48% brickland1/micro/will-it-scale/posix_semaphore1
513686 ~ 0% +2e+14% 1.045e+18 ~107% brickland1/micro/will-it-scale/pread1
517218 ~ 1% +1.7e+14% 8.752e+17 ~57% brickland1/micro/will-it-scale/pread2
514904 ~ 0% +1.2e+14% 6.399e+17 ~46% brickland1/micro/will-it-scale/pthread_mutex1
512881 ~ 0% +2.6e+14% 1.314e+18 ~47% brickland1/micro/will-it-scale/pthread_mutex2
512844 ~ 0% +3.1e+14% 1.57e+18 ~91% brickland1/micro/will-it-scale/pwrite1
516859 ~ 0% +2.9e+14% 1.512e+18 ~37% brickland1/micro/will-it-scale/pwrite2
513227 ~ 0% +6.9e+13% 3.518e+17 ~90% brickland1/micro/will-it-scale/read1
518291 ~ 0% +3.6e+14% 1.875e+18 ~18% brickland1/micro/will-it-scale/read2
517795 ~ 0% +4.5e+14% 2.306e+18 ~53% brickland1/micro/will-it-scale/readseek
521558 ~ 0% +4.3e+14% 2.252e+18 ~41% brickland1/micro/will-it-scale/sched_yield
518017 ~ 1% +1.5e+14% 7.85e+17 ~42% brickland1/micro/will-it-scale/unlink2
514742 ~ 0% +4e+14% 2.046e+18 ~53% brickland1/micro/will-it-scale/write1
512803 ~ 0% +4.8e+14% 2.443e+18 ~22% brickland1/micro/will-it-scale/writeseek
1777511 ~ 0% +1.9e+13% 3.363e+17 ~33% brickland2/micro/hackbench/600%-process-pipe
2132721 ~ 6% +5.5e+13% 1.172e+18 ~24% brickland2/micro/hackbench/600%-process-socket
886153 ~ 1% +6.1e+13% 5.427e+17 ~38% brickland2/micro/hackbench/600%-threads-pipe
627654 ~ 2% +2.3e+14% 1.452e+18 ~ 8% brickland2/micro/hackbench/600%-threads-socket
5022448 ~ 7% +9.8e+12% 4.911e+17 ~70% brickland2/micro/vm-scalability/16G-msync
655929 ~ 2% +3.3e+13% 2.161e+17 ~43% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
645229 ~ 1% +1e+14% 6.675e+17 ~92% brickland2/micro/vm-scalability/16G-shm-pread-rand
511508 ~ 1% +4e+14% 2.054e+18 ~29% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
649861 ~ 0% +3.7e+13% 2.395e+17 ~62% brickland2/micro/vm-scalability/16G-shm-xread-rand
324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
511881 ~ 0% +9.4e+13% 4.792e+17 ~ 5% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
523080 ~ 0% +4e+14% 2.087e+18 ~17% brickland2/micro/vm-scalability/1T-shm-pread-seq
483125 ~ 1% +4.6e+14% 2.23e+18 ~13% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
527818 ~ 0% +3.6e+14% 1.898e+18 ~19% brickland2/micro/vm-scalability/1T-shm-xread-seq
449900 ~ 1% +2.1e+14% 9.422e+17 ~60% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
286569 ~ 0% +7.3e+14% 2.103e+18 ~83% brickland2/micro/vm-scalability/300s-anon-r-seq
458987 ~ 0% +5.7e+13% 2.601e+17 ~35% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
459891 ~ 1% +1.8e+14% 8.497e+17 ~33% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
1918575 ~ 0% +2.5e+13% 4.831e+17 ~17% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
1691758 ~ 0% +6.3e+13% 1.06e+18 ~30% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
500601 ~ 0% +7.3e+13% 3.678e+17 ~31% brickland2/micro/vm-scalability/300s-lru-file-readonce
471815 ~ 1% +9.5e+13% 4.485e+17 ~74% brickland2/micro/vm-scalability/300s-lru-file-readtwice
499281 ~ 1% +1.3e+14% 6.267e+17 ~10% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
541137 ~ 0% +7.4e+13% 4.026e+17 ~50% brickland2/micro/vm-scalability/300s-mmap-pread-rand
422058 ~ 1% +2.4e+14% 9.997e+17 ~16% brickland2/micro/vm-scalability/300s-mmap-pread-seq
486583 ~ 2% +1.3e+14% 6.117e+17 ~37% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
429204 ~ 2% +4.2e+14% 1.792e+18 ~ 6% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
358178 ~ 0% +4.4e+14% 1.58e+18 ~ 9% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
335104 ~ 0% +5.5e+14% 1.848e+18 ~16% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
331175 ~ 0% +4.4e+14% 1.471e+18 ~44% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
356821 ~ 0% +2.4e+14% 8.612e+17 ~63% fat/micro/dd-write/1HDD-cfq-xfs-100dd
336606 ~ 0% +2e+14% 6.822e+17 ~73% fat/micro/dd-write/1HDD-cfq-xfs-10dd
329511 ~ 0% +2.9e+14% 9.518e+17 ~63% fat/micro/dd-write/1HDD-cfq-xfs-1dd
335872 ~ 0% +4.6e+14% 1.55e+18 ~ 2% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
332429 ~ 0% +3.2e+14% 1.051e+18 ~61% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
359230 ~ 0% +1.8e+14% 6.545e+17 ~50% fat/micro/dd-write/1HDD-deadline-ext4-100dd
335957 ~ 0% +2.9e+14% 9.75e+17 ~25% fat/micro/dd-write/1HDD-deadline-ext4-10dd
333178 ~ 0% +1.1e+14% 3.511e+17 ~65% fat/micro/dd-write/1HDD-deadline-ext4-1dd
357406 ~ 0% +7.1e+14% 2.55e+18 ~22% fat/micro/dd-write/1HDD-deadline-xfs-100dd
332342 ~ 0% +4e+14% 1.319e+18 ~11% fat/micro/dd-write/1HDD-deadline-xfs-10dd
331823 ~ 0% +2.2e+14% 7.247e+17 ~58% fat/micro/dd-write/1HDD-deadline-xfs-1dd
103797 ~ 0% -100.0% 1 ~141% lkp-a04/micro/netperf/120s-200%-TCP_RR
29352723 ~ 0% +1.8e+12% 5.199e+17 ~68% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
253764 ~ 0% +1.5e+14% 3.723e+17 ~41% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
251460 ~ 1% +1.2e+14% 3.09e+17 ~66% lkp-ib03/micro/netperf/120s-200%-TCP_RR
252357 ~ 1% +1.8e+14% 4.643e+17 ~42% lkp-ib03/micro/netperf/120s-200%-UDP_RR
2802319 ~ 3% +8.8e+12% 2.476e+17 ~83% lkp-nex05/micro/hackbench/800%-process-pipe
2344699 ~ 0% +3.1e+13% 7.351e+17 ~24% lkp-nex05/micro/hackbench/800%-process-socket
944933 ~ 2% +4.3e+13% 4.06e+17 ~ 7% lkp-nex05/micro/hackbench/800%-threads-pipe
763122 ~ 0% +5.6e+13% 4.296e+17 ~61% lkp-nex05/micro/hackbench/800%-threads-socket
265113 ~ 0% -100.0% 0 lkp-nex05/micro/tlbflush/100%-8
1375290 ~ 3% +2.4e+13% 3.263e+17 ~51% lkp-snb01/micro/hackbench/1600%-threads-pipe
1141467 ~ 1% +1.7e+13% 1.977e+17 ~40% lkp-snb01/micro/hackbench/1600%-threads-socket
789789 ~ 0% +1.7e+15% 1.37e+19 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
559134 ~ 0% +2.2e+15% 1.211e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
533188 ~ 0% +2.1e+15% 1.105e+19 ~ 5% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
794948 ~ 0% +1.9e+15% 1.518e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-100dd
555237 ~ 0% +2.4e+15% 1.35e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
531695 ~ 0% +1.5e+15% 8.153e+18 ~11% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
778886 ~ 0% +1.9e+15% 1.517e+19 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
549300 ~ 0% +2.3e+15% 1.283e+19 ~ 0% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
527275 ~ 0% +1.2e+15% 6.59e+18 ~12% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
794872 ~ 0% +1.9e+15% 1.506e+19 ~ 0% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-100dd
553822 ~ 0% +2.4e+15% 1.306e+19 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
529079 ~ 0% +1.5e+15% 7.958e+18 ~ 2% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
776427 ~ 0% +2e+15% 1.552e+19 ~ 1% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-100dd
546912 ~ 0% +2.3e+15% 1.263e+19 ~ 3% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
523882 ~ 0% +1.3e+15% 6.782e+18 ~ 7% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
466018 ~ 0% +7.2e+14% 3.362e+18 ~ 4% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
465694 ~ 0% +7.5e+14% 3.494e+18 ~20% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
636199 ~ 1% +1.4e+14% 8.6e+17 ~38% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
628230 ~ 1% +1.3e+14% 7.951e+17 ~14% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
624286 ~ 0% +9.9e+14% 6.187e+18 ~ 2% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
470666 ~ 1% +3.7e+14% 1.748e+18 ~ 5% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
465417 ~ 0% +5.1e+14% 2.354e+18 ~32% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
581600 ~ 0% +1.4e+14% 8.304e+17 ~15% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrd-sync
581818 ~ 0% +1.9e+14% 1.097e+18 ~57% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
467899 ~ 0% +2.3e+13% 1.061e+17 ~22% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
582271 ~ 0% +1.2e+15% 7.192e+18 ~ 5% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
471064 ~ 1% +2.8e+14% 1.305e+18 ~18% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
464862 ~ 0% +5.6e+14% 2.612e+18 ~13% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
393 ~ 1% -85.7% 56 ~28% avoton1/crypto/tcrypt/2s-505-509
15803 ~11% +1.2e+16% 1.965e+18 ~65% brickland1/micro/will-it-scale/futex1
4913 ~12% +3.2e+16% 1.554e+18 ~84% brickland1/micro/will-it-scale/futex2
12852 ~85% +3.4e+15% 4.376e+17 ~45% brickland1/micro/will-it-scale/futex4
14179 ~47% +6.3e+15% 8.988e+17 ~47% brickland1/micro/will-it-scale/getppid1
12671 ~27% +6.9e+15% 8.774e+17 ~20% brickland1/micro/will-it-scale/lock1
13765 ~10% +3.1e+15% 4.23e+17 ~80% brickland1/micro/will-it-scale/lseek1
9585 ~64% +1.4e+16% 1.334e+18 ~81% brickland1/micro/will-it-scale/lseek2
13775 ~43% +1.9e+16% 2.658e+18 ~36% brickland1/micro/will-it-scale/malloc2
8969 ~58% +1e+16% 9.329e+17 ~61% brickland1/micro/will-it-scale/open1
8056 ~30% +1.6e+16% 1.253e+18 ~57% brickland1/micro/will-it-scale/open2
12380 ~45% +8e+15% 9.92e+17 ~44% brickland1/micro/will-it-scale/page_fault1
15214 ~54% +3.9e+15% 5.92e+17 ~53% brickland1/micro/will-it-scale/page_fault2
10910 ~23% +1.1e+16% 1.19e+18 ~85% brickland1/micro/will-it-scale/page_fault3
20099 ~55% +1.9e+15% 3.798e+17 ~66% brickland1/micro/will-it-scale/pipe1
8468 ~54% +4.1e+16% 3.458e+18 ~39% brickland1/micro/will-it-scale/poll1
14578 ~28% +2.4e+16% 3.558e+18 ~ 8% brickland1/micro/will-it-scale/poll2
12628 ~16% +8.1e+15% 1.027e+18 ~50% brickland1/micro/will-it-scale/posix_semaphore1
5493 ~11% +2.5e+16% 1.349e+18 ~103% brickland1/micro/will-it-scale/pread1
12278 ~29% +5.4e+15% 6.626e+17 ~39% brickland1/micro/will-it-scale/pread2
12944 ~19% +6.7e+15% 8.7e+17 ~66% brickland1/micro/will-it-scale/pthread_mutex1
11687 ~66% +9.9e+15% 1.16e+18 ~64% brickland1/micro/will-it-scale/pthread_mutex2
20841 ~16% +9.1e+15% 1.907e+18 ~101% brickland1/micro/will-it-scale/pwrite1
16466 ~56% +8.8e+15% 1.441e+18 ~35% brickland1/micro/will-it-scale/pwrite2
12778 ~42% +2.7e+15% 3.469e+17 ~91% brickland1/micro/will-it-scale/read1
12599 ~34% +1.6e+16% 2.013e+18 ~22% brickland1/micro/will-it-scale/read2
10827 ~35% +1.9e+16% 2.047e+18 ~59% brickland1/micro/will-it-scale/readseek
12148 ~40% +1.9e+16% 2.274e+18 ~41% brickland1/micro/will-it-scale/sched_yield
15135 ~13% +2.4e+15% 3.685e+17 ~69% brickland1/micro/will-it-scale/unix1
10193 ~24% +5.5e+15% 5.606e+17 ~80% brickland1/micro/will-it-scale/unlink1
12863 ~10% +4.8e+15% 6.189e+17 ~29% brickland1/micro/will-it-scale/unlink2
13792 ~66% +1.3e+16% 1.8e+18 ~72% brickland1/micro/will-it-scale/write1
9516 ~64% +2.6e+16% 2.468e+18 ~21% brickland1/micro/will-it-scale/writeseek
10528 ~46% +3.5e+15% 3.672e+17 ~18% brickland2/micro/hackbench/600%-process-pipe
5690 ~31% +1.6e+16% 9.28e+17 ~45% brickland2/micro/hackbench/600%-process-socket
51573 ~27% +9.6e+14% 4.94e+17 ~53% brickland2/micro/hackbench/600%-threads-pipe
95291 ~44% +1.1e+15% 1.062e+18 ~ 6% brickland2/micro/hackbench/600%-threads-socket
51844 ~10% +5.5e+14% 2.86e+17 ~105% brickland2/micro/vm-scalability/16G-msync
13334 ~80% +1.6e+15% 2.094e+17 ~68% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
6719 ~49% +1e+16% 6.792e+17 ~89% brickland2/micro/vm-scalability/16G-shm-pread-rand
9280 ~57% +2e+16% 1.868e+18 ~15% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
13979 ~52% +1.7e+15% 2.309e+17 ~23% brickland2/micro/vm-scalability/16G-shm-xread-rand
17219 ~28% -100.0% 1 ~70% brickland2/micro/vm-scalability/16G-truncate
15478 ~ 6% +2.5e+15% 3.82e+17 ~14% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
9384 ~50% +2.1e+16% 1.927e+18 ~27% brickland2/micro/vm-scalability/1T-shm-pread-seq
4074 ~12% +5.1e+16% 2.073e+18 ~19% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
17303 ~57% +1e+16% 1.774e+18 ~20% brickland2/micro/vm-scalability/1T-shm-xread-seq
7018 ~10% +7.9e+15% 5.548e+17 ~45% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
25135 ~13% +8.2e+15% 2.071e+18 ~79% brickland2/micro/vm-scalability/300s-anon-r-seq
8835 ~36% +1.1e+16% 1.003e+18 ~109% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
4975 ~28% +1.2e+16% 5.832e+17 ~40% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
1.682e+08 ~ 1% +2.7e+11% 4.532e+17 ~ 5% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
1.578e+08 ~ 0% +6e+11% 9.516e+17 ~35% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
16968 ~26% +1.8e+15% 3.027e+17 ~52% brickland2/micro/vm-scalability/300s-lru-file-readonce
10641 ~50% +4e+15% 4.27e+17 ~50% brickland2/micro/vm-scalability/300s-lru-file-readtwice
12265 ~46% +5e+15% 6.188e+17 ~11% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
12728 ~45% +3.1e+15% 3.979e+17 ~35% brickland2/micro/vm-scalability/300s-mmap-pread-rand
21516 ~ 9% +4.4e+15% 9.517e+17 ~ 8% brickland2/micro/vm-scalability/300s-mmap-pread-seq
12009 ~83% +4.6e+15% 5.548e+17 ~45% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
13007 ~51% +1.4e+16% 1.792e+18 ~15% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
4428 ~12% +2e+16% 8.883e+17 ~ 7% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
769 ~21% +1.8e+17% 1.351e+18 ~ 9% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
420 ~ 3% +2.2e+17% 9.427e+17 ~24% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
4840 ~ 9% +1e+15% 4.839e+16 ~92% fat/micro/dd-write/1HDD-cfq-xfs-100dd
1447 ~ 2% +2e+16% 2.953e+17 ~56% fat/micro/dd-write/1HDD-cfq-xfs-10dd
378 ~25% +4.9e+16% 1.871e+17 ~75% fat/micro/dd-write/1HDD-cfq-xfs-1dd
751 ~27% +1.6e+17% 1.202e+18 ~ 3% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
424 ~13% +1.9e+17% 8.096e+17 ~44% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
4650 ~ 8% +1.2e+15% 5.675e+16 ~44% fat/micro/dd-write/1HDD-deadline-ext4-100dd
1179 ~21% +1.5e+16% 1.725e+17 ~116% fat/micro/dd-write/1HDD-deadline-ext4-10dd
327 ~27% +2.9e+16% 9.597e+16 ~86% fat/micro/dd-write/1HDD-deadline-ext4-1dd
4657 ~ 9% +1.6e+15% 7.341e+16 ~67% fat/micro/dd-write/1HDD-deadline-xfs-100dd
908 ~13% +2.9e+16% 2.589e+17 ~31% fat/micro/dd-write/1HDD-deadline-xfs-10dd
406 ~20% +3.5e+16% 1.43e+17 ~141% fat/micro/dd-write/1HDD-deadline-xfs-1dd
222 ~ 2% +7.2e+16% 1.597e+17 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_CRR
215 ~ 4% -99.4% 1 ~141% lkp-a04/micro/netperf/120s-200%-TCP_RR
1547 ~ 2% +3.3e+16% 5.041e+17 ~61% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
1535 ~ 0% +2.3e+16% 3.583e+17 ~48% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
1462 ~ 3% +1.6e+16% 2.332e+17 ~77% lkp-ib03/micro/netperf/120s-200%-TCP_RR
1419 ~17% +2.2e+16% 3.102e+17 ~20% lkp-ib03/micro/netperf/120s-200%-UDP_RR
52605367 ~ 5% +5e+11% 2.654e+17 ~50% lkp-nex04/micro/ebizzy/400%-5-30
1907 ~ 3% +1.2e+16% 2.253e+17 ~87% lkp-nex05/micro/hackbench/800%-process-pipe
1845 ~ 2% +2.4e+16% 4.353e+17 ~24% lkp-nex05/micro/hackbench/800%-process-socket
117908 ~15% +2.3e+14% 2.681e+17 ~21% lkp-nex05/micro/hackbench/800%-threads-pipe
183191 ~82% +2.1e+14% 3.871e+17 ~63% lkp-nex05/micro/hackbench/800%-threads-socket
678123 ~ 2% -100.0% 24 ~141% lkp-nex05/micro/tlbflush/100%-8
259357 ~ 4% +1e+14% 2.723e+17 ~32% lkp-snb01/micro/hackbench/1600%-threads-pipe
381071 ~22% +3.9e+13% 1.497e+17 ~33% lkp-snb01/micro/hackbench/1600%-threads-socket
15987 ~ 0% +3e+15% 4.763e+17 ~20% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
2759 ~ 2% +2.4e+16% 6.527e+17 ~25% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
847 ~ 5% +1.2e+17% 9.831e+17 ~30% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
14573 ~ 2% +1.3e+14% 1.943e+16 ~70% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-100dd
3509 ~ 8% +2e+15% 6.971e+16 ~40% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
783 ~ 1% +1.7e+16% 1.365e+17 ~54% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
15418 ~ 1% +3e+14% 4.676e+16 ~102% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
3521 ~ 8% +3.4e+15% 1.209e+17 ~37% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
750 ~ 0% +3.8e+16% 2.836e+17 ~59% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
15271 ~ 1% +6.1e+13% 9.373e+15 ~141% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-100dd
3663 ~ 3% +2.1e+15% 7.845e+16 ~40% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
811 ~ 4% +6.3e+16% 5.119e+17 ~33% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
15401 ~ 1% +2.3e+14% 3.542e+16 ~72% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-100dd
3601 ~12% +4.1e+15% 1.462e+17 ~51% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
830 ~ 5% +1.3e+16% 1.076e+17 ~53% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
1758 ~ 3% +1.1e+17% 1.901e+18 ~ 9% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
1729 ~ 2% +9.3e+16% 1.609e+18 ~ 3% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
984 ~ 8% +1.3e+07% 1.323e+08 ~39% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
1170 ~21% +1e+07% 1.225e+08 ~12% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
1024 ~14% +7.5e+05% 7730209 ~33% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndwr-sync
1512 ~ 4% +8.8e+14% 1.336e+16 ~141% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
2073 ~ 3% +1.2e+07% 2.403e+08 ~10% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
2213 ~ 3% +1.4e+07% 3.113e+08 ~33% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
805 ~13% +6.6e+15% 5.352e+16 ~92% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrd-sync
1048 ~ 3% +6.6e+15% 6.933e+16 ~40% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
1097 ~ 4% +6e+15% 6.557e+16 ~45% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
1531 ~ 3% +4.7e+15% 7.266e+16 ~19% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
1800 ~ 9% +1e+07% 1.852e+08 ~18% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
1962 ~ 2% +5.2e+14% 1.016e+16 ~141% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
136 ~ 4% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
215 ~ 7% +1e+18% 2.238e+18 ~47% brickland1/micro/will-it-scale/futex1
142 ~ 2% +1.1e+18% 1.55e+18 ~87% brickland1/micro/will-it-scale/futex2
186 ~18% +2.8e+17% 5.303e+17 ~82% brickland1/micro/will-it-scale/getppid1
198 ~16% +3.8e+17% 7.492e+17 ~30% brickland1/micro/will-it-scale/lock1
185 ~ 5% +2.3e+17% 4.223e+17 ~81% brickland1/micro/will-it-scale/lseek1
165 ~ 9% +7.9e+17% 1.307e+18 ~81% brickland1/micro/will-it-scale/lseek2
199 ~ 9% +1.2e+18% 2.462e+18 ~38% brickland1/micro/will-it-scale/malloc2
187 ~11% +5.9e+17% 1.095e+18 ~71% brickland1/micro/will-it-scale/open1
211 ~29% +6e+17% 1.263e+18 ~59% brickland1/micro/will-it-scale/open2
258 ~ 6% +2.8e+17% 7.292e+17 ~39% brickland1/micro/will-it-scale/page_fault1
310 ~18% +1.3e+17% 4.018e+17 ~28% brickland1/micro/will-it-scale/page_fault2
357 ~ 8% +3.3e+17% 1.161e+18 ~88% brickland1/micro/will-it-scale/page_fault3
232 ~31% +1.8e+17% 4.117e+17 ~64% brickland1/micro/will-it-scale/pipe1
250 ~26% +1.3e+18% 3.23e+18 ~33% brickland1/micro/will-it-scale/poll1
208 ~ 8% +1.5e+18% 3.172e+18 ~12% brickland1/micro/will-it-scale/poll2
198 ~13% +5.1e+17% 1.013e+18 ~51% brickland1/micro/will-it-scale/posix_semaphore1
179 ~ 9% +6.2e+17% 1.117e+18 ~112% brickland1/micro/will-it-scale/pread1
714 ~ 4% +1e+17% 7.243e+17 ~36% brickland1/micro/will-it-scale/pread2
259 ~ 8% +2.8e+17% 7.329e+17 ~62% brickland1/micro/will-it-scale/pthread_mutex1
190 ~ 5% +7.6e+17% 1.456e+18 ~36% brickland1/micro/will-it-scale/pthread_mutex2
281 ~41% +6.9e+17% 1.952e+18 ~102% brickland1/micro/will-it-scale/pwrite1
682 ~13% +2e+17% 1.362e+18 ~36% brickland1/micro/will-it-scale/pwrite2
224 ~45% +1.5e+17% 3.452e+17 ~92% brickland1/micro/will-it-scale/read1
279 ~11% +6.6e+17% 1.83e+18 ~14% brickland1/micro/will-it-scale/read2
187 ~ 9% +1.2e+18% 2.203e+18 ~55% brickland1/micro/will-it-scale/readseek
207 ~10% +1.2e+18% 2.535e+18 ~21% brickland1/micro/will-it-scale/sched_yield
198 ~ 8% +2.1e+17% 4.259e+17 ~36% brickland1/micro/will-it-scale/unlink2
219 ~22% +8.3e+17% 1.823e+18 ~76% brickland1/micro/will-it-scale/write1
183 ~23% +1.3e+18% 2.39e+18 ~26% brickland1/micro/will-it-scale/writeseek
256 ~22% +1.3e+17% 3.385e+17 ~21% brickland2/micro/hackbench/600%-process-pipe
237 ~11% +3.8e+17% 8.978e+17 ~36% brickland2/micro/hackbench/600%-process-socket
2000 ~30% +2.4e+16% 4.869e+17 ~42% brickland2/micro/hackbench/600%-threads-pipe
2742 ~10% +3.8e+16% 1.042e+18 ~12% brickland2/micro/hackbench/600%-threads-socket
46754 ~11% +1.1e+15% 5.134e+17 ~51% brickland2/micro/vm-scalability/16G-msync
1296 ~19% +1.8e+16% 2.275e+17 ~48% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
427 ~ 9% +1.5e+17% 6.322e+17 ~89% brickland2/micro/vm-scalability/16G-shm-pread-rand
469 ~11% +4.7e+17% 2.208e+18 ~29% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
429 ~22% +4.3e+16% 1.86e+17 ~19% brickland2/micro/vm-scalability/16G-shm-xread-rand
278 ~32% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
1044 ~12% +3.9e+16% 4.044e+17 ~21% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
1027 ~ 0% +1.9e+17% 1.989e+18 ~23% brickland2/micro/vm-scalability/1T-shm-pread-seq
334 ~25% +6e+17% 2.005e+18 ~10% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
1007 ~10% +1.6e+17% 1.61e+18 ~18% brickland2/micro/vm-scalability/1T-shm-xread-seq
191 ~ 9% +2e+17% 3.891e+17 ~88% brickland2/micro/vm-scalability/300s-anon-r-rand
204 ~10% +2.5e+17% 5.182e+17 ~49% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
263 ~23% +7.8e+17% 2.054e+18 ~88% brickland2/micro/vm-scalability/300s-anon-r-seq
189 ~33% +6.5e+17% 1.227e+18 ~115% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
158 ~38% +3.9e+17% 6.175e+17 ~45% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
1.683e+08 ~ 1% +2.4e+11% 4.035e+17 ~36% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
1.578e+08 ~ 0% +5.5e+11% 8.677e+17 ~34% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
429 ~ 5% +7.3e+16% 3.133e+17 ~39% brickland2/micro/vm-scalability/300s-lru-file-readonce
205 ~22% +2.5e+17% 5.1e+17 ~86% brickland2/micro/vm-scalability/300s-lru-file-readtwice
555 ~ 7% +1.1e+17% 6.182e+17 ~ 6% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
221 ~11% +1.7e+17% 3.722e+17 ~48% brickland2/micro/vm-scalability/300s-mmap-pread-rand
389 ~15% +2.3e+17% 8.909e+17 ~20% brickland2/micro/vm-scalability/300s-mmap-pread-seq
1130 ~ 7% +4.1e+16% 4.646e+17 ~35% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
654 ~ 8% +2.2e+17% 1.436e+18 ~15% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
4330 ~12% +1.1e+15% 4.7e+16 ~87% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
678 ~22% +4e+16% 2.689e+17 ~25% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
320 ~ 7% +3.4e+16% 1.098e+17 ~33% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
4749 ~ 9% +3.8e+14% 1.794e+16 ~122% fat/micro/dd-write/1HDD-cfq-xfs-100dd
1339 ~ 2% +6.1e+15% 8.145e+16 ~86% fat/micro/dd-write/1HDD-cfq-xfs-10dd
273 ~29% +2.4e+16% 6.472e+16 ~115% fat/micro/dd-write/1HDD-cfq-xfs-1dd
646 ~32% +7.6e+15% 4.926e+16 ~52% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
316 ~15% +2.5e+16% 7.789e+16 ~110% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
4548 ~ 8% +3.6e+14% 1.624e+16 ~141% fat/micro/dd-write/1HDD-deadline-ext4-100dd
1070 ~23% +3.8e+15% 4.059e+16 ~141% fat/micro/dd-write/1HDD-deadline-ext4-10dd
221 ~39% +1.1e+16% 2.45e+16 ~81% fat/micro/dd-write/1HDD-deadline-ext4-1dd
4563 ~ 9% +4.7e+13% 2.16e+15 ~140% fat/micro/dd-write/1HDD-deadline-xfs-100dd
811 ~15% +3e+15% 2.447e+16 ~81% fat/micro/dd-write/1HDD-deadline-xfs-10dd
295 ~27% +1.3e+12% 3.881e+12 ~63% fat/micro/dd-write/1HDD-deadline-xfs-1dd
156 ~ 2% +5.1e+16% 8.02e+16 ~99% lkp-a04/micro/netperf/120s-200%-TCP_CRR
148 ~ 3% -100.0% 0 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
3772540 ~ 0% +5.5e+12% 2.085e+17 ~27% lkp-ib03/micro/ebizzy/400%-5-30
221 ~ 5% +2e+17% 4.434e+17 ~92% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
176 ~ 7% +1.7e+17% 2.957e+17 ~87% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
214 ~12% +7e+16% 1.494e+17 ~62% lkp-ib03/micro/netperf/120s-200%-TCP_RR
169 ~ 5% +2.6e+17% 4.341e+17 ~33% lkp-ib03/micro/netperf/120s-200%-UDP_RR
513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
15165 ~ 0% +3.2e+15% 4.86e+17 ~16% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
2396 ~ 2% +2.6e+16% 6.187e+17 ~29% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
473 ~ 8% +1.9e+17% 8.989e+17 ~43% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
14021 ~ 2% +7.8e+13% 1.092e+16 ~141% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-100dd
3150 ~ 9% +4.3e+14% 1.359e+16 ~140% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
418 ~ 0% +2.3e+16% 9.474e+16 ~28% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
14661 ~ 0% +3.6e+14% 5.33e+16 ~97% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
3084 ~10% +4.2e+15% 1.295e+17 ~54% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
361 ~ 3% +6.6e+16% 2.403e+17 ~57% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
14473 ~ 1% +1.6e+13% 2.367e+15 ~140% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-100dd
3296 ~ 3% +1.1e+15% 3.58e+16 ~46% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
400 ~ 4% +5e+16% 2.014e+17 ~69% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
14638 ~ 1% +1.1e+14% 1.654e+16 ~141% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-100dd
3218 ~13% +4.9e+15% 1.592e+17 ~74% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
405 ~ 4% +2.4e+16% 9.656e+16 ~48% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
1686 ~ 3% +3e+16% 5.075e+17 ~32% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
1658 ~ 2% +2.1e+16% 3.512e+17 ~25% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
927 ~10% +5.1e+11% 4.73e+12 ~44% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
1110 ~23% +3.9e+11% 4.386e+12 ~21% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
1450 ~ 4% +7.1e+11% 1.03e+13 ~ 4% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
2003 ~ 3% +4.8e+11% 9.596e+12 ~12% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
2134 ~ 3% +6.2e+11% 1.317e+13 ~31% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
763 ~12% +7.2e+15% 5.504e+16 ~73% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrd-sync
971 ~ 3% +8.3e+15% 8.058e+16 ~45% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
1024 ~ 5% +1e+16% 1.073e+17 ~60% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
1464 ~ 3% +2.5e+15% 3.613e+16 ~24% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
1744 ~10% +4e+11% 6.932e+12 ~24% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
1894 ~ 2% +5.9e+11% 1.111e+13 ~18% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
36971 ~ 0% +1.5e+08% 5.564e+10 ~141% avoton1/crypto/tcrypt/2s-301-319
30210 ~ 0% -89.7% 3108 ~19% avoton1/crypto/tcrypt/2s-505-509
17804 ~ 0% +1e+16% 1.861e+18 ~76% brickland1/micro/will-it-scale/futex1
17813 ~ 0% +8.6e+15% 1.528e+18 ~83% brickland1/micro/will-it-scale/futex2
17880 ~ 0% +3.9e+15% 6.977e+17 ~55% brickland1/micro/will-it-scale/getppid1
17829 ~ 0% +4.7e+15% 8.331e+17 ~33% brickland1/micro/will-it-scale/lock1
17850 ~ 0% +2.3e+15% 4.164e+17 ~82% brickland1/micro/will-it-scale/lseek1
17850 ~ 0% +4.8e+15% 8.603e+17 ~61% brickland1/micro/will-it-scale/lseek2
17846 ~ 0% +1.1e+16% 2.025e+18 ~59% brickland1/micro/will-it-scale/malloc2
18172 ~ 0% -63.6% 6623 ~14% brickland1/micro/will-it-scale/mmap2
17899 ~ 0% +6.1e+15% 1.093e+18 ~69% brickland1/micro/will-it-scale/open1
17837 ~ 0% +7e+15% 1.255e+18 ~57% brickland1/micro/will-it-scale/open2
54199 ~ 0% +1.8e+15% 9.902e+17 ~13% brickland1/micro/will-it-scale/page_fault1
42510 ~ 0% +9.6e+14% 4.069e+17 ~45% brickland1/micro/will-it-scale/page_fault2
170171 ~ 0% +8.2e+14% 1.399e+18 ~61% brickland1/micro/will-it-scale/page_fault3
17855 ~ 0% +1e+15% 1.87e+17 ~49% brickland1/micro/will-it-scale/pipe1
17873 ~ 0% +1.8e+16% 3.161e+18 ~37% brickland1/micro/will-it-scale/poll1
17843 ~ 0% +1.9e+16% 3.335e+18 ~ 9% brickland1/micro/will-it-scale/poll2
17872 ~ 0% +5.7e+15% 1.024e+18 ~50% brickland1/micro/will-it-scale/posix_semaphore1
17827 ~ 0% +5.2e+15% 9.269e+17 ~107% brickland1/micro/will-it-scale/pread1
17982 ~ 0% +4e+15% 7.161e+17 ~42% brickland1/micro/will-it-scale/pread2
17865 ~ 0% +3.9e+15% 6.932e+17 ~48% brickland1/micro/will-it-scale/pthread_mutex1
17818 ~ 0% +6.2e+15% 1.109e+18 ~55% brickland1/micro/will-it-scale/pthread_mutex2
17819 ~ 0% +8.9e+15% 1.592e+18 ~93% brickland1/micro/will-it-scale/pwrite1
18000 ~ 0% +7.3e+15% 1.32e+18 ~39% brickland1/micro/will-it-scale/pwrite2
17874 ~ 0% +1.9e+15% 3.418e+17 ~94% brickland1/micro/will-it-scale/read1
17988 ~ 0% +1.1e+16% 1.964e+18 ~20% brickland1/micro/will-it-scale/read2
17897 ~ 0% +1.2e+16% 2.063e+18 ~53% brickland1/micro/will-it-scale/readseek
17978 ~ 0% +1.3e+16% 2.259e+18 ~41% brickland1/micro/will-it-scale/sched_yield
17855 ~ 0% +3.1e+15% 5.594e+17 ~40% brickland1/micro/will-it-scale/unlink2
17841 ~ 0% +1.1e+16% 1.942e+18 ~59% brickland1/micro/will-it-scale/write1
17840 ~ 0% +1.4e+16% 2.555e+18 ~15% brickland1/micro/will-it-scale/writeseek
27664 ~ 2% +1.1e+15% 3.078e+17 ~15% brickland2/micro/hackbench/600%-process-pipe
15925 ~ 5% +5.6e+15% 8.867e+17 ~24% brickland2/micro/hackbench/600%-process-socket
28749 ~ 2% +1.6e+15% 4.511e+17 ~47% brickland2/micro/hackbench/600%-threads-pipe
16005 ~ 9% +6.6e+15% 1.061e+18 ~10% brickland2/micro/hackbench/600%-threads-socket
25886 ~ 2% +8.7e+14% 2.26e+17 ~35% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
25203 ~ 0% +2.5e+15% 6.257e+17 ~95% brickland2/micro/vm-scalability/16G-shm-pread-rand
19097 ~ 0% +1e+16% 1.974e+18 ~16% brickland2/micro/vm-scalability/16G-shm-xread-rand-mt
25288 ~ 0% +7.2e+14% 1.812e+17 ~48% brickland2/micro/vm-scalability/16G-shm-xread-rand
10671 ~ 0% -71.1% 3086 ~15% brickland2/micro/vm-scalability/16G-truncate
19001 ~ 0% +2.3e+15% 4.431e+17 ~ 9% brickland2/micro/vm-scalability/1T-shm-pread-seq-mt
19721 ~ 0% +9.2e+15% 1.823e+18 ~24% brickland2/micro/vm-scalability/1T-shm-pread-seq
17867 ~ 0% +1.2e+16% 2.118e+18 ~ 9% brickland2/micro/vm-scalability/1T-shm-xread-seq-mt
19893 ~ 0% +9e+15% 1.788e+18 ~22% brickland2/micro/vm-scalability/1T-shm-xread-seq
16433 ~ 2% +3.2e+15% 5.303e+17 ~45% brickland2/micro/vm-scalability/300s-anon-r-seq-mt
8837 ~ 0% +2.3e+16% 1.99e+18 ~94% brickland2/micro/vm-scalability/300s-anon-r-seq
16862 ~ 0% +7e+15% 1.176e+18 ~114% brickland2/micro/vm-scalability/300s-anon-rx-rand-mt
16808 ~ 0% +4.6e+15% 7.766e+17 ~33% brickland2/micro/vm-scalability/300s-anon-rx-seq-mt
20507 ~ 0% +1.7e+15% 3.41e+17 ~31% brickland2/micro/vm-scalability/300s-lru-file-mmap-read-rand
18674 ~ 0% +5.1e+15% 9.583e+17 ~31% brickland2/micro/vm-scalability/300s-lru-file-mmap-read
18832 ~ 0% +1.8e+15% 3.443e+17 ~28% brickland2/micro/vm-scalability/300s-lru-file-readonce
17489 ~ 0% +2.4e+15% 4.206e+17 ~76% brickland2/micro/vm-scalability/300s-lru-file-readtwice
18790 ~ 2% +2.7e+15% 5.119e+17 ~ 5% brickland2/micro/vm-scalability/300s-mmap-pread-rand-mt
20337 ~ 0% +2e+15% 4.009e+17 ~46% brickland2/micro/vm-scalability/300s-mmap-pread-rand
14994 ~ 0% +5.5e+15% 8.186e+17 ~20% brickland2/micro/vm-scalability/300s-mmap-pread-seq
17830 ~ 0% +2.6e+15% 4.586e+17 ~43% brickland2/micro/vm-scalability/300s-mmap-xread-rand-mt
15556 ~ 2% +1.1e+16% 1.649e+18 ~ 7% brickland2/micro/vm-scalability/300s-mmap-xread-seq-mt
15258 ~ 0% +4.6e+14% 6.963e+16 ~49% fat/micro/dd-write/1HDD-cfq-btrfs-100dd
14293 ~ 0% +2.2e+15% 3.199e+17 ~17% fat/micro/dd-write/1HDD-cfq-btrfs-10dd
14104 ~ 0% +6.2e+14% 8.718e+16 ~31% fat/micro/dd-write/1HDD-cfq-btrfs-1dd
15176 ~ 0% +1.2e+14% 1.872e+16 ~113% fat/micro/dd-write/1HDD-cfq-xfs-100dd
14257 ~ 0% +5.7e+14% 8.144e+16 ~86% fat/micro/dd-write/1HDD-cfq-xfs-10dd
14065 ~ 0% +4.6e+14% 6.471e+16 ~115% fat/micro/dd-write/1HDD-cfq-xfs-1dd
14296 ~ 0% +3.3e+14% 4.72e+16 ~20% fat/micro/dd-write/1HDD-deadline-btrfs-10dd
14163 ~ 0% +6.9e+14% 9.719e+16 ~79% fat/micro/dd-write/1HDD-deadline-btrfs-1dd
15217 ~ 0% +1.1e+14% 1.623e+16 ~141% fat/micro/dd-write/1HDD-deadline-ext4-100dd
14180 ~ 0% +1.7e+14% 2.446e+16 ~81% fat/micro/dd-write/1HDD-deadline-xfs-10dd
10634 ~ 0% -43.9% 5971 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
3781807 ~ 0% +6.7e+12% 2.543e+17 ~42% lkp-ib03/micro/ebizzy/400%-5-30
9234 ~ 0% +2.7e+15% 2.489e+17 ~74% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
9079 ~ 0% +3e+15% 2.682e+17 ~103% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
9016 ~ 0% +3.1e+15% 2.775e+17 ~69% lkp-ib03/micro/netperf/120s-200%-TCP_RR
9099 ~ 0% +4.2e+15% 3.854e+17 ~25% lkp-ib03/micro/netperf/120s-200%-UDP_RR
22724 ~ 0% +1.1e+15% 2.508e+17 ~77% lkp-nex05/micro/hackbench/800%-process-pipe
15900 ~ 2% +2.8e+15% 4.396e+17 ~29% lkp-nex05/micro/hackbench/800%-process-socket
23757 ~ 2% +1.2e+15% 2.94e+17 ~18% lkp-nex05/micro/hackbench/800%-threads-pipe
14867 ~ 0% +2.6e+15% 3.863e+17 ~65% lkp-nex05/micro/hackbench/800%-threads-socket
5515 ~ 0% -42.3% 3184 ~42% lkp-nex05/micro/tlbflush/100%-8
18295 ~ 3% +1.3e+15% 2.39e+17 ~28% lkp-snb01/micro/hackbench/1600%-threads-pipe
9304 ~ 1% +1.6e+15% 1.483e+17 ~50% lkp-snb01/micro/hackbench/1600%-threads-socket
34259 ~ 0% +1.8e+15% 6.324e+17 ~39% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-100dd
24088 ~ 0% +2.8e+15% 6.708e+17 ~26% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-10dd
22923 ~ 0% +4.7e+15% 1.076e+18 ~27% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-btrfs-1dd
23949 ~ 0% +3.6e+14% 8.725e+16 ~ 4% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-10dd
22852 ~ 0% +6.2e+14% 1.418e+17 ~54% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-ext4-1dd
33664 ~ 0% +1.3e+14% 4.488e+16 ~101% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-100dd
23679 ~ 0% +7.3e+14% 1.734e+17 ~72% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-10dd
22691 ~ 0% +1.2e+15% 2.759e+17 ~58% lkp-ws02/micro/dd-write/11HDD-JBOD-cfq-xfs-1dd
23989 ~ 0% +4.3e+14% 1.021e+17 ~22% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-10dd
22874 ~ 0% +2e+15% 4.529e+17 ~69% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-ext4-1dd
23682 ~ 0% +6.8e+14% 1.6e+17 ~56% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-10dd
22652 ~ 0% +4.3e+14% 9.848e+16 ~49% lkp-ws02/micro/dd-write/11HDD-JBOD-deadline-xfs-1dd
20029 ~ 0% +2.3e+15% 4.684e+17 ~41% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
20044 ~ 0% +1.5e+15% 2.936e+17 ~26% snb-drag/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
28205 ~ 1% -78.1% 6186 ~ 6% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
27802 ~ 1% -78.5% 5968 ~ 4% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
20016 ~ 0% -74.2% 5167 ~ 0% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndwr-sync
27596 ~ 0% -79.0% 5801 ~ 1% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrd-sync
20198 ~ 1% -63.7% 7336 ~ 1% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqrewr-sync
20032 ~ 0% -60.1% 7997 ~ 9% snb-drag/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-seqwr-sync
25640 ~ 0% +1.9e+14% 4.937e+16 ~51% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndrw-sync
20047 ~ 0% +9e+14% 1.798e+17 ~17% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-rndwr-sync
25624 ~ 0% +6.3e+13% 1.607e+16 ~53% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
20246 ~ 1% -66.7% 6734 ~ 7% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrewr-sync
20025 ~ 0% -63.1% 7395 ~ 5% snb-drag/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqwr-sync
5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
Thanks,
Fengguang
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 17:54 ` Mel Gorman
@ 2013-12-18 10:24 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-18 10:24 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> > Thanks again for going through all this. Tracking multi-commit
> > performance regressions across 1.5 years worth of commits is
> > generally very hard. Does your testing effort comes from
> > enterprise Linux QA testing, or did you ran into this problem
> > accidentally?
>
> It does not come from enterprise Linux QA testing but it's motivated
> by it. I want to catch as many "obvious" performance bugs before
> they do as it saves time and stress in the long run. To assist that,
> I setup continual performance regression testing and ebizzy was
> included in the first report I opened. [...]
Neat!
> [...] It makes me worry what the rest of the reports contain.
It will be full with reports of phenomenal speedups!
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-18 10:24 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-18 10:24 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> > Thanks again for going through all this. Tracking multi-commit
> > performance regressions across 1.5 years worth of commits is
> > generally very hard. Does your testing effort comes from
> > enterprise Linux QA testing, or did you ran into this problem
> > accidentally?
>
> It does not come from enterprise Linux QA testing but it's motivated
> by it. I want to catch as many "obvious" performance bugs before
> they do as it saves time and stress in the long run. To assist that,
> I setup continual performance regression testing and ebizzy was
> included in the first report I opened. [...]
Neat!
> [...] It makes me worry what the rest of the reports contain.
It will be full with reports of phenomenal speedups!
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* [tip:sched/core] sched: Assign correct scheduling domain to ' sd_llc'
2013-12-17 9:21 ` Mel Gorman
` (2 preceding siblings ...)
(?)
@ 2013-12-18 10:32 ` tip-bot for Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: tip-bot for Mel Gorman @ 2013-12-18 10:32 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, torvalds, peterz, alex.shi, akpm,
mgorman, tglx, fengguang.wu
Commit-ID: 5d4cf996cf134e8ddb4f906b8197feb9267c2b77
Gitweb: http://git.kernel.org/tip/5d4cf996cf134e8ddb4f906b8197feb9267c2b77
Author: Mel Gorman <mgorman@suse.de>
AuthorDate: Tue, 17 Dec 2013 09:21:25 +0000
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 17 Dec 2013 15:08:43 +0100
sched: Assign correct scheduling domain to 'sd_llc'
Commit 42eb088e (sched: Avoid NULL dereference on sd_busy) corrected a NULL
dereference on sd_busy but the fix also altered what scheduling domain it
used for the 'sd_llc' percpu variable.
One impact of this is that a task selecting a runqueue may consider
idle CPUs that are not cache siblings as candidates for running.
Tasks are then running on CPUs that are not cache hot.
This was found through bisection where ebizzy threads were not seeing equal
performance and it looked like a scheduling fairness issue. This patch
mitigates but does not completely fix the problem on all machines tested
implying there may be an additional bug or a common root cause. Here are
the average range of performance seen by individual ebizzy threads. It
was tested on top of candidate patches related to x86 TLB range flushing.
4-core machine
3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r3
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.34 ( 0.00%) 0.10 ( 70.59%)
Mean 3 1.29 ( 0.00%) 0.93 ( 27.91%)
Mean 4 7.08 ( 0.00%) 0.77 ( 89.12%)
Mean 5 193.54 ( 0.00%) 2.14 ( 98.89%)
Mean 6 151.12 ( 0.00%) 2.06 ( 98.64%)
Mean 7 115.38 ( 0.00%) 2.04 ( 98.23%)
Mean 8 108.65 ( 0.00%) 1.92 ( 98.23%)
8-core machine
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.21 ( 47.50%)
Mean 3 23.73 ( 0.00%) 0.89 ( 96.25%)
Mean 4 12.79 ( 0.00%) 1.04 ( 91.87%)
Mean 5 13.08 ( 0.00%) 2.42 ( 81.50%)
Mean 6 23.21 ( 0.00%) 69.46 (-199.27%)
Mean 7 15.85 ( 0.00%) 101.72 (-541.77%)
Mean 8 109.37 ( 0.00%) 19.13 ( 82.51%)
Mean 12 124.84 ( 0.00%) 28.62 ( 77.07%)
Mean 16 113.50 ( 0.00%) 24.16 ( 78.71%)
It's eliminated for one machine and reduced for another.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: H Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20131217092124.GV11295@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 19af58f..a88f4a4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
static void update_top_cache_domain(int cpu)
{
struct sched_domain *sd;
+ struct sched_domain *busy_sd = NULL;
int id = cpu;
int size = 1;
@@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
if (sd) {
id = cpumask_first(sched_domain_span(sd));
size = cpumask_weight(sched_domain_span(sd));
- sd = sd->parent; /* sd_busy */
+ busy_sd = sd->parent; /* sd_busy */
}
- rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
+ rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
per_cpu(sd_llc_size, cpu) = size;
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-17 11:00 ` Ingo Molnar
@ 2013-12-19 14:24 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-19 14:24 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Tue, Dec 17, 2013 at 12:00:51PM +0100, Ingo Molnar wrote:
> > It's eliminated for one machine and reduced for another.
> >
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > ---
> > kernel/sched/core.c | 5 +++--
> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index e85cda2..a848254 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
> > static void update_top_cache_domain(int cpu)
> > {
> > struct sched_domain *sd;
> > + struct sched_domain *busy_sd = NULL;
> > int id = cpu;
> > int size = 1;
> >
> > @@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
> > if (sd) {
> > id = cpumask_first(sched_domain_span(sd));
> > size = cpumask_weight(sched_domain_span(sd));
> > - sd = sd->parent; /* sd_busy */
> > + busy_sd = sd->parent; /* sd_busy */
> > }
> > - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
> > + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
> >
> > rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> > per_cpu(sd_llc_size, cpu) = size;
>
> Indeed that makes a lot of sense, thanks Mel for tracking down this
> part of the puzzle! Will get your fix to Linus ASAP.
>
> Does this fix also speed up Ebizzy's transaction performance, or is
> its main effect a reduction in workload variation noise?
>
> Also it appears the Ebizzy numbers ought to be stable enough now to
> make the range-TLB-flush measurements more precise?
Ok, so the results on this question finally came in. I still have not
profiled this due to other bugs in flight.
fixsd-v3r4 is only the scheduling domain fix
shift-v3r4 is this series, including the tlbshift flush change
8-core machine
ebizzy performance
3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
vanilla vanilla fixsd-v3r4 shift-v3r4
Mean 1 7295.77 ( 0.00%) 6713.32 ( -7.98%) 7320.71 ( 0.34%) 7744.07 ( 6.14%)
Mean 2 8252.58 ( 0.00%) 8334.43 ( 0.99%) 8233.29 ( -0.23%) 9451.07 ( 14.52%)
Mean 3 8179.74 ( 0.00%) 8134.42 ( -0.55%) 8137.38 ( -0.52%) 8947.15 ( 9.38%)
Mean 4 7862.45 ( 0.00%) 7966.27 ( 1.32%) 7837.52 ( -0.32%) 8594.52 ( 9.31%)
Mean 5 7170.24 ( 0.00%) 7820.63 ( 9.07%) 7086.82 ( -1.16%) 8222.22 ( 14.67%)
Mean 6 6835.10 ( 0.00%) 7773.30 ( 13.73%) 6822.95 ( -0.18%) 7863.05 ( 15.04%)
Mean 7 6740.99 ( 0.00%) 7712.45 ( 14.41%) 6697.30 ( -0.65%) 7537.98 ( 11.82%)
Mean 8 6494.01 ( 0.00%) 7705.62 ( 18.66%) 6449.95 ( -0.68%) 6848.89 ( 5.46%)
Mean 12 6567.37 ( 0.00%) 7554.82 ( 15.04%) 6106.56 ( -7.02%) 6515.51 ( -0.79%)
Mean 16 6630.26 ( 0.00%) 7331.04 ( 10.57%) 5999.57 ( -9.51%) 6410.09 ( -3.32%)
Range 1 767.00 ( 0.00%) 661.00 ( 13.82%) 182.00 ( 76.27%) 243.00 ( 68.32%)
Range 2 178.00 ( 0.00%) 592.00 (-232.58%) 200.00 (-12.36%) 376.00 (-111.24%)
Range 3 175.00 ( 0.00%) 431.00 (-146.29%) 225.00 (-28.57%) 522.00 (-198.29%)
Range 4 806.00 ( 0.00%) 542.00 ( 32.75%) 878.00 ( -8.93%) 478.00 ( 40.69%)
Range 5 544.00 ( 0.00%) 444.00 ( 18.38%) 893.00 (-64.15%) 576.00 ( -5.88%)
Range 6 399.00 ( 0.00%) 528.00 (-32.33%) 669.00 (-67.67%) 1134.00 (-184.21%)
Range 7 629.00 ( 0.00%) 467.00 ( 25.76%) 517.00 ( 17.81%) 870.00 (-38.31%)
Range 8 400.00 ( 0.00%) 435.00 ( -8.75%) 309.00 ( 22.75%) 441.00 (-10.25%)
Range 12 233.00 ( 0.00%) 330.00 (-41.63%) 260.00 (-11.59%) 314.00 (-34.76%)
Range 16 141.00 ( 0.00%) 496.00 (-251.77%) 127.00 ( 9.93%) 156.00 (-10.64%)
Stddev 1 73.94 ( 0.00%) 177.17 (-139.59%) 33.77 ( 54.32%) 40.82 ( 44.80%)
Stddev 2 23.47 ( 0.00%) 88.91 (-278.74%) 30.60 (-30.35%) 44.64 (-90.17%)
Stddev 3 36.48 ( 0.00%) 101.07 (-177.05%) 41.76 (-14.47%) 114.25 (-213.16%)
Stddev 4 158.37 ( 0.00%) 130.52 ( 17.59%) 178.91 (-12.97%) 114.66 ( 27.60%)
Stddev 5 116.74 ( 0.00%) 78.31 ( 32.92%) 213.76 (-83.10%) 105.69 ( 9.47%)
Stddev 6 66.34 ( 0.00%) 87.79 (-32.33%) 103.69 (-56.30%) 238.52 (-259.54%)
Stddev 7 145.62 ( 0.00%) 90.52 ( 37.84%) 126.49 ( 13.14%) 170.51 (-17.09%)
Stddev 8 68.51 ( 0.00%) 81.11 (-18.39%) 45.73 ( 33.25%) 65.11 ( 4.96%)
Stddev 12 32.15 ( 0.00%) 65.74 (-104.50%) 37.52 (-16.72%) 46.79 (-45.53%)
Stddev 16 21.59 ( 0.00%) 86.42 (-300.25%) 26.05 (-20.67%) 37.20 (-72.28%)
Scheduling fix on its own makes little difference and hurts ebizzy if
anything. However, the patch is clearly the right thing to do and we can
still see the tlb flush shift change is required for good results.
As for the stability
8-core machine
ebizzy Thread spread
3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
vanilla vanilla fixsd-v3r4 shift-v3r4
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.13 ( 67.50%) 0.50 (-25.00%) 0.24 ( 40.00%)
Mean 3 23.73 ( 0.00%) 0.26 ( 98.90%) 19.80 ( 16.56%) 1.03 ( 95.66%)
Mean 4 12.79 ( 0.00%) 0.67 ( 94.76%) 7.92 ( 38.08%) 1.20 ( 90.62%)
Mean 5 13.08 ( 0.00%) 0.36 ( 97.25%) 102.28 (-681.96%) 5.86 ( 55.20%)
Mean 6 23.21 ( 0.00%) 1.13 ( 95.13%) 13.61 ( 41.36%) 92.37 (-297.98%)
Mean 7 15.85 ( 0.00%) 1.51 ( 90.47%) 9.48 ( 40.19%) 131.49 (-729.59%)
Mean 8 109.37 ( 0.00%) 1.05 ( 99.04%) 7.37 ( 93.26%) 19.75 ( 81.94%)
Mean 12 124.84 ( 0.00%) 0.59 ( 99.53%) 27.32 ( 78.12%) 34.32 ( 72.51%)
Mean 16 113.50 ( 0.00%) 0.49 ( 99.57%) 20.02 ( 82.36%) 28.57 ( 74.83%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 3.00 ( 0.00%) 1.00 ( 66.67%) 2.00 ( 33.33%) 1.00 ( 66.67%)
Range 3 80.00 ( 0.00%) 1.00 ( 98.75%) 87.00 ( -8.75%) 21.00 ( 73.75%)
Range 4 38.00 ( 0.00%) 2.00 ( 94.74%) 39.00 ( -2.63%) 5.00 ( 86.84%)
Range 5 37.00 ( 0.00%) 1.00 ( 97.30%) 368.00 (-894.59%) 50.00 (-35.14%)
Range 6 46.00 ( 0.00%) 8.00 ( 82.61%) 39.00 ( 15.22%) 876.00 (-1804.35%)
Range 7 28.00 ( 0.00%) 36.00 (-28.57%) 21.00 ( 25.00%) 649.00 (-2217.86%)
Range 8 325.00 ( 0.00%) 26.00 ( 92.00%) 11.00 ( 96.62%) 74.00 ( 77.23%)
Range 12 160.00 ( 0.00%) 5.00 ( 96.88%) 39.00 ( 75.62%) 47.00 ( 70.62%)
Range 16 108.00 ( 0.00%) 1.00 ( 99.07%) 29.00 ( 73.15%) 34.00 ( 68.52%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.62 ( 0.00%) 0.34 (-45.44%) 0.66 ( 6.38%) 0.43 (-30.72%)
Stddev 3 17.40 ( 0.00%) 0.44 (-97.48%) 16.54 ( -4.96%) 2.43 (-86.03%)
Stddev 4 8.52 ( 0.00%) 0.51 (-94.00%) 7.81 ( -8.38%) 0.84 (-90.18%)
Stddev 5 7.91 ( 0.00%) 0.48 (-93.93%) 105.16 (1229.65%) 9.00 ( 13.74%)
Stddev 6 7.11 ( 0.00%) 1.48 (-79.18%) 7.20 ( 1.17%) 124.99 (1657.37%)
Stddev 7 5.90 ( 0.00%) 4.12 (-30.24%) 4.28 (-27.41%) 110.32 (1769.33%)
Stddev 8 80.95 ( 0.00%) 2.65 (-96.72%) 2.63 (-96.76%) 10.01 (-87.64%)
Stddev 12 31.48 ( 0.00%) 0.66 (-97.89%) 12.20 (-61.24%) 13.06 (-58.50%)
Stddev 16 24.32 ( 0.00%) 0.50 (-97.94%) 8.96 (-63.18%) 9.56 (-60.70%)
The spread is much improved but still less stable than 3.4 was so something
weird is still going on there and the tlb flush measurements are still a
bit questionable.
Still, I had queued up long-lived tests with more thread counts to
measure the impact and found this
4-core
tlbflush
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
Mean 1 10.68 ( 0.00%) 10.27 ( 3.83%) 10.45 ( 2.11%)
Mean 2 11.02 ( 0.00%) 18.62 (-68.97%) 22.57 (-104.79%)
Mean 3 22.73 ( 0.00%) 22.95 ( -0.99%) 22.10 ( 2.76%)
Mean 5 51.06 ( 0.00%) 47.20 ( 7.56%) 46.45 ( 9.03%)
Mean 8 82.62 ( 0.00%) 43.67 ( 47.15%) 42.72 ( 48.29%)
Range 1 6.00 ( 0.00%) 8.00 (-33.33%) 8.00 (-33.33%)
Range 2 17.00 ( 0.00%) 52.00 (-205.88%) 49.00 (-188.24%)
Range 3 15.00 ( 0.00%) 24.00 (-60.00%) 24.00 (-60.00%)
Range 5 36.00 ( 0.00%) 35.00 ( 2.78%) 21.00 ( 41.67%)
Range 8 49.00 ( 0.00%) 10.00 ( 79.59%) 15.00 ( 69.39%)
Stddev 1 0.95 ( 0.00%) 1.28 ( 35.11%) 0.87 ( -7.82%)
Stddev 2 1.67 ( 0.00%) 15.21 (812.95%) 16.25 (875.62%)
Stddev 3 2.53 ( 0.00%) 3.42 ( 35.13%) 3.05 ( 20.61%)
Stddev 5 4.25 ( 0.00%) 4.31 ( 1.37%) 3.65 (-14.16%)
Stddev 8 5.71 ( 0.00%) 1.88 (-67.09%) 1.71 (-70.12%)
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
User 804.88 900.31 1057.53
System 526.53 507.57 578.95
Elapsed 12629.24 14931.78 17925.47
There are 320 iterations of the test per thread count. The number of
entries is randomly selected with a min of 1 and max of 512. To ensure
a reasonably even spread of entries, the full range is broken up into 8
sections and a random number selected within that section.
iteration 1, random number between 0-64
iteration 2, random number between 64-128 etc
This is actually still a very weak methodology. When you do not know what
are typical ranges, random is a reasonable choice but it can be easily
argued that the opimisation was for smaller ranges and an even spread is
not representative of any workload that matters. To improve this, we'd
need to know the probability distribution of TLB flush range sizes for a
set of workloads that are considered "common", build a synthetic trace and
feed that into this benchmark. Even that is not perfect because it would
not account for the time between flushes but there are limits of what can
be reasonably done and still be doing something useful. Alex or Peter,
was there any specific methodology used for selecting the ranges to be
flushed by the microbenchmark?
Anyway, random ranges on the 4-core machine showed that the conservative
choice was a good one in many cases. Two threads seems to be screwed implying
that fixing the scheduling domain may have meant we are frequently sending
an IPI to a relatively remote core. It's a separate issue because that
smacks of being a pure scheduling problem.
8-core
tlbflush
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
Mean 1 8.78 ( 0.00%) 9.54 ( -8.65%) 9.46 ( -7.76%)
Mean 2 8.19 ( 0.00%) 9.54 (-16.44%) 9.43 (-15.03%)
Mean 3 8.86 ( 0.00%) 9.95 (-12.39%) 9.81 (-10.80%)
Mean 5 13.38 ( 0.00%) 14.67 ( -9.60%) 15.51 (-15.93%)
Mean 8 32.97 ( 0.00%) 40.88 (-24.02%) 38.91 (-18.04%)
Mean 13 68.47 ( 0.00%) 32.10 ( 53.12%) 31.38 ( 54.16%)
Mean 16 86.15 ( 0.00%) 40.10 ( 53.46%) 39.04 ( 54.68%)
Range 1 7.00 ( 0.00%) 8.00 (-14.29%) 7.00 ( 0.00%)
Range 2 6.00 ( 0.00%) 38.00 (-533.33%) 36.00 (-500.00%)
Range 3 12.00 ( 0.00%) 18.00 (-50.00%) 17.00 (-41.67%)
Range 5 16.00 ( 0.00%) 34.00 (-112.50%) 27.00 (-68.75%)
Range 8 34.00 ( 0.00%) 23.00 ( 32.35%) 21.00 ( 38.24%)
Range 13 47.00 ( 0.00%) 11.00 ( 76.60%) 9.00 ( 80.85%)
Range 16 50.00 ( 0.00%) 12.00 ( 76.00%) 11.00 ( 78.00%)
Stddev 1 1.46 ( 0.00%) 1.58 ( 8.37%) 1.24 (-15.19%)
Stddev 2 1.47 ( 0.00%) 4.11 (180.46%) 2.65 ( 80.77%)
Stddev 3 2.00 ( 0.00%) 3.61 ( 80.40%) 2.73 ( 36.59%)
Stddev 5 2.36 ( 0.00%) 4.71 (100.05%) 5.01 (112.85%)
Stddev 8 7.03 ( 0.00%) 4.54 (-35.42%) 4.08 (-41.92%)
Stddev 13 6.80 ( 0.00%) 2.28 (-66.48%) 1.67 (-75.53%)
Stddev 16 7.36 ( 0.00%) 2.71 (-63.22%) 2.14 (-70.93%)
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
User 3181.72 3640.00 4234.63
System 3043.31 2746.05 3606.61
Elapsed 31871.22 34678.69 38131.45
And this shows for two Ivybridge processors that the select of shift value
gives different results. I wonder was that taken into account. This is
showing that we see big relative regressions for lower number of threads
*but* the absolute difference between them is very small. There are
relatively big gains for higher numbers of threads *and* big absolute
gains. The worst case is far less worse with the series applied at least
for randomly selected ranges to flush.
Because we lack data on TLB range flush distributions I think we should
still go with the conservative choice for the TLB flush shift. The worst
case is really bad here and it's painfully obvious on ebizzy.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-19 14:24 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-19 14:24 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Tue, Dec 17, 2013 at 12:00:51PM +0100, Ingo Molnar wrote:
> > It's eliminated for one machine and reduced for another.
> >
> > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > ---
> > kernel/sched/core.c | 5 +++--
> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index e85cda2..a848254 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -4902,6 +4902,7 @@ DEFINE_PER_CPU(struct sched_domain *, sd_asym);
> > static void update_top_cache_domain(int cpu)
> > {
> > struct sched_domain *sd;
> > + struct sched_domain *busy_sd = NULL;
> > int id = cpu;
> > int size = 1;
> >
> > @@ -4909,9 +4910,9 @@ static void update_top_cache_domain(int cpu)
> > if (sd) {
> > id = cpumask_first(sched_domain_span(sd));
> > size = cpumask_weight(sched_domain_span(sd));
> > - sd = sd->parent; /* sd_busy */
> > + busy_sd = sd->parent; /* sd_busy */
> > }
> > - rcu_assign_pointer(per_cpu(sd_busy, cpu), sd);
> > + rcu_assign_pointer(per_cpu(sd_busy, cpu), busy_sd);
> >
> > rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> > per_cpu(sd_llc_size, cpu) = size;
>
> Indeed that makes a lot of sense, thanks Mel for tracking down this
> part of the puzzle! Will get your fix to Linus ASAP.
>
> Does this fix also speed up Ebizzy's transaction performance, or is
> its main effect a reduction in workload variation noise?
>
> Also it appears the Ebizzy numbers ought to be stable enough now to
> make the range-TLB-flush measurements more precise?
Ok, so the results on this question finally came in. I still have not
profiled this due to other bugs in flight.
fixsd-v3r4 is only the scheduling domain fix
shift-v3r4 is this series, including the tlbshift flush change
8-core machine
ebizzy performance
3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
vanilla vanilla fixsd-v3r4 shift-v3r4
Mean 1 7295.77 ( 0.00%) 6713.32 ( -7.98%) 7320.71 ( 0.34%) 7744.07 ( 6.14%)
Mean 2 8252.58 ( 0.00%) 8334.43 ( 0.99%) 8233.29 ( -0.23%) 9451.07 ( 14.52%)
Mean 3 8179.74 ( 0.00%) 8134.42 ( -0.55%) 8137.38 ( -0.52%) 8947.15 ( 9.38%)
Mean 4 7862.45 ( 0.00%) 7966.27 ( 1.32%) 7837.52 ( -0.32%) 8594.52 ( 9.31%)
Mean 5 7170.24 ( 0.00%) 7820.63 ( 9.07%) 7086.82 ( -1.16%) 8222.22 ( 14.67%)
Mean 6 6835.10 ( 0.00%) 7773.30 ( 13.73%) 6822.95 ( -0.18%) 7863.05 ( 15.04%)
Mean 7 6740.99 ( 0.00%) 7712.45 ( 14.41%) 6697.30 ( -0.65%) 7537.98 ( 11.82%)
Mean 8 6494.01 ( 0.00%) 7705.62 ( 18.66%) 6449.95 ( -0.68%) 6848.89 ( 5.46%)
Mean 12 6567.37 ( 0.00%) 7554.82 ( 15.04%) 6106.56 ( -7.02%) 6515.51 ( -0.79%)
Mean 16 6630.26 ( 0.00%) 7331.04 ( 10.57%) 5999.57 ( -9.51%) 6410.09 ( -3.32%)
Range 1 767.00 ( 0.00%) 661.00 ( 13.82%) 182.00 ( 76.27%) 243.00 ( 68.32%)
Range 2 178.00 ( 0.00%) 592.00 (-232.58%) 200.00 (-12.36%) 376.00 (-111.24%)
Range 3 175.00 ( 0.00%) 431.00 (-146.29%) 225.00 (-28.57%) 522.00 (-198.29%)
Range 4 806.00 ( 0.00%) 542.00 ( 32.75%) 878.00 ( -8.93%) 478.00 ( 40.69%)
Range 5 544.00 ( 0.00%) 444.00 ( 18.38%) 893.00 (-64.15%) 576.00 ( -5.88%)
Range 6 399.00 ( 0.00%) 528.00 (-32.33%) 669.00 (-67.67%) 1134.00 (-184.21%)
Range 7 629.00 ( 0.00%) 467.00 ( 25.76%) 517.00 ( 17.81%) 870.00 (-38.31%)
Range 8 400.00 ( 0.00%) 435.00 ( -8.75%) 309.00 ( 22.75%) 441.00 (-10.25%)
Range 12 233.00 ( 0.00%) 330.00 (-41.63%) 260.00 (-11.59%) 314.00 (-34.76%)
Range 16 141.00 ( 0.00%) 496.00 (-251.77%) 127.00 ( 9.93%) 156.00 (-10.64%)
Stddev 1 73.94 ( 0.00%) 177.17 (-139.59%) 33.77 ( 54.32%) 40.82 ( 44.80%)
Stddev 2 23.47 ( 0.00%) 88.91 (-278.74%) 30.60 (-30.35%) 44.64 (-90.17%)
Stddev 3 36.48 ( 0.00%) 101.07 (-177.05%) 41.76 (-14.47%) 114.25 (-213.16%)
Stddev 4 158.37 ( 0.00%) 130.52 ( 17.59%) 178.91 (-12.97%) 114.66 ( 27.60%)
Stddev 5 116.74 ( 0.00%) 78.31 ( 32.92%) 213.76 (-83.10%) 105.69 ( 9.47%)
Stddev 6 66.34 ( 0.00%) 87.79 (-32.33%) 103.69 (-56.30%) 238.52 (-259.54%)
Stddev 7 145.62 ( 0.00%) 90.52 ( 37.84%) 126.49 ( 13.14%) 170.51 (-17.09%)
Stddev 8 68.51 ( 0.00%) 81.11 (-18.39%) 45.73 ( 33.25%) 65.11 ( 4.96%)
Stddev 12 32.15 ( 0.00%) 65.74 (-104.50%) 37.52 (-16.72%) 46.79 (-45.53%)
Stddev 16 21.59 ( 0.00%) 86.42 (-300.25%) 26.05 (-20.67%) 37.20 (-72.28%)
Scheduling fix on its own makes little difference and hurts ebizzy if
anything. However, the patch is clearly the right thing to do and we can
still see the tlb flush shift change is required for good results.
As for the stability
8-core machine
ebizzy Thread spread
3.13.0-rc3 3.4.69 3.13.0-rc3 3.13.0-rc3
vanilla vanilla fixsd-v3r4 shift-v3r4
Mean 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Mean 2 0.40 ( 0.00%) 0.13 ( 67.50%) 0.50 (-25.00%) 0.24 ( 40.00%)
Mean 3 23.73 ( 0.00%) 0.26 ( 98.90%) 19.80 ( 16.56%) 1.03 ( 95.66%)
Mean 4 12.79 ( 0.00%) 0.67 ( 94.76%) 7.92 ( 38.08%) 1.20 ( 90.62%)
Mean 5 13.08 ( 0.00%) 0.36 ( 97.25%) 102.28 (-681.96%) 5.86 ( 55.20%)
Mean 6 23.21 ( 0.00%) 1.13 ( 95.13%) 13.61 ( 41.36%) 92.37 (-297.98%)
Mean 7 15.85 ( 0.00%) 1.51 ( 90.47%) 9.48 ( 40.19%) 131.49 (-729.59%)
Mean 8 109.37 ( 0.00%) 1.05 ( 99.04%) 7.37 ( 93.26%) 19.75 ( 81.94%)
Mean 12 124.84 ( 0.00%) 0.59 ( 99.53%) 27.32 ( 78.12%) 34.32 ( 72.51%)
Mean 16 113.50 ( 0.00%) 0.49 ( 99.57%) 20.02 ( 82.36%) 28.57 ( 74.83%)
Range 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Range 2 3.00 ( 0.00%) 1.00 ( 66.67%) 2.00 ( 33.33%) 1.00 ( 66.67%)
Range 3 80.00 ( 0.00%) 1.00 ( 98.75%) 87.00 ( -8.75%) 21.00 ( 73.75%)
Range 4 38.00 ( 0.00%) 2.00 ( 94.74%) 39.00 ( -2.63%) 5.00 ( 86.84%)
Range 5 37.00 ( 0.00%) 1.00 ( 97.30%) 368.00 (-894.59%) 50.00 (-35.14%)
Range 6 46.00 ( 0.00%) 8.00 ( 82.61%) 39.00 ( 15.22%) 876.00 (-1804.35%)
Range 7 28.00 ( 0.00%) 36.00 (-28.57%) 21.00 ( 25.00%) 649.00 (-2217.86%)
Range 8 325.00 ( 0.00%) 26.00 ( 92.00%) 11.00 ( 96.62%) 74.00 ( 77.23%)
Range 12 160.00 ( 0.00%) 5.00 ( 96.88%) 39.00 ( 75.62%) 47.00 ( 70.62%)
Range 16 108.00 ( 0.00%) 1.00 ( 99.07%) 29.00 ( 73.15%) 34.00 ( 68.52%)
Stddev 1 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%) 0.00 ( 0.00%)
Stddev 2 0.62 ( 0.00%) 0.34 (-45.44%) 0.66 ( 6.38%) 0.43 (-30.72%)
Stddev 3 17.40 ( 0.00%) 0.44 (-97.48%) 16.54 ( -4.96%) 2.43 (-86.03%)
Stddev 4 8.52 ( 0.00%) 0.51 (-94.00%) 7.81 ( -8.38%) 0.84 (-90.18%)
Stddev 5 7.91 ( 0.00%) 0.48 (-93.93%) 105.16 (1229.65%) 9.00 ( 13.74%)
Stddev 6 7.11 ( 0.00%) 1.48 (-79.18%) 7.20 ( 1.17%) 124.99 (1657.37%)
Stddev 7 5.90 ( 0.00%) 4.12 (-30.24%) 4.28 (-27.41%) 110.32 (1769.33%)
Stddev 8 80.95 ( 0.00%) 2.65 (-96.72%) 2.63 (-96.76%) 10.01 (-87.64%)
Stddev 12 31.48 ( 0.00%) 0.66 (-97.89%) 12.20 (-61.24%) 13.06 (-58.50%)
Stddev 16 24.32 ( 0.00%) 0.50 (-97.94%) 8.96 (-63.18%) 9.56 (-60.70%)
The spread is much improved but still less stable than 3.4 was so something
weird is still going on there and the tlb flush measurements are still a
bit questionable.
Still, I had queued up long-lived tests with more thread counts to
measure the impact and found this
4-core
tlbflush
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
Mean 1 10.68 ( 0.00%) 10.27 ( 3.83%) 10.45 ( 2.11%)
Mean 2 11.02 ( 0.00%) 18.62 (-68.97%) 22.57 (-104.79%)
Mean 3 22.73 ( 0.00%) 22.95 ( -0.99%) 22.10 ( 2.76%)
Mean 5 51.06 ( 0.00%) 47.20 ( 7.56%) 46.45 ( 9.03%)
Mean 8 82.62 ( 0.00%) 43.67 ( 47.15%) 42.72 ( 48.29%)
Range 1 6.00 ( 0.00%) 8.00 (-33.33%) 8.00 (-33.33%)
Range 2 17.00 ( 0.00%) 52.00 (-205.88%) 49.00 (-188.24%)
Range 3 15.00 ( 0.00%) 24.00 (-60.00%) 24.00 (-60.00%)
Range 5 36.00 ( 0.00%) 35.00 ( 2.78%) 21.00 ( 41.67%)
Range 8 49.00 ( 0.00%) 10.00 ( 79.59%) 15.00 ( 69.39%)
Stddev 1 0.95 ( 0.00%) 1.28 ( 35.11%) 0.87 ( -7.82%)
Stddev 2 1.67 ( 0.00%) 15.21 (812.95%) 16.25 (875.62%)
Stddev 3 2.53 ( 0.00%) 3.42 ( 35.13%) 3.05 ( 20.61%)
Stddev 5 4.25 ( 0.00%) 4.31 ( 1.37%) 3.65 (-14.16%)
Stddev 8 5.71 ( 0.00%) 1.88 (-67.09%) 1.71 (-70.12%)
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
User 804.88 900.31 1057.53
System 526.53 507.57 578.95
Elapsed 12629.24 14931.78 17925.47
There are 320 iterations of the test per thread count. The number of
entries is randomly selected with a min of 1 and max of 512. To ensure
a reasonably even spread of entries, the full range is broken up into 8
sections and a random number selected within that section.
iteration 1, random number between 0-64
iteration 2, random number between 64-128 etc
This is actually still a very weak methodology. When you do not know what
are typical ranges, random is a reasonable choice but it can be easily
argued that the opimisation was for smaller ranges and an even spread is
not representative of any workload that matters. To improve this, we'd
need to know the probability distribution of TLB flush range sizes for a
set of workloads that are considered "common", build a synthetic trace and
feed that into this benchmark. Even that is not perfect because it would
not account for the time between flushes but there are limits of what can
be reasonably done and still be doing something useful. Alex or Peter,
was there any specific methodology used for selecting the ranges to be
flushed by the microbenchmark?
Anyway, random ranges on the 4-core machine showed that the conservative
choice was a good one in many cases. Two threads seems to be screwed implying
that fixing the scheduling domain may have meant we are frequently sending
an IPI to a relatively remote core. It's a separate issue because that
smacks of being a pure scheduling problem.
8-core
tlbflush
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
Mean 1 8.78 ( 0.00%) 9.54 ( -8.65%) 9.46 ( -7.76%)
Mean 2 8.19 ( 0.00%) 9.54 (-16.44%) 9.43 (-15.03%)
Mean 3 8.86 ( 0.00%) 9.95 (-12.39%) 9.81 (-10.80%)
Mean 5 13.38 ( 0.00%) 14.67 ( -9.60%) 15.51 (-15.93%)
Mean 8 32.97 ( 0.00%) 40.88 (-24.02%) 38.91 (-18.04%)
Mean 13 68.47 ( 0.00%) 32.10 ( 53.12%) 31.38 ( 54.16%)
Mean 16 86.15 ( 0.00%) 40.10 ( 53.46%) 39.04 ( 54.68%)
Range 1 7.00 ( 0.00%) 8.00 (-14.29%) 7.00 ( 0.00%)
Range 2 6.00 ( 0.00%) 38.00 (-533.33%) 36.00 (-500.00%)
Range 3 12.00 ( 0.00%) 18.00 (-50.00%) 17.00 (-41.67%)
Range 5 16.00 ( 0.00%) 34.00 (-112.50%) 27.00 (-68.75%)
Range 8 34.00 ( 0.00%) 23.00 ( 32.35%) 21.00 ( 38.24%)
Range 13 47.00 ( 0.00%) 11.00 ( 76.60%) 9.00 ( 80.85%)
Range 16 50.00 ( 0.00%) 12.00 ( 76.00%) 11.00 ( 78.00%)
Stddev 1 1.46 ( 0.00%) 1.58 ( 8.37%) 1.24 (-15.19%)
Stddev 2 1.47 ( 0.00%) 4.11 (180.46%) 2.65 ( 80.77%)
Stddev 3 2.00 ( 0.00%) 3.61 ( 80.40%) 2.73 ( 36.59%)
Stddev 5 2.36 ( 0.00%) 4.71 (100.05%) 5.01 (112.85%)
Stddev 8 7.03 ( 0.00%) 4.54 (-35.42%) 4.08 (-41.92%)
Stddev 13 6.80 ( 0.00%) 2.28 (-66.48%) 1.67 (-75.53%)
Stddev 16 7.36 ( 0.00%) 2.71 (-63.22%) 2.14 (-70.93%)
3.13.0-rc3 3.13.0-rc3 3.13.0-rc3
vanilla fixsd-v3r4 shift-v3r4
User 3181.72 3640.00 4234.63
System 3043.31 2746.05 3606.61
Elapsed 31871.22 34678.69 38131.45
And this shows for two Ivybridge processors that the select of shift value
gives different results. I wonder was that taken into account. This is
showing that we see big relative regressions for lower number of threads
*but* the absolute difference between them is very small. There are
relatively big gains for higher numbers of threads *and* big absolute
gains. The worst case is far less worse with the series applied at least
for randomly selected ranges to flush.
Because we lack data on TLB range flush distributions I think we should
still go with the conservative choice for the TLB flush shift. The worst
case is really bad here and it's painfully obvious on ebizzy.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-18 7:28 ` Fengguang Wu
@ 2013-12-19 14:34 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-19 14:34 UTC (permalink / raw)
To: Fengguang Wu
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Wed, Dec 18, 2013 at 03:28:14PM +0800, Fengguang Wu wrote:
> Hi Mel,
>
> I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
>
> Basically there are
>
> 1) no big performance changes
>
> 76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
> 407038 +1.2% 412032 TOTAL hackbench.throughput
> 50307 -1.5% 49549 TOTAL ebizzy.throughput
>
I'm assuming this was an ivybridge processor. How many threads were ebizzy
tested with? The memory ranges used by the vm scalability benchmarks are
probably too large to be affected by the series but I'm guessing. I doubt
hackbench is doing any flushes and the 1.2% is noise.
> 2) huge proc-vmstat.nr_tlb_* increases
>
> 99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
> 3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
> 3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
> 5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
>
The accounting changes can be mostly explained by "x86: mm: Clean up
inconsistencies when flushing TLB ranges". flush_all was simply not
being counted before so I would claim that the old figure was simply
wrong and did not reflect reality.
Alterations on when range versus global flushes would affect the other
counters but arguably it's now behaving as originally intended by the tlb
flush shift.
> Here are the detailed numbers. eabb1f89905a0c809d13 is the HEAD commit
> with 4 patches applied. The "~ N%" notations are the stddev percent.
> The "[+-] N%" notations are the increase/decrease percent. The
> brickland2, lkp-snb01, lkp-ib03 etc. are testbox names.
>
Are positive numbers always better? If so, most of these figures look good
to me and support the series being merged. Please speak up if that is in
error.
I do see a few major regressions like this
> 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
but I have no idea what the test is doing and whether something happened
that the test broke that time or if it's something to be really
concerned about.
Thanks
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-19 14:34 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-19 14:34 UTC (permalink / raw)
To: Fengguang Wu
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Wed, Dec 18, 2013 at 03:28:14PM +0800, Fengguang Wu wrote:
> Hi Mel,
>
> I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
>
> Basically there are
>
> 1) no big performance changes
>
> 76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
> 407038 +1.2% 412032 TOTAL hackbench.throughput
> 50307 -1.5% 49549 TOTAL ebizzy.throughput
>
I'm assuming this was an ivybridge processor. How many threads were ebizzy
tested with? The memory ranges used by the vm scalability benchmarks are
probably too large to be affected by the series but I'm guessing. I doubt
hackbench is doing any flushes and the 1.2% is noise.
> 2) huge proc-vmstat.nr_tlb_* increases
>
> 99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
> 3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
> 3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
> 5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
>
The accounting changes can be mostly explained by "x86: mm: Clean up
inconsistencies when flushing TLB ranges". flush_all was simply not
being counted before so I would claim that the old figure was simply
wrong and did not reflect reality.
Alterations on when range versus global flushes would affect the other
counters but arguably it's now behaving as originally intended by the tlb
flush shift.
> Here are the detailed numbers. eabb1f89905a0c809d13 is the HEAD commit
> with 4 patches applied. The "~ N%" notations are the stddev percent.
> The "[+-] N%" notations are the increase/decrease percent. The
> brickland2, lkp-snb01, lkp-ib03 etc. are testbox names.
>
Are positive numbers always better? If so, most of these figures look good
to me and support the series being merged. Please speak up if that is in
error.
I do see a few major regressions like this
> 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
but I have no idea what the test is doing and whether something happened
that the test broke that time or if it's something to be really
concerned about.
Thanks
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-19 14:24 ` Mel Gorman
@ 2013-12-19 16:49 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-19 16:49 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> [...]
>
> Because we lack data on TLB range flush distributions I think we
> should still go with the conservative choice for the TLB flush
> shift. The worst case is really bad here and it's painfully obvious
> on ebizzy.
So I'm obviously much in favor of this - I'd in fact suggest making
the conservative choice on _all_ CPU models that have aggressive TLB
range values right now, because frankly the testing used to pick those
values does not look all that convincing to me.
I very much suspect that the problem goes wider than just IvyBridge
CPUs ... it's just that few people put as much testing into it as you.
We can certainly get more aggressive in the future, subject to proper
measurements.
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-19 16:49 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-19 16:49 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> [...]
>
> Because we lack data on TLB range flush distributions I think we
> should still go with the conservative choice for the TLB flush
> shift. The worst case is really bad here and it's painfully obvious
> on ebizzy.
So I'm obviously much in favor of this - I'd in fact suggest making
the conservative choice on _all_ CPU models that have aggressive TLB
range values right now, because frankly the testing used to pick those
values does not look all that convincing to me.
I very much suspect that the problem goes wider than just IvyBridge
CPUs ... it's just that few people put as much testing into it as you.
We can certainly get more aggressive in the future, subject to proper
measurements.
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-19 16:49 ` Ingo Molnar
@ 2013-12-20 11:13 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 11:13 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > [...]
> >
> > Because we lack data on TLB range flush distributions I think we
> > should still go with the conservative choice for the TLB flush
> > shift. The worst case is really bad here and it's painfully obvious
> > on ebizzy.
>
> So I'm obviously much in favor of this - I'd in fact suggest making
> the conservative choice on _all_ CPU models that have aggressive TLB
> range values right now, because frankly the testing used to pick those
> values does not look all that convincing to me.
>
I think the choices there are already reasonably conservative. I'd be
reluctant to support merging a patch that made a choice on all CPU models
without having access to the machines to run tests on. I don't see the
Intel people volunteering to do the necessary testing.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-20 11:13 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 11:13 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > [...]
> >
> > Because we lack data on TLB range flush distributions I think we
> > should still go with the conservative choice for the TLB flush
> > shift. The worst case is really bad here and it's painfully obvious
> > on ebizzy.
>
> So I'm obviously much in favor of this - I'd in fact suggest making
> the conservative choice on _all_ CPU models that have aggressive TLB
> range values right now, because frankly the testing used to pick those
> values does not look all that convincing to me.
>
I think the choices there are already reasonably conservative. I'd be
reluctant to support merging a patch that made a choice on all CPU models
without having access to the machines to run tests on. I don't see the
Intel people volunteering to do the necessary testing.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-20 11:13 ` Mel Gorman
@ 2013-12-20 11:18 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-20 11:18 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
> >
> > * Mel Gorman <mgorman@suse.de> wrote:
> >
> > > [...]
> > >
> > > Because we lack data on TLB range flush distributions I think we
> > > should still go with the conservative choice for the TLB flush
> > > shift. The worst case is really bad here and it's painfully obvious
> > > on ebizzy.
> >
> > So I'm obviously much in favor of this - I'd in fact suggest
> > making the conservative choice on _all_ CPU models that have
> > aggressive TLB range values right now, because frankly the testing
> > used to pick those values does not look all that convincing to me.
>
> I think the choices there are already reasonably conservative. I'd
> be reluctant to support merging a patch that made a choice on all
> CPU models without having access to the machines to run tests on. I
> don't see the Intel people volunteering to do the necessary testing.
So based on this thread I lost confidence in test results on all CPU
models but the one you tested.
I see two workable options right now:
- We turn the feature off on all other CPU models, until someone
measures and tunes them reliably.
or
- We make all tunings that are more aggressive than yours to match
yours. In the future people can measure and argue for more
aggressive tunings.
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-20 11:18 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-20 11:18 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
> >
> > * Mel Gorman <mgorman@suse.de> wrote:
> >
> > > [...]
> > >
> > > Because we lack data on TLB range flush distributions I think we
> > > should still go with the conservative choice for the TLB flush
> > > shift. The worst case is really bad here and it's painfully obvious
> > > on ebizzy.
> >
> > So I'm obviously much in favor of this - I'd in fact suggest
> > making the conservative choice on _all_ CPU models that have
> > aggressive TLB range values right now, because frankly the testing
> > used to pick those values does not look all that convincing to me.
>
> I think the choices there are already reasonably conservative. I'd
> be reluctant to support merging a patch that made a choice on all
> CPU models without having access to the machines to run tests on. I
> don't see the Intel people volunteering to do the necessary testing.
So based on this thread I lost confidence in test results on all CPU
models but the one you tested.
I see two workable options right now:
- We turn the feature off on all other CPU models, until someone
measures and tunes them reliably.
or
- We make all tunings that are more aggressive than yours to match
yours. In the future people can measure and argue for more
aggressive tunings.
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-20 11:18 ` Ingo Molnar
@ 2013-12-20 12:00 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 12:00 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Fri, Dec 20, 2013 at 12:18:18PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
> > >
> > > * Mel Gorman <mgorman@suse.de> wrote:
> > >
> > > > [...]
> > > >
> > > > Because we lack data on TLB range flush distributions I think we
> > > > should still go with the conservative choice for the TLB flush
> > > > shift. The worst case is really bad here and it's painfully obvious
> > > > on ebizzy.
> > >
> > > So I'm obviously much in favor of this - I'd in fact suggest
> > > making the conservative choice on _all_ CPU models that have
> > > aggressive TLB range values right now, because frankly the testing
> > > used to pick those values does not look all that convincing to me.
> >
> > I think the choices there are already reasonably conservative. I'd
> > be reluctant to support merging a patch that made a choice on all
> > CPU models without having access to the machines to run tests on. I
> > don't see the Intel people volunteering to do the necessary testing.
>
> So based on this thread I lost confidence in test results on all CPU
> models but the one you tested.
>
> I see two workable options right now:
>
> - We turn the feature off on all other CPU models, until someone
> measures and tunes them reliably.
>
That would mean setting tlb_flushall_shift to -1. I think it's overkill
but it's not really my call.
HPA?
> or
>
> - We make all tunings that are more aggressive than yours to match
> yours. In the future people can measure and argue for more
> aggressive tunings.
>
I'm missing something obvious because switching the default to 2 will use
individual page flushes more aggressively which I do not think was your
intent. The basic check is
if (tlb_flushall_shift == -1)
flush all
act_entries = tlb_entries >> tlb_flushall_shift;
nr_base_pages = range to flush
if (nr_base_pages > act_entries)
flush all
else
flush individual pages
Full mm flush is the "safe" bet
tlb_flushall_shift == -1 Always use flush all
tlb_flushall_shift == 1 Aggressively use individual flushes
tlb_flushall_shift == 6 Conservatively use individual flushes
IvyBridge was too aggressive using individual flushes and my patch makes
it less aggressive.
Intel's code for this currently looks like
switch ((c->x86 << 8) + c->x86_model) {
case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
case 0x61d: /* six-core 45 nm xeon "Dunnington" */
tlb_flushall_shift = -1;
break;
case 0x61a: /* 45 nm nehalem, "Bloomfield" */
case 0x61e: /* 45 nm nehalem, "Lynnfield" */
case 0x625: /* 32 nm nehalem, "Clarkdale" */
case 0x62c: /* 32 nm nehalem, "Gulftown" */
case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
case 0x62f: /* 32 nm Xeon E7 */
tlb_flushall_shift = 6;
break;
case 0x62a: /* SandyBridge */
case 0x62d: /* SandyBridge, "Romely-EP" */
tlb_flushall_shift = 5;
break;
case 0x63a: /* Ivybridge */
tlb_flushall_shift = 2;
break;
default:
tlb_flushall_shift = 6;
}
That default shift of "6" is already conservative which is why I don't
think we need to change anything there. AMD is slightly more aggressive
in their choices but not enough to panic.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-20 12:00 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 12:00 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Fri, Dec 20, 2013 at 12:18:18PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
> > >
> > > * Mel Gorman <mgorman@suse.de> wrote:
> > >
> > > > [...]
> > > >
> > > > Because we lack data on TLB range flush distributions I think we
> > > > should still go with the conservative choice for the TLB flush
> > > > shift. The worst case is really bad here and it's painfully obvious
> > > > on ebizzy.
> > >
> > > So I'm obviously much in favor of this - I'd in fact suggest
> > > making the conservative choice on _all_ CPU models that have
> > > aggressive TLB range values right now, because frankly the testing
> > > used to pick those values does not look all that convincing to me.
> >
> > I think the choices there are already reasonably conservative. I'd
> > be reluctant to support merging a patch that made a choice on all
> > CPU models without having access to the machines to run tests on. I
> > don't see the Intel people volunteering to do the necessary testing.
>
> So based on this thread I lost confidence in test results on all CPU
> models but the one you tested.
>
> I see two workable options right now:
>
> - We turn the feature off on all other CPU models, until someone
> measures and tunes them reliably.
>
That would mean setting tlb_flushall_shift to -1. I think it's overkill
but it's not really my call.
HPA?
> or
>
> - We make all tunings that are more aggressive than yours to match
> yours. In the future people can measure and argue for more
> aggressive tunings.
>
I'm missing something obvious because switching the default to 2 will use
individual page flushes more aggressively which I do not think was your
intent. The basic check is
if (tlb_flushall_shift == -1)
flush all
act_entries = tlb_entries >> tlb_flushall_shift;
nr_base_pages = range to flush
if (nr_base_pages > act_entries)
flush all
else
flush individual pages
Full mm flush is the "safe" bet
tlb_flushall_shift == -1 Always use flush all
tlb_flushall_shift == 1 Aggressively use individual flushes
tlb_flushall_shift == 6 Conservatively use individual flushes
IvyBridge was too aggressive using individual flushes and my patch makes
it less aggressive.
Intel's code for this currently looks like
switch ((c->x86 << 8) + c->x86_model) {
case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
case 0x61d: /* six-core 45 nm xeon "Dunnington" */
tlb_flushall_shift = -1;
break;
case 0x61a: /* 45 nm nehalem, "Bloomfield" */
case 0x61e: /* 45 nm nehalem, "Lynnfield" */
case 0x625: /* 32 nm nehalem, "Clarkdale" */
case 0x62c: /* 32 nm nehalem, "Gulftown" */
case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
case 0x62f: /* 32 nm Xeon E7 */
tlb_flushall_shift = 6;
break;
case 0x62a: /* SandyBridge */
case 0x62d: /* SandyBridge, "Romely-EP" */
tlb_flushall_shift = 5;
break;
case 0x63a: /* Ivybridge */
tlb_flushall_shift = 2;
break;
default:
tlb_flushall_shift = 6;
}
That default shift of "6" is already conservative which is why I don't
think we need to change anything there. AMD is slightly more aggressive
in their choices but not enough to panic.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-20 12:00 ` Mel Gorman
@ 2013-12-20 12:20 ` Ingo Molnar
-1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-20 12:20 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> tlb_flushall_shift == -1 Always use flush all
> tlb_flushall_shift == 1 Aggressively use individual flushes
> tlb_flushall_shift == 6 Conservatively use individual flushes
>
> IvyBridge was too aggressive using individual flushes and my patch
> makes it less aggressive.
>
> Intel's code for this currently looks like
>
> switch ((c->x86 << 8) + c->x86_model) {
> case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
> case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
> case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
> case 0x61d: /* six-core 45 nm xeon "Dunnington" */
> tlb_flushall_shift = -1;
> break;
> case 0x61a: /* 45 nm nehalem, "Bloomfield" */
> case 0x61e: /* 45 nm nehalem, "Lynnfield" */
> case 0x625: /* 32 nm nehalem, "Clarkdale" */
> case 0x62c: /* 32 nm nehalem, "Gulftown" */
> case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
> case 0x62f: /* 32 nm Xeon E7 */
> tlb_flushall_shift = 6;
> break;
> case 0x62a: /* SandyBridge */
> case 0x62d: /* SandyBridge, "Romely-EP" */
> tlb_flushall_shift = 5;
> break;
> case 0x63a: /* Ivybridge */
> tlb_flushall_shift = 2;
> break;
> default:
> tlb_flushall_shift = 6;
> }
>
> That default shift of "6" is already conservative which is why I
> don't think we need to change anything there. AMD is slightly more
> aggressive in their choices but not enough to panic.
Lets face it, the per model tunings are most likely crap: the only
place where it significantly deviated from '6' was Ivybridge - and
there it was causing a regression.
With your patch we'll have 6 everywhere, except on SandyBridge where
it's slightly more agressive at 5 - which is probably noise.
So my argument is that we should use '6' _everywhere_ and do away with
the pretense that we do per model tunings...
Thanks,
Ingo
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-20 12:20 ` Ingo Molnar
0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2013-12-20 12:20 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
* Mel Gorman <mgorman@suse.de> wrote:
> tlb_flushall_shift == -1 Always use flush all
> tlb_flushall_shift == 1 Aggressively use individual flushes
> tlb_flushall_shift == 6 Conservatively use individual flushes
>
> IvyBridge was too aggressive using individual flushes and my patch
> makes it less aggressive.
>
> Intel's code for this currently looks like
>
> switch ((c->x86 << 8) + c->x86_model) {
> case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
> case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
> case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
> case 0x61d: /* six-core 45 nm xeon "Dunnington" */
> tlb_flushall_shift = -1;
> break;
> case 0x61a: /* 45 nm nehalem, "Bloomfield" */
> case 0x61e: /* 45 nm nehalem, "Lynnfield" */
> case 0x625: /* 32 nm nehalem, "Clarkdale" */
> case 0x62c: /* 32 nm nehalem, "Gulftown" */
> case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
> case 0x62f: /* 32 nm Xeon E7 */
> tlb_flushall_shift = 6;
> break;
> case 0x62a: /* SandyBridge */
> case 0x62d: /* SandyBridge, "Romely-EP" */
> tlb_flushall_shift = 5;
> break;
> case 0x63a: /* Ivybridge */
> tlb_flushall_shift = 2;
> break;
> default:
> tlb_flushall_shift = 6;
> }
>
> That default shift of "6" is already conservative which is why I
> don't think we need to change anything there. AMD is slightly more
> aggressive in their choices but not enough to panic.
Lets face it, the per model tunings are most likely crap: the only
place where it significantly deviated from '6' was Ivybridge - and
there it was causing a regression.
With your patch we'll have 6 everywhere, except on SandyBridge where
it's slightly more agressive at 5 - which is probably noise.
So my argument is that we should use '6' _everywhere_ and do away with
the pretense that we do per model tunings...
Thanks,
Ingo
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-20 12:20 ` Ingo Molnar
@ 2013-12-20 13:55 ` Mel Gorman
-1 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 13:55 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Fri, Dec 20, 2013 at 01:20:19PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > tlb_flushall_shift == -1 Always use flush all
> > tlb_flushall_shift == 1 Aggressively use individual flushes
> > tlb_flushall_shift == 6 Conservatively use individual flushes
> >
> > IvyBridge was too aggressive using individual flushes and my patch
> > makes it less aggressive.
> >
> > Intel's code for this currently looks like
> >
> > switch ((c->x86 << 8) + c->x86_model) {
> > case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
> > case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
> > case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
> > case 0x61d: /* six-core 45 nm xeon "Dunnington" */
> > tlb_flushall_shift = -1;
> > break;
> > case 0x61a: /* 45 nm nehalem, "Bloomfield" */
> > case 0x61e: /* 45 nm nehalem, "Lynnfield" */
> > case 0x625: /* 32 nm nehalem, "Clarkdale" */
> > case 0x62c: /* 32 nm nehalem, "Gulftown" */
> > case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
> > case 0x62f: /* 32 nm Xeon E7 */
> > tlb_flushall_shift = 6;
> > break;
> > case 0x62a: /* SandyBridge */
> > case 0x62d: /* SandyBridge, "Romely-EP" */
> > tlb_flushall_shift = 5;
> > break;
> > case 0x63a: /* Ivybridge */
> > tlb_flushall_shift = 2;
> > break;
> > default:
> > tlb_flushall_shift = 6;
> > }
> >
> > That default shift of "6" is already conservative which is why I
> > don't think we need to change anything there. AMD is slightly more
> > aggressive in their choices but not enough to panic.
>
> Lets face it, the per model tunings are most likely crap: the only
> place where it significantly deviated from '6' was Ivybridge - and
> there it was causing a regression.
>
> With your patch we'll have 6 everywhere, except on SandyBridge where
> it's slightly more agressive at 5 - which is probably noise.
>
> So my argument is that we should use '6' _everywhere_ and do away with
> the pretense that we do per model tunings...
>
Understood. I prototyped a suitable patch and stuck it in a queue. I
also took the libery of adding a patch that also reset IvyBridge to 6
out of curiousity. I'll post a suitable series once I have results.
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-20 13:55 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 13:55 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alex Shi, Thomas Gleixner, Andrew Morton,
Fengguang Wu, H Peter Anvin, Linux-X86, Linux-MM, LKML,
Peter Zijlstra
On Fri, Dec 20, 2013 at 01:20:19PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@suse.de> wrote:
>
> > tlb_flushall_shift == -1 Always use flush all
> > tlb_flushall_shift == 1 Aggressively use individual flushes
> > tlb_flushall_shift == 6 Conservatively use individual flushes
> >
> > IvyBridge was too aggressive using individual flushes and my patch
> > makes it less aggressive.
> >
> > Intel's code for this currently looks like
> >
> > switch ((c->x86 << 8) + c->x86_model) {
> > case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
> > case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
> > case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
> > case 0x61d: /* six-core 45 nm xeon "Dunnington" */
> > tlb_flushall_shift = -1;
> > break;
> > case 0x61a: /* 45 nm nehalem, "Bloomfield" */
> > case 0x61e: /* 45 nm nehalem, "Lynnfield" */
> > case 0x625: /* 32 nm nehalem, "Clarkdale" */
> > case 0x62c: /* 32 nm nehalem, "Gulftown" */
> > case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
> > case 0x62f: /* 32 nm Xeon E7 */
> > tlb_flushall_shift = 6;
> > break;
> > case 0x62a: /* SandyBridge */
> > case 0x62d: /* SandyBridge, "Romely-EP" */
> > tlb_flushall_shift = 5;
> > break;
> > case 0x63a: /* Ivybridge */
> > tlb_flushall_shift = 2;
> > break;
> > default:
> > tlb_flushall_shift = 6;
> > }
> >
> > That default shift of "6" is already conservative which is why I
> > don't think we need to change anything there. AMD is slightly more
> > aggressive in their choices but not enough to panic.
>
> Lets face it, the per model tunings are most likely crap: the only
> place where it significantly deviated from '6' was Ivybridge - and
> there it was causing a regression.
>
> With your patch we'll have 6 everywhere, except on SandyBridge where
> it's slightly more agressive at 5 - which is probably noise.
>
> So my argument is that we should use '6' _everywhere_ and do away with
> the pretense that we do per model tunings...
>
Understood. I prototyped a suitable patch and stuck it in a queue. I
also took the libery of adding a patch that also reset IvyBridge to 6
out of curiousity. I'll post a suitable series once I have results.
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-19 14:34 ` Mel Gorman
(?)
@ 2013-12-20 15:51 ` Fengguang Wu
2013-12-20 16:44 ` Mel Gorman
-1 siblings, 1 reply; 71+ messages in thread
From: Fengguang Wu @ 2013-12-20 15:51 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
[-- Attachment #1: Type: text/plain, Size: 5764 bytes --]
On Thu, Dec 19, 2013 at 02:34:50PM +0000, Mel Gorman wrote:
> On Wed, Dec 18, 2013 at 03:28:14PM +0800, Fengguang Wu wrote:
> > Hi Mel,
> >
> > I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
> >
> > Basically there are
> >
> > 1) no big performance changes
> >
> > 76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
> > 407038 +1.2% 412032 TOTAL hackbench.throughput
> > 50307 -1.5% 49549 TOTAL ebizzy.throughput
> >
>
> I'm assuming this was an ivybridge processor.
The test boxes brickland2 and lkp-ib03 are ivybridge; lkp-snb01 is sandybridge.
> How many threads were ebizzy tested with?
The below case has params string "400%-5-30", which means
nr_threads = 400% * nr_cpu = 4 * 48 = 192
iterations = 5
duration = 30
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
50307 ~ 1% -1.5% 49549 ~ 0% lkp-ib03/micro/ebizzy/400%-5-30
50307 -1.5% 49549 TOTAL ebizzy.throughput
> The memory ranges used by the vm scalability benchmarks are
> probably too large to be affected by the series but I'm guessing.
Do you mean these lines?
3345155 ~ 0% -0.3% 3335172 ~ 0% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
33249939 ~ 0% +3.3% 34336155 ~ 1% brickland2/micro/vm-scalability/1T-shm-pread-seq
The two cases run 128 threads/processes, each accessing randomly/sequentially
a 64GB shm file concurrently. Sorry the 16G/1T prefixes are somehow misleading.
> I doubt hackbench is doing any flushes and the 1.2% is noise.
Here are the proc-vmstat.nr_tlb_remote_flush numbers for hackbench:
513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
I tried rebuild kernels with distclean and this time got the below
hackbench changes. I'll queue the hackbench test in all our test boxes
to get a more complete evaluation.
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
232925 ~ 0% -8.4% 213339 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
232925 -8.4% 213339 TOTAL hackbench.throughput
This time, the ebizzy params are refreshed and the test case is
exercised in all our test machines. The results that have changed are:
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
873 ~ 0% +0.7% 879 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
873 ~ 0% +0.7% 879 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
873 ~ 0% +0.8% 880 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
49242 ~ 0% -1.2% 48650 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
26176 ~ 0% -1.6% 25760 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
2738 ~ 0% +0.2% 2744 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
80776 -1.2% 79793 TOTAL ebizzy.throughput
The full change set is attached.
> > 2) huge proc-vmstat.nr_tlb_* increases
> >
> > 99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
> > 3.812e+08 +2.2e+13% 8.393e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
> > 3.301e+08 +2.2e+13% 7.241e+19 TOTAL proc-vmstat.nr_tlb_remote_flush
> > 5990864 +1.2e+15% 7.032e+19 TOTAL proc-vmstat.nr_tlb_local_flush_all
> >
>
> The accounting changes can be mostly explained by "x86: mm: Clean up
> inconsistencies when flushing TLB ranges". flush_all was simply not
> being counted before so I would claim that the old figure was simply
> wrong and did not reflect reality.
>
> Alterations on when range versus global flushes would affect the other
> counters but arguably it's now behaving as originally intended by the tlb
> flush shift.
OK.
> > Here are the detailed numbers. eabb1f89905a0c809d13 is the HEAD commit
> > with 4 patches applied. The "~ N%" notations are the stddev percent.
> > The "[+-] N%" notations are the increase/decrease percent. The
> > brickland2, lkp-snb01, lkp-ib03 etc. are testbox names.
> >
>
> Are positive numbers always better?
Not necessarily. A positive change merely means the absolute numbers
of hackbench.throughput, ebizzy.throughput, etc. are increased in the
new kernel. But yes, for the above stats, it happen to be "the higher,
the better".
> If so, most of these figures look good to me and support the series
> being merged. Please speak up if that is in error.
Agreed, except that I'll need to re-evaluate the hackbench test case.
> I do see a few major regressions like this
>
> > 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
>
> but I have no idea what the test is doing and whether something happened
> that the test broke that time or if it's something to be really
> concerned about.
This test case simply creates sparse files, populate them with zeros,
then delete them in parallel. Here $mem is physical memory size 128G,
$nr_cpu is 120.
for i in `seq $nr_cpu`
do
create_sparse_file $SPARSE_FILE-$i $((mem / nr_cpu))
cp $SPARSE_FILE-$i /dev/null
done
for i in `seq $nr_cpu`
do
rm $SPARSE_FILE-$i &
done
Thanks,
Fengguang
[-- Attachment #2: eabb1f89905a0c809d13ec27795ced089c107eb8 --]
[-- Type: text/plain, Size: 74166 bytes --]
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
232925 ~ 0% -8.4% 213339 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
232925 -8.4% 213339 TOTAL hackbench.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
136.87 ~ 1% +4.4% 142.90 ~ 2% lkp-nex04/micro/ebizzy/400%-5-30
32.60 ~ 0% +0.8% 32.86 ~ 0% lkp-sb03/micro/ebizzy/200%-100-10
41.25 ~ 0% -1.9% 40.48 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
26.37 ~ 0% -1.2% 26.06 ~ 0% xps2/micro/ebizzy/200%-100-10
237.09 +2.2% 242.29 TOTAL ebizzy.time.user
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4934 ~ 0% +0.7% 4971 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
29583 ~ 0% +2.2% 30237 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_RR
34517 +2.0% 35208 TOTAL netperf.Throughput_tps
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
873 ~ 0% +0.7% 879 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
873 ~ 0% +0.7% 879 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
873 ~ 0% +0.8% 880 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
49242 ~ 0% -1.2% 48650 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
26176 ~ 0% -1.6% 25760 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
2738 ~ 0% +0.2% 2744 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
80776 -1.2% 79793 TOTAL ebizzy.throughput
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
27493 ~ 5% +4.1% 28614 ~ 0% lkp-nex05/micro/tlbflush/100%-512-320
27493 +4.1% 28614 TOTAL tlbflush.mem_acc_time_thread_ms
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1770.22 ~ 0% -0.4% 1763.99 ~ 0% lkp-nex04/micro/ebizzy/400%-5-30
286.57 ~ 0% -0.1% 286.30 ~ 0% lkp-sb03/micro/ebizzy/200%-100-10
594.92 ~ 0% +0.1% 595.68 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
53.35 ~ 0% +0.6% 53.67 ~ 0% xps2/micro/ebizzy/200%-100-10
2705.06 -0.2% 2699.64 TOTAL ebizzy.time.sys
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
427 ~16% -91.7% 35 ~ 3% avoton1/crypto/tcrypt/2s-505-509
7.141e+08 ~ 1% +1.3e+10% 9.593e+16 ~ 8% grantley/micro/ebizzy/200%-100-10
23867179 ~ 8% -100.0% 0 ~ 0% kbuildx/micro/ebizzy/200%-100-10
1230047 ~ 0% +2.6e+12% 3.186e+16 ~61% lkp-a04/micro/ebizzy/200%-100-10
256 ~10% +9.2e+16% 2.349e+17 ~27% lkp-a04/micro/netperf/120s-200%-TCP_STREAM
1.158e+09 ~ 0% +6.3e+09% 7.291e+16 ~45% lkp-ib03/micro/ebizzy/200%-100-10
2495 ~40% +1e+16% 2.545e+17 ~126% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
1537 ~ 2% +3.8e+16% 5.812e+17 ~81% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
1420 ~ 5% +5.9e+16% 8.376e+17 ~ 9% lkp-ib03/micro/netperf/120s-200%-TCP_RR
1751 ~16% +1e+18% 1.808e+19 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
1392 ~ 4% +2.4e+16% 3.3e+17 ~75% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
1534 ~ 6% +2e+16% 3.083e+17 ~76% lkp-ib03/micro/netperf/120s-200%-UDP_RR
25457451 ~ 2% +2.2e+11% 5.683e+16 ~21% lkp-nex04/micro/tlbflush/200%-512-320
3.545e+08 ~ 0% +7.3e+10% 2.601e+17 ~31% lkp-nex05/micro/ebizzy/200%-100-10
25434301 ~ 4% +8.5e+11% 2.173e+17 ~46% lkp-nex05/micro/tlbflush/100%-512-320
5.899e+08 ~ 0% +1.1e+10% 6.465e+16 ~32% lkp-sb03/micro/ebizzy/200%-100-10
8.239e+08 ~ 0% +1.7e+10% 1.426e+17 ~18% lkp-sbx04/micro/ebizzy/200%-100-10
5.979e+08 ~ 3% +1.1e+10% 6.421e+16 ~59% lkp-snb01/micro/ebizzy/200%-100-10
2018 ~ 2% +5.5e+15% 1.108e+17 ~19% lkp-snb01/micro/hackbench/1600%-process-pipe
2337 ~ 1% +6.8e+15% 1.596e+17 ~25% lkp-snb01/micro/hackbench/1600%-process-socket
238535 ~22% +1.1e+14% 2.564e+17 ~13% lkp-snb01/micro/hackbench/1600%-threads-pipe
308286 ~ 9% +5.9e+13% 1.827e+17 ~11% lkp-snb01/micro/hackbench/1600%-threads-socket
15 ~ 3% +8e+16% 1.249e+16 ~70% lkp-t410/micro/ebizzy/200%-100-10
21000804 ~ 0% +1.6e+11% 3.386e+16 ~63% nhm-white/sysbench/oltp/600s-100%-1000000
1.621e+08 ~ 0% +2.9e+10% 4.765e+16 ~42% nhm8/micro/ebizzy/200%-100-10
22806224 ~15% -100.0% 0 ~ 0% vpx/micro/ebizzy/200%-100-10
88288455 ~ 0% +3.9e+10% 3.42e+16 ~47% xps2/micro/ebizzy/200%-100-10
4.609e+09 +4.9e+11% 2.247e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
140 ~ 6% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
13005586 ~ 1% +7.2e+11% 9.398e+16 ~25% grantley/micro/ebizzy/200%-100-10
7994499 ~ 8% -100.0% 0 ~ 0% kbuildx/micro/ebizzy/200%-100-10
436762 ~ 0% +7.3e+12% 3.186e+16 ~61% lkp-a04/micro/ebizzy/200%-100-10
188 ~16% -100.0% 0 lkp-a04/micro/netperf/120s-200%-TCP_RR
24658539 ~ 0% +2.3e+11% 5.63e+16 ~26% lkp-ib03/micro/ebizzy/200%-100-10
230 ~15% +1.5e+17% 3.542e+17 ~116% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
196 ~10% +3.3e+17% 6.465e+17 ~51% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
219 ~ 4% +7.9e+12% 1.724e+13 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
160 ~15% +1.3e+17% 2.072e+17 ~92% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
192 ~14% +2.3e+17% 4.365e+17 ~11% lkp-ib03/micro/netperf/120s-200%-UDP_RR
20661685 ~ 0% +5e+10% 1.03e+16 ~70% lkp-ne04/micro/ebizzy/200%-100-10
411585 ~ 2% +9.9e+12% 4.055e+16 ~28% lkp-nex04/micro/tlbflush/200%-512-320
5657998 ~ 0% +3.8e+12% 2.176e+17 ~21% lkp-nex05/micro/ebizzy/200%-100-10
420583 ~ 4% +4.1e+13% 1.719e+17 ~68% lkp-nex05/micro/tlbflush/100%-512-320
19058842 ~ 0% +2.9e+11% 5.576e+16 ~29% lkp-sb03/micro/ebizzy/200%-100-10
13106426 ~ 0% +4.7e+11% 6.199e+16 ~40% lkp-sbx04/micro/ebizzy/200%-100-10
19314329 ~ 3% +2.6e+11% 5e+16 ~20% lkp-snb01/micro/ebizzy/200%-100-10
510 ~ 1% +2.9e+16% 1.468e+17 ~25% lkp-snb01/micro/hackbench/1600%-process-pipe
756 ~ 5% +1.9e+16% 1.424e+17 ~56% lkp-snb01/micro/hackbench/1600%-process-socket
19158 ~15% +1.6e+15% 2.983e+17 ~35% lkp-snb01/micro/hackbench/1600%-threads-pipe
20757 ~ 9% +7.1e+14% 1.478e+17 ~20% lkp-snb01/micro/hackbench/1600%-threads-socket
3659073 ~ 0% +1.1e+11% 4.106e+15 ~141% nhm-white/sysbench/oltp/600s-100%-1000000
14767833 ~ 0% +1.1e+11% 1.698e+16 ~126% nhm8/micro/ebizzy/200%-100-10
7639068 ~15% -100.0% 0 ~ 0% vpx/micro/ebizzy/200%-100-10
12652913 ~ 0% +1.7e+11% 2.104e+16 ~35% xps2/micro/ebizzy/200%-100-10
1.635e+08 +2e+12% 3.212e+18 TOTAL proc-vmstat.nr_tlb_remote_flush
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
269335 ~ 0% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
617727 ~ 0% +1.8e+13% 1.108e+17 ~29% grantley/micro/ebizzy/200%-100-10
321613 ~ 0% -100.0% 15 ~60% kbuildx/micro/ebizzy/200%-100-10
348216 ~ 0% +9.2e+12% 3.186e+16 ~61% lkp-a04/micro/ebizzy/200%-100-10
104866 ~ 1% -100.0% 0 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
104585 ~ 0% -100.0% 0 ~ 0% lkp-a04/micro/netperf/120s-200%-UDP_RR
773781 ~ 0% +7.7e+12% 5.962e+16 ~18% lkp-ib03/micro/ebizzy/200%-100-10
29318914 ~ 0% +1.1e+12% 3.254e+17 ~118% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
250366 ~ 0% +2.6e+14% 6.44e+17 ~53% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
249838 ~ 0% +1.2e+14% 2.999e+17 ~41% lkp-ib03/micro/netperf/120s-200%-TCP_RR
250035 ~ 0% +7.2e+15% 1.808e+19 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
247778 ~ 2% +1.2e+14% 2.903e+17 ~105% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
251020 ~ 0% +1.9e+14% 4.663e+17 ~ 4% lkp-ib03/micro/netperf/120s-200%-UDP_RR
1231993 ~ 2% +4.6e+12% 5.683e+16 ~21% lkp-nex04/micro/tlbflush/200%-512-320
842151 ~ 0% +2.3e+13% 1.94e+17 ~21% lkp-nex05/micro/ebizzy/200%-100-10
1327483 ~13% +1.5e+13% 2.036e+17 ~65% lkp-nex05/micro/tlbflush/100%-512-320
770590 ~ 0% +6.6e+12% 5.098e+16 ~68% lkp-sb03/micro/ebizzy/200%-100-10
926878 ~ 0% +8e+12% 7.44e+16 ~23% lkp-sbx04/micro/ebizzy/200%-100-10
787757 ~ 4% +8.3e+12% 6.524e+16 ~35% lkp-snb01/micro/ebizzy/200%-100-10
6467223 ~ 1% +2.5e+12% 1.607e+17 ~32% lkp-snb01/micro/hackbench/1600%-process-pipe
4375452 ~ 1% +8.2e+12% 3.583e+17 ~14% lkp-snb01/micro/hackbench/1600%-process-socket
1382546 ~ 0% +2e+13% 2.71e+17 ~37% lkp-snb01/micro/hackbench/1600%-threads-pipe
1122990 ~ 1% +4.3e+13% 4.775e+17 ~42% lkp-snb01/micro/hackbench/1600%-threads-socket
3781598 ~ 1% +6.2e+11% 2.342e+16 ~44% nhm-white/sysbench/oltp/600s-100%-1000000
320787 ~ 0% -100.0% 21 ~30% vpx/micro/ebizzy/200%-100-10
467399 ~ 0% +5.9e+12% 2.762e+16 ~30% xps2/micro/ebizzy/200%-100-10
56912929 +3.9e+13% 2.227e+19 TOTAL proc-vmstat.nr_tlb_local_flush_one
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
37 ~68% +401.8% 185 ~17% lkp-sbx04/micro/ebizzy/200%-100-10
37 +401.8% 185 TOTAL pagetypeinfo.Node1.Normal.Unmovable.3
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
87 ~62% +171.4% 237 ~13% lkp-sbx04/micro/ebizzy/200%-100-10
87 +171.4% 237 TOTAL buddyinfo.Node.1.zone.Normal.3
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
30206 ~ 0% -87.5% 3785 ~ 4% avoton1/crypto/tcrypt/2s-505-509
13077898 ~ 1% +7.4e+11% 9.631e+16 ~29% grantley/micro/ebizzy/200%-100-10
8072671 ~ 8% -99.9% 8577 ~17% kbuildx/micro/ebizzy/200%-100-10
477416 ~ 0% +6.7e+12% 3.186e+16 ~61% lkp-a04/micro/ebizzy/200%-100-10
10784 ~ 1% -46.1% 5810 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
10764 ~ 1% -47.5% 5647 ~ 0% lkp-a04/micro/netperf/120s-200%-UDP_RR
24695567 ~ 0% +2.3e+11% 5.754e+16 ~15% lkp-ib03/micro/ebizzy/200%-100-10
9211 ~ 0% +3.4e+15% 3.086e+17 ~96% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
9022 ~ 0% +6.9e+15% 6.269e+17 ~51% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
9070 ~ 0% +1.9e+11% 1.724e+13 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
8973 ~ 0% +4e+15% 3.578e+17 ~71% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
9076 ~ 0% +3.7e+15% 3.374e+17 ~24% lkp-ib03/micro/netperf/120s-200%-UDP_RR
391952 ~ 2% +1e+13% 4.055e+16 ~28% lkp-nex04/micro/tlbflush/200%-512-320
5700945 ~ 0% +4.2e+12% 2.383e+17 ~24% lkp-nex05/micro/ebizzy/200%-100-10
365605 ~ 2% +5.6e+13% 2.04e+17 ~50% lkp-nex05/micro/tlbflush/100%-512-320
19093987 ~ 0% +2.7e+11% 5.062e+16 ~22% lkp-sb03/micro/ebizzy/200%-100-10
13150807 ~ 0% +4.8e+11% 6.298e+16 ~ 7% lkp-sbx04/micro/ebizzy/200%-100-10
19350039 ~ 3% +2.4e+11% 4.708e+16 ~47% lkp-snb01/micro/ebizzy/200%-100-10
14838 ~ 1% +8.5e+14% 1.26e+17 ~20% lkp-snb01/micro/hackbench/1600%-process-pipe
11199 ~ 1% +1.1e+15% 1.239e+17 ~77% lkp-snb01/micro/hackbench/1600%-process-socket
17997 ~ 1% +1.2e+15% 2.167e+17 ~ 6% lkp-snb01/micro/hackbench/1600%-threads-pipe
9182 ~ 3% +2.5e+15% 2.326e+17 ~51% lkp-snb01/micro/hackbench/1600%-threads-socket
2509102 ~ 0% +1.2e+11% 3.087e+15 ~141% nhm-white/sysbench/oltp/600s-100%-1000000
14788965 ~ 0% +1.2e+11% 1.775e+16 ~118% nhm8/micro/ebizzy/200%-100-10
7717200 ~15% -99.9% 9909 ~12% vpx/micro/ebizzy/200%-100-10
12673616 ~ 0% +1.7e+11% 2.104e+16 ~35% xps2/micro/ebizzy/200%-100-10
1.422e+08 +2.3e+12% 3.201e+18 TOTAL proc-vmstat.nr_tlb_local_flush_all
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
414 ~22% -44.1% 231 ~13% grantley/micro/ebizzy/200%-100-10
211 ~43% +62.2% 342 ~37% lkp-nex04/micro/ebizzy/200%-100-10
272 ~47% -63.9% 98 ~42% lkp-sbx04/micro/ebizzy/200%-100-10
897 -25.1% 672 TOTAL pagetypeinfo.Node0.Normal.Unmovable.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
242 ~21% +108.5% 504 ~14% grantley/micro/ebizzy/200%-100-10
129 ~ 8% +120.6% 286 ~40% lkp-sbx04/micro/ebizzy/200%-100-10
371 +112.7% 790 TOTAL pagetypeinfo.Node1.Normal.Unmovable.1
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
346 ~34% -46.2% 186 ~13% grantley/micro/ebizzy/200%-100-10
182 ~45% +77.3% 322 ~33% lkp-nex04/micro/ebizzy/200%-100-10
203 ~49% -60.7% 80 ~14% lkp-sbx04/micro/ebizzy/200%-100-10
4006 ~ 3% +6.1% 4251 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
4739 +2.1% 4840 TOTAL pagetypeinfo.Node0.Normal.Unmovable.1
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
63 ~35% +161.4% 164 ~43% lkp-nex04/micro/tlbflush/200%-512-320
63 +161.4% 164 TOTAL numa-vmstat.node3.nr_dirtied
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
95 ~42% +72.0% 164 ~23% lkp-ib03/micro/netperf/120s-200%-TCP_RR
409 ~17% +35.8% 555 ~14% lkp-nex04/micro/ebizzy/200%-100-10
504 +42.6% 719 TOTAL buddyinfo.Node.0.zone.Normal.1
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
194 ~17% +125.0% 438 ~13% lkp-ne04/micro/ebizzy/200%-100-10
194 +125.0% 438 TOTAL slabinfo.ip_fib_trie.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
194 ~17% +125.0% 438 ~13% lkp-ne04/micro/ebizzy/200%-100-10
194 +125.0% 438 TOTAL slabinfo.ip_fib_trie.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
306 ~25% +72.6% 529 ~14% grantley/micro/ebizzy/200%-100-10
209 ~16% +82.2% 382 ~18% lkp-sbx04/micro/ebizzy/200%-100-10
516 +76.5% 911 TOTAL pagetypeinfo.Node1.Normal.Unmovable.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
238 ~20% +42.9% 340 ~ 7% kbuildx/micro/ebizzy/200%-100-10
119 ~20% +71.4% 204 ~ 0% xps2/micro/ebizzy/200%-100-10
357 +52.4% 544 TOTAL slabinfo.Acpi-State.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
238 ~20% +42.9% 340 ~ 7% kbuildx/micro/ebizzy/200%-100-10
119 ~20% +71.4% 204 ~ 0% xps2/micro/ebizzy/200%-100-10
357 +52.4% 544 TOTAL slabinfo.Acpi-State.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
549 ~21% -35.1% 356 ~ 8% grantley/micro/ebizzy/200%-100-10
348 ~24% +33.4% 465 ~23% lkp-nex04/micro/ebizzy/200%-100-10
898 -8.5% 821 TOTAL buddyinfo.Node.0.zone.Normal.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
32543 ~16% -38.6% 19990 ~10% lkp-nex04/micro/ebizzy/400%-5-30
32543 -38.6% 19990 TOTAL numa-meminfo.node2.Active(file)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
8135 ~16% -38.6% 4997 ~10% lkp-nex04/micro/ebizzy/400%-5-30
8135 -38.6% 4997 TOTAL numa-vmstat.node2.nr_active_file
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
59 ~35% +169.1% 159 ~45% lkp-nex04/micro/tlbflush/200%-512-320
123 ~29% -39.6% 74 ~35% lkp-sbx04/micro/ebizzy/200%-100-10
182 +28.3% 234 TOTAL numa-vmstat.node3.nr_written
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
915 ~23% +42.8% 1308 ~32% lkp-ib03/micro/netperf/120s-200%-TCP_RR
915 +42.8% 1308 TOTAL numa-vmstat.node1.nr_alloc_batch
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
3.643e+08 ~ 0% -52.2% 1.741e+08 ~ 2% lkp-snb01/micro/ebizzy/200%-100-10
52579091 ~ 1% -25.3% 39279438 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
4.169e+08 -48.8% 2.133e+08 TOTAL numa-numastat.node0.numa_foreign
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
5685 ~23% +50.2% 8539 ~ 5% lkp-nex04/micro/ebizzy/400%-5-30
5685 +50.2% 8539 TOTAL numa-vmstat.node3.nr_active_file
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
22743 ~23% +50.2% 34158 ~ 5% lkp-nex04/micro/ebizzy/400%-5-30
22743 +50.2% 34158 TOTAL numa-meminfo.node3.Active(file)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
214 ~21% -28.1% 154 ~19% lkp-nex04/micro/ebizzy/400%-5-30
214 -28.1% 154 TOTAL numa-vmstat.node3.nr_mlock
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
858 ~21% -28.0% 618 ~18% lkp-nex04/micro/ebizzy/400%-5-30
858 -28.0% 618 TOTAL numa-meminfo.node3.Mlocked
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
216 ~21% -28.0% 155 ~18% lkp-nex04/micro/ebizzy/400%-5-30
216 -28.0% 155 TOTAL numa-vmstat.node3.nr_unevictable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
446 ~18% +44.7% 645 ~10% grantley/micro/ebizzy/200%-100-10
340 ~11% +60.9% 548 ~16% lkp-sbx04/micro/ebizzy/200%-100-10
787 +51.7% 1193 TOTAL buddyinfo.Node.1.zone.Normal.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
865 ~21% -27.9% 623 ~18% lkp-nex04/micro/ebizzy/400%-5-30
865 -27.9% 623 TOTAL numa-meminfo.node3.Unevictable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
276 ~ 2% +32.5% 366 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
132 ~ 0% +70.0% 225 ~28% lkp-nex04/micro/tlbflush/200%-512-320
409 +44.7% 591 TOTAL numa-vmstat.node0.nr_mlock
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
31124 ~11% +17.8% 36650 ~ 4% grantley/micro/kbuild/200%
10553 ~19% +40.0% 14769 ~23% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
10428 ~ 1% +69.6% 17686 ~12% lkp-ne04/micro/ebizzy/200%-100-10
5602 ~28% +59.8% 8952 ~ 4% lkp-nex04/micro/ebizzy/400%-5-30
14038 ~22% -33.5% 9341 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
10900 ~ 8% +39.6% 15214 ~ 9% lkp-snb01/micro/hackbench/1600%-threads-pipe
82647 +24.2% 102613 TOTAL numa-vmstat.node1.nr_active_file
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
124266 ~11% +17.9% 146470 ~ 4% grantley/micro/kbuild/200%
42212 ~19% +40.0% 59078 ~23% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
41715 ~ 1% +69.6% 70749 ~12% lkp-ne04/micro/ebizzy/200%-100-10
22414 ~28% +59.8% 35810 ~ 4% lkp-nex04/micro/ebizzy/400%-5-30
56156 ~22% -33.5% 37364 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
43599 ~ 8% +39.6% 60856 ~ 9% lkp-snb01/micro/hackbench/1600%-threads-pipe
330364 +24.2% 410329 TOTAL numa-meminfo.node1.Active(file)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1109 ~ 2% +32.3% 1468 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
532 ~ 0% +69.4% 901 ~28% lkp-nex04/micro/tlbflush/200%-512-320
1642 +44.3% 2370 TOTAL numa-meminfo.node0.Mlocked
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
402 ~23% -26.8% 294 ~20% lkp-nex04/micro/ebizzy/200%-100-10
402 -26.8% 294 TOTAL pagetypeinfo.Node2.Normal.Unmovable.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
280 ~ 2% +31.8% 370 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
135 ~ 0% +67.9% 226 ~27% lkp-nex04/micro/tlbflush/200%-512-320
415 +43.5% 596 TOTAL numa-vmstat.node0.nr_unevictable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1124 ~ 2% +31.7% 1481 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
540 ~ 0% +68.1% 907 ~27% lkp-nex04/micro/tlbflush/200%-512-320
1664 +43.5% 2389 TOTAL numa-meminfo.node0.Unevictable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2159 ~ 5% +38.2% 2984 ~17% lkp-nex04/micro/tlbflush/200%-512-320
2159 +38.2% 2984 TOTAL numa-vmstat.node3.nr_active_anon
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
8646 ~ 6% +38.0% 11934 ~17% lkp-nex04/micro/tlbflush/200%-512-320
8646 +38.0% 11934 TOTAL numa-meminfo.node3.Active(anon)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
232 ~25% +23.8% 287 ~19% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
232 +23.8% 287 TOTAL slabinfo.skbuff_fclone_cache.num_slabs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
232 ~25% +23.8% 287 ~19% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
232 +23.8% 287 TOTAL slabinfo.skbuff_fclone_cache.active_slabs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
23452 ~11% +107.1% 48557 ~53% lkp-ib03/micro/netperf/120s-200%-TCP_RR
458900 ~10% -16.3% 383945 ~ 5% lkp-nex04/micro/ebizzy/400%-5-30
1.858e+08 ~ 4% -53.6% 86284522 ~ 2% lkp-snb01/micro/ebizzy/200%-100-10
27227372 ~ 2% -28.1% 19569200 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
2.135e+08 -50.2% 1.063e+08 TOTAL numa-vmstat.node1.numa_miss
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
133827 ~10% -15.3% 113311 ~ 4% grantley/micro/kbuild/200%
69106 ~11% -24.3% 52332 ~25% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
70063 ~ 0% -41.4% 41023 ~22% lkp-ne04/micro/ebizzy/200%-100-10
33244 ~19% -36.9% 20988 ~ 5% lkp-nex04/micro/ebizzy/400%-5-30
54487 ~23% +34.5% 73298 ~ 6% lkp-snb01/micro/hackbench/1600%-process-pipe
67116 ~ 5% -25.8% 49833 ~11% lkp-snb01/micro/hackbench/1600%-threads-pipe
427845 -18.0% 350786 TOTAL numa-meminfo.node0.Active(file)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
33498 ~10% -15.4% 28331 ~ 4% grantley/micro/kbuild/200%
17276 ~11% -24.3% 13082 ~25% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
17515 ~ 0% -41.5% 10255 ~22% lkp-ne04/micro/ebizzy/200%-100-10
8311 ~19% -36.9% 5246 ~ 5% lkp-nex04/micro/ebizzy/400%-5-30
13621 ~23% +34.5% 18324 ~ 6% lkp-snb01/micro/hackbench/1600%-process-pipe
16778 ~ 5% -25.8% 12458 ~11% lkp-snb01/micro/hackbench/1600%-threads-pipe
107001 -18.0% 87698 TOTAL numa-vmstat.node0.nr_active_file
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
23452 ~11% +107.0% 48540 ~53% lkp-ib03/micro/netperf/120s-200%-TCP_RR
1097094 ~ 2% -10.5% 981884 ~ 7% lkp-nex04/micro/ebizzy/400%-5-30
1.858e+08 ~ 4% -53.6% 86281642 ~ 2% lkp-snb01/micro/ebizzy/200%-100-10
27218147 ~ 2% -28.1% 19563337 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
2.142e+08 -50.1% 1.069e+08 TOTAL numa-vmstat.node0.numa_foreign
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
435 ~14% +55.3% 676 ~10% grantley/micro/ebizzy/200%-100-10
359 ~ 2% +47.9% 532 ~24% lkp-sbx04/micro/ebizzy/200%-100-10
795 +51.9% 1208 TOTAL buddyinfo.Node.1.zone.Normal.1
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2298 ~ 7% +32.2% 3037 ~16% lkp-nex04/micro/tlbflush/200%-512-320
2298 +32.2% 3037 TOTAL numa-vmstat.node3.nr_anon_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
9201 ~ 7% +32.0% 12148 ~15% lkp-nex04/micro/tlbflush/200%-512-320
9201 +32.0% 12148 TOTAL numa-meminfo.node3.AnonPages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
912988 ~ 9% -15.0% 776426 ~ 7% lkp-nex04/micro/ebizzy/400%-5-30
3.643e+08 ~ 0% -52.2% 1.741e+08 ~ 2% lkp-snb01/micro/ebizzy/200%-100-10
52579091 ~ 1% -25.3% 39279464 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
4.178e+08 -48.8% 2.141e+08 TOTAL numa-numastat.node1.numa_miss
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
912988 ~ 9% -15.0% 776426 ~ 7% lkp-nex04/micro/ebizzy/400%-5-30
3.643e+08 ~ 0% -52.2% 1.741e+08 ~ 2% lkp-snb01/micro/ebizzy/200%-100-10
52579040 ~ 1% -25.3% 39279457 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
4.178e+08 -48.8% 2.141e+08 TOTAL numa-numastat.node1.other_node
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
148 ~ 9% +43.9% 214 ~18% lkp-sbx04/micro/ebizzy/200%-100-10
148 +43.9% 214 TOTAL numa-vmstat.node3.nr_kernel_stack
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1191 ~ 9% +43.9% 1714 ~18% lkp-sbx04/micro/ebizzy/200%-100-10
1191 +43.9% 1714 TOTAL numa-meminfo.node3.KernelStack
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
201 ~ 4% -20.7% 159 ~ 5% grantley/micro/ebizzy/200%-100-10
151 ~ 2% +34.5% 204 ~ 7% lkp-ib03/micro/ebizzy/200%-100-10
39 ~17% +116.8% 86 ~50% lkp-ib03/micro/netperf/120s-200%-TCP_RR
106 ~32% -37.3% 66 ~27% lkp-ib03/micro/netperf/120s-200%-UDP_RR
165 ~ 9% -19.8% 132 ~16% lkp-sb03/micro/ebizzy/200%-100-10
59 ~34% +83.7% 109 ~ 1% lkp-sbx04/micro/ebizzy/200%-100-10
148 ~12% +20.0% 178 ~ 6% lkp-snb01/micro/ebizzy/200%-100-10
106 ~14% +24.1% 132 ~15% lkp-snb01/micro/hackbench/1600%-threads-pipe
978 +9.2% 1068 TOTAL numa-vmstat.node1.nr_written
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
323 ~ 9% +34.0% 433 ~ 3% grantley/micro/kbuild/200%
8187 ~26% +24.0% 10151 ~20% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
8510 +24.4% 10585 TOTAL slabinfo.skbuff_fclone_cache.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
547 ~17% -23.0% 421 ~16% lkp-nex04/micro/ebizzy/200%-100-10
240 ~20% -28.2% 172 ~16% lkp-nex04/micro/tlbflush/200%-512-320
787 -24.6% 594 TOTAL buddyinfo.Node.2.zone.Normal.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
323 ~ 9% +34.0% 433 ~ 3% grantley/micro/kbuild/200%
8376 ~25% +23.7% 10364 ~20% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
8700 +24.1% 10798 TOTAL slabinfo.skbuff_fclone_cache.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
209 ~ 4% -20.3% 166 ~ 6% grantley/micro/ebizzy/200%-100-10
158 ~ 2% +33.5% 211 ~ 7% lkp-ib03/micro/ebizzy/200%-100-10
43 ~14% +106.9% 90 ~47% lkp-ib03/micro/netperf/120s-200%-TCP_RR
172 ~ 9% -20.0% 137 ~17% lkp-sb03/micro/ebizzy/200%-100-10
61 ~33% +83.8% 113 ~ 1% lkp-sbx04/micro/ebizzy/200%-100-10
154 ~12% +20.6% 185 ~ 6% lkp-snb01/micro/ebizzy/200%-100-10
113 ~13% +23.5% 140 ~15% lkp-snb01/micro/hackbench/1600%-threads-pipe
911 +14.6% 1044 TOTAL numa-vmstat.node1.nr_dirtied
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
189 ~10% -31.2% 130 ~18% lkp-nex04/micro/tlbflush/200%-512-320
189 -31.2% 130 TOTAL numa-vmstat.node2.nr_written
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
199 ~10% -30.9% 137 ~17% lkp-nex04/micro/tlbflush/200%-512-320
199 -30.9% 137 TOTAL numa-vmstat.node2.nr_dirtied
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1740841 ~ 3% +8.7% 1891900 ~ 3% grantley/micro/kbuild/200%
503223 ~ 8% -14.9% 427992 ~ 7% lkp-nex04/micro/ebizzy/400%-5-30
1.86e+08 ~ 4% -53.6% 86323165 ~ 2% lkp-snb01/micro/ebizzy/200%-100-10
27266340 ~ 2% -28.1% 19610310 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
164601 ~ 7% +44.4% 237720 ~21% lkp-snb01/micro/hackbench/1600%-threads-socket
2.156e+08 -49.7% 1.085e+08 TOTAL numa-vmstat.node1.numa_other
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
85188 ~11% +22.1% 104052 ~10% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
62918 ~ 0% +42.5% 89677 ~12% lkp-ne04/micro/ebizzy/200%-100-10
37458 ~15% +40.2% 52508 ~ 3% lkp-nex04/micro/ebizzy/400%-5-30
185564 +32.7% 246238 TOTAL numa-meminfo.node1.Active
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
11344 ~ 1% +34.7% 15284 ~ 7% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
763742 ~21% -50.1% 381443 ~57% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
775086 -48.8% 396727 TOTAL interrupts.RES
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
135 ~24% +19.3% 162 ~19% lkp-a04/micro/netperf/120s-200%-TCP_RR
1263 ~18% -19.4% 1018 ~ 0% lkp-sb03/micro/ebizzy/200%-100-10
1399 -15.6% 1181 TOTAL uptime.idle
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
70274 ~15% -21.8% 54951 ~ 0% grantley/micro/kbuild/200%
70274 -21.8% 54951 TOTAL softirqs.SCHED
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
346 ~14% +17.7% 407 ~ 8% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
456 ~ 7% -13.5% 394 ~12% lkp-ib03/micro/netperf/120s-200%-TCP_RR
421 ~21% -22.6% 326 ~15% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
1223 -7.8% 1128 TOTAL numa-vmstat.node0.nr_kernel_stack
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2772 ~14% +17.6% 3261 ~ 8% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
3649 ~ 7% -13.5% 3158 ~12% lkp-ib03/micro/netperf/120s-200%-TCP_RR
3377 ~21% -22.6% 2614 ~15% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
9799 -7.8% 9035 TOTAL numa-meminfo.node0.KernelStack
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
364 ~11% -15.4% 308 ~ 3% grantley/micro/ebizzy/200%-100-10
438 ~11% -14.1% 376 ~ 8% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
364 ~24% +25.4% 456 ~10% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
124 ~ 2% -13.4% 108 ~10% lkp-nex04/micro/ebizzy/400%-5-30
124 ~ 4% +9.6% 136 ~ 3% lkp-nex05/micro/ebizzy/200%-100-10
192 ~22% -31.5% 131 ~ 6% lkp-sbx04/micro/ebizzy/200%-100-10
1608 -5.7% 1517 TOTAL numa-vmstat.node1.nr_kernel_stack
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2916 ~11% -15.2% 2472 ~ 3% grantley/micro/ebizzy/200%-100-10
3514 ~11% -14.1% 3018 ~ 8% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
2916 ~24% +25.3% 3655 ~10% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
1002 ~ 2% -13.4% 868 ~10% lkp-nex04/micro/ebizzy/400%-5-30
1002 ~ 4% +9.2% 1094 ~ 3% lkp-nex05/micro/ebizzy/200%-100-10
1542 ~22% -31.5% 1057 ~ 6% lkp-sbx04/micro/ebizzy/200%-100-10
12895 -5.7% 12165 TOTAL numa-meminfo.node1.KernelStack
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
111416 ~ 9% -17.4% 91975 ~11% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
90500 ~ 0% -29.2% 64086 ~16% lkp-ne04/micro/ebizzy/200%-100-10
67113 ~ 8% -22.4% 52047 ~ 6% lkp-nex04/micro/ebizzy/400%-5-30
130343 ~ 4% -12.8% 113722 ~ 4% lkp-snb01/micro/hackbench/1600%-threads-pipe
399373 -19.4% 321832 TOTAL numa-meminfo.node0.Active
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
62766 ~ 7% -19.1% 50771 ~ 2% lkp-nex04/micro/ebizzy/400%-5-30
62766 -19.1% 50771 TOTAL numa-meminfo.node2.Active
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1014 ~ 5% -17.9% 832 ~ 5% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
1014 -17.9% 832 TOTAL slabinfo.blkdev_ioc.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1014 ~ 5% -17.9% 832 ~ 5% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
1014 -17.9% 832 TOTAL slabinfo.blkdev_ioc.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2.485e+08 ~ 1% +28.4% 3.191e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
2.485e+08 +28.4% 3.191e+08 TOTAL numa-numastat.node1.numa_foreign
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2.485e+08 ~ 1% +28.4% 3.191e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
2.485e+08 +28.4% 3.191e+08 TOTAL numa-numastat.node0.numa_miss
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2.485e+08 ~ 1% +28.4% 3.191e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
2.485e+08 +28.4% 3.191e+08 TOTAL numa-numastat.node0.other_node
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4809 ~ 7% +17.1% 5630 ~ 7% lkp-nex04/micro/ebizzy/200%-100-10
2118 ~17% -25.7% 1575 ~18% lkp-nex04/micro/tlbflush/200%-512-320
5262 ~ 4% +23.0% 6471 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
12190 +12.2% 13676 TOTAL numa-vmstat.node0.nr_active_anon
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
19280 ~ 7% +16.7% 22504 ~ 7% lkp-nex04/micro/ebizzy/200%-100-10
8476 ~17% -25.6% 6307 ~18% lkp-nex04/micro/tlbflush/200%-512-320
21067 ~ 4% +22.7% 25859 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
48824 +12.0% 54671 TOTAL numa-meminfo.node0.Active(anon)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
170 ~ 5% +23.3% 210 ~ 5% grantley/micro/ebizzy/200%-100-10
230 ~ 1% -20.9% 182 ~ 8% lkp-ib03/micro/ebizzy/200%-100-10
149 ~ 3% -30.4% 104 ~40% lkp-ib03/micro/netperf/120s-200%-TCP_RR
195 ~ 8% +15.2% 225 ~10% lkp-sb03/micro/ebizzy/200%-100-10
190 ~ 6% -23.1% 146 ~ 8% lkp-snb01/micro/ebizzy/200%-100-10
936 -7.3% 868 TOTAL numa-vmstat.node0.nr_dirtied
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
164 ~ 5% +23.6% 202 ~ 5% grantley/micro/ebizzy/200%-100-10
222 ~ 1% -21.0% 175 ~ 9% lkp-ib03/micro/ebizzy/200%-100-10
137 ~ 4% -33.0% 92 ~46% lkp-ib03/micro/netperf/120s-200%-TCP_RR
189 ~ 8% +14.8% 217 ~10% lkp-sb03/micro/ebizzy/200%-100-10
184 ~ 6% -23.0% 141 ~ 8% lkp-snb01/micro/ebizzy/200%-100-10
896 -7.5% 828 TOTAL numa-vmstat.node0.nr_written
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1136 ~23% +24.1% 1409 ~14% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
1607 ~ 1% +7.8% 1733 ~ 3% lkp-sb03/micro/ebizzy/200%-100-10
2743 +14.6% 3142 TOTAL numa-vmstat.node0.nr_alloc_batch
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
17132 ~ 1% +27.7% 21870 ~13% lkp-sbx04/micro/ebizzy/200%-100-10
17132 +27.7% 21870 TOTAL numa-meminfo.node2.SUnreclaim
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
576 ~ 9% -22.2% 448 ~11% grantley/micro/ebizzy/200%-100-10
554 ~ 5% -15.4% 469 ~ 6% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
896 ~ 5% +19.0% 1066 ~ 7% lkp-sbx04/micro/ebizzy/200%-100-10
2026 -2.1% 1984 TOTAL slabinfo.kmem_cache_node.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4282 ~ 1% +27.7% 5467 ~13% lkp-sbx04/micro/ebizzy/200%-100-10
4282 +27.7% 5467 TOTAL numa-vmstat.node2.nr_slab_unreclaimable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4797098 ~10% -21.7% 3756714 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
5147989 ~ 0% -20.3% 4102144 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
9945088 -21.0% 7858858 TOTAL meminfo.DirectMap2M
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
3955 ~ 8% -17.4% 3265 ~12% avoton1/crypto/tcrypt/2s-200-204
3955 -17.4% 3265 TOTAL slabinfo.kmalloc-128.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
3991 ~ 8% -16.9% 3316 ~12% avoton1/crypto/tcrypt/2s-200-204
3991 -16.9% 3316 TOTAL slabinfo.kmalloc-128.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4260 ~ 8% +19.5% 5092 ~17% lkp-sbx04/micro/ebizzy/200%-100-10
4260 +19.5% 5092 TOTAL numa-vmstat.node2.nr_anon_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
11575 ~14% -21.6% 9080 ~ 7% lkp-nex04/micro/ebizzy/200%-100-10
15044 ~ 3% +11.0% 16697 ~ 7% lkp-nex04/micro/ebizzy/400%-5-30
31418 ~ 3% -14.7% 26795 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
21917 ~12% -21.6% 17179 ~ 8% lkp-sbx04/micro/ebizzy/200%-100-10
79955 -12.8% 69753 TOTAL numa-meminfo.node1.Active(anon)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2889 ~14% -21.4% 2270 ~ 7% lkp-nex04/micro/ebizzy/200%-100-10
3684 ~ 3% +13.6% 4185 ~ 8% lkp-nex04/micro/ebizzy/400%-5-30
7846 ~ 3% -14.6% 6698 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
5471 ~13% -21.2% 4309 ~ 8% lkp-sbx04/micro/ebizzy/200%-100-10
19891 -12.2% 17464 TOTAL numa-vmstat.node1.nr_active_anon
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
476 ~ 1% -18.8% 386 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
476 -18.8% 386 TOTAL numa-vmstat.node1.nr_mlock
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1906 ~ 1% -18.8% 1548 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
1906 -18.8% 1548 TOTAL numa-meminfo.node1.Mlocked
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4818 ~ 2% -16.5% 4024 ~ 1% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
4818 -16.5% 4024 TOTAL proc-vmstat.nr_alloc_batch
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
29534 ~ 3% -13.9% 25438 ~ 3% lkp-a03/micro/ebizzy/200%-100-10
34986 ~ 7% -13.7% 30208 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
29525 ~ 8% -23.1% 22698 ~ 4% xps2/micro/ebizzy/200%-100-10
94046 -16.7% 78345 TOTAL meminfo.DirectMap4k
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
647 ~ 4% +22.4% 792 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_MAERTS
767 ~ 9% -19.6% 616 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
3028 ~ 0% -13.8% 2612 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
3006 ~ 0% -14.5% 2569 ~ 1% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
7449 -11.5% 6591 TOTAL slabinfo.kmalloc-512.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.268e+08 ~ 3% +25.3% 1.589e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
10963052 ~ 5% +19.5% 13100880 ~11% lkp-snb01/micro/hackbench/1600%-process-pipe
1.378e+08 +24.8% 1.72e+08 TOTAL numa-vmstat.node1.numa_foreign
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.268e+08 ~ 3% +25.3% 1.589e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
10960254 ~ 5% +19.5% 13096862 ~11% lkp-snb01/micro/hackbench/1600%-process-pipe
1.378e+08 +24.8% 1.72e+08 TOTAL numa-vmstat.node0.numa_miss
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1920 ~ 1% -18.5% 1564 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
1920 -18.5% 1564 TOTAL numa-meminfo.node1.Unevictable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
479 ~ 1% -18.5% 391 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
479 -18.5% 391 TOTAL numa-vmstat.node1.nr_unevictable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
27938 ~ 1% +19.4% 33369 ~ 9% lkp-sbx04/micro/ebizzy/200%-100-10
27938 +19.4% 33369 TOTAL numa-meminfo.node2.Slab
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
8307 ~ 7% +10.2% 9152 ~ 2% lkp-nex04/micro/ebizzy/400%-5-30
494 ~ 3% -18.4% 403 ~ 4% vpx/micro/ebizzy/200%-100-10
8801 +8.6% 9555 TOTAL slabinfo.buffer_head.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
609 ~ 4% +14.0% 694 ~ 4% lkp-nex04/micro/ebizzy/400%-5-30
609 +14.0% 694 TOTAL slabinfo.kmem_cache_node.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2560 ~ 3% +12.0% 2866 ~ 5% lkp-a04/micro/netperf/120s-200%-TCP_RR
2988 ~ 5% -13.7% 2579 ~ 4% lkp-a04/micro/netperf/120s-200%-UDP_RR
2481 ~ 2% +23.4% 3061 ~ 8% lkp-a06/micro/ebizzy/200%-100-10
8029 +5.9% 8507 TOTAL slabinfo.anon_vma.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
6.129e+08 ~ 0% -19.5% 4.931e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
75210725 ~ 0% -14.2% 64497421 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
6.881e+08 -19.0% 5.576e+08 TOTAL proc-vmstat.numa_miss
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
6.129e+08 ~ 0% -19.5% 4.931e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
75210691 ~ 0% -14.2% 64497437 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
6.881e+08 -19.0% 5.576e+08 TOTAL proc-vmstat.numa_other
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
6.129e+08 ~ 0% -19.5% 4.931e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
75210581 ~ 0% -14.2% 64497614 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
6.881e+08 -19.0% 5.576e+08 TOTAL proc-vmstat.numa_foreign
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
11715 ~13% -20.2% 9346 ~ 8% lkp-nex04/micro/ebizzy/200%-100-10
14824 ~ 3% +12.4% 16655 ~ 8% lkp-nex04/micro/ebizzy/400%-5-30
31289 ~ 2% -15.2% 26527 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
21888 ~12% -20.8% 17344 ~ 7% lkp-sbx04/micro/ebizzy/200%-100-10
26660 ~ 5% +7.6% 28678 ~ 6% lkp-snb01/micro/ebizzy/200%-100-10
106377 -7.4% 98551 TOTAL numa-meminfo.node1.AnonPages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2922 ~13% -19.8% 2343 ~ 8% lkp-nex04/micro/ebizzy/200%-100-10
3628 ~ 3% +15.0% 4173 ~ 8% lkp-nex04/micro/ebizzy/400%-5-30
7816 ~ 2% -15.2% 6631 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
5461 ~12% -20.4% 4348 ~ 7% lkp-sbx04/micro/ebizzy/200%-100-10
6668 ~ 5% +7.6% 7175 ~ 6% lkp-snb01/micro/ebizzy/200%-100-10
26498 -6.9% 24670 TOTAL numa-vmstat.node1.nr_anon_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
8580 ~ 4% +6.7% 9152 ~ 2% lkp-nex04/micro/ebizzy/400%-5-30
494 ~ 3% -18.4% 403 ~ 4% vpx/micro/ebizzy/200%-100-10
9074 +5.3% 9555 TOTAL slabinfo.buffer_head.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2072 ~ 7% -18.2% 1695 ~ 5% lkp-sb03/micro/ebizzy/200%-100-10
2072 -18.2% 1695 TOTAL numa-meminfo.node1.PageTables
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2560 ~ 3% +12.0% 2866 ~ 5% lkp-a04/micro/netperf/120s-200%-TCP_RR
2988 ~ 5% -13.7% 2579 ~ 4% lkp-a04/micro/netperf/120s-200%-UDP_RR
2502 ~ 2% +22.3% 3061 ~ 8% lkp-a06/micro/ebizzy/200%-100-10
8051 +5.7% 8507 TOTAL slabinfo.anon_vma.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.268e+08 ~ 3% +25.4% 1.59e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
11036862 ~ 4% +19.3% 13171551 ~11% lkp-snb01/micro/hackbench/1600%-process-pipe
624263 ~ 0% -11.5% 552390 ~ 6% lkp-snb01/micro/hackbench/1600%-threads-socket
1.385e+08 +24.7% 1.727e+08 TOTAL numa-vmstat.node0.numa_other
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
14552936 ~ 3% +19.2% 17348365 ~11% lkp-nex04/micro/ebizzy/200%-100-10
26307020 ~ 0% -9.4% 23835777 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
40859956 +0.8% 41184142 TOTAL proc-vmstat.pgalloc_dma32
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
626 ~ 4% +14.3% 716 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
626 +14.3% 716 TOTAL pagetypeinfo.Node1.Normal.Movable.2
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
936 ~ 3% -13.9% 806 ~ 8% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
936 -13.9% 806 TOTAL slabinfo.bdev_cache.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
936 ~ 3% -13.9% 806 ~ 8% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
936 -13.9% 806 TOTAL slabinfo.bdev_cache.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1577 ~ 5% -8.5% 1443 ~ 2% avoton1/crypto/tcrypt/2s-301-319
5274 ~ 4% +6.9% 5637 ~ 4% grantley/micro/kbuild/200%
866 ~14% -25.9% 642 ~14% lkp-a04/micro/netperf/120s-200%-TCP_STREAM
791 ~ 9% +15.8% 916 ~ 3% lkp-a04/micro/netperf/120s-200%-UDP_RR
6588 ~ 1% -9.5% 5960 ~ 1% lkp-nex05/micro/ebizzy/200%-100-10
15098 -3.3% 14599 TOTAL slabinfo.proc_inode_cache.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
57253 ~ 7% +21.3% 69433 ~ 4% lkp-nex04/micro/ebizzy/400%-5-30
57253 +21.3% 69433 TOTAL numa-meminfo.node3.Active
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
884 ~ 1% +13.8% 1005 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
884 +13.8% 1005 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.0
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
19200 ~ 7% +17.4% 22541 ~ 7% lkp-nex04/micro/ebizzy/200%-100-10
21256 ~ 3% +23.3% 26218 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
40456 +20.5% 48759 TOTAL numa-meminfo.node0.AnonPages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
62905189 ~ 3% -16.2% 52691612 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
62905189 -16.2% 52691612 TOTAL numa-vmstat.node0.numa_local
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4789 ~ 7% +17.6% 5634 ~ 7% lkp-nex04/micro/ebizzy/200%-100-10
5317 ~ 4% +23.4% 6560 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
10107 +20.7% 12195 TOTAL numa-vmstat.node0.nr_anon_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
62981796 ~ 3% -16.2% 52766301 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
62981796 -16.2% 52766301 TOTAL numa-vmstat.node0.numa_hit
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
173 ~ 3% +17.5% 203 ~ 6% lkp-nex04/micro/tlbflush/200%-512-320
518 ~ 7% -18.3% 423 ~ 6% lkp-sb03/micro/ebizzy/200%-100-10
691 -9.4% 626 TOTAL numa-vmstat.node1.nr_page_table_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1143330 ~ 7% +18.4% 1353226 ~ 5% lkp-nex04/micro/tlbflush/200%-512-320
1143330 +18.4% 1353226 TOTAL numa-vmstat.node2.numa_foreign
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1577 ~ 5% -8.0% 1451 ~ 3% avoton1/crypto/tcrypt/2s-301-319
5312 ~ 5% +7.9% 5731 ~ 3% grantley/micro/kbuild/200%
791 ~ 9% +15.8% 916 ~ 3% lkp-a04/micro/netperf/120s-200%-UDP_RR
616 ~ 3% +21.8% 751 ~ 9% lkp-a06/micro/ebizzy/200%-100-10
8297 +6.7% 8849 TOTAL slabinfo.proc_inode_cache.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
489 ~ 3% -9.5% 442 ~ 1% lkp-nex04/micro/tlbflush/200%-512-320
372 ~ 9% +25.3% 466 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
861 +5.5% 908 TOTAL numa-vmstat.node0.nr_page_table_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
7508 ~ 1% -13.3% 6509 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
22870 ~ 5% +14.9% 26268 ~ 5% lkp-snb01/micro/hackbench/1600%-threads-pipe
30379 +7.9% 32778 TOTAL slabinfo.kmalloc-192.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1421 ~ 6% -6.1% 1335 ~ 6% lkp-a04/micro/netperf/120s-200%-TCP_CRR
2114 ~ 6% -14.2% 1813 ~ 5% nhm-white/sysbench/oltp/600s-100%-1000000
1875 ~ 6% +23.4% 2313 ~ 2% xps2/micro/pigz/100%
5411 +0.9% 5462 TOTAL slabinfo.kmalloc-256.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2384 ~ 5% -9.3% 2162 ~ 6% lkp-a04/micro/netperf/120s-200%-TCP_MAERTS
7508 ~ 1% -13.2% 6515 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
23051 ~ 5% +14.7% 26445 ~ 5% lkp-snb01/micro/hackbench/1600%-threads-pipe
32944 +6.6% 35123 TOTAL slabinfo.kmalloc-192.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1477 ~ 6% -11.3% 1310 ~ 4% lkp-nex04/micro/ebizzy/400%-5-30
1961 ~ 2% -9.8% 1769 ~ 1% lkp-nex04/micro/tlbflush/200%-512-320
1491 ~ 9% +25.0% 1863 ~ 4% lkp-sb03/micro/ebizzy/200%-100-10
4930 +0.3% 4943 TOTAL numa-meminfo.node0.PageTables
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.458e+09 ~ 0% +13.6% 1.656e+09 ~ 0% lkp-snb01/micro/ebizzy/200%-100-10
1.458e+09 +13.6% 1.656e+09 TOTAL numa-numastat.node1.local_node
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.458e+09 ~ 0% +13.6% 1.656e+09 ~ 0% lkp-snb01/micro/ebizzy/200%-100-10
1.458e+09 +13.6% 1.656e+09 TOTAL numa-numastat.node1.numa_hit
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
952 ~ 2% -10.7% 850 ~ 6% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
884 ~ 5% -11.5% 782 ~ 0% lkp-nex05/micro/ebizzy/200%-100-10
1836 -11.1% 1632 TOTAL slabinfo.dnotify_mark.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
952 ~ 2% -10.7% 850 ~ 6% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
884 ~ 5% -11.5% 782 ~ 0% lkp-nex05/micro/ebizzy/200%-100-10
1836 -11.1% 1632 TOTAL slabinfo.dnotify_mark.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
789 ~ 4% -10.8% 704 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
3221 ~ 2% -11.6% 2848 ~ 1% lkp-ib03/micro/ebizzy/200%-100-10
3242 ~ 0% -12.8% 2826 ~ 2% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
7253 -12.1% 6378 TOTAL slabinfo.kmalloc-512.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
7.44e+08 ~ 3% +10.9% 8.252e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
7.44e+08 +10.9% 8.252e+08 TOTAL numa-vmstat.node1.numa_local
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
7.442e+08 ~ 3% +10.9% 8.252e+08 ~ 1% lkp-snb01/micro/ebizzy/200%-100-10
7.442e+08 +10.9% 8.252e+08 TOTAL numa-vmstat.node1.numa_hit
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.219e+08 ~ 1% -14.0% 1.048e+08 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
1.219e+08 -14.0% 1.048e+08 TOTAL numa-numastat.node0.local_node
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1.219e+08 ~ 1% -14.0% 1.048e+08 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
1.219e+08 -14.0% 1.048e+08 TOTAL numa-numastat.node0.numa_hit
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
28858 ~ 0% -10.3% 25873 ~ 2% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
28858 -10.3% 25873 TOTAL slabinfo.vm_area_struct.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
46909 ~ 6% -13.1% 40751 ~ 5% avoton1/crypto/tcrypt/2s-200-204
40062 ~ 7% +10.2% 44164 ~ 7% lkp-a04/micro/netperf/120s-200%-TCP_STREAM
549372 ~ 6% +7.4% 589808 ~ 4% nhm8/micro/ebizzy/200%-100-10
636343 +6.0% 674724 TOTAL softirqs.RCU
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
8164 ~ 6% +8.0% 8819 ~ 6% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
8164 +8.0% 8819 TOTAL slabinfo.kmalloc-2048.num_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
8040 ~ 6% +8.0% 8685 ~ 6% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
8040 +8.0% 8685 TOTAL slabinfo.kmalloc-2048.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
3416 ~ 1% +12.4% 3840 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
3416 +12.4% 3840 TOTAL buddyinfo.Node.1.zone.Normal.0
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2617 ~ 2% +12.9% 2956 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
2617 +12.9% 2956 TOTAL pagetypeinfo.Node1.Normal.Unmovable.0
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
28736 ~ 0% -10.4% 25734 ~ 2% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
29205 ~ 6% -8.3% 26783 ~ 6% lkp-ib03/micro/netperf/120s-200%-TCP_RR
57941 -9.4% 52518 TOTAL slabinfo.vm_area_struct.active_objs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2.661e+08 ~ 0% -9.5% 2.409e+08 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
10511623 ~ 0% -8.4% 9633555 ~ 6% nhm-white/sysbench/oltp/600s-100%-1000000
2.766e+08 -9.4% 2.505e+08 TOTAL proc-vmstat.pgalloc_normal
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
555 ~ 3% +11.6% 620 ~ 1% lkp-nex04/micro/tlbflush/200%-512-320
555 +11.6% 620 TOTAL numa-vmstat.node3.nr_page_table_pages
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1106 ~ 3% +7.1% 1185 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
1106 +7.1% 1185 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.1
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
39954 ~ 1% -10.8% 35656 ~ 2% grantley/micro/ebizzy/200%-100-10
22284 ~ 5% -10.0% 20050 ~ 4% lkp-sbx04/micro/ebizzy/200%-100-10
62238 -10.5% 55706 TOTAL numa-meminfo.node0.SUnreclaim
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
9987 ~ 1% -10.7% 8914 ~ 2% grantley/micro/ebizzy/200%-100-10
5571 ~ 5% -10.0% 5012 ~ 4% lkp-sbx04/micro/ebizzy/200%-100-10
15558 -10.5% 13926 TOTAL numa-vmstat.node0.nr_slab_unreclaimable
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2218 ~ 3% +11.5% 2473 ~ 1% lkp-nex04/micro/tlbflush/200%-512-320
2218 +11.5% 2473 TOTAL numa-meminfo.node3.PageTables
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2.924e+08 ~ 0% -9.5% 2.647e+08 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
2.924e+08 -9.5% 2.647e+08 TOTAL proc-vmstat.pgfree
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
842138 ~ 4% +11.3% 937303 ~ 2% lkp-nex04/micro/tlbflush/200%-512-320
842138 +11.3% 937303 TOTAL numa-vmstat.node2.numa_local
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
382783 ~ 0% -7.7% 353469 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
382783 -7.7% 353469 TOTAL numa-meminfo.node1.Inactive
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
95668 ~ 0% -7.7% 88340 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
95668 -7.7% 88340 TOTAL numa-vmstat.node1.nr_inactive_file
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
382675 ~ 0% -7.7% 353363 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
382675 -7.7% 353363 TOTAL numa-meminfo.node1.Inactive(file)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
354404 ~ 0% +8.3% 383713 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
354404 +8.3% 383713 TOTAL numa-meminfo.node0.Inactive
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
354322 ~ 0% +8.3% 383629 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
354322 +8.3% 383629 TOTAL numa-meminfo.node0.Inactive(file)
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
88580 ~ 0% +8.3% 95907 ~ 2% lkp-ne04/micro/ebizzy/200%-100-10
88580 +8.3% 95907 TOTAL numa-vmstat.node0.nr_inactive_file
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4 ~20% +50.0% 6 ~ 0% avoton1/crypto/tcrypt/2s-205-210
3352 ~ 1% -1.9% 3287 ~ 1% grantley/micro/kbuild/200%
3120 ~ 0% -0.3% 3110 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-socket
6476 -1.1% 6404 TOTAL time.percent_of_cpu_this_job_got
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
28.56 ~ 1% +1.9% 29.11 ~ 1% avoton1/crypto/tcrypt/2s-200-204
28.34 ~ 0% +0.9% 28.60 ~ 0% avoton1/crypto/tcrypt/2s-500-504
21.53 ~ 0% +1.2% 21.79 ~ 0% grantley/micro/ebizzy/200%-100-10
24.59 ~35% +27.1% 31.26 ~27% lkp-a04/micro/netperf/120s-200%-TCP_RR
18.06 ~ 0% -2.2% 17.67 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
17.86 ~ 1% +2.5% 18.30 ~ 2% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
17.68 ~ 1% +3.0% 18.21 ~ 2% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
18.18 ~ 0% -0.5% 18.09 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_SENDFILE
21.53 ~ 3% +3.5% 22.29 ~ 2% lkp-nex04/micro/ebizzy/200%-100-10
21.87 ~ 2% +3.4% 22.62 ~ 1% lkp-nex04/micro/ebizzy/400%-5-30
30.45 ~ 2% -3.1% 29.51 ~ 0% lkp-nex05/micro/ebizzy/200%-100-10
24.54 ~ 0% -2.5% 23.94 ~ 1% lkp-sbx04/micro/ebizzy/200%-100-10
18.22 ~ 1% -4.2% 17.45 ~ 0% lkp-snb01/micro/hackbench/1600%-process-pipe
18.03 ~ 1% +5.3% 18.99 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-pipe
7.27 ~ 0% -0.4% 7.24 ~ 0% nhm-white/micro/ebizzy/200%-100-10
7.26 ~ 0% -0.3% 7.23 ~ 0% nhm-white/sysbench/oltp/600s-100%-1000000
7.46 ~ 0% -1.2% 7.36 ~ 0% nhm8/micro/ebizzy/200%-100-10
8.25 ~14% -13.1% 7.17 ~ 2% vpx/micro/ebizzy/200%-100-10
7.25 ~ 0% -0.3% 7.23 ~ 0% xps2/micro/ebizzy/200%-100-10
7.24 ~ 0% -0.2% 7.22 ~ 0% xps2/micro/pigz/100%
354.17 +2.0% 361.27 TOTAL boottime.dhcp
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
283.95 ~ 0% +1.7% 288.80 ~ 0% avoton1/crypto/tcrypt/2s-200-204
1507.97 ~ 1% +3.8% 1565.31 ~ 2% grantley/micro/ebizzy/200%-100-10
171.95 ~ 2% -2.7% 167.30 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
177.09 ~ 0% -3.4% 171.02 ~ 3% lkp-a04/micro/ebizzy/200%-100-10
126.52 ~25% +20.7% 152.67 ~20% lkp-a04/micro/netperf/120s-200%-TCP_RR
1268.71 ~ 5% -6.2% 1190.03 ~ 2% lkp-ib03/micro/ebizzy/200%-100-10
1263.97 ~ 1% -4.0% 1213.81 ~ 2% lkp-ib03/micro/netperf/120s-200%-TCP_RR
1450.45 ~21% -16.6% 1209.06 ~ 1% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
2265.10 ~ 2% +2.9% 2329.92 ~ 0% lkp-nex04/micro/ebizzy/200%-100-10
2200.05 ~ 1% -2.9% 2135.66 ~ 0% lkp-nex05/micro/ebizzy/200%-100-10
701.22 ~ 0% +3.9% 728.26 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-pipe
108.34 ~ 1% +17.7% 127.56 ~19% nhm-white/sysbench/oltp/600s-100%-1000000
55.07 ~ 7% -6.8% 51.30 ~ 1% vpx/micro/ebizzy/200%-100-10
11580.40 -2.2% 11330.71 TOTAL boottime.idle
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
38.06 ~25% +41.7% 53.94 ~ 2% avoton1/crypto/tcrypt/2s-205-210
8854.64 ~ 0% -0.4% 8823.52 ~ 0% lkp-nex04/micro/ebizzy/400%-5-30
1909.31 ~ 4% -4.1% 1831.01 ~ 0% lkp-nex05/micro/tlbflush/100%-512-320
28660.09 ~ 0% -0.1% 28633.32 ~ 0% lkp-sb03/micro/ebizzy/200%-100-10
59506.46 ~ 0% +0.1% 59582.11 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
5491.85 ~ 0% +0.3% 5510.49 ~ 0% nhm8/micro/dbench/100%
5335.66 ~ 0% +0.6% 5367.23 ~ 0% xps2/micro/ebizzy/200%-100-10
109796.06 +0.0% 109801.62 TOTAL time.system_time
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
302978 ~ 1% -2.7% 294844 ~ 0% grantley/micro/kbuild/200%
10559 ~ 0% +2.1% 10786 ~ 1% lkp-a04/micro/ebizzy/200%-100-10
532425 ~ 0% +0.8% 536900 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
591175 ~ 2% -3.6% 569644 ~ 2% lkp-nex05/micro/ebizzy/200%-100-10
24107 ~ 0% +1.7% 24523 ~ 1% nhm-white/micro/ebizzy/200%-100-10
24304 ~ 0% -2.3% 23745 ~ 0% xps2/micro/ebizzy/200%-100-10
1485551 -1.7% 1460444 TOTAL time.voluntary_context_switches
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
39.63 ~ 0% +1.5% 40.24 ~ 0% avoton1/crypto/tcrypt/2s-200-204
39.49 ~ 0% +0.4% 39.64 ~ 0% avoton1/crypto/tcrypt/2s-301-319
41.37 ~ 1% -2.4% 40.39 ~ 0% lkp-nex05/micro/ebizzy/200%-100-10
41.77 ~ 1% -1.3% 41.24 ~ 1% lkp-nex05/micro/tlbflush/100%-512-320
27.55 ~ 2% -4.2% 26.39 ~ 0% lkp-snb01/micro/hackbench/1600%-process-pipe
26.95 ~ 0% +3.4% 27.87 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
16.91 ~ 0% -2.2% 16.54 ~ 1% nhm8/micro/dbench/100%
16.01 ~ 9% -8.7% 14.63 ~ 2% vpx/micro/ebizzy/200%-100-10
249.69 -1.1% 246.93 TOTAL boottime.boot
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
84220649 ~ 0% +0.7% 84794796 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
84172596 ~ 0% +0.7% 84796336 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
84173685 ~ 0% +0.8% 84848135 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
4.728e+09 ~ 0% -1.2% 4.671e+09 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
4884 ~ 0% -2.0% 4786 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_RR
2.514e+09 ~ 0% -1.6% 2.475e+09 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
51621106 ~ 0% -8.6% 47164538 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
2.632e+08 ~ 0% +0.2% 2.639e+08 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
16342 ~ 0% +0.4% 16410 ~ 0% xps2/micro/pigz/100%
7.809e+09 -1.3% 7.712e+09 TOTAL time.minor_page_faults
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
2659 ~ 0% -0.3% 2651 ~ 0% grantley/micro/kbuild/200%
686 ~ 1% +4.4% 716 ~ 2% lkp-nex04/micro/ebizzy/400%-5-30
3262 ~ 0% +0.8% 3288 ~ 0% lkp-sb03/micro/ebizzy/200%-100-10
4137 ~ 0% -1.9% 4059 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
2228 ~ 0% -3.0% 2161 ~ 1% lkp-snb01/micro/hackbench/1600%-process-pipe
1517 ~ 0% -0.8% 1505 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
1363 ~ 0% -2.5% 1330 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-socket
3116 ~ 0% -0.6% 3098 ~ 0% nhm8/micro/dbench/100%
2637 ~ 0% -1.2% 2606 ~ 0% xps2/micro/ebizzy/200%-100-10
21609 -0.9% 21418 TOTAL time.user_time
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
1187 ~ 0% -0.5% 1181 ~ 0% avoton1/crypto/tcrypt/2s-500-504
9501 ~ 1% -2.1% 9300 ~ 1% grantley/micro/kbuild/200%
3340 ~ 0% +0.8% 3366 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
3359 ~ 0% +0.3% 3369 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
3343 ~ 0% +0.9% 3373 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
2048146 ~ 0% -3.6% 1974295 ~ 3% lkp-ib03/micro/ebizzy/200%-100-10
11442 ~ 0% +0.5% 11502 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
17992 ~ 7% -17.1% 14912 ~11% lkp-ib03/micro/netperf/120s-200%-TCP_MAERTS
1477911 ~ 0% -1.1% 1462062 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
1427714 ~ 0% -1.5% 1406534 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
5003938 -2.3% 4889897 TOTAL vmstat.system.in
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
86.02 ~ 1% +1.7% 87.46 ~ 1% grantley/micro/kbuild/200%
121.15 ~ 0% +0.1% 121.22 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
207.17 +0.7% 208.69 TOTAL time.elapsed_time
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
719 ~ 0% +0.2% 720 ~ 0% avoton1/crypto/tcrypt/2s-505-509
9377 ~ 1% -2.6% 9134 ~ 1% grantley/micro/kbuild/200%
1793803 ~ 0% +1.0% 1811656 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
5547967 ~ 0% +1.9% 5655750 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_RR
5598 ~ 0% -0.6% 5567 ~ 0% lkp-nex05/micro/ebizzy/200%-100-10
3031 ~ 0% +0.6% 3051 ~ 0% lkp-sb03/micro/ebizzy/200%-100-10
1286 ~ 1% +1.6% 1307 ~ 1% nhm8/micro/ebizzy/200%-100-10
7361784 +1.7% 7487188 TOTAL vmstat.system.cs
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
4163881 ~ 0% +0.3% 4174862 ~ 0% grantley/micro/ebizzy/200%-100-10
1190689 ~ 0% +0.4% 1195644 ~ 0% lkp-ne04/micro/ebizzy/200%-100-10
9.295e+08 ~ 0% +0.8% 9.365e+08 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
918867 ~ 1% +1.1% 928905 ~ 0% nhm8/micro/ebizzy/200%-100-10
9.357e+08 +0.8% 9.428e+08 TOTAL time.involuntary_context_switches
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-20 15:51 ` Fengguang Wu
@ 2013-12-20 16:44 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 16:44 UTC (permalink / raw)
To: Fengguang Wu
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Fri, Dec 20, 2013 at 11:51:43PM +0800, Fengguang Wu wrote:
> On Thu, Dec 19, 2013 at 02:34:50PM +0000, Mel Gorman wrote:
> > On Wed, Dec 18, 2013 at 03:28:14PM +0800, Fengguang Wu wrote:
> > > Hi Mel,
> > >
> > > I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
> > >
> > > Basically there are
> > >
> > > 1) no big performance changes
> > >
> > > 76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
> > > 407038 +1.2% 412032 TOTAL hackbench.throughput
> > > 50307 -1.5% 49549 TOTAL ebizzy.throughput
> > >
> >
> > I'm assuming this was an ivybridge processor.
>
> The test boxes brickland2 and lkp-ib03 are ivybridge; lkp-snb01 is sandybridge.
>
Ok.
> > How many threads were ebizzy tested with?
>
> The below case has params string "400%-5-30", which means
>
> nr_threads = 400% * nr_cpu = 4 * 48 = 192
> iterations = 5
> duration = 30
>
> v3.13-rc3 eabb1f89905a0c809d13
> --------------- -------------------------
> 50307 ~ 1% -1.5% 49549 ~ 0% lkp-ib03/micro/ebizzy/400%-5-30
> 50307 -1.5% 49549 TOTAL ebizzy.throughput
>
That is a limited range of threads to test with but ok.
> > The memory ranges used by the vm scalability benchmarks are
> > probably too large to be affected by the series but I'm guessing.
>
> Do you mean these lines?
>
> 3345155 ~ 0% -0.3% 3335172 ~ 0% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
> 33249939 ~ 0% +3.3% 34336155 ~ 1% brickland2/micro/vm-scalability/1T-shm-pread-seq
>
> The two cases run 128 threads/processes, each accessing randomly/sequentially
> a 64GB shm file concurrently. Sorry the 16G/1T prefixes are somehow misleading.
>
It's ok, the conclusion is still the same. The regions are still too
large to be really affected the series.
> > I doubt hackbench is doing any flushes and the 1.2% is noise.
>
> Here are the proc-vmstat.nr_tlb_remote_flush numbers for hackbench:
>
> 513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
> 603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
> 6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
> 7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
> 21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
> 24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
>
This is a surprise. The differences I can understand because of changes
in accounting but not the flushes themselves. The only flushes I would
expect are when the process exits and the regions are torn down.
The exception would be if automatic NUMA balancing was enabled and this
was a NUMA machine. In that case, NUMA hinting faults could be migrating
memory and triggering flushes.
Could you do something like
# perf probe native_flush_tlb_others
# cd /sys/kernel/debug/tracing
# echo sym-offset > trace_options
# echo sym-addr > trace_options
# echo stacktrace > trace_options
# echo 1 > events/probe/native_flush_tlb_others/enable
# cat trace_pipe > /tmp/log
and get a breakdown of what the source of these remote flushes are
please?
> This time, the ebizzy params are refreshed and the test case is
> exercised in all our test machines. The results that have changed are:
>
> v3.13-rc3 eabb1f89905a0c809d13
> --------------- -------------------------
> 873 ~ 0% +0.7% 879 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
> 873 ~ 0% +0.7% 879 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
> 873 ~ 0% +0.8% 880 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
> 49242 ~ 0% -1.2% 48650 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
> 26176 ~ 0% -1.6% 25760 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
> 2738 ~ 0% +0.2% 2744 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
> 80776 -1.2% 79793 TOTAL ebizzy.throughput
>
No change on lkp-ib03 which I would have expected some difference. Thing
is, for ebizzy to notice the number of TLB entries matter. On both
machines I tested, the last level TLB had 512 entries. How many entries
are on the last level TLB on lkp-ib03?
> > I do see a few major regressions like this
> >
> > > 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
> >
> > but I have no idea what the test is doing and whether something happened
> > that the test broke that time or if it's something to be really
> > concerned about.
>
> This test case simply creates sparse files, populate them with zeros,
> then delete them in parallel. Here $mem is physical memory size 128G,
> $nr_cpu is 120.
>
> for i in `seq $nr_cpu`
> do
> create_sparse_file $SPARSE_FILE-$i $((mem / nr_cpu))
> cp $SPARSE_FILE-$i /dev/null
> done
>
> for i in `seq $nr_cpu`
> do
> rm $SPARSE_FILE-$i &
> done
>
In itself, that does not explain why the result was 0 with the series
applied. The 3.13-rc3 result was "324497". 324497 what?
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
@ 2013-12-20 16:44 ` Mel Gorman
0 siblings, 0 replies; 71+ messages in thread
From: Mel Gorman @ 2013-12-20 16:44 UTC (permalink / raw)
To: Fengguang Wu
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
On Fri, Dec 20, 2013 at 11:51:43PM +0800, Fengguang Wu wrote:
> On Thu, Dec 19, 2013 at 02:34:50PM +0000, Mel Gorman wrote:
> > On Wed, Dec 18, 2013 at 03:28:14PM +0800, Fengguang Wu wrote:
> > > Hi Mel,
> > >
> > > I'd like to share some test numbers with your patches applied on top of v3.13-rc3.
> > >
> > > Basically there are
> > >
> > > 1) no big performance changes
> > >
> > > 76628486 -0.7% 76107841 TOTAL vm-scalability.throughput
> > > 407038 +1.2% 412032 TOTAL hackbench.throughput
> > > 50307 -1.5% 49549 TOTAL ebizzy.throughput
> > >
> >
> > I'm assuming this was an ivybridge processor.
>
> The test boxes brickland2 and lkp-ib03 are ivybridge; lkp-snb01 is sandybridge.
>
Ok.
> > How many threads were ebizzy tested with?
>
> The below case has params string "400%-5-30", which means
>
> nr_threads = 400% * nr_cpu = 4 * 48 = 192
> iterations = 5
> duration = 30
>
> v3.13-rc3 eabb1f89905a0c809d13
> --------------- -------------------------
> 50307 ~ 1% -1.5% 49549 ~ 0% lkp-ib03/micro/ebizzy/400%-5-30
> 50307 -1.5% 49549 TOTAL ebizzy.throughput
>
That is a limited range of threads to test with but ok.
> > The memory ranges used by the vm scalability benchmarks are
> > probably too large to be affected by the series but I'm guessing.
>
> Do you mean these lines?
>
> 3345155 ~ 0% -0.3% 3335172 ~ 0% brickland2/micro/vm-scalability/16G-shm-pread-rand-mt
> 33249939 ~ 0% +3.3% 34336155 ~ 1% brickland2/micro/vm-scalability/1T-shm-pread-seq
>
> The two cases run 128 threads/processes, each accessing randomly/sequentially
> a 64GB shm file concurrently. Sorry the 16G/1T prefixes are somehow misleading.
>
It's ok, the conclusion is still the same. The regions are still too
large to be really affected the series.
> > I doubt hackbench is doing any flushes and the 1.2% is noise.
>
> Here are the proc-vmstat.nr_tlb_remote_flush numbers for hackbench:
>
> 513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
> 603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
> 6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
> 7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
> 21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
> 24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
>
This is a surprise. The differences I can understand because of changes
in accounting but not the flushes themselves. The only flushes I would
expect are when the process exits and the regions are torn down.
The exception would be if automatic NUMA balancing was enabled and this
was a NUMA machine. In that case, NUMA hinting faults could be migrating
memory and triggering flushes.
Could you do something like
# perf probe native_flush_tlb_others
# cd /sys/kernel/debug/tracing
# echo sym-offset > trace_options
# echo sym-addr > trace_options
# echo stacktrace > trace_options
# echo 1 > events/probe/native_flush_tlb_others/enable
# cat trace_pipe > /tmp/log
and get a breakdown of what the source of these remote flushes are
please?
> This time, the ebizzy params are refreshed and the test case is
> exercised in all our test machines. The results that have changed are:
>
> v3.13-rc3 eabb1f89905a0c809d13
> --------------- -------------------------
> 873 ~ 0% +0.7% 879 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
> 873 ~ 0% +0.7% 879 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
> 873 ~ 0% +0.8% 880 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
> 49242 ~ 0% -1.2% 48650 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
> 26176 ~ 0% -1.6% 25760 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
> 2738 ~ 0% +0.2% 2744 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
> 80776 -1.2% 79793 TOTAL ebizzy.throughput
>
No change on lkp-ib03 which I would have expected some difference. Thing
is, for ebizzy to notice the number of TLB entries matter. On both
machines I tested, the last level TLB had 512 entries. How many entries
are on the last level TLB on lkp-ib03?
> > I do see a few major regressions like this
> >
> > > 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
> >
> > but I have no idea what the test is doing and whether something happened
> > that the test broke that time or if it's something to be really
> > concerned about.
>
> This test case simply creates sparse files, populate them with zeros,
> then delete them in parallel. Here $mem is physical memory size 128G,
> $nr_cpu is 120.
>
> for i in `seq $nr_cpu`
> do
> create_sparse_file $SPARSE_FILE-$i $((mem / nr_cpu))
> cp $SPARSE_FILE-$i /dev/null
> done
>
> for i in `seq $nr_cpu`
> do
> rm $SPARSE_FILE-$i &
> done
>
In itself, that does not explain why the result was 0 with the series
applied. The 3.13-rc3 result was "324497". 324497 what?
--
Mel Gorman
SUSE Labs
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2
2013-12-20 16:44 ` Mel Gorman
(?)
@ 2013-12-21 15:49 ` Fengguang Wu
-1 siblings, 0 replies; 71+ messages in thread
From: Fengguang Wu @ 2013-12-21 15:49 UTC (permalink / raw)
To: Mel Gorman
Cc: Alex Shi, Ingo Molnar, Linus Torvalds, Thomas Gleixner,
Andrew Morton, H Peter Anvin, Linux-X86, Linux-MM, LKML
[-- Attachment #1: Type: text/plain, Size: 4613 bytes --]
Hi Mel,
On Fri, Dec 20, 2013 at 04:44:26PM +0000, Mel Gorman wrote:
> On Fri, Dec 20, 2013 at 11:51:43PM +0800, Fengguang Wu wrote:
> > On Thu, Dec 19, 2013 at 02:34:50PM +0000, Mel Gorman wrote:
[snip]
> > > I doubt hackbench is doing any flushes and the 1.2% is noise.
> >
> > Here are the proc-vmstat.nr_tlb_remote_flush numbers for hackbench:
> >
> > 513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
> > 603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
> > 6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
> > 7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
> > 21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
> > 24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
> >
>
> This is a surprise. The differences I can understand because of changes
> in accounting but not the flushes themselves. The only flushes I would
> expect are when the process exits and the regions are torn down.
>
> The exception would be if automatic NUMA balancing was enabled and this
> was a NUMA machine. In that case, NUMA hinting faults could be migrating
> memory and triggering flushes.
You are right, the kconfig (attached) does have
CONFIG_NUMA_BALANCING=y
and lkp-nex05 is a 4-socket NHM-EX machine; lkp-snb01 is a 2-socket
SNB machine.
> Could you do something like
>
> # perf probe native_flush_tlb_others
> # cd /sys/kernel/debug/tracing
> # echo sym-offset > trace_options
> # echo sym-addr > trace_options
> # echo stacktrace > trace_options
> # echo 1 > events/probe/native_flush_tlb_others/enable
> # cat trace_pipe > /tmp/log
>
> and get a breakdown of what the source of these remote flushes are
> please?
Sure. Attached is the log file.
> > This time, the ebizzy params are refreshed and the test case is
> > exercised in all our test machines. The results that have changed are:
> >
> > v3.13-rc3 eabb1f89905a0c809d13
> > --------------- -------------------------
> > 873 ~ 0% +0.7% 879 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
> > 873 ~ 0% +0.7% 879 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
> > 873 ~ 0% +0.8% 880 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
> > 49242 ~ 0% -1.2% 48650 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
> > 26176 ~ 0% -1.6% 25760 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
> > 2738 ~ 0% +0.2% 2744 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
> > 80776 -1.2% 79793 TOTAL ebizzy.throughput
> >
>
> No change on lkp-ib03 which I would have expected some difference. Thing
> is, for ebizzy to notice the number of TLB entries matter. On both
> machines I tested, the last level TLB had 512 entries. How many entries
> are on the last level TLB on lkp-ib03?
[ 0.116154] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[ 0.116154] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0
> > > I do see a few major regressions like this
> > >
> > > > 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
> > >
> > > but I have no idea what the test is doing and whether something happened
> > > that the test broke that time or if it's something to be really
> > > concerned about.
> >
> > This test case simply creates sparse files, populate them with zeros,
> > then delete them in parallel. Here $mem is physical memory size 128G,
> > $nr_cpu is 120.
> >
> > for i in `seq $nr_cpu`
> > do
> > create_sparse_file $SPARSE_FILE-$i $((mem / nr_cpu))
> > cp $SPARSE_FILE-$i /dev/null
> > done
> >
> > for i in `seq $nr_cpu`
> > do
> > rm $SPARSE_FILE-$i &
> > done
> >
>
> In itself, that does not explain why the result was 0 with the series
> applied. The 3.13-rc3 result was "324497". 324497 what?
It's the proc-vmstat.nr_tlb_local_flush_one number, which is showed in the end
of every "TOTAL" line:
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
...
324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
...
99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
btw, I've got the full test results for hackbench. Attached are the
new comparison results. There are small ups and downs, overall no big
regressions.
Thanks,
Fengguang
[-- Attachment #2: perf-probe --]
[-- Type: text/plain, Size: 323123 bytes --]
Added new event:
probe:native_flush_tlb_others (on native_flush_tlb_others)
You can now use it in all perf tools, such as:
perf record -e probe:native_flush_tlb_others -aR sleep 1
wrapper-4253 [000] d..2 26.132316: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
wrapper-4253 [000] d..2 26.132324: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
basename-4278 [018] d..2 26.138846: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
basename-4278 [018] d..2 26.138852: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
mkdir-4286 [019] d..2 26.140542: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
mkdir-4286 [019] d..2 26.140546: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sort-4284 [015] d..2 26.141105: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sort-4284 [015] d..2 26.141108: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
cat-4290 [025] d..2 26.142846: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-4290 [025] d..2 26.142850: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
ln-4293 [025] d..2 26.143633: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
ln-4293 [025] d..2 26.143636: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
uniq-4309 [027] d..2 26.149232: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
uniq-4309 [027] d..2 26.149236: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
grep-4312 [027] d..2 26.150960: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
grep-4312 [027] d..2 26.150964: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-4313 [018] d..2 26.151684: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-4313 [018] d..2 26.151688: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-4316 [018] d..2 26.152445: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-4316 [018] d..2 26.152449: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-proc-vmstat-4321 [026] d..2 26.154806: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-proc-vmstat-4321 [026] d..2 26.154810: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
wrapper-4322 [025] d..2 26.155261: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
wrapper-4322 [025] d..2 26.155266: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
run-job-4179 [005] d..3 26.163530: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
run-job-4179 [005] d..3 26.163534: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-4342 [026] d..2 26.165310: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-4342 [026] d..2 26.165313: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-4346 [017] d..2 26.167062: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-4346 [017] d..2 26.167066: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
ln-4350 [025] d..2 26.169556: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
ln-4350 [025] d..2 26.169559: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
grep-4351 [025] d..2 26.170301: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
grep-4351 [025] d..2 26.170304: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
gzip-slabinfo-4352 [019] d..2 26.171114: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
gzip-slabinfo-4352 [019] d..2 26.171118: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
ln-4365 [017] d..2 26.177229: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
ln-4365 [017] d..2 26.177233: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
ln-4366 [017] d..2 26.177977: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
ln-4366 [017] d..2 26.177981: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
grep-4367 [017] d..2 26.178749: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
grep-4367 [017] d..2 26.178753: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
gzip-buddyinfo-4368 [027] d..2 26.179567: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
gzip-buddyinfo-4368 [027] d..2 26.179570: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
run-job-4179 [006] d..3 26.180522: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
run-job-4179 [006] d..3 26.180526: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
lock_stat-4412 [017] d..2 26.206810: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
lock_stat-4412 [017] d..2 26.206815: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
gzip-softirqs-4427 [019] d..2 26.212948: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
gzip-softirqs-4427 [019] d..2 26.212952: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
softirqs-4423 [011] d..2 26.216181: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
softirqs-4423 [011] d..2 26.216184: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
ln-4451 [025] d..2 26.226603: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
ln-4451 [025] d..2 26.226607: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
ln-4452 [025] d..2 26.227228: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
ln-4452 [025] d..2 26.227231: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
grep-4453 [025] d..2 26.227897: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
grep-4453 [025] d..2 26.227900: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-pmeter-4454 [025] d..2 26.228929: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-pmeter-4454 [025] d..2 26.228932: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
run-job-4179 [012] d..3 26.229339: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
run-job-4179 [012] d..3 26.229350: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-5800 [027] d..2 27.144834: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-5800 [027] d..2 27.144842: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-5814 [018] d..2 27.145746: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-5814 [018] d..2 27.145753: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-6393 [018] d..2 27.184725: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-6393 [018] d..2 27.184730: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-24991 [001] d..3 41.384620: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-24991 [001] d..3 41.384625: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-24992 [004] d..2 41.384729: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-24992 [004] d..2 41.384731: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-24994 [008] d..2 41.384734: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-24994 [008] d..2 41.384737: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-24987 [009] d..2 41.384745: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-24987 [009] d..2 41.384751: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-24998 [007] d..2 41.384786: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-24998 [007] d..2 41.384788: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-25005 [001] d..3 41.384887: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25004 [008] d..3 41.384887: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25004 [008] d..3 41.384889: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25005 [001] d..3 41.384889: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25006 [002] d..3 41.384894: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25006 [002] d..3 41.384897: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-4486 [000] d..2 41.385039: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-4486 [000] d..2 41.385042: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-25181 [025] d..2 41.385873: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-25181 [025] d..2 41.385876: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-36771 [018] d..2 42.173860: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36771 [018] d..2 42.173866: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-36845 [018] d..2 42.178786: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36845 [018] d..2 42.178789: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37739 [018] d..2 42.240031: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37739 [018] d..2 42.240037: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37824 [021] d..2 42.245561: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37824 [021] d..2 42.245573: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-38027 [018] d..2 42.259586: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38027 [018] d..2 42.259589: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-38037 [018] d..2 42.260263: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38037 [018] d..2 42.260266: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-1955 [012] d..2 50.059494: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1955 [012] d..2 50.059502: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-25433 [013] d..2 58.301248: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25433 [013] d..2 58.301255: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-26510 [011] d..3 58.317961: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26510 [011] d..3 58.317965: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26324 [014] d..3 58.320884: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26324 [014] d..3 58.320888: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26318 [002] d..2 58.320890: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26318 [002] d..2 58.320895: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-5425 [017] d..2 58.330333: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-5425 [017] d..2 58.330336: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-8204 [002] d..3 58.514084: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-8204 [002] d..3 58.514089: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-16143 [018] d..2 59.069695: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-16143 [018] d..2 59.069701: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-16150 [019] d..2 59.069929: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-16150 [019] d..2 59.069932: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-25756 [019] d..2 59.759720: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-25756 [019] d..2 59.759725: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-13288 [015] d..2 73.980401: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13288 [015] d..2 73.980409: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-25763 [013] d..3 73.994915: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25764 [025] d..3 73.994918: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25763 [013] d..3 73.994918: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25764 [025] d..3 73.994922: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25908 [004] d..3 74.010041: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25909 [000] d..3 74.010045: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25908 [004] d..3 74.010046: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25909 [000] d..3 74.010048: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25922 [003] d..3 74.010285: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25922 [003] d..3 74.010288: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25927 [006] d..3 74.010291: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25928 [007] d..3 74.010292: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25927 [006] d..3 74.010294: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25928 [007] d..3 74.010295: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-5426 [001] d..2 74.010515: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-5426 [001] d..2 74.010518: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-35081 [018] d..2 74.688851: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35081 [018] d..2 74.688868: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-36129 [018] d..2 74.769295: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36129 [018] d..2 74.769300: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-38228 [018] d..2 74.925695: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38228 [018] d..2 74.925700: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-6273 [031] d..2 83.266936: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-6273 [031] d..2 83.266944: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-6275 [030] d..2 83.273009: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-6275 [030] d..2 83.273014: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37050 [011] d..2 94.044471: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37050 [011] d..2 94.044477: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-2701 [018] d..3 94.062167: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-2701 [018] d..3 94.062173: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-2702 [006] d..3 94.062311: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-2702 [006] d..3 94.062314: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-2703 [018] d..3 94.062314: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-2703 [018] d..3 94.062317: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-6035 [008] d..3 94.066102: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6035 [008] d..3 94.066105: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-6047 [002] d..3 94.066208: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6048 [008] d..3 94.066211: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6047 [002] d..3 94.066212: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-6048 [008] d..3 94.066213: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-6048 [003] d..2 94.066311: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6048 [003] d..2 94.066313: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-6050 [004] d..3 94.066346: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6052 [008] d..3 94.066349: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6050 [004] d..3 94.066349: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-6052 [008] d..3 94.066351: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-6516 [017] d..2 94.068870: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-6516 [017] d..2 94.068873: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-6519 [025] d..2 94.071850: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6519 [025] d..2 94.071856: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-6885 [021] d..2 94.094650: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-6885 [021] d..2 94.094655: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-16166 [018] d..2 94.725582: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-16166 [018] d..2 94.725588: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-16180 [018] d..2 94.726505: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-16180 [018] d..2 94.726508: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-27059 [029] d..2 95.744435: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-27059 [029] d..2 95.744444: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-18038 [010] d..2 103.488759: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18038 [010] d..2 103.488766: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-27101 [031] d..2 104.152293: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-27101 [031] d..2 104.152301: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-27102 [015] d..2 104.153279: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-27102 [015] d..2 104.153282: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-27103 [031] d..2 104.154353: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-27103 [031] d..2 104.154356: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-23802 [022] d..3 109.609822: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23802 [022] d..3 109.609828: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-27009 [013] d..3 109.638428: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-27009 [013] d..3 109.638434: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-27016 [015] d..3 109.638530: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-27016 [015] d..3 109.638533: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-27016 [001] d..2 109.638620: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-27016 [001] d..2 109.638622: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-27020 [014] d..3 109.638658: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-27020 [014] d..3 109.638661: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-6519 [001] d..2 109.638933: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-6519 [001] d..2 109.638935: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-33101 [022] d..2 110.025868: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33101 [022] d..2 110.025873: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-337 [018] d..2 110.561646: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-337 [018] d..2 110.561651: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-346 [018] d..2 110.562326: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-346 [018] d..2 110.562341: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-7347 [010] d..2 115.462617: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-7347 [010] d..2 115.462625: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-7386 [030] d..2 117.003703: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-7386 [030] d..2 117.003712: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-7521 [025] d..2 123.071016: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-7521 [025] d..2 123.071025: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34786 [014] d..2 131.534288: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34786 [014] d..2 131.534295: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-311 [001] d..2 131.606508: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32103 [004] d..2 131.606509: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32103 [004] d..2 131.606514: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-311 [001] d..2 131.606514: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-37587 [003] d..2 131.607612: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37587 [003] d..2 131.607614: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-32259 [026] d..2 131.608986: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32259 [026] d..2 131.608992: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-7229 [006] d..3 131.610144: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-7229 [006] d..3 131.610146: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-7683 [022] d..2 131.611850: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-7683 [022] d..2 131.611853: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-9996 [018] d..2 131.779603: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-9996 [018] d..2 131.779608: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-10006 [018] d..2 131.780217: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-10006 [018] d..2 131.780220: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-10015 [018] d..2 131.780784: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-10015 [018] d..2 131.780787: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-10312 [018] d..2 131.800862: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-10312 [018] d..2 131.800865: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-10324 [018] d..2 131.801643: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-10324 [018] d..2 131.801646: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-17878 [018] d..2 132.351484: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-17878 [018] d..2 132.351490: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-23526 [018] d..2 132.781301: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-23526 [018] d..2 132.781307: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-23539 [018] d..2 132.782127: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-23539 [018] d..2 132.782131: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-23550 [018] d..2 132.782825: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-23550 [018] d..2 132.782829: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-23560 [018] d..2 132.783523: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-23560 [018] d..2 132.783527: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-23793 [018] d..2 132.802025: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-23793 [018] d..2 132.802029: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-23806 [018] d..2 132.802726: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-23806 [018] d..2 132.802730: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-23825 [018] d..2 132.803977: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-23825 [018] d..2 132.803980: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-28313 [000] d..2 141.203886: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-28313 [000] d..2 141.203894: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-8852 [015] d..2 149.901367: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-8852 [015] d..2 149.901374: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-19078 [010] d..3 149.902214: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-19078 [010] d..3 149.902216: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-15270 [005] d..3 149.904612: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-15270 [005] d..3 149.904617: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
cmd-28526 [000] d..2 149.922783: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-28526 [000] d..2 149.922787: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
<...>-38073 [018] d..2 150.593573: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38073 [018] d..2 150.593579: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-39281 [028] d..2 166.599214: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39281 [028] d..2 166.599221: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-37747 [022] d..3 166.619673: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37747 [022] d..3 166.619678: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-28812 [028] d..2 166.629939: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-28812 [028] d..2 166.629944: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-8769 [025] d..2 166.636621: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-8769 [025] d..2 166.636625: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-8770 [000] d..2 166.639407: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-8770 [000] d..2 166.639410: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-17790 [018] d..2 167.265466: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-17790 [018] d..2 167.265472: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-17806 [018] d..2 167.266216: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-17806 [018] d..2 167.266219: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-17817 [018] d..2 167.266836: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-17817 [018] d..2 167.266839: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-20623 [018] d..2 167.469762: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-20623 [018] d..2 167.469767: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-20634 [018] d..2 167.470426: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-20634 [018] d..2 167.470429: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-20645 [018] d..2 167.471021: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-20645 [018] d..2 167.471025: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-29342 [009] d..2 176.505998: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-29342 [009] d..2 176.506007: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-8792 [030] d..2 183.412856: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-8792 [030] d..2 183.412864: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-14313 [006] d..3 183.441887: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-14313 [006] d..3 183.441894: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-29252 [000] d..3 183.455644: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-29252 [000] d..3 183.455649: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-29254 [015] d..3 183.455650: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-29254 [015] d..3 183.455653: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-29270 [015] d..3 183.455972: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-29270 [015] d..3 183.455974: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-8772 [001] d..2 183.457256: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-8772 [001] d..2 183.457259: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
cmd-29494 [002] d..2 183.460944: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-29494 [002] d..2 183.460946: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
<...>-39001 [027] d..2 184.106091: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39001 [027] d..2 184.106098: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-39035 [026] d..2 184.108017: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39035 [026] d..2 184.108021: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-39343 [026] d..2 184.131157: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39343 [026] d..2 184.131161: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-9606 [022] d..2 192.775662: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-9606 [022] d..2 192.775668: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
diskstats-9617 [022] d..2 193.302132: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
diskstats-9617 [022] d..2 193.302138: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
pagetypeinfo-9626 [018] dN.2 193.647370: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
pagetypeinfo-9626 [018] dN.2 193.647377: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
buddyinfo-9634 [022] d..2 193.964587: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
buddyinfo-9634 [022] d..2 193.964594: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
proc-vmstat-9684 [016] d..2 195.804034: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
proc-vmstat-9684 [016] d..2 195.804041: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
numa-vmstat-9697 [018] d..2 196.157195: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
numa-vmstat-9697 [018] d..2 196.157202: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
numa-numastat-9737 [019] d..2 197.747432: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
numa-numastat-9737 [019] d..2 197.747439: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-9457 [009] d..3 202.166188: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9456 [000] d..3 202.166188: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9456 [000] d..3 202.166195: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-9457 [009] d..3 202.166195: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-9461 [001] d..3 202.166203: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9461 [001] d..3 202.166208: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-9466 [026] d..3 202.166295: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9466 [026] d..3 202.166299: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-9473 [014] d..3 202.166426: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9473 [014] d..3 202.166429: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-29496 [027] d..2 202.166622: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-29496 [027] d..2 202.166625: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-4478 [000] d..2 202.167970: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-4478 [000] d..2 202.167972: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
cmd-9815 [017] d..2 202.168153: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-9815 [017] d..2 202.168157: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-9815 [000] d..2 202.171340: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-9815 [000] d..2 202.171342: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
hackbench-9817 [025] d..2 202.171499: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9817 [025] d..2 202.171502: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-17046 [018] d..2 202.635324: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-17046 [018] d..2 202.635329: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-17801 [018] d..2 202.686913: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-17801 [018] d..2 202.686919: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-21179 [018] d..2 202.924592: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-21179 [018] d..2 202.924598: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-23892 [018] d..2 203.114688: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-23892 [018] d..2 203.114694: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-30428 [026] d..2 211.708403: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-30428 [026] d..2 211.708412: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-10107 [014] d..3 220.316928: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10107 [014] d..3 220.316934: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10111 [024] d..3 220.317044: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10111 [024] d..3 220.317047: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26460 [018] d..2 220.329147: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26460 [018] d..2 220.329153: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-30262 [008] d..3 220.330323: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-30262 [008] d..3 220.330326: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-30264 [006] d..3 220.330329: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-30266 [003] d..3 220.330330: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-30264 [006] d..3 220.330332: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-30266 [003] d..3 220.330332: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-30268 [006] d..3 220.330459: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-30269 [008] d..3 220.330460: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-30268 [006] d..3 220.330462: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-30269 [008] d..3 220.330463: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-28978 [020] d..3 220.330606: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-28978 [020] d..3 220.330609: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-28985 [002] d..3 220.330829: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-28986 [004] d..3 220.330831: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-28987 [007] d..3 220.330831: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-28985 [002] d..3 220.330832: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-28987 [007] d..3 220.330834: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-28986 [004] d..3 220.330834: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-28988 [006] d..3 220.330838: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-28988 [006] d..3 220.330841: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-30628 [017] d..2 220.333146: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-30628 [017] d..2 220.333149: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-30629 [000] d..2 220.339082: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-30629 [000] d..2 220.339085: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-10694 [030] d..2 230.623859: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-10694 [030] d..2 230.623868: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-10703 [011] d..2 230.849553: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-10703 [011] d..2 230.849561: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
proc-vmstat-10725 [027] d..2 231.633076: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
proc-vmstat-10725 [027] d..2 231.633085: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-39616 [030] d..2 236.263770: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39616 [030] d..2 236.263778: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-34838 [000] d..2 236.265072: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34838 [000] d..2 236.265076: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-10367 [006] d..3 236.281621: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10367 [006] d..3 236.281627: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10605 [023] d..3 236.306679: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10605 [023] d..3 236.306683: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10614 [003] d..3 236.306871: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10614 [003] d..3 236.306874: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10613 [000] d..3 236.306877: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10616 [007] d..3 236.306880: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10613 [000] d..3 236.306882: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10616 [007] d..3 236.306883: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10615 [006] d..3 236.306913: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10615 [006] d..3 236.306916: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-30631 [007] d..2 236.307066: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-30631 [007] d..2 236.307068: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-10829 [025] d..2 236.307914: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-10829 [025] d..2 236.307917: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-10830 [000] d..2 236.408599: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-10830 [000] d..2 236.408604: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
cat-19468 [018] d..2 237.027290: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-19468 [018] d..2 237.027296: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
buddyinfo-4364 [005] d..2 237.027836: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
buddyinfo-4364 [005] d..2 237.027838: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
cat-19913 [018] d..2 237.060094: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-19913 [018] d..2 237.060098: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-19925 [018] d..2 237.061048: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-19925 [018] d..2 237.061051: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-19930 [019] d..2 237.061263: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-19930 [019] d..2 237.061267: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-20206 [018] d..2 237.082837: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-20206 [018] d..2 237.082840: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-22452 [018] d..2 237.246747: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-22452 [018] d..2 237.246753: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-22463 [018] d..2 237.247508: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-22463 [018] d..2 237.247511: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-29880 [007] d..3 255.449246: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-29880 [007] d..3 255.449252: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31308 [005] d..3 255.469525: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31310 [008] d..3 255.469526: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31308 [005] d..3 255.469528: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31312 [001] d..3 255.469529: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31310 [008] d..3 255.469529: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31312 [001] d..3 255.469531: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31314 [003] d..3 255.469533: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31314 [003] d..3 255.469536: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31318 [007] d..3 255.469635: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31318 [007] d..3 255.469638: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31319 [001] d..3 255.469761: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31319 [001] d..3 255.469763: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31323 [002] d..3 255.469768: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31324 [003] d..3 255.469770: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31323 [002] d..3 255.469771: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31324 [003] d..3 255.469772: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-31327 [022] d..3 255.469779: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31327 [022] d..3 255.469782: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-10832 [000] d..2 255.469973: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10832 [000] d..2 255.469978: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-31663 [025] d..2 255.470867: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-31663 [025] d..2 255.470871: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-35594 [019] d..2 255.739725: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35594 [019] d..2 255.739731: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-35605 [018] d..2 255.740454: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35605 [018] d..2 255.740458: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37646 [019] d..2 255.884609: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37646 [019] d..2 255.884614: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37658 [018] d..2 255.885286: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37658 [018] d..2 255.885289: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37670 [018] d..2 255.885905: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37670 [018] d..2 255.885908: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-2404 [018] d..2 256.255378: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-2404 [018] d..2 256.255384: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-2416 [018] d..2 256.256043: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-2416 [018] d..2 256.256047: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-2428 [018] d..2 256.256711: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-2428 [018] d..2 256.256715: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-4067 [018] d..2 256.372788: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-4067 [018] d..2 256.372794: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-9358 [018] d..2 256.740999: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-9358 [018] d..2 256.741005: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-9374 [018] d..2 256.741755: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-9374 [018] d..2 256.741759: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-9386 [018] d..2 256.742415: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-9386 [018] d..2 256.742418: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-11389 [018] d..2 256.886437: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-11389 [018] d..2 256.886443: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-11403 [018] d..2 256.887254: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-11403 [018] d..2 256.887258: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-11413 [018] d..2 256.887971: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-11413 [018] d..2 256.887974: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-11424 [018] d..2 256.888671: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-11424 [018] d..2 256.888674: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-31673 [010] d..2 272.051664: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31673 [010] d..2 272.051671: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-11519 [006] d..2 272.086419: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11519 [006] d..2 272.086425: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-11640 [000] d..3 272.100364: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11640 [000] d..3 272.100367: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-11641 [009] d..3 272.100376: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11641 [009] d..3 272.100379: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-11652 [014] d..3 272.100559: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-11653 [000] d..3 272.100560: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11652 [014] d..3 272.100562: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-11653 [000] d..3 272.100562: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-11655 [010] d..3 272.100563: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11655 [010] d..3 272.100566: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-11656 [029] d..3 272.100566: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11656 [029] d..3 272.100570: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-11649 [028] d..2 272.100576: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11649 [028] d..2 272.100579: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-31666 [014] d..2 272.100762: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31666 [014] d..2 272.100764: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
cmd-11883 [000] d..2 272.105257: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-11883 [000] d..2 272.105259: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
sleep-19207 [018] d..2 272.611303: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-19207 [018] d..2 272.611308: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
numa-meminfo-4291 [012] d..2 272.836792: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
numa-meminfo-4291 [012] d..2 272.836807: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-23794 [018] d..2 272.934305: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-23794 [018] d..2 272.934310: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-32462 [009] d..2 281.808680: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-32462 [009] d..2 281.808688: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
meminfo-32496 [028] dN.2 283.043054: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
meminfo-32496 [028] dN.2 283.043063: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-17774 [014] d..2 288.706883: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17774 [014] d..2 288.706891: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-23725 [023] d..3 288.723303: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23725 [023] d..3 288.723308: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-23731 [008] d..3 288.723541: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23731 [008] d..3 288.723543: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-23732 [001] d..3 288.723628: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23732 [001] d..3 288.723631: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-32245 [014] d..3 288.724604: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32245 [014] d..3 288.724607: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-32242 [009] d..2 288.724724: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32242 [009] d..2 288.724727: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-32245 [010] d..2 288.724727: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32245 [010] d..2 288.724730: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-32246 [014] d..2 288.724737: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32246 [014] d..2 288.724739: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-32257 [024] d..2 288.724907: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-32257 [024] d..2 288.724911: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-32628 [017] d..2 288.733944: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-32628 [017] d..2 288.733948: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-32629 [000] d..2 288.736797: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-32629 [000] d..2 288.736800: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
pagetypeinfo-4380 [005] d..2 289.186152: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
pagetypeinfo-4380 [005] d..2 289.186157: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
<...>-39703 [019] d..2 289.218423: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39703 [019] d..2 289.218428: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
proc-vmstat-4314 [001] d..2 289.442694: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
proc-vmstat-4314 [001] d..2 289.442699: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
hackbench-10481 [011] d..3 305.039025: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-10481 [011] d..3 305.039032: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-12874 [017] d..2 305.043240: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-12874 [017] d..2 305.043245: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-12875 [000] d..2 305.046024: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-12875 [000] d..2 305.046027: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-22820 [018] d..2 305.722376: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-22820 [018] d..2 305.722382: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-24198 [018] d..2 305.820638: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-24198 [018] d..2 305.820644: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-25007 [018] d..2 305.877738: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-25007 [018] d..2 305.877742: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-26583 [018] d..2 305.989942: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-26583 [018] d..2 305.989947: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-26603 [018] d..2 305.991566: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-26603 [018] d..2 305.991569: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-33489 [029] d..2 316.200806: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33489 [029] d..2 316.200814: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-24052 [021] d..3 320.411188: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-24052 [021] d..3 320.411194: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13327 [005] d..2 320.412872: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13327 [005] d..2 320.412875: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-17774 [011] d..2 320.413096: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17774 [011] d..2 320.413101: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-18175 [029] d..3 320.414436: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18175 [029] d..3 320.414440: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18173 [010] d..3 320.414469: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18173 [010] d..3 320.414472: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18177 [001] d..2 320.414596: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18177 [001] d..2 320.414598: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-33587 [017] d..2 320.428289: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33587 [017] d..2 320.428293: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-33588 [000] d..2 320.431108: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33588 [000] d..2 320.431113: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-5492 [018] d..2 321.285665: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-5492 [018] d..2 321.285671: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-5852 [018] d..2 321.312094: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-5852 [018] d..2 321.312098: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-5864 [018] d..2 321.312794: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-5864 [018] d..2 321.312797: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-5877 [018] d..2 321.313547: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-5877 [018] d..2 321.313551: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-6426 [019] d..2 321.351980: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-6426 [019] d..2 321.351986: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-13625 [009] d..2 330.712911: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-13625 [009] d..2 330.712920: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-11999 [020] d..2 334.714576: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-11999 [020] d..2 334.714583: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-3568 [024] d..2 335.906807: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-3568 [024] d..2 335.906815: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-1396 [013] d..2 335.910351: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1396 [013] d..2 335.910355: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-13547 [000] d..3 335.945106: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13546 [009] d..3 335.945106: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13545 [010] d..3 335.945108: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13547 [000] d..3 335.945112: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13546 [009] d..3 335.945112: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13545 [010] d..3 335.945112: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13556 [031] d..3 335.945127: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13556 [031] d..3 335.945130: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13561 [011] d..3 335.945238: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13561 [011] d..3 335.945242: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13564 [013] d..3 335.945305: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13564 [013] d..3 335.945308: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-33590 [001] d..2 335.951758: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33590 [001] d..2 335.951762: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-13757 [025] d..2 335.952581: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-13757 [025] d..2 335.952585: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-13758 [025] d..2 335.953308: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-13758 [025] d..2 335.953311: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-13759 [025] d..2 335.954381: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-13759 [025] d..2 335.954384: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-13758 [002] d..2 335.956561: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-13758 [002] d..2 335.956564: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-23531 [014] d..2 336.679319: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-23531 [014] d..2 336.679325: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-24701 [026] d..2 336.767783: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-24701 [026] d..2 336.767790: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34302 [001] d..2 345.336043: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34302 [001] d..2 345.336049: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34312 [019] d..2 345.884490: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34312 [019] d..2 345.884496: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34320 [006] dN.2 346.238729: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34320 [006] dN.2 346.238736: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34368 [018] d..2 348.109540: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34368 [018] d..2 348.109546: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34394 [023] d..2 349.121762: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34394 [023] d..2 349.121768: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-31437 [017] d..2 353.114041: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-31437 [017] d..2 353.114047: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-25405 [011] d..2 353.221349: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25405 [011] d..2 353.221357: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-14761 [019] d..3 353.236477: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-14761 [019] d..3 353.236483: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25030 [000] d..3 353.268911: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25030 [000] d..3 353.268914: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25035 [002] d..3 353.268922: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25035 [002] d..3 353.268926: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25044 [000] d..3 353.269021: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25045 [004] d..3 353.269022: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25044 [000] d..3 353.269023: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25045 [004] d..3 353.269025: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-25029 [008] d..2 353.269127: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-25029 [008] d..2 353.269130: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-13760 [000] d..2 353.269311: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13760 [000] d..2 353.269312: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-1784 [018] d..2 353.816400: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-1784 [018] d..2 353.816405: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-2610 [018] d..2 353.876294: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-2610 [018] d..2 353.876300: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-3423 [019] d..2 353.935421: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-3423 [019] d..2 353.935427: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-5514 [018] d..2 354.076524: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-5514 [018] d..2 354.076530: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-6174 [018] d..2 354.122191: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-6174 [018] d..2 354.122196: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34575 [001] d..3 370.587044: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34575 [001] d..3 370.587049: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34478 [001] d..3 370.590777: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34478 [001] d..3 370.590780: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34488 [008] d..3 370.590937: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34489 [001] d..3 370.590939: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34488 [008] d..3 370.590940: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34489 [001] d..3 370.590941: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34491 [002] d..3 370.590942: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34493 [005] d..3 370.590943: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34491 [002] d..3 370.590945: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34493 [005] d..3 370.590946: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34495 [003] d..3 370.590957: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34496 [018] d..3 370.590957: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34495 [003] d..3 370.590960: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-34496 [018] d..3 370.590960: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
cat-25052 [018] d..2 371.316744: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-25052 [018] d..2 371.316749: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-25062 [018] d..2 371.317396: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-25062 [018] d..2 371.317399: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-26028 [018] d..2 371.384824: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-26028 [018] d..2 371.384829: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-26042 [018] d..2 371.385594: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-26042 [018] d..2 371.385598: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-26125 [009] d..2 387.211207: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26134 [000] d..2 387.211207: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26134 [000] d..2 387.211213: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-26125 [009] d..2 387.211213: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-33741 [024] d..3 387.229243: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33741 [024] d..3 387.229247: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-35169 [004] d..3 387.249125: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35165 [002] d..3 387.249125: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35169 [004] d..3 387.249130: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-35165 [002] d..3 387.249130: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-35176 [005] d..3 387.249227: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35176 [005] d..3 387.249230: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-35178 [001] d..3 387.249247: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35178 [001] d..3 387.249250: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-35176 [006] d..2 387.249361: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35176 [006] d..2 387.249363: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-14689 [000] d..2 387.249634: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-14689 [000] d..2 387.249636: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-35414 [000] d..2 387.253798: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35414 [000] d..2 387.253800: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
<...>-38160 [019] d..2 387.434742: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38160 [019] d..2 387.434748: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-5187 [018] d..2 387.963000: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-5187 [018] d..2 387.963006: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-6902 [018] d..2 388.085003: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-6902 [018] d..2 388.085009: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-7171 [018] d..2 388.104827: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-7171 [018] d..2 388.104831: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-9143 [010] d..3 403.332935: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-9143 [010] d..3 403.332942: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1823 [003] d..3 403.341238: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1823 [003] d..3 403.341243: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1839 [001] d..3 403.341443: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1839 [001] d..3 403.341446: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-15633 [017] d..2 403.349788: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-15633 [017] d..2 403.349792: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-15634 [017] d..2 403.350472: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-15634 [017] d..2 403.350476: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-15634 [000] d..2 403.353760: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-15634 [000] d..2 403.353764: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-27186 [017] d..2 404.136325: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-27186 [017] d..2 404.136331: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-27849 [018] d..2 404.182313: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-27849 [018] d..2 404.182319: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-28224 [018] d..2 404.208279: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-28224 [018] d..2 404.208283: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-36281 [028] d..2 415.353277: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36281 [028] d..2 415.353286: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-23696 [021] d..3 419.678000: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23696 [021] d..3 419.678006: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-35673 [013] d..2 419.678891: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-35673 [013] d..2 419.678896: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-26161 [018] d..2 419.701866: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26161 [018] d..2 419.701869: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-26170 [008] d..3 419.701999: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26169 [006] d..3 419.702000: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26170 [008] d..3 419.702002: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26169 [006] d..3 419.702002: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26175 [003] d..3 419.702121: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26175 [003] d..3 419.702123: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26177 [007] d..3 419.702125: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26178 [008] d..3 419.702126: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-26177 [007] d..3 419.702128: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-26178 [008] d..3 419.702128: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-15636 [001] d..2 419.708291: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-15636 [001] d..2 419.708293: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
cat-5788 [026] d..2 420.420661: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-5788 [026] d..2 420.420668: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-5804 [026] d..2 420.421581: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-5804 [026] d..2 420.421584: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-6580 [026] d..2 420.477573: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-6580 [026] d..2 420.477579: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-6601 [026] d..2 420.478917: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-6601 [026] d..2 420.478920: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-16487 [021] d..2 429.066880: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-16487 [021] d..2 429.066887: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-16490 [019] d..2 429.088763: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-16490 [019] d..2 429.088767: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
numa-vmstat-16501 [006] d..2 429.153385: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
numa-vmstat-16501 [006] d..2 429.153391: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
slabinfo-16504 [021] d..2 429.790110: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
slabinfo-16504 [021] d..2 429.790117: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
diskstats-16512 [021] dN.2 429.905224: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
diskstats-16512 [021] dN.2 429.905229: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-5821 [013] d..2 438.998932: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-5821 [013] d..2 438.998940: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-16187 [006] d..2 439.009408: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-16187 [006] d..2 439.009414: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-12599 [024] d..2 439.018669: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-12594 [015] d..2 439.018672: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-12599 [024] d..2 439.018674: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-12594 [015] d..2 439.018675: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-13701 [015] d..3 439.021600: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13701 [015] d..3 439.021602: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-13712 [012] d..3 439.021726: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13712 [012] d..3 439.021729: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1072 [006] d..3 439.022104: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1072 [006] d..3 439.022107: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1071 [031] d..3 439.022108: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1071 [031] d..3 439.022110: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-16712 [017] d..2 439.026648: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-16712 [017] d..2 439.026652: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-16713 [000] d..2 439.029424: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-16713 [000] d..2 439.029426: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
cat-20281 [027] d..2 439.261883: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-20281 [027] d..2 439.261891: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-25484 [018] d..2 439.625854: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-25484 [018] d..2 439.625860: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-25496 [018] d..2 439.626604: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-25496 [018] d..2 439.626607: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-28685 [018] d..2 439.855131: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-28685 [018] d..2 439.855137: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-28703 [018] d..2 439.856016: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-28703 [018] d..2 439.856020: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-28724 [019] d..2 439.857427: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-28724 [019] d..2 439.857442: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-28738 [018] d..2 439.858151: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-28738 [018] d..2 439.858154: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-16729 [011] d..2 454.397346: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-16729 [011] d..2 454.397353: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-17331 [000] d..3 454.399006: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17331 [000] d..3 454.399009: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-16715 [000] d..2 454.415822: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-16715 [000] d..2 454.415826: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-37432 [025] d..2 454.416663: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37432 [025] d..2 454.416666: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37433 [000] d..2 454.419520: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37433 [000] d..2 454.419522: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
<...>-38412 [019] d..2 454.480947: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38412 [019] d..2 454.480952: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-6420 [018] d..2 455.056449: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-6420 [018] d..2 455.056456: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
numa-vmstat-17528 [012] d..2 464.636584: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
numa-vmstat-17528 [012] dn.2 464.636593: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-8750 [011] d..2 471.391914: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-8750 [011] d..2 471.391922: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-8674 [028] d..2 471.405240: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-8674 [028] d..2 471.405245: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-17410 [007] d..3 471.414603: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17410 [007] d..3 471.414608: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-17417 [004] d..3 471.414727: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17418 [021] d..3 471.414728: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17419 [008] d..3 471.414729: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17417 [004] d..3 471.414730: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-17418 [021] d..3 471.414731: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-17419 [008] d..3 471.414732: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-17686 [017] d..2 471.417906: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-17686 [017] d..2 471.417910: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-17689 [025] d..2 471.421910: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17689 [025] d..2 471.421915: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-27181 [018] d..2 472.077665: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-27181 [018] d..2 472.077671: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-28065 [018] d..2 472.139008: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-28065 [018] d..2 472.139014: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-28356 [018] d..2 472.159720: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-28356 [018] d..2 472.159723: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-28404 [018] d..2 472.163164: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-28404 [018] d..2 472.163167: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-28673 [018] d..2 472.183239: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-28673 [018] d..2 472.183243: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-38249 [030] d..2 481.601517: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38249 [030] d..2 481.601527: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-19653 [014] d..2 487.770502: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-19653 [014] d..2 487.770509: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-38162 [001] d..3 487.801321: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38163 [002] d..3 487.801321: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38163 [002] d..3 487.801326: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-38162 [001] d..3 487.801326: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-38166 [000] d..3 487.801330: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38166 [000] d..3 487.801335: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-38173 [023] d..3 487.801526: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38173 [023] d..3 487.801529: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-38178 [004] d..3 487.801654: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38178 [004] d..3 487.801657: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-17689 [000] d..2 487.801848: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17689 [000] d..2 487.801850: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-9301 [018] d..2 488.610320: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-9301 [018] d..2 488.610326: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-17968 [030] d..2 503.299042: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-17968 [030] d..2 503.299049: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-18369 [012] d..3 503.302507: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18369 [012] d..3 503.302510: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18376 [009] d..3 503.302605: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18376 [009] d..3 503.302608: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18378 [015] d..3 503.302735: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18378 [015] d..3 503.302738: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18377 [004] d..3 503.302744: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18377 [004] d..3 503.302749: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-18621 [017] d..2 503.304094: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-18621 [017] d..2 503.304098: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-18622 [017] d..2 503.304787: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-18622 [017] d..2 503.304791: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-18622 [000] d..2 503.306864: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-18622 [000] d..2 503.306867: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
sleep-22911 [018] d..2 503.590304: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-22911 [018] d..2 503.590309: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-23271 [018] d..2 503.616071: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-23271 [018] d..2 503.616075: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-28957 [018] d..2 504.014071: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-28957 [018] d..2 504.014076: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-29137 [018] d..2 504.026857: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-29137 [018] d..2 504.026860: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-32411 [018] d..2 504.259520: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-32411 [018] d..2 504.259525: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-32423 [018] d..2 504.260170: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-32423 [018] d..2 504.260173: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-32434 [018] d..2 504.260796: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-32434 [018] d..2 504.260800: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37040 [018] d..2 504.591363: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37040 [018] d..2 504.591369: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37050 [018] d..2 504.592002: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37050 [018] d..2 504.592006: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-39272 [026] d..2 510.774177: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39272 [026] d..2 510.774184: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-18919 [020] d..3 525.327423: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18919 [020] d..3 525.327429: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18921 [002] d..3 525.327556: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18920 [005] d..3 525.327556: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18921 [002] d..3 525.327558: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18920 [005] d..3 525.327559: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18923 [005] d..3 525.327694: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18924 [020] d..3 525.327695: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-18923 [005] d..3 525.327697: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-18924 [020] d..3 525.327698: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-37603 [007] d..2 525.329785: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37603 [007] d..2 525.329788: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-39577 [017] d..2 525.338675: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-39577 [017] d..2 525.338678: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-10856 [018] d..2 526.208864: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-10856 [018] d..2 526.208871: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-10870 [018] d..2 526.209663: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-10870 [018] d..2 526.209667: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-10881 [018] d..2 526.210409: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-10881 [018] d..2 526.210413: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-10893 [018] d..2 526.211100: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-10893 [018] d..2 526.211104: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-11650 [018] d..2 526.267404: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-11650 [018] d..2 526.267409: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-11660 [018] d..2 526.268093: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-11660 [018] d..2 526.268097: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-2492 [012] d..3 541.156654: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-2492 [012] d..3 541.156661: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-19351 [000] d..2 541.160888: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-19351 [000] d..2 541.160891: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-19357 [010] d..3 541.160894: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-19357 [010] d..3 541.160897: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-19804 [017] d..2 541.185425: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-19804 [017] d..2 541.185430: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-40436 [000] d..2 553.570201: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40436 [000] d..2 553.570209: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-20062 [010] d..2 557.104493: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-20062 [010] d..2 557.104501: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-40250 [011] d..3 557.120655: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40250 [011] d..3 557.120661: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-40255 [011] d..3 557.120785: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40254 [010] d..3 557.120787: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40255 [011] d..3 557.120787: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-40254 [010] d..3 557.120789: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-40259 [030] d..3 557.120795: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40259 [030] d..3 557.120799: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-40293 [011] d..3 557.127072: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40292 [010] d..3 557.127074: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40293 [011] d..3 557.127075: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-40292 [010] d..3 557.127077: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-9676 [019] d..2 557.792696: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-9676 [019] d..2 557.792702: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-9709 [018] d..2 557.794744: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-9709 [018] d..2 557.794747: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-12583 [018] d..2 557.999609: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-12583 [018] d..2 557.999615: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-13335 [018] d..2 558.053559: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-13335 [018] d..2 558.053564: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
meminfo-20548 [024] dN.2 559.205383: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
meminfo-20548 [024] dN.2 559.205391: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
meminfo-20581 [014] d..2 566.213924: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
meminfo-20581 [014] d..2 566.213932: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-20041 [027] d..2 574.935680: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-20041 [027] d..2 574.935687: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-20509 [022] d..3 574.943152: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-20509 [022] d..3 574.943158: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-20015 [029] d..3 574.943570: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-20015 [029] d..3 574.943574: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-20023 [014] d..2 574.943774: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-20023 [014] d..2 574.943776: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-20028 [010] d..3 574.943806: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-20028 [010] d..3 574.943809: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-40535 [001] d..2 574.944045: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-40535 [001] d..2 574.944048: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-20774 [018] d..2 574.945073: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-20774 [018] d..2 574.945076: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-4478 [010] d..2 574.945471: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-4478 [010] d..2 574.945473: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
cmd-20775 [017] d..2 574.945643: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-20775 [017] d..2 574.945647: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> load_script+0x1be/0x1dc <ffffffff81204e18>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-20776 [018] d..2 574.945804: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-20776 [018] d..2 574.945807: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-20777 [018] d..2 574.946398: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-20777 [018] d..2 574.946401: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-20778 [017] d..2 574.946680: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-20778 [017] d..2 574.946683: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-25954 [031] d..2 575.299137: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-25954 [031] d..2 575.299144: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-30421 [031] d..2 575.618382: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-30421 [031] d..2 575.618389: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-30441 [018] d..2 575.619326: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-30441 [018] d..2 575.619331: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-33622 [026] d..2 575.847838: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33622 [026] d..2 575.847846: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-755 [003] d..2 586.162355: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-755 [003] d..2 586.162362: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-36949 [003] d..3 591.923411: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36950 [005] d..3 591.923411: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36949 [003] d..3 591.923416: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-36950 [005] d..3 591.923416: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
<...>-36947 [002] d..2 591.923521: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-36947 [002] d..2 591.923523: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-38106 [014] d..2 591.923797: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-38106 [014] d..2 591.923803: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
cmd-884 [000] d..2 591.942744: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-884 [000] d..2 591.942747: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
date-14857 [018] d..2 592.893222: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-14857 [018] d..2 592.893227: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
hackbench-13947 [029] d..2 607.926390: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-13947 [029] d..2 607.926398: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-21504 [001] d..2 608.104614: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-21504 [001] d..2 608.104618: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-887 [027] d..3 608.109310: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-887 [027] d..3 608.109316: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-893 [012] d..3 608.109323: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-898 [026] d..3 608.109324: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-893 [012] d..3 608.109327: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-898 [026] d..3 608.109328: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-904 [015] d..3 608.109484: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-906 [000] d..3 608.109487: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-904 [015] d..3 608.109487: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-906 [000] d..3 608.109490: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
date-21754 [017] d..2 608.113636: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-21754 [017] d..2 608.113639: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cmd-21755 [000] d..2 608.117241: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cmd-21755 [000] d..2 608.117243: <stack trace>
=> dup_mm+0x37e/0x480 <ffffffff810c1829>
=> copy_process.part.30+0xa58/0x11ee <ffffffff810c23ae>
=> do_fork+0xba/0x2ac <ffffffff810c2ce1>
=> SyS_clone+0x16/0x18 <ffffffff810c2f4d>
=> stub_clone+0x69/0x90 <ffffffff81a07969>
<...>-33501 [018] d..2 608.948750: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-33501 [018] d..2 608.948756: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34109 [016] d..2 608.991902: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34109 [016] d..2 608.991907: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-37454 [019] d..2 609.244419: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-37454 [019] d..2 609.244425: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
<...>-34765 [015] d..2 628.488220: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-34765 [015] d..2 628.488227: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
<...>-32820 [027] d..3 628.494492: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
<...>-32820 [027] d..3 628.494495: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-23685 [004] d..3 628.507366: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23685 [004] d..3 628.507371: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-23691 [017] d..3 628.507564: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23690 [006] d..3 628.507565: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23691 [017] d..3 628.507567: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-23690 [006] d..3 628.507569: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-23693 [001] d..2 628.507665: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-23693 [001] d..2 628.507668: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> do_exit+0x38b/0x989 <ffffffff810c5a31>
=> do_group_exit+0x44/0xac <ffffffff810c60a9>
=> __wake_up_parent+0x0/0x28 <ffffffff810c6125>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
hackbench-1616 [007] d..3 628.512842: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1616 [007] d..3 628.512845: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1618 [001] d..3 628.512990: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1618 [001] d..3 628.512992: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1620 [002] d..3 628.512994: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1623 [006] d..3 628.512997: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1620 [002] d..3 628.512997: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1623 [006] d..3 628.512999: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-1625 [007] d..3 628.513003: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-1625 [007] d..3 628.513006: <stack trace>
=> ptep_clear_flush+0x36/0x40 <ffffffff81196fbc>
=> do_wp_page+0x685/0x7c1 <ffffffff811877c2>
=> handle_mm_fault+0x9e9/0xc9c <ffffffff8118a32f>
=> __do_page_fault+0x3b6/0x504 <ffffffff81a03b0e>
=> do_page_fault+0xe/0x10 <ffffffff81a03c6a>
=> page_fault+0x28/0x30 <ffffffff81a00858>
hackbench-21757 [017] d..2 628.513245: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
hackbench-21757 [017] d..2 628.513247: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
date-2059 [025] d..2 628.514066: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-2059 [025] d..2 628.514070: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
run-job-4175 [001] d..2 628.781525: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
run-job-4175 [001] d..2 628.781527: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
sleep-2068 [026] d..2 628.784172: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-2068 [026] d..2 628.784178: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
date-2066 [029] d..2 628.784225: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
date-2066 [029] d..2 628.784229: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cat-2071 [025] d..2 628.784987: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cat-2071 [025] d..2 628.784990: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
sleep-2088 [025] d..2 629.021982: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
sleep-2088 [025] d..2 629.021987: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
wget-2113 [017] d..2 629.792738: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
wget-2113 [017] d..2 629.792742: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
cp-2118 [000] d..2 630.035622: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
cp-2118 [000] d..2 630.035627: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
lsof-2130 [012] d..2 631.125170: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
lsof-2130 [012] d..2 631.125174: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> unmap_region+0xdd/0xef <ffffffff8118d0ff>
=> do_munmap+0x250/0x2e3 <ffffffff8118ea85>
=> vm_munmap+0x42/0x5b <ffffffff8118eb5a>
=> SyS_munmap+0x23/0x29 <ffffffff8118eb96>
=> system_call_fastpath+0x16/0x1b <ffffffff81a07669>
wd_keepalive-2236 [025] d..2 641.795155: native_flush_tlb_others: (native_flush_tlb_others+0x0/0x30 <ffffffff8106c861>)
wd_keepalive-2236 [025] d..2 641.795165: <stack trace>
=> tlb_flush_mmu+0x47/0x75 <ffffffff81185e9a>
=> tlb_finish_mmu+0x14/0x39 <ffffffff81185edc>
=> exit_mmap+0x9b/0x12c <ffffffff8118ec37>
=> mmput+0x74/0x109 <ffffffff810c1222>
=> flush_old_exec+0x6fe/0x76b <ffffffff811c6027>
=> load_elf_binary+0x2b9/0x16c4 <ffffffff812064de>
=> search_binary_handler+0x70/0x168 <ffffffff811c53ed>
=> do_execve_common.isra.22+0x42d/0x645 <ffffffff811c690f>
=> do_execve+0x18/0x1a <ffffffff811c6b3f>
=> SyS_execve+0x3b/0x51 <ffffffff811c6d6a>
=> stub_execve+0x69/0xa0 <ffffffff81a07bb9>
[-- Attachment #3: config-3.13.0-rc3-00004-geabb1f8 --]
[-- Type: text/plain, Size: 81251 bytes --]
#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.13.0-rc3 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y
#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
# CONFIG_AUDIT is not set
#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_DOMAIN=y
# CONFIG_IRQ_DOMAIN_DEBUG is not set
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=y
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
CONFIG_RCU_FAST_NO_HZ=y
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=20
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
CONFIG_ARCH_USES_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING_DEFAULT_ENABLED is not set
CONFIG_NUMA_BALANCING=y
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
# CONFIG_CGROUP_CPUACCT is not set
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
# CONFIG_RT_GROUP_SCHED is not set
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_UIDGID_STRICT_TYPE_CHECKS=y
# CONFIG_SCHED_AUTOGROUP is not set
CONFIG_MM_OWNER=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_EXPERT=y
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_PCI_QUIRKS=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y
#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_SLUB_CPU_PARTIAL=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# CONFIG_OPROFILE is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
# CONFIG_JUMP_LABEL is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
# CONFIG_SYSTEM_TRUSTED_KEYRING is not set
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_CMDLINE_PARSER is not set
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
CONFIG_BLOCK_COMPAT=y
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y
#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_VSMP is not set
# CONFIG_X86_INTEL_LPSS is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
# CONFIG_PARAVIRT_SPINLOCKS is not set
# CONFIG_XEN is not set
# CONFIG_XEN_PRIVILEGED_GUEST is not set
CONFIG_KVM_GUEST=y
# CONFIG_KVM_DEBUG_FS is not set
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
# CONFIG_PROCESSOR_SELECT is not set
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
# CONFIG_CALGARY_IOMMU is not set
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
# CONFIG_X86_MCE_AMD is not set
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m
CONFIG_X86_THERMAL_VECTOR=y
CONFIG_I8K=m
CONFIG_MICROCODE=m
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_MICROCODE_INTEL_LIB=y
# CONFIG_MICROCODE_INTEL_EARLY is not set
# CONFIG_MICROCODE_AMD_EARLY is not set
CONFIG_X86_MSR=m
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_ISOLATION=y
# CONFIG_MOVABLE_NODE is not set
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_NEED_BOUNCE_POOL=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_TRANSPARENT_HUGEPAGE=y
# CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS is not set
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_CLEANCACHE is not set
# CONFIG_FRONTSWAP is not set
# CONFIG_CMA is not set
# CONFIG_ZBUD is not set
# CONFIG_MEM_SOFT_DIRTY is not set
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_SECCOMP=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
CONFIG_COMPAT_VDSO=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_AUTOSLEEP=y
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
# CONFIG_PM_TRACE_RTC is not set
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
# CONFIG_ACPI_PROCFS is not set
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=m
# CONFIG_ACPI_IPMI is not set
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=m
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
# CONFIG_ACPI_HOTPLUG_MEMORY is not set
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_SFI is not set
#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
#
# x86 CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
CONFIG_X86_PCC_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
CONFIG_X86_SPEEDSTEP_CENTRINO=m
# CONFIG_X86_P4_CLOCKMOD is not set
#
# shared options
#
# CONFIG_X86_SPEEDSTEP_LIB is not set
#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y
#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set
#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_DOMAINS=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
# CONFIG_PCIE_ECRC is not set
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
CONFIG_PCI_REALLOC_ENABLE_AUTO=y
CONFIG_PCI_STUB=m
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_PCI_IOAPIC=y
CONFIG_PCI_LABEL=y
#
# PCI host controller drivers
#
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y
#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set
# CONFIG_X86_SYSFB is not set
#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
CONFIG_IA32_AOUT=y
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=y
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
CONFIG_INET_XFRM_MODE_BEET=y
# CONFIG_INET_LRO is not set
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
# CONFIG_TCP_CONG_CUBIC is not set
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_BIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="bic"
# CONFIG_TCP_MD5SIG is not set
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
# CONFIG_INET6_AH is not set
# CONFIG_INET6_ESP is not set
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
# CONFIG_IPV6_VTI is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
# CONFIG_NETFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=y
# CONFIG_NET_SCTPPROBE is not set
# CONFIG_SCTP_DBG_OBJCNT is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
# CONFIG_BRIDGE is not set
CONFIG_HAVE_NET_DSA=y
CONFIG_VLAN_8021Q=y
# CONFIG_VLAN_8021Q_GVRP is not set
# CONFIG_VLAN_8021Q_MVRP is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
# CONFIG_NET_SCHED is not set
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
# CONFIG_VSOCKETS is not set
# CONFIG_NETLINK_MMAP is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_NET_MPLS_GSO is not set
# CONFIG_HSR is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_NETPRIO_CGROUP is not set
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set
CONFIG_NET_FLOW_LIMIT=y
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
# CONFIG_CFG80211 is not set
# CONFIG_LIB80211 is not set
#
# CFG80211 needs to be enabled for MAC80211
#
# CONFIG_WIMAX is not set
# CONFIG_RFKILL is not set
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y
#
# Device Drivers
#
#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
# CONFIG_DEVTMPFS_MOUNT is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set
# CONFIG_GENERIC_CPU_DEVICES is not set
# CONFIG_DMA_SHARED_BUFFER is not set
#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y
#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
# CONFIG_BLK_DEV_XIP is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_VIRTIO_BLK=y
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_RSXX is not set
#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set
#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set
#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set
#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
# CONFIG_INTEL_MEI is not set
# CONFIG_INTEL_MEI_ME is not set
# CONFIG_VMWARE_VMCI is not set
#
# Intel MIC Host Driver
#
# CONFIG_INTEL_MIC_HOST is not set
#
# Intel MIC Card Driver
#
# CONFIG_INTEL_MIC_CARD is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set
#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_TGT is not set
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
# CONFIG_CHR_DEV_OSST is not set
# CONFIG_BLK_DEV_SR is not set
CONFIG_CHR_DEV_SG=y
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y
#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=y
CONFIG_SCSI_SAS_ATTRS=y
CONFIG_SCSI_SAS_LIBSAS=y
# CONFIG_SCSI_SAS_ATA is not set
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=y
CONFIG_SCSI_AACRAID=y
CONFIG_SCSI_AIC7XXX=y
CONFIG_AIC7XXX_CMDS_PER_DEVICE=4
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
# CONFIG_AIC7XXX_REG_PRETTY_PRINT is not set
CONFIG_SCSI_AIC7XXX_OLD=y
CONFIG_SCSI_AIC79XX=y
CONFIG_AIC79XX_CMDS_PER_DEVICE=4
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
# CONFIG_AIC79XX_REG_PRETTY_PRINT is not set
CONFIG_SCSI_AIC94XX=y
# CONFIG_AIC94XX_DEBUG is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=y
CONFIG_MEGARAID_MAILBOX=y
CONFIG_MEGARAID_LEGACY=y
CONFIG_MEGARAID_SAS=y
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
# CONFIG_SCSI_MPT2SAS_LOGGING is not set
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
# CONFIG_SCSI_MPT3SAS_LOGGING is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=y
CONFIG_SCSI_BUSLOGIC=y
# CONFIG_SCSI_FLASHPOINT is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_LIBFC is not set
# CONFIG_LIBFCOE is not set
# CONFIG_FCOE is not set
# CONFIG_FCOE_FNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
CONFIG_SCSI_GDTH=y
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_SCSI_QLA_FC=y
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_SRP is not set
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=y
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=y
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y
#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=y
# CONFIG_SATA_AHCI_PLATFORM is not set
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y
#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y
#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=y
# CONFIG_SATA_HIGHBANK is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_RCAR is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set
#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set
#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PCMCIA is not set
# CONFIG_PATA_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set
#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
# CONFIG_ATA_GENERIC is not set
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
# CONFIG_MD_AUTODETECT is not set
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y
CONFIG_MD_MULTIPATH=y
CONFIG_MD_FAULTY=y
CONFIG_BCACHE=y
# CONFIG_BCACHE_DEBUG is not set
# CONFIG_BCACHE_CLOSURES_DEBUG is not set
CONFIG_BLK_DEV_DM=y
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=y
CONFIG_DM_BIO_PRISON=y
CONFIG_DM_PERSISTENT_DATA=y
CONFIG_DM_CRYPT=y
CONFIG_DM_SNAPSHOT=y
# CONFIG_DM_THIN_PROVISIONING is not set
CONFIG_DM_CACHE=y
CONFIG_DM_CACHE_MQ=y
CONFIG_DM_CACHE_CLEANER=y
CONFIG_DM_MIRROR=y
# CONFIG_DM_LOG_USERSPACE is not set
# CONFIG_DM_RAID is not set
CONFIG_DM_ZERO=y
CONFIG_DM_MULTIPATH=y
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=y
# CONFIG_DM_UEVENT is not set
CONFIG_DM_FLAKEY=y
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
# CONFIG_TARGET_CORE is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_FC=y
CONFIG_FUSION_SAS=y
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=y
# CONFIG_FUSION_LOGGING is not set
#
# IEEE 1394 (FireWire) support
#
# CONFIG_FIREWIRE is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
# CONFIG_MACINTOSH_DRIVERS is not set
CONFIG_NETDEVICES=y
CONFIG_MII=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=y
# CONFIG_NLMON is not set
# CONFIG_ARCNET is not set
#
# CAIF transport drivers
#
CONFIG_VHOST_NET=y
CONFIG_VHOST_RING=y
CONFIG_VHOST=y
#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
CONFIG_ACENIC=y
# CONFIG_ACENIC_OMIT_TIGON_I is not set
CONFIG_NET_VENDOR_AMD=y
CONFIG_AMD8111_ETH=y
CONFIG_PCNET32=y
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
CONFIG_ATL2=y
CONFIG_ATL1=y
CONFIG_ATL1E=y
CONFIG_ATL1C=y
# CONFIG_ALX is not set
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=y
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
# CONFIG_BNX2X is not set
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
CONFIG_TULIP=y
# CONFIG_TULIP_MWI is not set
# CONFIG_TULIP_MMIO is not set
# CONFIG_TULIP_NAPI is not set
CONFIG_DE4X5=y
CONFIG_WINBOND_840=y
CONFIG_DM9102=y
CONFIG_ULI526X=y
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
CONFIG_DL2K=y
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
# CONFIG_IGBVF is not set
CONFIG_IXGB=y
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
# CONFIG_IXGBEVF is not set
# CONFIG_I40E is not set
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=y
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=y
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
# CONFIG_MLX5_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=y
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
# CONFIG_NET_PACKET_ENGINE is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
CONFIG_8139CP=y
CONFIG_8139TOO=y
CONFIG_8139TOO_PIO=y
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=y
# CONFIG_SH_ETH is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
# CONFIG_SC92031 is not set
CONFIG_NET_VENDOR_SIS=y
CONFIG_SIS900=y
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TLAN is not set
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=y
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=y
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y
#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_QSEMI_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_SMSC_PHY is not set
CONFIG_BROADCOM_PHY=y
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_FIXED_PHY is not set
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
#
# USB Network Adapters
#
CONFIG_USB_CATC=y
CONFIG_USB_KAWETH=y
CONFIG_USB_PEGASUS=y
CONFIG_USB_RTL8150=y
# CONFIG_USB_RTL8152 is not set
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_AX88179_178A=y
CONFIG_USB_NET_CDCETHER=y
CONFIG_USB_NET_CDC_EEM=y
CONFIG_USB_NET_CDC_NCM=y
# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set
# CONFIG_USB_NET_CDC_MBIM is not set
CONFIG_USB_NET_DM9601=y
# CONFIG_USB_NET_SR9700 is not set
CONFIG_USB_NET_SMSC75XX=y
CONFIG_USB_NET_SMSC95XX=y
CONFIG_USB_NET_GL620A=y
CONFIG_USB_NET_NET1080=y
CONFIG_USB_NET_PLUSB=y
CONFIG_USB_NET_MCS7830=y
CONFIG_USB_NET_RNDIS_HOST=y
CONFIG_USB_NET_CDC_SUBSET=y
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_BELKIN=y
CONFIG_USB_ARMLINUX=y
CONFIG_USB_EPSON2888=y
CONFIG_USB_KC2190=y
CONFIG_USB_NET_ZAURUS=y
# CONFIG_USB_NET_CX82310_ETH is not set
# CONFIG_USB_NET_KALMIA is not set
# CONFIG_USB_NET_QMI_WWAN is not set
CONFIG_USB_NET_INT51X1=y
CONFIG_USB_IPHETH=y
CONFIG_USB_SIERRA_NET=y
# CONFIG_USB_VL600 is not set
CONFIG_WLAN=y
# CONFIG_PCMCIA_RAYCS is not set
# CONFIG_AIRO is not set
# CONFIG_ATMEL is not set
# CONFIG_AIRO_CS is not set
# CONFIG_PCMCIA_WL3501 is not set
# CONFIG_PRISM54 is not set
# CONFIG_USB_ZD1201 is not set
# CONFIG_HOSTAP is not set
# CONFIG_WL_TI is not set
#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set
#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=y
# CONFIG_INPUT_POLLDEV is not set
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_INPUT_MATRIXKMAP is not set
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_SERIAL=y
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_CMA3000 is not set
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set
#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set
#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
# CONFIG_CYCLADES is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
# CONFIG_SYNCLINK is not set
# CONFIG_SYNCLINKMP is not set
# CONFIG_SYNCLINK_GT is not set
# CONFIG_NOZOMI is not set
# CONFIG_ISI is not set
# CONFIG_N_HDLC is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y
#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=y
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
CONFIG_SERIAL_8250_DETECT_IRQ=y
CONFIG_SERIAL_8250_RSA=y
# CONFIG_SERIAL_8250_DW is not set
#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_HVC_DRIVER=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_IPMI_HANDLER=m
# CONFIG_IPMI_PANIC_EVENT is not set
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
CONFIG_HW_RANDOM_INTEL=y
CONFIG_HW_RANDOM_AMD=y
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y
#
# I2C Hardware Bus support
#
#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set
#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set
#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set
#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set
#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set
#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set
#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set
#
# PPS generators support
#
#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set
#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HTU21 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IBMAEM is not set
# CONFIG_SENSORS_IBMPEX is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_SCH5627 is not set
# CONFIG_SENSORS_SCH5636 is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set
#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_CPU_THERMAL is not set
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_INTEL_POWERCLAMP is not set
CONFIG_X86_PKG_TEMP_THERMAL=m
#
# Texas Instruments thermal drivers
#
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=y
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_ALIM1535_WDT is not set
# CONFIG_ALIM7101_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SP5100_TCO is not set
# CONFIG_SC520_WDT is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
# CONFIG_IE6XX_WDT is not set
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
# CONFIG_HP_WATCHDOG is not set
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_NV_TCO is not set
# CONFIG_60XX_WDT is not set
# CONFIG_SBC8360_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_VIA_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
# CONFIG_W83697UG_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
#
# PCI-based Watchdog Cards
#
# CONFIG_PCIPCWATCHDOG is not set
# CONFIG_WDTPCI is not set
#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y
#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set
#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_CS5535 is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_HTC_PASIC3 is not set
CONFIG_LPC_ICH=y
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set
#
# Graphics support
#
# CONFIG_AGP is not set
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
# CONFIG_VGA_SWITCHEROO is not set
# CONFIG_DRM is not set
# CONFIG_VGASTATE is not set
# CONFIG_VIDEO_OUTPUT_CONTROL is not set
# CONFIG_FB is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
CONFIG_DUMMY_CONSOLE=y
# CONFIG_SOUND is not set
#
# HID support
#
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
# CONFIG_HIDRAW is not set
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=y
#
# Special HID drivers
#
CONFIG_HID_A4TECH=y
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=y
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=y
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
CONFIG_HID_CYPRESS=y
CONFIG_HID_DRAGONRISE=y
# CONFIG_DRAGONRISE_FF is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=y
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_HUION is not set
# CONFIG_HID_KEYTOUCH is not set
CONFIG_HID_KYE=y
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=y
# CONFIG_HID_ICADE is not set
CONFIG_HID_TWINHAN=y
CONFIG_HID_KENSINGTON=y
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=y
CONFIG_HID_LOGITECH_DJ=m
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
# CONFIG_HID_MAGICMOUSE is not set
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=y
CONFIG_HID_ORTEK=y
CONFIG_HID_PANTHERLORD=y
# CONFIG_PANTHERLORD_FF is not set
CONFIG_HID_PETALYNX=y
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=y
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEELSERIES is not set
CONFIG_HID_SUNPLUS=y
CONFIG_HID_GREENASIA=y
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_SMARTJOYPLUS=y
# CONFIG_SMARTJOYPLUS_FF is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=y
CONFIG_HID_THRUSTMASTER=y
CONFIG_THRUSTMASTER_FF=y
# CONFIG_HID_XINMO is not set
CONFIG_HID_ZEROPLUS=y
# CONFIG_ZEROPLUS_FF is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set
#
# USB HID support
#
CONFIG_USB_HID=y
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
# CONFIG_USB_DEBUG is not set
# CONFIG_USB_ANNOUNCE_NEW_DEVICES is not set
#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set
#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
# CONFIG_USB_FUSBH200_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_TEST_MODE is not set
#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set
#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=y
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
CONFIG_USB_STORAGE_DATAFAB=y
CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_ISD200=y
CONFIG_USB_STORAGE_USBAT=y
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
CONFIG_USB_STORAGE_ALAUDA=y
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_CHIPIDEA is not set
#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set
#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
CONFIG_USB_TEST=y
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HSIC_USB3503 is not set
#
# USB Physical Layer drivers
#
# CONFIG_USB_PHY is not set
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_SAMSUNG_USB2PHY is not set
# CONFIG_SAMSUNG_USB3PHY is not set
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_MM_EDAC=y
CONFIG_EDAC_GHES=y
CONFIG_EDAC_E752X=y
# CONFIG_EDAC_I82975X is not set
# CONFIG_EDAC_I3000 is not set
# CONFIG_EDAC_I3200 is not set
# CONFIG_EDAC_X38 is not set
# CONFIG_EDAC_I5400 is not set
# CONFIG_EDAC_I7CORE is not set
# CONFIG_EDAC_I5000 is not set
# CONFIG_EDAC_I5100 is not set
# CONFIG_EDAC_I7300 is not set
# CONFIG_EDAC_SBRIDGE is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set
#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set
#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set
#
# SPI RTC drivers
#
#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set
#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_MOXART is not set
#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set
#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set
# CONFIG_STAGING is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
# CONFIG_EEEPC_LAPTOP is not set
# CONFIG_ACPI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set
# CONFIG_PVPANIC is not set
# CONFIG_CHROME_PLATFORMS is not set
#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_SUPPORT=y
# CONFIG_AMD_IOMMU is not set
# CONFIG_INTEL_IOMMU is not set
# CONFIG_IRQ_REMAP is not set
#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set
#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
# CONFIG_FMC is not set
#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_PHY_EXYNOS_MIPI_VIDEO is not set
# CONFIG_POWERCAP is not set
#
# Firmware Drivers
#
CONFIG_EDD=y
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DELL_RBU=y
CONFIG_DCDBAS=y
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
# CONFIG_ISCSI_IBFT_FIND is not set
# CONFIG_GOOGLE_FIRMWARE is not set
#
# EFI (Extensible Firmware Interface) Support
#
# CONFIG_EFI_VARS is not set
CONFIG_UEFI_CPER=y
#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
CONFIG_REISERFS_FS=y
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
CONFIG_BTRFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
# CONFIG_QUOTA_NETLINK_INTERFACE is not set
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
CONFIG_FUSE_FS=y
# CONFIG_CUSE is not set
CONFIG_GENERIC_ACL=y
#
# Caches
#
# CONFIG_FSCACHE is not set
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=y
CONFIG_UDF_NLS=y
#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
CONFIG_ROMFS_FS=y
CONFIG_ROMFS_BACKED_BY_BLOCK=y
CONFIG_ROMFS_ON_BLOCK=y
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_FTRACE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_F2FS_FS is not set
# CONFIG_EFIVAR_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
# CONFIG_NFS_V4_2 is not set
CONFIG_PNFS_FILE_LAYOUT=y
CONFIG_PNFS_BLOCK=y
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFSD=y
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
# CONFIG_NFSD_FAULT_INJECTION is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
CONFIG_CIFS=y
# CONFIG_CIFS_STATS is not set
CONFIG_CIFS_WEAK_PW_HASH=y
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
# CONFIG_CIFS_ACL is not set
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DFS_UPCALL is not set
# CONFIG_CIFS_SMB2 is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set
#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_BOOT_PRINTK_DELAY is not set
CONFIG_DYNAMIC_DEBUG=y
#
# Compile-time checks and compiler options
#
# CONFIG_DEBUG_INFO is not set
CONFIG_ENABLE_WARN_DEPRECATED=y
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_DEBUG_KERNEL=y
#
# Memory Debugging
#
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
# CONFIG_DEBUG_STACKOVERFLOW is not set
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_DEBUG_SHIRQ is not set
#
# Debug Lockups and Hangs
#
# CONFIG_LOCKUP_DETECTOR is not set
# CONFIG_DETECT_HUNG_TASK is not set
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
CONFIG_SCHED_DEBUG=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_TIMER_STATS is not set
#
# Lock Debugging (spinlocks, mutexes, etc...)
#
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_TRACE_IRQFLAGS=y
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set
#
# RCU Debugging
#
CONFIG_SPARSE_RCU_POINTER=y
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
# CONFIG_FAULT_INJECTION is not set
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_IRQSOFF_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
CONFIG_UPROBE_EVENT=y
CONFIG_PROBE_EVENTS=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_FTRACE_MCOUNT_RECORD=y
# CONFIG_FTRACE_STARTUP_TEST is not set
CONFIG_MMIOTRACE=y
# CONFIG_MMIOTRACE_TEST is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
#
# Runtime Testing
#
CONFIG_LKDTM=y
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
# CONFIG_EARLY_PRINTK_DBGP is not set
# CONFIG_EARLY_PRINTK_EFI is not set
# CONFIG_X86_PTDUMP is not set
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
CONFIG_DOUBLEFAULT=y
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_DEBUG is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
# CONFIG_OPTIMIZE_INLINING is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_STATIC_CPU_HAS is not set
#
# Security options
#
CONFIG_KEYS=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_BIG_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
# CONFIG_SECURITY is not set
# CONFIG_SECURITYFS is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_DEFAULT_SECURITY=""
CONFIG_XOR_BLOCKS=y
CONFIG_ASYNC_CORE=y
CONFIG_ASYNC_MEMCPY=y
CONFIG_ASYNC_XOR=y
CONFIG_ASYNC_PQ=y
CONFIG_ASYNC_RAID6_RECOV=y
CONFIG_CRYPTO=y
#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_PCRYPT=y
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=y
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_ABLK_HELPER=y
CONFIG_CRYPTO_GLUE_HELPER_X86=y
#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=y
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_SEQIV=y
#
# Block modes
#
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_PCBC=y
CONFIG_CRYPTO_XTS=y
#
# Hash modes
#
CONFIG_CRYPTO_CMAC=y
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=y
CONFIG_CRYPTO_VMAC=y
#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=y
CONFIG_CRYPTO_CRC32=y
CONFIG_CRYPTO_CRC32_PCLMUL=y
CONFIG_CRYPTO_CRCT10DIF=y
# CONFIG_CRYPTO_CRCT10DIF_PCLMUL is not set
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_MD4=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=y
CONFIG_CRYPTO_RMD128=y
CONFIG_CRYPTO_RMD160=y
CONFIG_CRYPTO_RMD256=y
CONFIG_CRYPTO_RMD320=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=y
# CONFIG_CRYPTO_SHA256_SSSE3 is not set
# CONFIG_CRYPTO_SHA512_SSSE3 is not set
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_TGR192=y
CONFIG_CRYPTO_WP512=y
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=y
#
# Ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_X86_64=y
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_ANUBIS=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_BLOWFISH=y
CONFIG_CRYPTO_BLOWFISH_COMMON=y
CONFIG_CRYPTO_BLOWFISH_X86_64=y
CONFIG_CRYPTO_CAMELLIA=y
CONFIG_CRYPTO_CAMELLIA_X86_64=y
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=y
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set
CONFIG_CRYPTO_CAST_COMMON=y
CONFIG_CRYPTO_CAST5=y
CONFIG_CRYPTO_CAST5_AVX_X86_64=y
CONFIG_CRYPTO_CAST6=y
CONFIG_CRYPTO_CAST6_AVX_X86_64=y
CONFIG_CRYPTO_DES=y
CONFIG_CRYPTO_FCRYPT=y
CONFIG_CRYPTO_KHAZAD=y
CONFIG_CRYPTO_SALSA20=y
CONFIG_CRYPTO_SALSA20_X86_64=y
CONFIG_CRYPTO_SEED=y
CONFIG_CRYPTO_SERPENT=y
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=y
CONFIG_CRYPTO_SERPENT_AVX_X86_64=y
# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set
CONFIG_CRYPTO_TEA=y
CONFIG_CRYPTO_TWOFISH=y
CONFIG_CRYPTO_TWOFISH_COMMON=y
CONFIG_CRYPTO_TWOFISH_X86_64=y
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=y
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=y
#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_ZLIB=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_HASH_INFO=y
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_PUBLIC_KEY_ALGO_RSA=y
# CONFIG_X509_CERTIFICATE_PARSER is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
# CONFIG_KVM_AMD is not set
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_BINARY_PRINTF=y
#
# Library routines
#
CONFIG_RAID6_PQ=y
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_PERCPU_RWSEM=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=y
# CONFIG_CRC8 is not set
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
# CONFIG_AVERAGE is not set
CONFIG_CLZ_TAB=y
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set
CONFIG_MPILIB=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
[-- Attachment #4: eabb1f89905a0c809d13ec27795ced089c107eb8 --]
[-- Type: text/plain, Size: 35732 bytes --]
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
231353 ~ 2% -7.8% 213339 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
153062 ~ 0% +3.2% 157909 ~ 0% lkp-snb01/micro/hackbench/1600%-process-socket
155354 ~ 0% +3.2% 160342 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-socket
82183 ~ 0% -2.9% 79806 ~ 0% xps2/micro/hackbench/1600%-process-pipe
621954 -1.7% 611398 TOTAL hackbench.throughput
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
4916 ~ 0% +1.1% 4971 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
4916 +1.1% 4971 TOTAL netperf.Throughput_tps
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
409 ~10% -91.4% 35 ~ 3% avoton1/crypto/tcrypt/2s-505-509
268 ~ 4% -100.0% 0 lkp-a04/micro/netperf/120s-200%-TCP_RR
276 ~ 3% +6.5e+18% 1.792e+19 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
273 ~ 5% -100.0% 0 lkp-a04/micro/netperf/120s-200%-UDP_RR
1691 ~56% +1.5e+16% 2.545e+17 ~126% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
1983 ~63% +1.7e+16% 3.3e+17 ~75% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
2202 ~12% +5e+15% 1.108e+17 ~19% lkp-snb01/micro/hackbench/1600%-process-pipe
2365 ~ 9% +6.7e+15% 1.596e+17 ~25% lkp-snb01/micro/hackbench/1600%-process-socket
261751 ~ 8% +9.8e+13% 2.564e+17 ~13% lkp-snb01/micro/hackbench/1600%-threads-pipe
289394 ~31% +6.3e+13% 1.827e+17 ~11% lkp-snb01/micro/hackbench/1600%-threads-socket
189 ~ 9% +2.9e+17% 5.462e+17 ~60% xps2/micro/hackbench/1600%-process-pipe
560803 +3.5e+15% 1.976e+19 TOTAL proc-vmstat.nr_tlb_remote_flush_received
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
132 ~ 6% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
200 ~ 6% -100.0% 0 lkp-a04/micro/netperf/120s-200%-TCP_RR
208 ~ 3% +6.1e+14% 1.27e+15 ~170% lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
203 ~ 8% -100.0% 0 lkp-a04/micro/netperf/120s-200%-UDP_RR
191 ~18% +1.9e+17% 3.542e+17 ~116% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
221 ~23% +9.3e+16% 2.072e+17 ~92% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
512 ~ 7% +2.9e+16% 1.468e+17 ~25% lkp-snb01/micro/hackbench/1600%-process-pipe
751 ~ 6% +1.9e+16% 1.424e+17 ~56% lkp-snb01/micro/hackbench/1600%-process-socket
21802 ~11% +1.4e+15% 2.983e+17 ~35% lkp-snb01/micro/hackbench/1600%-threads-pipe
20953 ~28% +7.1e+14% 1.478e+17 ~20% lkp-snb01/micro/hackbench/1600%-threads-socket
77 ~12% +4.1e+17% 3.185e+17 ~63% xps2/micro/hackbench/1600%-process-pipe
45256 +3.6e+15% 1.616e+18 TOTAL proc-vmstat.nr_tlb_remote_flush
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
268484 ~ 0% -100.0% 0 ~ 0% avoton1/crypto/tcrypt/2s-505-509
106095 ~ 0% -100.0% 0 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
105985 ~ 0% +1.7e+16% 1.792e+19 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
106475 ~ 1% -100.0% 0 ~ 0% lkp-a04/micro/netperf/120s-200%-UDP_RR
29191378 ~ 0% +1.1e+12% 3.254e+17 ~118% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
251512 ~ 1% +1.2e+14% 2.903e+17 ~105% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
6265648 ~ 5% +2.6e+12% 1.607e+17 ~32% lkp-snb01/micro/hackbench/1600%-process-pipe
4212742 ~ 1% +8.5e+12% 3.583e+17 ~14% lkp-snb01/micro/hackbench/1600%-process-socket
1366808 ~ 1% +2e+13% 2.71e+17 ~37% lkp-snb01/micro/hackbench/1600%-threads-pipe
1089219 ~ 1% +4.4e+13% 4.775e+17 ~42% lkp-snb01/micro/hackbench/1600%-threads-socket
2313982 ~ 1% +2.4e+13% 5.462e+17 ~60% xps2/micro/hackbench/1600%-process-pipe
45278332 +4.5e+13% 2.035e+19 TOTAL proc-vmstat.nr_tlb_local_flush_one
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
30228 ~ 0% -87.5% 3785 ~ 4% avoton1/crypto/tcrypt/2s-505-509
10864 ~ 0% -46.5% 5810 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
10846 ~ 0% +9.9e+13% 1.075e+16 ~172% lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
10861 ~ 0% -48.0% 5647 ~ 0% lkp-a04/micro/netperf/120s-200%-UDP_RR
9209 ~ 0% +3.4e+15% 3.086e+17 ~96% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
9049 ~ 0% +4e+15% 3.578e+17 ~71% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
14750 ~ 2% +8.5e+14% 1.26e+17 ~20% lkp-snb01/micro/hackbench/1600%-process-pipe
10943 ~ 3% +1.1e+15% 1.239e+17 ~77% lkp-snb01/micro/hackbench/1600%-process-socket
17832 ~ 1% +1.2e+15% 2.167e+17 ~ 6% lkp-snb01/micro/hackbench/1600%-threads-pipe
8973 ~ 1% +2.6e+15% 2.326e+17 ~51% lkp-snb01/micro/hackbench/1600%-threads-socket
4340 ~ 0% +7.3e+15% 3.185e+17 ~63% xps2/micro/hackbench/1600%-process-pipe
137898 +1.2e+15% 1.695e+18 TOTAL proc-vmstat.nr_tlb_local_flush_all
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
130 ~ 3% +35.7% 176 ~18% lkp-a04/micro/netperf/120s-200%-TCP_CRR
113 ~ 4% +43.2% 162 ~19% lkp-a04/micro/netperf/120s-200%-TCP_RR
243 +39.2% 339 TOTAL uptime.idle
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
685 ~11% -30.2% 478 ~ 4% xps2/micro/hackbench/1600%-process-pipe
685 -30.2% 478 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.4
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
106 ~14% +24.2% 132 ~15% lkp-snb01/micro/hackbench/1600%-threads-pipe
106 +24.2% 132 TOTAL numa-vmstat.node1.nr_written
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
107323 ~20% -38.1% 66462 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-pipe
10582 ~ 8% -14.8% 9020 ~ 5% xps2/micro/hackbench/1600%-process-pipe
117905 -36.0% 75483 TOTAL interrupts.IWI
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
112 ~13% +24.4% 140 ~15% lkp-snb01/micro/hackbench/1600%-threads-pipe
112 +24.4% 140 TOTAL numa-vmstat.node1.nr_dirtied
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
921 ~10% -27.3% 670 ~ 5% xps2/micro/hackbench/1600%-process-pipe
921 -27.3% 670 TOTAL buddyinfo.Node.0.zone.DMA32.4
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2503 ~ 3% +22.4% 3063 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
2503 +22.4% 3063 TOTAL pagetypeinfo.Node0.Normal.Unmovable.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
52716856 ~ 6% -25.5% 39279457 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
52716856 -25.5% 39279457 TOTAL numa-numastat.node1.other_node
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
52716864 ~ 6% -25.5% 39279464 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
52716864 -25.5% 39279464 TOTAL numa-numastat.node1.numa_miss
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
52716879 ~ 6% -25.5% 39279438 ~13% lkp-snb01/micro/hackbench/1600%-process-pipe
52716879 -25.5% 39279438 TOTAL numa-numastat.node0.numa_foreign
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
777 ~16% -23.9% 591 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
725 ~17% +19.0% 862 ~13% lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
1502 -3.2% 1454 TOTAL slabinfo.proc_inode_cache.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
767 ~17% -22.9% 591 ~ 1% lkp-a04/micro/netperf/120s-200%-TCP_RR
712 ~17% +21.1% 862 ~13% lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
1479 -1.7% 1454 TOTAL slabinfo.proc_inode_cache.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1684 ~ 6% -21.1% 1329 ~ 4% xps2/micro/hackbench/1600%-process-pipe
1684 -21.1% 1329 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.3
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
26518589 ~ 6% -26.1% 19610310 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
26518589 -26.1% 19610310 TOTAL numa-vmstat.node1.numa_other
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
9839 ~20% +33.7% 13155 ~11% lkp-a04/micro/netperf/120s-200%-UDP_RR
12772 ~16% +19.7% 15284 ~ 7% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
22161698 ~ 9% +19.4% 26471094 ~ 6% lkp-snb01/micro/hackbench/1600%-process-socket
18580162 ~ 9% +43.8% 26722959 ~23% lkp-snb01/micro/hackbench/1600%-threads-socket
40764472 +30.6% 53222493 TOTAL interrupts.RES
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
26428479 ~ 6% -26.0% 19563337 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
26428479 -26.0% 19563337 TOTAL numa-vmstat.node0.numa_foreign
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
26436285 ~ 6% -26.0% 19569200 ~12% lkp-snb01/micro/hackbench/1600%-process-pipe
26436285 -26.0% 19569200 TOTAL numa-vmstat.node1.numa_miss
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
413 ~ 6% +21.3% 501 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
413 +21.3% 501 TOTAL pagetypeinfo.Node1.Normal.Unmovable.5
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3829 ~10% -14.7% 3265 ~12% avoton1/crypto/tcrypt/2s-200-204
3829 -14.7% 3265 TOTAL slabinfo.kmalloc-128.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3856 ~10% -14.0% 3316 ~12% avoton1/crypto/tcrypt/2s-200-204
3856 -14.0% 3316 TOTAL slabinfo.kmalloc-128.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
609 ~ 4% -20.7% 483 ~ 3% xps2/micro/hackbench/1600%-process-pipe
609 -20.7% 483 TOTAL buddyinfo.Node.0.zone.Normal.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
545 ~ 7% +15.0% 627 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
545 +15.0% 627 TOTAL buddyinfo.Node.1.zone.Normal.5
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
53127 ~ 6% -10.7% 47442 ~ 6% avoton1/crypto/tcrypt/2s-500-504
36171 ~ 4% +19.2% 43113 ~11% lkp-a04/micro/netperf/120s-200%-TCP_RR
89298 +1.4% 90555 TOTAL softirqs.RCU
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
856 ~ 3% +17.4% 1005 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
856 +17.4% 1005 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3209 ~ 3% +19.7% 3840 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
3209 +19.7% 3840 TOTAL buddyinfo.Node.1.zone.Normal.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
627 ~ 3% +13.1% 710 ~ 8% lkp-snb01/micro/hackbench/1600%-process-pipe
627 +13.1% 710 TOTAL pagetypeinfo.Node0.Normal.Movable.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
585 ~ 6% +13.8% 666 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
2180 ~ 7% -18.7% 1773 ~ 2% xps2/micro/hackbench/1600%-process-pipe
2765 -11.8% 2439 TOTAL buddyinfo.Node.0.zone.DMA32.3
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
616 ~ 4% +16.1% 716 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
616 +16.1% 716 TOTAL pagetypeinfo.Node1.Normal.Movable.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1574 ~ 1% -11.4% 1395 ~ 8% lkp-a04/micro/netperf/120s-200%-TCP_CRR
1574 -11.4% 1395 TOTAL slabinfo.kmalloc-256.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
75095890 ~ 1% -14.1% 64497421 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
75095890 -14.1% 64497421 TOTAL proc-vmstat.numa_miss
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
75095891 ~ 1% -14.1% 64497437 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
75095891 -14.1% 64497437 TOTAL proc-vmstat.numa_other
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
75095926 ~ 1% -14.1% 64497614 ~ 9% lkp-snb01/micro/hackbench/1600%-process-pipe
75095926 -14.1% 64497614 TOTAL proc-vmstat.numa_foreign
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
738 ~ 5% +22.4% 904 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
738 +22.4% 904 TOTAL pagetypeinfo.Node1.Normal.Movable.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
73344 ~ 0% +12.7% 82638 ~ 7% lkp-a04/micro/netperf/120s-200%-TCP_CRR
73344 +12.7% 82638 TOTAL softirqs.TIMER
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1138 ~ 3% +13.9% 1296 ~ 6% lkp-snb01/micro/hackbench/1600%-process-pipe
1138 +13.9% 1296 TOTAL buddyinfo.Node.0.zone.DMA32.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
25786 ~ 6% +17.7% 30357 ~ 3% xps2/micro/hackbench/1600%-process-pipe
25786 +17.7% 30357 TOTAL proc-vmstat.nr_page_table_pages
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
726 ~ 4% +19.4% 867 ~ 3% lkp-snb01/micro/hackbench/1600%-process-pipe
726 +19.4% 867 TOTAL pagetypeinfo.Node1.Normal.Movable.1
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2520 ~ 4% +17.3% 2956 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
2520 +17.3% 2956 TOTAL pagetypeinfo.Node1.Normal.Unmovable.0
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3842 ~ 4% -17.0% 3189 ~ 7% lkp-a04/micro/netperf/120s-200%-TCP_CRR
4536 ~ 6% -11.3% 4024 ~ 1% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
8378 -13.9% 7213 TOTAL proc-vmstat.nr_alloc_batch
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
103132 ~ 2% +18.8% 122485 ~ 4% xps2/micro/hackbench/1600%-process-pipe
103132 +18.8% 122485 TOTAL meminfo.PageTables
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1484 ~ 1% -10.1% 1335 ~ 6% lkp-a04/micro/netperf/120s-200%-TCP_CRR
1484 -10.1% 1335 TOTAL slabinfo.kmalloc-256.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2985 ~ 9% -13.6% 2579 ~ 4% lkp-a04/micro/netperf/120s-200%-UDP_RR
47636 ~ 3% +12.7% 53700 ~ 1% xps2/micro/hackbench/1600%-process-pipe
50621 +11.2% 56279 TOTAL slabinfo.anon_vma.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1337 ~ 7% +9.9% 1469 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
1337 +9.9% 1469 TOTAL buddyinfo.Node.1.zone.Normal.4
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
159 ~ 0% +10.7% 177 ~ 6% lkp-a04/micro/netperf/120s-200%-TCP_CRR
155 ~ 0% +11.3% 173 ~ 6% lkp-a04/micro/netperf/120s-200%-TCP_RR
315 +11.0% 350 TOTAL uptime.boot
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2585 ~ 5% +9.7% 2837 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
2585 +9.7% 2837 TOTAL buddyinfo.Node.1.zone.Normal.3
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1075 ~ 5% +10.2% 1185 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
1075 +10.2% 1185 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.1
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
76291 ~ 3% +15.7% 88274 ~ 1% xps2/micro/hackbench/1600%-process-pipe
76291 +15.7% 88274 TOTAL slabinfo.vm_area_struct.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3467 ~ 3% +15.7% 4012 ~ 1% xps2/micro/hackbench/1600%-process-pipe
3467 +15.7% 4012 TOTAL slabinfo.vm_area_struct.active_slabs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3467 ~ 3% +15.7% 4012 ~ 1% xps2/micro/hackbench/1600%-process-pipe
3467 +15.7% 4012 TOTAL slabinfo.vm_area_struct.num_slabs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2985 ~ 9% -13.6% 2579 ~ 4% lkp-a04/micro/netperf/120s-200%-UDP_RR
61060 ~ 3% +11.0% 67777 ~ 0% xps2/micro/hackbench/1600%-process-pipe
64046 +9.9% 70356 TOTAL slabinfo.anon_vma.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1353 ~ 5% +9.8% 1486 ~ 6% lkp-snb01/micro/hackbench/1600%-process-pipe
1353 +9.8% 1486 TOTAL buddyinfo.Node.0.zone.DMA32.1
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1958 ~ 5% +10.7% 2168 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
1958 +10.7% 2168 TOTAL buddyinfo.Node.0.zone.Normal.3
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
68634 ~ 3% +15.7% 79397 ~ 2% xps2/micro/hackbench/1600%-process-pipe
68634 +15.7% 79397 TOTAL slabinfo.vm_area_struct.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3131 ~ 4% +9.6% 3431 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
3131 +9.6% 3431 TOTAL buddyinfo.Node.0.zone.Normal.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
35163 ~ 5% +15.4% 40566 ~ 2% xps2/micro/hackbench/1600%-process-pipe
35163 +15.4% 40566 TOTAL proc-vmstat.nr_anon_pages
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1782 ~ 3% +13.8% 2028 ~ 1% xps2/micro/hackbench/1600%-process-pipe
1782 +13.8% 2028 TOTAL slabinfo.kmalloc-64.active_slabs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1782 ~ 3% +13.8% 2028 ~ 1% xps2/micro/hackbench/1600%-process-pipe
1782 +13.8% 2028 TOTAL slabinfo.kmalloc-64.num_slabs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
114104 ~ 3% +13.8% 129818 ~ 1% xps2/micro/hackbench/1600%-process-pipe
114104 +13.8% 129818 TOTAL slabinfo.kmalloc-64.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
35463 ~ 5% +15.2% 40850 ~ 2% xps2/micro/hackbench/1600%-process-pipe
35463 +15.2% 40850 TOTAL proc-vmstat.nr_active_anon
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
140672 ~ 1% +16.4% 163747 ~ 2% xps2/micro/hackbench/1600%-process-pipe
140672 +16.4% 163747 TOTAL meminfo.AnonPages
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
94657 ~ 3% +14.4% 108326 ~ 1% xps2/micro/hackbench/1600%-process-pipe
94657 +14.4% 108326 TOTAL slabinfo.kmalloc-64.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
945 ~ 6% +11.2% 1051 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
3842 ~ 4% -9.3% 3487 ~ 0% xps2/micro/hackbench/1600%-process-pipe
4788 -5.2% 4538 TOTAL buddyinfo.Node.0.zone.DMA32.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
172 ~ 5% -6.4% 161 ~ 4% xps2/micro/hackbench/1600%-process-pipe
172 -6.4% 161 TOTAL proc-vmstat.nr_written
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
141872 ~ 1% +16.1% 164766 ~ 2% xps2/micro/hackbench/1600%-process-pipe
141872 +16.1% 164766 TOTAL meminfo.Active(anon)
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
160912 ~ 5% -9.1% 146302 ~ 5% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
1162879 ~ 2% +18.7% 1380217 ~ 3% xps2/micro/hackbench/1600%-process-pipe
1323791 +15.3% 1526519 TOTAL meminfo.Committed_AS
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
746 ~ 5% +11.9% 834 ~ 3% lkp-snb01/micro/hackbench/1600%-process-pipe
2955 ~ 4% -12.4% 2590 ~ 3% xps2/micro/hackbench/1600%-process-pipe
3701 -7.5% 3424 TOTAL pagetypeinfo.Node0.DMA32.Unmovable.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
145444 ~ 1% +15.7% 168230 ~ 2% xps2/micro/hackbench/1600%-process-pipe
145444 +15.7% 168230 TOTAL meminfo.Active
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
61345506 ~ 6% -14.1% 52691612 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
61345506 -14.1% 52691612 TOTAL numa-vmstat.node0.numa_local
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
37413 ~ 4% -13.1% 32495 ~ 5% avoton1/crypto/tcrypt/2s-200-204
37413 -13.1% 32495 TOTAL meminfo.DirectMap4k
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
61378936 ~ 6% -14.0% 52766301 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
61378936 -14.0% 52766301 TOTAL numa-vmstat.node0.numa_hit
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3756 ~ 5% +7.4% 4036 ~ 3% lkp-snb01/micro/hackbench/1600%-process-pipe
3756 +7.4% 4036 TOTAL buddyinfo.Node.1.zone.Normal.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3169 ~ 3% +12.3% 3560 ~ 5% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
3169 +12.3% 3560 TOTAL slabinfo.task_xstate.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3169 ~ 3% +12.3% 3560 ~ 5% lkp-ib03/micro/netperf/120s-200%-TCP_STREAM
3169 +12.3% 3560 TOTAL slabinfo.task_xstate.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2305 ~ 4% -6.8% 2148 ~ 5% lkp-a04/micro/netperf/120s-200%-TCP_CRR
23997 ~ 7% +10.2% 26445 ~ 5% lkp-snb01/micro/hackbench/1600%-threads-pipe
4002 ~ 2% -6.6% 3737 ~ 3% nhm8/micro/dbench/100%
30304 +6.7% 32330 TOTAL slabinfo.kmalloc-192.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1.2e+08 ~ 3% -12.6% 1.048e+08 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
5689883 ~ 3% +7.8% 6135367 ~ 1% lkp-snb01/micro/hackbench/1600%-process-socket
1.257e+08 -11.7% 1.11e+08 TOTAL numa-numastat.node0.local_node
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1.2e+08 ~ 3% -12.6% 1.048e+08 ~ 7% lkp-snb01/micro/hackbench/1600%-process-pipe
5689883 ~ 3% +7.8% 6135379 ~ 1% lkp-snb01/micro/hackbench/1600%-process-socket
1.257e+08 -11.7% 1.11e+08 TOTAL numa-numastat.node0.numa_hit
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
475180 ~ 2% +8.1% 513474 ~ 3% lkp-snb01/micro/hackbench/1600%-threads-socket
475180 +8.1% 513474 TOTAL numa-vmstat.node0.numa_miss
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
475243 ~ 2% +8.1% 513568 ~ 3% lkp-snb01/micro/hackbench/1600%-threads-socket
475243 +8.1% 513568 TOTAL numa-vmstat.node1.numa_foreign
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1895 ~ 3% -7.4% 1755 ~ 5% avoton1/crypto/tcrypt/2s-505-509
1895 -7.4% 1755 TOTAL slabinfo.kmalloc-512.num_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
3899 ~ 4% +9.0% 4251 ~ 4% lkp-snb01/micro/hackbench/1600%-process-pipe
3899 +9.0% 4251 TOTAL pagetypeinfo.Node0.Normal.Unmovable.1
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2281 ~ 5% -7.8% 2102 ~ 7% lkp-a04/micro/netperf/120s-200%-TCP_CRR
23836 ~ 7% +10.2% 26268 ~ 5% lkp-snb01/micro/hackbench/1600%-threads-pipe
4002 ~ 2% -6.6% 3737 ~ 3% nhm8/micro/dbench/100%
30119 +6.6% 32108 TOTAL slabinfo.kmalloc-192.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
953 ~ 3% +11.0% 1058 ~ 0% xps2/micro/hackbench/1600%-process-pipe
953 +11.0% 1058 TOTAL slabinfo.anon_vma.num_slabs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
953 ~ 3% +11.0% 1058 ~ 0% xps2/micro/hackbench/1600%-process-pipe
953 +11.0% 1058 TOTAL slabinfo.anon_vma.active_slabs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1822 ~ 3% -7.0% 1695 ~ 5% avoton1/crypto/tcrypt/2s-505-509
1822 -7.0% 1695 TOTAL slabinfo.kmalloc-512.active_objs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2507 ~ 4% +8.9% 2731 ~ 2% lkp-snb01/micro/hackbench/1600%-process-pipe
2507 +8.9% 2731 TOTAL pagetypeinfo.Node0.Normal.Unmovable.2
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
284.98 ~ 0% +1.3% 288.80 ~ 0% avoton1/crypto/tcrypt/2s-200-204
285.83 ~ 1% -1.4% 281.86 ~ 0% avoton1/crypto/tcrypt/2s-205-210
105.26 ~ 5% +44.4% 152.00 ~21% lkp-a04/micro/netperf/120s-200%-TCP_CRR
104.14 ~ 4% +46.6% 152.67 ~20% lkp-a04/micro/netperf/120s-200%-TCP_RR
719.99 ~ 1% -4.0% 691.07 ~ 0% lkp-snb01/micro/hackbench/1600%-process-pipe
704.05 ~ 2% +3.4% 728.26 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-pipe
2204.24 +4.1% 2294.67 TOTAL boottime.idle
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
39.75 ~ 0% +1.2% 40.24 ~ 0% avoton1/crypto/tcrypt/2s-200-204
39.85 ~ 1% -1.3% 39.33 ~ 0% avoton1/crypto/tcrypt/2s-205-210
32.93 ~ 4% +50.3% 49.48 ~23% lkp-a04/micro/netperf/120s-200%-TCP_CRR
32.63 ~ 3% +52.1% 49.61 ~22% lkp-a04/micro/netperf/120s-200%-TCP_RR
27.55 ~ 1% -4.2% 26.39 ~ 0% lkp-snb01/micro/hackbench/1600%-process-pipe
26.96 ~ 0% +3.4% 27.87 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
16.86 ~ 1% -1.9% 16.54 ~ 1% nhm8/micro/dbench/100%
216.53 +15.2% 249.46 TOTAL boottime.boot
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
6.555e+08 ~ 2% +8.0% 7.077e+08 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
45683455 ~ 6% +19.7% 54686717 ~ 3% lkp-snb01/micro/hackbench/1600%-process-socket
10457165 ~ 5% +44.2% 15074518 ~ 4% xps2/micro/hackbench/1600%-process-pipe
7.116e+08 +9.2% 7.775e+08 TOTAL time.involuntary_context_switches
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
4 ~14% +42.9% 6 ~ 0% avoton1/crypto/tcrypt/2s-205-210
2839 ~ 0% -0.8% 2817 ~ 0% lkp-snb01/micro/hackbench/1600%-process-socket
3120 ~ 0% -0.3% 3110 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-socket
5964 -0.5% 5934 TOTAL time.percent_of_cpu_this_job_got
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
28.51 ~ 1% +2.1% 29.11 ~ 1% avoton1/crypto/tcrypt/2s-200-204
28.62 ~ 1% -1.7% 28.14 ~ 0% avoton1/crypto/tcrypt/2s-205-210
24.40 ~ 2% +0.1% 24.42 ~ 0% grantley/micro/kbuild/200%
18.89 ~ 0% +65.5% 31.26 ~27% lkp-a04/micro/netperf/120s-200%-TCP_RR
18.69 ~ 2% -6.6% 17.45 ~ 0% lkp-snb01/micro/hackbench/1600%-process-pipe
18.05 ~ 1% +5.2% 18.99 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-pipe
7.51 ~11% -3.7% 7.23 ~ 0% xps2/micro/hackbench/1600%-process-pipe
144.66 +8.3% 156.61 TOTAL boottime.dhcp
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
2248 ~ 1% -3.9% 2161 ~ 1% lkp-snb01/micro/hackbench/1600%-process-pipe
1690 ~ 0% -18.1% 1384 ~ 0% lkp-snb01/micro/hackbench/1600%-process-socket
1584 ~ 0% -5.0% 1505 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
1642 ~ 0% -19.0% 1330 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-socket
730 ~ 0% +3.2% 753 ~ 0% xps2/micro/hackbench/1600%-process-pipe
7895 -9.6% 7136 TOTAL time.user_time
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
40.18 ~12% +34.2% 53.94 ~ 2% avoton1/crypto/tcrypt/2s-205-210
15681.97 ~ 0% +3.1% 16160.28 ~ 0% lkp-snb01/micro/hackbench/1600%-process-socket
17034.21 ~ 0% +1.6% 17314.78 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
3471.89 ~ 0% -1.5% 3418.90 ~ 0% xps2/micro/hackbench/1600%-process-pipe
36228.25 +2.0% 36947.89 TOTAL time.system_time
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1791178 ~ 0% +1.1% 1811656 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
597537 ~ 3% +9.2% 652375 ~ 2% lkp-snb01/micro/hackbench/1600%-process-socket
528539 ~ 2% +9.9% 581079 ~ 4% lkp-snb01/micro/hackbench/1600%-threads-socket
122230 ~ 2% +22.4% 149616 ~ 1% xps2/micro/hackbench/1600%-process-pipe
3039486 +5.1% 3194728 TOTAL vmstat.system.cs
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
1103 ~ 1% +3.3% 1140 ~ 1% lkp-a04/micro/netperf/120s-200%-UDP_RR
11480 ~ 0% +0.2% 11502 ~ 0% lkp-ib03/micro/netperf/120s-200%-TCP_CRR
43851 ~ 7% +14.2% 50073 ~ 5% lkp-snb01/micro/hackbench/1600%-process-socket
1370529 ~ 0% +2.6% 1406534 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
38956 ~ 7% +35.2% 52666 ~20% lkp-snb01/micro/hackbench/1600%-threads-socket
1465920 +3.8% 1521916 TOTAL vmstat.system.in
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
298932 ~ 1% -1.4% 294844 ~ 0% grantley/micro/kbuild/200%
3.017e+08 ~ 2% +9.6% 3.307e+08 ~ 1% lkp-snb01/micro/hackbench/1600%-process-socket
2.318e+09 ~ 0% +1.5% 2.354e+09 ~ 0% lkp-snb01/micro/hackbench/1600%-threads-pipe
2.768e+08 ~ 1% +8.0% 2.99e+08 ~ 2% lkp-snb01/micro/hackbench/1600%-threads-socket
63163410 ~ 2% +18.0% 74521593 ~ 0% xps2/micro/hackbench/1600%-process-pipe
2.96e+09 +3.3% 3.058e+09 TOTAL time.voluntary_context_switches
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
50750958 ~ 2% -7.1% 47164538 ~ 5% lkp-snb01/micro/hackbench/1600%-process-pipe
33899807 ~ 0% +4.4% 35395421 ~ 0% lkp-snb01/micro/hackbench/1600%-process-socket
1101815 ~ 0% +2.7% 1131176 ~ 1% lkp-snb01/micro/hackbench/1600%-threads-socket
19183273 ~ 0% -3.7% 18467806 ~ 0% xps2/micro/hackbench/1600%-process-pipe
104935855 -2.6% 102158942 TOTAL time.minor_page_faults
v3.13-rc4 eabb1f89905a0c809d13
--------------- -------------------------
125 ~ 0% +0.1% 125 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_CRR
121 ~ 0% +0.0% 121 ~ 0% lkp-a04/micro/netperf/120s-200%-TCP_RR
611 ~ 0% +1.8% 622 ~ 0% lkp-snb01/micro/hackbench/1600%-process-socket
607 ~ 0% -0.9% 602 ~ 0% xps2/micro/hackbench/1600%-process-pipe
1465 +0.4% 1471 TOTAL time.elapsed_time
^ permalink raw reply [flat|nested] 71+ messages in thread
end of thread, other threads:[~2013-12-21 15:50 UTC | newest]
Thread overview: 71+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-12-13 20:01 [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2 Mel Gorman
2013-12-13 20:01 ` Mel Gorman
2013-12-13 20:01 ` [PATCH 1/4] x86: mm: Clean up inconsistencies when flushing TLB ranges Mel Gorman
2013-12-13 20:01 ` Mel Gorman
2013-12-13 20:01 ` [PATCH 2/4] x86: mm: Account for TLB flushes only when debugging Mel Gorman
2013-12-13 20:01 ` Mel Gorman
2013-12-13 20:01 ` [PATCH 3/4] x86: mm: Change tlb_flushall_shift for IvyBridge Mel Gorman
2013-12-13 20:01 ` Mel Gorman
2013-12-13 20:01 ` [PATCH 4/4] x86: mm: Eliminate redundant page table walk during TLB range flushing Mel Gorman
2013-12-13 20:01 ` Mel Gorman
2013-12-13 21:16 ` [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2 Linus Torvalds
2013-12-13 21:16 ` Linus Torvalds
2013-12-13 22:38 ` H. Peter Anvin
2013-12-13 22:38 ` H. Peter Anvin
2013-12-16 10:39 ` Mel Gorman
2013-12-16 10:39 ` Mel Gorman
2013-12-16 17:17 ` Linus Torvalds
2013-12-16 17:17 ` Linus Torvalds
2013-12-17 9:55 ` Mel Gorman
2013-12-17 9:55 ` Mel Gorman
2013-12-15 15:55 ` Mel Gorman
2013-12-15 15:55 ` Mel Gorman
2013-12-15 16:17 ` Mel Gorman
2013-12-15 16:17 ` Mel Gorman
2013-12-15 18:34 ` Linus Torvalds
2013-12-15 18:34 ` Linus Torvalds
2013-12-16 11:16 ` Mel Gorman
2013-12-16 11:16 ` Mel Gorman
2013-12-16 10:24 ` Ingo Molnar
2013-12-16 10:24 ` Ingo Molnar
2013-12-16 12:59 ` Mel Gorman
2013-12-16 12:59 ` Mel Gorman
2013-12-16 13:44 ` Ingo Molnar
2013-12-16 13:44 ` Ingo Molnar
2013-12-17 9:21 ` Mel Gorman
2013-12-17 9:21 ` Mel Gorman
2013-12-17 9:26 ` Peter Zijlstra
2013-12-17 9:26 ` Peter Zijlstra
2013-12-17 11:00 ` Ingo Molnar
2013-12-17 11:00 ` Ingo Molnar
2013-12-17 14:32 ` Mel Gorman
2013-12-17 14:32 ` Mel Gorman
2013-12-17 14:42 ` Ingo Molnar
2013-12-17 14:42 ` Ingo Molnar
2013-12-17 17:54 ` Mel Gorman
2013-12-17 17:54 ` Mel Gorman
2013-12-18 10:24 ` Ingo Molnar
2013-12-18 10:24 ` Ingo Molnar
2013-12-19 14:24 ` Mel Gorman
2013-12-19 14:24 ` Mel Gorman
2013-12-19 16:49 ` Ingo Molnar
2013-12-19 16:49 ` Ingo Molnar
2013-12-20 11:13 ` Mel Gorman
2013-12-20 11:13 ` Mel Gorman
2013-12-20 11:18 ` Ingo Molnar
2013-12-20 11:18 ` Ingo Molnar
2013-12-20 12:00 ` Mel Gorman
2013-12-20 12:00 ` Mel Gorman
2013-12-20 12:20 ` Ingo Molnar
2013-12-20 12:20 ` Ingo Molnar
2013-12-20 13:55 ` Mel Gorman
2013-12-20 13:55 ` Mel Gorman
2013-12-18 10:32 ` [tip:sched/core] sched: Assign correct scheduling domain to ' sd_llc' tip-bot for Mel Gorman
2013-12-18 7:28 ` [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB range flush v2 Fengguang Wu
2013-12-18 7:28 ` Fengguang Wu
2013-12-19 14:34 ` Mel Gorman
2013-12-19 14:34 ` Mel Gorman
2013-12-20 15:51 ` Fengguang Wu
2013-12-20 16:44 ` Mel Gorman
2013-12-20 16:44 ` Mel Gorman
2013-12-21 15:49 ` Fengguang Wu
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.