* [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 16:36 ` Randy Dunlap
2018-05-17 11:06 ` [PATCH v11 02/26] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
` (26 subsequent siblings)
27 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
This configuration variable will be used to build the code needed to
handle speculative page fault.
By default it is turned off, and activated depending on architecture
support, ARCH_HAS_PTE_SPECIAL, SMP and MMU.
The architecture support is needed since the speculative page fault handler
is called from the architecture's page faulting code, and some code has to
be added there to handle the speculative handler.
The dependency on ARCH_HAS_PTE_SPECIAL is required because vm_normal_page()
does processing that is not compatible with the speculative handling in the
case ARCH_HAS_PTE_SPECIAL is not set.
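Schematically, the flow the architecture support enables can be sketched in userspace C (a hedged illustration only, not the actual x86 code from this series; the handler names, the flag value, and the fallback shape are stand-ins): the arch fault path tries the speculative handler first, and falls back to the classic mmap_sem-protected path when it asks for a retry.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified fault-result flag, mirroring the kernel's VM_FAULT_RETRY. */
#define VM_FAULT_RETRY 0x0400

/* Stand-in for the speculative handler; fails when the layout changed. */
static int handle_speculative_fault(unsigned long addr, bool vma_changed)
{
	(void)addr;
	return vma_changed ? VM_FAULT_RETRY : 0;
}

/* Stand-in for the classic handler, serialized by the mmap_sem. */
static int handle_mm_fault_classic(unsigned long addr)
{
	(void)addr;
	return 0;
}

/* Arch fault path: try the speculative handler first, fall back on retry. */
static int do_page_fault(unsigned long addr, bool vma_changed)
{
	int ret = handle_speculative_fault(addr, vma_changed);

	if (ret & VM_FAULT_RETRY)
		ret = handle_mm_fault_classic(addr);
	return ret;
}
```

Either way the fault is eventually serviced; the speculative attempt only changes which path gets there first.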
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/Kconfig | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 1d0888c5b97a..a38796276113 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -761,3 +761,25 @@ config GUP_BENCHMARK
config ARCH_HAS_PTE_SPECIAL
bool
+
+config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
+ def_bool n
+
+config SPECULATIVE_PAGE_FAULT
+ bool "Speculative page faults"
+ default y
+ depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
+ depends on ARCH_HAS_PTE_SPECIAL && MMU && SMP
+ help
+ Try to handle user space page faults without holding the mmap_sem.
+
+ This should allow better concurrency for massively threaded process
+ since the page fault handler will not wait for other threads memory
+ layout change to be done, assuming that this change is done in another
+ part of the process's memory space. This type of page fault is named
+ speculative page fault.
+
+ If the speculative page fault fails because of a concurrency is
+ detected or because underlying PMD or PTE tables are not yet
+ allocating, it is failing its processing and a classic page fault
+ is then tried.
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* Re: [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 11:06 ` [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-05-17 16:36 ` Randy Dunlap
2018-05-17 17:19 ` Matthew Wilcox
2018-05-22 11:47 ` Laurent Dufour
0 siblings, 2 replies; 106+ messages in thread
From: Randy Dunlap @ 2018-05-17 16:36 UTC (permalink / raw)
To: Laurent Dufour, akpm, mhocko, peterz, kirill, ak, dave, jack,
Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, kemi.wang, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, Punit Agrawal,
vinayak menon, Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Hi,
On 05/17/2018 04:06 AM, Laurent Dufour wrote:
> This configuration variable will be used to build the code needed to
> handle speculative page fault.
>
> By default it is turned off, and activated depending on architecture
> support, ARCH_HAS_PTE_SPECIAL, SMP and MMU.
>
> The architecture support is needed since the speculative page fault handler
> is called from the architecture's page faulting code, and some code has to
> be added there to handle the speculative handler.
>
> The dependency on ARCH_HAS_PTE_SPECIAL is required because vm_normal_page()
> does processing that is not compatible with the speculative handling in the
> case ARCH_HAS_PTE_SPECIAL is not set.
>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Suggested-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> mm/Kconfig | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 1d0888c5b97a..a38796276113 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -761,3 +761,25 @@ config GUP_BENCHMARK
>
> config ARCH_HAS_PTE_SPECIAL
> bool
> +
> +config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> + def_bool n
> +
> +config SPECULATIVE_PAGE_FAULT
> + bool "Speculative page faults"
> + default y
> + depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> + depends on ARCH_HAS_PTE_SPECIAL && MMU && SMP
> + help
> + Try to handle user space page faults without holding the mmap_sem.
> +
> + This should allow better concurrency for massively threaded process
processes
> + since the page fault handler will not wait for other threads memory
thread's
> + layout change to be done, assuming that this change is done in another
> + part of the process's memory space. This type of page fault is named
> + speculative page fault.
> +
> + If the speculative page fault fails because of a concurrency is
because a concurrency is
> + detected or because underlying PMD or PTE tables are not yet
> + allocating, it is failing its processing and a classic page fault
allocated, the speculative page fault fails and a classic page fault
> + is then tried.
Also, all of the help text (below the "help" line) should be indented by
1 tab + 2 spaces (in coding-style.rst).
--
~Randy
* Re: [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 16:36 ` Randy Dunlap
@ 2018-05-17 17:19 ` Matthew Wilcox
2018-05-17 17:34 ` Randy Dunlap
2018-05-22 11:44 ` [PATCH " Laurent Dufour
2018-05-22 11:47 ` Laurent Dufour
1 sibling, 2 replies; 106+ messages in thread
From: Matthew Wilcox @ 2018-05-17 17:19 UTC (permalink / raw)
To: Randy Dunlap
Cc: Laurent Dufour, akpm, mhocko, peterz, kirill, ak, dave, jack,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On Thu, May 17, 2018 at 09:36:00AM -0700, Randy Dunlap wrote:
> > + If the speculative page fault fails because of a concurrency is
>
> because a concurrency is
While one can use concurrency as a noun, it sounds archaic to me. I'd
rather:
If the speculative page fault fails because a concurrent modification
is detected or because underlying PMD or PTE tables are not yet
> > + detected or because underlying PMD or PTE tables are not yet
> > + allocating, it is failing its processing and a classic page fault
>
> allocated, the speculative page fault fails and a classic page fault
>
> > + is then tried.
* Re: [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 17:19 ` Matthew Wilcox
@ 2018-05-17 17:34 ` Randy Dunlap
2018-05-22 12:00 ` [FIX PATCH " Laurent Dufour
2018-05-22 11:44 ` [PATCH " Laurent Dufour
1 sibling, 1 reply; 106+ messages in thread
From: Randy Dunlap @ 2018-05-17 17:34 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Laurent Dufour, akpm, mhocko, peterz, kirill, ak, dave, jack,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 05/17/2018 10:19 AM, Matthew Wilcox wrote:
> On Thu, May 17, 2018 at 09:36:00AM -0700, Randy Dunlap wrote:
>>> + If the speculative page fault fails because of a concurrency is
>>
>> because a concurrency is
>
> While one can use concurrency as a noun, it sounds archaic to me. I'd
> rather:
>
> If the speculative page fault fails because a concurrent modification
> is detected or because underlying PMD or PTE tables are not yet
Yeah, OK.
>>> + detected or because underlying PMD or PTE tables are not yet
>>> + allocating, it is failing its processing and a classic page fault
>>
>> allocated, the speculative page fault fails and a classic page fault
>>
>>> + is then tried.
--
~Randy
* [FIX PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 17:34 ` Randy Dunlap
@ 2018-05-22 12:00 ` Laurent Dufour
0 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-22 12:00 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, Randy Dunlap
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
This configuration variable will be used to build the code needed to
handle speculative page faults.
By default it is turned off, and activated depending on architecture
support, ARCH_HAS_PTE_SPECIAL, SMP and MMU.
The architecture support is needed since the speculative page fault handler
is called from the architecture's page faulting code, and some code has to
be added there to handle the speculative handler.
The dependency on ARCH_HAS_PTE_SPECIAL is required because vm_normal_page()
does processing that is not compatible with the speculative handling in the
case ARCH_HAS_PTE_SPECIAL is not set.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/Kconfig | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 1d0888c5b97a..d958fd8ce73a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -761,3 +761,25 @@ config GUP_BENCHMARK
config ARCH_HAS_PTE_SPECIAL
bool
+
+config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
+ def_bool n
+
+config SPECULATIVE_PAGE_FAULT
+ bool "Speculative page faults"
+ default y
+ depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
+ depends on ARCH_HAS_PTE_SPECIAL && MMU && SMP
+ help
+ Try to handle user space page faults without holding the mmap_sem.
+
+ This should allow better concurrency for massively threaded processes
+ since the page fault handler will not wait for other thread's memory
+ layout change to be done, assuming that this change is done in
+ another part of the process's memory space. This type of page fault
+ is named speculative page fault.
+
+ If the speculative page fault fails because a concurrent modification
+ is detected or because underlying PMD or PTE tables are not yet
+ allocated, the speculative page fault fails and a classic page fault
+ is then tried.
--
2.7.4
* Re: [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 17:19 ` Matthew Wilcox
2018-05-17 17:34 ` Randy Dunlap
@ 2018-05-22 11:44 ` Laurent Dufour
1 sibling, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-22 11:44 UTC (permalink / raw)
To: Matthew Wilcox, Randy Dunlap
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, khandual,
aneesh.kumar, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
hpa, Will Deacon, Sergey Senozhatsky, sergey.senozhatsky.work,
Andrea Arcangeli, Alexei Starovoitov, kemi.wang, Daniel Jordan,
David Rientjes, Jerome Glisse, Ganesh Mahendran, Minchan Kim,
Punit Agrawal, vinayak menon, Yang Shi, linux-kernel, linux-mm,
haren, npiggin, bsingharora, paulmck, Tim Chen, linuxppc-dev,
x86
On 17/05/2018 19:19, Matthew Wilcox wrote:
> On Thu, May 17, 2018 at 09:36:00AM -0700, Randy Dunlap wrote:
>>> + If the speculative page fault fails because of a concurrency is
>>
>> because a concurrency is
>
> While one can use concurrency as a noun, it sounds archaic to me. I'd
> rather:
>
> If the speculative page fault fails because a concurrent modification
> is detected or because underlying PMD or PTE tables are not yet
Thanks Matthew, I'll do that.
>
>>> + detected or because underlying PMD or PTE tables are not yet
>>> + allocating, it is failing its processing and a classic page fault
>>
>> allocated, the speculative page fault fails and a classic page fault
>>
>>> + is then tried.
>
* Re: [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
2018-05-17 16:36 ` Randy Dunlap
2018-05-17 17:19 ` Matthew Wilcox
@ 2018-05-22 11:47 ` Laurent Dufour
1 sibling, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-22 11:47 UTC (permalink / raw)
To: Randy Dunlap, akpm, mhocko, peterz, kirill, ak, dave, jack,
Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, kemi.wang, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, Punit Agrawal,
vinayak menon, Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
On 17/05/2018 18:36, Randy Dunlap wrote:
> Hi,
>
> On 05/17/2018 04:06 AM, Laurent Dufour wrote:
>> This configuration variable will be used to build the code needed to
>> handle speculative page fault.
>>
>> By default it is turned off, and activated depending on architecture
>> support, ARCH_HAS_PTE_SPECIAL, SMP and MMU.
>>
>> The architecture support is needed since the speculative page fault handler
>> is called from the architecture's page faulting code, and some code has to
>> be added there to handle the speculative handler.
>>
>> The dependency on ARCH_HAS_PTE_SPECIAL is required because vm_normal_page()
>> does processing that is not compatible with the speculative handling in the
>> case ARCH_HAS_PTE_SPECIAL is not set.
>>
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Suggested-by: David Rientjes <rientjes@google.com>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>> mm/Kconfig | 22 ++++++++++++++++++++++
>> 1 file changed, 22 insertions(+)
>>
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index 1d0888c5b97a..a38796276113 100644
>> --- a/mm/Kconfig
>> +++ b/mm/Kconfig
>> @@ -761,3 +761,25 @@ config GUP_BENCHMARK
>>
>> config ARCH_HAS_PTE_SPECIAL
>> bool
>> +
>> +config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>> + def_bool n
>> +
>> +config SPECULATIVE_PAGE_FAULT
>> + bool "Speculative page faults"
>> + default y
>> + depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>> + depends on ARCH_HAS_PTE_SPECIAL && MMU && SMP
>> + help
>> + Try to handle user space page faults without holding the mmap_sem.
>> +
>> + This should allow better concurrency for massively threaded process
>
> processes
>
>> + since the page fault handler will not wait for other threads memory
>
> thread's
>
>> + layout change to be done, assuming that this change is done in another
>> + part of the process's memory space. This type of page fault is named
>> + speculative page fault.
>> +
>> + If the speculative page fault fails because of a concurrency is
>
> because a concurrency is
>
>> + detected or because underlying PMD or PTE tables are not yet
>> + allocating, it is failing its processing and a classic page fault
>
> allocated, the speculative page fault fails and a classic page fault
>
>> + is then tried.
>
>
> Also, all of the help text (below the "help" line) should be indented by
> 1 tab + 2 spaces (in coding-style.rst).
Thanks, Randy for reviewing my miserable English grammar.
I'll fix that and the indentation.
* [PATCH v11 02/26] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 03/26] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
` (25 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT which turns on the
Speculative Page Fault handler when building for 64-bit.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 47e7f582f86a..603f788a3e83 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -32,6 +32,7 @@ config X86_64
select SWIOTLB
select X86_DEV_DMA_OPS
select ARCH_HAS_SYSCALL_WRAPPER
+ select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
#
# Arch settings
--
2.7.4
* [PATCH v11 03/26] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 01/26] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 02/26] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 04/26] arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
` (24 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for BOOK3S_64. This enables
the Speculative Page Fault handler.
Support is only provided for BOOK3S_64 currently because:
- it requires CONFIG_PPC_STD_MMU because of checks done in
set_access_flags_filter()
- it requires BOOK3S because we can't support book3e_hugetlb_preload()
called by update_mmu_cache()
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/powerpc/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index be7aca467692..75f71b963630 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -232,6 +232,7 @@ config PPC
select OLD_SIGACTION if PPC32
select OLD_SIGSUSPEND
select SPARSE_IRQ
+ select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT if PPC_BOOK3S_64
select SYSCTL_EXCEPTION_TRACE
select VIRT_TO_BUS if !PPC64
#
--
2.7.4
* [PATCH v11 04/26] arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (2 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 03/26] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 05/26] mm: prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
` (23 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
From: Mahendran Ganesh <opensource.ganesh@gmail.com>
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for arm64. This
enables the Speculative Page Fault handler.
Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4759566a78cb..c932ae6d2cce 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -147,6 +147,7 @@ config ARM64
select SWIOTLB
select SYSCTL_EXCEPTION_TRACE
select THREAD_INFO_IN_TASK
+ select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
help
ARM 64-bit (AArch64) Linux support.
--
2.7.4
* [PATCH v11 05/26] mm: prepare for FAULT_FLAG_SPECULATIVE
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (3 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 04/26] arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 06/26] mm: introduce pte_spinlock " Laurent Dufour
` (22 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
From: Peter Zijlstra <peterz@infradead.org>
When speculating faults (without holding mmap_sem) we need to validate
that the vma against which we loaded pages is still valid when we're
ready to install the new PTE.
Therefore, replace the pte_offset_map_lock() calls that (re)take the
PTL with pte_map_lock() which can fail in case we find the VMA changed
since we started the fault.
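In this preparatory patch pte_map_lock() always succeeds; later patches in the series are expected to make it fail when the VMA changed under the fault. A hedged userspace sketch of that eventual failure semantic (the struct fields and names below are invented for illustration, e.g. the sequence-count check):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for the kernel structures; fields are illustrative. */
struct vma_sim {
	unsigned int vm_sequence;   /* bumped on any layout change */
};

struct vmf_sim {
	struct vma_sim *vma;
	unsigned int    sequence;   /* snapshot taken when the fault started */
	bool            pte_mapped; /* stands in for vmf->pte / vmf->ptl */
};

/*
 * Sketch of a failing pte_map_lock(): only map and lock the PTE if the
 * VMA has not changed since the speculative fault began.
 */
static bool pte_map_lock_sim(struct vmf_sim *vmf)
{
	if (vmf->vma->vm_sequence != vmf->sequence)
		return false;       /* caller returns VM_FAULT_RETRY */
	vmf->pte_mapped = true;     /* pte_offset_map_lock() equivalent */
	return true;
}
```

On failure the caller unwinds whatever it allocated and returns VM_FAULT_RETRY, which is exactly the error-path rework this patch prepares.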
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[Port to 4.12 kernel]
[Remove the comment about the fault_env structure which has been
implemented as the vm_fault structure in the kernel]
[move pte_map_lock()'s definition upper in the file]
[move the define of FAULT_FLAG_SPECULATIVE later in the series]
[review error path in do_swap_page(), do_anonymous_page() and
wp_page_copy()]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/memory.c | 87 ++++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 58 insertions(+), 29 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 14578158ed20..a55e72c8e469 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2298,6 +2298,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(apply_to_page_range);
+static inline bool pte_map_lock(struct vm_fault *vmf)
+{
+ vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+ vmf->address, &vmf->ptl);
+ return true;
+}
+
/*
* handle_pte_fault chooses page fault handler according to an entry which was
* read non-atomically. Before making any commitment, on those architectures
@@ -2487,25 +2494,26 @@ static int wp_page_copy(struct vm_fault *vmf)
const unsigned long mmun_start = vmf->address & PAGE_MASK;
const unsigned long mmun_end = mmun_start + PAGE_SIZE;
struct mem_cgroup *memcg;
+ int ret = VM_FAULT_OOM;
if (unlikely(anon_vma_prepare(vma)))
- goto oom;
+ goto out;
if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
new_page = alloc_zeroed_user_highpage_movable(vma,
vmf->address);
if (!new_page)
- goto oom;
+ goto out;
} else {
new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
vmf->address);
if (!new_page)
- goto oom;
+ goto out;
cow_user_page(new_page, old_page, vmf->address, vma);
}
if (mem_cgroup_try_charge(new_page, mm, GFP_KERNEL, &memcg, false))
- goto oom_free_new;
+ goto out_free_new;
__SetPageUptodate(new_page);
@@ -2514,7 +2522,10 @@ static int wp_page_copy(struct vm_fault *vmf)
/*
* Re-check the pte - we dropped the lock
*/
- vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
+ if (!pte_map_lock(vmf)) {
+ ret = VM_FAULT_RETRY;
+ goto out_uncharge;
+ }
if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
if (old_page) {
if (!PageAnon(old_page)) {
@@ -2601,12 +2612,14 @@ static int wp_page_copy(struct vm_fault *vmf)
put_page(old_page);
}
return page_copied ? VM_FAULT_WRITE : 0;
-oom_free_new:
+out_uncharge:
+ mem_cgroup_cancel_charge(new_page, memcg, false);
+out_free_new:
put_page(new_page);
-oom:
+out:
if (old_page)
put_page(old_page);
- return VM_FAULT_OOM;
+ return ret;
}
/**
@@ -2627,8 +2640,8 @@ static int wp_page_copy(struct vm_fault *vmf)
int finish_mkwrite_fault(struct vm_fault *vmf)
{
WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
- vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
- &vmf->ptl);
+ if (!pte_map_lock(vmf))
+ return VM_FAULT_RETRY;
/*
* We might have raced with another page fault while we released the
* pte_offset_map_lock.
@@ -2746,8 +2759,11 @@ static int do_wp_page(struct vm_fault *vmf)
get_page(vmf->page);
pte_unmap_unlock(vmf->pte, vmf->ptl);
lock_page(vmf->page);
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
- vmf->address, &vmf->ptl);
+ if (!pte_map_lock(vmf)) {
+ unlock_page(vmf->page);
+ put_page(vmf->page);
+ return VM_FAULT_RETRY;
+ }
if (!pte_same(*vmf->pte, vmf->orig_pte)) {
unlock_page(vmf->page);
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2954,11 +2970,15 @@ int do_swap_page(struct vm_fault *vmf)
if (!page) {
/*
- * Back out if somebody else faulted in this pte
- * while we released the pte lock.
+ * Back out if the VMA has changed in our back during
+ * a speculative page fault or if somebody else
+ * faulted in this pte while we released the pte lock.
*/
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
- vmf->address, &vmf->ptl);
+ if (!pte_map_lock(vmf)) {
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ ret = VM_FAULT_RETRY;
+ goto out;
+ }
if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
ret = VM_FAULT_OOM;
delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
@@ -3011,10 +3031,13 @@ int do_swap_page(struct vm_fault *vmf)
}
/*
- * Back out if somebody else already faulted in this pte.
+ * Back out if the VMA has changed in our back during a speculative
+ * page fault or if somebody else already faulted in this pte.
*/
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
- &vmf->ptl);
+ if (!pte_map_lock(vmf)) {
+ ret = VM_FAULT_RETRY;
+ goto out_cancel_cgroup;
+ }
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
goto out_nomap;
@@ -3092,8 +3115,9 @@ int do_swap_page(struct vm_fault *vmf)
out:
return ret;
out_nomap:
- mem_cgroup_cancel_charge(page, memcg, false);
pte_unmap_unlock(vmf->pte, vmf->ptl);
+out_cancel_cgroup:
+ mem_cgroup_cancel_charge(page, memcg, false);
out_page:
unlock_page(page);
out_release:
@@ -3144,8 +3168,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
!mm_forbids_zeropage(vma->vm_mm)) {
entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
vma->vm_page_prot));
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
- vmf->address, &vmf->ptl);
+ if (!pte_map_lock(vmf))
+ return VM_FAULT_RETRY;
if (!pte_none(*vmf->pte))
goto unlock;
ret = check_stable_address_space(vma->vm_mm);
@@ -3180,14 +3204,16 @@ static int do_anonymous_page(struct vm_fault *vmf)
if (vma->vm_flags & VM_WRITE)
entry = pte_mkwrite(pte_mkdirty(entry));
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
- &vmf->ptl);
- if (!pte_none(*vmf->pte))
+ if (!pte_map_lock(vmf)) {
+ ret = VM_FAULT_RETRY;
goto release;
+ }
+ if (!pte_none(*vmf->pte))
+ goto unlock_and_release;
ret = check_stable_address_space(vma->vm_mm);
if (ret)
- goto release;
+ goto unlock_and_release;
/* Deliver the page fault to userland, check inside PT lock */
if (userfaultfd_missing(vma)) {
@@ -3209,10 +3235,12 @@ static int do_anonymous_page(struct vm_fault *vmf)
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
return ret;
+unlock_and_release:
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
release:
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
- goto unlock;
+ return ret;
oom_free_page:
put_page(page);
oom:
@@ -3305,8 +3333,9 @@ static int pte_alloc_one_map(struct vm_fault *vmf)
* pte_none() under vmf->ptl protection when we return to
* alloc_set_pte().
*/
- vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
- &vmf->ptl);
+ if (!pte_map_lock(vmf))
+ return VM_FAULT_RETRY;
+
return 0;
}
--
2.7.4
* [PATCH v11 06/26] mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (4 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 05/26] mm: prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 07/26] mm: make pte_unmap_same compatible with SPF Laurent Dufour
` (21 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
When handling a page fault without holding the mmap_sem, the fetch of
the pte lock pointer and the locking will have to be done while
ensuring that the VMA is not changed behind our back.
So move the fetch and locking operations into a dedicated function.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/memory.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index a55e72c8e469..fa0d9493acac 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2298,6 +2298,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(apply_to_page_range);
+static inline bool pte_spinlock(struct vm_fault *vmf)
+{
+ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ spin_lock(vmf->ptl);
+ return true;
+}
+
static inline bool pte_map_lock(struct vm_fault *vmf)
{
vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
@@ -3814,8 +3821,8 @@ static int do_numa_page(struct vm_fault *vmf)
* validation through pte_unmap_same(). It's of NUMA type but
* the pfn may be screwed if the read is non atomic.
*/
- vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
- spin_lock(vmf->ptl);
+ if (!pte_spinlock(vmf))
+ return VM_FAULT_RETRY;
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
goto out;
@@ -4008,8 +4015,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
return do_numa_page(vmf);
- vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
- spin_lock(vmf->ptl);
+ if (!pte_spinlock(vmf))
+ return VM_FAULT_RETRY;
entry = vmf->orig_pte;
if (unlikely(!pte_same(*vmf->pte, entry)))
goto unlock;
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 07/26] mm: make pte_unmap_same compatible with SPF
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (5 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 06/26] mm: introduce pte_spinlock " Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 08/26] mm: introduce INIT_VMA() Laurent Dufour
` (20 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
pte_unmap_same() assumes that the page tables are still around because
the mmap_sem is held.
This is no longer the case when running a speculative page fault, so an
additional check must be made to ensure that the final page tables are
still there.
This is now done by calling pte_spinlock() to check for the VMA's
consistency while locking the page tables.
This requires passing a vm_fault structure to pte_unmap_same(), which
contains all the needed parameters.
As pte_spinlock() may fail in the case of a speculative page fault, if the
VMA has been modified behind our back, pte_unmap_same() now returns 3
cases:
1. the PTEs are the same (0)
2. the PTEs are different (VM_FAULT_PTNOTSAME)
3. a VMA change has been detected (VM_FAULT_RETRY)
Case 2 is handled by the introduction of a new VM_FAULT flag named
VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
page fault while holding the mmap_sem.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 4 +++-
mm/memory.c | 39 ++++++++++++++++++++++++++++-----------
2 files changed, 31 insertions(+), 12 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 338b8a1afb02..113b572471ca 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1249,6 +1249,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
#define VM_FAULT_NEEDDSYNC 0x2000 /* ->fault did not modify page tables
* and needs fsync() to complete (for
* synchronous page faults in DAX) */
+#define VM_FAULT_PTNOTSAME 0x4000 /* Page table entries have changed */
#define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
@@ -1267,7 +1268,8 @@ static inline void clear_page_pfmemalloc(struct page *page)
{ VM_FAULT_RETRY, "RETRY" }, \
{ VM_FAULT_FALLBACK, "FALLBACK" }, \
{ VM_FAULT_DONE_COW, "DONE_COW" }, \
- { VM_FAULT_NEEDDSYNC, "NEEDDSYNC" }
+ { VM_FAULT_NEEDDSYNC, "NEEDDSYNC" }, \
+ { VM_FAULT_PTNOTSAME, "PTNOTSAME" }
/* Encode hstate index for a hwpoisoned large page */
#define VM_FAULT_SET_HINDEX(x) ((x) << 12)
diff --git a/mm/memory.c b/mm/memory.c
index fa0d9493acac..75163c145c76 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2319,21 +2319,29 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
* parts, do_swap_page must check under lock before unmapping the pte and
* proceeding (but do_wp_page is only called after already making such a check;
* and do_anonymous_page can safely check later on).
+ *
+ * pte_unmap_same() returns:
+ * 0 if the PTEs are the same
+ * VM_FAULT_PTNOTSAME if the PTEs are different
+ * VM_FAULT_RETRY if the VMA has changed behind our back during
+ * speculative page fault handling.
*/
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
- pte_t *page_table, pte_t orig_pte)
+static inline int pte_unmap_same(struct vm_fault *vmf)
{
- int same = 1;
+ int ret = 0;
+
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
if (sizeof(pte_t) > sizeof(unsigned long)) {
- spinlock_t *ptl = pte_lockptr(mm, pmd);
- spin_lock(ptl);
- same = pte_same(*page_table, orig_pte);
- spin_unlock(ptl);
+ if (pte_spinlock(vmf)) {
+ if (!pte_same(*vmf->pte, vmf->orig_pte))
+ ret = VM_FAULT_PTNOTSAME;
+ spin_unlock(vmf->ptl);
+ } else
+ ret = VM_FAULT_RETRY;
}
#endif
- pte_unmap(page_table);
- return same;
+ pte_unmap(vmf->pte);
+ return ret;
}
static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
@@ -2922,10 +2930,19 @@ int do_swap_page(struct vm_fault *vmf)
pte_t pte;
int locked;
int exclusive = 0;
- int ret = 0;
+ int ret;
- if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
+ ret = pte_unmap_same(vmf);
+ if (ret) {
+ /*
+ * If pte != orig_pte, this means another thread did the
+ * swap operation behind our back.
+ * So nothing else to do.
+ */
+ if (ret == VM_FAULT_PTNOTSAME)
+ ret = 0;
goto out;
+ }
entry = pte_to_swp_entry(vmf->orig_pte);
if (unlikely(non_swap_entry(entry))) {
--
2.7.4
* [PATCH v11 08/26] mm: introduce INIT_VMA()
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (6 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 07/26] mm: make pte_unmap_same compatible with SPF Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 09/26] mm: VMA sequence count Laurent Dufour
` (19 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Some VMA struct fields need to be initialized once the VMA structure is
allocated.
Currently this only concerns the anon_vma_chain field, but others will be
added to support the speculative page fault.
Instead of spreading the initialization calls all over the code, let's
introduce a dedicated inline function.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
fs/exec.c | 2 +-
include/linux/mm.h | 5 +++++
kernel/fork.c | 2 +-
mm/mmap.c | 10 +++++-----
mm/nommu.c | 2 +-
5 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 6fc98cfd3bdb..7e134a588ef3 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -311,7 +311,7 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
vma->vm_start = vma->vm_end - PAGE_SIZE;
vma->vm_flags = VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
- INIT_LIST_HEAD(&vma->anon_vma_chain);
+ INIT_VMA(vma);
err = insert_vm_struct(mm, vma);
if (err)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 113b572471ca..35ecb983ff36 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1303,6 +1303,11 @@ struct zap_details {
pgoff_t last_index; /* Highest page->index to unmap */
};
+static inline void INIT_VMA(struct vm_area_struct *vma)
+{
+ INIT_LIST_HEAD(&vma->anon_vma_chain);
+}
+
struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
pte_t pte, bool with_public_device);
#define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
diff --git a/kernel/fork.c b/kernel/fork.c
index 744d6fbba8f8..99198a02efe9 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -458,7 +458,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
if (!tmp)
goto fail_nomem;
*tmp = *mpnt;
- INIT_LIST_HEAD(&tmp->anon_vma_chain);
+ INIT_VMA(tmp);
retval = vma_dup_policy(mpnt, tmp);
if (retval)
goto fail_nomem_policy;
diff --git a/mm/mmap.c b/mm/mmap.c
index d2ef1060a2d2..ceb1c2c1b46b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1709,7 +1709,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
vma->vm_flags = vm_flags;
vma->vm_page_prot = vm_get_page_prot(vm_flags);
vma->vm_pgoff = pgoff;
- INIT_LIST_HEAD(&vma->anon_vma_chain);
+ INIT_VMA(vma);
if (file) {
if (vm_flags & VM_DENYWRITE) {
@@ -2595,7 +2595,7 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
/* most fields are the same, copy all, and then fixup */
*new = *vma;
- INIT_LIST_HEAD(&new->anon_vma_chain);
+ INIT_VMA(new);
if (new_below)
new->vm_end = addr;
@@ -2965,7 +2965,7 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
return -ENOMEM;
}
- INIT_LIST_HEAD(&vma->anon_vma_chain);
+ INIT_VMA(vma);
vma->vm_mm = mm;
vma->vm_start = addr;
vma->vm_end = addr + len;
@@ -3184,7 +3184,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
new_vma->vm_pgoff = pgoff;
if (vma_dup_policy(vma, new_vma))
goto out_free_vma;
- INIT_LIST_HEAD(&new_vma->anon_vma_chain);
+ INIT_VMA(new_vma);
if (anon_vma_clone(new_vma, vma))
goto out_free_mempol;
if (new_vma->vm_file)
@@ -3327,7 +3327,7 @@ static struct vm_area_struct *__install_special_mapping(
if (unlikely(vma == NULL))
return ERR_PTR(-ENOMEM);
- INIT_LIST_HEAD(&vma->anon_vma_chain);
+ INIT_VMA(vma);
vma->vm_mm = mm;
vma->vm_start = addr;
vma->vm_end = addr + len;
diff --git a/mm/nommu.c b/mm/nommu.c
index 4452d8bd9ae4..ece424315cc5 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1212,7 +1212,7 @@ unsigned long do_mmap(struct file *file,
region->vm_flags = vm_flags;
region->vm_pgoff = pgoff;
- INIT_LIST_HEAD(&vma->anon_vma_chain);
+ INIT_VMA(vma);
vma->vm_flags = vm_flags;
vma->vm_pgoff = pgoff;
--
2.7.4
* [PATCH v11 09/26] mm: VMA sequence count
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (7 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 08/26] mm: introduce INIT_VMA() Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 10/26] mm: protect VMA modifications using " Laurent Dufour
` (18 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
From: Peter Zijlstra <peterz@infradead.org>
Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts such that we can easily test if a VMA has changed.
The calls to vm_write_begin/end() in unmap_page_range() are
used to detect when a VMA is being unmapped and thus that new page faults
should not be satisfied for this VMA. If the seqcount hasn't changed when
the page tables are locked, this means we are safe to satisfy the page
fault.
The flip side is that we cannot distinguish between a vma_adjust() and
the unmap_page_range() -- where with the former we could have
re-checked the vma bounds against the address.
The VMA's sequence counter is also used to detect changes to various VMA
fields used during page fault handling, such as:
- vm_start, vm_end
- vm_pgoff
- vm_flags, vm_page_prot
- anon_vma
- vm_policy
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[Port to 4.12 kernel]
[Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
[Introduce vm_write_* inline function depending on
CONFIG_SPECULATIVE_PAGE_FAULT]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
using vm_raw_write* functions]
[Fix a lock dependency warning in mmap_region() when entering the error
path]
[Move sequence count initialisation to INIT_VMA()]
[Review the patch description about unmap_page_range()]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
include/linux/mm_types.h | 3 +++
mm/memory.c | 2 ++
mm/mmap.c | 31 +++++++++++++++++++++++++++++++
4 files changed, 80 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 35ecb983ff36..18acfdeee759 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1306,6 +1306,9 @@ struct zap_details {
static inline void INIT_VMA(struct vm_area_struct *vma)
{
INIT_LIST_HEAD(&vma->anon_vma_chain);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ seqcount_init(&vma->vm_sequence);
+#endif
}
struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
@@ -1428,6 +1431,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
unmap_mapping_range(mapping, holebegin, holelen, 0);
}
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+static inline void vm_write_begin(struct vm_area_struct *vma)
+{
+ write_seqcount_begin(&vma->vm_sequence);
+}
+static inline void vm_write_begin_nested(struct vm_area_struct *vma,
+ int subclass)
+{
+ write_seqcount_begin_nested(&vma->vm_sequence, subclass);
+}
+static inline void vm_write_end(struct vm_area_struct *vma)
+{
+ write_seqcount_end(&vma->vm_sequence);
+}
+static inline void vm_raw_write_begin(struct vm_area_struct *vma)
+{
+ raw_write_seqcount_begin(&vma->vm_sequence);
+}
+static inline void vm_raw_write_end(struct vm_area_struct *vma)
+{
+ raw_write_seqcount_end(&vma->vm_sequence);
+}
+#else
+static inline void vm_write_begin(struct vm_area_struct *vma)
+{
+}
+static inline void vm_write_begin_nested(struct vm_area_struct *vma,
+ int subclass)
+{
+}
+static inline void vm_write_end(struct vm_area_struct *vma)
+{
+}
+static inline void vm_raw_write_begin(struct vm_area_struct *vma)
+{
+}
+static inline void vm_raw_write_end(struct vm_area_struct *vma)
+{
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
void *buf, int len, unsigned int gup_flags);
extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 54f1e05ecf3e..fb5962308183 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -335,6 +335,9 @@ struct vm_area_struct {
struct mempolicy *vm_policy; /* NUMA policy for the VMA */
#endif
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ seqcount_t vm_sequence;
+#endif
} __randomize_layout;
struct core_thread {
diff --git a/mm/memory.c b/mm/memory.c
index 75163c145c76..551a1916da5d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1499,6 +1499,7 @@ void unmap_page_range(struct mmu_gather *tlb,
unsigned long next;
BUG_ON(addr >= end);
+ vm_write_begin(vma);
tlb_start_vma(tlb, vma);
pgd = pgd_offset(vma->vm_mm, addr);
do {
@@ -1508,6 +1509,7 @@ void unmap_page_range(struct mmu_gather *tlb,
next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
} while (pgd++, addr = next, addr != end);
tlb_end_vma(tlb, vma);
+ vm_write_end(vma);
}
diff --git a/mm/mmap.c b/mm/mmap.c
index ceb1c2c1b46b..eeafd0bc8b36 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -701,6 +701,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
long adjust_next = 0;
int remove_next = 0;
+ /*
+ * Why use the vm_raw_write*() functions here to avoid lockdep's warning?
+ *
+ * Lockdep is complaining about a theoretical lock dependency involving
+ * 3 locks:
+ * mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
+ *
+ * Here are the major paths leading to this dependency:
+ * 1. __vma_adjust() mmap_sem -> vm_sequence -> i_mmap_rwsem
+ * 2. move_vmap() mmap_sem -> vm_sequence -> fs_reclaim
+ * 3. __alloc_pages_nodemask() fs_reclaim -> i_mmap_rwsem
+ * 4. unmap_mapping_range() i_mmap_rwsem -> vm_sequence
+ *
+ * So there is no way to solve this easily, especially because in
+ * unmap_mapping_range() the i_mmap_rwsem is grabbed while the impacted
+ * VMAs are not yet known.
+ * However, the way vm_seq is used guarantees that we will
+ * never block on it since we just check for its value and never wait
+ * for it to move, see vma_has_changed() and handle_speculative_fault().
+ */
+ vm_raw_write_begin(vma);
+ if (next)
+ vm_raw_write_begin(next);
+
if (next && !insert) {
struct vm_area_struct *exporter = NULL, *importer = NULL;
@@ -911,6 +935,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
anon_vma_merge(vma, next);
mm->map_count--;
mpol_put(vma_policy(next));
+ vm_raw_write_end(next);
kmem_cache_free(vm_area_cachep, next);
/*
* In mprotect's case 6 (see comments on vma_merge),
@@ -925,6 +950,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
* "vma->vm_next" gap must be updated.
*/
next = vma->vm_next;
+ if (next)
+ vm_raw_write_begin(next);
} else {
/*
* For the scope of the comment "next" and
@@ -971,6 +998,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
if (insert && file)
uprobe_mmap(insert);
+ if (next && next != vma)
+ vm_raw_write_end(next);
+ vm_raw_write_end(vma);
+
validate_mm(mm);
return 0;
--
2.7.4
* [PATCH v11 10/26] mm: protect VMA modifications using VMA sequence count
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (8 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 09/26] mm: VMA sequence count Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-11-05 7:04 ` vinayak menon
2018-05-17 11:06 ` [PATCH v11 11/26] mm: protect mremap() against SPF hanlder Laurent Dufour
` (17 subsequent siblings)
27 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
The VMA sequence count has been introduced to allow fast detection of
VMA modification when running a page fault handler without holding
the mmap_sem.
This patch provides protection against the VMA modifications done in:
- madvise()
- mpol_rebind_policy()
- vma_replace_policy()
- change_prot_numa()
- mlock(), munlock()
- mprotect()
- mmap_region()
- collapse_huge_page()
- userfaultfd registering services
In addition, VMA fields which will be read during the speculative fault
path need to be written using WRITE_ONCE() to prevent writes from being
split and intermediate values from being pushed to other CPUs.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
fs/proc/task_mmu.c | 5 ++++-
fs/userfaultfd.c | 17 +++++++++++++----
mm/khugepaged.c | 3 +++
mm/madvise.c | 6 +++++-
mm/mempolicy.c | 51 ++++++++++++++++++++++++++++++++++-----------------
mm/mlock.c | 13 ++++++++-----
mm/mmap.c | 22 +++++++++++++---------
mm/mprotect.c | 4 +++-
mm/swap_state.c | 8 ++++++--
9 files changed, 89 insertions(+), 40 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 597969db9e90..7247d6d5afba 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1137,8 +1137,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
goto out_mm;
}
for (vma = mm->mmap; vma; vma = vma->vm_next) {
- vma->vm_flags &= ~VM_SOFTDIRTY;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags,
+ vma->vm_flags & ~VM_SOFTDIRTY);
vma_set_page_prot(vma);
+ vm_write_end(vma);
}
downgrade_write(&mm->mmap_sem);
break;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index cec550c8468f..b8212ba17695 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
octx = vma->vm_userfaultfd_ctx.ctx;
if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
+ vm_write_begin(vma);
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
- vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
+ WRITE_ONCE(vma->vm_flags,
+ vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
+ vm_write_end(vma);
return 0;
}
@@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
vma = prev;
else
prev = vma;
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ vm_write_end(vma);
}
up_write(&mm->mmap_sem);
mmput(mm);
@@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
* the next vma was merged into the current one and
* the current one has not been updated yet.
*/
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
vma->vm_userfaultfd_ctx.ctx = ctx;
+ vm_write_end(vma);
skip:
prev = vma;
@@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
* the next vma was merged into the current one and
* the current one has not been updated yet.
*/
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ vm_write_end(vma);
skip:
prev = vma;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d7b2a4bf8671..0b28af4b950d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1011,6 +1011,7 @@ static void collapse_huge_page(struct mm_struct *mm,
if (mm_find_pmd(mm, address) != pmd)
goto out;
+ vm_write_begin(vma);
anon_vma_lock_write(vma->anon_vma);
pte = pte_offset_map(pmd, address);
@@ -1046,6 +1047,7 @@ static void collapse_huge_page(struct mm_struct *mm,
pmd_populate(mm, pmd, pmd_pgtable(_pmd));
spin_unlock(pmd_ptl);
anon_vma_unlock_write(vma->anon_vma);
+ vm_write_end(vma);
result = SCAN_FAIL;
goto out;
}
@@ -1080,6 +1082,7 @@ static void collapse_huge_page(struct mm_struct *mm,
set_pmd_at(mm, address, pmd, _pmd);
update_mmu_cache_pmd(vma, address, pmd);
spin_unlock(pmd_ptl);
+ vm_write_end(vma);
*hpage = NULL;
diff --git a/mm/madvise.c b/mm/madvise.c
index 4d3c922ea1a1..e328f7ab5942 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
/*
* vm_flags is protected by the mmap_sem held in write mode.
*/
- vma->vm_flags = new_flags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, new_flags);
+ vm_write_end(vma);
out:
return error;
}
@@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
.private = tlb,
};
+ vm_write_begin(vma);
tlb_start_vma(tlb, vma);
walk_page_range(addr, end, &free_walk);
tlb_end_vma(tlb, vma);
+ vm_write_end(vma);
}
static int madvise_free_single_vma(struct vm_area_struct *vma,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9ac49ef17b4e..898d325c9fea 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
struct vm_area_struct *vma;
down_write(&mm->mmap_sem);
- for (vma = mm->mmap; vma; vma = vma->vm_next)
+ for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ vm_write_begin(vma);
mpol_rebind_policy(vma->vm_policy, new);
+ vm_write_end(vma);
+ }
up_write(&mm->mmap_sem);
}
@@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
{
int nr_updated;
+ vm_write_begin(vma);
nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
if (nr_updated)
count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
+ vm_write_end(vma);
return nr_updated;
}
@@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
if (IS_ERR(new))
return PTR_ERR(new);
+ vm_write_begin(vma);
if (vma->vm_ops && vma->vm_ops->set_policy) {
err = vma->vm_ops->set_policy(vma, new);
if (err)
@@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
}
old = vma->vm_policy;
- vma->vm_policy = new; /* protected by mmap_sem */
+ /*
+ * The speculative page fault handler accesses this field without
+ * holding the mmap_sem.
+ */
+ WRITE_ONCE(vma->vm_policy, new);
+ vm_write_end(vma);
mpol_put(old);
return 0;
err_out:
+ vm_write_end(vma);
mpol_put(new);
return err;
}
@@ -1614,23 +1626,28 @@ COMPAT_SYSCALL_DEFINE4(migrate_pages, compat_pid_t, pid,
struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
unsigned long addr)
{
- struct mempolicy *pol = NULL;
+ struct mempolicy *pol;
- if (vma) {
- if (vma->vm_ops && vma->vm_ops->get_policy) {
- pol = vma->vm_ops->get_policy(vma, addr);
- } else if (vma->vm_policy) {
- pol = vma->vm_policy;
+ if (!vma)
+ return NULL;
- /*
- * shmem_alloc_page() passes MPOL_F_SHARED policy with
- * a pseudo vma whose vma->vm_ops=NULL. Take a reference
- * count on these policies which will be dropped by
- * mpol_cond_put() later
- */
- if (mpol_needs_cond_ref(pol))
- mpol_get(pol);
- }
+ if (vma->vm_ops && vma->vm_ops->get_policy)
+ return vma->vm_ops->get_policy(vma, addr);
+
+ /*
+ * This could be called without holding the mmap_sem in the
+ * speculative page fault handler's path.
+ */
+ pol = READ_ONCE(vma->vm_policy);
+ if (pol) {
+ /*
+ * shmem_alloc_page() passes MPOL_F_SHARED policy with
+ * a pseudo vma whose vma->vm_ops=NULL. Take a reference
+ * count on these policies which will be dropped by
+ * mpol_cond_put() later
+ */
+ if (mpol_needs_cond_ref(pol))
+ mpol_get(pol);
}
return pol;
diff --git a/mm/mlock.c b/mm/mlock.c
index 74e5a6547c3d..c40285c94ced 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -445,7 +445,9 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
void munlock_vma_pages_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
- vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, vma->vm_flags & VM_LOCKED_CLEAR_MASK);
+ vm_write_end(vma);
while (start < end) {
struct page *page;
@@ -568,10 +570,11 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
* It's okay if try_to_unmap_one unmaps a page just after we
* set VM_LOCKED, populate_vma_page_range will bring it back.
*/
-
- if (lock)
- vma->vm_flags = newflags;
- else
+ if (lock) {
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, newflags);
+ vm_write_end(vma);
+ } else
munlock_vma_pages_range(vma, start, end);
out:
diff --git a/mm/mmap.c b/mm/mmap.c
index eeafd0bc8b36..add13b4e1d8d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -852,17 +852,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
}
if (start != vma->vm_start) {
- vma->vm_start = start;
+ WRITE_ONCE(vma->vm_start, start);
start_changed = true;
}
if (end != vma->vm_end) {
- vma->vm_end = end;
+ WRITE_ONCE(vma->vm_end, end);
end_changed = true;
}
- vma->vm_pgoff = pgoff;
+ WRITE_ONCE(vma->vm_pgoff, pgoff);
if (adjust_next) {
- next->vm_start += adjust_next << PAGE_SHIFT;
- next->vm_pgoff += adjust_next;
+ WRITE_ONCE(next->vm_start,
+ next->vm_start + (adjust_next << PAGE_SHIFT));
+ WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next);
}
if (root) {
@@ -1793,13 +1794,15 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
out:
perf_event_mmap(vma);
+ vm_write_begin(vma);
vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
if (vm_flags & VM_LOCKED) {
if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) ||
vma == get_gate_vma(current->mm)))
mm->locked_vm += (len >> PAGE_SHIFT);
else
- vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+ WRITE_ONCE(vma->vm_flags,
+ vma->vm_flags & VM_LOCKED_CLEAR_MASK);
}
if (file)
@@ -1812,9 +1815,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
* then new mapped in-place (which must be aimed as
* a completely new data area).
*/
- vma->vm_flags |= VM_SOFTDIRTY;
+ WRITE_ONCE(vma->vm_flags, vma->vm_flags | VM_SOFTDIRTY);
vma_set_page_prot(vma);
+ vm_write_end(vma);
return addr;
@@ -2443,8 +2447,8 @@ int expand_downwards(struct vm_area_struct *vma,
mm->locked_vm += grow;
vm_stat_account(mm, vma->vm_flags, grow);
anon_vma_interval_tree_pre_update_vma(vma);
- vma->vm_start = address;
- vma->vm_pgoff -= grow;
+ WRITE_ONCE(vma->vm_start, address);
+ WRITE_ONCE(vma->vm_pgoff, vma->vm_pgoff - grow);
anon_vma_interval_tree_post_update_vma(vma);
vma_gap_update(vma);
spin_unlock(&mm->page_table_lock);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 625608bc8962..83594cc68062 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -375,12 +375,14 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
* vm_flags and vm_page_prot are protected by the mmap_sem
* held in write mode.
*/
- vma->vm_flags = newflags;
+ vm_write_begin(vma);
+ WRITE_ONCE(vma->vm_flags, newflags);
dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
vma_set_page_prot(vma);
change_protection(vma, start, end, vma->vm_page_prot,
dirty_accountable, 0);
+ vm_write_end(vma);
/*
* Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c6b3eab73fde..2ee7198df281 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -572,6 +572,10 @@ static unsigned long swapin_nr_pages(unsigned long offset)
* the readahead.
*
* Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
+ * This is needed to ensure the VMA will not be freed behind our back. In the case
+ * of the speculative page fault handler, this cannot happen, even if we don't
+ * hold the mmap_sem. Callees are assumed to take care of reading VMA's fields
+ * using READ_ONCE() to read consistent values.
*/
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
struct vm_fault *vmf)
@@ -665,9 +669,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
unsigned long *start,
unsigned long *end)
{
- *start = max3(lpfn, PFN_DOWN(vma->vm_start),
+ *start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
PFN_DOWN(faddr & PMD_MASK));
- *end = min3(rpfn, PFN_DOWN(vma->vm_end),
+ *end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
}
--
2.7.4
* Re: [PATCH v11 10/26] mm: protect VMA modifications using VMA sequence count
2018-05-17 11:06 ` [PATCH v11 10/26] mm: protect VMA modifications using " Laurent Dufour
@ 2018-11-05 7:04 ` vinayak menon
0 siblings, 0 replies; 106+ messages in thread
From: vinayak menon @ 2018-11-05 7:04 UTC (permalink / raw)
To: Laurent Dufour
Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
jack, Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, kemi.wang, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, punitagrawal,
yang.shi, linux-kernel, linux-mm, haren, npiggin, Balbir Singh,
Paul McKenney, Tim Chen, linuxppc-dev, x86, Vinayak Menon
Hi Laurent,
On Thu, May 17, 2018 at 4:37 PM Laurent Dufour
<ldufour@linux.vnet.ibm.com> wrote:
>
> The VMA sequence count has been introduced to allow fast detection of
> VMA modification when running a page fault handler without holding
> the mmap_sem.
>
> This patch provides protection against the VMA modification done in :
> - madvise()
> - mpol_rebind_policy()
> - vma_replace_policy()
> - change_prot_numa()
> - mlock(), munlock()
> - mprotect()
> - mmap_region()
> - collapse_huge_page()
> - userfaultfd registering services
>
> In addition, VMA fields which will be read during the speculative fault
> path need to be written using WRITE_ONCE() to prevent writes from being
> split and intermediate values from being pushed to other CPUs.
>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> fs/proc/task_mmu.c | 5 ++++-
> fs/userfaultfd.c | 17 +++++++++++++----
> mm/khugepaged.c | 3 +++
> mm/madvise.c | 6 +++++-
> mm/mempolicy.c | 51 ++++++++++++++++++++++++++++++++++-----------------
> mm/mlock.c | 13 ++++++++-----
> mm/mmap.c | 22 +++++++++++++---------
> mm/mprotect.c | 4 +++-
> mm/swap_state.c | 8 ++++++--
> 9 files changed, 89 insertions(+), 40 deletions(-)
>
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> struct vm_fault *vmf)
> @@ -665,9 +669,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
> unsigned long *start,
> unsigned long *end)
> {
> - *start = max3(lpfn, PFN_DOWN(vma->vm_start),
> + *start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
> PFN_DOWN(faddr & PMD_MASK));
> - *end = min3(rpfn, PFN_DOWN(vma->vm_end),
> + *end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
> PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
> }
>
> --
> 2.7.4
>
I have got a crash on 4.14 kernel with speculative page faults enabled
and here is my analysis of the problem.
The issue was reported only once.
[23409.303395] el1_da+0x24/0x84
[23409.303400] __radix_tree_lookup+0x8/0x90
[23409.303407] find_get_entry+0x64/0x14c
[23409.303410] pagecache_get_page+0x5c/0x27c
[23409.303416] __read_swap_cache_async+0x80/0x260
[23409.303420] swap_vma_readahead+0x264/0x37c
[23409.303423] swapin_readahead+0x5c/0x6c
[23409.303428] do_swap_page+0x128/0x6e4
[23409.303431] handle_pte_fault+0x230/0xca4
[23409.303435] __handle_speculative_fault+0x57c/0x7c8
[23409.303438] do_page_fault+0x228/0x3e8
[23409.303442] do_translation_fault+0x50/0x6c
[23409.303445] do_mem_abort+0x5c/0xe0
[23409.303447] el0_da+0x20/0x24
Process A accesses address ADDR (part of VMA A) and that results in a
translation fault.
Kernel enters __handle_speculative_fault to fix the fault.
Process A enters do_swap_page->swapin_readahead->swap_vma_readahead
from speculative path.
During this time, another process B which shares the same mm, does a
mprotect from another CPU which follows
mprotect_fixup->__split_vma, and it splits VMA A into VMAs A and B.
After the split, ADDR falls into VMA B, but process A is still using
VMA A.
Now ADDR is greater than both VMA_A->vm_start and VMA_A->vm_end.
swap_vma_readahead->swap_ra_info uses start and end of vma to
calculate ptes and nr_pte, which goes wrong due to this and finally
resulting in wrong "entry" passed to
swap_vma_readahead->__read_swap_cache_async, and in turn causing
invalid swapper_space
being passed to __read_swap_cache_async->find_get_page, causing an abort.
The fix I have tried is to cache vm_start and vm_end also in vmf and
use them in swap_ra_clamp_pfn. Let me know your thoughts on this. I can
send the patch I am using if you feel that is the right thing to do.
Thanks,
Vinayak
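[Editor's note: Vinayak's proposed fix can be sketched as follows in plain userspace C. The struct names, fields, and helpers are hypothetical simplifications of struct vm_fault / struct vm_area_struct, meant only to show why a snapshot taken at validation time is immune to a later __split_vma().]

```c
#include <stdbool.h>

/* Hypothetical, heavily simplified structures. */
struct vma_s {
	unsigned long vm_start;
	unsigned long vm_end;
};

struct vm_fault_s {
	struct vma_s *vma;
	unsigned long address;
	/*
	 * Proposed additions: bounds snapshotted when the speculative
	 * handler validated the VMA under its sequence count.
	 */
	unsigned long vma_start;
	unsigned long vma_end;
};

/*
 * Snapshot once, early, while the sequence count guarantees the
 * values are consistent with each other.
 */
static void vmf_cache_bounds(struct vm_fault_s *vmf)
{
	vmf->vma_start = *(volatile unsigned long *)&vmf->vma->vm_start;
	vmf->vma_end   = *(volatile unsigned long *)&vmf->vma->vm_end;
}

/*
 * Later consumers (e.g. swap_ra_clamp_pfn) would use the cached
 * bounds, so a concurrent __split_vma() shrinking the VMA cannot
 * yield a readahead window that excludes the faulting address.
 */
static bool vmf_addr_in_bounds(const struct vm_fault_s *vmf)
{
	return vmf->address >= vmf->vma_start && vmf->address < vmf->vma_end;
}
```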
* Re: [PATCH v11 10/26] mm: protect VMA modifications using VMA sequence count
2018-11-05 7:04 ` vinayak menon
@ 2018-11-05 18:22 ` Laurent Dufour
-1 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-11-05 18:22 UTC (permalink / raw)
To: vinayak menon
Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
jack, Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, kemi.wang, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, punitagrawal,
yang.shi, linux-kernel, linux-mm, haren, npiggin, Balbir Singh,
Paul McKenney, Tim Chen, linuxppc-dev, x86, Vinayak Menon
[-- Attachment #1: Type: text/plain, Size: 4478 bytes --]
Le 05/11/2018 à 08:04, vinayak menon a écrit :
> Hi Laurent,
>
> On Thu, May 17, 2018 at 4:37 PM Laurent Dufour
> <ldufour@linux.vnet.ibm.com> wrote:
>>
>> The VMA sequence count has been introduced to allow fast detection of
>> VMA modification when running a page fault handler without holding
>> the mmap_sem.
>>
>> This patch provides protection against the VMA modification done in :
>> - madvise()
>> - mpol_rebind_policy()
>> - vma_replace_policy()
>> - change_prot_numa()
>> - mlock(), munlock()
>> - mprotect()
>> - mmap_region()
>> - collapse_huge_page()
>> - userfaultfd registering services
>>
>> In addition, VMA fields which will be read during the speculative fault
>> path needs to be written using WRITE_ONCE to prevent write to be split
>> and intermediate values to be pushed to other CPUs.
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>> fs/proc/task_mmu.c | 5 ++++-
>> fs/userfaultfd.c | 17 +++++++++++++----
>> mm/khugepaged.c | 3 +++
>> mm/madvise.c | 6 +++++-
>> mm/mempolicy.c | 51 ++++++++++++++++++++++++++++++++++-----------------
>> mm/mlock.c | 13 ++++++++-----
>> mm/mmap.c | 22 +++++++++++++---------
>> mm/mprotect.c | 4 +++-
>> mm/swap_state.c | 8 ++++++--
>> 9 files changed, 89 insertions(+), 40 deletions(-)
>>
>> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>> struct vm_fault *vmf)
>> @@ -665,9 +669,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
>> unsigned long *start,
>> unsigned long *end)
>> {
>> - *start = max3(lpfn, PFN_DOWN(vma->vm_start),
>> + *start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
>> PFN_DOWN(faddr & PMD_MASK));
>> - *end = min3(rpfn, PFN_DOWN(vma->vm_end),
>> + *end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
>> PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
>> }
>>
>> --
>> 2.7.4
>>
>
> I have got a crash on 4.14 kernel with speculative page faults enabled
> and here is my analysis of the problem.
> The issue was reported only once.
Hi Vinayak,
Thanks for reporting this.
>
> [23409.303395] el1_da+0x24/0x84
> [23409.303400] __radix_tree_lookup+0x8/0x90
> [23409.303407] find_get_entry+0x64/0x14c
> [23409.303410] pagecache_get_page+0x5c/0x27c
> [23409.303416] __read_swap_cache_async+0x80/0x260
> [23409.303420] swap_vma_readahead+0x264/0x37c
> [23409.303423] swapin_readahead+0x5c/0x6c
> [23409.303428] do_swap_page+0x128/0x6e4
> [23409.303431] handle_pte_fault+0x230/0xca4
> [23409.303435] __handle_speculative_fault+0x57c/0x7c8
> [23409.303438] do_page_fault+0x228/0x3e8
> [23409.303442] do_translation_fault+0x50/0x6c
> [23409.303445] do_mem_abort+0x5c/0xe0
> [23409.303447] el0_da+0x20/0x24
>
> Process A accesses address ADDR (part of VMA A) and that results in a
> translation fault.
> Kernel enters __handle_speculative_fault to fix the fault.
> Process A enters do_swap_page->swapin_readahead->swap_vma_readahead
> from speculative path.
> During this time, another process B which shares the same mm, does a
> mprotect from another CPU which follows
> mprotect_fixup->__split_vma, and it splits VMA A into VMAs A and B.
> After the split, ADDR falls into VMA B, but process A is still using
> VMA A.
> Now ADDR is greater than VMA_A->vm_start and VMA_A->vm_end.
> swap_vma_readahead->swap_ra_info uses start and end of vma to
> calculate ptes and nr_pte, which goes wrong due to this and finally
> resulting in wrong "entry" passed to
> swap_vma_readahead->__read_swap_cache_async, and in turn causing
> invalid swapper_space
> being passed to __read_swap_cache_async->find_get_page, causing an abort.
>
> The fix I have tried is to cache vm_start and vm_end also in vmf and
> use them in swap_ra_clamp_pfn. Let me know your thoughts on this. I can
> send the patch I am using if you feel that is the right thing to do.
I think the best would be to not do swap readahead during the
speculative page fault. If the page is found in the swap cache, that's
fine, but otherwise we should fall back to the regular page fault.
The attached -untested- patch does this, if you want to give it a
try. I'll review it for the next series.
Thanks,
Laurent.
[-- Attachment #2: 0001-mm-don-t-do-swap-readahead-during-speculative-page-f.patch --]
[-- Type: text/plain, Size: 1507 bytes --]
From 056afafb0bccea6a356f80f4253ffcd3ef4a1f8d Mon Sep 17 00:00:00 2001
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Date: Mon, 5 Nov 2018 18:43:01 +0100
Subject: [PATCH] mm: don't do swap readahead during speculative page fault
Vinayak Menon faced a panic because one thread was page faulting a page in
swap, while another one was mprotecting a part of the VMA leading to a VMA
split.
This raises a panic in swap_vma_readahead() because the VMA's boundaries
no longer match the faulting address.
To avoid this, if the page is not found in the swap cache, the speculative
page fault is aborted and a regular page fault is retried.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/memory.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index 9dd5ffeb1f7e..720dc9a1b99f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3139,6 +3139,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
lru_cache_add_anon(page);
swap_readpage(page, true);
}
+ } else if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+ /*
+ * Don't try readahead during a speculative page fault as
+	 * the VMA's boundaries may change behind our back.
+ * If the page is not in the swap cache and synchronous read
+ * is disabled, fall back to the regular page fault mechanism.
+ */
+ delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+ ret = VM_FAULT_RETRY;
+ goto out;
} else {
page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
vmf);
--
2.19.1
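[Editor's note: the control flow the patch above creates — attempt a speculative swap-in only when the page is already in the swap cache, otherwise return VM_FAULT_RETRY and redo the fault under mmap_sem — can be sketched like this. All names here are hypothetical stand-ins; the real kernel uses vm_fault_t, FAULT_FLAG_SPECULATIVE, and a swap-cache lookup.]

```c
enum fault_ret { FAULT_DONE, FAULT_RETRY_REGULAR };

static int swap_cache_hit;	/* stand-in for the swap-cache lookup result */

static enum fault_ret handle_speculative(unsigned long addr)
{
	(void)addr;
	/*
	 * A speculative fault must not do VMA-based readahead, since the
	 * VMA's boundaries may change behind our back. If the page is not
	 * already in the swap cache, bail out and let the caller retry
	 * under mmap_sem.
	 */
	if (!swap_cache_hit)
		return FAULT_RETRY_REGULAR;
	return FAULT_DONE;
}

static enum fault_ret handle_regular(unsigned long addr)
{
	(void)addr;
	/* Runs under mmap_sem: readahead and blocking swap-in are safe. */
	return FAULT_DONE;
}

static enum fault_ret do_fault(unsigned long addr)
{
	if (handle_speculative(addr) == FAULT_DONE)
		return FAULT_DONE;
	/* The fallback the patch forces via VM_FAULT_RETRY. */
	return handle_regular(addr);
}
```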
* Re: [PATCH v11 10/26] mm: protect VMA modifications using VMA sequence count
2018-11-05 18:22 ` Laurent Dufour
@ 2018-11-06 9:28 ` Vinayak Menon
-1 siblings, 0 replies; 106+ messages in thread
From: Vinayak Menon @ 2018-11-06 9:28 UTC (permalink / raw)
To: Laurent Dufour, vinayak menon
Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
jack, Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, kemi.wang, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, punitagrawal,
yang.shi, linux-kernel, linux-mm, haren, npiggin, Balbir Singh,
Paul McKenney, Tim Chen, linuxppc-dev, x86
On 11/5/2018 11:52 PM, Laurent Dufour wrote:
> Le 05/11/2018 à 08:04, vinayak menon a écrit :
>> Hi Laurent,
>>
>> On Thu, May 17, 2018 at 4:37 PM Laurent Dufour
>> <ldufour@linux.vnet.ibm.com> wrote:
>>>
>>> The VMA sequence count has been introduced to allow fast detection of
>>> VMA modification when running a page fault handler without holding
>>> the mmap_sem.
>>>
>>> This patch provides protection against the VMA modification done in :
>>> - madvise()
>>> - mpol_rebind_policy()
>>> - vma_replace_policy()
>>> - change_prot_numa()
>>> - mlock(), munlock()
>>> - mprotect()
>>> - mmap_region()
>>> - collapse_huge_page()
>>> - userfaultfd registering services
>>>
>>> In addition, VMA fields which will be read during the speculative fault
>>> path needs to be written using WRITE_ONCE to prevent write to be split
>>> and intermediate values to be pushed to other CPUs.
>>>
>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>> ---
>>> fs/proc/task_mmu.c | 5 ++++-
>>> fs/userfaultfd.c | 17 +++++++++++++----
>>> mm/khugepaged.c | 3 +++
>>> mm/madvise.c | 6 +++++-
>>> mm/mempolicy.c | 51 ++++++++++++++++++++++++++++++++++-----------------
>>> mm/mlock.c | 13 ++++++++-----
>>> mm/mmap.c | 22 +++++++++++++---------
>>> mm/mprotect.c | 4 +++-
>>> mm/swap_state.c | 8 ++++++--
>>> 9 files changed, 89 insertions(+), 40 deletions(-)
>>>
>>> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>>> struct vm_fault *vmf)
>>> @@ -665,9 +669,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
>>> unsigned long *start,
>>> unsigned long *end)
>>> {
>>> - *start = max3(lpfn, PFN_DOWN(vma->vm_start),
>>> + *start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
>>> PFN_DOWN(faddr & PMD_MASK));
>>> - *end = min3(rpfn, PFN_DOWN(vma->vm_end),
>>> + *end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
>>> PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
>>> }
>>>
>>> --
>>> 2.7.4
>>>
>>
>> I have got a crash on 4.14 kernel with speculative page faults enabled
>> and here is my analysis of the problem.
>> The issue was reported only once.
>
> Hi Vinayak,
>
> Thanks for reporting this.
>
>>
>> [23409.303395] el1_da+0x24/0x84
>> [23409.303400] __radix_tree_lookup+0x8/0x90
>> [23409.303407] find_get_entry+0x64/0x14c
>> [23409.303410] pagecache_get_page+0x5c/0x27c
>> [23409.303416] __read_swap_cache_async+0x80/0x260
>> [23409.303420] swap_vma_readahead+0x264/0x37c
>> [23409.303423] swapin_readahead+0x5c/0x6c
>> [23409.303428] do_swap_page+0x128/0x6e4
>> [23409.303431] handle_pte_fault+0x230/0xca4
>> [23409.303435] __handle_speculative_fault+0x57c/0x7c8
>> [23409.303438] do_page_fault+0x228/0x3e8
>> [23409.303442] do_translation_fault+0x50/0x6c
>> [23409.303445] do_mem_abort+0x5c/0xe0
>> [23409.303447] el0_da+0x20/0x24
>>
>> Process A accesses address ADDR (part of VMA A) and that results in a
>> translation fault.
>> Kernel enters __handle_speculative_fault to fix the fault.
>> Process A enters do_swap_page->swapin_readahead->swap_vma_readahead
>> from speculative path.
>> During this time, another process B which shares the same mm, does a
>> mprotect from another CPU which follows
>> mprotect_fixup->__split_vma, and it splits VMA A into VMAs A and B.
>> After the split, ADDR falls into VMA B, but process A is still using
>> VMA A.
>> Now ADDR is greater than VMA_A->vm_start and VMA_A->vm_end.
>> swap_vma_readahead->swap_ra_info uses start and end of vma to
>> calculate ptes and nr_pte, which goes wrong due to this and finally
>> resulting in wrong "entry" passed to
>> swap_vma_readahead->__read_swap_cache_async, and in turn causing
>> invalid swapper_space
>> being passed to __read_swap_cache_async->find_get_page, causing an abort.
>>
>> The fix I have tried is to cache vm_start and vm_end also in vmf and
>> use them in swap_ra_clamp_pfn. Let me know your thoughts on this. I can
>> send the patch I am using if you feel that is the right thing to do.
>
> I think the best would be to not do swap readahead during the speculative page fault. If the page is found in the swap cache, that's fine, but otherwise we should fall back to the regular page fault.
>
> The attached -untested- patch does this, if you want to give it a try. I'll review it for the next series.
>
Thanks Laurent. I am going to try this patch.
With this patch, since all non-SWP_SYNCHRONOUS_IO swapins result in a non-speculative fault
and a retry, wouldn't this have an impact on some perf numbers ? If so, would caching start
and end be a better option ?
Also, would it make sense to move the FAULT_FLAG_SPECULATIVE check inside swapin_readahead,
so that swap_cluster_readahead can take the speculative path ? swap_cluster_readahead
doesn't seem to use vma values.
Thanks,
Vinayak
^ permalink raw reply [flat|nested] 106+ messages in thread
* [PATCH v11 11/26] mm: protect mremap() against SPF handler
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (9 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 10/26] mm: protect VMA modifications using " Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 12/26] mm: protect SPF handler against anon_vma changes Laurent Dufour
` (16 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
If a thread is remapping an area while another one is faulting on the
destination area, the SPF handler may fetch the vma from the RB tree before
the pte has been moved by the other thread. This means that the moved ptes
will overwrite those created by the page fault handler, leading to leaked
pages.
CPU 1                                CPU 2
enter mremap()
unmap the dest area
copy_vma()                           Enter speculative page fault handler
  >> at this time the dest area is present in the RB tree
                                     fetch the vma matching dest area
                                     create a pte as the VMA matched
                                     Exit the SPF handler
                                     <data written in the new page>
move_ptes()
  > it is assumed that the dest area is empty,
  > the moved ptes overwrite the page mapped by CPU 2.
To prevent that, when the VMA matching the dest area is extended or created
by copy_vma(), it should be marked as not available to the SPF handler.
The usual way to do so is to rely on vm_write_begin()/end().
This is already done in __vma_adjust(), called by copy_vma() (through
vma_merge()). But __vma_adjust() calls vm_write_end() before returning,
which creates a window for another thread.
This patch adds a new parameter to vma_merge() which is passed down to
__vma_adjust().
The assumption is that copy_vma() returns a vma which should be
released by the caller through vm_raw_write_end() once the ptes have
been moved.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 24 +++++++++++++++++++-----
mm/mmap.c | 53 +++++++++++++++++++++++++++++++++++++++++------------
mm/mremap.c | 13 +++++++++++++
3 files changed, 73 insertions(+), 17 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 18acfdeee759..3f8b2ce0ef7c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2253,18 +2253,32 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
/* mmap.c */
extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
+
extern int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
- struct vm_area_struct *expand);
+ struct vm_area_struct *expand, bool keep_locked);
+
static inline int vma_adjust(struct vm_area_struct *vma, unsigned long start,
unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert)
{
- return __vma_adjust(vma, start, end, pgoff, insert, NULL);
+ return __vma_adjust(vma, start, end, pgoff, insert, NULL, false);
}
-extern struct vm_area_struct *vma_merge(struct mm_struct *,
+
+extern struct vm_area_struct *__vma_merge(struct mm_struct *mm,
+ struct vm_area_struct *prev, unsigned long addr, unsigned long end,
+ unsigned long vm_flags, struct anon_vma *anon, struct file *file,
+ pgoff_t pgoff, struct mempolicy *mpol,
+ struct vm_userfaultfd_ctx uff, bool keep_locked);
+
+static inline struct vm_area_struct *vma_merge(struct mm_struct *mm,
struct vm_area_struct *prev, unsigned long addr, unsigned long end,
- unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
- struct mempolicy *, struct vm_userfaultfd_ctx);
+ unsigned long vm_flags, struct anon_vma *anon, struct file *file,
+ pgoff_t off, struct mempolicy *pol, struct vm_userfaultfd_ctx uff)
+{
+ return __vma_merge(mm, prev, addr, end, vm_flags, anon, file, off,
+ pol, uff, false);
+}
+
extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
extern int __split_vma(struct mm_struct *, struct vm_area_struct *,
unsigned long addr, int new_below);
diff --git a/mm/mmap.c b/mm/mmap.c
index add13b4e1d8d..2450860e3f8e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -689,7 +689,7 @@ static inline void __vma_unlink_prev(struct mm_struct *mm,
*/
int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
- struct vm_area_struct *expand)
+ struct vm_area_struct *expand, bool keep_locked)
{
struct mm_struct *mm = vma->vm_mm;
struct vm_area_struct *next = vma->vm_next, *orig_vma = vma;
@@ -805,8 +805,12 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
importer->anon_vma = exporter->anon_vma;
error = anon_vma_clone(importer, exporter);
- if (error)
+ if (error) {
+ if (next && next != vma)
+ vm_raw_write_end(next);
+ vm_raw_write_end(vma);
return error;
+ }
}
}
again:
@@ -1001,7 +1005,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
if (next && next != vma)
vm_raw_write_end(next);
- vm_raw_write_end(vma);
+ if (!keep_locked)
+ vm_raw_write_end(vma);
validate_mm(mm);
@@ -1137,12 +1142,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
* parameter) may establish ptes with the wrong permissions of NNNN
* instead of the right permissions of XXXX.
*/
-struct vm_area_struct *vma_merge(struct mm_struct *mm,
+struct vm_area_struct *__vma_merge(struct mm_struct *mm,
struct vm_area_struct *prev, unsigned long addr,
unsigned long end, unsigned long vm_flags,
struct anon_vma *anon_vma, struct file *file,
pgoff_t pgoff, struct mempolicy *policy,
- struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+ struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+ bool keep_locked)
{
pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
struct vm_area_struct *area, *next;
@@ -1190,10 +1196,11 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
/* cases 1, 6 */
err = __vma_adjust(prev, prev->vm_start,
next->vm_end, prev->vm_pgoff, NULL,
- prev);
+ prev, keep_locked);
} else /* cases 2, 5, 7 */
err = __vma_adjust(prev, prev->vm_start,
- end, prev->vm_pgoff, NULL, prev);
+ end, prev->vm_pgoff, NULL, prev,
+ keep_locked);
if (err)
return NULL;
khugepaged_enter_vma_merge(prev, vm_flags);
@@ -1210,10 +1217,12 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
vm_userfaultfd_ctx)) {
if (prev && addr < prev->vm_end) /* case 4 */
err = __vma_adjust(prev, prev->vm_start,
- addr, prev->vm_pgoff, NULL, next);
+ addr, prev->vm_pgoff, NULL, next,
+ keep_locked);
else { /* cases 3, 8 */
err = __vma_adjust(area, addr, next->vm_end,
- next->vm_pgoff - pglen, NULL, next);
+ next->vm_pgoff - pglen, NULL, next,
+ keep_locked);
/*
* In case 3 area is already equal to next and
* this is a noop, but in case 8 "area" has
@@ -3184,9 +3193,20 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent))
return NULL; /* should never get here */
- new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
- vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx);
+
+ /* There is 3 cases to manage here in
+ * AAAA AAAA AAAA AAAA
+ * PPPP.... PPPP......NNNN PPPP....NNNN PP........NN
+ * PPPPPPPP(A) PPPP..NNNNNNNN(B) PPPPPPPPPPPP(1) NULL
+ * PPPPPPPPNNNN(2)
+ * PPPPNNNNNNNN(3)
+ *
+ * new_vma == prev in case A,1,2
+ * new_vma == next in case B,3
+ */
+ new_vma = __vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
+ vma->anon_vma, vma->vm_file, pgoff,
+ vma_policy(vma), vma->vm_userfaultfd_ctx, true);
if (new_vma) {
/*
* Source vma may have been merged into new_vma
@@ -3226,6 +3246,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
get_file(new_vma->vm_file);
if (new_vma->vm_ops && new_vma->vm_ops->open)
new_vma->vm_ops->open(new_vma);
+ /*
+ * As the VMA is linked right now, it may be hit by the
+ * speculative page fault handler. But we don't want it to
+ * to start mapping page in this area until the caller has
+ * potentially move the pte from the moved VMA. To prevent
+ * that we protect it right now, and let the caller unprotect
+ * it once the move is done.
+ */
+ vm_raw_write_begin(new_vma);
vma_link(mm, new_vma, prev, rb_link, rb_parent);
*need_rmap_locks = false;
}
diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..8ed1a1d6eaed 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -302,6 +302,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
if (!new_vma)
return -ENOMEM;
+ /* new_vma is returned protected by copy_vma, to prevent speculative
+ * page fault to be done in the destination area before we move the pte.
+ * Now, we must also protect the source VMA since we don't want pages
+ * to be mapped in our back while we are copying the PTEs.
+ */
+ if (vma != new_vma)
+ vm_raw_write_begin(vma);
+
moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len,
need_rmap_locks);
if (moved_len < old_len) {
@@ -318,6 +326,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
*/
move_page_tables(new_vma, new_addr, vma, old_addr, moved_len,
true);
+ if (vma != new_vma)
+ vm_raw_write_end(vma);
vma = new_vma;
old_len = new_len;
old_addr = new_addr;
@@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
mremap_userfaultfd_prep(new_vma, uf);
arch_remap(mm, old_addr, old_addr + old_len,
new_addr, new_addr + new_len);
+ if (vma != new_vma)
+ vm_raw_write_end(vma);
}
+ vm_raw_write_end(new_vma);
/* Conceal VM_ACCOUNT so old reservation is not undone */
if (vm_flags & VM_ACCOUNT) {
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 12/26] mm: protect SPF handler against anon_vma changes
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (10 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 11/26] mm: protect mremap() against SPF handler Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 13/26] mm: cache some VMA fields in the vm_fault structure Laurent Dufour
` (15 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
The speculative page fault handler must be protected against anon_vma
changes. This is because page_add_new_anon_rmap() is called during the
speculative path.
In addition, don't try a speculative page fault if the VMA doesn't have
an anon_vma structure allocated, because its allocation should be
protected by the mmap_sem.
In __vma_adjust(), when importer->anon_vma is set, there is no need to
protect against speculative page faults, since a speculative page fault
is aborted if vma->anon_vma is not set.
When calling page_add_new_anon_rmap(), vma->anon_vma is necessarily
valid since we checked for it when locking the pte, and the anon_vma is
only removed once the pte is unlocked. So even if the speculative page
fault handler runs concurrently with do_unmap(), the pte is locked in
unmap_region() - through unmap_vmas() - and the anon_vma is unlinked
later; because the vma sequence counter is updated in unmap_page_range()
before the pte is locked, and again in free_pgtables(), the change will
be detected when locking the pte.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/memory.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index 551a1916da5d..d0b5f14cfe69 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -624,7 +624,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
* Hide vma from rmap and truncate_pagecache before freeing
* pgtables
*/
+ vm_write_begin(vma);
unlink_anon_vmas(vma);
+ vm_write_end(vma);
unlink_file_vma(vma);
if (is_vm_hugetlb_page(vma)) {
@@ -638,7 +640,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
&& !is_vm_hugetlb_page(next)) {
vma = next;
next = vma->vm_next;
+ vm_write_begin(vma);
unlink_anon_vmas(vma);
+ vm_write_end(vma);
unlink_file_vma(vma);
}
free_pgd_range(tlb, addr, vma->vm_end,
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 13/26] mm: cache some VMA fields in the vm_fault structure
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (11 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 12/26] mm: protect SPF handler against anon_vma changes Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 14/26] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
` (14 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
When handling a speculative page fault, the vma->vm_flags and
vma->vm_page_prot fields are read once the page table lock is released,
so there is no longer any guarantee that these fields will not change
behind our back. They are saved in the vm_fault structure before the VMA
is checked for changes.
In detail: when we deal with a speculative page fault, the mmap_sem is
not taken, so parallel VMA changes can occur. When a VMA change is done
which will impact the page fault processing, we assume that the VMA
sequence counter will be changed. In the page fault processing, at the
time the PTE is locked, we check the VMA sequence counter to detect
changes done behind our back. If no change is detected we can continue
further. But this doesn't prevent the VMA from being changed behind our
back while the PTE is locked. So VMA fields which are used while the PTE
is locked must be saved to ensure that we are using *static* values.
This is important since the PTE changes will be made with regard to
these VMA fields and they need to be consistent. This concerns the
vma->vm_flags and vma->vm_page_prot fields.
This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin() even if they are not needed by the callee.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 10 ++++++++--
mm/huge_memory.c | 6 +++---
mm/hugetlb.c | 2 ++
mm/khugepaged.c | 2 ++
mm/memory.c | 50 ++++++++++++++++++++++++++------------------------
mm/migrate.c | 2 +-
6 files changed, 42 insertions(+), 30 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3f8b2ce0ef7c..f385d721867d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -373,6 +373,12 @@ struct vm_fault {
* page table to avoid allocation from
* atomic context.
*/
+ /*
+ * These entries are required when handling speculative page fault.
+ * This way the page handling is done using consistent field values.
+ */
+ unsigned long vma_flags;
+ pgprot_t vma_page_prot;
};
/* page entry size for vm->huge_fault() */
@@ -693,9 +699,9 @@ void free_compound_page(struct page *page);
* pte_mkwrite. But get_user_pages can cause write faults for mappings
* that do not have writing enabled, when used by access_process_vm.
*/
-static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
{
- if (likely(vma->vm_flags & VM_WRITE))
+ if (likely(vma_flags & VM_WRITE))
pte = pte_mkwrite(pte);
return pte;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 323acdd14e6e..6bf5420cc62e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1194,8 +1194,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
pte_t entry;
- entry = mk_pte(pages[i], vma->vm_page_prot);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = mk_pte(pages[i], vmf->vma_page_prot);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
memcg = (void *)page_private(pages[i]);
set_page_private(pages[i], 0);
page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -2168,7 +2168,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
entry = pte_swp_mksoft_dirty(entry);
} else {
entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
- entry = maybe_mkwrite(entry, vma);
+ entry = maybe_mkwrite(entry, vma->vm_flags);
if (!write)
entry = pte_wrprotect(entry);
if (!young)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 129088710510..d7764b6568f5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3718,6 +3718,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
.vma = vma,
.address = address,
.flags = flags,
+ .vma_flags = vma->vm_flags,
+ .vma_page_prot = vma->vm_page_prot,
/*
* Hard to debug if it ends up being
* used by a callee that assumes
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0b28af4b950d..2b02a9f9589e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -887,6 +887,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
.flags = FAULT_FLAG_ALLOW_RETRY,
.pmd = pmd,
.pgoff = linear_page_index(vma, address),
+ .vma_flags = vma->vm_flags,
+ .vma_page_prot = vma->vm_page_prot,
};
/* we only decide to swapin, if there is enough young ptes */
diff --git a/mm/memory.c b/mm/memory.c
index d0b5f14cfe69..9dc455ae550c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1822,7 +1822,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
out_mkwrite:
if (mkwrite) {
entry = pte_mkyoung(entry);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
}
set_pte_at(mm, addr, pte, entry);
@@ -2482,7 +2482,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
entry = pte_mkyoung(vmf->orig_pte);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
update_mmu_cache(vma, vmf->address, vmf->pte);
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2558,8 +2558,8 @@ static int wp_page_copy(struct vm_fault *vmf)
inc_mm_counter_fast(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
- entry = mk_pte(new_page, vma->vm_page_prot);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = mk_pte(new_page, vmf->vma_page_prot);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
/*
* Clear the pte entry and flush it first, before updating the
* pte with the new entry. This will avoid a race condition
@@ -2624,7 +2624,7 @@ static int wp_page_copy(struct vm_fault *vmf)
* Don't let another task, with possibly unlocked vma,
* keep the mlocked page.
*/
- if (page_copied && (vma->vm_flags & VM_LOCKED)) {
+ if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
lock_page(old_page); /* LRU manipulation */
if (PageMlocked(old_page))
munlock_vma_page(old_page);
@@ -2660,7 +2660,7 @@ static int wp_page_copy(struct vm_fault *vmf)
*/
int finish_mkwrite_fault(struct vm_fault *vmf)
{
- WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
+ WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
if (!pte_map_lock(vmf))
return VM_FAULT_RETRY;
/*
@@ -2762,7 +2762,7 @@ static int do_wp_page(struct vm_fault *vmf)
* We should not cow pages in a shared writeable mapping.
* Just mark the pages writable and/or call ops->pfn_mkwrite.
*/
- if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+ if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
(VM_WRITE|VM_SHARED))
return wp_pfn_shared(vmf);
@@ -2809,7 +2809,7 @@ static int do_wp_page(struct vm_fault *vmf)
return VM_FAULT_WRITE;
}
unlock_page(vmf->page);
- } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+ } else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
(VM_WRITE|VM_SHARED))) {
return wp_page_shared(vmf);
}
@@ -3088,9 +3088,9 @@ int do_swap_page(struct vm_fault *vmf)
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
- pte = mk_pte(page, vma->vm_page_prot);
+ pte = mk_pte(page, vmf->vma_page_prot);
if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
- pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+ pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
exclusive = RMAP_EXCLUSIVE;
@@ -3115,7 +3115,7 @@ int do_swap_page(struct vm_fault *vmf)
swap_free(entry);
if (mem_cgroup_swap_full(page) ||
- (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
+ (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
try_to_free_swap(page);
unlock_page(page);
if (page != swapcache && swapcache) {
@@ -3173,7 +3173,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
pte_t entry;
/* File mapping without ->vm_ops ? */
- if (vma->vm_flags & VM_SHARED)
+ if (vmf->vma_flags & VM_SHARED)
return VM_FAULT_SIGBUS;
/*
@@ -3197,7 +3197,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
if (!(vmf->flags & FAULT_FLAG_WRITE) &&
!mm_forbids_zeropage(vma->vm_mm)) {
entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
- vma->vm_page_prot));
+ vmf->vma_page_prot));
if (!pte_map_lock(vmf))
return VM_FAULT_RETRY;
if (!pte_none(*vmf->pte))
@@ -3230,8 +3230,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
*/
__SetPageUptodate(page);
- entry = mk_pte(page, vma->vm_page_prot);
- if (vma->vm_flags & VM_WRITE)
+ entry = mk_pte(page, vmf->vma_page_prot);
+ if (vmf->vma_flags & VM_WRITE)
entry = pte_mkwrite(pte_mkdirty(entry));
if (!pte_map_lock(vmf)) {
@@ -3428,7 +3428,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
for (i = 0; i < HPAGE_PMD_NR; i++)
flush_icache_page(vma, page + i);
- entry = mk_huge_pmd(page, vma->vm_page_prot);
+ entry = mk_huge_pmd(page, vmf->vma_page_prot);
if (write)
entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
@@ -3502,11 +3502,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
return VM_FAULT_NOPAGE;
flush_icache_page(vma, page);
- entry = mk_pte(page, vma->vm_page_prot);
+ entry = mk_pte(page, vmf->vma_page_prot);
if (write)
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
/* copy-on-write page */
- if (write && !(vma->vm_flags & VM_SHARED)) {
+ if (write && !(vmf->vma_flags & VM_SHARED)) {
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
@@ -3545,7 +3545,7 @@ int finish_fault(struct vm_fault *vmf)
/* Did we COW the page? */
if ((vmf->flags & FAULT_FLAG_WRITE) &&
- !(vmf->vma->vm_flags & VM_SHARED))
+ !(vmf->vma_flags & VM_SHARED))
page = vmf->cow_page;
else
page = vmf->page;
@@ -3799,7 +3799,7 @@ static int do_fault(struct vm_fault *vmf)
ret = VM_FAULT_SIGBUS;
else if (!(vmf->flags & FAULT_FLAG_WRITE))
ret = do_read_fault(vmf);
- else if (!(vma->vm_flags & VM_SHARED))
+ else if (!(vmf->vma_flags & VM_SHARED))
ret = do_cow_fault(vmf);
else
ret = do_shared_fault(vmf);
@@ -3856,7 +3856,7 @@ static int do_numa_page(struct vm_fault *vmf)
* accessible ptes, some can allow access by kernel mode.
*/
pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
- pte = pte_modify(pte, vma->vm_page_prot);
+ pte = pte_modify(pte, vmf->vma_page_prot);
pte = pte_mkyoung(pte);
if (was_writable)
pte = pte_mkwrite(pte);
@@ -3890,7 +3890,7 @@ static int do_numa_page(struct vm_fault *vmf)
* Flag if the page is shared between multiple address spaces. This
* is later used when determining whether to group tasks together
*/
- if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+ if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
flags |= TNF_SHARED;
last_cpupid = page_cpupid_last(page);
@@ -3935,7 +3935,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
/* COW handled on pte level: split pmd */
- VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
+ VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
return VM_FAULT_FALLBACK;
@@ -4082,6 +4082,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
.flags = flags,
.pgoff = linear_page_index(vma, address),
.gfp_mask = __get_fault_gfp_mask(vma),
+ .vma_flags = vma->vm_flags,
+ .vma_page_prot = vma->vm_page_prot,
};
unsigned int dirty = flags & FAULT_FLAG_WRITE;
struct mm_struct *mm = vma->vm_mm;
diff --git a/mm/migrate.c b/mm/migrate.c
index 8c0af0f7cab1..ae3d0faf72cb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
*/
entry = pte_to_swp_entry(*pvmw.pte);
if (is_write_migration_entry(entry))
- pte = maybe_mkwrite(pte, vma);
+ pte = maybe_mkwrite(pte, vma->vm_flags);
if (unlikely(is_zone_device_page(new))) {
if (is_device_private_page(new)) {
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 14/26] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (12 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 13/26] mm: cache some VMA fields in the vm_fault structure Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 15/26] mm: introduce __lru_cache_add_active_or_unevictable Laurent Dufour
` (13 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
migrate_misplaced_page() is only called during page fault handling, so
it is better to pass a pointer to the struct vm_fault instead of the vma.
This way, the speculative page fault path can use the vma->vm_flags value
saved in the vm_fault structure.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/migrate.h | 4 ++--
mm/memory.c | 2 +-
mm/migrate.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f2b4abbca55e..fd4c3ab7bd9c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -126,14 +126,14 @@ static inline void __ClearPageMovable(struct page *page)
#ifdef CONFIG_NUMA_BALANCING
extern bool pmd_trans_migrating(pmd_t pmd);
extern int migrate_misplaced_page(struct page *page,
- struct vm_area_struct *vma, int node);
+ struct vm_fault *vmf, int node);
#else
static inline bool pmd_trans_migrating(pmd_t pmd)
{
return false;
}
static inline int migrate_misplaced_page(struct page *page,
- struct vm_area_struct *vma, int node)
+ struct vm_fault *vmf, int node)
{
return -EAGAIN; /* can't migrate now */
}
diff --git a/mm/memory.c b/mm/memory.c
index 9dc455ae550c..cb6310b74cfb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3904,7 +3904,7 @@ static int do_numa_page(struct vm_fault *vmf)
}
/* Migrate to the requested node */
- migrated = migrate_misplaced_page(page, vma, target_nid);
+ migrated = migrate_misplaced_page(page, vmf, target_nid);
if (migrated) {
page_nid = target_nid;
flags |= TNF_MIGRATED;
diff --git a/mm/migrate.c b/mm/migrate.c
index ae3d0faf72cb..884c57a16b7a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1945,7 +1945,7 @@ bool pmd_trans_migrating(pmd_t pmd)
* node. Caller is expected to have an elevated reference count on
* the page that will be dropped by this function before returning.
*/
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+int migrate_misplaced_page(struct page *page, struct vm_fault *vmf,
int node)
{
pg_data_t *pgdat = NODE_DATA(node);
@@ -1958,7 +1958,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
* with execute permissions as they are probably shared libraries.
*/
if (page_mapcount(page) != 1 && page_is_file_cache(page) &&
- (vma->vm_flags & VM_EXEC))
+ (vmf->vma_flags & VM_EXEC))
goto out;
/*
--
2.7.4
* [PATCH v11 15/26] mm: introduce __lru_cache_add_active_or_unevictable
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (13 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 14/26] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 16/26] mm: introduce __vm_normal_page() Laurent Dufour
` (12 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
The speculative page fault handler, which runs without holding the
mmap_sem, calls lru_cache_add_active_or_unevictable(), but without the
mmap_sem the vma->vm_flags value is not guaranteed to remain constant.
Introduce __lru_cache_add_active_or_unevictable(), which takes the vma
flags value as a parameter instead of the vma pointer.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/swap.h | 10 ++++++++--
mm/memory.c | 8 ++++----
mm/swap.c | 6 +++---
3 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f73eafcaf4e9..730c14738574 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -338,8 +338,14 @@ extern void deactivate_file_page(struct page *page);
extern void mark_page_lazyfree(struct page *page);
extern void swap_setup(void);
-extern void lru_cache_add_active_or_unevictable(struct page *page,
- struct vm_area_struct *vma);
+extern void __lru_cache_add_active_or_unevictable(struct page *page,
+ unsigned long vma_flags);
+
+static inline void lru_cache_add_active_or_unevictable(struct page *page,
+ struct vm_area_struct *vma)
+{
+ return __lru_cache_add_active_or_unevictable(page, vma->vm_flags);
+}
/* linux/mm/vmscan.c */
extern unsigned long zone_reclaimable_pages(struct zone *zone);
diff --git a/mm/memory.c b/mm/memory.c
index cb6310b74cfb..deac7f12d777 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2569,7 +2569,7 @@ static int wp_page_copy(struct vm_fault *vmf)
ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
page_add_new_anon_rmap(new_page, vma, vmf->address, false);
mem_cgroup_commit_charge(new_page, memcg, false, false);
- lru_cache_add_active_or_unevictable(new_page, vma);
+ __lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
/*
* We call the notify macro here because, when using secondary
* mmu page tables (such as kvm shadow page tables), we want the
@@ -3105,7 +3105,7 @@ int do_swap_page(struct vm_fault *vmf)
if (unlikely(page != swapcache && swapcache)) {
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
- lru_cache_add_active_or_unevictable(page, vma);
+ __lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
mem_cgroup_commit_charge(page, memcg, true, false);
@@ -3256,7 +3256,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
- lru_cache_add_active_or_unevictable(page, vma);
+ __lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
setpte:
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
@@ -3510,7 +3510,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
- lru_cache_add_active_or_unevictable(page, vma);
+ __lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
page_add_file_rmap(page, false);
diff --git a/mm/swap.c b/mm/swap.c
index 26fc9b5f1b6c..ba97d437e68a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -456,12 +456,12 @@ void lru_cache_add(struct page *page)
* directly back onto it's zone's unevictable list, it does NOT use a
* per cpu pagevec.
*/
-void lru_cache_add_active_or_unevictable(struct page *page,
- struct vm_area_struct *vma)
+void __lru_cache_add_active_or_unevictable(struct page *page,
+ unsigned long vma_flags)
{
VM_BUG_ON_PAGE(PageLRU(page), page);
- if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
+ if (likely((vma_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
SetPageActive(page);
else if (!TestSetPageMlocked(page)) {
/*
--
2.7.4
* [PATCH v11 16/26] mm: introduce __vm_normal_page()
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (14 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 15/26] mm: introduce __lru_cache_add_active_or_unevictable Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 17/26] mm: introduce __page_add_new_anon_rmap() Laurent Dufour
` (11 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
In the speculative fault path we should use the VMA field values cached
in the vm_fault structure.
Currently vm_normal_page() uses the VMA pointer to fetch the vm_flags
value. This patch provides a new __vm_normal_page() which receives the
vm_flags value as a parameter.
Note: the speculative path is only enabled on architectures supporting
the special PTE flag, so only the first block of __vm_normal_page() is
used on that path.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 18 +++++++++++++++---
mm/memory.c | 21 ++++++++++++---------
2 files changed, 27 insertions(+), 12 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f385d721867d..bcebec117d4d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1317,9 +1317,21 @@ static inline void INIT_VMA(struct vm_area_struct *vma)
#endif
}
-struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
- pte_t pte, bool with_public_device);
-#define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+ pte_t pte, bool with_public_device,
+ unsigned long vma_flags);
+static inline struct page *_vm_normal_page(struct vm_area_struct *vma,
+ unsigned long addr, pte_t pte,
+ bool with_public_device)
+{
+ return __vm_normal_page(vma, addr, pte, with_public_device,
+ vma->vm_flags);
+}
+static inline struct page *vm_normal_page(struct vm_area_struct *vma,
+ unsigned long addr, pte_t pte)
+{
+ return _vm_normal_page(vma, addr, pte, false);
+}
struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t pmd);
diff --git a/mm/memory.c b/mm/memory.c
index deac7f12d777..cc4e6221ee7b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -780,7 +780,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
}
/*
- * vm_normal_page -- This function gets the "struct page" associated with a pte.
+ * __vm_normal_page -- This function gets the "struct page" associated with
+ * a pte.
*
* "Special" mappings do not wish to be associated with a "struct page" (either
* it doesn't exist, or it exists but they don't want to touch it). In this
@@ -821,8 +822,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
* PFNMAP mappings in order to support COWable mappings.
*
*/
-struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
- pte_t pte, bool with_public_device)
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+ pte_t pte, bool with_public_device,
+ unsigned long vma_flags)
{
unsigned long pfn = pte_pfn(pte);
@@ -831,7 +833,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
goto check_pfn;
if (vma->vm_ops && vma->vm_ops->find_special_page)
return vma->vm_ops->find_special_page(vma, addr);
- if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+ if (vma_flags & (VM_PFNMAP | VM_MIXEDMAP))
return NULL;
if (is_zero_pfn(pfn))
return NULL;
@@ -863,8 +865,8 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
/* !CONFIG_ARCH_HAS_PTE_SPECIAL case follows: */
- if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
- if (vma->vm_flags & VM_MIXEDMAP) {
+ if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+ if (vma_flags & VM_MIXEDMAP) {
if (!pfn_valid(pfn))
return NULL;
goto out;
@@ -873,7 +875,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
off = (addr - vma->vm_start) >> PAGE_SHIFT;
if (pfn == vma->vm_pgoff + off)
return NULL;
- if (!is_cow_mapping(vma->vm_flags))
+ if (!is_cow_mapping(vma_flags))
return NULL;
}
}
@@ -2753,7 +2755,8 @@ static int do_wp_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
+ vmf->page = __vm_normal_page(vma, vmf->address, vmf->orig_pte, false,
+ vmf->vma_flags);
if (!vmf->page) {
/*
* VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a
@@ -3863,7 +3866,7 @@ static int do_numa_page(struct vm_fault *vmf)
ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte);
update_mmu_cache(vma, vmf->address, vmf->pte);
- page = vm_normal_page(vma, vmf->address, pte);
+ page = __vm_normal_page(vma, vmf->address, pte, false, vmf->vma_flags);
if (!page) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
return 0;
--
2.7.4
* [PATCH v11 17/26] mm: introduce __page_add_new_anon_rmap()
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (15 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 16/26] mm: introduce __vm_normal_page() Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 18/26] mm: protect mm_rb tree with a rwlock Laurent Dufour
` (10 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
In the speculative page fault handler, we may race with a VMA being
split or merged. In that case the vma->vm_start and vma->vm_end
fields may not match the address at which the page fault occurred.
This can only happen when the VMA is split, but in that case the
anon_vma pointer of the new VMA is the same as the original one,
because in __split_vma() new->anon_vma is set to src->anon_vma when
*new = *vma.
So even if the VMA boundaries are not correct, the anon_vma pointer is
still valid.
If the VMA has been merged, the VMA into which it was merged must have
the same anon_vma pointer, otherwise the merge could not have been done.
So in all cases we know that the anon_vma is valid: before starting the
speculative page fault we have checked that the anon_vma pointer of the
VMA is valid, and since there is an anon_vma, at some point a page has
been backed by it. Before the VMA is cleaned up, the page table lock
would have to be grabbed to clear the PTE, and the anon_vma field is
checked again once the PTE is locked.
This patch introduces a new __page_add_new_anon_rmap() service which
doesn't check the VMA boundaries, and a new inline wrapper which does
the check.
When called from a regular (non-speculative) page fault handler, there
is a guarantee that vm_start and vm_end bound the faulting address, so
the check is redundant there. In the context of the speculative page
fault handler, the check may be wrong, but the anon_vma is still valid
as explained above.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/rmap.h | 12 ++++++++++--
mm/memory.c | 8 ++++----
mm/rmap.c | 5 ++---
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..a5d282573093 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -174,8 +174,16 @@ void page_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long, bool);
void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long, int);
-void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
- unsigned long, bool);
+void __page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
+ unsigned long, bool);
+static inline void page_add_new_anon_rmap(struct page *page,
+ struct vm_area_struct *vma,
+ unsigned long address, bool compound)
+{
+ VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+ __page_add_new_anon_rmap(page, vma, address, compound);
+}
+
void page_add_file_rmap(struct page *, bool);
void page_remove_rmap(struct page *, bool);
diff --git a/mm/memory.c b/mm/memory.c
index cc4e6221ee7b..ab32b0b4bd69 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2569,7 +2569,7 @@ static int wp_page_copy(struct vm_fault *vmf)
* thread doing COW.
*/
ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
- page_add_new_anon_rmap(new_page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(new_page, vma, vmf->address, false);
mem_cgroup_commit_charge(new_page, memcg, false, false);
__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
/*
@@ -3106,7 +3106,7 @@ int do_swap_page(struct vm_fault *vmf)
/* ksm created a completely new copy */
if (unlikely(page != swapcache && swapcache)) {
- page_add_new_anon_rmap(page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
@@ -3257,7 +3257,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
}
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
- page_add_new_anon_rmap(page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
setpte:
@@ -3511,7 +3511,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
/* copy-on-write page */
if (write && !(vmf->vma_flags & VM_SHARED)) {
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
- page_add_new_anon_rmap(page, vma, vmf->address, false);
+ __page_add_new_anon_rmap(page, vma, vmf->address, false);
mem_cgroup_commit_charge(page, memcg, false, false);
__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
} else {
diff --git a/mm/rmap.c b/mm/rmap.c
index 6db729dc4c50..42d1ebed2b5b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1136,7 +1136,7 @@ void do_page_add_anon_rmap(struct page *page,
}
/**
- * page_add_new_anon_rmap - add pte mapping to a new anonymous page
+ * __page_add_new_anon_rmap - add pte mapping to a new anonymous page
* @page: the page to add the mapping to
* @vma: the vm area in which the mapping is added
* @address: the user virtual address mapped
@@ -1146,12 +1146,11 @@ void do_page_add_anon_rmap(struct page *page,
* This means the inc-and-test can be bypassed.
* Page does not have to be locked.
*/
-void page_add_new_anon_rmap(struct page *page,
+void __page_add_new_anon_rmap(struct page *page,
struct vm_area_struct *vma, unsigned long address, bool compound)
{
int nr = compound ? hpage_nr_pages(page) : 1;
- VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
__SetPageSwapBacked(page);
if (compound) {
VM_BUG_ON_PAGE(!PageTransHuge(page), page);
--
2.7.4
* [PATCH v11 18/26] mm: protect mm_rb tree with a rwlock
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (16 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 17/26] mm: introduce __page_add_new_anon_rmap() Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 19/26] mm: provide speculative fault infrastructure Laurent Dufour
` (9 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
This change is inspired by Peter's proposal patch [1] which protected
the VMA using SRCU. Unfortunately, SRCU does not scale well in this
particular case, and it introduces major performance degradation due to
excessive scheduling operations.
To allow access to the mm_rb tree without grabbing the mmap_sem, this
patch protects access to it using a rwlock. As the mm_rb tree lookup is
O(log n), it is safe to protect it with such a lock. The VMA cache is
not protected by the new rwlock and must not be used without holding
the mmap_sem.
To allow a fetched VMA structure to be used once the rwlock is released,
a use count is added to the VMA structure. It is set to 1 when the VMA
is allocated. Each time the VMA is fetched with the rwlock held, its use
count is incremented, and each time the VMA is released it is
decremented. When the use count hits zero, the VMA is no longer in use
and can be freed.
This patch prepares for 2 kinds of VMA access:
- as usual, under the control of the mmap_sem,
- without holding the mmap_sem, for the speculative page fault handler.
Access done under the control of the mmap_sem doesn't require grabbing
the rwlock for read access to the mm_rb tree, but write access must be
done under the protection of the rwlock too. This affects inserting and
removing elements in the RB tree.
The patch introduces 2 new functions:
- get_vma() to find a VMA based on an address while holding the new
rwlock,
- put_vma() to release the VMA when it is no longer used.
These services are designed to be used when accessing the RB tree
without holding the mmap_sem.
When a VMA is removed from the RB tree, its vma->vm_rb field is cleared,
and we rely on the WMB done when releasing the rwlock to serialize the
write with the RMB done in a later patch to check the VMA's validity.
When __free_vma() is called, the file associated with the VMA is closed
immediately, but the policy and the file structure remain in use until
the VMA's use count reaches 0, which may happen later, when an
in-progress speculative page fault exits.
[1] https://patchwork.kernel.org/patch/5108281/
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 1 +
include/linux/mm_types.h | 4 ++
kernel/fork.c | 3 ++
mm/init-mm.c | 3 ++
mm/internal.h | 6 +++
mm/mmap.c | 115 +++++++++++++++++++++++++++++++++++------------
6 files changed, 104 insertions(+), 28 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bcebec117d4d..05cbba70104b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1314,6 +1314,7 @@ static inline void INIT_VMA(struct vm_area_struct *vma)
INIT_LIST_HEAD(&vma->anon_vma_chain);
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
seqcount_init(&vma->vm_sequence);
+ atomic_set(&vma->vm_ref_count, 1);
#endif
}
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index fb5962308183..b16ba02f7fd6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -337,6 +337,7 @@ struct vm_area_struct {
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
seqcount_t vm_sequence;
+ atomic_t vm_ref_count; /* see vma_get(), vma_put() */
#endif
} __randomize_layout;
@@ -355,6 +356,9 @@ struct kioctx_table;
struct mm_struct {
struct vm_area_struct *mmap; /* list of VMAs */
struct rb_root mm_rb;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ rwlock_t mm_rb_lock;
+#endif
u32 vmacache_seqnum; /* per-thread vmacache */
#ifdef CONFIG_MMU
unsigned long (*get_unmapped_area) (struct file *filp,
diff --git a/kernel/fork.c b/kernel/fork.c
index 99198a02efe9..f1258c2ade09 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -907,6 +907,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
mm->mmap = NULL;
mm->mm_rb = RB_ROOT;
mm->vmacache_seqnum = 0;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ rwlock_init(&mm->mm_rb_lock);
+#endif
atomic_set(&mm->mm_users, 1);
atomic_set(&mm->mm_count, 1);
init_rwsem(&mm->mmap_sem);
diff --git a/mm/init-mm.c b/mm/init-mm.c
index f0179c9c04c2..228134f5a336 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -17,6 +17,9 @@
struct mm_struct init_mm = {
.mm_rb = RB_ROOT,
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ .mm_rb_lock = __RW_LOCK_UNLOCKED(init_mm.mm_rb_lock),
+#endif
.pgd = swapper_pg_dir,
.mm_users = ATOMIC_INIT(2),
.mm_count = ATOMIC_INIT(1),
diff --git a/mm/internal.h b/mm/internal.h
index 62d8c34e63d5..fb2667b20f0a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,6 +40,12 @@ void page_writeback_init(void);
int do_swap_page(struct vm_fault *vmf);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern struct vm_area_struct *get_vma(struct mm_struct *mm,
+ unsigned long addr);
+extern void put_vma(struct vm_area_struct *vma);
+#endif
+
void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
unsigned long floor, unsigned long ceiling);
diff --git a/mm/mmap.c b/mm/mmap.c
index 2450860e3f8e..54d298a67047 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -169,6 +169,27 @@ void unlink_file_vma(struct vm_area_struct *vma)
}
}
+static void __free_vma(struct vm_area_struct *vma)
+{
+ if (vma->vm_file)
+ fput(vma->vm_file);
+ mpol_put(vma_policy(vma));
+ kmem_cache_free(vm_area_cachep, vma);
+}
+
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+void put_vma(struct vm_area_struct *vma)
+{
+ if (atomic_dec_and_test(&vma->vm_ref_count))
+ __free_vma(vma);
+}
+#else
+static inline void put_vma(struct vm_area_struct *vma)
+{
+ __free_vma(vma);
+}
+#endif
+
/*
* Close a vm structure and free it, returning the next.
*/
@@ -179,10 +200,7 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
might_sleep();
if (vma->vm_ops && vma->vm_ops->close)
vma->vm_ops->close(vma);
- if (vma->vm_file)
- fput(vma->vm_file);
- mpol_put(vma_policy(vma));
- kmem_cache_free(vm_area_cachep, vma);
+ put_vma(vma);
return next;
}
@@ -402,6 +420,14 @@ static void validate_mm(struct mm_struct *mm)
#define validate_mm(mm) do { } while (0)
#endif
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+#define mm_rb_write_lock(mm) write_lock(&(mm)->mm_rb_lock)
+#define mm_rb_write_unlock(mm) write_unlock(&(mm)->mm_rb_lock)
+#else
+#define mm_rb_write_lock(mm) do { } while (0)
+#define mm_rb_write_unlock(mm) do { } while (0)
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
RB_DECLARE_CALLBACKS(static, vma_gap_callbacks, struct vm_area_struct, vm_rb,
unsigned long, rb_subtree_gap, vma_compute_subtree_gap)
@@ -420,26 +446,37 @@ static void vma_gap_update(struct vm_area_struct *vma)
}
static inline void vma_rb_insert(struct vm_area_struct *vma,
- struct rb_root *root)
+ struct mm_struct *mm)
{
+ struct rb_root *root = &mm->mm_rb;
+
/* All rb_subtree_gap values must be consistent prior to insertion */
validate_mm_rb(root, NULL);
rb_insert_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
}
-static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
+static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm)
{
+ struct rb_root *root = &mm->mm_rb;
/*
* Note rb_erase_augmented is a fairly large inline function,
* so make sure we instantiate it only once with our desired
* augmented rbtree callbacks.
*/
+ mm_rb_write_lock(mm);
rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
+ mm_rb_write_unlock(mm); /* wmb */
+
+ /*
+ * Ensure the removal is complete before clearing the node.
+ * Matched by vma_has_changed()/handle_speculative_fault().
+ */
+ RB_CLEAR_NODE(&vma->vm_rb);
}
static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
- struct rb_root *root,
+ struct mm_struct *mm,
struct vm_area_struct *ignore)
{
/*
@@ -447,21 +484,21 @@ static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
* with the possible exception of the "next" vma being erased if
* next->vm_start was reduced.
*/
- validate_mm_rb(root, ignore);
+ validate_mm_rb(&mm->mm_rb, ignore);
- __vma_rb_erase(vma, root);
+ __vma_rb_erase(vma, mm);
}
static __always_inline void vma_rb_erase(struct vm_area_struct *vma,
- struct rb_root *root)
+ struct mm_struct *mm)
{
/*
* All rb_subtree_gap values must be consistent prior to erase,
* with the possible exception of the vma being erased.
*/
- validate_mm_rb(root, vma);
+ validate_mm_rb(&mm->mm_rb, vma);
- __vma_rb_erase(vma, root);
+ __vma_rb_erase(vma, mm);
}
/*
@@ -576,10 +613,12 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
* immediately update the gap to the correct value. Finally we
* rebalance the rbtree after all augmented values have been set.
*/
+ mm_rb_write_lock(mm);
rb_link_node(&vma->vm_rb, rb_parent, rb_link);
vma->rb_subtree_gap = 0;
vma_gap_update(vma);
- vma_rb_insert(vma, &mm->mm_rb);
+ vma_rb_insert(vma, mm);
+ mm_rb_write_unlock(mm);
}
static void __vma_link_file(struct vm_area_struct *vma)
@@ -655,7 +694,7 @@ static __always_inline void __vma_unlink_common(struct mm_struct *mm,
{
struct vm_area_struct *next;
- vma_rb_erase_ignore(vma, &mm->mm_rb, ignore);
+ vma_rb_erase_ignore(vma, mm, ignore);
next = vma->vm_next;
if (has_prev)
prev->vm_next = next;
@@ -932,16 +971,13 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
}
if (remove_next) {
- if (file) {
+ if (file)
uprobe_munmap(next, next->vm_start, next->vm_end);
- fput(file);
- }
if (next->anon_vma)
anon_vma_merge(vma, next);
mm->map_count--;
- mpol_put(vma_policy(next));
vm_raw_write_end(next);
- kmem_cache_free(vm_area_cachep, next);
+ put_vma(next);
/*
* In mprotect's case 6 (see comments on vma_merge),
* we must remove another next too. It would clutter
@@ -2199,15 +2235,11 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
EXPORT_SYMBOL(get_unmapped_area);
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
-struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+static struct vm_area_struct *__find_vma(struct mm_struct *mm,
+ unsigned long addr)
{
struct rb_node *rb_node;
- struct vm_area_struct *vma;
-
- /* Check the cache first. */
- vma = vmacache_find(mm, addr);
- if (likely(vma))
- return vma;
+ struct vm_area_struct *vma = NULL;
rb_node = mm->mm_rb.rb_node;
@@ -2225,13 +2257,40 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
rb_node = rb_node->rb_right;
}
+ return vma;
+}
+
+struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma;
+
+ /* Check the cache first. */
+ vma = vmacache_find(mm, addr);
+ if (likely(vma))
+ return vma;
+
+ vma = __find_vma(mm, addr);
if (vma)
vmacache_update(addr, vma);
return vma;
}
-
EXPORT_SYMBOL(find_vma);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+struct vm_area_struct *get_vma(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma = NULL;
+
+ read_lock(&mm->mm_rb_lock);
+ vma = __find_vma(mm, addr);
+ if (vma)
+ atomic_inc(&vma->vm_ref_count);
+ read_unlock(&mm->mm_rb_lock);
+
+ return vma;
+}
+#endif
+
/*
* Same as find_vma, but also return a pointer to the previous VMA in *pprev.
*/
@@ -2599,7 +2658,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
insertion_point = (prev ? &prev->vm_next : &mm->mmap);
vma->vm_prev = NULL;
do {
- vma_rb_erase(vma, &mm->mm_rb);
+ vma_rb_erase(vma, mm);
mm->map_count--;
tail_vma = vma;
vma = vma->vm_next;
--
2.7.4
* [PATCH v11 19/26] mm: provide speculative fault infrastructure
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (17 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 18/26] mm: protect mm_rb tree with a rwlock Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-07-24 14:26 ` zhong jiang
2018-05-17 11:06 ` [PATCH v11 20/26] mm: adding speculative page fault failure trace events Laurent Dufour
` (8 subsequent siblings)
27 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
From: Peter Zijlstra <peterz@infradead.org>
Provide infrastructure to do a speculative fault (not holding
mmap_sem).
The not holding of mmap_sem means we can race against VMA
change/removal and page-table destruction. We use the SRCU VMA freeing
to keep the VMA around. We use the VMA seqcount to detect change
(including unmapping / page-table deletion) and we use gup_fast() style
page-table walking to deal with page-table races.
Once we've obtained the page and are ready to update the PTE, we
validate if the state we started the fault with is still valid, if
not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
PTE and we're done.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[Manage the newly introduced pte_spinlock() for speculative page
fault to fail if the VMA is touched behind our back]
[Rename vma_is_dead() to vma_has_changed() and declare it here]
[Fetch p4d and pud]
[Set vmd.sequence in __handle_mm_fault()]
[Abort speculative path when handle_userfault() has to be called]
[Add additional VMA's flags checks in handle_speculative_fault()]
[Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
[Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
[Remove warning comment about waiting for !seq&1 since we don't want
to wait]
[Remove warning about no huge page support, mention it explicitly]
[Don't call do_fault() in the speculative path as __do_fault() calls
vma->vm_ops->fault() which may want to release mmap_sem]
[Only vm_fault pointer argument for vma_has_changed()]
[Fix check against huge page, calling pmd_trans_huge()]
[Use READ_ONCE() when reading VMA's fields in the speculative path]
[Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support the
processing done in vm_normal_page()]
[Check that vma->anon_vma is already set when starting the speculative
path]
[Check for memory policy as we can't support MPOL_INTERLEAVE case due to
the processing done in mpol_misplaced()]
[Don't support VMA growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
[Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Add mem cgroup oom check]
[Use READ_ONCE to access p*d entries]
[Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
[Don't fetch pte again in handle_pte_fault() when running the speculative
path]
[Check PMD against concurrent collapsing operation]
[Try to spin lock the pte during the speculative path to avoid deadlock with
other CPUs invalidating the TLB and requiring this CPU to catch the
inter-processor interrupt]
[Move define of FAULT_FLAG_SPECULATIVE here]
[Introduce __handle_speculative_fault() and add a check against
mm->mm_users in handle_speculative_fault() defined in mm.h]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/hugetlb_inline.h | 2 +-
include/linux/mm.h | 30 ++++
include/linux/pagemap.h | 4 +-
mm/internal.h | 16 +-
mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
5 files changed, 385 insertions(+), 7 deletions(-)
diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index 0660a03d37d9..9e25283d6fc9 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -8,7 +8,7 @@
static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
{
- return !!(vma->vm_flags & VM_HUGETLB);
+ return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
}
#else
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 05cbba70104b..31acf98a7d92 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
#define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
#define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
#define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
+#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
#define FAULT_FLAG_TRACE \
{ FAULT_FLAG_WRITE, "WRITE" }, \
@@ -343,6 +344,10 @@ struct vm_fault {
gfp_t gfp_mask; /* gfp mask to be used for allocations */
pgoff_t pgoff; /* Logical page offset based on vma */
unsigned long address; /* Faulting virtual address */
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ unsigned int sequence;
+ pmd_t orig_pmd; /* value of PMD at the time of fault */
+#endif
pmd_t *pmd; /* Pointer to pmd entry matching
* the 'address' */
pud_t *pud; /* Pointer to pud entry matching
@@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
#ifdef CONFIG_MMU
extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
unsigned int flags);
+
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern int __handle_speculative_fault(struct mm_struct *mm,
+ unsigned long address,
+ unsigned int flags);
+static inline int handle_speculative_fault(struct mm_struct *mm,
+ unsigned long address,
+ unsigned int flags)
+{
+ /*
+ * Try the speculative page fault only for multithreaded user space tasks.
+ */
+ if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+ return VM_FAULT_RETRY;
+ return __handle_speculative_fault(mm, address, flags);
+}
+#else
+static inline int handle_speculative_fault(struct mm_struct *mm,
+ unsigned long address,
+ unsigned int flags)
+{
+ return VM_FAULT_RETRY;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
unsigned long address, unsigned int fault_flags,
bool *unlocked);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b1bd2186e6d2..6e2aa4e79af7 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
pgoff_t pgoff;
if (unlikely(is_vm_hugetlb_page(vma)))
return linear_hugepage_index(vma, address);
- pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
- pgoff += vma->vm_pgoff;
+ pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
+ pgoff += READ_ONCE(vma->vm_pgoff);
return pgoff;
}
diff --git a/mm/internal.h b/mm/internal.h
index fb2667b20f0a..10b188c87fa4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
extern struct vm_area_struct *get_vma(struct mm_struct *mm,
unsigned long addr);
extern void put_vma(struct vm_area_struct *vma);
-#endif
+
+static inline bool vma_has_changed(struct vm_fault *vmf)
+{
+ int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
+ unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
+
+ /*
+ * Matches both the wmb in write_seqlock_{begin,end}() and
+ * the wmb in vma_rb_erase().
+ */
+ smp_rmb();
+
+ return ret || seq != vmf->sequence;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
unsigned long floor, unsigned long ceiling);
diff --git a/mm/memory.c b/mm/memory.c
index ab32b0b4bd69..7bbbb8c7b9cd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
if (page)
dump_page(page, "bad pte");
pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
- (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
+ (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
+ mapping, index);
pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
vma->vm_file,
vma->vm_ops ? vma->vm_ops->fault : NULL,
@@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(apply_to_page_range);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+static bool pte_spinlock(struct vm_fault *vmf)
+{
+ bool ret = false;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ pmd_t pmdval;
+#endif
+
+ /* Check if vma is still valid */
+ if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ spin_lock(vmf->ptl);
+ return true;
+ }
+
+again:
+ local_irq_disable();
+ if (vma_has_changed(vmf))
+ goto out;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+ * We check that the pmd value is still the same to ensure that there
+ * is no huge page collapse operation in progress behind our back.
+ */
+ pmdval = READ_ONCE(*vmf->pmd);
+ if (!pmd_same(pmdval, vmf->orig_pmd))
+ goto out;
+#endif
+
+ vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ if (unlikely(!spin_trylock(vmf->ptl))) {
+ local_irq_enable();
+ goto again;
+ }
+
+ if (vma_has_changed(vmf)) {
+ spin_unlock(vmf->ptl);
+ goto out;
+ }
+
+ ret = true;
+out:
+ local_irq_enable();
+ return ret;
+}
+
+static bool pte_map_lock(struct vm_fault *vmf)
+{
+ bool ret = false;
+ pte_t *pte;
+ spinlock_t *ptl;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ pmd_t pmdval;
+#endif
+
+ if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+ vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+ vmf->address, &vmf->ptl);
+ return true;
+ }
+
+ /*
+ * The first vma_has_changed() guarantees the page-tables are still
+ * valid, having IRQs disabled ensures they stay around, hence the
+ * second vma_has_changed() to make sure they are still valid once
+ * we've got the lock. After that a concurrent zap_pte_range() will
+ * block on the PTL and thus we're safe.
+ */
+again:
+ local_irq_disable();
+ if (vma_has_changed(vmf))
+ goto out;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+ * We check that the pmd value is still the same to ensure that there
+ * is no huge page collapse operation in progress behind our back.
+ */
+ pmdval = READ_ONCE(*vmf->pmd);
+ if (!pmd_same(pmdval, vmf->orig_pmd))
+ goto out;
+#endif
+
+ /*
+ * Same as pte_offset_map_lock() except that we call
+ * spin_trylock() in place of spin_lock() to avoid a race with the
+ * unmap path, which may hold the lock while waiting for this CPU
+ * to invalidate the TLB while this CPU has IRQs disabled.
+ * Since we are on a speculative path, accept that it may fail.
+ */
+ ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+ pte = pte_offset_map(vmf->pmd, vmf->address);
+ if (unlikely(!spin_trylock(ptl))) {
+ pte_unmap(pte);
+ local_irq_enable();
+ goto again;
+ }
+
+ if (vma_has_changed(vmf)) {
+ pte_unmap_unlock(pte, ptl);
+ goto out;
+ }
+
+ vmf->pte = pte;
+ vmf->ptl = ptl;
+ ret = true;
+out:
+ local_irq_enable();
+ return ret;
+}
+#else
static inline bool pte_spinlock(struct vm_fault *vmf)
{
vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
@@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
vmf->address, &vmf->ptl);
return true;
}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
/*
* handle_pte_fault chooses page fault handler according to an entry which was
@@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
ret = check_stable_address_space(vma->vm_mm);
if (ret)
goto unlock;
+ /*
+ * Don't call the userfaultfd during the speculative path.
+ * We already checked that the VMA is not managed through
+ * userfaultfd, but it may have been set behind our back once we
+ * have locked the pte. In such a case we can ignore it this time.
+ */
+ if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+ goto setpte;
/* Deliver the page fault to userland, check inside PT lock */
if (userfaultfd_missing(vma)) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
goto unlock_and_release;
/* Deliver the page fault to userland, check inside PT lock */
- if (userfaultfd_missing(vma)) {
+ if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
pte_unmap_unlock(vmf->pte, vmf->ptl);
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
@@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
if (unlikely(pmd_none(*vmf->pmd))) {
/*
+ * In the case of the speculative page fault handler we abort
+ * the speculative path immediately as the pmd is probably
+ * about to be converted into a huge one. We will try
+ * again while holding the mmap_sem (which implies that the collapse
+ * operation is done).
+ */
+ if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+ return VM_FAULT_RETRY;
+ /*
* Leave __pte_alloc() until later: because vm_ops->fault may
* want to allocate huge page, and if we expose page table
* for an instant, it will be difficult to retract from
* concurrent faults and from rmap lookups.
*/
vmf->pte = NULL;
- } else {
+ } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
/* See comment in pte_alloc_one_map() */
if (pmd_devmap_trans_unstable(vmf->pmd))
return 0;
@@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
* pmd from under us anymore at this point because we hold the
* mmap_sem read mode and khugepaged takes it in write mode.
* So now it's safe to run pte_offset_map().
+ * This does not apply to the speculative page fault handler;
+ * in that case, the pte is fetched earlier in
+ * handle_speculative_fault().
*/
vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
vmf->orig_pte = *vmf->pte;
@@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
if (!vmf->pte) {
if (vma_is_anonymous(vmf->vma))
return do_anonymous_page(vmf);
+ else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+ return VM_FAULT_RETRY;
else
return do_fault(vmf);
}
@@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
vmf.pmd = pmd_alloc(mm, vmf.pud, address);
if (!vmf.pmd)
return VM_FAULT_OOM;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
+#endif
if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
ret = create_huge_pmd(&vmf);
if (!(ret & VM_FAULT_FALLBACK))
@@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
return handle_pte_fault(&vmf);
}
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+/*
+ * Tries to handle the page fault in a speculative way, without grabbing the
+ * mmap_sem.
+ */
+int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
+ unsigned int flags)
+{
+ struct vm_fault vmf = {
+ .address = address,
+ };
+ pgd_t *pgd, pgdval;
+ p4d_t *p4d, p4dval;
+ pud_t pudval;
+ int seq, ret = VM_FAULT_RETRY;
+ struct vm_area_struct *vma;
+#ifdef CONFIG_NUMA
+ struct mempolicy *pol;
+#endif
+
+ /* Clear flags that may lead to releasing the mmap_sem to retry */
+ flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
+ flags |= FAULT_FLAG_SPECULATIVE;
+
+ vma = get_vma(mm, address);
+ if (!vma)
+ return ret;
+
+ seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+ if (seq & 1)
+ goto out_put;
+
+ /*
+ * Can't call vm_ops services as we don't know what they would do
+ * with the VMA.
+ * This includes huge pages from hugetlbfs.
+ */
+ if (vma->vm_ops)
+ goto out_put;
+
+ /*
+ * __anon_vma_prepare() requires the mmap_sem to be held
+ * because vm_next and vm_prev must be safe. This can't be guaranteed
+ * in the speculative path.
+ */
+ if (unlikely(!vma->anon_vma))
+ goto out_put;
+
+ vmf.vma_flags = READ_ONCE(vma->vm_flags);
+ vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+
+ /* Can't call userland page fault handler in the speculative path */
+ if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+ goto out_put;
+
+ if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+ /*
+ * This could be detected by checking the address against the VMA's
+ * boundaries, but we want to trace it as not supported instead
+ * of changed.
+ */
+ goto out_put;
+
+ if (address < READ_ONCE(vma->vm_start)
+ || READ_ONCE(vma->vm_end) <= address)
+ goto out_put;
+
+ if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+ flags & FAULT_FLAG_INSTRUCTION,
+ flags & FAULT_FLAG_REMOTE)) {
+ ret = VM_FAULT_SIGSEGV;
+ goto out_put;
+ }
+
+ /* This check is required to ensure that the VMA has write access set */
+ if (flags & FAULT_FLAG_WRITE) {
+ if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+ ret = VM_FAULT_SIGSEGV;
+ goto out_put;
+ }
+ } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
+ ret = VM_FAULT_SIGSEGV;
+ goto out_put;
+ }
+
+#ifdef CONFIG_NUMA
+ /*
+ * MPOL_INTERLEAVE implies additional checks in
+ * mpol_misplaced() which are not compatible with the
+ * speculative page fault processing.
+ */
+ pol = __get_vma_policy(vma, address);
+ if (!pol)
+ pol = get_task_policy(current);
+ if (pol && pol->mode == MPOL_INTERLEAVE)
+ goto out_put;
+#endif
+
+ /*
+ * Do a speculative lookup of the PTE entry.
+ */
+ local_irq_disable();
+ pgd = pgd_offset(mm, address);
+ pgdval = READ_ONCE(*pgd);
+ if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
+ goto out_walk;
+
+ p4d = p4d_offset(pgd, address);
+ p4dval = READ_ONCE(*p4d);
+ if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
+ goto out_walk;
+
+ vmf.pud = pud_offset(p4d, address);
+ pudval = READ_ONCE(*vmf.pud);
+ if (pud_none(pudval) || unlikely(pud_bad(pudval)))
+ goto out_walk;
+
+ /* Huge pages at PUD level are not supported. */
+ if (unlikely(pud_trans_huge(pudval)))
+ goto out_walk;
+
+ vmf.pmd = pmd_offset(vmf.pud, address);
+ vmf.orig_pmd = READ_ONCE(*vmf.pmd);
+ /*
+ * pmd_none could mean that a hugepage collapse is in progress
+ * behind our back as collapse_huge_page() marks it before
+ * invalidating the pte (which is done once the IPI has been caught
+ * by all CPUs and we have interrupts disabled).
+ * For this reason we cannot handle THP in a speculative way since we
+ * can't safely identify an in-progress collapse operation done
+ * behind our back on that PMD.
+ * Regarding the order of the following checks, see comment in
+ * pmd_devmap_trans_unstable()
+ */
+ if (unlikely(pmd_devmap(vmf.orig_pmd) ||
+ pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
+ is_swap_pmd(vmf.orig_pmd)))
+ goto out_walk;
+
+ /*
+ * The above does not allocate/instantiate page-tables because doing so
+ * would lead to the possibility of instantiating page-tables after
+ * free_pgtables() -- and consequently leaking them.
+ *
+ * The result is that we take at least one !speculative fault per PMD
+ * in order to instantiate it.
+ */
+
+ vmf.pte = pte_offset_map(vmf.pmd, address);
+ vmf.orig_pte = READ_ONCE(*vmf.pte);
+ barrier(); /* See comment in handle_pte_fault() */
+ if (pte_none(vmf.orig_pte)) {
+ pte_unmap(vmf.pte);
+ vmf.pte = NULL;
+ }
+
+ vmf.vma = vma;
+ vmf.pgoff = linear_page_index(vma, address);
+ vmf.gfp_mask = __get_fault_gfp_mask(vma);
+ vmf.sequence = seq;
+ vmf.flags = flags;
+
+ local_irq_enable();
+
+ /*
+ * We need to re-validate the VMA after checking the bounds, otherwise
+ * we might have a false positive on the bounds.
+ */
+ if (read_seqcount_retry(&vma->vm_sequence, seq))
+ goto out_put;
+
+ mem_cgroup_oom_enable();
+ ret = handle_pte_fault(&vmf);
+ mem_cgroup_oom_disable();
+
+ put_vma(vma);
+
+ /*
+ * The task may have entered a memcg OOM situation but
+ * if the allocation error was handled gracefully (no
+ * VM_FAULT_OOM), there is no need to kill anything.
+ * Just clean up the OOM state peacefully.
+ */
+ if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+ mem_cgroup_oom_synchronize(false);
+ return ret;
+
+out_walk:
+ local_irq_enable();
+out_put:
+ put_vma(vma);
+ return ret;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
/*
* By the time we get here, we already hold the mm semaphore
*
--
2.7.4
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
2018-05-17 11:06 ` [PATCH v11 19/26] mm: provide speculative fault infrastructure Laurent Dufour
@ 2018-07-24 14:26 ` zhong jiang
0 siblings, 0 replies; 106+ messages in thread
From: zhong jiang @ 2018-07-24 14:26 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 2018/5/17 19:06, Laurent Dufour wrote:
> From: Peter Zijlstra <peterz@infradead.org>
>
> Provide infrastructure to do a speculative fault (not holding
> mmap_sem).
>
> The not holding of mmap_sem means we can race against VMA
> change/removal and page-table destruction. We use the SRCU VMA freeing
> to keep the VMA around. We use the VMA seqcount to detect change
> (including unmapping / page-table deletion) and we use gup_fast() style
> page-table walking to deal with page-table races.
>
> Once we've obtained the page and are ready to update the PTE, we
> validate if the state we started the fault with is still valid, if
> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
> PTE and we're done.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> [Manage the newly introduced pte_spinlock() for speculative page
> fault to fail if the VMA is touched behind our back]
> [Rename vma_is_dead() to vma_has_changed() and declare it here]
> [Fetch p4d and pud]
> [Set vmd.sequence in __handle_mm_fault()]
> [Abort speculative path when handle_userfault() has to be called]
> [Add additional VMA's flags checks in handle_speculative_fault()]
> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
> [Remove warning comment about waiting for !seq&1 since we don't want
> to wait]
> [Remove warning about no huge page support, mention it explicitly]
> [Don't call do_fault() in the speculative path as __do_fault() calls
> vma->vm_ops->fault() which may want to release mmap_sem]
> [Only vm_fault pointer argument for vma_has_changed()]
> [Fix check against huge page, calling pmd_trans_huge()]
> [Use READ_ONCE() when reading VMA's fields in the speculative path]
> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support the
> processing done in vm_normal_page()]
> [Check that vma->anon_vma is already set when starting the speculative
> path]
> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
> the processing done in mpol_misplaced()]
> [Don't support VMA growing up or down]
> [Move check on vm_sequence just before calling handle_pte_fault()]
> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
> [Add mem cgroup oom check]
> [Use READ_ONCE to access p*d entries]
> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
> [Don't fetch pte again in handle_pte_fault() when running the speculative
> path]
> [Check PMD against concurrent collapsing operation]
> [Try to spin lock the pte during the speculative path to avoid deadlock with
> other CPUs invalidating the TLB and requiring this CPU to catch the
> inter-processor interrupt]
> [Move define of FAULT_FLAG_SPECULATIVE here]
> [Introduce __handle_speculative_fault() and add a check against
> mm->mm_users in handle_speculative_fault() defined in mm.h]
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> include/linux/hugetlb_inline.h | 2 +-
> include/linux/mm.h | 30 ++++
> include/linux/pagemap.h | 4 +-
> mm/internal.h | 16 +-
> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
> 5 files changed, 385 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 0660a03d37d9..9e25283d6fc9 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -8,7 +8,7 @@
>
> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> {
> - return !!(vma->vm_flags & VM_HUGETLB);
> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
> }
>
> #else
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 05cbba70104b..31acf98a7d92 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>
> #define FAULT_FLAG_TRACE \
> { FAULT_FLAG_WRITE, "WRITE" }, \
> @@ -343,6 +344,10 @@ struct vm_fault {
> gfp_t gfp_mask; /* gfp mask to be used for allocations */
> pgoff_t pgoff; /* Logical page offset based on vma */
> unsigned long address; /* Faulting virtual address */
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + unsigned int sequence;
> + pmd_t orig_pmd; /* value of PMD at the time of fault */
> +#endif
> pmd_t *pmd; /* Pointer to pmd entry matching
> * the 'address' */
> pud_t *pud; /* Pointer to pud entry matching
> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
> #ifdef CONFIG_MMU
> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> unsigned int flags);
> +
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +extern int __handle_speculative_fault(struct mm_struct *mm,
> + unsigned long address,
> + unsigned int flags);
> +static inline int handle_speculative_fault(struct mm_struct *mm,
> + unsigned long address,
> + unsigned int flags)
> +{
> + /*
> + * Try speculative page fault for multithreaded user space task only.
> + */
> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
> + return VM_FAULT_RETRY;
> + return __handle_speculative_fault(mm, address, flags);
> +}
> +#else
> +static inline int handle_speculative_fault(struct mm_struct *mm,
> + unsigned long address,
> + unsigned int flags)
> +{
> + return VM_FAULT_RETRY;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
> unsigned long address, unsigned int fault_flags,
> bool *unlocked);
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index b1bd2186e6d2..6e2aa4e79af7 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
> pgoff_t pgoff;
> if (unlikely(is_vm_hugetlb_page(vma)))
> return linear_hugepage_index(vma, address);
> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
> - pgoff += vma->vm_pgoff;
> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
> + pgoff += READ_ONCE(vma->vm_pgoff);
> return pgoff;
> }
>
> diff --git a/mm/internal.h b/mm/internal.h
> index fb2667b20f0a..10b188c87fa4 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
> unsigned long addr);
> extern void put_vma(struct vm_area_struct *vma);
> -#endif
> +
> +static inline bool vma_has_changed(struct vm_fault *vmf)
> +{
> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
> +
> + /*
> + * Matches both the wmb in write_seqlock_{begin,end}() and
> + * the wmb in vma_rb_erase().
> + */
> + smp_rmb();
> +
> + return ret || seq != vmf->sequence;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>
> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
> unsigned long floor, unsigned long ceiling);
> diff --git a/mm/memory.c b/mm/memory.c
> index ab32b0b4bd69..7bbbb8c7b9cd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
> if (page)
> dump_page(page, "bad pte");
> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
> + mapping, index);
> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
> vma->vm_file,
> vma->vm_ops ? vma->vm_ops->fault : NULL,
> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(apply_to_page_range);
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +static bool pte_spinlock(struct vm_fault *vmf)
> +{
> + bool ret = false;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + pmd_t pmdval;
> +#endif
> +
> + /* Check if vma is still valid */
> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + spin_lock(vmf->ptl);
> + return true;
> + }
> +
> +again:
> + local_irq_disable();
> + if (vma_has_changed(vmf))
> + goto out;
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + /*
> + * We check if the pmd value is still the same to ensure that there
> + * is not a huge collapse operation in progress behind our back.
> + */
> + pmdval = READ_ONCE(*vmf->pmd);
> + if (!pmd_same(pmdval, vmf->orig_pmd))
> + goto out;
> +#endif
> +
> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + if (unlikely(!spin_trylock(vmf->ptl))) {
> + local_irq_enable();
> + goto again;
> + }
> +
> + if (vma_has_changed(vmf)) {
> + spin_unlock(vmf->ptl);
> + goto out;
> + }
> +
> + ret = true;
> +out:
> + local_irq_enable();
> + return ret;
> +}
> +
> +static bool pte_map_lock(struct vm_fault *vmf)
> +{
> + bool ret = false;
> + pte_t *pte;
> + spinlock_t *ptl;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + pmd_t pmdval;
> +#endif
> +
> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> + vmf->address, &vmf->ptl);
> + return true;
> + }
> +
> + /*
> + * The first vma_has_changed() guarantees the page-tables are still
> + * valid, having IRQs disabled ensures they stay around, hence the
> + * second vma_has_changed() to make sure they are still valid once
> + * we've got the lock. After that a concurrent zap_pte_range() will
> + * block on the PTL and thus we're safe.
> + */
> +again:
> + local_irq_disable();
> + if (vma_has_changed(vmf))
> + goto out;
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + /*
> + * We check if the pmd value is still the same to ensure that there
> + * is not a huge collapse operation in progress behind our back.
> + */
> + pmdval = READ_ONCE(*vmf->pmd);
> + if (!pmd_same(pmdval, vmf->orig_pmd))
> + goto out;
> +#endif
> +
> + /*
> + * Same as pte_offset_map_lock() except that we call
> + * spin_trylock() in place of spin_lock() to avoid a race with the
> + * unmap path, which may hold the lock and wait for this CPU
> + * to invalidate the TLB while this CPU has irqs disabled.
> + * Since we are on the speculative path, accept that it could fail.
> + */
> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + pte = pte_offset_map(vmf->pmd, vmf->address);
> + if (unlikely(!spin_trylock(ptl))) {
> + pte_unmap(pte);
> + local_irq_enable();
> + goto again;
> + }
> +
> + if (vma_has_changed(vmf)) {
> + pte_unmap_unlock(pte, ptl);
> + goto out;
> + }
> +
> + vmf->pte = pte;
> + vmf->ptl = ptl;
> + ret = true;
> +out:
> + local_irq_enable();
> + return ret;
> +}
> +#else
> static inline bool pte_spinlock(struct vm_fault *vmf)
> {
> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
> vmf->address, &vmf->ptl);
> return true;
> }
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>
> /*
> * handle_pte_fault chooses page fault handler according to an entry which was
> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
> ret = check_stable_address_space(vma->vm_mm);
> if (ret)
> goto unlock;
> + /*
> + * Don't call userfaultfd during the speculative path.
> + * We already checked that the VMA is not managed through
> + * userfaultfd, but that may change behind our back once we have
> + * locked the pte. In such a case we can ignore it this time.
> + */
> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> + goto setpte;
> /* Deliver the page fault to userland, check inside PT lock */
> if (userfaultfd_missing(vma)) {
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
> goto unlock_and_release;
>
> /* Deliver the page fault to userland, check inside PT lock */
> - if (userfaultfd_missing(vma)) {
> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> mem_cgroup_cancel_charge(page, memcg, false);
> put_page(page);
> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>
> if (unlikely(pmd_none(*vmf->pmd))) {
> /*
> + * In the case of the speculative page fault handler we abort
> + * the speculative path immediately, as the pmd is probably
> + * about to be converted into a huge one. We will try
> + * again while holding the mmap_sem (which implies that the
> + * collapse operation is done).
> + */
> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> + return VM_FAULT_RETRY;
> + /*
> * Leave __pte_alloc() until later: because vm_ops->fault may
> * want to allocate huge page, and if we expose page table
> * for an instant, it will be difficult to retract from
> * concurrent faults and from rmap lookups.
> */
> vmf->pte = NULL;
> - } else {
> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
> /* See comment in pte_alloc_one_map() */
> if (pmd_devmap_trans_unstable(vmf->pmd))
> return 0;
> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
> * pmd from under us anymore at this point because we hold the
> * mmap_sem read mode and khugepaged takes it in write mode.
> * So now it's safe to run pte_offset_map().
> + * This is not applicable to the speculative page fault handler
> + * but in that case, the pte is fetched earlier in
> + * handle_speculative_fault().
> */
> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
> vmf->orig_pte = *vmf->pte;
> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
> if (!vmf->pte) {
> if (vma_is_anonymous(vmf->vma))
> return do_anonymous_page(vmf);
> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> + return VM_FAULT_RETRY;
> else
> return do_fault(vmf);
> }
> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
> if (!vmf.pmd)
> return VM_FAULT_OOM;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
> +#endif
> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
> ret = create_huge_pmd(&vmf);
> if (!(ret & VM_FAULT_FALLBACK))
> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> return handle_pte_fault(&vmf);
> }
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +/*
> + * Tries to handle the page fault in a speculative way, without grabbing the
> + * mmap_sem.
> + */
> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
> + unsigned int flags)
> +{
> + struct vm_fault vmf = {
> + .address = address,
> + };
> + pgd_t *pgd, pgdval;
> + p4d_t *p4d, p4dval;
> + pud_t pudval;
> + int seq, ret = VM_FAULT_RETRY;
> + struct vm_area_struct *vma;
> +#ifdef CONFIG_NUMA
> + struct mempolicy *pol;
> +#endif
> +
> + /* Clear flags that may lead to releasing the mmap_sem in order to retry */
> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
> + flags |= FAULT_FLAG_SPECULATIVE;
> +
> + vma = get_vma(mm, address);
> + if (!vma)
> + return ret;
> +
> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
> + if (seq & 1)
> + goto out_put;
> +
> + /*
> + * Can't call vm_ops callbacks as we don't know what they would do
> + * with the VMA.
> + * This includes huge pages from hugetlbfs.
> + */
> + if (vma->vm_ops)
> + goto out_put;
> +
Hi Laurent
I think that most page faults will bail out here. Is there any case that really needs to be skipped?
I have tested the following patch, and it works well.
diff --git a/mm/memory.c b/mm/memory.c
index 936128b..9bc1545 100644
@@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
if (!fe->pte) {
if (vma_is_anonymous(fe->vma))
return do_anonymous_page(fe);
- else if (fe->flags & FAULT_FLAG_SPECULATIVE)
- return VM_FAULT_RETRY;
else
return do_fault(fe);
}
@@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
goto out_put;
}
/*
- * Can't call vm_ops service has we don't know what they would do
- * with the VMA.
- * This include huge page from hugetlbfs.
- */
- if (vma->vm_ops) {
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
- }
Thanks
zhong jiang
> + /*
> + * __anon_vma_prepare() requires the mmap_sem to be held
> + * because vm_next and vm_prev must be safe. This can't be guaranteed
> + * in the speculative path.
> + */
> + if (unlikely(!vma->anon_vma))
> + goto out_put;
> +
> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
> +
> + /* Can't call userland page fault handler in the speculative path */
> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
> + goto out_put;
> +
> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
> + /*
> + * This could be detected by the check address against VMA's
> + * boundaries but we want to trace it as not supported instead
> + * of changed.
> + */
> + goto out_put;
> +
> + if (address < READ_ONCE(vma->vm_start)
> + || READ_ONCE(vma->vm_end) <= address)
> + goto out_put;
> +
> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> + flags & FAULT_FLAG_INSTRUCTION,
> + flags & FAULT_FLAG_REMOTE)) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out_put;
> + }
> +
> + /* This check is required to verify that the VMA has write access set */
> + if (flags & FAULT_FLAG_WRITE) {
> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out_put;
> + }
> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out_put;
> + }
> +
> +#ifdef CONFIG_NUMA
> + /*
> + * MPOL_INTERLEAVE implies additional checks in
> + * mpol_misplaced() which are not compatible with the
> + * speculative page fault processing.
> + */
> + pol = __get_vma_policy(vma, address);
> + if (!pol)
> + pol = get_task_policy(current);
> + if (pol && pol->mode == MPOL_INTERLEAVE)
> + goto out_put;
> +#endif
> +
> + /*
> + * Do a speculative lookup of the PTE entry.
> + */
> + local_irq_disable();
> + pgd = pgd_offset(mm, address);
> + pgdval = READ_ONCE(*pgd);
> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
> + goto out_walk;
> +
> + p4d = p4d_offset(pgd, address);
> + p4dval = READ_ONCE(*p4d);
> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
> + goto out_walk;
> +
> + vmf.pud = pud_offset(p4d, address);
> + pudval = READ_ONCE(*vmf.pud);
> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
> + goto out_walk;
> +
> + /* Huge pages at PUD level are not supported. */
> + if (unlikely(pud_trans_huge(pudval)))
> + goto out_walk;
> +
> + vmf.pmd = pmd_offset(vmf.pud, address);
> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
> + /*
> + * pmd_none could mean that a hugepage collapse is in progress
> + * behind our back, as collapse_huge_page() marks it before
> + * invalidating the pte (which is done once the IPI is caught
> + * by all CPUs and we have interrupts disabled).
> + * For this reason we cannot handle THP in a speculative way since we
> + * can't safely identify an in-progress collapse operation done behind
> + * our back on that PMD.
> + * Regarding the order of the following checks, see comment in
> + * pmd_devmap_trans_unstable()
> + */
> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
> + is_swap_pmd(vmf.orig_pmd)))
> + goto out_walk;
> +
> + /*
> + * The above does not allocate/instantiate page-tables because doing so
> + * would lead to the possibility of instantiating page-tables after
> + * free_pgtables() -- and consequently leaking them.
> + *
> + * The result is that we take at least one !speculative fault per PMD
> + * in order to instantiate it.
> + */
> +
> + vmf.pte = pte_offset_map(vmf.pmd, address);
> + vmf.orig_pte = READ_ONCE(*vmf.pte);
> + barrier(); /* See comment in handle_pte_fault() */
> + if (pte_none(vmf.orig_pte)) {
> + pte_unmap(vmf.pte);
> + vmf.pte = NULL;
> + }
> +
> + vmf.vma = vma;
> + vmf.pgoff = linear_page_index(vma, address);
> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
> + vmf.sequence = seq;
> + vmf.flags = flags;
> +
> + local_irq_enable();
> +
> + /*
> + * We need to re-validate the VMA after checking the bounds, otherwise
> + * we might have a false positive on the bounds.
> + */
> + if (read_seqcount_retry(&vma->vm_sequence, seq))
> + goto out_put;
> +
> + mem_cgroup_oom_enable();
> + ret = handle_pte_fault(&vmf);
> + mem_cgroup_oom_disable();
> +
> + put_vma(vma);
> +
> + /*
> + * The task may have entered a memcg OOM situation but
> + * if the allocation error was handled gracefully (no
> + * VM_FAULT_OOM), there is no need to kill anything.
> + * Just clean up the OOM state peacefully.
> + */
> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
> + mem_cgroup_oom_synchronize(false);
> + return ret;
> +
> +out_walk:
> + local_irq_enable();
> +out_put:
> + put_vma(vma);
> + return ret;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
> /*
> * By the time we get here, we already hold the mm semaphore
> *
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
@ 2018-07-24 14:26 ` zhong jiang
0 siblings, 0 replies; 106+ messages in thread
From: zhong jiang @ 2018-07-24 14:26 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 2018/5/17 19:06, Laurent Dufour wrote:
> From: Peter Zijlstra <peterz@infradead.org>
>
> Provide infrastructure to do a speculative fault (not holding
> mmap_sem).
>
> The not holding of mmap_sem means we can race against VMA
> change/removal and page-table destruction. We use the SRCU VMA freeing
> to keep the VMA around. We use the VMA seqcount to detect change
> (including umapping / page-table deletion) and we use gup_fast() style
> page-table walking to deal with page-table races.
>
> Once we've obtained the page and are ready to update the PTE, we
> validate if the state we started the fault with is still valid, if
> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
> PTE and we're done.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> [Manage the newly introduced pte_spinlock() for speculative page
> fault to fail if the VMA is touched in our back]
> [Rename vma_is_dead() to vma_has_changed() and declare it here]
> [Fetch p4d and pud]
> [Set vmd.sequence in __handle_mm_fault()]
> [Abort speculative path when handle_userfault() has to be called]
> [Add additional VMA's flags checks in handle_speculative_fault()]
> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
> [Remove warning comment about waiting for !seq&1 since we don't want
> to wait]
> [Remove warning about no huge page support, mention it explictly]
> [Don't call do_fault() in the speculative path as __do_fault() calls
> vma->vm_ops->fault() which may want to release mmap_sem]
> [Only vm_fault pointer argument for vma_has_changed()]
> [Fix check against huge page, calling pmd_trans_huge()]
> [Use READ_ONCE() when reading VMA's fields in the speculative path]
> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
> processing done in vm_normal_page()]
> [Check that vma->anon_vma is already set when starting the speculative
> path]
> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
> the processing done in mpol_misplaced()]
> [Don't support VMA growing up or down]
> [Move check on vm_sequence just before calling handle_pte_fault()]
> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
> [Add mem cgroup oom check]
> [Use READ_ONCE to access p*d entries]
> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
> [Don't fetch pte again in handle_pte_fault() when running the speculative
> path]
> [Check PMD against concurrent collapsing operation]
> [Try spin lock the pte during the speculative path to avoid deadlock with
> other CPU's invalidating the TLB and requiring this CPU to catch the
> inter processor's interrupt]
> [Move define of FAULT_FLAG_SPECULATIVE here]
> [Introduce __handle_speculative_fault() and add a check against
> mm->mm_users in handle_speculative_fault() defined in mm.h]
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> include/linux/hugetlb_inline.h | 2 +-
> include/linux/mm.h | 30 ++++
> include/linux/pagemap.h | 4 +-
> mm/internal.h | 16 +-
> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
> 5 files changed, 385 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 0660a03d37d9..9e25283d6fc9 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -8,7 +8,7 @@
>
> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> {
> - return !!(vma->vm_flags & VM_HUGETLB);
> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
> }
>
> #else
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 05cbba70104b..31acf98a7d92 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>
> #define FAULT_FLAG_TRACE \
> { FAULT_FLAG_WRITE, "WRITE" }, \
> @@ -343,6 +344,10 @@ struct vm_fault {
> gfp_t gfp_mask; /* gfp mask to be used for allocations */
> pgoff_t pgoff; /* Logical page offset based on vma */
> unsigned long address; /* Faulting virtual address */
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + unsigned int sequence;
> + pmd_t orig_pmd; /* value of PMD at the time of fault */
> +#endif
> pmd_t *pmd; /* Pointer to pmd entry matching
> * the 'address' */
> pud_t *pud; /* Pointer to pud entry matching
> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
> #ifdef CONFIG_MMU
> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> unsigned int flags);
> +
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +extern int __handle_speculative_fault(struct mm_struct *mm,
> + unsigned long address,
> + unsigned int flags);
> +static inline int handle_speculative_fault(struct mm_struct *mm,
> + unsigned long address,
> + unsigned int flags)
> +{
> + /*
> + * Try speculative page fault for multithreaded user space task only.
> + */
> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
> + return VM_FAULT_RETRY;
> + return __handle_speculative_fault(mm, address, flags);
> +}
> +#else
> +static inline int handle_speculative_fault(struct mm_struct *mm,
> + unsigned long address,
> + unsigned int flags)
> +{
> + return VM_FAULT_RETRY;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
> unsigned long address, unsigned int fault_flags,
> bool *unlocked);
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index b1bd2186e6d2..6e2aa4e79af7 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
> pgoff_t pgoff;
> if (unlikely(is_vm_hugetlb_page(vma)))
> return linear_hugepage_index(vma, address);
> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
> - pgoff += vma->vm_pgoff;
> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
> + pgoff += READ_ONCE(vma->vm_pgoff);
> return pgoff;
> }
>
> diff --git a/mm/internal.h b/mm/internal.h
> index fb2667b20f0a..10b188c87fa4 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
> unsigned long addr);
> extern void put_vma(struct vm_area_struct *vma);
> -#endif
> +
> +static inline bool vma_has_changed(struct vm_fault *vmf)
> +{
> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
> +
> + /*
> + * Matches both the wmb in write_seqlock_{begin,end}() and
> + * the wmb in vma_rb_erase().
> + */
> + smp_rmb();
> +
> + return ret || seq != vmf->sequence;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>
> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
> unsigned long floor, unsigned long ceiling);
> diff --git a/mm/memory.c b/mm/memory.c
> index ab32b0b4bd69..7bbbb8c7b9cd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
> if (page)
> dump_page(page, "bad pte");
> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
> + mapping, index);
> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
> vma->vm_file,
> vma->vm_ops ? vma->vm_ops->fault : NULL,
> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(apply_to_page_range);
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +static bool pte_spinlock(struct vm_fault *vmf)
> +{
> + bool ret = false;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + pmd_t pmdval;
> +#endif
> +
> + /* Check if vma is still valid */
> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + spin_lock(vmf->ptl);
> + return true;
> + }
> +
> +again:
> + local_irq_disable();
> + if (vma_has_changed(vmf))
> + goto out;
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + /*
> + * We check if the pmd value is still the same to ensure that there
> + * is not a huge collapse operation in progress in our back.
> + */
> + pmdval = READ_ONCE(*vmf->pmd);
> + if (!pmd_same(pmdval, vmf->orig_pmd))
> + goto out;
> +#endif
> +
> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + if (unlikely(!spin_trylock(vmf->ptl))) {
> + local_irq_enable();
> + goto again;
> + }
> +
> + if (vma_has_changed(vmf)) {
> + spin_unlock(vmf->ptl);
> + goto out;
> + }
> +
> + ret = true;
> +out:
> + local_irq_enable();
> + return ret;
> +}
> +
> +static bool pte_map_lock(struct vm_fault *vmf)
> +{
> + bool ret = false;
> + pte_t *pte;
> + spinlock_t *ptl;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + pmd_t pmdval;
> +#endif
> +
> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> + vmf->address, &vmf->ptl);
> + return true;
> + }
> +
> + /*
> + * The first vma_has_changed() guarantees the page-tables are still
> + * valid, having IRQs disabled ensures they stay around, hence the
> + * second vma_has_changed() to make sure they are still valid once
> + * we've got the lock. After that a concurrent zap_pte_range() will
> + * block on the PTL and thus we're safe.
> + */
> +again:
> + local_irq_disable();
> + if (vma_has_changed(vmf))
> + goto out;
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + /*
> + * We check if the pmd value is still the same to ensure that there
> + * is not a huge collapse operation in progress in our back.
> + */
> + pmdval = READ_ONCE(*vmf->pmd);
> + if (!pmd_same(pmdval, vmf->orig_pmd))
> + goto out;
> +#endif
> +
> + /*
> + * Same as pte_offset_map_lock() except that we call
> + * spin_trylock() in place of spin_lock() to avoid race with
> + * unmap path which may have the lock and wait for this CPU
> + * to invalidate TLB but this CPU has irq disabled.
> + * Since we are in a speculative patch, accept it could fail
> + */
> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> + pte = pte_offset_map(vmf->pmd, vmf->address);
> + if (unlikely(!spin_trylock(ptl))) {
> + pte_unmap(pte);
> + local_irq_enable();
> + goto again;
> + }
> +
> + if (vma_has_changed(vmf)) {
> + pte_unmap_unlock(pte, ptl);
> + goto out;
> + }
> +
> + vmf->pte = pte;
> + vmf->ptl = ptl;
> + ret = true;
> +out:
> + local_irq_enable();
> + return ret;
> +}
> +#else
> static inline bool pte_spinlock(struct vm_fault *vmf)
> {
> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
> vmf->address, &vmf->ptl);
> return true;
> }
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>
> /*
> * handle_pte_fault chooses page fault handler according to an entry which was
> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
> ret = check_stable_address_space(vma->vm_mm);
> if (ret)
> goto unlock;
> + /*
> + * Don't call the userfaultfd during the speculative path.
> + * We already checked for the VMA to not be managed through
> + * userfaultfd, but it may be set in our back once we have lock
> + * the pte. In such a case we can ignore it this time.
> + */
> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> + goto setpte;
> /* Deliver the page fault to userland, check inside PT lock */
> if (userfaultfd_missing(vma)) {
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
> goto unlock_and_release;
>
> /* Deliver the page fault to userland, check inside PT lock */
> - if (userfaultfd_missing(vma)) {
> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> mem_cgroup_cancel_charge(page, memcg, false);
> put_page(page);
> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>
> if (unlikely(pmd_none(*vmf->pmd))) {
> /*
> + * In the case of the speculative page fault handler we abort
> + * the speculative path immediately as the pmd is probably
> + * in the way to be converted in a huge one. We will try
> + * again holding the mmap_sem (which implies that the collapse
> + * operation is done).
> + */
> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> + return VM_FAULT_RETRY;
> + /*
> * Leave __pte_alloc() until later: because vm_ops->fault may
> * want to allocate huge page, and if we expose page table
> * for an instant, it will be difficult to retract from
> * concurrent faults and from rmap lookups.
> */
> vmf->pte = NULL;
> - } else {
> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
> /* See comment in pte_alloc_one_map() */
> if (pmd_devmap_trans_unstable(vmf->pmd))
> return 0;
> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
> * pmd from under us anymore at this point because we hold the
> * mmap_sem read mode and khugepaged takes it in write mode.
> * So now it's safe to run pte_offset_map().
> + * This is not applicable to the speculative page fault handler
> + * but in that case, the pte is fetched earlier in
> + * handle_speculative_fault().
> */
> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
> vmf->orig_pte = *vmf->pte;
> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
> if (!vmf->pte) {
> if (vma_is_anonymous(vmf->vma))
> return do_anonymous_page(vmf);
> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> + return VM_FAULT_RETRY;
> else
> return do_fault(vmf);
> }
> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
> if (!vmf.pmd)
> return VM_FAULT_OOM;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
> +#endif
> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
> ret = create_huge_pmd(&vmf);
> if (!(ret & VM_FAULT_FALLBACK))
> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> return handle_pte_fault(&vmf);
> }
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +/*
> + * Tries to handle the page fault in a speculative way, without grabbing the
> + * mmap_sem.
> + */
> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
> + unsigned int flags)
> +{
> + struct vm_fault vmf = {
> + .address = address,
> + };
> + pgd_t *pgd, pgdval;
> + p4d_t *p4d, p4dval;
> + pud_t pudval;
> + int seq, ret = VM_FAULT_RETRY;
> + struct vm_area_struct *vma;
> +#ifdef CONFIG_NUMA
> + struct mempolicy *pol;
> +#endif
> +
> + /* Clear flags that may lead to releasing the mmap_sem to retry */
> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
> + flags |= FAULT_FLAG_SPECULATIVE;
> +
> + vma = get_vma(mm, address);
> + if (!vma)
> + return ret;
> +
> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
> + if (seq & 1)
> + goto out_put;
> +
> + /*
> + * Can't call vm_ops services as we don't know what they would do
> + * with the VMA.
> + * This includes huge pages from hugetlbfs.
> + */
> + if (vma->vm_ops)
> + goto out_put;
> +
Hi Laurent
I think that most page faults will leave here. Is there any case that needs to be skipped?
I have tested the following patch, and it works well.
diff --git a/mm/memory.c b/mm/memory.c
index 936128b..9bc1545 100644
@@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
if (!fe->pte) {
if (vma_is_anonymous(fe->vma))
return do_anonymous_page(fe);
- else if (fe->flags & FAULT_FLAG_SPECULATIVE)
- return VM_FAULT_RETRY;
else
return do_fault(fe);
}
@@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
goto out_put;
}
/*
> - * Can't call vm_ops services as we don't know what they would do
> - * with the VMA.
> - * This includes huge pages from hugetlbfs.
- */
- if (vma->vm_ops) {
- trace_spf_vma_notsup(_RET_IP_, vma, address);
- goto out_put;
- }
Thanks
zhong jiang
> + /*
> + * __anon_vma_prepare() requires the mmap_sem to be held
> + * because vm_next and vm_prev must be safe. This can't be guaranteed
> + * in the speculative path.
> + */
> + if (unlikely(!vma->anon_vma))
> + goto out_put;
> +
> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
> +
> + /* Can't call userland page fault handler in the speculative path */
> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
> + goto out_put;
> +
> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
> + /*
> + * This could be detected by checking the address against the VMA's
> + * boundaries, but we want to trace it as not supported instead
> + * of changed.
> + */
> + goto out_put;
> +
> + if (address < READ_ONCE(vma->vm_start)
> + || READ_ONCE(vma->vm_end) <= address)
> + goto out_put;
> +
> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> + flags & FAULT_FLAG_INSTRUCTION,
> + flags & FAULT_FLAG_REMOTE)) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out_put;
> + }
> +
> + /* This one is required to check that the VMA has write access set */
> + if (flags & FAULT_FLAG_WRITE) {
> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out_put;
> + }
> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
> + ret = VM_FAULT_SIGSEGV;
> + goto out_put;
> + }
> +
> +#ifdef CONFIG_NUMA
> + /*
> + * MPOL_INTERLEAVE implies additional checks in
> + * mpol_misplaced() which are not compatible with the
> + * speculative page fault processing.
> + */
> + pol = __get_vma_policy(vma, address);
> + if (!pol)
> + pol = get_task_policy(current);
> + if (pol && pol->mode == MPOL_INTERLEAVE)
> + goto out_put;
> +#endif
> +
> + /*
> + * Do a speculative lookup of the PTE entry.
> + */
> + local_irq_disable();
> + pgd = pgd_offset(mm, address);
> + pgdval = READ_ONCE(*pgd);
> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
> + goto out_walk;
> +
> + p4d = p4d_offset(pgd, address);
> + p4dval = READ_ONCE(*p4d);
> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
> + goto out_walk;
> +
> + vmf.pud = pud_offset(p4d, address);
> + pudval = READ_ONCE(*vmf.pud);
> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
> + goto out_walk;
> +
> + /* Huge pages at PUD level are not supported. */
> + if (unlikely(pud_trans_huge(pudval)))
> + goto out_walk;
> +
> + vmf.pmd = pmd_offset(vmf.pud, address);
> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
> + /*
> + * pmd_none could mean that a hugepage collapse is in progress
> + * behind our back, as collapse_huge_page() marks it before
> + * invalidating the pte (which is done once the IPI is caught
> + * by all CPUs and interrupts are disabled).
> + * For this reason we cannot handle THP in a speculative way since we
> + * can't safely identify an in-progress collapse operation done behind
> + * our back on that PMD.
> + * Regarding the order of the following checks, see comment in
> + * pmd_devmap_trans_unstable()
> + */
> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
> + is_swap_pmd(vmf.orig_pmd)))
> + goto out_walk;
> +
> + /*
> + * The above does not allocate/instantiate page-tables because doing so
> + * would lead to the possibility of instantiating page-tables after
> + * free_pgtables() -- and consequently leaking them.
> + *
> + * The result is that we take at least one !speculative fault per PMD
> + * in order to instantiate it.
> + */
> +
> + vmf.pte = pte_offset_map(vmf.pmd, address);
> + vmf.orig_pte = READ_ONCE(*vmf.pte);
> + barrier(); /* See comment in handle_pte_fault() */
> + if (pte_none(vmf.orig_pte)) {
> + pte_unmap(vmf.pte);
> + vmf.pte = NULL;
> + }
> +
> + vmf.vma = vma;
> + vmf.pgoff = linear_page_index(vma, address);
> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
> + vmf.sequence = seq;
> + vmf.flags = flags;
> +
> + local_irq_enable();
> +
> + /*
> + * We need to re-validate the VMA after checking the bounds, otherwise
> + * we might have a false positive on the bounds.
> + */
> + if (read_seqcount_retry(&vma->vm_sequence, seq))
> + goto out_put;
> +
> + mem_cgroup_oom_enable();
> + ret = handle_pte_fault(&vmf);
> + mem_cgroup_oom_disable();
> +
> + put_vma(vma);
> +
> + /*
> + * The task may have entered a memcg OOM situation but
> + * if the allocation error was handled gracefully (no
> + * VM_FAULT_OOM), there is no need to kill anything.
> + * Just clean up the OOM state peacefully.
> + */
> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
> + mem_cgroup_oom_synchronize(false);
> + return ret;
> +
> +out_walk:
> + local_irq_enable();
> +out_put:
> + put_vma(vma);
> + return ret;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
> /*
> * By the time we get here, we already hold the mm semaphore
> *
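The vm_sequence checks quoted above follow the classic seqcount pattern: snapshot the counter, do the speculative work, then retry if a writer was active or ran in between. A minimal user-space sketch of that pattern (the names are illustrative stand-ins, not the kernel's seqcount API):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal user-space analogue of the vm_sequence seqcount in the quoted
 * code.  Writers bump the counter before and after an update, so an odd
 * value means "update in progress". */
struct seqcount { unsigned sequence; };

static unsigned spf_read_begin(const struct seqcount *s)
{
	return s->sequence;	/* snapshot; the kernel adds smp_rmb() here */
}

/* True if the speculative work must be retried: a writer was active
 * when the snapshot was taken (odd), or ran since (value changed). */
static bool spf_read_retry(const struct seqcount *s, unsigned start)
{
	return (start & 1) || s->sequence != start;
}

static void spf_write_begin(struct seqcount *s) { s->sequence++; }
static void spf_write_end(struct seqcount *s)   { s->sequence++; }
```

In the quoted patch, raw_read_seqcount() plays the role of spf_read_begin() and read_seqcount_retry()/vma_has_changed() the role of spf_read_retry(), with VMA updates and vma_rb_erase() acting as writers.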
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
2018-07-24 14:26 ` zhong jiang
@ 2018-07-24 16:10 ` Laurent Dufour
2018-07-25 9:04 ` zhong jiang
-1 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-07-24 16:10 UTC (permalink / raw)
To: zhong jiang
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 24/07/2018 16:26, zhong jiang wrote:
> On 2018/5/17 19:06, Laurent Dufour wrote:
>> From: Peter Zijlstra <peterz@infradead.org>
>>
>> Provide infrastructure to do a speculative fault (not holding
>> mmap_sem).
>>
>> The not holding of mmap_sem means we can race against VMA
>> change/removal and page-table destruction. We use the SRCU VMA freeing
>> to keep the VMA around. We use the VMA seqcount to detect change
>> (including unmapping / page-table deletion) and we use gup_fast() style
>> page-table walking to deal with page-table races.
>>
>> Once we've obtained the page and are ready to update the PTE, we
>> validate if the state we started the fault with is still valid, if
>> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
>> PTE and we're done.
>>
>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>
>> [Manage the newly introduced pte_spinlock() for speculative page
>> fault to fail if the VMA is touched in our back]
>> [Rename vma_is_dead() to vma_has_changed() and declare it here]
>> [Fetch p4d and pud]
>> [Set vmf.sequence in __handle_mm_fault()]
>> [Abort speculative path when handle_userfault() has to be called]
>> [Add additional VMA's flags checks in handle_speculative_fault()]
>> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
>> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
>> [Remove warning comment about waiting for !seq&1 since we don't want
>> to wait]
>> [Remove warning about no huge page support, mention it explicitly]
>> [Don't call do_fault() in the speculative path as __do_fault() calls
>> vma->vm_ops->fault() which may want to release mmap_sem]
>> [Only vm_fault pointer argument for vma_has_changed()]
>> [Fix check against huge page, calling pmd_trans_huge()]
>> [Use READ_ONCE() when reading VMA's fields in the speculative path]
>> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
>> processing done in vm_normal_page()]
>> [Check that vma->anon_vma is already set when starting the speculative
>> path]
>> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
>> the processing done in mpol_misplaced()]
>> [Don't support VMA growing up or down]
>> [Move check on vm_sequence just before calling handle_pte_fault()]
>> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
>> [Add mem cgroup oom check]
>> [Use READ_ONCE to access p*d entries]
>> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
>> [Don't fetch pte again in handle_pte_fault() when running the speculative
>> path]
>> [Check PMD against concurrent collapsing operation]
>> [Try spin lock the pte during the speculative path to avoid deadlock with
>> other CPU's invalidating the TLB and requiring this CPU to catch the
>> inter-processor interrupt]
>> [Move define of FAULT_FLAG_SPECULATIVE here]
>> [Introduce __handle_speculative_fault() and add a check against
>> mm->mm_users in handle_speculative_fault() defined in mm.h]
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>> include/linux/hugetlb_inline.h | 2 +-
>> include/linux/mm.h | 30 ++++
>> include/linux/pagemap.h | 4 +-
>> mm/internal.h | 16 +-
>> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
>> 5 files changed, 385 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>> index 0660a03d37d9..9e25283d6fc9 100644
>> --- a/include/linux/hugetlb_inline.h
>> +++ b/include/linux/hugetlb_inline.h
>> @@ -8,7 +8,7 @@
>>
>> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>> {
>> - return !!(vma->vm_flags & VM_HUGETLB);
>> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
>> }
>>
>> #else
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 05cbba70104b..31acf98a7d92 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
>> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
>> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
>> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
>> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>>
>> #define FAULT_FLAG_TRACE \
>> { FAULT_FLAG_WRITE, "WRITE" }, \
>> @@ -343,6 +344,10 @@ struct vm_fault {
>> gfp_t gfp_mask; /* gfp mask to be used for allocations */
>> pgoff_t pgoff; /* Logical page offset based on vma */
>> unsigned long address; /* Faulting virtual address */
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> + unsigned int sequence;
>> + pmd_t orig_pmd; /* value of PMD at the time of fault */
>> +#endif
>> pmd_t *pmd; /* Pointer to pmd entry matching
>> * the 'address' */
>> pud_t *pud; /* Pointer to pud entry matching
>> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
>> #ifdef CONFIG_MMU
>> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>> unsigned int flags);
>> +
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +extern int __handle_speculative_fault(struct mm_struct *mm,
>> + unsigned long address,
>> + unsigned int flags);
>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>> + unsigned long address,
>> + unsigned int flags)
>> +{
>> + /*
>> + * Try speculative page fault for multithreaded user space task only.
>> + */
>> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
>> + return VM_FAULT_RETRY;
>> + return __handle_speculative_fault(mm, address, flags);
>> +}
>> +#else
>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>> + unsigned long address,
>> + unsigned int flags)
>> +{
>> + return VM_FAULT_RETRY;
>> +}
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>> +
>> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
>> unsigned long address, unsigned int fault_flags,
>> bool *unlocked);
>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>> index b1bd2186e6d2..6e2aa4e79af7 100644
>> --- a/include/linux/pagemap.h
>> +++ b/include/linux/pagemap.h
>> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
>> pgoff_t pgoff;
>> if (unlikely(is_vm_hugetlb_page(vma)))
>> return linear_hugepage_index(vma, address);
>> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
>> - pgoff += vma->vm_pgoff;
>> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
>> + pgoff += READ_ONCE(vma->vm_pgoff);
>> return pgoff;
>> }
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index fb2667b20f0a..10b188c87fa4 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
>> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
>> unsigned long addr);
>> extern void put_vma(struct vm_area_struct *vma);
>> -#endif
>> +
>> +static inline bool vma_has_changed(struct vm_fault *vmf)
>> +{
>> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
>> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
>> +
>> + /*
>> + * Matches both the wmb in write_seqlock_{begin,end}() and
>> + * the wmb in vma_rb_erase().
>> + */
>> + smp_rmb();
>> +
>> + return ret || seq != vmf->sequence;
>> +}
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>
>> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>> unsigned long floor, unsigned long ceiling);
>> diff --git a/mm/memory.c b/mm/memory.c
>> index ab32b0b4bd69..7bbbb8c7b9cd 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>> if (page)
>> dump_page(page, "bad pte");
>> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
>> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
>> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
>> + mapping, index);
>> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
>> vma->vm_file,
>> vma->vm_ops ? vma->vm_ops->fault : NULL,
>> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>> }
>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +static bool pte_spinlock(struct vm_fault *vmf)
>> +{
>> + bool ret = false;
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> + pmd_t pmdval;
>> +#endif
>> +
>> + /* Check if vma is still valid */
>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>> + spin_lock(vmf->ptl);
>> + return true;
>> + }
>> +
>> +again:
>> + local_irq_disable();
>> + if (vma_has_changed(vmf))
>> + goto out;
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> + /*
>> + * We check if the pmd value is still the same to ensure that there
>> + * is not a huge collapse operation in progress behind our back.
>> + */
>> + pmdval = READ_ONCE(*vmf->pmd);
>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>> + goto out;
>> +#endif
>> +
>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>> + if (unlikely(!spin_trylock(vmf->ptl))) {
>> + local_irq_enable();
>> + goto again;
>> + }
>> +
>> + if (vma_has_changed(vmf)) {
>> + spin_unlock(vmf->ptl);
>> + goto out;
>> + }
>> +
>> + ret = true;
>> +out:
>> + local_irq_enable();
>> + return ret;
>> +}
>> +
>> +static bool pte_map_lock(struct vm_fault *vmf)
>> +{
>> + bool ret = false;
>> + pte_t *pte;
>> + spinlock_t *ptl;
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> + pmd_t pmdval;
>> +#endif
>> +
>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>> + vmf->address, &vmf->ptl);
>> + return true;
>> + }
>> +
>> + /*
>> + * The first vma_has_changed() guarantees the page-tables are still
>> + * valid, having IRQs disabled ensures they stay around, hence the
>> + * second vma_has_changed() to make sure they are still valid once
>> + * we've got the lock. After that a concurrent zap_pte_range() will
>> + * block on the PTL and thus we're safe.
>> + */
>> +again:
>> + local_irq_disable();
>> + if (vma_has_changed(vmf))
>> + goto out;
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> + /*
>> + * We check if the pmd value is still the same to ensure that there
>> + * is not a huge collapse operation in progress behind our back.
>> + */
>> + pmdval = READ_ONCE(*vmf->pmd);
>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>> + goto out;
>> +#endif
>> +
>> + /*
>> + * Same as pte_offset_map_lock() except that we call
>> + * spin_trylock() in place of spin_lock() to avoid race with
>> + * unmap path which may have the lock and wait for this CPU
>> + * to invalidate TLB but this CPU has irq disabled.
>> + * Since we are on a speculative path, accept that it could fail.
>> + */
>> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>> + pte = pte_offset_map(vmf->pmd, vmf->address);
>> + if (unlikely(!spin_trylock(ptl))) {
>> + pte_unmap(pte);
>> + local_irq_enable();
>> + goto again;
>> + }
>> +
>> + if (vma_has_changed(vmf)) {
>> + pte_unmap_unlock(pte, ptl);
>> + goto out;
>> + }
>> +
>> + vmf->pte = pte;
>> + vmf->ptl = ptl;
>> + ret = true;
>> +out:
>> + local_irq_enable();
>> + return ret;
>> +}
>> +#else
>> static inline bool pte_spinlock(struct vm_fault *vmf)
>> {
>> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>> vmf->address, &vmf->ptl);
>> return true;
>> }
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>
>> /*
>> * handle_pte_fault chooses page fault handler according to an entry which was
>> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
>> ret = check_stable_address_space(vma->vm_mm);
>> if (ret)
>> goto unlock;
>> + /*
>> + * Don't call the userfaultfd during the speculative path.
>> + * We already checked that the VMA is not managed through
>> + * userfaultfd, but it may be set behind our back once we have
>> + * locked the pte. In such a case we can ignore it this time.
>> + */
>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>> + goto setpte;
>> /* Deliver the page fault to userland, check inside PT lock */
>> if (userfaultfd_missing(vma)) {
>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>> goto unlock_and_release;
>>
>> /* Deliver the page fault to userland, check inside PT lock */
>> - if (userfaultfd_missing(vma)) {
>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>> mem_cgroup_cancel_charge(page, memcg, false);
>> put_page(page);
>> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>
>> if (unlikely(pmd_none(*vmf->pmd))) {
>> /*
>> + * In the case of the speculative page fault handler we abort
>> + * the speculative path immediately as the pmd is probably
>> + * in the way to be converted in a huge one. We will try
>> + * again holding the mmap_sem (which implies that the collapse
>> + * operation is done).
>> + */
>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>> + return VM_FAULT_RETRY;
>> + /*
>> * Leave __pte_alloc() until later: because vm_ops->fault may
>> * want to allocate huge page, and if we expose page table
>> * for an instant, it will be difficult to retract from
>> * concurrent faults and from rmap lookups.
>> */
>> vmf->pte = NULL;
>> - } else {
>> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>> /* See comment in pte_alloc_one_map() */
>> if (pmd_devmap_trans_unstable(vmf->pmd))
>> return 0;
>> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
>> * pmd from under us anymore at this point because we hold the
>> * mmap_sem read mode and khugepaged takes it in write mode.
>> * So now it's safe to run pte_offset_map().
>> + * This is not applicable to the speculative page fault handler
>> + * but in that case, the pte is fetched earlier in
>> + * handle_speculative_fault().
>> */
>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>> vmf->orig_pte = *vmf->pte;
>> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
>> if (!vmf->pte) {
>> if (vma_is_anonymous(vmf->vma))
>> return do_anonymous_page(vmf);
>> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>> + return VM_FAULT_RETRY;
>> else
>> return do_fault(vmf);
>> }
>> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
>> if (!vmf.pmd)
>> return VM_FAULT_OOM;
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
>> +#endif
>> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
>> ret = create_huge_pmd(&vmf);
>> if (!(ret & VM_FAULT_FALLBACK))
>> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>> return handle_pte_fault(&vmf);
>> }
>>
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +/*
>> + * Tries to handle the page fault in a speculative way, without grabbing the
>> + * mmap_sem.
>> + */
>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>> + unsigned int flags)
>> +{
>> + struct vm_fault vmf = {
>> + .address = address,
>> + };
>> + pgd_t *pgd, pgdval;
>> + p4d_t *p4d, p4dval;
>> + pud_t pudval;
>> + int seq, ret = VM_FAULT_RETRY;
>> + struct vm_area_struct *vma;
>> +#ifdef CONFIG_NUMA
>> + struct mempolicy *pol;
>> +#endif
>> +
>> + /* Clear flags that may lead to releasing the mmap_sem to retry */
>> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>> + flags |= FAULT_FLAG_SPECULATIVE;
>> +
>> + vma = get_vma(mm, address);
>> + if (!vma)
>> + return ret;
>> +
>> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>> + if (seq & 1)
>> + goto out_put;
>> +
>> + /*
>> + * Can't call vm_ops services as we don't know what they would do
>> + * with the VMA.
>> + * This includes huge pages from hugetlbfs.
>> + */
>> + if (vma->vm_ops)
>> + goto out_put;
>> +
> Hi Laurent
>
> I think that most page faults will leave here. Is there any case that needs to be skipped?
> I have tested the following patch, and it works well.
Hi Zhong,
Well, this would allow file mappings to be handled in a speculative way, but
that's a bit dangerous today as there is no guarantee that the vm_ops->fault()
operation will be well behaved.
In the case of an anonymous file mapping that's often not a problem, depending
on the underlying file system, but there are so many cases to check that it is
hard to say this can be done in a speculative way as is.
The huge work to do is to double check that none of the code called by
vm_ops->fault() is dealing with the mmap_sem, which could be handled using
FAULT_FLAG_RETRY_NOWAIT, and care is also needed about the resources that code
is managing, as it may assume that it is under the protection of the mmap_sem
in read mode, and that assumption can be implicit.
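For what it's worth, the eligibility gating being debated here (the mm_users check in the mm.h wrapper plus the early vm_ops/anon_vma/flags checks in __handle_speculative_fault()) condenses to something like the sketch below; struct vma_stub and the flag values are reduced stand-ins, not the kernel's definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reduced stand-in for vm_area_struct: just the fields the early
 * checks look at.  Flag values are illustrative. */
#define VM_GROWSDOWN	0x0100UL
#define VM_GROWSUP	0x0200UL

struct vma_stub {
	const void *vm_ops;	/* non-NULL: file or special mapping */
	const void *anon_vma;	/* NULL: __anon_vma_prepare() still needed */
	unsigned long vm_flags;
};

/* Mirrors the gating logic in the quoted patch: only multi-threaded,
 * anonymous, non-growing mappings take the speculative path. */
static bool spf_eligible(const struct vma_stub *vma, int mm_users)
{
	if (mm_users == 1)
		return false;	/* single-threaded: no mmap_sem contention */
	if (vma->vm_ops)
		return false;	/* vm_ops->fault() may depend on mmap_sem */
	if (!vma->anon_vma)
		return false;	/* anon_vma setup requires mmap_sem */
	if (vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP))
		return false;	/* stack VMAs may move behind us */
	return true;
}
```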
Cheers,
Laurent.
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 936128b..9bc1545 100644
> @@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
> if (!fe->pte) {
> if (vma_is_anonymous(fe->vma))
> return do_anonymous_page(fe);
> - else if (fe->flags & FAULT_FLAG_SPECULATIVE)
> - return VM_FAULT_RETRY;
> else
> return do_fault(fe);
> }
> @@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
> goto out_put;
> }
> /*
> - * Can't call vm_ops services as we don't know what they would do
> - * with the VMA.
> - * This includes huge pages from hugetlbfs.
> - */
> - if (vma->vm_ops) {
> - trace_spf_vma_notsup(_RET_IP_, vma, address);
> - goto out_put;
> - }
>
>
> Thanks
> zhong jiang
>> + /*
>> + * __anon_vma_prepare() requires the mmap_sem to be held
>> + * because vm_next and vm_prev must be safe. This can't be guaranteed
>> + * in the speculative path.
>> + */
>> + if (unlikely(!vma->anon_vma))
>> + goto out_put;
>> +
>> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
>> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>> +
>> + /* Can't call userland page fault handler in the speculative path */
>> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>> + goto out_put;
>> +
>> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>> + /*
>> + * This could be detected by checking the address against the VMA's
>> + * boundaries, but we want to trace it as not supported instead
>> + * of changed.
>> + */
>> + goto out_put;
>> +
>> + if (address < READ_ONCE(vma->vm_start)
>> + || READ_ONCE(vma->vm_end) <= address)
>> + goto out_put;
>> +
>> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>> + flags & FAULT_FLAG_INSTRUCTION,
>> + flags & FAULT_FLAG_REMOTE)) {
>> + ret = VM_FAULT_SIGSEGV;
>> + goto out_put;
>> + }
>> +
>> + /* This one is required to check that the VMA has write access set */
>> + if (flags & FAULT_FLAG_WRITE) {
>> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>> + ret = VM_FAULT_SIGSEGV;
>> + goto out_put;
>> + }
>> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>> + ret = VM_FAULT_SIGSEGV;
>> + goto out_put;
>> + }
>> +
>> +#ifdef CONFIG_NUMA
>> + /*
>> + * MPOL_INTERLEAVE implies additional checks in
>> + * mpol_misplaced() which are not compatible with the
>> + * speculative page fault processing.
>> + */
>> + pol = __get_vma_policy(vma, address);
>> + if (!pol)
>> + pol = get_task_policy(current);
>> + if (pol && pol->mode == MPOL_INTERLEAVE)
>> + goto out_put;
>> +#endif
>> +
>> + /*
>> + * Do a speculative lookup of the PTE entry.
>> + */
>> + local_irq_disable();
>> + pgd = pgd_offset(mm, address);
>> + pgdval = READ_ONCE(*pgd);
>> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>> + goto out_walk;
>> +
>> + p4d = p4d_offset(pgd, address);
>> + p4dval = READ_ONCE(*p4d);
>> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>> + goto out_walk;
>> +
>> + vmf.pud = pud_offset(p4d, address);
>> + pudval = READ_ONCE(*vmf.pud);
>> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>> + goto out_walk;
>> +
>> + /* Huge pages at PUD level are not supported. */
>> + if (unlikely(pud_trans_huge(pudval)))
>> + goto out_walk;
>> +
>> + vmf.pmd = pmd_offset(vmf.pud, address);
>> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>> + /*
>> + * pmd_none could mean that a hugepage collapse is in progress
>> + * behind our back, as collapse_huge_page() marks it before
>> + * invalidating the pte (which is done once the IPI is caught
>> + * by all CPUs and interrupts are disabled).
>> + * For this reason we cannot handle THP in a speculative way since we
>> + * can't safely identify an in-progress collapse operation done behind
>> + * our back on that PMD.
>> + * Regarding the order of the following checks, see comment in
>> + * pmd_devmap_trans_unstable()
>> + */
>> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>> + is_swap_pmd(vmf.orig_pmd)))
>> + goto out_walk;
>> +
>> + /*
>> + * The above does not allocate/instantiate page-tables because doing so
>> + * would lead to the possibility of instantiating page-tables after
>> + * free_pgtables() -- and consequently leaking them.
>> + *
>> + * The result is that we take at least one !speculative fault per PMD
>> + * in order to instantiate it.
>> + */
>> +
>> + vmf.pte = pte_offset_map(vmf.pmd, address);
>> + vmf.orig_pte = READ_ONCE(*vmf.pte);
>> + barrier(); /* See comment in handle_pte_fault() */
>> + if (pte_none(vmf.orig_pte)) {
>> + pte_unmap(vmf.pte);
>> + vmf.pte = NULL;
>> + }
>> +
>> + vmf.vma = vma;
>> + vmf.pgoff = linear_page_index(vma, address);
>> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
>> + vmf.sequence = seq;
>> + vmf.flags = flags;
>> +
>> + local_irq_enable();
>> +
>> + /*
>> + * We need to re-validate the VMA after checking the bounds, otherwise
>> + * we might have a false positive on the bounds.
>> + */
>> + if (read_seqcount_retry(&vma->vm_sequence, seq))
>> + goto out_put;
>> +
>> + mem_cgroup_oom_enable();
>> + ret = handle_pte_fault(&vmf);
>> + mem_cgroup_oom_disable();
>> +
>> + put_vma(vma);
>> +
>> + /*
>> + * The task may have entered a memcg OOM situation but
>> + * if the allocation error was handled gracefully (no
>> + * VM_FAULT_OOM), there is no need to kill anything.
>> + * Just clean up the OOM state peacefully.
>> + */
>> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>> + mem_cgroup_oom_synchronize(false);
>> + return ret;
>> +
>> +out_walk:
>> + local_irq_enable();
>> +out_put:
>> + put_vma(vma);
>> + return ret;
>> +}
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>> +
>> /*
>> * By the time we get here, we already hold the mm semaphore
>> *
>
>
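The trylock dance in the quoted pte_spinlock()/pte_map_lock() — never spin-wait on the PTL with interrupts disabled, back off and revalidate instead — can be sketched in user space as follows; irq_disable()/irq_enable() and the vma_changed flag are stand-ins for local_irq_disable()/local_irq_enable() and vma_has_changed():

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for the kernel primitives used by the quoted pte_map_lock():
 * a PTL, an interrupt toggle, and the VMA-change check. */
static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;
static bool vma_changed;		/* vma_has_changed() stand-in */

static void irq_disable(void) { }	/* local_irq_disable() stand-in */
static void irq_enable(void)  { }	/* local_irq_enable() stand-in */

/* Returns true with the PTL held; false means the caller must fall
 * back to the non-speculative path (VM_FAULT_RETRY in the patch). */
static bool spf_pte_lock(void)
{
again:
	irq_disable();
	if (vma_changed) {
		irq_enable();
		return false;
	}
	/* trylock, not lock: the unmap path may hold the PTL while
	 * waiting for this CPU to service a TLB-shootdown IPI, which
	 * cannot be delivered with interrupts disabled. */
	if (pthread_mutex_trylock(&ptl) != 0) {
		irq_enable();
		goto again;
	}
	if (vma_changed) {
		pthread_mutex_unlock(&ptl);
		irq_enable();
		return false;
	}
	irq_enable();
	return true;
}
```

The double vma_changed check mirrors the comment in the patch: the first check guarantees the page tables are still valid, and the second, taken after the lock, guarantees they stayed valid.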
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
2018-07-24 16:10 ` Laurent Dufour
@ 2018-07-25 9:04 ` zhong jiang
0 siblings, 0 replies; 106+ messages in thread
From: zhong jiang @ 2018-07-25 9:04 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 2018/7/25 0:10, Laurent Dufour wrote:
>
> On 24/07/2018 16:26, zhong jiang wrote:
>> On 2018/5/17 19:06, Laurent Dufour wrote:
>>> From: Peter Zijlstra <peterz@infradead.org>
>>>
>>> Provide infrastructure to do a speculative fault (not holding
>>> mmap_sem).
>>>
>>> The not holding of mmap_sem means we can race against VMA
>>> change/removal and page-table destruction. We use the SRCU VMA freeing
>>> to keep the VMA around. We use the VMA seqcount to detect change
>>> (including unmapping / page-table deletion) and we use gup_fast() style
>>> page-table walking to deal with page-table races.
>>>
>>> Once we've obtained the page and are ready to update the PTE, we
>>> validate if the state we started the fault with is still valid, if
>>> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
>>> PTE and we're done.
>>>
>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>
>>> [Manage the newly introduced pte_spinlock() for speculative page
>>> fault to fail if the VMA is touched in our back]
>>> [Rename vma_is_dead() to vma_has_changed() and declare it here]
>>> [Fetch p4d and pud]
>>> [Set vmf.sequence in __handle_mm_fault()]
>>> [Abort speculative path when handle_userfault() has to be called]
>>> [Add additional VMA's flags checks in handle_speculative_fault()]
>>> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
>>> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
>>> [Remove warning comment about waiting for !seq&1 since we don't want
>>> to wait]
>>> [Remove warning about no huge page support, mention it explicitly]
>>> [Don't call do_fault() in the speculative path as __do_fault() calls
>>> vma->vm_ops->fault() which may want to release mmap_sem]
>>> [Only vm_fault pointer argument for vma_has_changed()]
>>> [Fix check against huge page, calling pmd_trans_huge()]
>>> [Use READ_ONCE() when reading VMA's fields in the speculative path]
>>> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
>>> processing done in vm_normal_page()]
>>> [Check that vma->anon_vma is already set when starting the speculative
>>> path]
>>> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
>>> the processing done in mpol_misplaced()]
>>> [Don't support VMA growing up or down]
>>> [Move check on vm_sequence just before calling handle_pte_fault()]
>>> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
>>> [Add mem cgroup oom check]
>>> [Use READ_ONCE to access p*d entries]
>>> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
>>> [Don't fetch pte again in handle_pte_fault() when running the speculative
>>> path]
>>> [Check PMD against concurrent collapsing operation]
>>> [Try spin lock the pte during the speculative path to avoid deadlock with
>>> other CPU's invalidating the TLB and requiring this CPU to catch the
>>> inter processor's interrupt]
>>> [Move define of FAULT_FLAG_SPECULATIVE here]
>>> [Introduce __handle_speculative_fault() and add a check against
>>> mm->mm_users in handle_speculative_fault() defined in mm.h]
>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>> ---
>>> include/linux/hugetlb_inline.h | 2 +-
>>> include/linux/mm.h | 30 ++++
>>> include/linux/pagemap.h | 4 +-
>>> mm/internal.h | 16 +-
>>> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
>>> 5 files changed, 385 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>>> index 0660a03d37d9..9e25283d6fc9 100644
>>> --- a/include/linux/hugetlb_inline.h
>>> +++ b/include/linux/hugetlb_inline.h
>>> @@ -8,7 +8,7 @@
>>>
>>> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>>> {
>>> - return !!(vma->vm_flags & VM_HUGETLB);
>>> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
>>> }
>>>
>>> #else
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 05cbba70104b..31acf98a7d92 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
>>> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
>>> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
>>> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
>>> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>>>
>>> #define FAULT_FLAG_TRACE \
>>> { FAULT_FLAG_WRITE, "WRITE" }, \
>>> @@ -343,6 +344,10 @@ struct vm_fault {
>>> gfp_t gfp_mask; /* gfp mask to be used for allocations */
>>> pgoff_t pgoff; /* Logical page offset based on vma */
>>> unsigned long address; /* Faulting virtual address */
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> + unsigned int sequence;
>>> + pmd_t orig_pmd; /* value of PMD at the time of fault */
>>> +#endif
>>> pmd_t *pmd; /* Pointer to pmd entry matching
>>> * the 'address' */
>>> pud_t *pud; /* Pointer to pud entry matching
>>> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
>>> #ifdef CONFIG_MMU
>>> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>> unsigned int flags);
>>> +
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> +extern int __handle_speculative_fault(struct mm_struct *mm,
>>> + unsigned long address,
>>> + unsigned int flags);
>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>> + unsigned long address,
>>> + unsigned int flags)
>>> +{
>>> + /*
>>> + * Try speculative page fault for multithreaded user space tasks only.
>>> + */
>>> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
>>> + return VM_FAULT_RETRY;
>>> + return __handle_speculative_fault(mm, address, flags);
>>> +}
>>> +#else
>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>> + unsigned long address,
>>> + unsigned int flags)
>>> +{
>>> + return VM_FAULT_RETRY;
>>> +}
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>> +
>>> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
>>> unsigned long address, unsigned int fault_flags,
>>> bool *unlocked);
>>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>>> index b1bd2186e6d2..6e2aa4e79af7 100644
>>> --- a/include/linux/pagemap.h
>>> +++ b/include/linux/pagemap.h
>>> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
>>> pgoff_t pgoff;
>>> if (unlikely(is_vm_hugetlb_page(vma)))
>>> return linear_hugepage_index(vma, address);
>>> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
>>> - pgoff += vma->vm_pgoff;
>>> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
>>> + pgoff += READ_ONCE(vma->vm_pgoff);
>>> return pgoff;
>>> }
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index fb2667b20f0a..10b188c87fa4 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
>>> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
>>> unsigned long addr);
>>> extern void put_vma(struct vm_area_struct *vma);
>>> -#endif
>>> +
>>> +static inline bool vma_has_changed(struct vm_fault *vmf)
>>> +{
>>> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
>>> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
>>> +
>>> + /*
>>> + * Matches both the wmb in write_seqlock_{begin,end}() and
>>> + * the wmb in vma_rb_erase().
>>> + */
>>> + smp_rmb();
>>> +
>>> + return ret || seq != vmf->sequence;
>>> +}
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>
>>> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>> unsigned long floor, unsigned long ceiling);
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index ab32b0b4bd69..7bbbb8c7b9cd 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>>> if (page)
>>> dump_page(page, "bad pte");
>>> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
>>> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
>>> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
>>> + mapping, index);
>>> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
>>> vma->vm_file,
>>> vma->vm_ops ? vma->vm_ops->fault : NULL,
>>> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>>> }
>>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>>
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> +static bool pte_spinlock(struct vm_fault *vmf)
>>> +{
>>> + bool ret = false;
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + pmd_t pmdval;
>>> +#endif
>>> +
>>> + /* Check if vma is still valid */
>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> + spin_lock(vmf->ptl);
>>> + return true;
>>> + }
>>> +
>>> +again:
>>> + local_irq_disable();
>>> + if (vma_has_changed(vmf))
>>> + goto out;
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + /*
>>> + * We check if the pmd value is still the same to ensure that there
>>> + * is not a huge collapse operation in progress behind our back.
>>> + */
>>> + pmdval = READ_ONCE(*vmf->pmd);
>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>> + goto out;
>>> +#endif
>>> +
>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> + if (unlikely(!spin_trylock(vmf->ptl))) {
>>> + local_irq_enable();
>>> + goto again;
>>> + }
>>> +
>>> + if (vma_has_changed(vmf)) {
>>> + spin_unlock(vmf->ptl);
>>> + goto out;
>>> + }
>>> +
>>> + ret = true;
>>> +out:
>>> + local_irq_enable();
>>> + return ret;
>>> +}
>>> +
>>> +static bool pte_map_lock(struct vm_fault *vmf)
>>> +{
>>> + bool ret = false;
>>> + pte_t *pte;
>>> + spinlock_t *ptl;
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + pmd_t pmdval;
>>> +#endif
>>> +
>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>> + vmf->address, &vmf->ptl);
>>> + return true;
>>> + }
>>> +
>>> + /*
>>> + * The first vma_has_changed() guarantees the page-tables are still
>>> + * valid, having IRQs disabled ensures they stay around, hence the
>>> + * second vma_has_changed() to make sure they are still valid once
>>> + * we've got the lock. After that a concurrent zap_pte_range() will
>>> + * block on the PTL and thus we're safe.
>>> + */
>>> +again:
>>> + local_irq_disable();
>>> + if (vma_has_changed(vmf))
>>> + goto out;
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + /*
>>> + * We check if the pmd value is still the same to ensure that there
>>> + * is not a huge collapse operation in progress behind our back.
>>> + */
>>> + pmdval = READ_ONCE(*vmf->pmd);
>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>> + goto out;
>>> +#endif
>>> +
>>> + /*
>>> + * Same as pte_offset_map_lock() except that we call
>>> + * spin_trylock() in place of spin_lock() to avoid a race with the
>>> + * unmap path, which may hold the lock and wait for this CPU to
>>> + * invalidate the TLB while this CPU has IRQs disabled.
>>> + * Since we are in a speculative path, accept that it could fail.
>>> + */
>>> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> + pte = pte_offset_map(vmf->pmd, vmf->address);
>>> + if (unlikely(!spin_trylock(ptl))) {
>>> + pte_unmap(pte);
>>> + local_irq_enable();
>>> + goto again;
>>> + }
>>> +
>>> + if (vma_has_changed(vmf)) {
>>> + pte_unmap_unlock(pte, ptl);
>>> + goto out;
>>> + }
>>> +
>>> + vmf->pte = pte;
>>> + vmf->ptl = ptl;
>>> + ret = true;
>>> +out:
>>> + local_irq_enable();
>>> + return ret;
>>> +}
>>> +#else
>>> static inline bool pte_spinlock(struct vm_fault *vmf)
>>> {
>>> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>>> vmf->address, &vmf->ptl);
>>> return true;
>>> }
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>
>>> /*
>>> * handle_pte_fault chooses page fault handler according to an entry which was
>>> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>> ret = check_stable_address_space(vma->vm_mm);
>>> if (ret)
>>> goto unlock;
>>> + /*
>>> + * Don't call the userfaultfd handler on the speculative path.
>>> + * We already checked that the VMA is not managed through
>>> + * userfaultfd, but it may have been set behind our back once we
>>> + * have locked the pte. In such a case we can ignore it this time.
>>> + */
>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>> + goto setpte;
>>> /* Deliver the page fault to userland, check inside PT lock */
>>> if (userfaultfd_missing(vma)) {
>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>> goto unlock_and_release;
>>>
>>> /* Deliver the page fault to userland, check inside PT lock */
>>> - if (userfaultfd_missing(vma)) {
>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>> mem_cgroup_cancel_charge(page, memcg, false);
>>> put_page(page);
>>> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>
>>> if (unlikely(pmd_none(*vmf->pmd))) {
>>> /*
>>> + * In the case of the speculative page fault handler we abort
>>> + * the speculative path immediately as the pmd is probably
>>> + * about to be converted into a huge one. We will try
>>> + * again holding the mmap_sem (which implies that the collapse
>>> + * operation is done).
>>> + */
>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>> + return VM_FAULT_RETRY;
>>> + /*
>>> * Leave __pte_alloc() until later: because vm_ops->fault may
>>> * want to allocate huge page, and if we expose page table
>>> * for an instant, it will be difficult to retract from
>>> * concurrent faults and from rmap lookups.
>>> */
>>> vmf->pte = NULL;
>>> - } else {
>>> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>> /* See comment in pte_alloc_one_map() */
>>> if (pmd_devmap_trans_unstable(vmf->pmd))
>>> return 0;
>>> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>> * pmd from under us anymore at this point because we hold the
>>> * mmap_sem read mode and khugepaged takes it in write mode.
>>> * So now it's safe to run pte_offset_map().
>>> + * This is not applicable to the speculative page fault handler
>>> + * but in that case, the pte is fetched earlier in
>>> + * handle_speculative_fault().
>>> */
>>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>>> vmf->orig_pte = *vmf->pte;
>>> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>> if (!vmf->pte) {
>>> if (vma_is_anonymous(vmf->vma))
>>> return do_anonymous_page(vmf);
>>> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>> + return VM_FAULT_RETRY;
>>> else
>>> return do_fault(vmf);
>>> }
>>> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
>>> if (!vmf.pmd)
>>> return VM_FAULT_OOM;
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
>>> +#endif
>>> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
>>> ret = create_huge_pmd(&vmf);
>>> if (!(ret & VM_FAULT_FALLBACK))
>>> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>> return handle_pte_fault(&vmf);
>>> }
>>>
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> +/*
>>> + * Tries to handle the page fault in a speculative way, without grabbing the
>>> + * mmap_sem.
>>> + */
>>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>> + unsigned int flags)
>>> +{
>>> + struct vm_fault vmf = {
>>> + .address = address,
>>> + };
>>> + pgd_t *pgd, pgdval;
>>> + p4d_t *p4d, p4dval;
>>> + pud_t pudval;
>>> + int seq, ret = VM_FAULT_RETRY;
>>> + struct vm_area_struct *vma;
>>> +#ifdef CONFIG_NUMA
>>> + struct mempolicy *pol;
>>> +#endif
>>> +
>>> + /* Clear flags that may lead to release the mmap_sem to retry */
>>> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>>> + flags |= FAULT_FLAG_SPECULATIVE;
>>> +
>>> + vma = get_vma(mm, address);
>>> + if (!vma)
>>> + return ret;
>>> +
>>> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>>> + if (seq & 1)
>>> + goto out_put;
>>> +
>>> + /*
>>> + * Can't call vm_ops services as we don't know what they would do
>>> + * with the VMA.
>>> + * This includes huge pages from hugetlbfs.
>>> + */
>>> + if (vma->vm_ops)
>>> + goto out_put;
>>> +
>> Hi Laurent
>>
>> I think that most page faults will bail out here. Is there any case that still needs to be skipped?
>> I have tested the following patch, and it works well.
> Hi Zhong,
>
> Well, this would allow file mappings to be handled in a speculative way, but
> that's a bit dangerous today as there is no guarantee that the
> vm_ops.vm_fault() operation will behave safely without the mmap_sem held.
>
> In the case of an anonymous file mapping that's often not a problem, depending
> on the underlying file system, but there are so many cases to check that it is
> hard to say this can be done in a speculative way as is.
This patch says that SPF handles only anonymous pages, but I find that
do_swap_page may also release the mmap_sem when FAULT_FLAG_RETRY_NOWAIT is not
set. Why is it safe to handle that case? I think it is similar to the file
page case. Maybe I am missing something.
I tested the patches and found that only 18% of page faults enter the
speculative page fault path during process startup. As I said, most page faults
are handled by ops->fault. I do not know how the data you posted was obtained.
Thanks
zhong jiang
> The huge work to do is to double-check that none of the code called by
> vm_ops.fault() deals with the mmap_sem, which could be handled using
> FAULT_FLAG_RETRY_NOWAIT. Care is also needed about the resources that code
> manages, as it may assume, even implicitly, that it is running under the
> protection of the mmap_sem held in read mode.
>
> Cheers,
> Laurent.
>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 936128b..9bc1545 100644
>> @@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
>> if (!fe->pte) {
>> if (vma_is_anonymous(fe->vma))
>> return do_anonymous_page(fe);
>> - else if (fe->flags & FAULT_FLAG_SPECULATIVE)
>> - return VM_FAULT_RETRY;
>> else
>> return do_fault(fe);
>> }
>> @@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>> goto out_put;
>> }
>> /*
>> - * Can't call vm_ops service has we don't know what they would do
>> - * with the VMA.
>> - * This include huge page from hugetlbfs.
>> - */
>> - if (vma->vm_ops) {
>> - trace_spf_vma_notsup(_RET_IP_, vma, address);
>> - goto out_put;
>> - }
>>
>>
>> Thanks
>> zhong jiang
>>> + /*
>>> + * __anon_vma_prepare() requires the mmap_sem to be held
>>> + * because vm_next and vm_prev must be safe. This can't be guaranteed
>>> + * in the speculative path.
>>> + */
>>> + if (unlikely(!vma->anon_vma))
>>> + goto out_put;
>>> +
>>> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
>>> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>>> +
>>> + /* Can't call userland page fault handler in the speculative path */
>>> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>>> + goto out_put;
>>> +
>>> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>>> + /*
>>> + * This could be detected by checking the address against the
>>> + * VMA's boundaries, but we want to trace it as not supported
>>> + * instead of changed.
>>> + */
>>> + goto out_put;
>>> +
>>> + if (address < READ_ONCE(vma->vm_start)
>>> + || READ_ONCE(vma->vm_end) <= address)
>>> + goto out_put;
>>> +
>>> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>>> + flags & FAULT_FLAG_INSTRUCTION,
>>> + flags & FAULT_FLAG_REMOTE)) {
>>> + ret = VM_FAULT_SIGSEGV;
>>> + goto out_put;
>>> + }
>>> +
>>> + /* This one is required to check that the VMA allows write access */
>>> + if (flags & FAULT_FLAG_WRITE) {
>>> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>>> + ret = VM_FAULT_SIGSEGV;
>>> + goto out_put;
>>> + }
>>> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>>> + ret = VM_FAULT_SIGSEGV;
>>> + goto out_put;
>>> + }
>>> +
>>> +#ifdef CONFIG_NUMA
>>> + /*
>>> + * MPOL_INTERLEAVE implies additional checks in
>>> + * mpol_misplaced() which are not compatible with the
>>> + * speculative page fault processing.
>>> + */
>>> + pol = __get_vma_policy(vma, address);
>>> + if (!pol)
>>> + pol = get_task_policy(current);
>>> + if (pol && pol->mode == MPOL_INTERLEAVE)
>>> + goto out_put;
>>> +#endif
>>> +
>>> + /*
>>> + * Do a speculative lookup of the PTE entry.
>>> + */
>>> + local_irq_disable();
>>> + pgd = pgd_offset(mm, address);
>>> + pgdval = READ_ONCE(*pgd);
>>> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>>> + goto out_walk;
>>> +
>>> + p4d = p4d_offset(pgd, address);
>>> + p4dval = READ_ONCE(*p4d);
>>> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>>> + goto out_walk;
>>> +
>>> + vmf.pud = pud_offset(p4d, address);
>>> + pudval = READ_ONCE(*vmf.pud);
>>> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>>> + goto out_walk;
>>> +
>>> + /* Huge pages at PUD level are not supported. */
>>> + if (unlikely(pud_trans_huge(pudval)))
>>> + goto out_walk;
>>> +
>>> + vmf.pmd = pmd_offset(vmf.pud, address);
>>> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>>> + /*
>>> + * pmd_none could mean that a hugepage collapse is in progress
>>> + * behind our back, as collapse_huge_page() marks it before
>>> + * invalidating the pte (which is done once the IPI has been caught
>>> + * by all CPUs while we have interrupts disabled).
>>> + * For this reason we cannot handle THP in a speculative way since we
>>> + * can't safely identify an in-progress collapse operation done
>>> + * behind our back on that PMD.
>>> + * Regarding the order of the following checks, see comment in
>>> + * pmd_devmap_trans_unstable()
>>> + */
>>> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>>> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>>> + is_swap_pmd(vmf.orig_pmd)))
>>> + goto out_walk;
>>> +
>>> + /*
>>> + * The above does not allocate/instantiate page-tables because doing so
>>> + * would lead to the possibility of instantiating page-tables after
>>> + * free_pgtables() -- and consequently leaking them.
>>> + *
>>> + * The result is that we take at least one !speculative fault per PMD
>>> + * in order to instantiate it.
>>> + */
>>> +
>>> + vmf.pte = pte_offset_map(vmf.pmd, address);
>>> + vmf.orig_pte = READ_ONCE(*vmf.pte);
>>> + barrier(); /* See comment in handle_pte_fault() */
>>> + if (pte_none(vmf.orig_pte)) {
>>> + pte_unmap(vmf.pte);
>>> + vmf.pte = NULL;
>>> + }
>>> +
>>> + vmf.vma = vma;
>>> + vmf.pgoff = linear_page_index(vma, address);
>>> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
>>> + vmf.sequence = seq;
>>> + vmf.flags = flags;
>>> +
>>> + local_irq_enable();
>>> +
>>> + /*
>>> + * We need to re-validate the VMA after checking the bounds, otherwise
>>> + * we might have a false positive on the bounds.
>>> + */
>>> + if (read_seqcount_retry(&vma->vm_sequence, seq))
>>> + goto out_put;
>>> +
>>> + mem_cgroup_oom_enable();
>>> + ret = handle_pte_fault(&vmf);
>>> + mem_cgroup_oom_disable();
>>> +
>>> + put_vma(vma);
>>> +
>>> + /*
>>> + * The task may have entered a memcg OOM situation but
>>> + * if the allocation error was handled gracefully (no
>>> + * VM_FAULT_OOM), there is no need to kill anything.
>>> + * Just clean up the OOM state peacefully.
>>> + */
>>> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>>> + mem_cgroup_oom_synchronize(false);
>>> + return ret;
>>> +
>>> +out_walk:
>>> + local_irq_enable();
>>> +out_put:
>>> + put_vma(vma);
>>> + return ret;
>>> +}
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>> +
>>> /*
>>> * By the time we get here, we already hold the mm semaphore
>>> *
>>
>
> .
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
@ 2018-07-25 9:04 ` zhong jiang
0 siblings, 0 replies; 106+ messages in thread
From: zhong jiang @ 2018-07-25 9:04 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 2018/7/25 0:10, Laurent Dufour wrote:
>
> On 24/07/2018 16:26, zhong jiang wrote:
>> On 2018/5/17 19:06, Laurent Dufour wrote:
>>> From: Peter Zijlstra <peterz@infradead.org>
>>>
>>> Provide infrastructure to do a speculative fault (not holding
>>> mmap_sem).
>>>
>>> The not holding of mmap_sem means we can race against VMA
>>> change/removal and page-table destruction. We use the SRCU VMA freeing
>>> to keep the VMA around. We use the VMA seqcount to detect change
>>> (including umapping / page-table deletion) and we use gup_fast() style
>>> page-table walking to deal with page-table races.
>>>
>>> Once we've obtained the page and are ready to update the PTE, we
>>> validate if the state we started the fault with is still valid, if
>>> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
>>> PTE and we're done.
>>>
>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>
>>> [Manage the newly introduced pte_spinlock() for speculative page
>>> fault to fail if the VMA is touched in our back]
>>> [Rename vma_is_dead() to vma_has_changed() and declare it here]
>>> [Fetch p4d and pud]
>>> [Set vmd.sequence in __handle_mm_fault()]
>>> [Abort speculative path when handle_userfault() has to be called]
>>> [Add additional VMA's flags checks in handle_speculative_fault()]
>>> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
>>> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
>>> [Remove warning comment about waiting for !seq&1 since we don't want
>>> to wait]
>>> [Remove warning about no huge page support, mention it explictly]
>>> [Don't call do_fault() in the speculative path as __do_fault() calls
>>> vma->vm_ops->fault() which may want to release mmap_sem]
>>> [Only vm_fault pointer argument for vma_has_changed()]
>>> [Fix check against huge page, calling pmd_trans_huge()]
>>> [Use READ_ONCE() when reading VMA's fields in the speculative path]
>>> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
>>> processing done in vm_normal_page()]
>>> [Check that vma->anon_vma is already set when starting the speculative
>>> path]
>>> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
>>> the processing done in mpol_misplaced()]
>>> [Don't support VMA growing up or down]
>>> [Move check on vm_sequence just before calling handle_pte_fault()]
>>> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
>>> [Add mem cgroup oom check]
>>> [Use READ_ONCE to access p*d entries]
>>> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
>>> [Don't fetch pte again in handle_pte_fault() when running the speculative
>>> path]
>>> [Check PMD against concurrent collapsing operation]
>>> [Try spin lock the pte during the speculative path to avoid deadlock with
>>> other CPU's invalidating the TLB and requiring this CPU to catch the
>>> inter processor's interrupt]
>>> [Move define of FAULT_FLAG_SPECULATIVE here]
>>> [Introduce __handle_speculative_fault() and add a check against
>>> mm->mm_users in handle_speculative_fault() defined in mm.h]
>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>> ---
>>> include/linux/hugetlb_inline.h | 2 +-
>>> include/linux/mm.h | 30 ++++
>>> include/linux/pagemap.h | 4 +-
>>> mm/internal.h | 16 +-
>>> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
>>> 5 files changed, 385 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>>> index 0660a03d37d9..9e25283d6fc9 100644
>>> --- a/include/linux/hugetlb_inline.h
>>> +++ b/include/linux/hugetlb_inline.h
>>> @@ -8,7 +8,7 @@
>>>
>>> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>>> {
>>> - return !!(vma->vm_flags & VM_HUGETLB);
>>> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
>>> }
>>>
>>> #else
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 05cbba70104b..31acf98a7d92 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
>>> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
>>> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
>>> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
>>> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>>>
>>> #define FAULT_FLAG_TRACE \
>>> { FAULT_FLAG_WRITE, "WRITE" }, \
>>> @@ -343,6 +344,10 @@ struct vm_fault {
>>> gfp_t gfp_mask; /* gfp mask to be used for allocations */
>>> pgoff_t pgoff; /* Logical page offset based on vma */
>>> unsigned long address; /* Faulting virtual address */
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> + unsigned int sequence;
>>> + pmd_t orig_pmd; /* value of PMD at the time of fault */
>>> +#endif
>>> pmd_t *pmd; /* Pointer to pmd entry matching
>>> * the 'address' */
>>> pud_t *pud; /* Pointer to pud entry matching
>>> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
>>> #ifdef CONFIG_MMU
>>> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>> unsigned int flags);
>>> +
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> +extern int __handle_speculative_fault(struct mm_struct *mm,
>>> + unsigned long address,
>>> + unsigned int flags);
>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>> + unsigned long address,
>>> + unsigned int flags)
>>> +{
>>> + /*
>>> + * Try speculative page fault for multithreaded user space task only.
>>> + */
>>> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
>>> + return VM_FAULT_RETRY;
>>> + return __handle_speculative_fault(mm, address, flags);
>>> +}
>>> +#else
>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>> + unsigned long address,
>>> + unsigned int flags)
>>> +{
>>> + return VM_FAULT_RETRY;
>>> +}
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>> +
>>> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
>>> unsigned long address, unsigned int fault_flags,
>>> bool *unlocked);
>>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>>> index b1bd2186e6d2..6e2aa4e79af7 100644
>>> --- a/include/linux/pagemap.h
>>> +++ b/include/linux/pagemap.h
>>> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
>>> pgoff_t pgoff;
>>> if (unlikely(is_vm_hugetlb_page(vma)))
>>> return linear_hugepage_index(vma, address);
>>> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
>>> - pgoff += vma->vm_pgoff;
>>> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
>>> + pgoff += READ_ONCE(vma->vm_pgoff);
>>> return pgoff;
>>> }
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index fb2667b20f0a..10b188c87fa4 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
>>> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
>>> unsigned long addr);
>>> extern void put_vma(struct vm_area_struct *vma);
>>> -#endif
>>> +
>>> +static inline bool vma_has_changed(struct vm_fault *vmf)
>>> +{
>>> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
>>> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
>>> +
>>> + /*
>>> + * Matches both the wmb in write_seqlock_{begin,end}() and
>>> + * the wmb in vma_rb_erase().
>>> + */
>>> + smp_rmb();
>>> +
>>> + return ret || seq != vmf->sequence;
>>> +}
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>
>>> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>> unsigned long floor, unsigned long ceiling);
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index ab32b0b4bd69..7bbbb8c7b9cd 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>>> if (page)
>>> dump_page(page, "bad pte");
>>> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
>>> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
>>> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
>>> + mapping, index);
>>> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
>>> vma->vm_file,
>>> vma->vm_ops ? vma->vm_ops->fault : NULL,
>>> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>>> }
>>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>>
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> +static bool pte_spinlock(struct vm_fault *vmf)
>>> +{
>>> + bool ret = false;
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + pmd_t pmdval;
>>> +#endif
>>> +
>>> + /* Check if vma is still valid */
>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> + spin_lock(vmf->ptl);
>>> + return true;
>>> + }
>>> +
>>> +again:
>>> + local_irq_disable();
>>> + if (vma_has_changed(vmf))
>>> + goto out;
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + /*
>>> + * We check if the pmd value is still the same to ensure that there
>>> + * is no huge collapse operation in progress behind our back.
>>> + */
>>> + pmdval = READ_ONCE(*vmf->pmd);
>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>> + goto out;
>>> +#endif
>>> +
>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> + if (unlikely(!spin_trylock(vmf->ptl))) {
>>> + local_irq_enable();
>>> + goto again;
>>> + }
>>> +
>>> + if (vma_has_changed(vmf)) {
>>> + spin_unlock(vmf->ptl);
>>> + goto out;
>>> + }
>>> +
>>> + ret = true;
>>> +out:
>>> + local_irq_enable();
>>> + return ret;
>>> +}
>>> +
>>> +static bool pte_map_lock(struct vm_fault *vmf)
>>> +{
>>> + bool ret = false;
>>> + pte_t *pte;
>>> + spinlock_t *ptl;
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + pmd_t pmdval;
>>> +#endif
>>> +
>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>> + vmf->address, &vmf->ptl);
>>> + return true;
>>> + }
>>> +
>>> + /*
>>> + * The first vma_has_changed() guarantees the page-tables are still
>>> + * valid, having IRQs disabled ensures they stay around, hence the
>>> + * second vma_has_changed() to make sure they are still valid once
>>> + * we've got the lock. After that a concurrent zap_pte_range() will
>>> + * block on the PTL and thus we're safe.
>>> + */
>>> +again:
>>> + local_irq_disable();
>>> + if (vma_has_changed(vmf))
>>> + goto out;
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + /*
>>> + * We check if the pmd value is still the same to ensure that there
>>> + * is no huge collapse operation in progress behind our back.
>>> + */
>>> + pmdval = READ_ONCE(*vmf->pmd);
>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>> + goto out;
>>> +#endif
>>> +
>>> + /*
>>> + * Same as pte_offset_map_lock() except that we call
>>> + * spin_trylock() in place of spin_lock(), to avoid deadlocking with the
>>> + * unmap path, which may hold the lock and wait for this CPU
>>> + * to invalidate the TLB while this CPU has IRQs disabled.
>>> + * Since we are on the speculative path, accept that it could fail.
>>> + */
>>> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> + pte = pte_offset_map(vmf->pmd, vmf->address);
>>> + if (unlikely(!spin_trylock(ptl))) {
>>> + pte_unmap(pte);
>>> + local_irq_enable();
>>> + goto again;
>>> + }
>>> +
>>> + if (vma_has_changed(vmf)) {
>>> + pte_unmap_unlock(pte, ptl);
>>> + goto out;
>>> + }
>>> +
>>> + vmf->pte = pte;
>>> + vmf->ptl = ptl;
>>> + ret = true;
>>> +out:
>>> + local_irq_enable();
>>> + return ret;
>>> +}
>>> +#else
>>> static inline bool pte_spinlock(struct vm_fault *vmf)
>>> {
>>> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>>> vmf->address, &vmf->ptl);
>>> return true;
>>> }
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>
>>> /*
>>> * handle_pte_fault chooses page fault handler according to an entry which was
>>> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>> ret = check_stable_address_space(vma->vm_mm);
>>> if (ret)
>>> goto unlock;
>>> + /*
>>> + * Don't call the userfaultfd during the speculative path.
>>> + * We already checked that the VMA is not managed through
>>> + * userfaultfd, but it may have been set behind our back once we
>>> + * locked the pte. In such a case we can ignore it this time.
>>> + */
>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>> + goto setpte;
>>> /* Deliver the page fault to userland, check inside PT lock */
>>> if (userfaultfd_missing(vma)) {
>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>> goto unlock_and_release;
>>>
>>> /* Deliver the page fault to userland, check inside PT lock */
>>> - if (userfaultfd_missing(vma)) {
>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>> mem_cgroup_cancel_charge(page, memcg, false);
>>> put_page(page);
>>> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>
>>> if (unlikely(pmd_none(*vmf->pmd))) {
>>> /*
>>> + * In the case of the speculative page fault handler we abort
>>> + * the speculative path immediately as the pmd is probably
>>> + * about to be converted into a huge one. We will try
>>> + * again holding the mmap_sem (which implies that the collapse
>>> + * operation is done).
>>> + */
>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>> + return VM_FAULT_RETRY;
>>> + /*
>>> * Leave __pte_alloc() until later: because vm_ops->fault may
>>> * want to allocate huge page, and if we expose page table
>>> * for an instant, it will be difficult to retract from
>>> * concurrent faults and from rmap lookups.
>>> */
>>> vmf->pte = NULL;
>>> - } else {
>>> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>> /* See comment in pte_alloc_one_map() */
>>> if (pmd_devmap_trans_unstable(vmf->pmd))
>>> return 0;
>>> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>> * pmd from under us anymore at this point because we hold the
>>> * mmap_sem read mode and khugepaged takes it in write mode.
>>> * So now it's safe to run pte_offset_map().
>>> + * This is not applicable to the speculative page fault handler
>>> + * but in that case, the pte is fetched earlier in
>>> + * handle_speculative_fault().
>>> */
>>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>>> vmf->orig_pte = *vmf->pte;
>>> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>> if (!vmf->pte) {
>>> if (vma_is_anonymous(vmf->vma))
>>> return do_anonymous_page(vmf);
>>> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>> + return VM_FAULT_RETRY;
>>> else
>>> return do_fault(vmf);
>>> }
>>> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
>>> if (!vmf.pmd)
>>> return VM_FAULT_OOM;
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
>>> +#endif
>>> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
>>> ret = create_huge_pmd(&vmf);
>>> if (!(ret & VM_FAULT_FALLBACK))
>>> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>> return handle_pte_fault(&vmf);
>>> }
>>>
>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>> +/*
>>> + * Tries to handle the page fault in a speculative way, without grabbing the
>>> + * mmap_sem.
>>> + */
>>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>> + unsigned int flags)
>>> +{
>>> + struct vm_fault vmf = {
>>> + .address = address,
>>> + };
>>> + pgd_t *pgd, pgdval;
>>> + p4d_t *p4d, p4dval;
>>> + pud_t pudval;
>>> + int seq, ret = VM_FAULT_RETRY;
>>> + struct vm_area_struct *vma;
>>> +#ifdef CONFIG_NUMA
>>> + struct mempolicy *pol;
>>> +#endif
>>> +
>>> + /* Clear flags that may lead to release the mmap_sem to retry */
>>> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>>> + flags |= FAULT_FLAG_SPECULATIVE;
>>> +
>>> + vma = get_vma(mm, address);
>>> + if (!vma)
>>> + return ret;
>>> +
>>> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>>> + if (seq & 1)
>>> + goto out_put;
>>> +
>>> + /*
>>> + * Can't call vm_ops services as we don't know what they would do
>>> + * with the VMA.
>>> + * This includes huge pages from hugetlbfs.
>>> + */
>>> + if (vma->vm_ops)
>>> + goto out_put;
>>> +
>> Hi Laurent
>>
>> I think that most page faults will exit here. Is there any case that needs to be skipped?
>> I have tested the following patch, and it works well.
> Hi Zhong,
>
> Well, this will allow file mappings to be handled in a speculative way, but that's
> a bit dangerous today as there is no guarantee that the vm_ops.vm_fault()
> operation will be safe.
>
> In the case of anonymous file mappings that's often not a problem, depending
> on the underlying file system, but there are so many cases to check that it is
> hard to say this can be done in a speculative way as is.
This patch says that SPF handles only anonymous pages, but I find that do_swap_page()
may also release the mmap_sem without FAULT_FLAG_RETRY_NOWAIT. Why is it safe
to handle that case? I think it is similar to the file page case. Maybe I am missing
something.
I tested the patches and found that only 18% of the page faults enter the
speculative page fault path during a process startup. As I said, most page faults
will be handled by ops->fault. I do not know how the data you posted was obtained.
Thanks
zhong jiang
> The huge remaining work is to double-check that none of the code called by
> vm_ops.fault() deals with the mmap_sem, which could be handled using
> FAULT_FLAG_RETRY_NOWAIT. Care is also needed about the resources that code
> manages, as it may assume, possibly implicitly, that it is under the
> protection of the mmap_sem held in read mode.
>
> Cheers,
> Laurent.
>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 936128b..9bc1545 100644
>> @@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
>> if (!fe->pte) {
>> if (vma_is_anonymous(fe->vma))
>> return do_anonymous_page(fe);
>> - else if (fe->flags & FAULT_FLAG_SPECULATIVE)
>> - return VM_FAULT_RETRY;
>> else
>> return do_fault(fe);
>> }
>> @@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>> goto out_put;
>> }
>> /*
>> - * Can't call vm_ops service has we don't know what they would do
>> - * with the VMA.
>> - * This include huge page from hugetlbfs.
>> - */
>> - if (vma->vm_ops) {
>> - trace_spf_vma_notsup(_RET_IP_, vma, address);
>> - goto out_put;
>> - }
>>
>>
>> Thanks
>> zhong jiang
>>> + /*
>>> + * __anon_vma_prepare() requires the mmap_sem to be held
>>> + * because vm_next and vm_prev must be safe. This can't be guaranteed
>>> + * in the speculative path.
>>> + */
>>> + if (unlikely(!vma->anon_vma))
>>> + goto out_put;
>>> +
>>> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
>>> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>>> +
>>> + /* Can't call userland page fault handler in the speculative path */
>>> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>>> + goto out_put;
>>> +
>>> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>>> + /*
>>> + * This could be detected by checking the address against the VMA's
>>> + * boundaries, but we want to trace it as not supported instead
>>> + * of changed.
>>> + */
>>> + goto out_put;
>>> +
>>> + if (address < READ_ONCE(vma->vm_start)
>>> + || READ_ONCE(vma->vm_end) <= address)
>>> + goto out_put;
>>> +
>>> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>>> + flags & FAULT_FLAG_INSTRUCTION,
>>> + flags & FAULT_FLAG_REMOTE)) {
>>> + ret = VM_FAULT_SIGSEGV;
>>> + goto out_put;
>>> + }
>>> +
>>> + /* This one is required to check that the VMA has write access set */
>>> + if (flags & FAULT_FLAG_WRITE) {
>>> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>>> + ret = VM_FAULT_SIGSEGV;
>>> + goto out_put;
>>> + }
>>> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>>> + ret = VM_FAULT_SIGSEGV;
>>> + goto out_put;
>>> + }
>>> +
>>> +#ifdef CONFIG_NUMA
>>> + /*
>>> + * MPOL_INTERLEAVE implies additional checks in
>>> + * mpol_misplaced() which are not compatible with the
>>> + * speculative page fault processing.
>>> + */
>>> + pol = __get_vma_policy(vma, address);
>>> + if (!pol)
>>> + pol = get_task_policy(current);
>>> + if (pol && pol->mode == MPOL_INTERLEAVE)
>>> + goto out_put;
>>> +#endif
>>> +
>>> + /*
>>> + * Do a speculative lookup of the PTE entry.
>>> + */
>>> + local_irq_disable();
>>> + pgd = pgd_offset(mm, address);
>>> + pgdval = READ_ONCE(*pgd);
>>> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>>> + goto out_walk;
>>> +
>>> + p4d = p4d_offset(pgd, address);
>>> + p4dval = READ_ONCE(*p4d);
>>> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>>> + goto out_walk;
>>> +
>>> + vmf.pud = pud_offset(p4d, address);
>>> + pudval = READ_ONCE(*vmf.pud);
>>> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>>> + goto out_walk;
>>> +
>>> + /* Huge pages at PUD level are not supported. */
>>> + if (unlikely(pud_trans_huge(pudval)))
>>> + goto out_walk;
>>> +
>>> + vmf.pmd = pmd_offset(vmf.pud, address);
>>> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>>> + /*
>>> + * pmd_none could mean that a hugepage collapse is in progress
>>> + * behind our back, as collapse_huge_page() marks it before
>>> + * invalidating the pte (which is done once the IPI is caught
>>> + * by all CPUs and we have interrupts disabled).
>>> + * For this reason we cannot handle THP in a speculative way since we
>>> + * can't safely identify an in-progress collapse operation done
>>> + * behind our back on that PMD.
>>> + * Regarding the order of the following checks, see comment in
>>> + * pmd_devmap_trans_unstable()
>>> + */
>>> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>>> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>>> + is_swap_pmd(vmf.orig_pmd)))
>>> + goto out_walk;
>>> +
>>> + /*
>>> + * The above does not allocate/instantiate page-tables because doing so
>>> + * would lead to the possibility of instantiating page-tables after
>>> + * free_pgtables() -- and consequently leaking them.
>>> + *
>>> + * The result is that we take at least one !speculative fault per PMD
>>> + * in order to instantiate it.
>>> + */
>>> +
>>> + vmf.pte = pte_offset_map(vmf.pmd, address);
>>> + vmf.orig_pte = READ_ONCE(*vmf.pte);
>>> + barrier(); /* See comment in handle_pte_fault() */
>>> + if (pte_none(vmf.orig_pte)) {
>>> + pte_unmap(vmf.pte);
>>> + vmf.pte = NULL;
>>> + }
>>> +
>>> + vmf.vma = vma;
>>> + vmf.pgoff = linear_page_index(vma, address);
>>> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
>>> + vmf.sequence = seq;
>>> + vmf.flags = flags;
>>> +
>>> + local_irq_enable();
>>> +
>>> + /*
>>> + * We need to re-validate the VMA after checking the bounds, otherwise
>>> + * we might have a false positive on the bounds.
>>> + */
>>> + if (read_seqcount_retry(&vma->vm_sequence, seq))
>>> + goto out_put;
>>> +
>>> + mem_cgroup_oom_enable();
>>> + ret = handle_pte_fault(&vmf);
>>> + mem_cgroup_oom_disable();
>>> +
>>> + put_vma(vma);
>>> +
>>> + /*
>>> + * The task may have entered a memcg OOM situation but
>>> + * if the allocation error was handled gracefully (no
>>> + * VM_FAULT_OOM), there is no need to kill anything.
>>> + * Just clean up the OOM state peacefully.
>>> + */
>>> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>>> + mem_cgroup_oom_synchronize(false);
>>> + return ret;
>>> +
>>> +out_walk:
>>> + local_irq_enable();
>>> +out_put:
>>> + put_vma(vma);
>>> + return ret;
>>> +}
>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>> +
>>> /*
>>> * By the time we get here, we already hold the mm semaphore
>>> *
>>
>
> .
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
2018-07-25 9:04 ` zhong jiang
@ 2018-07-25 10:44 ` Laurent Dufour
2018-07-25 11:23 ` zhong jiang
-1 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-07-25 10:44 UTC (permalink / raw)
To: zhong jiang
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 25/07/2018 11:04, zhong jiang wrote:
> On 2018/7/25 0:10, Laurent Dufour wrote:
>>
>> On 24/07/2018 16:26, zhong jiang wrote:
>>> On 2018/5/17 19:06, Laurent Dufour wrote:
>>>> From: Peter Zijlstra <peterz@infradead.org>
>>>>
>>>> Provide infrastructure to do a speculative fault (not holding
>>>> mmap_sem).
>>>>
>>>> The not holding of mmap_sem means we can race against VMA
>>>> change/removal and page-table destruction. We use the SRCU VMA freeing
>>>> to keep the VMA around. We use the VMA seqcount to detect change
>>>> (including unmapping / page-table deletion) and we use gup_fast() style
>>>> page-table walking to deal with page-table races.
>>>>
>>>> Once we've obtained the page and are ready to update the PTE, we
>>>> validate if the state we started the fault with is still valid, if
>>>> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
>>>> PTE and we're done.
>>>>
>>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>>
>>>> [Manage the newly introduced pte_spinlock() for speculative page
>>>> fault to fail if the VMA is touched in our back]
>>>> [Rename vma_is_dead() to vma_has_changed() and declare it here]
>>>> [Fetch p4d and pud]
>>>> [Set vmd.sequence in __handle_mm_fault()]
>>>> [Abort speculative path when handle_userfault() has to be called]
>>>> [Add additional VMA's flags checks in handle_speculative_fault()]
>>>> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
>>>> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
>>>> [Remove warning comment about waiting for !seq&1 since we don't want
>>>> to wait]
>>>> [Remove warning about no huge page support, mention it explicitly]
>>>> [Don't call do_fault() in the speculative path as __do_fault() calls
>>>> vma->vm_ops->fault() which may want to release mmap_sem]
>>>> [Only vm_fault pointer argument for vma_has_changed()]
>>>> [Fix check against huge page, calling pmd_trans_huge()]
>>>> [Use READ_ONCE() when reading VMA's fields in the speculative path]
>>>> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
>>>> processing done in vm_normal_page()]
>>>> [Check that vma->anon_vma is already set when starting the speculative
>>>> path]
>>>> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
>>>> the processing done in mpol_misplaced()]
>>>> [Don't support VMA growing up or down]
>>>> [Move check on vm_sequence just before calling handle_pte_fault()]
>>>> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
>>>> [Add mem cgroup oom check]
>>>> [Use READ_ONCE to access p*d entries]
>>>> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
>>>> [Don't fetch pte again in handle_pte_fault() when running the speculative
>>>> path]
>>>> [Check PMD against concurrent collapsing operation]
>>>> [Try spin lock the pte during the speculative path to avoid deadlock with
>>>> other CPU's invalidating the TLB and requiring this CPU to catch the
>>>> inter processor's interrupt]
>>>> [Move define of FAULT_FLAG_SPECULATIVE here]
>>>> [Introduce __handle_speculative_fault() and add a check against
>>>> mm->mm_users in handle_speculative_fault() defined in mm.h]
>>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>>> ---
>>>> include/linux/hugetlb_inline.h | 2 +-
>>>> include/linux/mm.h | 30 ++++
>>>> include/linux/pagemap.h | 4 +-
>>>> mm/internal.h | 16 +-
>>>> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
>>>> 5 files changed, 385 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>>>> index 0660a03d37d9..9e25283d6fc9 100644
>>>> --- a/include/linux/hugetlb_inline.h
>>>> +++ b/include/linux/hugetlb_inline.h
>>>> @@ -8,7 +8,7 @@
>>>>
>>>> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>>>> {
>>>> - return !!(vma->vm_flags & VM_HUGETLB);
>>>> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
>>>> }
>>>>
>>>> #else
>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>> index 05cbba70104b..31acf98a7d92 100644
>>>> --- a/include/linux/mm.h
>>>> +++ b/include/linux/mm.h
>>>> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
>>>> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
>>>> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
>>>> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
>>>> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>>>>
>>>> #define FAULT_FLAG_TRACE \
>>>> { FAULT_FLAG_WRITE, "WRITE" }, \
>>>> @@ -343,6 +344,10 @@ struct vm_fault {
>>>> gfp_t gfp_mask; /* gfp mask to be used for allocations */
>>>> pgoff_t pgoff; /* Logical page offset based on vma */
>>>> unsigned long address; /* Faulting virtual address */
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> + unsigned int sequence;
>>>> + pmd_t orig_pmd; /* value of PMD at the time of fault */
>>>> +#endif
>>>> pmd_t *pmd; /* Pointer to pmd entry matching
>>>> * the 'address' */
>>>> pud_t *pud; /* Pointer to pud entry matching
>>>> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
>>>> #ifdef CONFIG_MMU
>>>> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>> unsigned int flags);
>>>> +
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> +extern int __handle_speculative_fault(struct mm_struct *mm,
>>>> + unsigned long address,
>>>> + unsigned int flags);
>>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>>> + unsigned long address,
>>>> + unsigned int flags)
>>>> +{
>>>> + /*
>>>> + * Try speculative page fault for multithreaded user space task only.
>>>> + */
>>>> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
>>>> + return VM_FAULT_RETRY;
>>>> + return __handle_speculative_fault(mm, address, flags);
>>>> +}
>>>> +#else
>>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>>> + unsigned long address,
>>>> + unsigned int flags)
>>>> +{
>>>> + return VM_FAULT_RETRY;
>>>> +}
>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>> +
>>>> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
>>>> unsigned long address, unsigned int fault_flags,
>>>> bool *unlocked);
>>>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>>>> index b1bd2186e6d2..6e2aa4e79af7 100644
>>>> --- a/include/linux/pagemap.h
>>>> +++ b/include/linux/pagemap.h
>>>> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
>>>> pgoff_t pgoff;
>>>> if (unlikely(is_vm_hugetlb_page(vma)))
>>>> return linear_hugepage_index(vma, address);
>>>> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
>>>> - pgoff += vma->vm_pgoff;
>>>> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
>>>> + pgoff += READ_ONCE(vma->vm_pgoff);
>>>> return pgoff;
>>>> }
>>>>
>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>> index fb2667b20f0a..10b188c87fa4 100644
>>>> --- a/mm/internal.h
>>>> +++ b/mm/internal.h
>>>> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
>>>> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
>>>> unsigned long addr);
>>>> extern void put_vma(struct vm_area_struct *vma);
>>>> -#endif
>>>> +
>>>> +static inline bool vma_has_changed(struct vm_fault *vmf)
>>>> +{
>>>> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
>>>> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
>>>> +
>>>> + /*
>>>> + * Matches both the wmb in write_seqlock_{begin,end}() and
>>>> + * the wmb in vma_rb_erase().
>>>> + */
>>>> + smp_rmb();
>>>> +
>>>> + return ret || seq != vmf->sequence;
>>>> +}
>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>
>>>> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>>> unsigned long floor, unsigned long ceiling);
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index ab32b0b4bd69..7bbbb8c7b9cd 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>>>> if (page)
>>>> dump_page(page, "bad pte");
>>>> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
>>>> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
>>>> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
>>>> + mapping, index);
>>>> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
>>>> vma->vm_file,
>>>> vma->vm_ops ? vma->vm_ops->fault : NULL,
>>>> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>>>> }
>>>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>>>
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> +static bool pte_spinlock(struct vm_fault *vmf)
>>>> +{
>>>> + bool ret = false;
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> + pmd_t pmdval;
>>>> +#endif
>>>> +
>>>> + /* Check if vma is still valid */
>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>> + spin_lock(vmf->ptl);
>>>> + return true;
>>>> + }
>>>> +
>>>> +again:
>>>> + local_irq_disable();
>>>> + if (vma_has_changed(vmf))
>>>> + goto out;
>>>> +
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> + /*
>>>> + * We check if the pmd value is still the same to ensure that there
>>>> + * is not a huge collapse operation in progress in our back.
>>>> + */
>>>> + pmdval = READ_ONCE(*vmf->pmd);
>>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>>> + goto out;
>>>> +#endif
>>>> +
>>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>> + if (unlikely(!spin_trylock(vmf->ptl))) {
>>>> + local_irq_enable();
>>>> + goto again;
>>>> + }
>>>> +
>>>> + if (vma_has_changed(vmf)) {
>>>> + spin_unlock(vmf->ptl);
>>>> + goto out;
>>>> + }
>>>> +
>>>> + ret = true;
>>>> +out:
>>>> + local_irq_enable();
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static bool pte_map_lock(struct vm_fault *vmf)
>>>> +{
>>>> + bool ret = false;
>>>> + pte_t *pte;
>>>> + spinlock_t *ptl;
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> + pmd_t pmdval;
>>>> +#endif
>>>> +
>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>> + vmf->address, &vmf->ptl);
>>>> + return true;
>>>> + }
>>>> +
>>>> + /*
>>>> + * The first vma_has_changed() guarantees the page-tables are still
>>>> + * valid, having IRQs disabled ensures they stay around, hence the
>>>> + * second vma_has_changed() to make sure they are still valid once
>>>> + * we've got the lock. After that a concurrent zap_pte_range() will
>>>> + * block on the PTL and thus we're safe.
>>>> + */
>>>> +again:
>>>> + local_irq_disable();
>>>> + if (vma_has_changed(vmf))
>>>> + goto out;
>>>> +
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> + /*
>>>> + * We check if the pmd value is still the same to ensure that there
>>>> + * is not a huge collapse operation in progress in our back.
>>>> + */
>>>> + pmdval = READ_ONCE(*vmf->pmd);
>>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>>> + goto out;
>>>> +#endif
>>>> +
>>>> + /*
>>>> + * Same as pte_offset_map_lock() except that we call
>>>> + * spin_trylock() in place of spin_lock() to avoid race with
>>>> + * unmap path which may have the lock and wait for this CPU
>>>> + * to invalidate TLB but this CPU has irq disabled.
>>>> + * Since we are in a speculative patch, accept it could fail
>>>> + */
>>>> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>> + pte = pte_offset_map(vmf->pmd, vmf->address);
>>>> + if (unlikely(!spin_trylock(ptl))) {
>>>> + pte_unmap(pte);
>>>> + local_irq_enable();
>>>> + goto again;
>>>> + }
>>>> +
>>>> + if (vma_has_changed(vmf)) {
>>>> + pte_unmap_unlock(pte, ptl);
>>>> + goto out;
>>>> + }
>>>> +
>>>> + vmf->pte = pte;
>>>> + vmf->ptl = ptl;
>>>> + ret = true;
>>>> +out:
>>>> + local_irq_enable();
>>>> + return ret;
>>>> +}
>>>> +#else
>>>> static inline bool pte_spinlock(struct vm_fault *vmf)
>>>> {
>>>> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>>>> vmf->address, &vmf->ptl);
>>>> return true;
>>>> }
>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>
>>>> /*
>>>> * handle_pte_fault chooses page fault handler according to an entry which was
>>>> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>>> ret = check_stable_address_space(vma->vm_mm);
>>>> if (ret)
>>>> goto unlock;
>>>> + /*
>>>> + * Don't call the userfaultfd during the speculative path.
>>>> + * We already checked for the VMA to not be managed through
>>>> + * userfaultfd, but it may be set in our back once we have lock
>>>> + * the pte. In such a case we can ignore it this time.
>>>> + */
>>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>> + goto setpte;
>>>> /* Deliver the page fault to userland, check inside PT lock */
>>>> if (userfaultfd_missing(vma)) {
>>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>>> goto unlock_and_release;
>>>>
>>>> /* Deliver the page fault to userland, check inside PT lock */
>>>> - if (userfaultfd_missing(vma)) {
>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
>>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>> mem_cgroup_cancel_charge(page, memcg, false);
>>>> put_page(page);
>>>> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>
>>>> if (unlikely(pmd_none(*vmf->pmd))) {
>>>> /*
>>>> + * In the case of the speculative page fault handler we abort
>>>> + * the speculative path immediately as the pmd is probably
>>>> + * in the way to be converted in a huge one. We will try
>>>> + * again holding the mmap_sem (which implies that the collapse
>>>> + * operation is done).
>>>> + */
>>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>> + return VM_FAULT_RETRY;
>>>> + /*
>>>> * Leave __pte_alloc() until later: because vm_ops->fault may
>>>> * want to allocate huge page, and if we expose page table
>>>> * for an instant, it will be difficult to retract from
>>>> * concurrent faults and from rmap lookups.
>>>> */
>>>> vmf->pte = NULL;
>>>> - } else {
>>>> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>> /* See comment in pte_alloc_one_map() */
>>>> if (pmd_devmap_trans_unstable(vmf->pmd))
>>>> return 0;
>>>> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>> * pmd from under us anymore at this point because we hold the
>>>> * mmap_sem read mode and khugepaged takes it in write mode.
>>>> * So now it's safe to run pte_offset_map().
>>>> + * This is not applicable to the speculative page fault handler
>>>> + * but in that case, the pte is fetched earlier in
>>>> + * handle_speculative_fault().
>>>> */
>>>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>>>> vmf->orig_pte = *vmf->pte;
>>>> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>> if (!vmf->pte) {
>>>> if (vma_is_anonymous(vmf->vma))
>>>> return do_anonymous_page(vmf);
>>>> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>> + return VM_FAULT_RETRY;
>>>> else
>>>> return do_fault(vmf);
>>>> }
>>>> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
>>>> if (!vmf.pmd)
>>>> return VM_FAULT_OOM;
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
>>>> +#endif
>>>> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
>>>> ret = create_huge_pmd(&vmf);
>>>> if (!(ret & VM_FAULT_FALLBACK))
>>>> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>> return handle_pte_fault(&vmf);
>>>> }
>>>>
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> +/*
>>>> + * Tries to handle the page fault in a speculative way, without grabbing the
>>>> + * mmap_sem.
>>>> + */
>>>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>>> + unsigned int flags)
>>>> +{
>>>> + struct vm_fault vmf = {
>>>> + .address = address,
>>>> + };
>>>> + pgd_t *pgd, pgdval;
>>>> + p4d_t *p4d, p4dval;
>>>> + pud_t pudval;
>>>> + int seq, ret = VM_FAULT_RETRY;
>>>> + struct vm_area_struct *vma;
>>>> +#ifdef CONFIG_NUMA
>>>> + struct mempolicy *pol;
>>>> +#endif
>>>> +
>>>> + /* Clear flags that may lead to releasing the mmap_sem to retry */
>>>> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>>>> + flags |= FAULT_FLAG_SPECULATIVE;
>>>> +
>>>> + vma = get_vma(mm, address);
>>>> + if (!vma)
>>>> + return ret;
>>>> +
>>>> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>>>> + if (seq & 1)
>>>> + goto out_put;
>>>> +
>>>> + /*
>>>> + * Can't call vm_ops services as we don't know what they would do
>>>> + * with the VMA.
>>>> + * This includes huge pages from hugetlbfs.
>>>> + */
>>>> + if (vma->vm_ops)
>>>> + goto out_put;
>>>> +
>>> Hi Laurent
>>>
>>> I think that most page faults will leave here. Is there any case that needs to be skipped?
>>> I have tested the following patch, and it works well.
>> Hi Zhong,
>>
>> Well, this will allow file mappings to be handled in a speculative way, but
>> that's a bit dangerous today as there is no guarantee that the
>> vm_ops.vm_fault() operation will be fair.
>>
>> In the case of anonymous file mappings that's often not a problem, depending
>> on the underlying file system, but there are so many cases to check that it
>> is hard to say this can be done in a speculative way as is.
> This patch says that SPF just handles anonymous pages, but I find that
> do_swap_page may also release the mmap_sem without FAULT_FLAG_RETRY_NOWAIT.
> Why is it safe to handle that case? I think it is similar to the file page
> case. Maybe I am missing something.
do_swap_page() may release the mmap_sem through the call to
__lock_page_or_retry(), but this can only happen if FAULT_FLAG_ALLOW_RETRY or
FAULT_FLAG_KILLABLE is set, and both are cleared in __handle_speculative_fault().
>
> I tested the patches and found that only 18% of the page faults enter the
> speculative page fault path during process startup. As I said, most page
> faults will be handled by ops->fault. I do not know how the data you posted
> was obtained.
I do agree that handling file mappings will be required, but this will add more
complexity to this series, since we need a way for drivers to declare that they
are compatible with the speculative path.
Maybe I should give it a try in the next send.
For my information, what performance improvement did you see when handling
file page faults this way?
Thanks,
Laurent.
>
>
> Thanks
> zhong jiang
>> The huge work to do is to double-check that all the code called by
>> vm_ops.fault() is not dealing with the mmap_sem, which could be handled using
>> FAULT_FLAG_RETRY_NOWAIT; care is also needed about the resources that code
>> is managing, as it may assume it is under the protection of the mmap_sem in
>> read mode, and that assumption can be implicit.
>>
>> Cheers,
>> Laurent.
>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 936128b..9bc1545 100644
>>> @@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
>>> if (!fe->pte) {
>>> if (vma_is_anonymous(fe->vma))
>>> return do_anonymous_page(fe);
>>> - else if (fe->flags & FAULT_FLAG_SPECULATIVE)
>>> - return VM_FAULT_RETRY;
>>> else
>>> return do_fault(fe);
>>> }
>>> @@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>> goto out_put;
>>> }
>>> /*
>>> - * Can't call vm_ops services as we don't know what they would do
>>> - * with the VMA.
>>> - * This includes huge pages from hugetlbfs.
>>> - */
>>> - if (vma->vm_ops) {
>>> - trace_spf_vma_notsup(_RET_IP_, vma, address);
>>> - goto out_put;
>>> - }
>>>
>>>
>>> Thanks
>>> zhong jiang
>>>> + /*
>>>> + * __anon_vma_prepare() requires the mmap_sem to be held
>>>> + * because vm_next and vm_prev must be safe. This can't be guaranteed
>>>> + * in the speculative path.
>>>> + */
>>>> + if (unlikely(!vma->anon_vma))
>>>> + goto out_put;
>>>> +
>>>> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
>>>> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>>>> +
>>>> + /* Can't call userland page fault handler in the speculative path */
>>>> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>>>> + goto out_put;
>>>> +
>>>> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>>>> + /*
>>>> + * This could be detected by checking the address against the VMA's
>>>> + * boundaries, but we want to trace it as not supported instead
>>>> + * of changed.
>>>> + */
>>>> + goto out_put;
>>>> +
>>>> + if (address < READ_ONCE(vma->vm_start)
>>>> + || READ_ONCE(vma->vm_end) <= address)
>>>> + goto out_put;
>>>> +
>>>> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>>>> + flags & FAULT_FLAG_INSTRUCTION,
>>>> + flags & FAULT_FLAG_REMOTE)) {
>>>> + ret = VM_FAULT_SIGSEGV;
>>>> + goto out_put;
>>>> + }
>>>> +
>>>> + /* This one is required to check that the VMA has write access set */
>>>> + if (flags & FAULT_FLAG_WRITE) {
>>>> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>>>> + ret = VM_FAULT_SIGSEGV;
>>>> + goto out_put;
>>>> + }
>>>> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>>>> + ret = VM_FAULT_SIGSEGV;
>>>> + goto out_put;
>>>> + }
>>>> +
>>>> +#ifdef CONFIG_NUMA
>>>> + /*
>>>> + * MPOL_INTERLEAVE implies additional checks in
>>>> + * mpol_misplaced() which are not compatible with the
>>>> + * speculative page fault processing.
>>>> + */
>>>> + pol = __get_vma_policy(vma, address);
>>>> + if (!pol)
>>>> + pol = get_task_policy(current);
>>>> + if (pol && pol->mode == MPOL_INTERLEAVE)
>>>> + goto out_put;
>>>> +#endif
>>>> +
>>>> + /*
>>>> + * Do a speculative lookup of the PTE entry.
>>>> + */
>>>> + local_irq_disable();
>>>> + pgd = pgd_offset(mm, address);
>>>> + pgdval = READ_ONCE(*pgd);
>>>> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>>>> + goto out_walk;
>>>> +
>>>> + p4d = p4d_offset(pgd, address);
>>>> + p4dval = READ_ONCE(*p4d);
>>>> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>>>> + goto out_walk;
>>>> +
>>>> + vmf.pud = pud_offset(p4d, address);
>>>> + pudval = READ_ONCE(*vmf.pud);
>>>> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>>>> + goto out_walk;
>>>> +
>>>> + /* Huge pages at PUD level are not supported. */
>>>> + if (unlikely(pud_trans_huge(pudval)))
>>>> + goto out_walk;
>>>> +
>>>> + vmf.pmd = pmd_offset(vmf.pud, address);
>>>> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>>>> + /*
>>>> + * pmd_none could mean that a hugepage collapse is in progress
>>>> + * behind our back, as collapse_huge_page() marks it before
>>>> + * invalidating the pte (which is done once the IPI is caught
>>>> + * by all CPUs and we have interrupts disabled).
>>>> + * For this reason we cannot handle THP in a speculative way, since we
>>>> + * can't safely identify an in-progress collapse operation done behind
>>>> + * our back on that PMD.
>>>> + * Regarding the order of the following checks, see the comment in
>>>> + * pmd_devmap_trans_unstable()
>>>> + */
>>>> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>>>> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>>>> + is_swap_pmd(vmf.orig_pmd)))
>>>> + goto out_walk;
>>>> +
>>>> + /*
>>>> + * The above does not allocate/instantiate page-tables because doing so
>>>> + * would lead to the possibility of instantiating page-tables after
>>>> + * free_pgtables() -- and consequently leaking them.
>>>> + *
>>>> + * The result is that we take at least one !speculative fault per PMD
>>>> + * in order to instantiate it.
>>>> + */
>>>> +
>>>> + vmf.pte = pte_offset_map(vmf.pmd, address);
>>>> + vmf.orig_pte = READ_ONCE(*vmf.pte);
>>>> + barrier(); /* See comment in handle_pte_fault() */
>>>> + if (pte_none(vmf.orig_pte)) {
>>>> + pte_unmap(vmf.pte);
>>>> + vmf.pte = NULL;
>>>> + }
>>>> +
>>>> + vmf.vma = vma;
>>>> + vmf.pgoff = linear_page_index(vma, address);
>>>> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
>>>> + vmf.sequence = seq;
>>>> + vmf.flags = flags;
>>>> +
>>>> + local_irq_enable();
>>>> +
>>>> + /*
>>>> + * We need to re-validate the VMA after checking the bounds, otherwise
>>>> + * we might have a false positive on the bounds.
>>>> + */
>>>> + if (read_seqcount_retry(&vma->vm_sequence, seq))
>>>> + goto out_put;
>>>> +
>>>> + mem_cgroup_oom_enable();
>>>> + ret = handle_pte_fault(&vmf);
>>>> + mem_cgroup_oom_disable();
>>>> +
>>>> + put_vma(vma);
>>>> +
>>>> + /*
>>>> + * The task may have entered a memcg OOM situation but
>>>> + * if the allocation error was handled gracefully (no
>>>> + * VM_FAULT_OOM), there is no need to kill anything.
>>>> + * Just clean up the OOM state peacefully.
>>>> + */
>>>> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>>>> + mem_cgroup_oom_synchronize(false);
>>>> + return ret;
>>>> +
>>>> +out_walk:
>>>> + local_irq_enable();
>>>> +out_put:
>>>> + put_vma(vma);
>>>> + return ret;
>>>> +}
>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>> +
>>>> /*
>>>> * By the time we get here, we already hold the mm semaphore
>>>> *
>>>
>>
>> .
>>
>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
2018-07-25 10:44 ` Laurent Dufour
@ 2018-07-25 11:23 ` zhong jiang
0 siblings, 0 replies; 106+ messages in thread
From: zhong jiang @ 2018-07-25 11:23 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 2018/7/25 18:44, Laurent Dufour wrote:
>
> On 25/07/2018 11:04, zhong jiang wrote:
>> On 2018/7/25 0:10, Laurent Dufour wrote:
>>> On 24/07/2018 16:26, zhong jiang wrote:
>>>> On 2018/5/17 19:06, Laurent Dufour wrote:
>>>>> From: Peter Zijlstra <peterz@infradead.org>
>>>>>
>>>>> Provide infrastructure to do a speculative fault (not holding
>>>>> mmap_sem).
>>>>>
>>>>> The not holding of mmap_sem means we can race against VMA
>>>>> change/removal and page-table destruction. We use the SRCU VMA freeing
>>>>> to keep the VMA around. We use the VMA seqcount to detect change
>>>>> (including umapping / page-table deletion) and we use gup_fast() style
>>>>> page-table walking to deal with page-table races.
>>>>>
>>>>> Once we've obtained the page and are ready to update the PTE, we
>>>>> validate if the state we started the fault with is still valid, if
>>>>> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
>>>>> PTE and we're done.
>>>>>
>>>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>>>
>>>>> [Manage the newly introduced pte_spinlock() for speculative page
>>>>> fault to fail if the VMA is touched in our back]
>>>>> [Rename vma_is_dead() to vma_has_changed() and declare it here]
>>>>> [Fetch p4d and pud]
>>>>> [Set vmd.sequence in __handle_mm_fault()]
>>>>> [Abort speculative path when handle_userfault() has to be called]
>>>>> [Add additional VMA's flags checks in handle_speculative_fault()]
>>>>> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
>>>>> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
>>>>> [Remove warning comment about waiting for !seq&1 since we don't want
>>>>> to wait]
>>>>> [Remove warning about no huge page support, mention it explicitly]
>>>>> [Don't call do_fault() in the speculative path as __do_fault() calls
>>>>> vma->vm_ops->fault() which may want to release mmap_sem]
>>>>> [Only vm_fault pointer argument for vma_has_changed()]
>>>>> [Fix check against huge page, calling pmd_trans_huge()]
>>>>> [Use READ_ONCE() when reading VMA's fields in the speculative path]
>>>>> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
>>>>> processing done in vm_normal_page()]
>>>>> [Check that vma->anon_vma is already set when starting the speculative
>>>>> path]
>>>>> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
>>>>> the processing done in mpol_misplaced()]
>>>>> [Don't support VMA growing up or down]
>>>>> [Move check on vm_sequence just before calling handle_pte_fault()]
>>>>> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
>>>>> [Add mem cgroup oom check]
>>>>> [Use READ_ONCE to access p*d entries]
>>>>> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
>>>>> [Don't fetch pte again in handle_pte_fault() when running the speculative
>>>>> path]
>>>>> [Check PMD against concurrent collapsing operation]
>>>>> [Try spin lock the pte during the speculative path to avoid deadlock with
>>>>> other CPU's invalidating the TLB and requiring this CPU to catch the
>>>>> inter processor's interrupt]
>>>>> [Move define of FAULT_FLAG_SPECULATIVE here]
>>>>> [Introduce __handle_speculative_fault() and add a check against
>>>>> mm->mm_users in handle_speculative_fault() defined in mm.h]
>>>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>>>> ---
>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>> include/linux/mm.h | 30 ++++
>>>>> include/linux/pagemap.h | 4 +-
>>>>> mm/internal.h | 16 +-
>>>>> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
>>>>> 5 files changed, 385 insertions(+), 7 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>>>>> index 0660a03d37d9..9e25283d6fc9 100644
>>>>> --- a/include/linux/hugetlb_inline.h
>>>>> +++ b/include/linux/hugetlb_inline.h
>>>>> @@ -8,7 +8,7 @@
>>>>>
>>>>> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>>>>> {
>>>>> - return !!(vma->vm_flags & VM_HUGETLB);
>>>>> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
>>>>> }
>>>>>
>>>>> #else
>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>> index 05cbba70104b..31acf98a7d92 100644
>>>>> --- a/include/linux/mm.h
>>>>> +++ b/include/linux/mm.h
>>>>> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
>>>>> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
>>>>> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
>>>>> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
>>>>> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>>>>>
>>>>> #define FAULT_FLAG_TRACE \
>>>>> { FAULT_FLAG_WRITE, "WRITE" }, \
>>>>> @@ -343,6 +344,10 @@ struct vm_fault {
>>>>> gfp_t gfp_mask; /* gfp mask to be used for allocations */
>>>>> pgoff_t pgoff; /* Logical page offset based on vma */
>>>>> unsigned long address; /* Faulting virtual address */
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> + unsigned int sequence;
>>>>> + pmd_t orig_pmd; /* value of PMD at the time of fault */
>>>>> +#endif
>>>>> pmd_t *pmd; /* Pointer to pmd entry matching
>>>>> * the 'address' */
>>>>> pud_t *pud; /* Pointer to pud entry matching
>>>>> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
>>>>> #ifdef CONFIG_MMU
>>>>> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>>> unsigned int flags);
>>>>> +
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> +extern int __handle_speculative_fault(struct mm_struct *mm,
>>>>> + unsigned long address,
>>>>> + unsigned int flags);
>>>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>>>> + unsigned long address,
>>>>> + unsigned int flags)
>>>>> +{
>>>>> + /*
>>>>> + * Try speculative page fault for multithreaded user space task only.
>>>>> + */
>>>>> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
>>>>> + return VM_FAULT_RETRY;
>>>>> + return __handle_speculative_fault(mm, address, flags);
>>>>> +}
>>>>> +#else
>>>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>>>> + unsigned long address,
>>>>> + unsigned int flags)
>>>>> +{
>>>>> + return VM_FAULT_RETRY;
>>>>> +}
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>> +
>>>>> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
>>>>> unsigned long address, unsigned int fault_flags,
>>>>> bool *unlocked);
>>>>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>>>>> index b1bd2186e6d2..6e2aa4e79af7 100644
>>>>> --- a/include/linux/pagemap.h
>>>>> +++ b/include/linux/pagemap.h
>>>>> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
>>>>> pgoff_t pgoff;
>>>>> if (unlikely(is_vm_hugetlb_page(vma)))
>>>>> return linear_hugepage_index(vma, address);
>>>>> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
>>>>> - pgoff += vma->vm_pgoff;
>>>>> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
>>>>> + pgoff += READ_ONCE(vma->vm_pgoff);
>>>>> return pgoff;
>>>>> }
>>>>>
>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>> index fb2667b20f0a..10b188c87fa4 100644
>>>>> --- a/mm/internal.h
>>>>> +++ b/mm/internal.h
>>>>> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
>>>>> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
>>>>> unsigned long addr);
>>>>> extern void put_vma(struct vm_area_struct *vma);
>>>>> -#endif
>>>>> +
>>>>> +static inline bool vma_has_changed(struct vm_fault *vmf)
>>>>> +{
>>>>> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
>>>>> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
>>>>> +
>>>>> + /*
>>>>> + * Matches both the wmb in write_seqlock_{begin,end}() and
>>>>> + * the wmb in vma_rb_erase().
>>>>> + */
>>>>> + smp_rmb();
>>>>> +
>>>>> + return ret || seq != vmf->sequence;
>>>>> +}
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>>
>>>>> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>>>> unsigned long floor, unsigned long ceiling);
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index ab32b0b4bd69..7bbbb8c7b9cd 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>>>>> if (page)
>>>>> dump_page(page, "bad pte");
>>>>> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
>>>>> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
>>>>> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
>>>>> + mapping, index);
>>>>> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
>>>>> vma->vm_file,
>>>>> vma->vm_ops ? vma->vm_ops->fault : NULL,
>>>>> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>>>>
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> +static bool pte_spinlock(struct vm_fault *vmf)
>>>>> +{
>>>>> + bool ret = false;
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + pmd_t pmdval;
>>>>> +#endif
>>>>> +
>>>>> + /* Check if vma is still valid */
>>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> + spin_lock(vmf->ptl);
>>>>> + return true;
>>>>> + }
>>>>> +
>>>>> +again:
>>>>> + local_irq_disable();
>>>>> + if (vma_has_changed(vmf))
>>>>> + goto out;
>>>>> +
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + /*
>>>>> + * We check if the pmd value is still the same to ensure that there
>>>>> + * is not a huge collapse operation in progress behind our back.
>>>>> + */
>>>>> + pmdval = READ_ONCE(*vmf->pmd);
>>>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>>>> + goto out;
>>>>> +#endif
>>>>> +
>>>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> + if (unlikely(!spin_trylock(vmf->ptl))) {
>>>>> + local_irq_enable();
>>>>> + goto again;
>>>>> + }
>>>>> +
>>>>> + if (vma_has_changed(vmf)) {
>>>>> + spin_unlock(vmf->ptl);
>>>>> + goto out;
>>>>> + }
>>>>> +
>>>>> + ret = true;
>>>>> +out:
>>>>> + local_irq_enable();
>>>>> + return ret;
>>>>> +}
>>>>> +
>>>>> +static bool pte_map_lock(struct vm_fault *vmf)
>>>>> +{
>>>>> + bool ret = false;
>>>>> + pte_t *pte;
>>>>> + spinlock_t *ptl;
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + pmd_t pmdval;
>>>>> +#endif
>>>>> +
>>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>>> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>> + vmf->address, &vmf->ptl);
>>>>> + return true;
>>>>> + }
>>>>> +
>>>>> + /*
>>>>> + * The first vma_has_changed() guarantees the page-tables are still
>>>>> + * valid, having IRQs disabled ensures they stay around, hence the
>>>>> + * second vma_has_changed() to make sure they are still valid once
>>>>> + * we've got the lock. After that a concurrent zap_pte_range() will
>>>>> + * block on the PTL and thus we're safe.
>>>>> + */
>>>>> +again:
>>>>> + local_irq_disable();
>>>>> + if (vma_has_changed(vmf))
>>>>> + goto out;
>>>>> +
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + /*
>>>>> + * We check if the pmd value is still the same to ensure that there
>>>>> + * is not a huge collapse operation in progress behind our back.
>>>>> + */
>>>>> + pmdval = READ_ONCE(*vmf->pmd);
>>>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>>>> + goto out;
>>>>> +#endif
>>>>> +
>>>>> + /*
>>>>> + * Same as pte_offset_map_lock() except that we call
>>>>> + * spin_trylock() in place of spin_lock() to avoid racing with
>>>>> + * the unmap path, which may hold the lock while waiting for this
>>>>> + * CPU to invalidate the TLB while this CPU has irqs disabled.
>>>>> + * Since we are on the speculative path, accept that it could fail.
>>>>> + */
>>>>> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> + pte = pte_offset_map(vmf->pmd, vmf->address);
>>>>> + if (unlikely(!spin_trylock(ptl))) {
>>>>> + pte_unmap(pte);
>>>>> + local_irq_enable();
>>>>> + goto again;
>>>>> + }
>>>>> +
>>>>> + if (vma_has_changed(vmf)) {
>>>>> + pte_unmap_unlock(pte, ptl);
>>>>> + goto out;
>>>>> + }
>>>>> +
>>>>> + vmf->pte = pte;
>>>>> + vmf->ptl = ptl;
>>>>> + ret = true;
>>>>> +out:
>>>>> + local_irq_enable();
>>>>> + return ret;
>>>>> +}
>>>>> +#else
>>>>> static inline bool pte_spinlock(struct vm_fault *vmf)
>>>>> {
>>>>> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>>>>> vmf->address, &vmf->ptl);
>>>>> return true;
>>>>> }
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>>
>>>>> /*
>>>>> * handle_pte_fault chooses page fault handler according to an entry which was
>>>>> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>>>> ret = check_stable_address_space(vma->vm_mm);
>>>>> if (ret)
>>>>> goto unlock;
>>>>> + /*
>>>>> + * Don't call the userfaultfd during the speculative path.
>>>>> + * We already checked that the VMA is not managed through
>>>>> + * userfaultfd, but it may be set behind our back once we have
>>>>> + * locked the pte. In such a case we can ignore it this time.
>>>>> + */
>>>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>>> + goto setpte;
>>>>> /* Deliver the page fault to userland, check inside PT lock */
>>>>> if (userfaultfd_missing(vma)) {
>>>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>>>> goto unlock_and_release;
>>>>>
>>>>> /* Deliver the page fault to userland, check inside PT lock */
>>>>> - if (userfaultfd_missing(vma)) {
>>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
>>>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>> mem_cgroup_cancel_charge(page, memcg, false);
>>>>> put_page(page);
>>>>> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>>
>>>>> if (unlikely(pmd_none(*vmf->pmd))) {
>>>>> /*
>>>>> + * In the case of the speculative page fault handler we abort
>>>>> + * the speculative path immediately as the pmd is probably
>>>>> + * about to be converted into a huge one. We will try
>>>>> + * again while holding the mmap_sem (which implies that the collapse
>>>>> + * operation is done).
>>>>> + */
>>>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>>> + return VM_FAULT_RETRY;
>>>>> + /*
>>>>> * Leave __pte_alloc() until later: because vm_ops->fault may
>>>>> * want to allocate huge page, and if we expose page table
>>>>> * for an instant, it will be difficult to retract from
>>>>> * concurrent faults and from rmap lookups.
>>>>> */
>>>>> vmf->pte = NULL;
>>>>> - } else {
>>>>> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>>> /* See comment in pte_alloc_one_map() */
>>>>> if (pmd_devmap_trans_unstable(vmf->pmd))
>>>>> return 0;
>>>>> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>> * pmd from under us anymore at this point because we hold the
>>>>> * mmap_sem read mode and khugepaged takes it in write mode.
>>>>> * So now it's safe to run pte_offset_map().
>>>>> + * This is not applicable to the speculative page fault handler
>>>>> + * but in that case, the pte is fetched earlier in
>>>>> + * handle_speculative_fault().
>>>>> */
>>>>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>>>>> vmf->orig_pte = *vmf->pte;
>>>>> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>> if (!vmf->pte) {
>>>>> if (vma_is_anonymous(vmf->vma))
>>>>> return do_anonymous_page(vmf);
>>>>> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>>> + return VM_FAULT_RETRY;
>>>>> else
>>>>> return do_fault(vmf);
>>>>> }
>>>>> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>>> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
>>>>> if (!vmf.pmd)
>>>>> return VM_FAULT_OOM;
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
>>>>> +#endif
>>>>> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
>>>>> ret = create_huge_pmd(&vmf);
>>>>> if (!(ret & VM_FAULT_FALLBACK))
>>>>> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>>> return handle_pte_fault(&vmf);
>>>>> }
>>>>>
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> +/*
>>>>> + * Tries to handle the page fault in a speculative way, without grabbing the
>>>>> + * mmap_sem.
>>>>> + */
>>>>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>>>> + unsigned int flags)
>>>>> +{
>>>>> + struct vm_fault vmf = {
>>>>> + .address = address,
>>>>> + };
>>>>> + pgd_t *pgd, pgdval;
>>>>> + p4d_t *p4d, p4dval;
>>>>> + pud_t pudval;
>>>>> + int seq, ret = VM_FAULT_RETRY;
>>>>> + struct vm_area_struct *vma;
>>>>> +#ifdef CONFIG_NUMA
>>>>> + struct mempolicy *pol;
>>>>> +#endif
>>>>> +
>>>>> + /* Clear flags that may lead to releasing the mmap_sem to retry */
>>>>> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>>>>> + flags |= FAULT_FLAG_SPECULATIVE;
>>>>> +
>>>>> + vma = get_vma(mm, address);
>>>>> + if (!vma)
>>>>> + return ret;
>>>>> +
>>>>> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>>>>> + if (seq & 1)
>>>>> + goto out_put;
>>>>> +
>>>>> + /*
>>>>> + * Can't call vm_ops services as we don't know what they would do
>>>>> + * with the VMA.
>>>>> + * This includes huge pages from hugetlbfs.
>>>>> + */
>>>>> + if (vma->vm_ops)
>>>>> + goto out_put;
>>>>> +
>>>> Hi Laurent
>>>>
>>>> I think that most page faults will leave here. Is there any case that needs to be skipped?
>>>> I have tested the following patch, and it works well.
>>> Hi Zhong,
>>>
>>> Well, this will allow file mappings to be handled in a speculative way, but
>>> that's a bit dangerous today as there is no guarantee that the
>>> vm_ops.vm_fault() operation will be fair.
>>>
>>> In the case of anonymous file mappings that's often not a problem, depending
>>> on the underlying file system, but there are so many cases to check that it
>>> is hard to say this can be done in a speculative way as is.
>> This patch says that SPF just handles anonymous pages, but I find that
>> do_swap_page may also release the mmap_sem without FAULT_FLAG_RETRY_NOWAIT.
>> Why is it safe to handle that case? I think it is similar to the file page
>> case. Maybe I am missing something.
> do_swap_page() may release the mmap_sem through the call to
> __lock_page_or_retry(), but this can only happen if FAULT_FLAG_ALLOW_RETRY or
> FAULT_FLAG_KILLABLE is set, and both are cleared in __handle_speculative_fault().
For SPF, indeed. Thank you for the clarification.
>> I tested the patches and found that only 18% of the page faults enter the
>> speculative page fault path during process startup. As I said, most page
>> faults will be handled by ops->fault. I do not know how the data you posted
>> was obtained.
> I do agree that handling file mappings will be required, but this will add more
> complexity to this series, since we need a way for drivers to declare that they
> are compatible with the speculative path.
As mentioned above, the speculative page fault path does not pass
FAULT_FLAG_ALLOW_RETRY. In other words, a file page fault will not release the
mmap_sem on the SPF path. But I am still not quite clear on what drivers should
do to be compatible with the speculative path. The speculative path should not
touch the mmap_sem for filemap_fault.
Thanks,
zhong jiang
> Maybe I should give it a try in the next send.
Ok, I will try.
> For my information, what performance improvement did you see when handling
> file page faults this way?
I am sorry about that; it is data from Ganesh's test of launch time on Android.
> Thanks,
> Laurent.
>
>>
>> Thanks
>> zhong jiang
>>> The huge work to do is to double-check that all the code called by
>>> vm_ops.fault() is not dealing with the mmap_sem, which could be handled using
>>> FAULT_FLAG_RETRY_NOWAIT; care is also needed about the resources that code
>>> is managing, as it may assume it is under the protection of the mmap_sem in
>>> read mode, and that assumption can be implicit.
>>>
>>> Cheers,
>>> Laurent.
>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 936128b..9bc1545 100644
>>>> @@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
>>>> if (!fe->pte) {
>>>> if (vma_is_anonymous(fe->vma))
>>>> return do_anonymous_page(fe);
>>>> - else if (fe->flags & FAULT_FLAG_SPECULATIVE)
>>>> - return VM_FAULT_RETRY;
>>>> else
>>>> return do_fault(fe);
>>>> }
>>>> @@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>>> goto out_put;
>>>> }
>>>> /*
>>>> - * Can't call vm_ops services as we don't know what they would do
>>>> - * with the VMA.
>>>> - * This includes huge pages from hugetlbfs.
>>>> - */
>>>> - if (vma->vm_ops) {
>>>> - trace_spf_vma_notsup(_RET_IP_, vma, address);
>>>> - goto out_put;
>>>> - }
>>>>
>>>>
>>>> Thanks
>>>> zhong jiang
>>>>> + /*
>>>>> + * __anon_vma_prepare() requires the mmap_sem to be held
>>>>> + * because vm_next and vm_prev must be safe. This can't be guaranteed
>>>>> + * in the speculative path.
>>>>> + */
>>>>> + if (unlikely(!vma->anon_vma))
>>>>> + goto out_put;
>>>>> +
>>>>> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
>>>>> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>>>>> +
>>>>> + /* Can't call userland page fault handler in the speculative path */
>>>>> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>>>>> + goto out_put;
>>>>> +
>>>>> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>>>>> + /*
>>>>> + * This could be detected by checking the address against the VMA's
>>>>> + * boundaries, but we want to trace it as not supported instead
>>>>> + * of changed.
>>>>> + */
>>>>> + goto out_put;
>>>>> +
>>>>> + if (address < READ_ONCE(vma->vm_start)
>>>>> + || READ_ONCE(vma->vm_end) <= address)
>>>>> + goto out_put;
>>>>> +
>>>>> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>>>>> + flags & FAULT_FLAG_INSTRUCTION,
>>>>> + flags & FAULT_FLAG_REMOTE)) {
>>>>> + ret = VM_FAULT_SIGSEGV;
>>>>> + goto out_put;
>>>>> + }
>>>>> +
>>>>> + /* This one is required to check that the VMA has write access set */
>>>>> + if (flags & FAULT_FLAG_WRITE) {
>>>>> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>>>>> + ret = VM_FAULT_SIGSEGV;
>>>>> + goto out_put;
>>>>> + }
>>>>> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>>>>> + ret = VM_FAULT_SIGSEGV;
>>>>> + goto out_put;
>>>>> + }
>>>>> +
>>>>> +#ifdef CONFIG_NUMA
>>>>> + /*
>>>>> + * MPOL_INTERLEAVE implies additional checks in
>>>>> + * mpol_misplaced() which are not compatible with the
>>>>> + * speculative page fault processing.
>>>>> + */
>>>>> + pol = __get_vma_policy(vma, address);
>>>>> + if (!pol)
>>>>> + pol = get_task_policy(current);
>>>>> + if (pol && pol->mode == MPOL_INTERLEAVE)
>>>>> + goto out_put;
>>>>> +#endif
>>>>> +
>>>>> + /*
>>>>> + * Do a speculative lookup of the PTE entry.
>>>>> + */
>>>>> + local_irq_disable();
>>>>> + pgd = pgd_offset(mm, address);
>>>>> + pgdval = READ_ONCE(*pgd);
>>>>> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + p4d = p4d_offset(pgd, address);
>>>>> + p4dval = READ_ONCE(*p4d);
>>>>> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + vmf.pud = pud_offset(p4d, address);
>>>>> + pudval = READ_ONCE(*vmf.pud);
>>>>> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + /* Huge pages at PUD level are not supported. */
>>>>> + if (unlikely(pud_trans_huge(pudval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + vmf.pmd = pmd_offset(vmf.pud, address);
>>>>> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>>>>> + /*
>>>>> + * pmd_none could mean that a hugepage collapse is in progress
>>>>> + * behind our back as collapse_huge_page() marks it before
>>>>> + * invalidating the pte (which is done once the IPI is caught
>>>>> + * by all CPUs and we have interrupts disabled).
>>>>> + * For this reason we cannot handle THP in a speculative way since we
>>>>> + * can't safely identify an in-progress collapse operation done behind
>>>>> + * our back on that PMD.
>>>>> + * Regarding the order of the following checks, see comment in
>>>>> + * pmd_devmap_trans_unstable()
>>>>> + */
>>>>> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>>>>> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>>>>> + is_swap_pmd(vmf.orig_pmd)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + /*
>>>>> + * The above does not allocate/instantiate page-tables because doing so
>>>>> + * would lead to the possibility of instantiating page-tables after
>>>>> + * free_pgtables() -- and consequently leaking them.
>>>>> + *
>>>>> + * The result is that we take at least one !speculative fault per PMD
>>>>> + * in order to instantiate it.
>>>>> + */
>>>>> +
>>>>> + vmf.pte = pte_offset_map(vmf.pmd, address);
>>>>> + vmf.orig_pte = READ_ONCE(*vmf.pte);
>>>>> + barrier(); /* See comment in handle_pte_fault() */
>>>>> + if (pte_none(vmf.orig_pte)) {
>>>>> + pte_unmap(vmf.pte);
>>>>> + vmf.pte = NULL;
>>>>> + }
>>>>> +
>>>>> + vmf.vma = vma;
>>>>> + vmf.pgoff = linear_page_index(vma, address);
>>>>> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
>>>>> + vmf.sequence = seq;
>>>>> + vmf.flags = flags;
>>>>> +
>>>>> + local_irq_enable();
>>>>> +
>>>>> + /*
>>>>> + * We need to re-validate the VMA after checking the bounds, otherwise
>>>>> + * we might have a false positive on the bounds.
>>>>> + */
>>>>> + if (read_seqcount_retry(&vma->vm_sequence, seq))
>>>>> + goto out_put;
>>>>> +
>>>>> + mem_cgroup_oom_enable();
>>>>> + ret = handle_pte_fault(&vmf);
>>>>> + mem_cgroup_oom_disable();
>>>>> +
>>>>> + put_vma(vma);
>>>>> +
>>>>> + /*
>>>>> + * The task may have entered a memcg OOM situation but
>>>>> + * if the allocation error was handled gracefully (no
>>>>> + * VM_FAULT_OOM), there is no need to kill anything.
>>>>> + * Just clean up the OOM state peacefully.
>>>>> + */
>>>>> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>>>>> + mem_cgroup_oom_synchronize(false);
>>>>> + return ret;
>>>>> +
>>>>> +out_walk:
>>>>> + local_irq_enable();
>>>>> +out_put:
>>>>> + put_vma(vma);
>>>>> + return ret;
>>>>> +}
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>> +
>>>>> /*
>>>>> * By the time we get here, we already hold the mm semaphore
>>>>> *
>>> .
>>>
>>
>
> .
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 19/26] mm: provide speculative fault infrastructure
@ 2018-07-25 11:23 ` zhong jiang
0 siblings, 0 replies; 106+ messages in thread
From: zhong jiang @ 2018-07-25 11:23 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 2018/7/25 18:44, Laurent Dufour wrote:
>
> On 25/07/2018 11:04, zhong jiang wrote:
>> On 2018/7/25 0:10, Laurent Dufour wrote:
>>> On 24/07/2018 16:26, zhong jiang wrote:
>>>> On 2018/5/17 19:06, Laurent Dufour wrote:
>>>>> From: Peter Zijlstra <peterz@infradead.org>
>>>>>
>>>>> Provide infrastructure to do a speculative fault (not holding
>>>>> mmap_sem).
>>>>>
>>>>> The not holding of mmap_sem means we can race against VMA
>>>>> change/removal and page-table destruction. We use the SRCU VMA freeing
>>>>> to keep the VMA around. We use the VMA seqcount to detect change
>>>>> (including unmapping / page-table deletion) and we use gup_fast() style
>>>>> page-table walking to deal with page-table races.
>>>>>
>>>>> Once we've obtained the page and are ready to update the PTE, we
>>>>> validate if the state we started the fault with is still valid, if
>>>>> not, we'll fail the fault with VM_FAULT_RETRY, otherwise we update the
>>>>> PTE and we're done.
>>>>>
>>>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>>>
>>>>> [Manage the newly introduced pte_spinlock() for speculative page
>>>>> fault to fail if the VMA is touched in our back]
>>>>> [Rename vma_is_dead() to vma_has_changed() and declare it here]
>>>>> [Fetch p4d and pud]
>>>>> [Set vmd.sequence in __handle_mm_fault()]
>>>>> [Abort speculative path when handle_userfault() has to be called]
>>>>> [Add additional VMA's flags checks in handle_speculative_fault()]
>>>>> [Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
>>>>> [Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
>>>>> [Remove warning comment about waiting for !seq&1 since we don't want
>>>>> to wait]
>>>>> [Remove warning about no huge page support, mention it explicitly]
>>>>> [Don't call do_fault() in the speculative path as __do_fault() calls
>>>>> vma->vm_ops->fault() which may want to release mmap_sem]
>>>>> [Only vm_fault pointer argument for vma_has_changed()]
>>>>> [Fix check against huge page, calling pmd_trans_huge()]
>>>>> [Use READ_ONCE() when reading VMA's fields in the speculative path]
>>>>> [Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
>>>>> processing done in vm_normal_page()]
>>>>> [Check that vma->anon_vma is already set when starting the speculative
>>>>> path]
>>>>> [Check for memory policy as we can't support MPOL_INTERLEAVE case due to
>>>>> the processing done in mpol_misplaced()]
>>>>> [Don't support VMA growing up or down]
>>>>> [Move check on vm_sequence just before calling handle_pte_fault()]
>>>>> [Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
>>>>> [Add mem cgroup oom check]
>>>>> [Use READ_ONCE to access p*d entries]
>>>>> [Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
>>>>> [Don't fetch pte again in handle_pte_fault() when running the speculative
>>>>> path]
>>>>> [Check PMD against concurrent collapsing operation]
>>>>> [Try spin lock the pte during the speculative path to avoid deadlock with
>>>>> other CPU's invalidating the TLB and requiring this CPU to catch the
>>>>> inter processor's interrupt]
>>>>> [Move define of FAULT_FLAG_SPECULATIVE here]
>>>>> [Introduce __handle_speculative_fault() and add a check against
>>>>> mm->mm_users in handle_speculative_fault() defined in mm.h]
>>>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>>>> ---
>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>> include/linux/mm.h | 30 ++++
>>>>> include/linux/pagemap.h | 4 +-
>>>>> mm/internal.h | 16 +-
>>>>> mm/memory.c | 340 ++++++++++++++++++++++++++++++++++++++++-
>>>>> 5 files changed, 385 insertions(+), 7 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>>>>> index 0660a03d37d9..9e25283d6fc9 100644
>>>>> --- a/include/linux/hugetlb_inline.h
>>>>> +++ b/include/linux/hugetlb_inline.h
>>>>> @@ -8,7 +8,7 @@
>>>>>
>>>>> static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>>>>> {
>>>>> - return !!(vma->vm_flags & VM_HUGETLB);
>>>>> + return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
>>>>> }
>>>>>
>>>>> #else
>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>> index 05cbba70104b..31acf98a7d92 100644
>>>>> --- a/include/linux/mm.h
>>>>> +++ b/include/linux/mm.h
>>>>> @@ -315,6 +315,7 @@ extern pgprot_t protection_map[16];
>>>>> #define FAULT_FLAG_USER 0x40 /* The fault originated in userspace */
>>>>> #define FAULT_FLAG_REMOTE 0x80 /* faulting for non current tsk/mm */
>>>>> #define FAULT_FLAG_INSTRUCTION 0x100 /* The fault was during an instruction fetch */
>>>>> +#define FAULT_FLAG_SPECULATIVE 0x200 /* Speculative fault, not holding mmap_sem */
>>>>>
>>>>> #define FAULT_FLAG_TRACE \
>>>>> { FAULT_FLAG_WRITE, "WRITE" }, \
>>>>> @@ -343,6 +344,10 @@ struct vm_fault {
>>>>> gfp_t gfp_mask; /* gfp mask to be used for allocations */
>>>>> pgoff_t pgoff; /* Logical page offset based on vma */
>>>>> unsigned long address; /* Faulting virtual address */
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> + unsigned int sequence;
>>>>> + pmd_t orig_pmd; /* value of PMD at the time of fault */
>>>>> +#endif
>>>>> pmd_t *pmd; /* Pointer to pmd entry matching
>>>>> * the 'address' */
>>>>> pud_t *pud; /* Pointer to pud entry matching
>>>>> @@ -1415,6 +1420,31 @@ int invalidate_inode_page(struct page *page);
>>>>> #ifdef CONFIG_MMU
>>>>> extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>>> unsigned int flags);
>>>>> +
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> +extern int __handle_speculative_fault(struct mm_struct *mm,
>>>>> + unsigned long address,
>>>>> + unsigned int flags);
>>>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>>>> + unsigned long address,
>>>>> + unsigned int flags)
>>>>> +{
>>>>> + /*
>>>>> + * Try speculative page fault for multithreaded user space task only.
>>>>> + */
>>>>> + if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
>>>>> + return VM_FAULT_RETRY;
>>>>> + return __handle_speculative_fault(mm, address, flags);
>>>>> +}
>>>>> +#else
>>>>> +static inline int handle_speculative_fault(struct mm_struct *mm,
>>>>> + unsigned long address,
>>>>> + unsigned int flags)
>>>>> +{
>>>>> + return VM_FAULT_RETRY;
>>>>> +}
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>> +
>>>>> extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
>>>>> unsigned long address, unsigned int fault_flags,
>>>>> bool *unlocked);
>>>>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>>>>> index b1bd2186e6d2..6e2aa4e79af7 100644
>>>>> --- a/include/linux/pagemap.h
>>>>> +++ b/include/linux/pagemap.h
>>>>> @@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
>>>>> pgoff_t pgoff;
>>>>> if (unlikely(is_vm_hugetlb_page(vma)))
>>>>> return linear_hugepage_index(vma, address);
>>>>> - pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
>>>>> - pgoff += vma->vm_pgoff;
>>>>> + pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
>>>>> + pgoff += READ_ONCE(vma->vm_pgoff);
>>>>> return pgoff;
>>>>> }
>>>>>
>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>> index fb2667b20f0a..10b188c87fa4 100644
>>>>> --- a/mm/internal.h
>>>>> +++ b/mm/internal.h
>>>>> @@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
>>>>> extern struct vm_area_struct *get_vma(struct mm_struct *mm,
>>>>> unsigned long addr);
>>>>> extern void put_vma(struct vm_area_struct *vma);
>>>>> -#endif
>>>>> +
>>>>> +static inline bool vma_has_changed(struct vm_fault *vmf)
>>>>> +{
>>>>> + int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
>>>>> + unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
>>>>> +
>>>>> + /*
>>>>> + * Matches both the wmb in write_seqlock_{begin,end}() and
>>>>> + * the wmb in vma_rb_erase().
>>>>> + */
>>>>> + smp_rmb();
>>>>> +
>>>>> + return ret || seq != vmf->sequence;
>>>>> +}
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>>
>>>>> void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>>>>> unsigned long floor, unsigned long ceiling);
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index ab32b0b4bd69..7bbbb8c7b9cd 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>>>>> if (page)
>>>>> dump_page(page, "bad pte");
>>>>> pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
>>>>> - (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
>>>>> + (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
>>>>> + mapping, index);
>>>>> pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
>>>>> vma->vm_file,
>>>>> vma->vm_ops ? vma->vm_ops->fault : NULL,
>>>>> @@ -2306,6 +2307,118 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(apply_to_page_range);
>>>>>
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> +static bool pte_spinlock(struct vm_fault *vmf)
>>>>> +{
>>>>> + bool ret = false;
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + pmd_t pmdval;
>>>>> +#endif
>>>>> +
>>>>> + /* Check if vma is still valid */
>>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> + spin_lock(vmf->ptl);
>>>>> + return true;
>>>>> + }
>>>>> +
>>>>> +again:
>>>>> + local_irq_disable();
>>>>> + if (vma_has_changed(vmf))
>>>>> + goto out;
>>>>> +
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + /*
>>>>> + * We check if the pmd value is still the same to ensure that there
>>>>> + * is not a huge collapse operation in progress behind our back.
>>>>> + */
>>>>> + pmdval = READ_ONCE(*vmf->pmd);
>>>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>>>> + goto out;
>>>>> +#endif
>>>>> +
>>>>> + vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> + if (unlikely(!spin_trylock(vmf->ptl))) {
>>>>> + local_irq_enable();
>>>>> + goto again;
>>>>> + }
>>>>> +
>>>>> + if (vma_has_changed(vmf)) {
>>>>> + spin_unlock(vmf->ptl);
>>>>> + goto out;
>>>>> + }
>>>>> +
>>>>> + ret = true;
>>>>> +out:
>>>>> + local_irq_enable();
>>>>> + return ret;
>>>>> +}
>>>>> +
>>>>> +static bool pte_map_lock(struct vm_fault *vmf)
>>>>> +{
>>>>> + bool ret = false;
>>>>> + pte_t *pte;
>>>>> + spinlock_t *ptl;
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + pmd_t pmdval;
>>>>> +#endif
>>>>> +
>>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>>> + vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>> + vmf->address, &vmf->ptl);
>>>>> + return true;
>>>>> + }
>>>>> +
>>>>> + /*
>>>>> + * The first vma_has_changed() guarantees the page-tables are still
>>>>> + * valid, having IRQs disabled ensures they stay around, hence the
>>>>> + * second vma_has_changed() to make sure they are still valid once
>>>>> + * we've got the lock. After that a concurrent zap_pte_range() will
>>>>> + * block on the PTL and thus we're safe.
>>>>> + */
>>>>> +again:
>>>>> + local_irq_disable();
>>>>> + if (vma_has_changed(vmf))
>>>>> + goto out;
>>>>> +
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> + /*
>>>>> + * We check if the pmd value is still the same to ensure that there
>>>>> + * is not a huge collapse operation in progress behind our back.
>>>>> + */
>>>>> + pmdval = READ_ONCE(*vmf->pmd);
>>>>> + if (!pmd_same(pmdval, vmf->orig_pmd))
>>>>> + goto out;
>>>>> +#endif
>>>>> +
>>>>> + /*
>>>>> + * Same as pte_offset_map_lock() except that we call
>>>>> + * spin_trylock() in place of spin_lock() to avoid race with
>>>>> + * unmap path which may have the lock and wait for this CPU
>>>>> + * to invalidate TLB but this CPU has irq disabled.
>>>>> + * Since we are on a speculative path, accept that it could fail
>>>>> + */
>>>>> + ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> + pte = pte_offset_map(vmf->pmd, vmf->address);
>>>>> + if (unlikely(!spin_trylock(ptl))) {
>>>>> + pte_unmap(pte);
>>>>> + local_irq_enable();
>>>>> + goto again;
>>>>> + }
>>>>> +
>>>>> + if (vma_has_changed(vmf)) {
>>>>> + pte_unmap_unlock(pte, ptl);
>>>>> + goto out;
>>>>> + }
>>>>> +
>>>>> + vmf->pte = pte;
>>>>> + vmf->ptl = ptl;
>>>>> + ret = true;
>>>>> +out:
>>>>> + local_irq_enable();
>>>>> + return ret;
>>>>> +}
>>>>> +#else
>>>>> static inline bool pte_spinlock(struct vm_fault *vmf)
>>>>> {
>>>>> vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>>>>> @@ -2319,6 +2432,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>>>>> vmf->address, &vmf->ptl);
>>>>> return true;
>>>>> }
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>>
>>>>> /*
>>>>> * handle_pte_fault chooses page fault handler according to an entry which was
>>>>> @@ -3208,6 +3322,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>>>> ret = check_stable_address_space(vma->vm_mm);
>>>>> if (ret)
>>>>> goto unlock;
>>>>> + /*
>>>>> + * Don't call the userfaultfd during the speculative path.
>>>>> + * We already checked that the VMA is not managed through
>>>>> + * userfaultfd, but it may be set behind our back once we have
>>>>> + * locked the pte. In such a case we can ignore it this time.
>>>>> + */
>>>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>>> + goto setpte;
>>>>> /* Deliver the page fault to userland, check inside PT lock */
>>>>> if (userfaultfd_missing(vma)) {
>>>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>> @@ -3249,7 +3371,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>>>> goto unlock_and_release;
>>>>>
>>>>> /* Deliver the page fault to userland, check inside PT lock */
>>>>> - if (userfaultfd_missing(vma)) {
>>>>> + if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
>>>>> pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>> mem_cgroup_cancel_charge(page, memcg, false);
>>>>> put_page(page);
>>>>> @@ -3994,13 +4116,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>>
>>>>> if (unlikely(pmd_none(*vmf->pmd))) {
>>>>> /*
>>>>> + * In the case of the speculative page fault handler we abort
>>>>> + * the speculative path immediately as the pmd is probably
>>>>> + * on its way to being converted into a huge one. We will try
>>>>> + * again holding the mmap_sem (which implies that the collapse
>>>>> + * operation is done).
>>>>> + */
>>>>> + if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>>> + return VM_FAULT_RETRY;
>>>>> + /*
>>>>> * Leave __pte_alloc() until later: because vm_ops->fault may
>>>>> * want to allocate huge page, and if we expose page table
>>>>> * for an instant, it will be difficult to retract from
>>>>> * concurrent faults and from rmap lookups.
>>>>> */
>>>>> vmf->pte = NULL;
>>>>> - } else {
>>>>> + } else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
>>>>> /* See comment in pte_alloc_one_map() */
>>>>> if (pmd_devmap_trans_unstable(vmf->pmd))
>>>>> return 0;
>>>>> @@ -4009,6 +4140,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>> * pmd from under us anymore at this point because we hold the
>>>>> * mmap_sem read mode and khugepaged takes it in write mode.
>>>>> * So now it's safe to run pte_offset_map().
>>>>> + * This is not applicable to the speculative page fault handler
>>>>> + * but in that case, the pte is fetched earlier in
>>>>> + * handle_speculative_fault().
>>>>> */
>>>>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>>>>> vmf->orig_pte = *vmf->pte;
>>>>> @@ -4031,6 +4165,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
>>>>> if (!vmf->pte) {
>>>>> if (vma_is_anonymous(vmf->vma))
>>>>> return do_anonymous_page(vmf);
>>>>> + else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
>>>>> + return VM_FAULT_RETRY;
>>>>> else
>>>>> return do_fault(vmf);
>>>>> }
>>>>> @@ -4128,6 +4264,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>>> vmf.pmd = pmd_alloc(mm, vmf.pud, address);
>>>>> if (!vmf.pmd)
>>>>> return VM_FAULT_OOM;
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> + vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
>>>>> +#endif
>>>>> if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
>>>>> ret = create_huge_pmd(&vmf);
>>>>> if (!(ret & VM_FAULT_FALLBACK))
>>>>> @@ -4161,6 +4300,201 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>>>> return handle_pte_fault(&vmf);
>>>>> }
>>>>>
>>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> +/*
>>>>> + * Tries to handle the page fault in a speculative way, without grabbing the
>>>>> + * mmap_sem.
>>>>> + */
>>>>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>>>> + unsigned int flags)
>>>>> +{
>>>>> + struct vm_fault vmf = {
>>>>> + .address = address,
>>>>> + };
>>>>> + pgd_t *pgd, pgdval;
>>>>> + p4d_t *p4d, p4dval;
>>>>> + pud_t pudval;
>>>>> + int seq, ret = VM_FAULT_RETRY;
>>>>> + struct vm_area_struct *vma;
>>>>> +#ifdef CONFIG_NUMA
>>>>> + struct mempolicy *pol;
>>>>> +#endif
>>>>> +
>>>>> + /* Clear flags that may lead to release the mmap_sem to retry */
>>>>> + flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>>>>> + flags |= FAULT_FLAG_SPECULATIVE;
>>>>> +
>>>>> + vma = get_vma(mm, address);
>>>>> + if (!vma)
>>>>> + return ret;
>>>>> +
>>>>> + seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>>>>> + if (seq & 1)
>>>>> + goto out_put;
>>>>> +
>>>>> + /*
>>>>> + * Can't call vm_ops service as we don't know what they would do
>>>>> + * with the VMA.
>>>>> + * This includes huge pages from hugetlbfs.
>>>>> + */
>>>>> + if (vma->vm_ops)
>>>>> + goto out_put;
>>>>> +
>>>> Hi Laurent
>>>>
>>>> I think that most page faults will leave here. Is there any case that needs to be skipped?
>>>> I have tested the following patch, and it works well.
>>> Hi Zhong,
>>>
>>> Well, this will allow file mappings to be handled in a speculative way, but
>>> that's a bit dangerous today as there is no guarantee that the
>>> vm_ops.vm_fault() operation will be fair.
>>>
>>> In the case of an anonymous file mapping that's often not a problem, depending
>>> on the underlying file system, but there are so many cases to check that it is
>>> hard to say this can be done in a speculative way as is.
>> This patch says that SPF handles only anonymous pages, but I find that
>> do_swap_page() may also release the mmap_sem without FAULT_FLAG_RETRY_NOWAIT.
>> Why is it safe to handle that case? I think it is similar to the file page
>> case. Maybe I am missing something.
> do_swap_page() may release the mmap_sem through the call to
> __lock_page_or_retry(), but this can only happen if FAULT_FLAG_ALLOW_RETRY or
> FAULT_FLAG_KILLABLE are set and they are unset in __handle_speculative_fault().
For SPF, indeed. Thank you for the clarification.
>> I tested the patches and found that only 18% of the page faults enter the
>> speculative page fault path during process startup. As I said, most page faults
>> will be handled by ops->fault. I do not know how the data you posted was obtained.
> I do agree that handling file mapping will be required, but this will add more
> complexity to this series, since we need a way for drivers to tell that they
> are compatible with the speculative path.
As mentioned above, the speculative page fault does not pass FAULT_FLAG_ALLOW_RETRY.
In other words, a file page fault will not end up releasing the mmap_sem under SPF.
But I am still not quite clear on what drivers should do to be compatible with the
speculative path; the speculative path should not touch the mmap_sem for filemap_fault().
Thanks,
zhong jiang
> May be I should give it a try on the next send.
Ok, I will try.
> For my information, what was the performance improvement you seen when handling
> file page faulting this way ?
I am sorry about that. It is the data from Ganesh's test of the launch time on Android.
> Thanks,
> Laurent.
>
>>
>> Thanks
>> zhong jiang
>>> The huge work to do is to double-check that none of the code called by
>>> vm_ops.fault() deals with the mmap_sem, which could be handled using
>>> FAULT_FLAG_RETRY_NOWAIT. Care is also needed about the resources that code
>>> is managing, as it may assume, even implicitly, that it is under the
>>> protection of the mmap_sem in read mode.
>>>
>>> Cheers,
>>> Laurent.
>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 936128b..9bc1545 100644
>>>> @@ -3893,8 +3898,6 @@ static int handle_pte_fault(struct fault_env *fe)
>>>> if (!fe->pte) {
>>>> if (vma_is_anonymous(fe->vma))
>>>> return do_anonymous_page(fe);
>>>> - else if (fe->flags & FAULT_FLAG_SPECULATIVE)
>>>> - return VM_FAULT_RETRY;
>>>> else
>>>> return do_fault(fe);
>>>> }
>>>> @@ -4026,20 +4029,11 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>>> goto out_put;
>>>> }
>>>> /*
>>>> - * Can't call vm_ops service as we don't know what they would do
>>>> - * with the VMA.
>>>> - * This includes huge pages from hugetlbfs.
>>>> - */
>>>> - if (vma->vm_ops) {
>>>> - trace_spf_vma_notsup(_RET_IP_, vma, address);
>>>> - goto out_put;
>>>> - }
>>>>
>>>>
>>>> Thanks
>>>> zhong jiang
>>>>> + /*
>>>>> + * __anon_vma_prepare() requires the mmap_sem to be held
>>>>> + * because vm_next and vm_prev must be safe. This can't be guaranteed
>>>>> + * in the speculative path.
>>>>> + */
>>>>> + if (unlikely(!vma->anon_vma))
>>>>> + goto out_put;
>>>>> +
>>>>> + vmf.vma_flags = READ_ONCE(vma->vm_flags);
>>>>> + vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>>>>> +
>>>>> + /* Can't call userland page fault handler in the speculative path */
>>>>> + if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>>>>> + goto out_put;
>>>>> +
>>>>> + if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>>>>> + /*
>>>>> + * This could be detected by checking the address against the
>>>>> + * VMA's boundaries, but we want to trace it as not supported
>>>>> + * instead of changed.
>>>>> + */
>>>>> + goto out_put;
>>>>> +
>>>>> + if (address < READ_ONCE(vma->vm_start)
>>>>> + || READ_ONCE(vma->vm_end) <= address)
>>>>> + goto out_put;
>>>>> +
>>>>> + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>>>>> + flags & FAULT_FLAG_INSTRUCTION,
>>>>> + flags & FAULT_FLAG_REMOTE)) {
>>>>> + ret = VM_FAULT_SIGSEGV;
>>>>> + goto out_put;
>>>>> + }
>>>>> +
>>>>> + /* This one is required to check that the VMA has write access set */
>>>>> + if (flags & FAULT_FLAG_WRITE) {
>>>>> + if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>>>>> + ret = VM_FAULT_SIGSEGV;
>>>>> + goto out_put;
>>>>> + }
>>>>> + } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>>>>> + ret = VM_FAULT_SIGSEGV;
>>>>> + goto out_put;
>>>>> + }
>>>>> +
>>>>> +#ifdef CONFIG_NUMA
>>>>> + /*
>>>>> + * MPOL_INTERLEAVE implies additional checks in
>>>>> + * mpol_misplaced() which are not compatible with the
>>>>> + * speculative page fault processing.
>>>>> + */
>>>>> + pol = __get_vma_policy(vma, address);
>>>>> + if (!pol)
>>>>> + pol = get_task_policy(current);
>>>>> + if (pol && pol->mode == MPOL_INTERLEAVE)
>>>>> + goto out_put;
>>>>> +#endif
>>>>> +
>>>>> + /*
>>>>> + * Do a speculative lookup of the PTE entry.
>>>>> + */
>>>>> + local_irq_disable();
>>>>> + pgd = pgd_offset(mm, address);
>>>>> + pgdval = READ_ONCE(*pgd);
>>>>> + if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + p4d = p4d_offset(pgd, address);
>>>>> + p4dval = READ_ONCE(*p4d);
>>>>> + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + vmf.pud = pud_offset(p4d, address);
>>>>> + pudval = READ_ONCE(*vmf.pud);
>>>>> + if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + /* Huge pages at PUD level are not supported. */
>>>>> + if (unlikely(pud_trans_huge(pudval)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + vmf.pmd = pmd_offset(vmf.pud, address);
>>>>> + vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>>>>> + /*
>>>>> + * pmd_none could mean that a hugepage collapse is in progress
>>>>> + * behind our back as collapse_huge_page() marks it before
>>>>> + * invalidating the pte (which is done once the IPI is caught
>>>>> + * by all CPUs and we have interrupts disabled).
>>>>> + * For this reason we cannot handle THP in a speculative way since we
>>>>> + * can't safely identify an in-progress collapse operation done behind
>>>>> + * our back on that PMD.
>>>>> + * Regarding the order of the following checks, see comment in
>>>>> + * pmd_devmap_trans_unstable()
>>>>> + */
>>>>> + if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>>>>> + pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>>>>> + is_swap_pmd(vmf.orig_pmd)))
>>>>> + goto out_walk;
>>>>> +
>>>>> + /*
>>>>> + * The above does not allocate/instantiate page-tables because doing so
>>>>> + * would lead to the possibility of instantiating page-tables after
>>>>> + * free_pgtables() -- and consequently leaking them.
>>>>> + *
>>>>> + * The result is that we take at least one !speculative fault per PMD
>>>>> + * in order to instantiate it.
>>>>> + */
>>>>> +
>>>>> + vmf.pte = pte_offset_map(vmf.pmd, address);
>>>>> + vmf.orig_pte = READ_ONCE(*vmf.pte);
>>>>> + barrier(); /* See comment in handle_pte_fault() */
>>>>> + if (pte_none(vmf.orig_pte)) {
>>>>> + pte_unmap(vmf.pte);
>>>>> + vmf.pte = NULL;
>>>>> + }
>>>>> +
>>>>> + vmf.vma = vma;
>>>>> + vmf.pgoff = linear_page_index(vma, address);
>>>>> + vmf.gfp_mask = __get_fault_gfp_mask(vma);
>>>>> + vmf.sequence = seq;
>>>>> + vmf.flags = flags;
>>>>> +
>>>>> + local_irq_enable();
>>>>> +
>>>>> + /*
>>>>> + * We need to re-validate the VMA after checking the bounds, otherwise
>>>>> + * we might have a false positive on the bounds.
>>>>> + */
>>>>> + if (read_seqcount_retry(&vma->vm_sequence, seq))
>>>>> + goto out_put;
>>>>> +
>>>>> + mem_cgroup_oom_enable();
>>>>> + ret = handle_pte_fault(&vmf);
>>>>> + mem_cgroup_oom_disable();
>>>>> +
>>>>> + put_vma(vma);
>>>>> +
>>>>> + /*
>>>>> + * The task may have entered a memcg OOM situation but
>>>>> + * if the allocation error was handled gracefully (no
>>>>> + * VM_FAULT_OOM), there is no need to kill anything.
>>>>> + * Just clean up the OOM state peacefully.
>>>>> + */
>>>>> + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>>>>> + mem_cgroup_oom_synchronize(false);
>>>>> + return ret;
>>>>> +
>>>>> +out_walk:
>>>>> + local_irq_enable();
>>>>> +out_put:
>>>>> + put_vma(vma);
>>>>> + return ret;
>>>>> +}
>>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>>> +
>>>>> /*
>>>>> * By the time we get here, we already hold the mm semaphore
>>>>> *
>>> .
>>>
>>
>
> .
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* [PATCH v11 20/26] mm: adding speculative page fault failure trace events
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (18 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 19/26] mm: provide speculative fault infrastructure Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 21/26] perf: add a speculative page fault sw event Laurent Dufour
` (7 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
This patch adds a set of new trace events to collect the speculative page
fault failure events.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/trace/events/pagefault.h | 80 ++++++++++++++++++++++++++++++++++++++++
mm/memory.c | 57 ++++++++++++++++++++++------
2 files changed, 125 insertions(+), 12 deletions(-)
create mode 100644 include/trace/events/pagefault.h
diff --git a/include/trace/events/pagefault.h b/include/trace/events/pagefault.h
new file mode 100644
index 000000000000..d9438f3e6bad
--- /dev/null
+++ b/include/trace/events/pagefault.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM pagefault
+
+#if !defined(_TRACE_PAGEFAULT_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_PAGEFAULT_H
+
+#include <linux/tracepoint.h>
+#include <linux/mm.h>
+
+DECLARE_EVENT_CLASS(spf,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address),
+
+ TP_STRUCT__entry(
+ __field(unsigned long, caller)
+ __field(unsigned long, vm_start)
+ __field(unsigned long, vm_end)
+ __field(unsigned long, address)
+ ),
+
+ TP_fast_assign(
+ __entry->caller = caller;
+ __entry->vm_start = vma->vm_start;
+ __entry->vm_end = vma->vm_end;
+ __entry->address = address;
+ ),
+
+ TP_printk("ip:%lx vma:%lx-%lx address:%lx",
+ __entry->caller, __entry->vm_start, __entry->vm_end,
+ __entry->address)
+);
+
+DEFINE_EVENT(spf, spf_vma_changed,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_noanon,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_notsup,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_access,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_pmd_changed,
+
+ TP_PROTO(unsigned long caller,
+ struct vm_area_struct *vma, unsigned long address),
+
+ TP_ARGS(caller, vma, address)
+);
+
+#endif /* _TRACE_PAGEFAULT_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/memory.c b/mm/memory.c
index 7bbbb8c7b9cd..30433bde32f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -80,6 +80,9 @@
#include "internal.h"
+#define CREATE_TRACE_POINTS
+#include <trace/events/pagefault.h>
+
#if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
#warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
#endif
@@ -2324,8 +2327,10 @@ static bool pte_spinlock(struct vm_fault *vmf)
again:
local_irq_disable();
- if (vma_has_changed(vmf))
+ if (vma_has_changed(vmf)) {
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
@@ -2333,8 +2338,10 @@ static bool pte_spinlock(struct vm_fault *vmf)
* is not a huge collapse operation in progress in our back.
*/
pmdval = READ_ONCE(*vmf->pmd);
- if (!pmd_same(pmdval, vmf->orig_pmd))
+ if (!pmd_same(pmdval, vmf->orig_pmd)) {
+ trace_spf_pmd_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#endif
vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
@@ -2345,6 +2352,7 @@ static bool pte_spinlock(struct vm_fault *vmf)
if (vma_has_changed(vmf)) {
spin_unlock(vmf->ptl);
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
}
@@ -2378,8 +2386,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
*/
again:
local_irq_disable();
- if (vma_has_changed(vmf))
+ if (vma_has_changed(vmf)) {
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
@@ -2387,8 +2397,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
* is not a huge collapse operation in progress in our back.
*/
pmdval = READ_ONCE(*vmf->pmd);
- if (!pmd_same(pmdval, vmf->orig_pmd))
+ if (!pmd_same(pmdval, vmf->orig_pmd)) {
+ trace_spf_pmd_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
+ }
#endif
/*
@@ -2408,6 +2420,7 @@ static bool pte_map_lock(struct vm_fault *vmf)
if (vma_has_changed(vmf)) {
pte_unmap_unlock(pte, ptl);
+ trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
goto out;
}
@@ -4329,47 +4342,60 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
return ret;
seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
- if (seq & 1)
+ if (seq & 1) {
+ trace_spf_vma_changed(_RET_IP_, vma, address);
goto out_put;
+ }
/*
* Can't call vm_ops service has we don't know what they would do
* with the VMA.
* This include huge page from hugetlbfs.
*/
- if (vma->vm_ops)
+ if (vma->vm_ops) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
/*
* __anon_vma_prepare() requires the mmap_sem to be held
* because vm_next and vm_prev must be safe. This can't be guaranteed
* in the speculative path.
*/
- if (unlikely(!vma->anon_vma))
+ if (unlikely(!vma->anon_vma)) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
vmf.vma_flags = READ_ONCE(vma->vm_flags);
vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
/* Can't call userland page fault handler in the speculative path */
- if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+ if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
- if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+ if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
/*
* This could be detected by the check address against VMA's
* boundaries but we want to trace it as not supported instead
* of changed.
*/
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
if (address < READ_ONCE(vma->vm_start)
- || READ_ONCE(vma->vm_end) <= address)
+ || READ_ONCE(vma->vm_end) <= address) {
+ trace_spf_vma_changed(_RET_IP_, vma, address);
goto out_put;
+ }
if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
flags & FAULT_FLAG_INSTRUCTION,
flags & FAULT_FLAG_REMOTE)) {
+ trace_spf_vma_access(_RET_IP_, vma, address);
ret = VM_FAULT_SIGSEGV;
goto out_put;
}
@@ -4377,10 +4403,12 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
/* This is one is required to check that the VMA has write access set */
if (flags & FAULT_FLAG_WRITE) {
if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+ trace_spf_vma_access(_RET_IP_, vma, address);
ret = VM_FAULT_SIGSEGV;
goto out_put;
}
} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
+ trace_spf_vma_access(_RET_IP_, vma, address);
ret = VM_FAULT_SIGSEGV;
goto out_put;
}
@@ -4394,8 +4422,10 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
pol = __get_vma_policy(vma, address);
if (!pol)
pol = get_task_policy(current);
- if (pol && pol->mode == MPOL_INTERLEAVE)
+ if (pol && pol->mode == MPOL_INTERLEAVE) {
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
goto out_put;
+ }
#endif
/*
@@ -4468,8 +4498,10 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
* We need to re-validate the VMA after checking the bounds, otherwise
* we might have a false positive on the bounds.
*/
- if (read_seqcount_retry(&vma->vm_sequence, seq))
+ if (read_seqcount_retry(&vma->vm_sequence, seq)) {
+ trace_spf_vma_changed(_RET_IP_, vma, address);
goto out_put;
+ }
mem_cgroup_oom_enable();
ret = handle_pte_fault(&vmf);
@@ -4488,6 +4520,7 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
return ret;
out_walk:
+ trace_spf_vma_notsup(_RET_IP_, vma, address);
local_irq_enable();
out_put:
put_vma(vma);
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 21/26] perf: add a speculative page fault sw event
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (19 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 20/26] mm: adding speculative page fault failure trace events Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 22/26] perf tools: add support for the SPF perf event Laurent Dufour
` (6 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Add a new software event to count successful speculative page faults.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/uapi/linux/perf_event.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index b8e288a1f740..e2b74c055f51 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -112,6 +112,7 @@ enum perf_sw_ids {
PERF_COUNT_SW_EMULATION_FAULTS = 8,
PERF_COUNT_SW_DUMMY = 9,
PERF_COUNT_SW_BPF_OUTPUT = 10,
+ PERF_COUNT_SW_SPF = 11,
PERF_COUNT_SW_MAX, /* non-ABI */
};
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 22/26] perf tools: add support for the SPF perf event
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (20 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 21/26] perf: add a speculative page fault sw event Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 23/26] mm: add speculative page fault vmstats Laurent Dufour
` (5 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Add support for the new speculative faults event.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4 ++++
tools/perf/util/parse-events.l | 1 +
tools/perf/util/python.c | 1 +
5 files changed, 8 insertions(+)
diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index b8e288a1f740..e2b74c055f51 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -112,6 +112,7 @@ enum perf_sw_ids {
PERF_COUNT_SW_EMULATION_FAULTS = 8,
PERF_COUNT_SW_DUMMY = 9,
PERF_COUNT_SW_BPF_OUTPUT = 10,
+ PERF_COUNT_SW_SPF = 11,
PERF_COUNT_SW_MAX, /* non-ABI */
};
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 4cd2cf93f726..088ed45c68c1 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -429,6 +429,7 @@ const char *perf_evsel__sw_names[PERF_COUNT_SW_MAX] = {
"alignment-faults",
"emulation-faults",
"dummy",
+ "speculative-faults",
};
static const char *__perf_evsel__sw_name(u64 config)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 2fb0272146d8..54719f566314 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -140,6 +140,10 @@ struct event_symbol event_symbols_sw[PERF_COUNT_SW_MAX] = {
.symbol = "bpf-output",
.alias = "",
},
+ [PERF_COUNT_SW_SPF] = {
+ .symbol = "speculative-faults",
+ .alias = "spf",
+ },
};
#define __PERF_EVENT_FIELD(config, name) \
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index a1a01b1ac8b8..86584d3a3068 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -308,6 +308,7 @@ emulation-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EM
dummy { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
duration_time { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
bpf-output { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_BPF_OUTPUT); }
+speculative-faults|spf { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_SPF); }
/*
* We have to handle the kernel PMU event cycles-ct/cycles-t/mem-loads/mem-stores separately.
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index 863b61478edd..df4f7ff9bdff 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -1181,6 +1181,7 @@ static struct {
PERF_CONST(COUNT_SW_ALIGNMENT_FAULTS),
PERF_CONST(COUNT_SW_EMULATION_FAULTS),
PERF_CONST(COUNT_SW_DUMMY),
+ PERF_CONST(COUNT_SW_SPF),
PERF_CONST(SAMPLE_IP),
PERF_CONST(SAMPLE_TID),
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 23/26] mm: add speculative page fault vmstats
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (21 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 22/26] perf tools: add support for the SPF perf event Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 24/26] x86/mm: add speculative pagefault handling Laurent Dufour
` (4 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Add speculative_pgfault vmstat counter to count successful speculative page
fault handling.
Also fix a minor typo in include/linux/vm_event_item.h.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/vm_event_item.h | 3 +++
mm/memory.c | 3 +++
mm/vmstat.c | 5 ++++-
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 5c7f010676a7..a240acc09684 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -111,6 +111,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
SWAP_RA,
SWAP_RA_HIT,
#endif
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ SPECULATIVE_PGFAULT,
+#endif
NR_VM_EVENT_ITEMS
};
diff --git a/mm/memory.c b/mm/memory.c
index 30433bde32f2..48e1cf0a54ef 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4509,6 +4509,9 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
put_vma(vma);
+ if (ret != VM_FAULT_RETRY)
+ count_vm_event(SPECULATIVE_PGFAULT);
+
/*
* The task may have entered a memcg OOM situation but
* if the allocation error was handled gracefully (no
diff --git a/mm/vmstat.c b/mm/vmstat.c
index a2b9518980ce..3af74498a969 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1289,7 +1289,10 @@ const char * const vmstat_text[] = {
"swap_ra",
"swap_ra_hit",
#endif
-#endif /* CONFIG_VM_EVENTS_COUNTERS */
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ "speculative_pgfault",
+#endif
+#endif /* CONFIG_VM_EVENT_COUNTERS */
};
#endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 24/26] x86/mm: add speculative pagefault handling
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (22 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 23/26] mm: add speculative page fault vmstats Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 25/26] powerpc/mm: add speculative page fault Laurent Dufour
` (3 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
From: Peter Zijlstra <peterz@infradead.org>
Try a speculative fault before acquiring mmap_sem, if it returns with
VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
traditional fault.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
handle_speculative_fault()]
[Retry with usual fault path in the case VM_ERROR is returned by
handle_speculative_fault(). This allows signal to be delivered]
[Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Handle memory protection key fault]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/x86/mm/fault.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index fd84edf82252..11944bfc805a 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1224,7 +1224,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
struct mm_struct *mm;
int fault, major = 0;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
- u32 pkey;
+ u32 pkey, *pt_pkey = &pkey;
tsk = current;
mm = tsk->mm;
@@ -1314,6 +1314,27 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
flags |= FAULT_FLAG_INSTRUCTION;
/*
+ * Do not try to do a speculative page fault if the fault was due to
+ * protection keys since it can't be resolved.
+ */
+ if (!(error_code & X86_PF_PK)) {
+ fault = handle_speculative_fault(mm, address, flags);
+ if (fault != VM_FAULT_RETRY) {
+ perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
+ /*
+ * Do not advertise the pkey value since we don't
+ * know it.
+ * This is not a problem since we checked for X86_PF_PK
+ * earlier, so we should not handle a pkey fault here,
+ * but to be sure that mm_fault_error() callees will
+ * not try to use it, we invalidate the pointer.
+ */
+ pt_pkey = NULL;
+ goto done;
+ }
+ }
+
+ /*
* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in
* the kernel and should generate an OOPS. Unfortunately, in the
@@ -1427,8 +1448,10 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
}
up_read(&mm->mmap_sem);
+
+done:
if (unlikely(fault & VM_FAULT_ERROR)) {
- mm_fault_error(regs, error_code, address, &pkey, fault);
+ mm_fault_error(regs, error_code, address, pt_pkey, fault);
return;
}
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 25/26] powerpc/mm: add speculative page fault
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (23 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 24/26] x86/mm: add speculative pagefault handling Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-17 11:06 ` [PATCH v11 26/26] arm64/mm: " Laurent Dufour
` (2 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
This patch enables the speculative page fault on the PowerPC
architecture.
This will try a speculative page fault without holding the mmap_sem;
if it returns VM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.
The speculative path is only tried for multithreaded processes as there
is no risk of contention on the mmap_sem otherwise.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ef268d5d9db7..d7b5742ffb2b 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -465,6 +465,21 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
if (is_exec)
flags |= FAULT_FLAG_INSTRUCTION;
+ /*
+ * Try speculative page fault before grabbing the mmap_sem.
+ * The page fault is done if VM_FAULT_RETRY is not returned.
+ * But if the memory protection keys are active, we don't know if the
+ * fault is due to a key mismatch or to a classic protection check.
+ * To differentiate that, we will need the VMA, which we no longer
+ * have, so let's retry with the mmap_sem held.
+ */
+ fault = handle_speculative_fault(mm, address, flags);
+ if (fault != VM_FAULT_RETRY && (IS_ENABLED(CONFIG_PPC_MEM_KEYS) &&
+ fault != VM_FAULT_SIGSEGV)) {
+ perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
+ goto done;
+ }
+
/* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in the
* kernel and should generate an OOPS. Unfortunately, in the case of an
@@ -565,6 +580,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
up_read(¤t->mm->mmap_sem);
+done:
if (unlikely(fault & VM_FAULT_ERROR))
return mm_fault_error(regs, address, fault);
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* [PATCH v11 26/26] arm64/mm: add speculative page fault
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
` (24 preceding siblings ...)
2018-05-17 11:06 ` [PATCH v11 25/26] powerpc/mm: add speculative page fault Laurent Dufour
@ 2018-05-17 11:06 ` Laurent Dufour
2018-05-28 5:23 ` Song, HaiyanX
2018-11-05 10:42 ` Balbir Singh
27 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-17 11:06 UTC (permalink / raw)
To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
From: Mahendran Ganesh <opensource.ganesh@gmail.com>
This patch enables the speculative page fault on the arm64
architecture.
I completed the SPF porting on 4.9. From the test results,
we can see that app launching time improved by about 10% on average.
For apps which have more than 50 threads, an improvement of 15% or
even more can be seen.
Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
[handle_speculative_fault() is no more returning the vma pointer]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/arm64/mm/fault.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 91c53a7d2575..fb9f840367f9 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -411,6 +411,16 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
/*
+ * let's try a speculative page fault without grabbing the
+ * mmap_sem.
+ */
+ fault = handle_speculative_fault(mm, addr, mm_flags);
+ if (fault != VM_FAULT_RETRY) {
+ perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, addr);
+ goto done;
+ }
+
+ /*
* As per x86, we may deadlock here. However, since the kernel only
* validly references user space from well defined areas of the code,
* we can bug out early if this is from code which shouldn't.
@@ -460,6 +470,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
}
up_read(&mm->mmap_sem);
+done:
+
/*
* Handle the "normal" (no error) case first.
*/
--
2.7.4
^ permalink raw reply related [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
@ 2018-05-28 5:23 ` Song, HaiyanX
2018-05-17 11:06 ` [PATCH v11 02/26] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
` (26 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-05-28 5:23 UTC (permalink / raw)
To: Laurent Dufour, akpm, mhocko, peterz, kirill, ak, dave, jack,
Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, Wang, Kemi, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, Punit Agrawal,
vinayak menon, Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series,
tested on an Intel 4-socket Skylake platform.
The regression results are sorted by the metric will-it-scale.per_thread_ops.
Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
Commit id:
base commit: d55f34411b1b126429a823d06c3124c16283231f
head commit: 0355322b3577eeab7669066df42c550a56801110
Benchmark suite: will-it-scale
Download link:
https://github.com/antonblanchard/will-it-scale/tree/master/tests
Metrics:
will-it-scale.per_process_ops=processes/nr_cpu
will-it-scale.per_thread_ops=threads/nr_cpu
test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
THP: enable / disable
nr_task: 100%
1. Regressions:
a) THP enabled:
testcase base change head metric
page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
b) THP disabled:
testcase base change head metric
page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
2. Improvements:
a) THP enabled:
testcase base change head metric
malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
b) THP disabled:
testcase base change head metric
malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
Note: for the values in the "change" column above, a higher value means that the related testcase result
on the head commit is better than that on the base commit for this benchmark.
Best regards
Haiyan Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Thursday, May 17, 2018 7:06 PM
To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: [PATCH v11 00/26] Speculative page faults
This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
page fault without holding the mm semaphore [1].
The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes since the page fault handler will not wait for other threads'
memory layout changes to be done, assuming that such a change is done in
another part of the process's memory space. This type of page fault is
named a speculative page fault. If the speculative page fault fails
because concurrency is detected or because the underlying PMD or PTE
tables are not yet allocated, its processing is aborted and a classic
page fault is tried instead.
The speculative page fault (SPF) has to look for the VMA matching the fault
address without holding the mmap_sem; this is done by introducing a rwlock
which protects access to the mm_rb tree. Previously this was done using
SRCU, but it introduced a lot of scheduling to process the VMA freeing
operations, which hurt performance by 20% as reported by Kemi Wang [2].
Using a rwlock to protect access to the mm_rb tree limits the locking
contention to these operations, which are expected to be O(log n). In
addition, to ensure that the VMA is not freed behind our back, a reference
count is added and two services (get_vma() and put_vma()) are introduced
to handle it. Once a VMA is fetched from the RB tree using get_vma(), it
must later be released using put_vma(). With this scheme I can no longer
see the overhead previously measured with the will-it-scale benchmark.
The VMA's attributes checked during the speculative page fault processing
have to be protected against parallel changes. This is done by using a per
VMA sequence lock. This sequence lock allows the speculative page fault
handler to fast check for parallel changes in progress and to abort the
speculative page fault in that case.
Once the VMA has been found, the speculative page fault handler checks the
VMA's attributes to verify whether the page fault can be handled. For this,
the VMA is protected through a sequence lock which allows fast detection
of concurrent VMA changes. If such a change is detected, the speculative
page fault is aborted and a *classic* page fault is tried. VMA sequence
locking is added where the VMA attributes which are checked during the
page fault are modified.
When the PTE is fetched, the VMA is checked to see if it has been changed,
so once the page table is locked the VMA is known to be valid. Any other
change touching this PTE will then need to take the page table lock, so no
parallel change is possible at this time.
The locking of the PTE is done with interrupts disabled; this allows
checking the PMD to ensure that there is no ongoing collapse operation.
Since khugepaged first sets the PMD to pmd_none and then waits for the
other CPUs to acknowledge the IPI, if the PMD is valid at the time the
PTE is locked, we have the guarantee that the collapse operation will
have to wait on the PTE lock to move forward. This allows the SPF handler
to map the PTE safely. If the PMD value differs from the one recorded at
the beginning of the SPF operation, the classic page fault handler is
called to handle the fault while holding the mmap_sem. As the PTE lock is
taken with interrupts disabled, it is acquired using spin_trylock() to
avoid deadlock when handling a page fault while a TLB invalidation is
requested by another CPU holding the PTE lock.
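The try-lock fallback can be illustrated with a trivial test-and-set lock. This is only a sketch of the idea, not the kernel's spinlock; the names spf_trylock_pte()/spf_unlock_pte() are invented for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

/* Stand-in for the PTE spinlock. */
static atomic_flag ptl = ATOMIC_FLAG_INIT;

/* Speculative path: interrupts are conceptually disabled here, so we must
 * never spin-wait (the current lock holder may be waiting for our IPI
 * acknowledgement); try once and fall back on contention. */
static int spf_trylock_pte(void)
{
    if (atomic_flag_test_and_set(&ptl))
        return -EBUSY;  /* contended: abort and retry under mmap_sem */
    return 0;
}

static void spf_unlock_pte(void)
{
    atomic_flag_clear(&ptl);
}
```

On -EBUSY the speculative handler gives up immediately instead of spinning, which is what breaks the potential deadlock with a CPU that holds the PTE lock while requesting a TLB invalidation.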
In pseudo code, this could be seen as:

     speculative_page_fault()
     {
             vma = get_vma()
             check vma sequence count
             check vma's support
             disable interrupt
                     check pgd,p4d,...,pte
                     save pmd and pte in vmf
                     save vma sequence counter in vmf
             enable interrupt
             check vma sequence count
             handle_pte_fault(vma)
                     ..
                     page = alloc_page()
                     pte_map_lock()
                             disable interrupt
                                     abort if sequence counter has changed
                                     abort if pmd or pte has changed
                                     pte map and lock
                             enable interrupt
                     if abort
                             free page
                             abort
                     ...
     }

     arch_fault_handler()
     {
             if (speculative_page_fault(&vma))
                     goto done
     again:
             lock(mmap_sem)
             vma = find_vma();
             handle_pte_fault(vma);
             if retry
                     unlock(mmap_sem)
                     goto again;
     done:
             handle fault error
     }
Support for THP is not done because when checking the PMD, we can be
confused by an in-progress collapse operation done by khugepaged. The
issue is that pmd_none() could be true either if the PMD is not yet
populated or if the underlying PTEs are in the process of being
collapsed, so we cannot safely allocate a PMD if pmd_none() is true.
This series adds a new software performance event named
'speculative-faults' or 'spf'. It counts the number of page faults
successfully handled speculatively. When recording 'faults,spf' events,
'faults' counts the total number of page fault events while 'spf' counts
only those processed speculatively.
There are some trace events introduced by this series. They allow
identifying why a page fault was not processed speculatively. They do not
account for the faults generated by a single-threaded process, which are
directly processed while holding the mmap_sem. These trace events are
grouped in a system named 'pagefault':
- pagefault:spf_vma_changed : the VMA has been changed behind our back
- pagefault:spf_vma_noanon  : the vma->anon_vma field was not yet set
- pagefault:spf_vma_notsup  : the VMA's type is not supported
- pagefault:spf_vma_access  : the VMA's access rights are not respected
- pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
  our back
To record all the related events, the easiest way is to run perf with the
following arguments:
$ perf stat -e 'faults,spf,pagefault:*' <command>
There is also a dedicated vmstat counter showing the number of page
faults successfully handled speculatively. It can be seen this way:
$ grep speculative_pgfault /proc/vmstat
This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
on x86, PowerPC and arm64.
---------------------
Real Workload results
As mentioned in a previous email, we did unofficial runs using a "popular
in-memory multithreaded database product" on a 176-core SMT8 Power system
which showed a 30% improvement in the number of transactions processed
per second. This run was done on the v6 series, but the changes
introduced in this new version should not impact the performance boost
seen.
Here are the perf data captured during 2 of these runs on top of the v8
series:
vanilla spf
faults 89.418 101.364 +13%
spf n/a 97.989
With the SPF kernel, most of the page faults were processed
speculatively.
Ganesh Mahendran backported the series on top of a 4.9 kernel and gave it
a try on an Android device. He reported that the application launch time
was improved on average by 6%, and for large applications (~100 threads)
by 20%.
Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
MSM845 (8 cores) with 6GB of RAM (lower is better):
Application 4.9 4.9+spf delta
com.tencent.mm 416 389 -7%
com.eg.android.AlipayGphone 1135 986 -13%
com.tencent.mtt 455 454 0%
com.qqgame.hlddz 1497 1409 -6%
com.autonavi.minimap 711 701 -1%
com.tencent.tmgp.sgame 788 748 -5%
com.immomo.momo 501 487 -3%
com.tencent.peng 2145 2112 -2%
com.smile.gifmaker 491 461 -6%
com.baidu.BaiduMap 479 366 -23%
com.taobao.taobao 1341 1198 -11%
com.baidu.searchbox 333 314 -6%
com.tencent.mobileqq 394 384 -3%
com.sina.weibo 907 906 0%
com.youku.phone 816 731 -11%
com.happyelements.AndroidAnimal.qq 763 717 -6%
com.UCMobile 415 411 -1%
com.tencent.tmgp.ak 1464 1431 -2%
com.tencent.qqmusic 336 329 -2%
com.sankuai.meituan 1661 1302 -22%
com.netease.cloudmusic 1193 1200 1%
air.tv.douyu.android 4257 4152 -2%
------------------
Benchmarks results
Base kernel is v4.17.0-rc4-mm1
SPF is BASE + this series
Kernbench:
----------
Here are the results on a 16-CPU x86 guest using kernbench to build a
4.15 kernel (the kernel is built 5 times):
Average Half load -j 8
Run (std deviation)
BASE SPF
Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
Average Optimal load -j 16
Run (std deviation)
BASE SPF
Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
User Time 11064.8 (981.142) 11085 (990.897) 0.18%
System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
During a run on the SPF, perf events were captured:
Performance counter stats for '../kernbench -M':
526743764 faults
210 spf
3 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
2278 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
Very few speculative page faults were recorded, as most of the processes
involved are single-threaded (it seems that on this architecture some
threads were created during the kernel build process).
Here are the kernbench results on an 80-CPU Power8 system:
Average Half load -j 40
Run (std deviation)
BASE SPF
Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
Average Optimal load -j 80
Run (std deviation)
BASE SPF
Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
Context Switches 223861 (138865) 225032 (139632) 0.52%
Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
During a run on the SPF, perf events were captured:
Performance counter stats for '../kernbench -M':
116730856 faults
0 spf
3 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
476 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
Most of the processes involved are single-threaded, so SPF is not
activated, but there is no impact on the performance.
Ebizzy:
-------
The test counts the number of records per second it can manage; higher is
better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
results I repeated the test 100 times and measured the average.
BASE SPF delta
16 CPUs x86 VM 742.57 1490.24 100.69%
80 CPUs P8 node 13105.4 24174.23 84.46%
Here are the performance counters read during a run on a 16-CPU x86 VM:
Performance counter stats for './ebizzy -mTt 16':
1706379 faults
1674599 spf
30588 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
363 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
And the ones captured during a run on an 80-CPU Power node:
Performance counter stats for './ebizzy -mTt 80':
1874773 faults
1461153 spf
413293 pagefault:spf_vma_changed
0 pagefault:spf_vma_noanon
200 pagefault:spf_vma_notsup
0 pagefault:spf_vma_access
0 pagefault:spf_pmd_changed
In ebizzy's case most of the page faults were handled speculatively,
which explains the ebizzy performance boost.
------------------
Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
- Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
  and Minchan Kim, hopefully.
- Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
  __do_page_fault().
- Loop in pte_spinlock() and pte_map_lock() when the PTE try-lock fails,
  instead of aborting the speculative page fault handling, and drop the
  now useless trace event pagefault:spf_pte_lock.
- No longer try to reuse the fetched VMA when the speculative page fault
  handling needs to be retried. This added a lot of complexity and
  additional tests didn't show a significant performance improvement.
- Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
[1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
[2] https://patchwork.kernel.org/patch/9999687/
Laurent Dufour (20):
mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
mm: make pte_unmap_same compatible with SPF
mm: introduce INIT_VMA()
mm: protect VMA modifications using VMA sequence count
mm: protect mremap() against SPF hanlder
mm: protect SPF handler against anon_vma changes
mm: cache some VMA fields in the vm_fault structure
mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
mm: introduce __lru_cache_add_active_or_unevictable
mm: introduce __vm_normal_page()
mm: introduce __page_add_new_anon_rmap()
mm: protect mm_rb tree with a rwlock
mm: adding speculative page fault failure trace events
perf: add a speculative page fault sw event
perf tools: add support for the SPF perf event
mm: add speculative page fault vmstats
powerpc/mm: add speculative page fault
Mahendran Ganesh (2):
arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
arm64/mm: add speculative page fault
Peter Zijlstra (4):
mm: prepare for FAULT_FLAG_SPECULATIVE
mm: VMA sequence count
mm: provide speculative fault infrastructure
x86/mm: add speculative pagefault handling
arch/arm64/Kconfig | 1 +
arch/arm64/mm/fault.c | 12 +
arch/powerpc/Kconfig | 1 +
arch/powerpc/mm/fault.c | 16 +
arch/x86/Kconfig | 1 +
arch/x86/mm/fault.c | 27 +-
fs/exec.c | 2 +-
fs/proc/task_mmu.c | 5 +-
fs/userfaultfd.c | 17 +-
include/linux/hugetlb_inline.h | 2 +-
include/linux/migrate.h | 4 +-
include/linux/mm.h | 136 +++++++-
include/linux/mm_types.h | 7 +
include/linux/pagemap.h | 4 +-
include/linux/rmap.h | 12 +-
include/linux/swap.h | 10 +-
include/linux/vm_event_item.h | 3 +
include/trace/events/pagefault.h | 80 +++++
include/uapi/linux/perf_event.h | 1 +
kernel/fork.c | 5 +-
mm/Kconfig | 22 ++
mm/huge_memory.c | 6 +-
mm/hugetlb.c | 2 +
mm/init-mm.c | 3 +
mm/internal.h | 20 ++
mm/khugepaged.c | 5 +
mm/madvise.c | 6 +-
mm/memory.c | 612 +++++++++++++++++++++++++++++-----
mm/mempolicy.c | 51 ++-
mm/migrate.c | 6 +-
mm/mlock.c | 13 +-
mm/mmap.c | 229 ++++++++++---
mm/mprotect.c | 4 +-
mm/mremap.c | 13 +
mm/nommu.c | 2 +-
mm/rmap.c | 5 +-
mm/swap.c | 6 +-
mm/swap_state.c | 8 +-
mm/vmstat.c | 5 +-
tools/include/uapi/linux/perf_event.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/parse-events.c | 4 +
tools/perf/util/parse-events.l | 1 +
tools/perf/util/python.c | 1 +
44 files changed, 1161 insertions(+), 211 deletions(-)
create mode 100644 include/trace/events/pagefault.h
--
2.7.4
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
@ 2018-05-28 5:23 ` Song, HaiyanX
0 siblings, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-05-28 5:23 UTC (permalink / raw)
To: Laurent Dufour, akpm, mhocko, peterz, kirill, ak, dave, jack,
Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, Wang, Kemi, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, Punit Agrawal,
vinayak menon, Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
=0A=
Some regression and improvements is found by LKP-tools(linux kernel perform=
ance) on V9 patch series=0A=
tested on Intel 4s Skylake platform.=0A=
=0A=
The regression result is sorted by the metric will-it-scale.per_thread_ops.=
=0A=
Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch se=
ries)=0A=
Commit id:=0A=
base commit: d55f34411b1b126429a823d06c3124c16283231f=0A=
head commit: 0355322b3577eeab7669066df42c550a56801110=0A=
Benchmark suite: will-it-scale=0A=
Download link:=0A=
https://github.com/antonblanchard/will-it-scale/tree/master/tests=0A=
Metrics:=0A=
will-it-scale.per_process_ops=3Dprocesses/nr_cpu=0A=
will-it-scale.per_thread_ops=3Dthreads/nr_cpu=0A=
test box: lkp-skl-4sp1(nr_cpu=3D192,memory=3D768G)=0A=
THP: enable / disable=0A=
nr_task: 100%=0A=
=0A=
1. Regressions:=0A=
a) THP enabled:=0A=
testcase base change head =
metric=0A=
page_fault3/ enable THP 10092 -17.5% 8323 =
will-it-scale.per_thread_ops=0A=
page_fault2/ enable THP 8300 -17.2% 6869 =
will-it-scale.per_thread_ops=0A=
brk1/ enable THP 957.67 -7.6% 885 =
will-it-scale.per_thread_ops=0A=
page_fault3/ enable THP 172821 -5.3% 163692 =
will-it-scale.per_process_ops=0A=
signal1/ enable THP 9125 -3.2% 8834 =
will-it-scale.per_process_ops=0A=
=0A=
b) THP disabled:=0A=
testcase base change head =
metric=0A=
page_fault3/ disable THP 10107 -19.1% 8180 =
will-it-scale.per_thread_ops=0A=
page_fault2/ disable THP 8432 -17.8% 6931 =
will-it-scale.per_thread_ops=0A=
context_switch1/ disable THP 215389 -6.8% 200776 =
will-it-scale.per_thread_ops=0A=
brk1/ disable THP 939.67 -6.6% 877.33 =
will-it-scale.per_thread_ops=0A=
page_fault3/ disable THP 173145 -4.7% 165064 =
will-it-scale.per_process_ops=0A=
signal1/ disable THP 9162 -3.9% 8802 =
will-it-scale.per_process_ops=0A=
=0A=
2. Improvements:=0A=
a) THP enabled:=0A=
testcase base change head =
metric=0A=
malloc1/ enable THP 66.33 +469.8% 383.67 =
will-it-scale.per_thread_ops=0A=
writeseek3/ enable THP 2531 +4.5% 2646 =
will-it-scale.per_thread_ops=0A=
signal1/ enable THP 989.33 +2.8% 1016 =
will-it-scale.per_thread_ops=0A=
=0A=
b) THP disabled:=0A=
testcase base change head =
metric=0A=
malloc1/ disable THP 90.33 +417.3% 467.33 =
will-it-scale.per_thread_ops=0A=
read2/ disable THP 58934 +39.2% 82060 =
will-it-scale.per_thread_ops=0A=
page_fault1/ disable THP 8607 +36.4% 11736 =
will-it-scale.per_thread_ops=0A=
read1/ disable THP 314063 +12.7% 353934 =
will-it-scale.per_thread_ops=0A=
writeseek3/ disable THP 2452 +12.5% 2759 =
will-it-scale.per_thread_ops=0A=
signal1/ disable THP 971.33 +5.5% 1024 =
will-it-scale.per_thread_ops=0A=
=0A=
Notes: for above values in column "change", the higher value means that the=
related testcase result=0A=
on head commit is better than that on base commit for this benchmark.=0A=
=0A=
=0A=
Best regards=0A=
Haiyan Song=0A=
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
2018-05-28 5:23 ` Song, HaiyanX
@ 2018-05-28 7:51 ` Laurent Dufour
2018-05-28 8:22 ` Haiyan Song
-1 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-05-28 7:51 UTC (permalink / raw)
To: Song, HaiyanX, akpm, mhocko, peterz, kirill, ak, dave, jack,
Matthew Wilcox, khandual, aneesh.kumar, benh, mpe, paulus,
Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
Sergey Senozhatsky, sergey.senozhatsky.work, Andrea Arcangeli,
Alexei Starovoitov, Wang, Kemi, Daniel Jordan, David Rientjes,
Jerome Glisse, Ganesh Mahendran, Minchan Kim, Punit Agrawal,
vinayak menon, Yang Shi
Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
On 28/05/2018 07:23, Song, HaiyanX wrote:
>
> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series
> tested on an Intel 4-socket Skylake platform.
Hi,
Thanks for reporting these benchmark results, but you mentioned the "V9 patch
series" while responding to the v11 series header...
Were these tests done on v9 or v11?
Cheers,
Laurent.
>
> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
> Commit id:
> base commit: d55f34411b1b126429a823d06c3124c16283231f
> head commit: 0355322b3577eeab7669066df42c550a56801110
> Benchmark suite: will-it-scale
> Download link:
> https://github.com/antonblanchard/will-it-scale/tree/master/tests
> Metrics:
> will-it-scale.per_process_ops=processes/nr_cpu
> will-it-scale.per_thread_ops=threads/nr_cpu
> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> THP: enable / disable
> nr_task: 100%
>
> 1. Regressions:
> a) THP enabled:
> testcase base change head metric
> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>
> b) THP disabled:
> testcase base change head metric
> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>
> 2. Improvements:
> a) THP enabled:
> testcase base change head metric
> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>
> b) THP disabled:
> testcase base change head metric
> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>
> Note: for the values in the "change" column above, a higher value means that the related testcase result
> on the head commit is better than on the base commit for this benchmark.
>
>
> Best regards
> Haiyan Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Thursday, May 17, 2018 7:06 PM
> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: [PATCH v11 00/26] Speculative page faults
>
> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
> page fault without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> processes since the page fault handler will not wait for other threads'
> memory layout changes to be done, assuming that the change is made in
> another part of the process's memory space. This type of page fault is
> named a speculative page fault. If the speculative page fault fails because
> a concurrent change is detected or because the underlying PMD or PTE tables
> are not yet allocated, its processing is aborted and a classic page fault
> is tried instead.
>
> The speculative page fault (SPF) handler has to look up the VMA matching the
> fault address without holding the mmap_sem; this is done by introducing a
> rwlock which protects access to the mm_rb tree. Previously this was done
> using SRCU, but that introduced a lot of scheduling work to process the
> VMAs' freeing operations, which hit performance by 20% as reported by Kemi
> Wang [2]. Using a rwlock to protect access to the mm_rb tree limits the
> locking contention to these operations, which are expected to be O(log n).
> In addition, to ensure that the VMA is not freed behind our back, a
> reference count is added, and two services (get_vma() and put_vma()) are
> introduced to handle it. Once a VMA is fetched from the RB tree using
> get_vma(), it must later be released using put_vma(). With this change I no
> longer see the overhead I previously observed with the will-it-scale
> benchmark.
>
> The VMA's attributes checked during the speculative page fault processing
> have to be protected against parallel changes. This is done by using a per
> VMA sequence lock. This sequence lock allows the speculative page fault
> handler to fast check for parallel changes in progress and to abort the
> speculative page fault in that case.
>
> Once the VMA has been found, the speculative page fault handler checks the
> VMA's attributes to verify whether the page fault can be handled this way.
> The VMA is protected through a sequence lock which allows fast detection of
> concurrent VMA changes. If such a change is detected, the speculative page
> fault is aborted and a *classic* page fault is tried instead. VMA sequence
> locking is added wherever VMA attributes which are checked during the page
> fault are modified.
>
> When the PTE is fetched, the VMA is checked again for changes, so once the
> page table is locked the VMA is known to be valid; any other change that
> would touch this PTE needs to take the page table lock, so no parallel
> change is possible at this point.
>
> The locking of the PTE is done with interrupts disabled; this allows
> checking the PMD to ensure that there is no ongoing collapse operation.
> Since khugepaged first sets the PMD to pmd_none and then waits for the other
> CPUs to have caught the IPI, if the PMD is valid at the time the PTE is
> locked, we have the guarantee that the collapse operation will have to wait
> on the PTE lock to move forward. This allows the SPF handler to map the PTE
> safely. If the PMD value differs from the one recorded at the beginning of
> the SPF operation, the classic page fault handler is called to handle the
> fault while holding the mmap_sem. As the PTE is locked with interrupts
> disabled, the lock is taken with spin_trylock() to avoid deadlock when
> handling a page fault while a TLB invalidation is requested by another CPU
> holding the PTE lock.
>
> In pseudo code, this could be seen as:
> speculative_page_fault()
> {
> vma = get_vma()
> check vma sequence count
> check vma's support
> disable interrupt
> check pgd,p4d,...,pte
> save pmd and pte in vmf
> save vma sequence counter in vmf
> enable interrupt
> check vma sequence count
> handle_pte_fault(vma)
> ..
> page = alloc_page()
> pte_map_lock()
> disable interrupt
> abort if sequence counter has changed
> abort if pmd or pte has changed
> pte map and lock
> enable interrupt
> if abort
> free page
> abort
> ...
> }
>
> arch_fault_handler()
> {
> if (speculative_page_fault(&vma))
> goto done
> again:
> lock(mmap_sem)
> vma = find_vma();
> handle_pte_fault(vma);
> if retry
> unlock(mmap_sem)
> goto again;
> done:
> handle fault error
> }
>
> Support for THP is not done because, when checking the PMD, we could be
> confused by an in-progress collapse operation run by khugepaged. The issue
> is that pmd_none() could be true either if the PMD is not yet populated or
> if the underlying PTEs are in the process of being collapsed, so we cannot
> safely allocate a PMD when pmd_none() is true.
>
> This series adds a new software performance event named 'speculative-faults'
> or 'spf'. It counts the number of page fault events handled speculatively.
> When recording 'faults,spf' events, 'faults' counts the total number of page
> fault events while 'spf' counts only the faults processed speculatively.
>
> Some trace events are introduced by this series. They allow identifying why
> page faults were not processed speculatively. They don't take into account
> the faults generated by a monothreaded process, which are directly processed
> while holding the mmap_sem. These trace events are grouped in a system named
> 'pagefault'; they are:
> - pagefault:spf_vma_changed : the VMA has been changed behind our back
> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
> - pagefault:spf_vma_notsup : the VMA's type is not supported
> - pagefault:spf_vma_access : the VMA's access rights are not respected
> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind our
> back
>
> To record all the related events, the easiest way is to run perf with the
> following arguments:
> $ perf stat -e 'faults,spf,pagefault:*' <command>
>
> There is also a dedicated vmstat counter showing the number of successful
> page faults handled speculatively. It can be seen this way:
> $ grep speculative_pgfault /proc/vmstat
>
> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
> on x86, PowerPC and arm64.
>
> ---------------------
> Real Workload results
>
> As mentioned in a previous email, we did unofficial runs using a "popular
> in-memory multithreaded database product" on a 176-core SMT8 Power system
> which showed a 30% improvement in the number of transactions processed per
> second. This run was done on the v6 series, but the changes introduced in
> this new version should not impact the performance boost seen.
>
> Here are the perf data captured during 2 of these runs on top of the v8
> series:
> vanilla spf
> faults 89.418 101.364 +13%
> spf n/a 97.989
>
> With the SPF kernel, most of the page faults were processed in a speculative
> way.
>
> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave it a
> try on an Android device. He reported that the application launch time was
> improved on average by 6%, and for large applications (~100 threads) by
> 20%.
>
> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
> MSM845 (8 cores) with 6GB of memory (lower is better):
>
> Application 4.9 4.9+spf delta
> com.tencent.mm 416 389 -7%
> com.eg.android.AlipayGphone 1135 986 -13%
> com.tencent.mtt 455 454 0%
> com.qqgame.hlddz 1497 1409 -6%
> com.autonavi.minimap 711 701 -1%
> com.tencent.tmgp.sgame 788 748 -5%
> com.immomo.momo 501 487 -3%
> com.tencent.peng 2145 2112 -2%
> com.smile.gifmaker 491 461 -6%
> com.baidu.BaiduMap 479 366 -23%
> com.taobao.taobao 1341 1198 -11%
> com.baidu.searchbox 333 314 -6%
> com.tencent.mobileqq 394 384 -3%
> com.sina.weibo 907 906 0%
> com.youku.phone 816 731 -11%
> com.happyelements.AndroidAnimal.qq 763 717 -6%
> com.UCMobile 415 411 -1%
> com.tencent.tmgp.ak 1464 1431 -2%
> com.tencent.qqmusic 336 329 -2%
> com.sankuai.meituan 1661 1302 -22%
> com.netease.cloudmusic 1193 1200 1%
> air.tv.douyu.android 4257 4152 -2%
>
> ------------------
> Benchmarks results
>
> Base kernel is v4.17.0-rc4-mm1
> SPF is BASE + this series
>
> Kernbench:
> ----------
> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
> kernel (the kernel is built 5 times):
>
> Average Half load -j 8
> Run (std deviation)
> BASE SPF
> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>
> Average Optimal load -j 16
> Run (std deviation)
> BASE SPF
> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>
>
> During a run on the SPF kernel, perf events were captured:
> Performance counter stats for '../kernbench -M':
> 526743764 faults
> 210 spf
> 3 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 2278 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> Very few speculative page faults were recorded as most of the processes
> involved are monothreaded (it seems that on this architecture some threads
> were created during the kernel build process).
>
> Here are the kernbench results on an 80-CPU Power8 system:
>
> Average Half load -j 40
> Run (std deviation)
> BASE SPF
> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>
> Average Optimal load -j 80
> Run (std deviation)
> BASE SPF
> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
> Context Switches 223861 (138865) 225032 (139632) 0.52%
> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>
> During a run on the SPF kernel, perf events were captured:
> Performance counter stats for '../kernbench -M':
> 116730856 faults
> 0 spf
> 3 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 476 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> Most of the processes involved are monothreaded so SPF is not activated, but
> there is no impact on the performance.
>
> Ebizzy:
> -------
> The test counts the number of records per second it can manage; higher is
> better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
> results I repeated the test 100 times and measured the average. The number
> reported is records processed per second.
>
> BASE SPF delta
> 16 CPUs x86 VM 742.57 1490.24 100.69%
> 80 CPUs P8 node 13105.4 24174.23 84.46%
>
> Here are the performance counters read during a run on a 16-CPU x86 VM:
> Performance counter stats for './ebizzy -mTt 16':
> 1706379 faults
> 1674599 spf
> 30588 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 363 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> And the ones captured during a run on a 80 CPUs Power node:
> Performance counter stats for './ebizzy -mTt 80':
> 1874773 faults
> 1461153 spf
> 413293 pagefault:spf_vma_changed
> 0 pagefault:spf_vma_noanon
> 200 pagefault:spf_vma_notsup
> 0 pagefault:spf_vma_access
> 0 pagefault:spf_pmd_changed
>
> In ebizzy's case most of the page faults were handled in a speculative way,
> leading to the ebizzy performance boost.
>
> ------------------
> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
> - Took into account all review feedback from Punit Agrawal, Ganesh Mahendran
> and Minchan Kim, hopefully.
> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
> __do_page_fault().
> - Loop in pte_spinlock() and pte_map_lock() when the pte trylock fails,
> instead of aborting the speculative page fault handling. Drop the now
> useless trace event pagefault:spf_pte_lock.
> - No longer try to reuse the fetched VMA during the speculative page fault
> handling when retrying is needed. This added a lot of complexity and
> additional tests didn't show a significant performance improvement.
> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>
> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
> [2] https://patchwork.kernel.org/patch/9999687/
>
>
> Laurent Dufour (20):
> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
> mm: make pte_unmap_same compatible with SPF
> mm: introduce INIT_VMA()
> mm: protect VMA modifications using VMA sequence count
> mm: protect mremap() against SPF hanlder
> mm: protect SPF handler against anon_vma changes
> mm: cache some VMA fields in the vm_fault structure
> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
> mm: introduce __lru_cache_add_active_or_unevictable
> mm: introduce __vm_normal_page()
> mm: introduce __page_add_new_anon_rmap()
> mm: protect mm_rb tree with a rwlock
> mm: adding speculative page fault failure trace events
> perf: add a speculative page fault sw event
> perf tools: add support for the SPF perf event
> mm: add speculative page fault vmstats
> powerpc/mm: add speculative page fault
>
> Mahendran Ganesh (2):
> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> arm64/mm: add speculative page fault
>
> Peter Zijlstra (4):
> mm: prepare for FAULT_FLAG_SPECULATIVE
> mm: VMA sequence count
> mm: provide speculative fault infrastructure
> x86/mm: add speculative pagefault handling
>
> arch/arm64/Kconfig | 1 +
> arch/arm64/mm/fault.c | 12 +
> arch/powerpc/Kconfig | 1 +
> arch/powerpc/mm/fault.c | 16 +
> arch/x86/Kconfig | 1 +
> arch/x86/mm/fault.c | 27 +-
> fs/exec.c | 2 +-
> fs/proc/task_mmu.c | 5 +-
> fs/userfaultfd.c | 17 +-
> include/linux/hugetlb_inline.h | 2 +-
> include/linux/migrate.h | 4 +-
> include/linux/mm.h | 136 +++++++-
> include/linux/mm_types.h | 7 +
> include/linux/pagemap.h | 4 +-
> include/linux/rmap.h | 12 +-
> include/linux/swap.h | 10 +-
> include/linux/vm_event_item.h | 3 +
> include/trace/events/pagefault.h | 80 +++++
> include/uapi/linux/perf_event.h | 1 +
> kernel/fork.c | 5 +-
> mm/Kconfig | 22 ++
> mm/huge_memory.c | 6 +-
> mm/hugetlb.c | 2 +
> mm/init-mm.c | 3 +
> mm/internal.h | 20 ++
> mm/khugepaged.c | 5 +
> mm/madvise.c | 6 +-
> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
> mm/mempolicy.c | 51 ++-
> mm/migrate.c | 6 +-
> mm/mlock.c | 13 +-
> mm/mmap.c | 229 ++++++++++---
> mm/mprotect.c | 4 +-
> mm/mremap.c | 13 +
> mm/nommu.c | 2 +-
> mm/rmap.c | 5 +-
> mm/swap.c | 6 +-
> mm/swap_state.c | 8 +-
> mm/vmstat.c | 5 +-
> tools/include/uapi/linux/perf_event.h | 1 +
> tools/perf/util/evsel.c | 1 +
> tools/perf/util/parse-events.c | 4 +
> tools/perf/util/parse-events.l | 1 +
> tools/perf/util/python.c | 1 +
> 44 files changed, 1161 insertions(+), 211 deletions(-)
> create mode 100644 include/trace/events/pagefault.h
>
> --
> 2.7.4
>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
2018-05-28 7:51 ` Laurent Dufour
@ 2018-05-28 8:22 ` Haiyan Song
2018-05-28 8:54 ` Laurent Dufour
0 siblings, 1 reply; 106+ messages in thread
From: Haiyan Song @ 2018-05-28 8:22 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Laurent,
Yes, these tests are done on V9 patch.
Best regards,
Haiyan Song
On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
> On 28/05/2018 07:23, Song, HaiyanX wrote:
> >
> > Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series
> > tested on an Intel 4-socket Skylake platform.
>
> Hi,
>
> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
> series" while responding to the v11 series header...
> Were these tests done on v9 or v11?
>
> Cheers,
> Laurent.
>
> >
> > The regression result is sorted by the metric will-it-scale.per_thread_ops.
> > Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
> > Commit id:
> > base commit: d55f34411b1b126429a823d06c3124c16283231f
> > head commit: 0355322b3577eeab7669066df42c550a56801110
> > Benchmark suite: will-it-scale
> > Download link:
> > https://github.com/antonblanchard/will-it-scale/tree/master/tests
> > Metrics:
> > will-it-scale.per_process_ops=processes/nr_cpu
> > will-it-scale.per_thread_ops=threads/nr_cpu
> > test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> > THP: enable / disable
> > nr_task: 100%
> >
> > 1. Regressions:
> > a) THP enabled:
> > testcase base change head metric
> > page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
> > page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
> > brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
> > page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
> > signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
> >
> > b) THP disabled:
> > testcase base change head metric
> > page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
> > page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
> > context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
> > brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
> > page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
> > signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
> >
> > 2. Improvements:
> > a) THP enabled:
> > testcase base change head metric
> > malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
> > writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
> > signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
> >
> > b) THP disabled:
> > testcase base change head metric
> > malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
> > read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
> > page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
> > read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
> > writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
> > signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
> >
> > Notes: for above values in column "change", the higher value means that the related testcase result
> > on head commit is better than that on base commit for this benchmark.
> >
> >
> > Best regards
> > Haiyan Song
> >
> > ________________________________________
> > From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> > Sent: Thursday, May 17, 2018 7:06 PM
> > To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
> > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> > Subject: [PATCH v11 00/26] Speculative page faults
> >
> > This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
> > page fault without holding the mm semaphore [1].
> >
> > The idea is to try to handle user space page faults without holding the
> > mmap_sem. This should allow better concurrency for massively threaded
> > process since the page fault handler will not wait for other threads memory
> > layout change to be done, assuming that this change is done in another part
> > of the process's memory space. This type page fault is named speculative
> > page fault. If the speculative page fault fails because of a concurrency is
> > detected or because underlying PMD or PTE tables are not yet allocating, it
> > is failing its processing and a classic page fault is then tried.
> >
> > The speculative page fault (SPF) has to look for the VMA matching the fault
> > address without holding the mmap_sem, this is done by introducing a rwlock
> > which protects the access to the mm_rb tree. Previously this was done using
> > SRCU but it was introducing a lot of scheduling to process the VMA's
> > freeing operation which was hitting the performance by 20% as reported by
> > Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree is
> > limiting the locking contention to these operations which are expected to
> > be in a O(log n) order. In addition to ensure that the VMA is not freed in
> > our back a reference count is added and 2 services (get_vma() and
> > put_vma()) are introduced to handle the reference count. Once a VMA is
> > fetched from the RB tree using get_vma(), it must be later freed using
> > put_vma(). I can't see anymore the overhead I got while will-it-scale
> > benchmark anymore.
> >
> > The VMA's attributes checked during the speculative page fault processing
> > have to be protected against parallel changes. This is done by using a per
> > VMA sequence lock. This sequence lock allows the speculative page fault
> > handler to fast check for parallel changes in progress and to abort the
> > speculative page fault in that case.
> >
> > Once the VMA has been found, the speculative page fault handler would check
> > for the VMA's attributes to verify that the page fault has to be handled
> > correctly or not. Thus, the VMA is protected through a sequence lock which
> > allows fast detection of concurrent VMA changes. If such a change is
> > detected, the speculative page fault is aborted and a *classic* page fault
> > is tried. VMA sequence lockings are added when VMA attributes which are
> > checked during the page fault are modified.
> >
> > When the PTE is fetched, the VMA is checked to see if it has been changed,
> > so once the page table is locked, the VMA is valid, so any other changes
> > leading to touching this PTE will need to lock the page table, so no
> > parallel change is possible at this time.
> >
> > The locking of the PTE is done with interrupts disabled, this allows
> > checking for the PMD to ensure that there is not an ongoing collapsing
> > operation. Since khugepaged is firstly set the PMD to pmd_none and then is
> > waiting for the other CPU to have caught the IPI interrupt, if the pmd is
> > valid at the time the PTE is locked, we have the guarantee that the
> > collapsing operation will have to wait on the PTE lock to move forward.
> > This allows the SPF handler to map the PTE safely. If the PMD value is
> > different from the one recorded at the beginning of the SPF operation, the
> > classic page fault handler will be called to handle the operation while
> > holding the mmap_sem. As the PTE lock is done with the interrupts disabled,
> > the lock is done using spin_trylock() to avoid dead lock when handling a
> > page fault while a TLB invalidate is requested by another CPU holding the
> > PTE.
> >
> > In pseudo code, this could be seen as:
> > speculative_page_fault()
> > {
> > vma = get_vma()
> > check vma sequence count
> > check vma's support
> > disable interrupt
> > check pgd,p4d,...,pte
> > save pmd and pte in vmf
> > save vma sequence counter in vmf
> > enable interrupt
> > check vma sequence count
> > handle_pte_fault(vma)
> > ..
> > page = alloc_page()
> > pte_map_lock()
> > disable interrupt
> > abort if sequence counter has changed
> > abort if pmd or pte has changed
> > pte map and lock
> > enable interrupt
> > if abort
> > free page
> > abort
> > ...
> > }
> >
> > arch_fault_handler()
> > {
> > if (speculative_page_fault(&vma))
> > goto done
> > again:
> > lock(mmap_sem)
> > vma = find_vma();
> > handle_pte_fault(vma);
> > if retry
> > unlock(mmap_sem)
> > goto again;
> > done:
> > handle fault error
> > }
> >
> > Support for THP is not done because when checking for the PMD, we can be
> > confused by an in progress collapsing operation done by khugepaged. The
> > issue is that pmd_none() could be true either if the PMD is not already
> > populated or if the underlying PTE are in the way to be collapsed. So we
> > cannot safely allocate a PMD if pmd_none() is true.
> >
> > This series add a new software performance event named 'speculative-faults'
> > or 'spf'. It counts the number of successful page fault event handled
> > speculatively. When recording 'faults,spf' events, the faults one is
> > counting the total number of page fault events while 'spf' is only counting
> > the part of the faults processed speculatively.
> >
> > This series also introduces some trace events. They allow identifying
> > why page faults were not processed speculatively. They do not take
> > into account the faults generated by a single-threaded process, which
> > are processed directly while holding the mmap_sem. These trace events
> > are grouped in a system named 'pagefault':
> > - pagefault:spf_vma_changed : the VMA has been changed behind our back
> > - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
> > - pagefault:spf_vma_notsup : the VMA's type is not supported
> > - pagefault:spf_vma_access : the VMA's access rights are not respected
> > - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
> > our back
> >
> > To record all the related events, the easiest way is to run perf with
> > the following arguments:
> > $ perf stat -e 'faults,spf,pagefault:*' <command>
> >
> > There is also a dedicated vmstat counter showing the number of page
> > faults successfully handled speculatively. It can be seen this way:
> > $ grep speculative_pgfault /proc/vmstat
> >
> > This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
> > on x86, PowerPC and arm64.
> >
> > ---------------------
> > Real Workload results
> >
> > As mentioned in a previous email, we did unofficial runs using a
> > "popular in-memory multithreaded database product" on a 176-core SMT8
> > Power system which showed a 30% improvement in the number of
> > transactions processed per second. This run was done on the v6 series,
> > but the changes introduced in this new version should not impact the
> > performance boost seen.
> >
> > Here are the perf data captured during 2 of these runs on top of the v8
> > series:
> > vanilla spf
> > faults 89.418 101.364 +13%
> > spf n/a 97.989
> >
> > With the SPF kernel, most of the page faults were processed
> > speculatively.
> >
> > Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
> > it a try on an Android device. He reported that application launch time
> > improved on average by 6%, and for large applications (~100 threads) by
> > 20%.
> >
> > Here are the launch times Ganesh measured on Android 8.0 on a Qcom
> > MSM845 (8 cores) with 6GB of RAM (lower is better):
> >
> > Application 4.9 4.9+spf delta
> > com.tencent.mm 416 389 -7%
> > com.eg.android.AlipayGphone 1135 986 -13%
> > com.tencent.mtt 455 454 0%
> > com.qqgame.hlddz 1497 1409 -6%
> > com.autonavi.minimap 711 701 -1%
> > com.tencent.tmgp.sgame 788 748 -5%
> > com.immomo.momo 501 487 -3%
> > com.tencent.peng 2145 2112 -2%
> > com.smile.gifmaker 491 461 -6%
> > com.baidu.BaiduMap 479 366 -23%
> > com.taobao.taobao 1341 1198 -11%
> > com.baidu.searchbox 333 314 -6%
> > com.tencent.mobileqq 394 384 -3%
> > com.sina.weibo 907 906 0%
> > com.youku.phone 816 731 -11%
> > com.happyelements.AndroidAnimal.qq 763 717 -6%
> > com.UCMobile 415 411 -1%
> > com.tencent.tmgp.ak 1464 1431 -2%
> > com.tencent.qqmusic 336 329 -2%
> > com.sankuai.meituan 1661 1302 -22%
> > com.netease.cloudmusic 1193 1200 1%
> > air.tv.douyu.android 4257 4152 -2%
> >
> > ------------------
> > Benchmarks results
> >
> > Base kernel is v4.17.0-rc4-mm1
> > SPF is BASE + this series
> >
> > Kernbench:
> > ----------
> > Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
> > kernel (the kernel is built 5 times):
> >
> > Average Half load -j 8
> > Run (std deviation)
> > BASE SPF
> > Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
> > User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
> > System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
> > Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
> > Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
> > Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
> >
> > Average Optimal load -j 16
> > Run (std deviation)
> > BASE SPF
> > Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
> > User Time 11064.8 (981.142) 11085 (990.897) 0.18%
> > System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
> > Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
> > Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
> > Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
> >
> >
> > During a run on the SPF kernel, perf events were captured:
> > Performance counter stats for '../kernbench -M':
> > 526743764 faults
> > 210 spf
> > 3 pagefault:spf_vma_changed
> > 0 pagefault:spf_vma_noanon
> > 2278 pagefault:spf_vma_notsup
> > 0 pagefault:spf_vma_access
> > 0 pagefault:spf_pmd_changed
> >
> > Very few speculative page faults were recorded, as most of the
> > processes involved are single-threaded (it seems that on this
> > architecture some threads were created during the kernel build).
> >
> > Here are the kernbench results on an 80-CPU Power8 system:
> >
> > Average Half load -j 40
> > Run (std deviation)
> > BASE SPF
> > Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
> > User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
> > System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
> > Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
> > Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
> > Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
> >
> > Average Optimal load -j 80
> > Run (std deviation)
> > BASE SPF
> > Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
> > User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
> > System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
> > Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
> > Context Switches 223861 (138865) 225032 (139632) 0.52%
> > Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
> >
> > During a run on the SPF kernel, perf events were captured:
> > Performance counter stats for '../kernbench -M':
> > 116730856 faults
> > 0 spf
> > 3 pagefault:spf_vma_changed
> > 0 pagefault:spf_vma_noanon
> > 476 pagefault:spf_vma_notsup
> > 0 pagefault:spf_vma_access
> > 0 pagefault:spf_pmd_changed
> >
> > Most of the processes involved are single-threaded, so SPF is not
> > activated, but there is no impact on performance.
> >
> > Ebizzy:
> > -------
> > The test counts the number of records per second it can manage; higher
> > is better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
> > consistent results I repeated the test 100 times and measured the
> > average. The figures below are records processed per second.
> >
> > BASE SPF delta
> > 16 CPUs x86 VM 742.57 1490.24 100.69%
> > 80 CPUs P8 node 13105.4 24174.23 84.46%
> >
> > Here are the performance counters read during a run on a 16-CPU x86 VM:
> > Performance counter stats for './ebizzy -mTt 16':
> > 1706379 faults
> > 1674599 spf
> > 30588 pagefault:spf_vma_changed
> > 0 pagefault:spf_vma_noanon
> > 363 pagefault:spf_vma_notsup
> > 0 pagefault:spf_vma_access
> > 0 pagefault:spf_pmd_changed
> >
> > And the ones captured during a run on an 80-CPU Power node:
> > Performance counter stats for './ebizzy -mTt 80':
> > 1874773 faults
> > 1461153 spf
> > 413293 pagefault:spf_vma_changed
> > 0 pagefault:spf_vma_noanon
> > 200 pagefault:spf_vma_notsup
> > 0 pagefault:spf_vma_access
> > 0 pagefault:spf_pmd_changed
> >
> > In ebizzy's case most of the page faults were handled speculatively,
> > leading to the ebizzy performance boost.
> >
> > ------------------
> > Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
> > - Took into account all the review feedback from Punit Agrawal,
> > Ganesh Mahendran and Minchan Kim, hopefully.
> > - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
> > __do_page_fault().
> > - Loop in pte_spinlock() and pte_map_lock() when the PTE try-lock
> > fails instead of aborting the speculative page fault handling,
> > dropping the now useless trace event pagefault:spf_pte_lock.
> > - No longer try to reuse the fetched VMA during the speculative page
> > fault handling when retrying is needed. This added a lot of
> > complexity and additional tests didn't show a significant
> > performance improvement.
> > - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build
> > error.
> >
> > [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
> > [2] https://patchwork.kernel.org/patch/9999687/
> >
> >
> > Laurent Dufour (20):
> > mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
> > x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> > powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> > mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
> > mm: make pte_unmap_same compatible with SPF
> > mm: introduce INIT_VMA()
> > mm: protect VMA modifications using VMA sequence count
> > mm: protect mremap() against SPF hanlder
> > mm: protect SPF handler against anon_vma changes
> > mm: cache some VMA fields in the vm_fault structure
> > mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
> > mm: introduce __lru_cache_add_active_or_unevictable
> > mm: introduce __vm_normal_page()
> > mm: introduce __page_add_new_anon_rmap()
> > mm: protect mm_rb tree with a rwlock
> > mm: adding speculative page fault failure trace events
> > perf: add a speculative page fault sw event
> > perf tools: add support for the SPF perf event
> > mm: add speculative page fault vmstats
> > powerpc/mm: add speculative page fault
> >
> > Mahendran Ganesh (2):
> > arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> > arm64/mm: add speculative page fault
> >
> > Peter Zijlstra (4):
> > mm: prepare for FAULT_FLAG_SPECULATIVE
> > mm: VMA sequence count
> > mm: provide speculative fault infrastructure
> > x86/mm: add speculative pagefault handling
> >
> > arch/arm64/Kconfig | 1 +
> > arch/arm64/mm/fault.c | 12 +
> > arch/powerpc/Kconfig | 1 +
> > arch/powerpc/mm/fault.c | 16 +
> > arch/x86/Kconfig | 1 +
> > arch/x86/mm/fault.c | 27 +-
> > fs/exec.c | 2 +-
> > fs/proc/task_mmu.c | 5 +-
> > fs/userfaultfd.c | 17 +-
> > include/linux/hugetlb_inline.h | 2 +-
> > include/linux/migrate.h | 4 +-
> > include/linux/mm.h | 136 +++++++-
> > include/linux/mm_types.h | 7 +
> > include/linux/pagemap.h | 4 +-
> > include/linux/rmap.h | 12 +-
> > include/linux/swap.h | 10 +-
> > include/linux/vm_event_item.h | 3 +
> > include/trace/events/pagefault.h | 80 +++++
> > include/uapi/linux/perf_event.h | 1 +
> > kernel/fork.c | 5 +-
> > mm/Kconfig | 22 ++
> > mm/huge_memory.c | 6 +-
> > mm/hugetlb.c | 2 +
> > mm/init-mm.c | 3 +
> > mm/internal.h | 20 ++
> > mm/khugepaged.c | 5 +
> > mm/madvise.c | 6 +-
> > mm/memory.c | 612 +++++++++++++++++++++++++++++-----
> > mm/mempolicy.c | 51 ++-
> > mm/migrate.c | 6 +-
> > mm/mlock.c | 13 +-
> > mm/mmap.c | 229 ++++++++++---
> > mm/mprotect.c | 4 +-
> > mm/mremap.c | 13 +
> > mm/nommu.c | 2 +-
> > mm/rmap.c | 5 +-
> > mm/swap.c | 6 +-
> > mm/swap_state.c | 8 +-
> > mm/vmstat.c | 5 +-
> > tools/include/uapi/linux/perf_event.h | 1 +
> > tools/perf/util/evsel.c | 1 +
> > tools/perf/util/parse-events.c | 4 +
> > tools/perf/util/parse-events.l | 1 +
> > tools/perf/util/python.c | 1 +
> > 44 files changed, 1161 insertions(+), 211 deletions(-)
> > create mode 100644 include/trace/events/pagefault.h
> >
> > --
> > 2.7.4
> >
> >
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
2018-05-28 8:22 ` Haiyan Song
@ 2018-05-28 8:54 ` Laurent Dufour
2018-05-28 11:04 ` Wang, Kemi
2018-06-11 7:49 ` Song, HaiyanX
0 siblings, 2 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-05-28 8:54 UTC (permalink / raw)
To: Haiyan Song
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.
Do you plan to give this v11 a run?
>
>
> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regressions and improvements were found by LKP-tools (Linux
>>> Kernel Performance) on the V9 patch series, tested on an Intel
>>> 4-socket Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting these benchmark results, but you mentioned the
>> "V9 patch series" while responding to the v11 cover letter...
>> Were these tests done on v9 or v11?
>>
>> Cheers,
>> Laurent.
>>
>>>
>>> The regression results are sorted by the metric will-it-scale.per_thread_ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>> Commit id:
>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link:
>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>> will-it-scale.per_process_ops=processes/nr_cpu
>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>> THP: enable / disable
>>> nr_task: 100%
>>>
>>> 1. Regressions:
>>> a) THP enabled:
>>> testcase base change head metric
>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>
>>> b) THP disabled:
>>> testcase base change head metric
>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>
>>> 2. Improvements:
>>> a) THP enabled:
>>> testcase base change head metric
>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>
>>> b) THP disabled:
>>> testcase base change head metric
>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>
>>> Notes: for the values in the "change" column, a higher value means the
>>> testcase result on the head commit is better than on the base commit.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>>
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Thursday, May 17, 2018 7:06 PM
>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>
>>> This is a port to kernel 4.17 of the work done by Peter Zijlstra to
>>> handle page faults without holding the mm semaphore [1].
>>>
>>> The idea is to try to handle user space page faults without holding the
>>> mmap_sem. This should allow better concurrency for massively threaded
>>> processes, since the page fault handler will not wait for other
>>> threads' memory layout changes to be done, assuming that the change is
>>> done in another part of the process's memory space. This type of page
>>> fault is named a speculative page fault. If the speculative page fault
>>> fails because concurrency is detected or because the underlying PMD or
>>> PTE tables are not yet allocated, its processing is aborted and a
>>> classic page fault is tried instead.
>>>
>>> The speculative page fault (SPF) handler has to look up the VMA
>>> matching the fault address without holding the mmap_sem. This is done
>>> by introducing a rwlock which protects access to the mm_rb tree.
>>> Previously this was done using SRCU, but it introduced a lot of
>>> scheduling to process the VMAs' freeing operations, which hurt
>>> performance by 20% as reported by Kemi Wang [2]. Using a rwlock to
>>> protect access to the mm_rb tree limits the locking contention to these
>>> operations, which are expected to be O(log n). In addition, to ensure
>>> that the VMA is not freed behind our back, a reference count is added
>>> and 2 services (get_vma() and put_vma()) are introduced to handle it.
>>> Once a VMA is fetched from the RB tree using get_vma(), it must later
>>> be freed using put_vma(). I no longer see the overhead I observed with
>>> the will-it-scale benchmark.
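A minimal user-space sketch of the get_vma()/put_vma() pinning described above; the struct layout is hypothetical and a flag stands in for the actual freeing:

```c
#include <assert.h>
#include <stdatomic.h>

/* The RB tree holds one reference; a speculative fault handler takes
 * another with get_vma() so the VMA cannot be freed behind its back,
 * and drops it with put_vma(). The last put "frees" the VMA. */

struct vma {
	atomic_int refcount;
	int freed;		/* test hook: set once "freed" */
};

static void get_vma(struct vma *vma)
{
	atomic_fetch_add(&vma->refcount, 1);
}

static void put_vma(struct vma *vma)
{
	/* fetch_sub returns the old value: 1 means we held the last ref */
	if (atomic_fetch_sub(&vma->refcount, 1) == 1)
		vma->freed = 1;
}
```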
>>>
>>> The VMA's attributes checked during the speculative page fault
>>> processing have to be protected against parallel changes. This is done
>>> by using a per-VMA sequence lock, which allows the speculative page
>>> fault handler to quickly check for parallel changes in progress and to
>>> abort the speculative page fault in that case.
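The write side of that per-VMA sequence lock can be sketched like this; it is modeled on the kernel's seqcount_t, but the helper names here are hypothetical:

```c
#include <assert.h>

/* A VMA modification bumps the counter to odd on entry and back to
 * even on exit, so a speculative reader that sampled the counter can
 * detect both an in-progress and a completed change. */

struct vma_seq { unsigned int seq; };

static void vma_write_begin(struct vma_seq *s) { s->seq++; } /* odd */
static void vma_write_end(struct vma_seq *s)   { s->seq++; } /* even */

/* Reader: the sample is valid only if no writer was active when it was
 * taken and the counter has not moved since. */
static int vma_read_valid(const struct vma_seq *s, unsigned int sampled)
{
	return !(sampled & 1) && s->seq == sampled;
}
```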
>>>
>>> Once the VMA has been found, the speculative page fault handler checks
>>> the VMA's attributes to verify whether the page fault can be handled
>>> correctly. Thus, the VMA is protected through a sequence lock which
>>> allows fast detection of concurrent VMA changes. If such a change is
>>> detected, the speculative page fault is aborted and a *classic* page
>>> fault is tried instead. VMA sequence locking is added around the
>>> modifications of the VMA attributes that are checked during the page
>>> fault.
>>>
>>> When the PTE is fetched, the VMA is checked again for changes, so once
>>> the page table is locked the VMA is known to be valid. Any other change
>>> touching this PTE would need to take the page table lock, so no
>>> parallel change is possible at this time.
>>>
>>> The locking of the PTE is done with interrupts disabled; this allows
>>> checking the PMD to ensure that there is no ongoing collapse operation.
>>> Since khugepaged first sets the PMD to pmd_none and then waits for the
>>> other CPUs to have caught the IPI interrupt, if the PMD is valid at the
>>> time the PTE is locked, we have the guarantee that the collapse
>>> operation will have to wait on the PTE lock to move forward. This
>>> allows the SPF handler to map the PTE safely. If the PMD value is
>>> different from the one recorded at the beginning of the SPF operation,
>>> the classic page fault handler will be called to handle the operation
>>> while holding the mmap_sem. As the PTE lock is taken with interrupts
>>> disabled, spin_trylock() is used to avoid deadlock when handling a page
>>> fault while a TLB invalidation is requested by another CPU holding the
>>> PTE lock.
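A user-space sketch of the trylock rule above, with a pthread mutex standing in for the kernel's PTE spinlock (the function name is illustrative):

```c
#include <assert.h>
#include <pthread.h>

/* With interrupts disabled, blocking on the PTE lock could deadlock
 * against a CPU that holds it while waiting for this CPU to service a
 * TLB-flush IPI. So the speculative path only ever trylocks and, on
 * failure, backs off (retrying or falling back to the classic handler)
 * instead of spinning with interrupts off. */

static int spf_pte_trylock(pthread_mutex_t *ptl)
{
	/* 1: lock taken, SPF may proceed; 0: back off */
	return pthread_mutex_trylock(ptl) == 0;
}
```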
>>>
>>> [The remainder of the cover letter (pseudo-code, benchmark results,
>>> changelog and diffstat) is quoted verbatim earlier in this thread and
>>> is trimmed here.]
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
2018-05-28 8:54 ` Laurent Dufour
@ 2018-05-28 11:04 ` Wang, Kemi
2018-06-11 7:49 ` Song, HaiyanX
1 sibling, 0 replies; 106+ messages in thread
From: Wang, Kemi @ 2018-05-28 11:04 UTC (permalink / raw)
To: Laurent Dufour, Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
Minchan Kim, Punit Agrawal, vinayak menon, Yang Shi,
linux-kernel, linux-mm, haren, npiggin, bsingharora, paulmck,
Tim Chen, linuxppc-dev, x86
A full run would take one or two weeks depending on the resources available. Could you pick some of them up, e.g. those showing a performance regression?
-----Original Message-----
From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On Behalf Of Laurent Dufour
Sent: Monday, May 28, 2018 4:55 PM
To: Song, HaiyanX <haiyanx.song@intel.com>
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox <willy@infradead.org>; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar <mingo@redhat.com>; hpa@zytor.com; Will Deacon <will.deacon@arm.com>; Sergey Senozhatsky <sergey.senozhatsky@gmail.com>; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli <aarcange@redhat.com>; Alexei Starovoitov <alexei.starovoitov@gmail.com>; Wang, Kemi <kemi.wang@intel.com>; Daniel Jordan <daniel.m.jordan@oracle.com>; David Rientjes <rientjes@google.com>; Jerome Glisse <jglisse@redhat.com>; Ganesh Mahendran <opensource.ganesh@gmail.com>; Minchan Kim <minchan@kernel.org>; Punit Agrawal <punitagrawal@gmail.com>; vinayak menon <vinayakm.list@gmail.com>; Yang Shi <yang.shi@linux.alibaba.com>; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen <tim.c.chen@linux.intel.com>; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.
Do you plan to give this V11 a run ?
>
>
> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regressions and improvements were found by LKP-tools (Linux
>>> Kernel Performance) on the V9 patch series tested on an Intel 4s
>>> Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting these benchmark results, but you mentioned the
>> "V9 patch series" while responding to the v11 header series...
>> Were these tests done on v9 or v11 ?
>>
>> Cheers,
>> Laurent.
>>
>>>
>>> The regression results are sorted by the metric will-it-scale.per_thread_ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9
>>> patch series) Commit id:
>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link:
>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>> will-it-scale.per_process_ops=processes/nr_cpu
>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>> THP: enable / disable
>>> nr_task: 100%
>>>
>>> 1. Regressions:
>>> a) THP enabled:
>>> testcase base change head metric
>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>
>>> b) THP disabled:
>>> testcase base change head metric
>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>
>>> 2. Improvements:
>>> a) THP enabled:
>>> testcase base change head metric
>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>
>>> b) THP disabled:
>>> testcase base change head metric
>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>
>>> Notes: for the above values in the "change" column, a higher value
>>> means that the related testcase result on the head commit is better
>>> than that on the base commit for this benchmark.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>>
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf
>>> of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Thursday, May 17, 2018 7:06 PM
>>> To: akpm@linux-foundation.org; mhocko@kernel.org;
>>> peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com;
>>> dave@stgolabs.net; jack@suse.cz; Matthew Wilcox;
>>> khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com;
>>> benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org;
>>> Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey
>>> Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli;
>>> Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes;
>>> Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak
>>> menon; Yang Shi
>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org;
>>> haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com;
>>> paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org;
>>> x86@kernel.org
>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>
>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to
>>> handle page fault without holding the mm semaphore [1].
>>>
>>> The idea is to try to handle user space page faults without holding
>>> the mmap_sem. This should allow better concurrency for massively
>>> threaded processes since the page fault handler will not wait for
>>> other threads' memory layout changes to be done, assuming that the
>>> change is done in another part of the process's memory space. This
>>> type of page fault is named a speculative page fault. If the
>>> speculative page fault fails because a concurrent change is detected
>>> or because the underlying PMD or PTE tables are not yet allocated,
>>> its processing is aborted and a classic page fault is then tried.
>>>
>>> The speculative page fault (SPF) handler has to look up the VMA
>>> matching the fault address without holding the mmap_sem; this is
>>> done by introducing a rwlock which protects access to the mm_rb
>>> tree. Previously this was done using SRCU, but that introduced a lot
>>> of scheduling work to process the VMA freeing operations, which hurt
>>> performance by 20% as reported by Kemi Wang [2]. Using a rwlock to
>>> protect access to the mm_rb tree limits the locking contention to
>>> these operations, which are expected to be O(log n). In addition, to
>>> ensure that the VMA is not freed behind our back, a reference count
>>> is added and two services (get_vma() and put_vma()) are introduced
>>> to handle the reference count. Once a VMA is fetched from the RB
>>> tree using get_vma(), it must later be released using put_vma().
>>> With this scheme, the overhead previously seen while running the
>>> will-it-scale benchmark is gone.
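The get_vma()/put_vma() lifetime rule can be sketched in plain C. This is a minimal userspace model, not the kernel implementation: the struct, the plain-int refcount, and vma_alloc() are hypothetical stand-ins (the kernel uses an atomic_t and the mm_rb rwlock); only the pinning rule described above is what is being illustrated.

```c
/* Userspace model of the get_vma()/put_vma() pinning rule: a VMA
 * fetched from the tree is pinned by a refcount so it cannot be freed
 * behind the speculative handler's back. All names are stand-ins. */
#include <assert.h>
#include <stdlib.h>

struct vma {
    int refcount;                  /* stands in for an atomic_t */
    unsigned long vm_start, vm_end;
};

static struct vma *vma_alloc(unsigned long start, unsigned long end)
{
    struct vma *v = malloc(sizeof(*v));
    v->refcount = 1;               /* reference held by the mm tree */
    v->vm_start = start;
    v->vm_end = end;
    return v;
}

/* get_vma(): pin the VMA found in the tree; in the kernel this runs
 * under the mm_rb rwlock held in read mode. */
static struct vma *get_vma(struct vma *found)
{
    if (found)
        found->refcount++;
    return found;
}

/* put_vma(): drop the pin; the last put frees the VMA. */
static int put_vma(struct vma *v)
{
    if (--v->refcount == 0) {
        free(v);
        return 1;                  /* freed */
    }
    return 0;                      /* still referenced elsewhere */
}

int vma_refcount_demo(void)
{
    struct vma *v = vma_alloc(0x1000, 0x2000);
    struct vma *pinned = get_vma(v);   /* SPF handler pins the VMA */

    /* The mm tree drops its reference (VMA unlinked), but the VMA
     * survives because the SPF handler still holds a pin. */
    int freed_early = put_vma(v);
    int freed_late = put_vma(pinned);  /* SPF handler done: now freed */
    return freed_early == 0 && freed_late == 1;
}
```

The point of the design choice is that the handler can keep using the fetched VMA even if a concurrent munmap() unlinks it from the tree; the memory is only reclaimed at the final put_vma().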
>>>
>>> The VMA's attributes checked during the speculative page fault
>>> processing have to be protected against parallel changes. This is
>>> done by using a per-VMA sequence lock. This sequence lock allows the
>>> speculative page fault handler to quickly check for parallel changes
>>> in progress and to abort the speculative page fault in that case.
>>>
>>> Once the VMA has been found, the speculative page fault handler
>>> checks the VMA's attributes to verify whether the page fault can be
>>> handled correctly. Thus, the VMA is protected through a sequence
>>> lock which allows fast detection of concurrent VMA changes. If such
>>> a change is detected, the speculative page fault is aborted and a
>>> *classic* page fault is tried. VMA sequence locking is added
>>> wherever VMA attributes which are checked during the page fault are
>>> modified.
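The per-VMA sequence count protocol can be modelled in userspace C. This is a sketch: the kernel uses a seqcount_t with its write_seqcount_begin()/end() and read_seqcount_begin()/retry() helpers, so the struct and function names below are hypothetical stand-ins illustrating only the even/odd snapshot-and-retry protocol described above.

```c
/* Userspace model of the per-VMA sequence count check: a writer bumps
 * the counter to odd while modifying checked attributes and back to
 * even when done; the speculative reader snapshots the counter, reads
 * the attributes, and aborts if the counter was odd or has moved. */
#include <assert.h>

struct vma_model {
    unsigned seq;             /* even = stable, odd = write in progress */
    unsigned long vm_flags;   /* one of the attributes SPF checks */
};

static unsigned vma_read_begin(const struct vma_model *v)
{
    return v->seq;
}

static int vma_read_retry(const struct vma_model *v, unsigned snap)
{
    /* retry if a write was in progress or has happened since */
    return (snap & 1) || v->seq != snap;
}

static void vma_write_begin(struct vma_model *v) { v->seq++; }
static void vma_write_end(struct vma_model *v)   { v->seq++; }

/* Returns 1 when the speculative check must abort. */
int spf_seq_demo(void)
{
    struct vma_model v = { .seq = 0, .vm_flags = 0 };

    unsigned snap = vma_read_begin(&v);   /* SPF snapshots the counter */
    vma_write_begin(&v);                  /* mprotect() etc. modifies... */
    v.vm_flags = 0x4;                     /* ...a checked attribute */
    vma_write_end(&v);
    return vma_read_retry(&v, snap);      /* change detected: abort SPF */
}
```

In a real multi-threaded run the reader and writer race on different CPUs; the sketch serializes them only to make the detected-change path deterministic.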
>>>
>>> When the PTE is fetched, the VMA is re-checked to see if it has
>>> been changed, so once the page table is locked the VMA is known to
>>> be valid. Any other change leading to touching this PTE would need
>>> to lock the page table, so no parallel change is possible at this
>>> time.
>>>
>>> The locking of the PTE is done with interrupts disabled; this
>>> allows checking the PMD to ensure that there is no ongoing
>>> collapsing operation. Since khugepaged first sets the PMD to
>>> pmd_none and then waits for the other CPUs to have caught the IPI,
>>> if the PMD is valid at the time the PTE is locked, we have the
>>> guarantee that the collapsing operation will have to wait on the
>>> PTE lock to move forward. This allows the SPF handler to map the
>>> PTE safely. If the PMD value is different from the one recorded at
>>> the beginning of the SPF operation, the classic page fault handler
>>> will be called to handle the fault while holding the mmap_sem. As
>>> the PTE is locked with interrupts disabled, the lock is taken using
>>> spin_trylock() to avoid deadlock when handling a page fault while a
>>> TLB invalidation is requested by another CPU holding the PTE lock.
>>>
>>> In pseudo code, this could be seen as:
>>> speculative_page_fault()
>>> {
>>> vma = get_vma()
>>> check vma sequence count
>>> check vma's support
>>> disable interrupt
>>> check pgd,p4d,...,pte
>>> save pmd and pte in vmf
>>> save vma sequence counter in vmf
>>> enable interrupt
>>> check vma sequence count
>>> handle_pte_fault(vma)
>>> ..
>>> page = alloc_page()
>>> pte_map_lock()
>>> disable interrupt
>>> abort if sequence counter has changed
>>> abort if pmd or pte has changed
>>> pte map and lock
>>> enable interrupt
>>> if abort
>>> free page
>>> abort
>>> ...
>>> }
>>>
>>> arch_fault_handler()
>>> {
>>> if (speculative_page_fault(&vma))
>>> goto done
>>> again:
>>> lock(mmap_sem)
>>> vma = find_vma();
>>> handle_pte_fault(vma);
>>> if retry
>>> unlock(mmap_sem)
>>> goto again;
>>> done:
>>> handle fault error
>>> }
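The pte_map_lock() step in the pseudo code above relies on a try-lock rather than a spinning acquire. Below is a userspace C sketch of that decision; a C11 atomic_flag stands in for the kernel's spinlock_t, and interrupts obviously cannot really be disabled here, so this only models the "abort instead of spin" policy.

```c
/* Model of the pte_map_lock() try-lock policy: because the PTE lock is
 * taken with interrupts disabled, the SPF path uses a try-lock and
 * aborts on failure rather than spinning, avoiding a deadlock against
 * a CPU that holds the PTE lock while waiting for its TLB-invalidate
 * IPI to be acknowledged. atomic_flag stands in for spinlock_t. */
#include <assert.h>
#include <stdatomic.h>

static atomic_flag ptl = ATOMIC_FLAG_INIT;

/* try-lock: returns 1 on success, 0 if the lock is already held */
static int ptl_trylock(void)
{
    return !atomic_flag_test_and_set(&ptl);
}

static void ptl_unlock(void)
{
    atomic_flag_clear(&ptl);
}

/* Returns 1 when the SPF path would correctly abort to a classic fault. */
int pte_map_trylock_demo(void)
{
    /* Another CPU holds the PTE lock (e.g. during a TLB shootdown). */
    int other_cpu_holds = ptl_trylock();   /* acquires the lock */

    /* SPF path runs with interrupts disabled: try-lock only, never
     * spin; on failure the speculative fault is aborted. */
    int spf_got_lock = ptl_trylock();      /* fails: lock is held */

    ptl_unlock();
    return other_cpu_holds && !spf_got_lock;
}
```

Falling back to the classic path on try-lock failure trades a rare extra retry for immunity to the IPI deadlock the cover letter describes.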
>>>
>>> Support for THP is not done because when checking the PMD, we can
>>> be confused by an in-progress collapsing operation done by
>>> khugepaged. The issue is that pmd_none() could be true either if
>>> the PMD is not yet populated or if the underlying PTEs are in the
>>> process of being collapsed. So we cannot safely allocate a PMD if
>>> pmd_none() is true.
>>>
>>> This series adds a new software performance event named 'speculative-faults'
>>> or 'spf'. It counts the number of page fault events successfully
>>> handled speculatively. When recording 'faults,spf' events, the
>>> 'faults' one counts the total number of page fault events while
>>> 'spf' only counts the part of the faults processed speculatively.
>>>
>>> There are some trace events introduced by this series. They allow
>>> identifying why the page faults were not processed speculatively.
>>> This doesn't take into account the faults generated by a
>>> monothreaded process, which are directly processed while holding
>>> the mmap_sem. These trace events are grouped in a system named
>>> 'pagefault'; they are:
>>> - pagefault:spf_vma_changed : the VMA has been changed behind our
>>> back
>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>> - pagefault:spf_vma_access : the VMA's access rights are not
>>> respected
>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed
>>> behind our back
>>>
>>> To record all the related events, the easiest way is to run perf
>>> with the following arguments:
>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>
>>> There is also a dedicated vmstat counter showing the number of
>>> successful page faults handled speculatively. It can be seen this way:
>>> $ grep speculative_pgfault /proc/vmstat
>>>
>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is
>>> functional on x86, PowerPC and arm64.
>>>
>>> ---------------------
>>> Real Workload results
>>>
>>> As mentioned in a previous email, we did unofficial runs using a
>>> "popular in-memory multithreaded database product" on a 176-core
>>> SMT8 Power system which showed a 30% improvement in the number of
>>> transactions processed per second. This run was done on the v6
>>> series, but the changes introduced in this new version should not
>>> impact the performance boost seen.
>>>
>>> Here are the perf data captured during 2 of these runs on top of the
>>> v8
>>> series:
>>> vanilla spf
>>> faults 89.418 101.364 +13%
>>> spf n/a 97.989
>>>
>>> With the SPF kernel, most of the page faults were processed in a
>>> speculative way.
>>>
>>> Ganesh Mahendran had backported the series on top of a 4.9 kernel
>>> and gave it a try on an Android device. He reported that the
>>> application launch time was improved on average by 6%, and for
>>> large applications (~100 threads) by 20%.
>>>
>>> Here are the launch times Ganesh measured on Android 8.0 on top of
>>> a Qcom MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>
>>> Application 4.9 4.9+spf delta
>>> com.tencent.mm 416 389 -7%
>>> com.eg.android.AlipayGphone 1135 986 -13%
>>> com.tencent.mtt 455 454 0%
>>> com.qqgame.hlddz 1497 1409 -6%
>>> com.autonavi.minimap 711 701 -1%
>>> com.tencent.tmgp.sgame 788 748 -5%
>>> com.immomo.momo 501 487 -3%
>>> com.tencent.peng 2145 2112 -2%
>>> com.smile.gifmaker 491 461 -6%
>>> com.baidu.BaiduMap 479 366 -23%
>>> com.taobao.taobao 1341 1198 -11%
>>> com.baidu.searchbox 333 314 -6%
>>> com.tencent.mobileqq 394 384 -3%
>>> com.sina.weibo 907 906 0%
>>> com.youku.phone 816 731 -11%
>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>> com.UCMobile 415 411 -1%
>>> com.tencent.tmgp.ak 1464 1431 -2%
>>> com.tencent.qqmusic 336 329 -2%
>>> com.sankuai.meituan 1661 1302 -22%
>>> com.netease.cloudmusic 1193 1200 1%
>>> air.tv.douyu.android 4257 4152 -2%
>>>
>>> ------------------
>>> Benchmarks results
>>>
>>> Base kernel is v4.17.0-rc4-mm1
>>> SPF is BASE + this series
>>>
>>> Kernbench:
>>> ----------
>>> Here are the results on a 16 CPUs x86 guest using kernbench on a
>>> 4.15 kernel (the kernel is built 5 times):
>>>
>>> Average Half load -j 8
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>
>>> Average Optimal load -j 16
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>
>>>
>>> During a run on the SPF, perf events were captured:
>>> Performance counter stats for '../kernbench -M':
>>> 526743764 faults
>>> 210 spf
>>> 3 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 2278 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> Very few speculative page faults were recorded as most of the
>>> processes involved are monothreaded (it seems that on this
>>> architecture some threads were created during the kernel build
>>> process).
>>>
>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>
>>> Average Half load -j 40
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>
>>> Average Optimal load -j 80
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>
>>> During a run on the SPF, perf events were captured:
>>> Performance counter stats for '../kernbench -M':
>>> 116730856 faults
>>> 0 spf
>>> 3 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 476 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> Most of the processes involved are monothreaded so SPF is not
>>> activated, but there is no impact on performance.
>>>
>>> Ebizzy:
>>> -------
>>> The test counts the number of records per second it can manage;
>>> higher is better. I run it like this: 'ebizzy -mTt <nrcpus>'.
>>> To get consistent results I repeated the test 100 times and
>>> measured the average. The reported number is records processed per
>>> second.
>>>
>>> BASE SPF delta
>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>
>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>> Performance counter stats for './ebizzy -mTt 16':
>>> 1706379 faults
>>> 1674599 spf
>>> 30588 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 363 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> And the ones captured during a run on a 80 CPUs Power node:
>>> Performance counter stats for './ebizzy -mTt 80':
>>> 1874773 faults
>>> 1461153 spf
>>> 413293 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 200 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> In ebizzy's case most of the page faults were handled in a
>>> speculative way, leading to the ebizzy performance boost.
>>>
>>> ------------------
>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>  - Accounted for all review feedback from Punit Agrawal, Ganesh
>>>    Mahendran and Minchan Kim, hopefully.
>>>  - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>    __do_page_fault().
>>>  - Loop in pte_spinlock() and pte_map_lock() when the pte try lock
>>>    fails instead of aborting the speculative page fault handling.
>>>    Dropping the now useless trace event pagefault:spf_pte_lock.
>>>  - No more trying to reuse the fetched VMA during the speculative
>>>    page fault handling when retrying is needed. This added a lot of
>>>    complexity and additional tests done didn't show a significant
>>>    performance improvement.
>>>  - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>
>>> [1]
>>> http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-s
>>> peculative-page-faults-tt965642.html#none
>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>
>>>
>>> Laurent Dufour (20):
>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>> mm: make pte_unmap_same compatible with SPF
>>> mm: introduce INIT_VMA()
>>> mm: protect VMA modifications using VMA sequence count
>>> mm: protect mremap() against SPF handler
>>> mm: protect SPF handler against anon_vma changes
>>> mm: cache some VMA fields in the vm_fault structure
>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>> mm: introduce __lru_cache_add_active_or_unevictable
>>> mm: introduce __vm_normal_page()
>>> mm: introduce __page_add_new_anon_rmap()
>>> mm: protect mm_rb tree with a rwlock
>>> mm: adding speculative page fault failure trace events
>>> perf: add a speculative page fault sw event
>>> perf tools: add support for the SPF perf event
>>> mm: add speculative page fault vmstats
>>> powerpc/mm: add speculative page fault
>>>
>>> Mahendran Ganesh (2):
>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> arm64/mm: add speculative page fault
>>>
>>> Peter Zijlstra (4):
>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>> mm: VMA sequence count
>>> mm: provide speculative fault infrastructure
>>> x86/mm: add speculative pagefault handling
>>>
>>> arch/arm64/Kconfig | 1 +
>>> arch/arm64/mm/fault.c | 12 +
>>> arch/powerpc/Kconfig | 1 +
>>> arch/powerpc/mm/fault.c | 16 +
>>> arch/x86/Kconfig | 1 +
>>> arch/x86/mm/fault.c | 27 +-
>>> fs/exec.c | 2 +-
>>> fs/proc/task_mmu.c | 5 +-
>>> fs/userfaultfd.c | 17 +-
>>> include/linux/hugetlb_inline.h | 2 +-
>>> include/linux/migrate.h | 4 +-
>>> include/linux/mm.h | 136 +++++++-
>>> include/linux/mm_types.h | 7 +
>>> include/linux/pagemap.h | 4 +-
>>> include/linux/rmap.h | 12 +-
>>> include/linux/swap.h | 10 +-
>>> include/linux/vm_event_item.h | 3 +
>>> include/trace/events/pagefault.h | 80 +++++
>>> include/uapi/linux/perf_event.h | 1 +
>>> kernel/fork.c | 5 +-
>>> mm/Kconfig | 22 ++
>>> mm/huge_memory.c | 6 +-
>>> mm/hugetlb.c | 2 +
>>> mm/init-mm.c | 3 +
>>> mm/internal.h | 20 ++
>>> mm/khugepaged.c | 5 +
>>> mm/madvise.c | 6 +-
>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>> mm/mempolicy.c | 51 ++-
>>> mm/migrate.c | 6 +-
>>> mm/mlock.c | 13 +-
>>> mm/mmap.c | 229 ++++++++++---
>>> mm/mprotect.c | 4 +-
>>> mm/mremap.c | 13 +
>>> mm/nommu.c | 2 +-
>>> mm/rmap.c | 5 +-
>>> mm/swap.c | 6 +-
>>> mm/swap_state.c | 8 +-
>>> mm/vmstat.c | 5 +-
>>> tools/include/uapi/linux/perf_event.h | 1 +
>>> tools/perf/util/evsel.c | 1 +
>>> tools/perf/util/parse-events.c | 4 +
>>> tools/perf/util/parse-events.l | 1 +
>>> tools/perf/util/python.c | 1 +
>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>> create mode 100644 include/trace/events/pagefault.h
>>>
>>> --
>>> 2.7.4
>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
Pj4gaGFuZGxlIHBhZ2UgZmF1bHQgd2l0aG91dCBob2xkaW5nIHRoZSBtbSBzZW1hcGhvcmUgWzFd
Lg0KPj4+DQo+Pj4gVGhlIGlkZWEgaXMgdG8gdHJ5IHRvIGhhbmRsZSB1c2VyIHNwYWNlIHBhZ2Ug
ZmF1bHRzIHdpdGhvdXQgaG9sZGluZyANCj4+PiB0aGUgbW1hcF9zZW0uIFRoaXMgc2hvdWxkIGFs
bG93IGJldHRlciBjb25jdXJyZW5jeSBmb3IgbWFzc2l2ZWx5IA0KPj4+IHRocmVhZGVkIHByb2Nl
c3Mgc2luY2UgdGhlIHBhZ2UgZmF1bHQgaGFuZGxlciB3aWxsIG5vdCB3YWl0IGZvciANCj4+PiBv
dGhlciB0aHJlYWRzIG1lbW9yeSBsYXlvdXQgY2hhbmdlIHRvIGJlIGRvbmUsIGFzc3VtaW5nIHRo
YXQgdGhpcyANCj4+PiBjaGFuZ2UgaXMgZG9uZSBpbiBhbm90aGVyIHBhcnQgb2YgdGhlIHByb2Nl
c3MncyBtZW1vcnkgc3BhY2UuIFRoaXMgDQo+Pj4gdHlwZSBwYWdlIGZhdWx0IGlzIG5hbWVkIHNw
ZWN1bGF0aXZlIHBhZ2UgZmF1bHQuIElmIHRoZSBzcGVjdWxhdGl2ZSANCj4+PiBwYWdlIGZhdWx0
IGZhaWxzIGJlY2F1c2Ugb2YgYSBjb25jdXJyZW5jeSBpcyBkZXRlY3RlZCBvciBiZWNhdXNlIA0K
Pj4+IHVuZGVybHlpbmcgUE1EIG9yIFBURSB0YWJsZXMgYXJlIG5vdCB5ZXQgYWxsb2NhdGluZywg
aXQgaXMgZmFpbGluZyBpdHMgcHJvY2Vzc2luZyBhbmQgYSBjbGFzc2ljIHBhZ2UgZmF1bHQgaXMg
dGhlbiB0cmllZC4NCj4+Pg0KPj4+IFRoZSBzcGVjdWxhdGl2ZSBwYWdlIGZhdWx0IChTUEYpIGhh
cyB0byBsb29rIGZvciB0aGUgVk1BIG1hdGNoaW5nIA0KPj4+IHRoZSBmYXVsdCBhZGRyZXNzIHdp
dGhvdXQgaG9sZGluZyB0aGUgbW1hcF9zZW0sIHRoaXMgaXMgZG9uZSBieSANCj4+PiBpbnRyb2R1
Y2luZyBhIHJ3bG9jayB3aGljaCBwcm90ZWN0cyB0aGUgYWNjZXNzIHRvIHRoZSBtbV9yYiB0cmVl
LiANCj4+PiBQcmV2aW91c2x5IHRoaXMgd2FzIGRvbmUgdXNpbmcgU1JDVSBidXQgaXQgd2FzIGlu
dHJvZHVjaW5nIGEgbG90IG9mIA0KPj4+IHNjaGVkdWxpbmcgdG8gcHJvY2VzcyB0aGUgVk1BJ3Mg
ZnJlZWluZyBvcGVyYXRpb24gd2hpY2ggd2FzIGhpdHRpbmcgDQo+Pj4gdGhlIHBlcmZvcm1hbmNl
IGJ5IDIwJSBhcyByZXBvcnRlZCBieSBLZW1pIFdhbmcgWzJdLiBVc2luZyBhIHJ3bG9jayANCj4+
PiB0byBwcm90ZWN0IGFjY2VzcyB0byB0aGUgbW1fcmIgdHJlZSBpcyBsaW1pdGluZyB0aGUgbG9j
a2luZyANCj4+PiBjb250ZW50aW9uIHRvIHRoZXNlIG9wZXJhdGlvbnMgd2hpY2ggYXJlIGV4cGVj
dGVkIHRvIGJlIGluIGEgTyhsb2cgDQo+Pj4gbikgb3JkZXIuIEluIGFkZGl0aW9uIHRvIGVuc3Vy
ZSB0aGF0IHRoZSBWTUEgaXMgbm90IGZyZWVkIGluIG91ciANCj4+PiBiYWNrIGEgcmVmZXJlbmNl
IGNvdW50IGlzIGFkZGVkIGFuZCAyIHNlcnZpY2VzIChnZXRfdm1hKCkgYW5kDQo+Pj4gcHV0X3Zt
YSgpKSBhcmUgaW50cm9kdWNlZCB0byBoYW5kbGUgdGhlIHJlZmVyZW5jZSBjb3VudC4gT25jZSBh
IFZNQSANCj4+PiBpcyBmZXRjaGVkIGZyb20gdGhlIFJCIHRyZWUgdXNpbmcgZ2V0X3ZtYSgpLCBp
dCBtdXN0IGJlIGxhdGVyIGZyZWVkIA0KPj4+IHVzaW5nIHB1dF92bWEoKS4gSSBjYW4ndCBzZWUg
YW55bW9yZSB0aGUgb3ZlcmhlYWQgSSBnb3Qgd2hpbGUgDQo+Pj4gd2lsbC1pdC1zY2FsZSBiZW5j
aG1hcmsgYW55bW9yZS4NCj4+Pg0KPj4+IFRoZSBWTUEncyBhdHRyaWJ1dGVzIGNoZWNrZWQgZHVy
aW5nIHRoZSBzcGVjdWxhdGl2ZSBwYWdlIGZhdWx0IA0KPj4+IHByb2Nlc3NpbmcgaGF2ZSB0byBi
ZSBwcm90ZWN0ZWQgYWdhaW5zdCBwYXJhbGxlbCBjaGFuZ2VzLiBUaGlzIGlzIA0KPj4+IGRvbmUg
YnkgdXNpbmcgYSBwZXIgVk1BIHNlcXVlbmNlIGxvY2suIFRoaXMgc2VxdWVuY2UgbG9jayBhbGxv
d3MgdGhlIA0KPj4+IHNwZWN1bGF0aXZlIHBhZ2UgZmF1bHQgaGFuZGxlciB0byBmYXN0IGNoZWNr
IGZvciBwYXJhbGxlbCBjaGFuZ2VzIGluIA0KPj4+IHByb2dyZXNzIGFuZCB0byBhYm9ydCB0aGUg
c3BlY3VsYXRpdmUgcGFnZSBmYXVsdCBpbiB0aGF0IGNhc2UuDQo+Pj4NCj4+PiBPbmNlIHRoZSBW
TUEgaGFzIGJlZW4gZm91bmQsIHRoZSBzcGVjdWxhdGl2ZSBwYWdlIGZhdWx0IGhhbmRsZXIgDQo+
Pj4gd291bGQgY2hlY2sgZm9yIHRoZSBWTUEncyBhdHRyaWJ1dGVzIHRvIHZlcmlmeSB0aGF0IHRo
ZSBwYWdlIGZhdWx0IA0KPj4+IGhhcyB0byBiZSBoYW5kbGVkIGNvcnJlY3RseSBvciBub3QuIFRo
dXMsIHRoZSBWTUEgaXMgcHJvdGVjdGVkIA0KPj4+IHRocm91Z2ggYSBzZXF1ZW5jZSBsb2NrIHdo
aWNoIGFsbG93cyBmYXN0IGRldGVjdGlvbiBvZiBjb25jdXJyZW50IA0KPj4+IFZNQSBjaGFuZ2Vz
LiBJZiBzdWNoIGEgY2hhbmdlIGlzIGRldGVjdGVkLCB0aGUgc3BlY3VsYXRpdmUgcGFnZSANCj4+
PiBmYXVsdCBpcyBhYm9ydGVkIGFuZCBhICpjbGFzc2ljKiBwYWdlIGZhdWx0IGlzIHRyaWVkLiAg
Vk1BIHNlcXVlbmNlIA0KPj4+IGxvY2tpbmdzIGFyZSBhZGRlZCB3aGVuIFZNQSBhdHRyaWJ1dGVz
IHdoaWNoIGFyZSBjaGVja2VkIGR1cmluZyB0aGUgcGFnZSBmYXVsdCBhcmUgbW9kaWZpZWQuDQo+
Pj4NCj4+PiBXaGVuIHRoZSBQVEUgaXMgZmV0Y2hlZCwgdGhlIFZNQSBpcyBjaGVja2VkIHRvIHNl
ZSBpZiBpdCBoYXMgYmVlbiANCj4+PiBjaGFuZ2VkLCBzbyBvbmNlIHRoZSBwYWdlIHRhYmxlIGlz
IGxvY2tlZCwgdGhlIFZNQSBpcyB2YWxpZCwgc28gYW55IA0KPj4+IG90aGVyIGNoYW5nZXMgbGVh
ZGluZyB0byB0b3VjaGluZyB0aGlzIFBURSB3aWxsIG5lZWQgdG8gbG9jayB0aGUgDQo+Pj4gcGFn
ZSB0YWJsZSwgc28gbm8gcGFyYWxsZWwgY2hhbmdlIGlzIHBvc3NpYmxlIGF0IHRoaXMgdGltZS4N
Cj4+Pg0KPj4+IFRoZSBsb2NraW5nIG9mIHRoZSBQVEUgaXMgZG9uZSB3aXRoIGludGVycnVwdHMg
ZGlzYWJsZWQsIHRoaXMgYWxsb3dzIA0KPj4+IGNoZWNraW5nIGZvciB0aGUgUE1EIHRvIGVuc3Vy
ZSB0aGF0IHRoZXJlIGlzIG5vdCBhbiBvbmdvaW5nIA0KPj4+IGNvbGxhcHNpbmcgb3BlcmF0aW9u
LiBTaW5jZSBraHVnZXBhZ2VkIGlzIGZpcnN0bHkgc2V0IHRoZSBQTUQgdG8gDQo+Pj4gcG1kX25v
bmUgYW5kIHRoZW4gaXMgd2FpdGluZyBmb3IgdGhlIG90aGVyIENQVSB0byBoYXZlIGNhdWdodCB0
aGUgDQo+Pj4gSVBJIGludGVycnVwdCwgaWYgdGhlIHBtZCBpcyB2YWxpZCBhdCB0aGUgdGltZSB0
aGUgUFRFIGlzIGxvY2tlZCwgd2UgDQo+Pj4gaGF2ZSB0aGUgZ3VhcmFudGVlIHRoYXQgdGhlIGNv
bGxhcHNpbmcgb3BlcmF0aW9uIHdpbGwgaGF2ZSB0byB3YWl0IG9uIHRoZSBQVEUgbG9jayB0byBt
b3ZlIGZvcndhcmQuDQo+Pj4gVGhpcyBhbGxvd3MgdGhlIFNQRiBoYW5kbGVyIHRvIG1hcCB0aGUg
UFRFIHNhZmVseS4gSWYgdGhlIFBNRCB2YWx1ZSANCj4+PiBpcyBkaWZmZXJlbnQgZnJvbSB0aGUg
b25lIHJlY29yZGVkIGF0IHRoZSBiZWdpbm5pbmcgb2YgdGhlIFNQRiANCj4+PiBvcGVyYXRpb24s
IHRoZSBjbGFzc2ljIHBhZ2UgZmF1bHQgaGFuZGxlciB3aWxsIGJlIGNhbGxlZCB0byBoYW5kbGUg
DQo+Pj4gdGhlIG9wZXJhdGlvbiB3aGlsZSBob2xkaW5nIHRoZSBtbWFwX3NlbS4gQXMgdGhlIFBU
RSBsb2NrIGlzIGRvbmUgDQo+Pj4gd2l0aCB0aGUgaW50ZXJydXB0cyBkaXNhYmxlZCwgdGhlIGxv
Y2sgaXMgZG9uZSB1c2luZyBzcGluX3RyeWxvY2soKSANCj4+PiB0byBhdm9pZCBkZWFkIGxvY2sg
d2hlbiBoYW5kbGluZyBhIHBhZ2UgZmF1bHQgd2hpbGUgYSBUTEIgaW52YWxpZGF0ZSANCj4+PiBp
cyByZXF1ZXN0ZWQgYnkgYW5vdGhlciBDUFUgaG9sZGluZyB0aGUgUFRFLg0KPj4+DQo+Pj4gSW4g
cHNldWRvIGNvZGUsIHRoaXMgY291bGQgYmUgc2VlbiBhczoNCj4+PiAgICAgc3BlY3VsYXRpdmVf
cGFnZV9mYXVsdCgpDQo+Pj4gICAgIHsNCj4+PiAgICAgICAgICAgICB2bWEgPSBnZXRfdm1hKCkN
Cj4+PiAgICAgICAgICAgICBjaGVjayB2bWEgc2VxdWVuY2UgY291bnQNCj4+PiAgICAgICAgICAg
ICBjaGVjayB2bWEncyBzdXBwb3J0DQo+Pj4gICAgICAgICAgICAgZGlzYWJsZSBpbnRlcnJ1cHQN
Cj4+PiAgICAgICAgICAgICAgICAgICBjaGVjayBwZ2QscDRkLC4uLixwdGUNCj4+PiAgICAgICAg
ICAgICAgICAgICBzYXZlIHBtZCBhbmQgcHRlIGluIHZtZg0KPj4+ICAgICAgICAgICAgICAgICAg
IHNhdmUgdm1hIHNlcXVlbmNlIGNvdW50ZXIgaW4gdm1mDQo+Pj4gICAgICAgICAgICAgZW5hYmxl
IGludGVycnVwdA0KPj4+ICAgICAgICAgICAgIGNoZWNrIHZtYSBzZXF1ZW5jZSBjb3VudA0KPj4+
ICAgICAgICAgICAgIGhhbmRsZV9wdGVfZmF1bHQodm1hKQ0KPj4+ICAgICAgICAgICAgICAgICAg
ICAgLi4NCj4+PiAgICAgICAgICAgICAgICAgICAgIHBhZ2UgPSBhbGxvY19wYWdlKCkNCj4+PiAg
ICAgICAgICAgICAgICAgICAgIHB0ZV9tYXBfbG9jaygpDQo+Pj4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGRpc2FibGUgaW50ZXJydXB0DQo+Pj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgYWJvcnQgaWYgc2VxdWVuY2UgY291bnRlciBoYXMgY2hhbmdlZA0KPj4+ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGFib3J0IGlmIHBtZCBvciBwdGUgaGFz
IGNoYW5nZWQNCj4+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwdGUgbWFw
IGFuZCBsb2NrDQo+Pj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgIGVuYWJsZSBpbnRlcnJ1
cHQNCj4+PiAgICAgICAgICAgICAgICAgICAgIGlmIGFib3J0DQo+Pj4gICAgICAgICAgICAgICAg
ICAgICAgICBmcmVlIHBhZ2UNCj4+PiAgICAgICAgICAgICAgICAgICAgICAgIGFib3J0DQo+Pj4g
ICAgICAgICAgICAgICAgICAgICAuLi4NCj4+PiAgICAgfQ0KPj4+DQo+Pj4gICAgIGFyY2hfZmF1
bHRfaGFuZGxlcigpDQo+Pj4gICAgIHsNCj4+PiAgICAgICAgICAgICBpZiAoc3BlY3VsYXRpdmVf
cGFnZV9mYXVsdCgmdm1hKSkNCj4+PiAgICAgICAgICAgICAgICBnb3RvIGRvbmUNCj4+PiAgICAg
YWdhaW46DQo+Pj4gICAgICAgICAgICAgbG9jayhtbWFwX3NlbSkNCj4+PiAgICAgICAgICAgICB2
bWEgPSBmaW5kX3ZtYSgpOw0KPj4+ICAgICAgICAgICAgIGhhbmRsZV9wdGVfZmF1bHQodm1hKTsN
Cj4+PiAgICAgICAgICAgICBpZiByZXRyeQ0KPj4+ICAgICAgICAgICAgICAgIHVubG9jayhtbWFw
X3NlbSkNCj4+PiAgICAgICAgICAgICAgICBnb3RvIGFnYWluOw0KPj4+ICAgICBkb25lOg0KPj4+
ICAgICAgICAgICAgIGhhbmRsZSBmYXVsdCBlcnJvcg0KPj4+ICAgICB9DQo+Pj4NCj4+PiBTdXBw
b3J0IGZvciBUSFAgaXMgbm90IGRvbmUgYmVjYXVzZSB3aGVuIGNoZWNraW5nIGZvciB0aGUgUE1E
LCB3ZSANCj4+PiBjYW4gYmUgY29uZnVzZWQgYnkgYW4gaW4gcHJvZ3Jlc3MgY29sbGFwc2luZyBv
cGVyYXRpb24gZG9uZSBieSANCj4+PiBraHVnZXBhZ2VkLiBUaGUgaXNzdWUgaXMgdGhhdCBwbWRf
bm9uZSgpIGNvdWxkIGJlIHRydWUgZWl0aGVyIGlmIHRoZSANCj4+PiBQTUQgaXMgbm90IGFscmVh
ZHkgcG9wdWxhdGVkIG9yIGlmIHRoZSB1bmRlcmx5aW5nIFBURSBhcmUgaW4gdGhlIHdheSANCj4+
PiB0byBiZSBjb2xsYXBzZWQuIFNvIHdlIGNhbm5vdCBzYWZlbHkgYWxsb2NhdGUgYSBQTUQgaWYg
cG1kX25vbmUoKSBpcyB0cnVlLg0KPj4+DQo+Pj4gVGhpcyBzZXJpZXMgYWRkIGEgbmV3IHNvZnR3
YXJlIHBlcmZvcm1hbmNlIGV2ZW50IG5hbWVkICdzcGVjdWxhdGl2ZS1mYXVsdHMnDQo+Pj4gb3Ig
J3NwZicuIEl0IGNvdW50cyB0aGUgbnVtYmVyIG9mIHN1Y2Nlc3NmdWwgcGFnZSBmYXVsdCBldmVu
dCANCj4+PiBoYW5kbGVkIHNwZWN1bGF0aXZlbHkuIFdoZW4gcmVjb3JkaW5nICdmYXVsdHMsc3Bm
JyBldmVudHMsIHRoZSANCj4+PiBmYXVsdHMgb25lIGlzIGNvdW50aW5nIHRoZSB0b3RhbCBudW1i
ZXIgb2YgcGFnZSBmYXVsdCBldmVudHMgd2hpbGUgDQo+Pj4gJ3NwZicgaXMgb25seSBjb3VudGlu
ZyB0aGUgcGFydCBvZiB0aGUgZmF1bHRzIHByb2Nlc3NlZCBzcGVjdWxhdGl2ZWx5Lg0KPj4+DQo+
Pj4gVGhlcmUgYXJlIHNvbWUgdHJhY2UgZXZlbnRzIGludHJvZHVjZWQgYnkgdGhpcyBzZXJpZXMu
IFRoZXkgYWxsb3cgDQo+Pj4gaWRlbnRpZnlpbmcgd2h5IHRoZSBwYWdlIGZhdWx0cyB3ZXJlIG5v
dCBwcm9jZXNzZWQgc3BlY3VsYXRpdmVseS4gDQo+Pj4gVGhpcyBkb2Vzbid0IHRha2UgaW4gYWNj
b3VudCB0aGUgZmF1bHRzIGdlbmVyYXRlZCBieSBhIG1vbm90aHJlYWRlZCANCj4+PiBwcm9jZXNz
IHdoaWNoIGRpcmVjdGx5IHByb2Nlc3NlZCB3aGlsZSBob2xkaW5nIHRoZSBtbWFwX3NlbS4gVGhp
cyANCj4+PiB0cmFjZSBldmVudHMgYXJlIGdyb3VwZWQgaW4gYSBzeXN0ZW0gbmFtZWQgJ3BhZ2Vm
YXVsdCcsIHRoZXkgYXJlOg0KPj4+ICAtIHBhZ2VmYXVsdDpzcGZfdm1hX2NoYW5nZWQgOiBpZiB0
aGUgVk1BIGhhcyBiZWVuIGNoYW5nZWQgaW4gb3VyIA0KPj4+IGJhY2sNCj4+PiAgLSBwYWdlZmF1
bHQ6c3BmX3ZtYV9ub2Fub24gOiB0aGUgdm1hLT5hbm9uX3ZtYSBmaWVsZCB3YXMgbm90IHlldCBz
ZXQuDQo+Pj4gIC0gcGFnZWZhdWx0OnNwZl92bWFfbm90c3VwIDogdGhlIFZNQSdzIHR5cGUgaXMg
bm90IHN1cHBvcnRlZA0KPj4+ICAtIHBhZ2VmYXVsdDpzcGZfdm1hX2FjY2VzcyA6IHRoZSBWTUEn
cyBhY2Nlc3MgcmlnaHQgYXJlIG5vdCANCj4+PiByZXNwZWN0ZWQNCj4+PiAgLSBwYWdlZmF1bHQ6
c3BmX3BtZF9jaGFuZ2VkIDogdGhlIHVwcGVyIFBNRCBwb2ludGVyIGhhcyBjaGFuZ2VkIGluIG91
cg0KPj4+ICAgIGJhY2suDQo+Pj4NCj4+PiBUbyByZWNvcmQgYWxsIHRoZSByZWxhdGVkIGV2ZW50
cywgdGhlIGVhc2llciBpcyB0byBydW4gcGVyZiB3aXRoIHRoZSANCj4+PiBmb2xsb3dpbmcgYXJn
dW1lbnRzIDoNCj4+PiAkIHBlcmYgc3RhdCAtZSAnZmF1bHRzLHNwZixwYWdlZmF1bHQ6KicgPGNv
bW1hbmQ+DQo+Pj4NCj4+PiBUaGVyZSBpcyBhbHNvIGEgZGVkaWNhdGVkIHZtc3RhdCBjb3VudGVy
IHNob3dpbmcgdGhlIG51bWJlciBvZiANCj4+PiBzdWNjZXNzZnVsIHBhZ2UgZmF1bHQgaGFuZGxl
ZCBzcGVjdWxhdGl2ZWx5LiBJIGNhbiBiZSBzZWVuIHRoaXMgd2F5Og0KPj4+ICQgZ3JlcCBzcGVj
dWxhdGl2ZV9wZ2ZhdWx0IC9wcm9jL3Ztc3RhdA0KPj4+DQo+Pj4gVGhpcyBzZXJpZXMgYnVpbGRz
IG9uIHRvcCBvZiB2NC4xNi1tbW90bS0yMDE4LTA0LTEzLTE3LTI4IGFuZCBpcyANCj4+PiBmdW5j
dGlvbmFsIG9uIHg4NiwgUG93ZXJQQyBhbmQgYXJtNjQuDQo+Pj4NCj4+PiAtLS0tLS0tLS0tLS0t
LS0tLS0tLS0NCj4+PiBSZWFsIFdvcmtsb2FkIHJlc3VsdHMNCj4+Pg0KPj4+IEFzIG1lbnRpb25l
ZCBpbiBwcmV2aW91cyBlbWFpbCwgd2UgZGlkIG5vbiBvZmZpY2lhbCBydW5zIHVzaW5nIGEgDQo+
Pj4gInBvcHVsYXIgaW4gbWVtb3J5IG11bHRpdGhyZWFkZWQgZGF0YWJhc2UgcHJvZHVjdCIgb24g
MTc2IGNvcmVzIFNNVDggDQo+Pj4gUG93ZXIgc3lzdGVtIHdoaWNoIHNob3dlZCBhIDMwJSBpbXBy
b3ZlbWVudHMgaW4gdGhlIG51bWJlciBvZiANCj4+PiB0cmFuc2FjdGlvbiBwcm9jZXNzZWQgcGVy
IHNlY29uZC4gVGhpcyBydW4gaGFzIGJlZW4gZG9uZSBvbiB0aGUgdjYgDQo+Pj4gc2VyaWVzLCBi
dXQgY2hhbmdlcyBpbnRyb2R1Y2VkIGluIHRoaXMgbmV3IHZlcnNpb24gc2hvdWxkIG5vdCBpbXBh
Y3QgdGhlIHBlcmZvcm1hbmNlIGJvb3N0IHNlZW4uDQo+Pj4NCj4+PiBIZXJlIGFyZSB0aGUgcGVy
ZiBkYXRhIGNhcHR1cmVkIGR1cmluZyAyIG9mIHRoZXNlIHJ1bnMgb24gdG9wIG9mIHRoZSANCj4+
PiB2OA0KPj4+IHNlcmllczoNCj4+PiAgICAgICAgICAgICAgICAgdmFuaWxsYSAgICAgICAgIHNw
Zg0KPj4+IGZhdWx0cyAgICAgICAgICA4OS40MTggICAgICAgICAgMTAxLjM2NCAgICAgICAgICsx
MyUNCj4+PiBzcGYgICAgICAgICAgICAgICAgbi9hICAgICAgICAgICA5Ny45ODkNCj4+Pg0KPj4+
IFdpdGggdGhlIFNQRiBrZXJuZWwsIG1vc3Qgb2YgdGhlIHBhZ2UgZmF1bHQgd2VyZSBwcm9jZXNz
ZWQgaW4gYSANCj4+PiBzcGVjdWxhdGl2ZSB3YXkuDQo+Pj4NCj4+PiBHYW5lc2ggTWFoZW5kcmFu
IGhhZCBiYWNrcG9ydGVkIHRoZSBzZXJpZXMgb24gdG9wIG9mIGEgNC45IGtlcm5lbCANCj4+PiBh
bmQgZ2F2ZSBpdCBhIHRyeSBvbiBhbiBhbmRyb2lkIGRldmljZS4gSGUgcmVwb3J0ZWQgdGhhdCB0
aGUgDQo+Pj4gYXBwbGljYXRpb24gbGF1bmNoIHRpbWUgd2FzIGltcHJvdmVkIGluIGF2ZXJhZ2Ug
YnkgNiUsIGFuZCBmb3IgbGFyZ2UgDQo+Pj4gYXBwbGljYXRpb25zICh+MTAwIHRocmVhZHMpIGJ5
IDIwJS4NCj4+Pg0KPj4+IEhlcmUgYXJlIHRoZSBsYXVuY2ggdGltZSBHYW5lc2ggbWVzdXJlZCBv
biBBbmRyb2lkIDguMCBvbiB0b3Agb2YgYSANCj4+PiBRY29tDQo+Pj4gTVNNODQ1ICg4IGNvcmVz
KSB3aXRoIDZHQiAodGhlIGxlc3MgaXMgYmV0dGVyKToNCj4+Pg0KPj4+IEFwcGxpY2F0aW9uICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICA0LjkgICAgIDQuOStzcGYgZGVsdGENCj4+PiBjb20u
dGVuY2VudC5tbSAgICAgICAgICAgICAgICAgICAgICAgICAgNDE2ICAgICAzODkgICAgIC03JQ0K
Pj4+IGNvbS5lZy5hbmRyb2lkLkFsaXBheUdwaG9uZSAgICAgICAgICAgICAxMTM1ICAgIDk4NiAg
ICAgLTEzJQ0KPj4+IGNvbS50ZW5jZW50Lm10dCAgICAgICAgICAgICAgICAgICAgICAgICA0NTUg
ICAgIDQ1NCAgICAgMCUNCj4+PiBjb20ucXFnYW1lLmhsZGR6ICAgICAgICAgICAgICAgICAgICAg
ICAgMTQ5NyAgICAxNDA5ICAgIC02JQ0KPj4+IGNvbS5hdXRvbmF2aS5taW5pbWFwICAgICAgICAg
ICAgICAgICAgICA3MTEgICAgIDcwMSAgICAgLTElDQo+Pj4gY29tLnRlbmNlbnQudG1ncC5zZ2Ft
ZSAgICAgICAgICAgICAgICAgIDc4OCAgICAgNzQ4ICAgICAtNSUNCj4+PiBjb20uaW1tb21vLm1v
bW8gICAgICAgICAgICAgICAgICAgICAgICAgNTAxICAgICA0ODcgICAgIC0zJQ0KPj4+IGNvbS50
ZW5jZW50LnBlbmcgICAgICAgICAgICAgICAgICAgICAgICAyMTQ1ICAgIDIxMTIgICAgLTIlDQo+
Pj4gY29tLnNtaWxlLmdpZm1ha2VyICAgICAgICAgICAgICAgICAgICAgIDQ5MSAgICAgNDYxICAg
ICAtNiUNCj4+PiBjb20uYmFpZHUuQmFpZHVNYXAgICAgICAgICAgICAgICAgICAgICAgNDc5ICAg
ICAzNjYgICAgIC0yMyUNCj4+PiBjb20udGFvYmFvLnRhb2JhbyAgICAgICAgICAgICAgICAgICAg
ICAgMTM0MSAgICAxMTk4ICAgIC0xMSUNCj4+PiBjb20uYmFpZHUuc2VhcmNoYm94ICAgICAgICAg
ICAgICAgICAgICAgMzMzICAgICAzMTQgICAgIC02JQ0KPj4+IGNvbS50ZW5jZW50Lm1vYmlsZXFx
ICAgICAgICAgICAgICAgICAgICAzOTQgICAgIDM4NCAgICAgLTMlDQo+Pj4gY29tLnNpbmEud2Vp
Ym8gICAgICAgICAgICAgICAgICAgICAgICAgIDkwNyAgICAgOTA2ICAgICAwJQ0KPj4+IGNvbS55
b3VrdS5waG9uZSAgICAgICAgICAgICAgICAgICAgICAgICA4MTYgICAgIDczMSAgICAgLTExJQ0K
Pj4+IGNvbS5oYXBweWVsZW1lbnRzLkFuZHJvaWRBbmltYWwucXEgICAgICA3NjMgICAgIDcxNyAg
ICAgLTYlDQo+Pj4gY29tLlVDTW9iaWxlICAgICAgICAgICAgICAgICAgICAgICAgICAgIDQxNSAg
ICAgNDExICAgICAtMSUNCj4+PiBjb20udGVuY2VudC50bWdwLmFrICAgICAgICAgICAgICAgICAg
ICAgMTQ2NCAgICAxNDMxICAgIC0yJQ0KPj4+IGNvbS50ZW5jZW50LnFxbXVzaWMgICAgICAgICAg
ICAgICAgICAgICAzMzYgICAgIDMyOSAgICAgLTIlDQo+Pj4gY29tLnNhbmt1YWkubWVpdHVhbiAg
ICAgICAgICAgICAgICAgICAgIDE2NjEgICAgMTMwMiAgICAtMjIlDQo+Pj4gY29tLm5ldGVhc2Uu
Y2xvdWRtdXNpYyAgICAgICAgICAgICAgICAgIDExOTMgICAgMTIwMCAgICAxJQ0KPj4+IGFpci50
di5kb3V5dS5hbmRyb2lkICAgICAgICAgICAgICAgICAgICA0MjU3ICAgIDQxNTIgICAgLTIlDQo+
Pj4NCj4+PiAtLS0tLS0tLS0tLS0tLS0tLS0NCj4+PiBCZW5jaG1hcmtzIHJlc3VsdHMNCj4+Pg0K
Pj4+IEJhc2Uga2VybmVsIGlzIHY0LjE3LjAtcmM0LW1tMQ0KPj4+IFNQRiBpcyBCQVNFICsgdGhp
cyBzZXJpZXMNCj4+Pg0KPj4+IEtlcm5iZW5jaDoNCj4+PiAtLS0tLS0tLS0tDQo+Pj4gSGVyZSBh
cmUgdGhlIHJlc3VsdHMgb24gYSAxNiBDUFVzIFg4NiBndWVzdCB1c2luZyBrZXJuYmVuY2ggb24g
YSANCj4+PiA0LjE1IGtlcm5lbCAoa2VybmVsIGlzIGJ1aWxkIDUgdGltZXMpOg0KPj4+DQo+Pj4g
QXZlcmFnZSBIYWxmIGxvYWQgLWogOA0KPj4+ICAgICAgICAgICAgICAgICAgUnVuICAgIChzdGQg
ZGV2aWF0aW9uKQ0KPj4+ICAgICAgICAgICAgICAgICAgQkFTRSAgICAgICAgICAgICAgICAgICBT
UEYNCj4+PiBFbGFwc2VkIFRpbWUgICAgIDE0NDguNjUgKDUuNzIzMTIpICAgICAgMTQ1NS44NCAo
NC44NDk1MSkgICAgICAgMC41MCUNCj4+PiBVc2VyICAgIFRpbWUgICAgIDEwMTM1LjQgKDMwLjM2
OTkpICAgICAgMTAxNDguOCAoMzEuMTI1MikgICAgICAgMC4xMyUNCj4+PiBTeXN0ZW0gIFRpbWUg
ICAgIDkwMC40NyAgKDIuODExMzEpICAgICAgOTIzLjI4ICAoNy41Mjc3OSkgICAgICAgMi41MyUN
Cj4+PiBQZXJjZW50IENQVSAgICAgIDc2MS40ICAgKDEuMTQwMTgpICAgICAgNzYwLjIgICAoMC40
NDcyMTQpICAgICAgLTAuMTYlDQo+Pj4gQ29udGV4dCBTd2l0Y2hlcyA4NTM4MCAgICgzNDE5LjUy
KSAgICAgIDg0NzQ4ICAgKDE5MDQuNDQpICAgICAgIC0wLjc0JQ0KPj4+IFNsZWVwcyAgICAgICAg
ICAgMTA1MDY0ICAoMTI0MC45NikgICAgICAxMDUwNzQgICgzMzcuNjEyKSAgICAgICAwLjAxJQ0K
Pj4+DQo+Pj4gQXZlcmFnZSBPcHRpbWFsIGxvYWQgLWogMTYNCj4+PiAgICAgICAgICAgICAgICAg
IFJ1biAgICAoc3RkIGRldmlhdGlvbikNCj4+PiAgICAgICAgICAgICAgICAgIEJBU0UgICAgICAg
ICAgICAgICAgICAgU1BGDQo+Pj4gRWxhcHNlZCBUaW1lICAgICA5MjAuNTI4ICgxMC4xMjEyKSAg
ICAgIDkyNy40MDQgKDguOTE3ODkpICAgICAgIDAuNzUlDQo+Pj4gVXNlciAgICBUaW1lICAgICAx
MTA2NC44ICg5ODEuMTQyKSAgICAgIDExMDg1ICAgKDk5MC44OTcpICAgICAgIDAuMTglDQo+Pj4g
U3lzdGVtICBUaW1lICAgICA5NzkuOTA0ICg4NC4wNjE1KSAgICAgIDEwMDEuMTQgKDgyLjU1MjMp
ICAgICAgIDIuMTclDQo+Pj4gUGVyY2VudCBDUFUgICAgICAxMDg5LjUgICgzNDUuODk0KSAgICAg
IDEwODYuMSAgKDM0My41NDUpICAgICAgIC0wLjMxJQ0KPj4+IENvbnRleHQgU3dpdGNoZXMgMTU5
NDg4ICAoNzgxNTYuNCkgICAgICAxNTgyMjMgICg3NzQ3Mi4xKSAgICAgICAtMC43OSUNCj4+PiBT
bGVlcHMgICAgICAgICAgIDExMDU2NiAgKDU4NzcuNDkpICAgICAgMTEwMzg4ICAoNTYxNy43NSkg
ICAgICAgLTAuMTYlDQo+Pj4NCj4+Pg0KPj4+IER1cmluZyBhIHJ1biBvbiB0aGUgU1BGLCBwZXJm
IGV2ZW50cyB3ZXJlIGNhcHR1cmVkOg0KPj4+ICBQZXJmb3JtYW5jZSBjb3VudGVyIHN0YXRzIGZv
ciAnLi4va2VybmJlbmNoIC1NJzoNCj4+PiAgICAgICAgICA1MjY3NDM3NjQgICAgICBmYXVsdHMN
Cj4+PiAgICAgICAgICAgICAgICAyMTAgICAgICBzcGYNCj4+PiAgICAgICAgICAgICAgICAgIDMg
ICAgICBwYWdlZmF1bHQ6c3BmX3ZtYV9jaGFuZ2VkDQo+Pj4gICAgICAgICAgICAgICAgICAwICAg
ICAgcGFnZWZhdWx0OnNwZl92bWFfbm9hbm9uDQo+Pj4gICAgICAgICAgICAgICAyMjc4ICAgICAg
cGFnZWZhdWx0OnNwZl92bWFfbm90c3VwDQo+Pj4gICAgICAgICAgICAgICAgICAwICAgICAgcGFn
ZWZhdWx0OnNwZl92bWFfYWNjZXNzDQo+Pj4gICAgICAgICAgICAgICAgICAwICAgICAgcGFnZWZh
dWx0OnNwZl9wbWRfY2hhbmdlZA0KPj4+DQo+Pj4gVmVyeSBmZXcgc3BlY3VsYXRpdmUgcGFnZSBm
YXVsdHMgd2VyZSByZWNvcmRlZCBhcyBtb3N0IG9mIHRoZSANCj4+PiBwcm9jZXNzZXMgaW52b2x2
ZWQgYXJlIG1vbm90aHJlYWRlZCAoc291bmRzIHRoYXQgb24gdGhpcyANCj4+PiBhcmNoaXRlY3R1
cmUgc29tZSB0aHJlYWRzIHdlcmUgY3JlYXRlZCBkdXJpbmcgdGhlIGtlcm5lbCBidWlsZCBwcm9j
ZXNzaW5nKS4NCj4+Pg0KPj4+IEhlcmUgYXJlIHRoZSBrZXJiZW5jaCByZXN1bHRzIG9uIGEgODAg
Q1BVcyBQb3dlcjggc3lzdGVtOg0KPj4+DQo+Pj4gQXZlcmFnZSBIYWxmIGxvYWQgLWogNDANCj4+
PiAgICAgICAgICAgICAgICAgIFJ1biAgICAoc3RkIGRldmlhdGlvbikNCj4+PiAgICAgICAgICAg
ICAgICAgIEJBU0UgICAgICAgICAgICAgICAgICAgU1BGDQo+Pj4gRWxhcHNlZCBUaW1lICAgICAx
MTcuMTUyICgwLjc3NDY0MikgICAgIDExNy4xNjYgKDAuNDc2MDU3KSAgICAgIDAuMDElDQo+Pj4g
VXNlciAgICBUaW1lICAgICA0NDc4LjUyICgyNC43Njg4KSAgICAgIDQ0NzkuNzYgKDkuMDg1NTUp
ICAgICAgIDAuMDMlDQo+Pj4gU3lzdGVtICBUaW1lICAgICAxMzEuMTA0ICgwLjcyMDA1NikgICAg
IDEzNC4wNCAgKDAuNzA4NDE0KSAgICAgIDIuMjQlDQo+Pj4gUGVyY2VudCBDUFUgICAgICAzOTM0
ICAgICgxOS43MTA0KSAgICAgIDM5MzcuMiAgKDE5LjAxODQpICAgICAgIDAuMDglDQo+Pj4gQ29u
dGV4dCBTd2l0Y2hlcyA5MjEyNS40ICg1NzYuNzg3KSAgICAgIDkyNTgxLjYgKDE5OC42MjIpICAg
ICAgIDAuNTAlDQo+Pj4gU2xlZXBzICAgICAgICAgICAzMTc5MjMgICg2NTIuNDk5KSAgICAgIDMx
ODQ2OSAgKDEyNTUuNTkpICAgICAgIDAuMTclDQo+Pj4NCj4+PiBBdmVyYWdlIE9wdGltYWwgbG9h
ZCAtaiA4MA0KPj4+ICAgICAgICAgICAgICAgICAgUnVuICAgIChzdGQgZGV2aWF0aW9uKQ0KPj4+
ICAgICAgICAgICAgICAgICAgQkFTRSAgICAgICAgICAgICAgICAgICBTUEYNCj4+PiBFbGFwc2Vk
IFRpbWUgICAgIDEwNy43MyAgKDAuNjMyNDE2KSAgICAgMTA3LjMxICAoMC41ODQ5MzYpICAgICAg
LTAuMzklDQo+Pj4gVXNlciAgICBUaW1lICAgICA1ODY5Ljg2ICgxNDY2LjcyKSAgICAgIDU4NzEu
NzEgKDE0NjcuMjcpICAgICAgIDAuMDMlDQo+Pj4gU3lzdGVtICBUaW1lICAgICAxNTMuNzI4ICgy
My44NTczKSAgICAgIDE1Ny4xNTMgKDI0LjM3MDQpICAgICAgIDIuMjMlDQo+Pj4gUGVyY2VudCBD
UFUgICAgICA1NDE4LjYgICgxNTY1LjE3KSAgICAgIDU0MzYuNyAgKDE1ODAuOTEpICAgICAgIDAu
MzMlDQo+Pj4gQ29udGV4dCBTd2l0Y2hlcyAyMjM4NjEgICgxMzg4NjUpICAgICAgIDIyNTAzMiAg
KDEzOTYzMikgICAgICAgIDAuNTIlDQo+Pj4gU2xlZXBzICAgICAgICAgICAzMzA1MjkgICgxMzQ5
NS4xKSAgICAgIDMzMjAwMSAgKDE0NzQ2LjIpICAgICAgIDAuNDUlDQo+Pj4NCj4+PiBEdXJpbmcg
YSBydW4gb24gdGhlIFNQRiwgcGVyZiBldmVudHMgd2VyZSBjYXB0dXJlZDoNCj4+PiAgUGVyZm9y
bWFuY2UgY291bnRlciBzdGF0cyBmb3IgJy4uL2tlcm5iZW5jaCAtTSc6DQo+Pj4gICAgICAgICAg
MTE2NzMwODU2ICAgICAgZmF1bHRzDQo+Pj4gICAgICAgICAgICAgICAgICAwICAgICAgc3BmDQo+
Pj4gICAgICAgICAgICAgICAgICAzICAgICAgcGFnZWZhdWx0OnNwZl92bWFfY2hhbmdlZA0KPj4+
ICAgICAgICAgICAgICAgICAgMCAgICAgIHBhZ2VmYXVsdDpzcGZfdm1hX25vYW5vbg0KPj4+ICAg
ICAgICAgICAgICAgIDQ3NiAgICAgIHBhZ2VmYXVsdDpzcGZfdm1hX25vdHN1cA0KPj4+ICAgICAg
ICAgICAgICAgICAgMCAgICAgIHBhZ2VmYXVsdDpzcGZfdm1hX2FjY2Vzcw0KPj4+ICAgICAgICAg
ICAgICAgICAgMCAgICAgIHBhZ2VmYXVsdDpzcGZfcG1kX2NoYW5nZWQNCj4+Pg0KPj4+IE1vc3Qg
b2YgdGhlIHByb2Nlc3NlcyBpbnZvbHZlZCBhcmUgbW9ub3RocmVhZGVkIHNvIFNQRiBpcyBub3Qg
DQo+Pj4gYWN0aXZhdGVkIGJ1dCB0aGVyZSBpcyBubyBpbXBhY3Qgb24gdGhlIHBlcmZvcm1hbmNl
Lg0KPj4+DQo+Pj4gRWJpenp5Og0KPj4+IC0tLS0tLS0NCj4+PiBUaGUgdGVzdCBpcyBjb3VudGlu
ZyB0aGUgbnVtYmVyIG9mIHJlY29yZHMgcGVyIHNlY29uZCBpdCBjYW4gbWFuYWdlLCANCj4+PiB0
aGUgaGlnaGVyIGlzIHRoZSBiZXN0LiBJIHJ1biBpdCBsaWtlIHRoaXMgJ2ViaXp6eSAtbVR0IDxu
cmNwdXM+Jy4gDQo+Pj4gVG8gZ2V0IGNvbnNpc3RlbnQgcmVzdWx0IEkgcmVwZWF0ZWQgdGhlIHRl
c3QgMTAwIHRpbWVzIGFuZCBtZWFzdXJlIA0KPj4+IHRoZSBhdmVyYWdlIHJlc3VsdC4gVGhlIG51
bWJlciBpcyB0aGUgcmVjb3JkIHByb2Nlc3NlcyBwZXIgc2Vjb25kLCANCj4+PiB0aGUgaGlnaGVy
IGlzIHRoZSBiZXN0Lg0KPj4+DQo+Pj4gICAgICAgICAgICAgICAgIEJBU0UgICAgICAgICAgICBT
UEYgICAgICAgICAgICAgZGVsdGENCj4+PiAxNiBDUFVzIHg4NiBWTSAgNzQyLjU3ICAgICAgICAg
IDE0OTAuMjQgICAgICAgICAxMDAuNjklDQo+Pj4gODAgQ1BVcyBQOCBub2RlIDEzMTA1LjQgICAg
ICAgICAyNDE3NC4yMyAgICAgICAgODQuNDYlDQo+Pj4NCj4+PiBIZXJlIGFyZSB0aGUgcGVyZm9y
bWFuY2UgY291bnRlciByZWFkIGR1cmluZyBhIHJ1biBvbiBhIDE2IENQVXMgeDg2IFZNOg0KPj4+
ICBQZXJmb3JtYW5jZSBjb3VudGVyIHN0YXRzIGZvciAnLi9lYml6enkgLW1UdCAxNic6DQo+Pj4g
ICAgICAgICAgICAxNzA2Mzc5ICAgICAgZmF1bHRzDQo+Pj4gICAgICAgICAgICAxNjc0NTk5ICAg
ICAgc3BmDQo+Pj4gICAgICAgICAgICAgIDMwNTg4ICAgICAgcGFnZWZhdWx0OnNwZl92bWFfY2hh
bmdlZA0KPj4+ICAgICAgICAgICAgICAgICAgMCAgICAgIHBhZ2VmYXVsdDpzcGZfdm1hX25vYW5v
bg0KPj4+ICAgICAgICAgICAgICAgIDM2MyAgICAgIHBhZ2VmYXVsdDpzcGZfdm1hX25vdHN1cA0K
Pj4+ICAgICAgICAgICAgICAgICAgMCAgICAgIHBhZ2VmYXVsdDpzcGZfdm1hX2FjY2Vzcw0KPj4+
ICAgICAgICAgICAgICAgICAgMCAgICAgIHBhZ2VmYXVsdDpzcGZfcG1kX2NoYW5nZWQNCj4+Pg0K
Pj4+IEFuZCB0aGUgb25lcyBjYXB0dXJlZCBkdXJpbmcgYSBydW4gb24gYSA4MCBDUFVzIFBvd2Vy
IG5vZGU6DQo+Pj4gIFBlcmZvcm1hbmNlIGNvdW50ZXIgc3RhdHMgZm9yICcuL2ViaXp6eSAtbVR0
IDgwJzoNCj4+PiAgICAgICAgICAgIDE4NzQ3NzMgICAgICBmYXVsdHMNCj4+PiAgICAgICAgICAg
IDE0NjExNTMgICAgICBzcGYNCj4+PiAgICAgICAgICAgICA0MTMyOTMgICAgICBwYWdlZmF1bHQ6
c3BmX3ZtYV9jaGFuZ2VkDQo+Pj4gICAgICAgICAgICAgICAgICAwICAgICAgcGFnZWZhdWx0OnNw
Zl92bWFfbm9hbm9uDQo+Pj4gICAgICAgICAgICAgICAgMjAwICAgICAgcGFnZWZhdWx0OnNwZl92
bWFfbm90c3VwDQo+Pj4gICAgICAgICAgICAgICAgICAwICAgICAgcGFnZWZhdWx0OnNwZl92bWFf
YWNjZXNzDQo+Pj4gICAgICAgICAgICAgICAgICAwICAgICAgcGFnZWZhdWx0OnNwZl9wbWRfY2hh
bmdlZA0KPj4+DQo+Pj4gSW4gZWJpenp5J3MgY2FzZSBtb3N0IG9mIHRoZSBwYWdlIGZhdWx0IHdl
cmUgaGFuZGxlZCBpbiBhIA0KPj4+IHNwZWN1bGF0aXZlIHdheSwgbGVhZGluZyB0aGUgZWJpenp5
IHBlcmZvcm1hbmNlIGJvb3N0Lg0KPj4+DQo+Pj4gLS0tLS0tLS0tLS0tLS0tLS0tDQo+Pj4gQ2hh
bmdlcyBzaW5jZSB2MTAgKGh0dHBzOi8vbGttbC5vcmcvbGttbC8yMDE4LzQvMTcvNTcyKToNCj4+
PiAgLSBBY2NvdW50ZWQgZm9yIGFsbCByZXZpZXcgZmVlZGJhY2tzIGZyb20gUHVuaXQgQWdyYXdh
bCwgR2FuZXNoIE1haGVuZHJhbg0KPj4+ICAgIGFuZCBNaW5jaGFuIEtpbSwgaG9wZWZ1bGx5Lg0K
Pj4+ICAtIFJlbW92ZSB1bm5lZWRlZCBjaGVjayBvbiBDT05GSUdfU1BFQ1VMQVRJVkVfUEFHRV9G
QVVMVCBpbg0KPj4+ICAgIF9fZG9fcGFnZV9mYXVsdCgpLg0KPj4+ICAtIExvb3AgaW4gcHRlX3Nw
aW5sb2NrKCkgYW5kIHB0ZV9tYXBfbG9jaygpIHdoZW4gcHRlIHRyeSBsb2NrIGZhaWxzDQo+Pj4g
ICAgaW5zdGVhZA0KPj4+ICAgIG9mIGFib3J0aW5nIHRoZSBzcGVjdWxhdGl2ZSBwYWdlIGZhdWx0
IGhhbmRsaW5nLiBEcm9wcGluZyB0aGUgbm93IA0KPj4+IHVzZWxlc3MNCj4+PiAgICB0cmFjZSBl
dmVudCBwYWdlZmF1bHQ6c3BmX3B0ZV9sb2NrLg0KPj4+ICAtIE5vIG1vcmUgdHJ5IHRvIHJldXNl
IHRoZSBmZXRjaGVkIFZNQSBkdXJpbmcgdGhlIHNwZWN1bGF0aXZlIHBhZ2UgZmF1bHQNCj4+PiAg
ICBoYW5kbGluZyB3aGVuIHJldHJ5aW5nIGlzIG5lZWRlZC4gVGhpcyBhZGRzIGEgbG90IG9mIGNv
bXBsZXhpdHkgYW5kDQo+Pj4gICAgYWRkaXRpb25hbCB0ZXN0cyBkb25lIGRpZG4ndCBzaG93IGEg
c2lnbmlmaWNhbnQgcGVyZm9ybWFuY2UgaW1wcm92ZW1lbnQuDQo+Pj4gIC0gQ29udmVydCBJU19F
TkFCTEVEKENPTkZJR19OVU1BKSBiYWNrIHRvICNpZmRlZiBkdWUgdG8gYnVpbGQgZXJyb3IuDQo+
Pj4NCj4+PiBbMV0gDQo+Pj4gaHR0cDovL2xpbnV4LWtlcm5lbC4yOTM1Lm43Lm5hYmJsZS5jb20v
UkZDLVBBVENILTAtNi1Bbm90aGVyLWdvLWF0LXMNCj4+PiBwZWN1bGF0aXZlLXBhZ2UtZmF1bHRz
LXR0OTY1NjQyLmh0bWwjbm9uZQ0KPj4+IFsyXSBodHRwczovL3BhdGNod29yay5rZXJuZWwub3Jn
L3BhdGNoLzk5OTk2ODcvDQo+Pj4NCj4+Pg0KPj4+IExhdXJlbnQgRHVmb3VyICgyMCk6DQo+Pj4g
ICBtbTogaW50cm9kdWNlIENPTkZJR19TUEVDVUxBVElWRV9QQUdFX0ZBVUxUDQo+Pj4gICB4ODYv
bW06IGRlZmluZSBBUkNIX1NVUFBPUlRTX1NQRUNVTEFUSVZFX1BBR0VfRkFVTFQNCj4+PiAgIHBv
d2VycGMvbW06IHNldCBBUkNIX1NVUFBPUlRTX1NQRUNVTEFUSVZFX1BBR0VfRkFVTFQNCj4+PiAg
IG1tOiBpbnRyb2R1Y2UgcHRlX3NwaW5sb2NrIGZvciBGQVVMVF9GTEFHX1NQRUNVTEFUSVZFDQo+
Pj4gICBtbTogbWFrZSBwdGVfdW5tYXBfc2FtZSBjb21wYXRpYmxlIHdpdGggU1BGDQo+Pj4gICBt
bTogaW50cm9kdWNlIElOSVRfVk1BKCkNCj4+PiAgIG1tOiBwcm90ZWN0IFZNQSBtb2RpZmljYXRp
b25zIHVzaW5nIFZNQSBzZXF1ZW5jZSBjb3VudA0KPj4+ICAgbW06IHByb3RlY3QgbXJlbWFwKCkg
YWdhaW5zdCBTUEYgaGFubGRlcg0KPj4+ICAgbW06IHByb3RlY3QgU1BGIGhhbmRsZXIgYWdhaW5z
dCBhbm9uX3ZtYSBjaGFuZ2VzDQo+Pj4gICBtbTogY2FjaGUgc29tZSBWTUEgZmllbGRzIGluIHRo
ZSB2bV9mYXVsdCBzdHJ1Y3R1cmUNCj4+PiAgIG1tL21pZ3JhdGU6IFBhc3Mgdm1fZmF1bHQgcG9p
bnRlciB0byBtaWdyYXRlX21pc3BsYWNlZF9wYWdlKCkNCj4+PiAgIG1tOiBpbnRyb2R1Y2UgX19s
cnVfY2FjaGVfYWRkX2FjdGl2ZV9vcl91bmV2aWN0YWJsZQ0KPj4+ICAgbW06IGludHJvZHVjZSBf
X3ZtX25vcm1hbF9wYWdlKCkNCj4+PiAgIG1tOiBpbnRyb2R1Y2UgX19wYWdlX2FkZF9uZXdfYW5v
bl9ybWFwKCkNCj4+PiAgIG1tOiBwcm90ZWN0IG1tX3JiIHRyZWUgd2l0aCBhIHJ3bG9jaw0KPj4+
ICAgbW06IGFkZGluZyBzcGVjdWxhdGl2ZSBwYWdlIGZhdWx0IGZhaWx1cmUgdHJhY2UgZXZlbnRz
DQo+Pj4gICBwZXJmOiBhZGQgYSBzcGVjdWxhdGl2ZSBwYWdlIGZhdWx0IHN3IGV2ZW50DQo+Pj4g
ICBwZXJmIHRvb2xzOiBhZGQgc3VwcG9ydCBmb3IgdGhlIFNQRiBwZXJmIGV2ZW50DQo+Pj4gICBt
bTogYWRkIHNwZWN1bGF0aXZlIHBhZ2UgZmF1bHQgdm1zdGF0cw0KPj4+ICAgcG93ZXJwYy9tbTog
YWRkIHNwZWN1bGF0aXZlIHBhZ2UgZmF1bHQNCj4+Pg0KPj4+IE1haGVuZHJhbiBHYW5lc2ggKDIp
Og0KPj4+ICAgYXJtNjQvbW06IGRlZmluZSBBUkNIX1NVUFBPUlRTX1NQRUNVTEFUSVZFX1BBR0Vf
RkFVTFQNCj4+PiAgIGFybTY0L21tOiBhZGQgc3BlY3VsYXRpdmUgcGFnZSBmYXVsdA0KPj4+DQo+
Pj4gUGV0ZXIgWmlqbHN0cmEgKDQpOg0KPj4+ICAgbW06IHByZXBhcmUgZm9yIEZBVUxUX0ZMQUdf
U1BFQ1VMQVRJVkUNCj4+PiAgIG1tOiBWTUEgc2VxdWVuY2UgY291bnQNCj4+PiAgIG1tOiBwcm92
aWRlIHNwZWN1bGF0aXZlIGZhdWx0IGluZnJhc3RydWN0dXJlDQo+Pj4gICB4ODYvbW06IGFkZCBz
cGVjdWxhdGl2ZSBwYWdlZmF1bHQgaGFuZGxpbmcNCj4+Pg0KPj4+ICBhcmNoL2FybTY0L0tjb25m
aWcgICAgICAgICAgICAgICAgICAgIHwgICAxICsNCj4+PiAgYXJjaC9hcm02NC9tbS9mYXVsdC5j
ICAgICAgICAgICAgICAgICB8ICAxMiArDQo+Pj4gIGFyY2gvcG93ZXJwYy9LY29uZmlnICAgICAg
ICAgICAgICAgICAgfCAgIDEgKw0KPj4+ICBhcmNoL3Bvd2VycGMvbW0vZmF1bHQuYyAgICAgICAg
ICAgICAgIHwgIDE2ICsNCj4+PiAgYXJjaC94ODYvS2NvbmZpZyAgICAgICAgICAgICAgICAgICAg
ICB8ICAgMSArDQo+Pj4gIGFyY2gveDg2L21tL2ZhdWx0LmMgICAgICAgICAgICAgICAgICAgfCAg
MjcgKy0NCj4+PiAgZnMvZXhlYy5jICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgMiAr
LQ0KPj4+ICBmcy9wcm9jL3Rhc2tfbW11LmMgICAgICAgICAgICAgICAgICAgIHwgICA1ICstDQo+
Pj4gIGZzL3VzZXJmYXVsdGZkLmMgICAgICAgICAgICAgICAgICAgICAgfCAgMTcgKy0NCj4+PiAg
aW5jbHVkZS9saW51eC9odWdldGxiX2lubGluZS5oICAgICAgICB8ICAgMiArLQ0KPj4+ICBpbmNs
dWRlL2xpbnV4L21pZ3JhdGUuaCAgICAgICAgICAgICAgIHwgICA0ICstDQo+Pj4gIGluY2x1ZGUv
bGludXgvbW0uaCAgICAgICAgICAgICAgICAgICAgfCAxMzYgKysrKysrKy0NCj4+PiAgaW5jbHVk
ZS9saW51eC9tbV90eXBlcy5oICAgICAgICAgICAgICB8ICAgNyArDQo+Pj4gIGluY2x1ZGUvbGlu
dXgvcGFnZW1hcC5oICAgICAgICAgICAgICAgfCAgIDQgKy0NCj4+PiAgaW5jbHVkZS9saW51eC9y
bWFwLmggICAgICAgICAgICAgICAgICB8ICAxMiArLQ0KPj4+ICBpbmNsdWRlL2xpbnV4L3N3YXAu
aCAgICAgICAgICAgICAgICAgIHwgIDEwICstDQo+Pj4gIGluY2x1ZGUvbGludXgvdm1fZXZlbnRf
aXRlbS5oICAgICAgICAgfCAgIDMgKw0KPj4+ICBpbmNsdWRlL3RyYWNlL2V2ZW50cy9wYWdlZmF1
bHQuaCAgICAgIHwgIDgwICsrKysrDQo+Pj4gIGluY2x1ZGUvdWFwaS9saW51eC9wZXJmX2V2ZW50
LmggICAgICAgfCAgIDEgKw0KPj4+ICBrZXJuZWwvZm9yay5jICAgICAgICAgICAgICAgICAgICAg
ICAgIHwgICA1ICstDQo+Pj4gIG1tL0tjb25maWcgICAgICAgICAgICAgICAgICAgICAgICAgICAg
fCAgMjIgKysNCj4+PiAgbW0vaHVnZV9tZW1vcnkuYyAgICAgICAgICAgICAgICAgICAgICB8ICAg
NiArLQ0KPj4+ICBtbS9odWdldGxiLmMgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAyICsN
Cj4+PiAgbW0vaW5pdC1tbS5jICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgMyArDQo+Pj4g
IG1tL2ludGVybmFsLmggICAgICAgICAgICAgICAgICAgICAgICAgfCAgMjAgKysNCj4+PiAgbW0v
a2h1Z2VwYWdlZC5jICAgICAgICAgICAgICAgICAgICAgICB8ICAgNSArDQo+Pj4gIG1tL21hZHZp
c2UuYyAgICAgICAgICAgICAgICAgICAgICAgICAgfCAgIDYgKy0NCj4+PiAgbW0vbWVtb3J5LmMg
ICAgICAgICAgICAgICAgICAgICAgICAgICB8IDYxMiArKysrKysrKysrKysrKysrKysrKysrKysr
KysrKy0tLS0tDQo+Pj4gIG1tL21lbXBvbGljeS5jICAgICAgICAgICAgICAgICAgICAgICAgfCAg
NTEgKystDQo+Pj4gIG1tL21pZ3JhdGUuYyAgICAgICAgICAgICAgICAgICAgICAgICAgfCAgIDYg
Ky0NCj4+PiAgbW0vbWxvY2suYyAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAxMyArLQ0K
Pj4+ICBtbS9tbWFwLmMgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgMjI5ICsrKysrKysr
KystLS0NCj4+PiAgbW0vbXByb3RlY3QuYyAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgNCAr
LQ0KPj4+ICBtbS9tcmVtYXAuYyAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgIDEzICsNCj4+
PiAgbW0vbm9tbXUuYyAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgMiArLQ0KPj4+ICBt
bS9ybWFwLmMgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICA1ICstDQo+Pj4gIG1tL3N3
YXAuYyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfCAgIDYgKy0NCj4+PiAgbW0vc3dhcF9z
dGF0ZS5jICAgICAgICAgICAgICAgICAgICAgICB8ICAgOCArLQ0KPj4+ICBtbS92bXN0YXQuYyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHwgICA1ICstDQo+Pj4gIHRvb2xzL2luY2x1ZGUvdWFw
aS9saW51eC9wZXJmX2V2ZW50LmggfCAgIDEgKw0KPj4+ICB0b29scy9wZXJmL3V0aWwvZXZzZWwu
YyAgICAgICAgICAgICAgIHwgICAxICsNCj4+PiAgdG9vbHMvcGVyZi91dGlsL3BhcnNlLWV2ZW50
cy5jICAgICAgICB8ICAgNCArDQo+Pj4gIHRvb2xzL3BlcmYvdXRpbC9wYXJzZS1ldmVudHMubCAg
ICAgICAgfCAgIDEgKw0KPj4+ICB0b29scy9wZXJmL3V0aWwvcHl0aG9uLmMgICAgICAgICAgICAg
IHwgICAxICsNCj4+PiAgNDQgZmlsZXMgY2hhbmdlZCwgMTE2MSBpbnNlcnRpb25zKCspLCAyMTEg
ZGVsZXRpb25zKC0pICBjcmVhdGUgbW9kZSANCj4+PiAxMDA2NDQgaW5jbHVkZS90cmFjZS9ldmVu
dHMvcGFnZWZhdWx0LmgNCj4+Pg0KPj4+IC0tDQo+Pj4gMi43LjQNCj4+Pg0KPj4+DQo+Pg0KPiAN
Cg0K
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
2018-05-28 8:54 ` Laurent Dufour
@ 2018-06-11 7:49 ` Song, HaiyanX
2018-06-11 7:49 ` Song, HaiyanX
1 sibling, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-06-11 7:49 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Laurent,
Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux Kernel Performance)
on an Intel 4-socket Skylake platform. This time only the cases which had been run and shown regressions on
the v9 patch series were tested.
The regression result is sorted by the metric will-it-scale.per_thread_ops.
branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
commit id:
head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
Benchmark: will-it-scale
Download link: https://github.com/antonblanchard/will-it-scale/tree/master
Metrics:
will-it-scale.per_process_ops=processes/nr_cpu
will-it-scale.per_thread_ops=threads/nr_cpu
test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
THP: enable / disable
nr_task:100%
1. Regressions:
a). Enable THP
testcase base change head metric
page_fault3/enable THP 10519 -20.5% 836 will-it-scale.per_thread_ops
page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
b). Disable THP
page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
Notes: for the above test result values, higher is better.
2. Improvements: no improvement was found based on the selected test cases.
Best regards
Haiyan Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Monday, May 28, 2018 4:54 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.
Do you plan to give this V11 a run ?
>
>
> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regression and improvements is found by LKP-tools(linux kernel performance) on V9 patch series
>>> tested on Intel 4s Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting this benchmark results, but you mentioned the "V9 patch
>> series" while responding to the v11 header series...
>> Were these tests done on v9 or v11 ?
>>
>> Cheers,
>> Laurent.
>>
>>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>> Commit id:
>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link:
>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>> will-it-scale.per_process_ops=processes/nr_cpu
>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>> THP: enable / disable
>>> nr_task: 100%
>>>
>>> 1. Regressions:
>>> a) THP enabled:
>>> testcase base change head metric
>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>
>>> b) THP disabled:
>>> testcase base change head metric
>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>
>>> 2. Improvements:
>>> a) THP enabled:
>>> testcase base change head metric
>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>
>>> b) THP disabled:
>>> testcase base change head metric
>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>
>>> Notes: for above values in column "change", the higher value means that the related testcase result
>>> on head commit is better than that on base commit for this benchmark.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>>
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Thursday, May 17, 2018 7:06 PM
>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>
>>> This is a port to kernel 4.17 of the work done by Peter Zijlstra to handle
>>> page faults without holding the mm semaphore [1].
>>>
>>> The idea is to try to handle user space page faults without holding the
>>> mmap_sem. This should allow better concurrency for massively threaded
>>> processes, since the page fault handler will not wait for memory layout
>>> changes made by other threads to complete, assuming those changes are made
>>> in another part of the process's memory space. This type of page fault is
>>> named a speculative page fault. If the speculative page fault fails,
>>> because a concurrent change is detected or because the underlying PMD or
>>> PTE tables are not yet allocated, its processing is aborted and a classic
>>> page fault is tried instead.
>>>
>>> The speculative page fault (SPF) handler has to look up the VMA matching
>>> the fault address without holding the mmap_sem. This is done by
>>> introducing a rwlock which protects access to the mm_rb tree. Previously
>>> this was done using SRCU, but that introduced a lot of scheduling to
>>> process the VMA freeing operations, which hurt performance by 20% as
>>> reported by Kemi Wang [2]. Using a rwlock to protect access to the mm_rb
>>> tree limits the locking contention to these operations, which are
>>> expected to be O(log n). In addition, to ensure that a VMA is not freed
>>> behind our back, a reference count is added, and two services (get_vma()
>>> and put_vma()) are introduced to handle the reference count. Once a VMA
>>> is fetched from the RB tree using get_vma(), it must later be released
>>> using put_vma(). With this scheme I no longer see the overhead I
>>> previously observed with the will-it-scale benchmark.
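The get_vma()/put_vma() pairing described above is the classic reference-count pattern. A minimal userspace sketch of that idea, using C11 atomics in place of the kernel's atomic_t and with simplified, hypothetical types (this is an illustration, not the actual kernel code):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct vma_stub {
	atomic_int refcount;	/* 1 while the VMA sits in the tree */
};

/* Taken with the mm_rb rwlock held, so the VMA cannot vanish here. */
static void get_vma(struct vma_stub *vma)
{
	atomic_fetch_add(&vma->refcount, 1);
}

/* Returns true when the last reference was dropped and the VMA freed. */
static bool put_vma(struct vma_stub *vma)
{
	if (atomic_fetch_sub(&vma->refcount, 1) == 1) {
		free(vma);
		return true;
	}
	return false;
}
```

The point of the pattern is that the SPF handler can keep using the fetched VMA after the tree lock is released, and the memory is only reclaimed once every holder has called put_vma().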
>>>
>>> The VMA attributes checked during speculative page fault processing have
>>> to be protected against parallel changes. This is done by using a per-VMA
>>> sequence lock. This sequence lock allows the speculative page fault
>>> handler to quickly check for parallel changes in progress and to abort
>>> the speculative page fault in that case.
>>>
>>> Once the VMA has been found, the speculative page fault handler checks
>>> the VMA's attributes to verify whether the page fault can be handled
>>> correctly. Thus, the VMA is protected through a sequence lock which
>>> allows fast detection of concurrent VMA changes. If such a change is
>>> detected, the speculative page fault is aborted and a *classic* page
>>> fault is tried instead. VMA sequence locking is added wherever VMA
>>> attributes which are checked during the page fault are modified.
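The per-VMA sequence lock described above follows the seqlock read-side pattern: sample the counter, perform the checks, then re-sample and abort if the counter moved. A rough userspace model, with hypothetical names and C11 atomics standing in for the kernel's seqcount API:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Writers bump the counter to an odd value before modifying the VMA
 * attributes and back to even afterwards. */
static atomic_uint vma_seq;

static unsigned int vma_read_begin(void)
{
	unsigned int seq;

	/* Spin until no write is in progress (counter is even). */
	while ((seq = atomic_load(&vma_seq)) & 1)
		;
	return seq;
}

static bool vma_read_retry(unsigned int seq)
{
	/* A change happened if the counter moved since vma_read_begin(). */
	return atomic_load(&vma_seq) != seq;
}
```

In the SPF handler the equivalent of vma_read_retry() returning true is what triggers the abort and fallback to the classic page fault path.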
>>>
>>> When the PTE is fetched, the VMA is checked to see if it has been
>>> changed, so once the page table is locked the VMA is known to be valid.
>>> Any other change touching this PTE would need to lock the page table,
>>> so no parallel change is possible at this point.
>>>
>>> The locking of the PTE is done with interrupts disabled; this allows
>>> checking the PMD to ensure that there is no ongoing collapse operation.
>>> Since khugepaged first sets the PMD to pmd_none and then waits for the
>>> other CPUs to have caught the IPI interrupt, if the PMD is valid at the
>>> time the PTE is locked, we have the guarantee that the collapse
>>> operation will have to wait on the PTE lock to move forward. This allows
>>> the SPF handler to map the PTE safely. If the PMD value is different
>>> from the one recorded at the beginning of the SPF operation, the classic
>>> page fault handler is called to handle the fault while holding the
>>> mmap_sem. As the PTE lock is taken with interrupts disabled, the lock is
>>> acquired using spin_trylock() to avoid deadlock when handling a page
>>> fault while a TLB invalidation is requested by another CPU holding the
>>> PTE lock.
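The trylock-and-abort shape described above can be modeled in userspace. This is a sketch with an atomic_flag standing in for the kernel's PTE spinlock and hypothetical helper names, not the actual kernel code:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for the PTE page-table lock (ptl). */
typedef struct { atomic_flag locked; } pte_lock_t;

/* Mimics the spin_trylock() usage: with interrupts off, a blocking lock
 * could deadlock against a CPU that holds the PTE lock and is waiting
 * for this CPU to acknowledge a TLB-invalidate IPI, so the SPF path
 * only takes the lock when it is immediately available. */
static bool pte_trylock(pte_lock_t *ptl)
{
	return !atomic_flag_test_and_set_explicit(&ptl->locked,
						  memory_order_acquire);
}

static void pte_unlock(pte_lock_t *ptl)
{
	atomic_flag_clear_explicit(&ptl->locked, memory_order_release);
}
```

When pte_trylock() fails, the caller gives up the speculative path (or retries, as later versions of the series do) rather than spinning with interrupts disabled.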
>>>
>>> In pseudo code, this could be seen as:
>>> speculative_page_fault()
>>> {
>>>         vma = get_vma()
>>>         check vma sequence count
>>>         check vma's support
>>>         disable interrupt
>>>                 check pgd,p4d,...,pte
>>>                 save pmd and pte in vmf
>>>                 save vma sequence counter in vmf
>>>         enable interrupt
>>>         check vma sequence count
>>>         handle_pte_fault(vma)
>>>                 ..
>>>                 page = alloc_page()
>>>                 pte_map_lock()
>>>                         disable interrupt
>>>                                 abort if sequence counter has changed
>>>                                 abort if pmd or pte has changed
>>>                                 pte map and lock
>>>                         enable interrupt
>>>                 if abort
>>>                         free page
>>>                         abort
>>>                 ...
>>> }
>>>
>>> arch_fault_handler()
>>> {
>>>         if (speculative_page_fault(&vma))
>>>                 goto done
>>> again:
>>>         lock(mmap_sem)
>>>         vma = find_vma();
>>>         handle_pte_fault(vma);
>>>         if retry
>>>                 unlock(mmap_sem)
>>>                 goto again;
>>> done:
>>>         handle fault error
>>> }
>>>
>>> Support for THP is not done, because when checking the PMD we could be
>>> confused by an in-progress collapse operation done by khugepaged: the
>>> issue is that pmd_none() could be true either if the PMD is not yet
>>> populated or if the underlying PTEs are in the process of being
>>> collapsed. So we cannot safely allocate a PMD when pmd_none() is true.
>>>
>>> This series adds a new software performance event named
>>> 'speculative-faults' or 'spf'. It counts the number of page fault events
>>> successfully handled speculatively. When recording 'faults,spf' events,
>>> 'faults' counts the total number of page fault events while 'spf' counts
>>> only the subset of faults processed speculatively.
>>>
>>> There are some trace events introduced by this series. They allow
>>> identifying why page faults were not processed speculatively. This
>>> doesn't take into account the faults generated by a monothreaded
>>> process, which are processed directly while holding the mmap_sem. These
>>> trace events are grouped in a system named 'pagefault'; they are:
>>> - pagefault:spf_vma_changed : the VMA was changed behind our back
>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>> - pagefault:spf_pmd_changed : the upper PMD pointer was changed behind
>>> our back
>>>
>>> To record all the related events, the easiest way is to run perf with
>>> the following arguments:
>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>
>>> There is also a dedicated vmstat counter showing the number of
>>> successful page faults handled speculatively. It can be seen this way:
>>> $ grep speculative_pgfault /proc/vmstat
>>>
>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>> on x86, PowerPC and arm64.
>>>
>>> ---------------------
>>> Real Workload results
>>>
>>> As mentioned in a previous email, we did unofficial runs using a
>>> "popular in-memory multithreaded database product" on a 176-core SMT8
>>> Power system, which showed a 30% improvement in the number of
>>> transactions processed per second. This run was done on the v6 series,
>>> but the changes introduced in this new version should not impact the
>>> performance boost seen.
>>>
>>> Here are the perf data captured during 2 of these runs on top of the v8
>>> series:
>>> vanilla spf
>>> faults 89.418 101.364 +13%
>>> spf n/a 97.989
>>>
>>> With the SPF kernel, most of the page faults were processed
>>> speculatively.
>>>
>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>> it a try on an Android device. He reported that application launch time
>>> was improved on average by 6%, and for large applications (~100 threads)
>>> by 20%.
>>>
>>> Here are the launch times Ganesh measured on Android 8.0 on top of a
>>> Qcom MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>
>>> Application 4.9 4.9+spf delta
>>> com.tencent.mm 416 389 -7%
>>> com.eg.android.AlipayGphone 1135 986 -13%
>>> com.tencent.mtt 455 454 0%
>>> com.qqgame.hlddz 1497 1409 -6%
>>> com.autonavi.minimap 711 701 -1%
>>> com.tencent.tmgp.sgame 788 748 -5%
>>> com.immomo.momo 501 487 -3%
>>> com.tencent.peng 2145 2112 -2%
>>> com.smile.gifmaker 491 461 -6%
>>> com.baidu.BaiduMap 479 366 -23%
>>> com.taobao.taobao 1341 1198 -11%
>>> com.baidu.searchbox 333 314 -6%
>>> com.tencent.mobileqq 394 384 -3%
>>> com.sina.weibo 907 906 0%
>>> com.youku.phone 816 731 -11%
>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>> com.UCMobile 415 411 -1%
>>> com.tencent.tmgp.ak 1464 1431 -2%
>>> com.tencent.qqmusic 336 329 -2%
>>> com.sankuai.meituan 1661 1302 -22%
>>> com.netease.cloudmusic 1193 1200 1%
>>> air.tv.douyu.android 4257 4152 -2%
>>>
>>> ------------------
>>> Benchmarks results
>>>
>>> Base kernel is v4.17.0-rc4-mm1
>>> SPF is BASE + this series
>>>
>>> Kernbench:
>>> ----------
>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>> kernel (the kernel is built 5 times):
>>>
>>> Average Half load -j 8
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>
>>> Average Optimal load -j 16
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>
>>>
>>> During a run on the SPF, perf events were captured:
>>> Performance counter stats for '../kernbench -M':
>>> 526743764 faults
>>> 210 spf
>>> 3 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 2278 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> Very few speculative page faults were recorded, as most of the
>>> processes involved are monothreaded (it seems that on this architecture
>>> some threads were created during the kernel build process).
>>>
>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>
>>> Average Half load -j 40
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>
>>> Average Optimal load -j 80
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>
>>> During a run on the SPF, perf events were captured:
>>> Performance counter stats for '../kernbench -M':
>>> 116730856 faults
>>> 0 spf
>>> 3 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 476 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> Most of the processes involved are monothreaded, so SPF is not
>>> activated, but there is no impact on performance.
>>>
>>> Ebizzy:
>>> -------
>>> The test counts the number of records per second it can manage; the
>>> higher the better. I ran it as 'ebizzy -mTt <nrcpus>'. To get consistent
>>> results I repeated the test 100 times and report the average number of
>>> records per second.
>>>
>>> BASE SPF delta
>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>
>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>> Performance counter stats for './ebizzy -mTt 16':
>>> 1706379 faults
>>> 1674599 spf
>>> 30588 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 363 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> And the ones captured during a run on a 80 CPUs Power node:
>>> Performance counter stats for './ebizzy -mTt 80':
>>> 1874773 faults
>>> 1461153 spf
>>> 413293 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 200 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> In ebizzy's case most of the page faults were handled speculatively,
>>> leading to the ebizzy performance boost.
>>>
>>> ------------------
>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>> - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>> and Minchan Kim, hopefully.
>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>> __do_page_fault().
>>> - Loop in pte_spinlock() and pte_map_lock() when the pte trylock fails,
>>> instead of aborting the speculative page fault handling. Drop the now
>>> useless trace event pagefault:spf_pte_lock.
>>> - No longer try to reuse the fetched VMA during speculative page fault
>>> handling when retrying is needed. That added a lot of complexity, and
>>> additional tests didn't show a significant performance improvement.
>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>
>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>
>>>
>>> Laurent Dufour (20):
>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>> mm: make pte_unmap_same compatible with SPF
>>> mm: introduce INIT_VMA()
>>> mm: protect VMA modifications using VMA sequence count
>>> mm: protect mremap() against SPF hanlder
>>> mm: protect SPF handler against anon_vma changes
>>> mm: cache some VMA fields in the vm_fault structure
>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>> mm: introduce __lru_cache_add_active_or_unevictable
>>> mm: introduce __vm_normal_page()
>>> mm: introduce __page_add_new_anon_rmap()
>>> mm: protect mm_rb tree with a rwlock
>>> mm: adding speculative page fault failure trace events
>>> perf: add a speculative page fault sw event
>>> perf tools: add support for the SPF perf event
>>> mm: add speculative page fault vmstats
>>> powerpc/mm: add speculative page fault
>>>
>>> Mahendran Ganesh (2):
>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> arm64/mm: add speculative page fault
>>>
>>> Peter Zijlstra (4):
>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>> mm: VMA sequence count
>>> mm: provide speculative fault infrastructure
>>> x86/mm: add speculative pagefault handling
>>>
>>> arch/arm64/Kconfig | 1 +
>>> arch/arm64/mm/fault.c | 12 +
>>> arch/powerpc/Kconfig | 1 +
>>> arch/powerpc/mm/fault.c | 16 +
>>> arch/x86/Kconfig | 1 +
>>> arch/x86/mm/fault.c | 27 +-
>>> fs/exec.c | 2 +-
>>> fs/proc/task_mmu.c | 5 +-
>>> fs/userfaultfd.c | 17 +-
>>> include/linux/hugetlb_inline.h | 2 +-
>>> include/linux/migrate.h | 4 +-
>>> include/linux/mm.h | 136 +++++++-
>>> include/linux/mm_types.h | 7 +
>>> include/linux/pagemap.h | 4 +-
>>> include/linux/rmap.h | 12 +-
>>> include/linux/swap.h | 10 +-
>>> include/linux/vm_event_item.h | 3 +
>>> include/trace/events/pagefault.h | 80 +++++
>>> include/uapi/linux/perf_event.h | 1 +
>>> kernel/fork.c | 5 +-
>>> mm/Kconfig | 22 ++
>>> mm/huge_memory.c | 6 +-
>>> mm/hugetlb.c | 2 +
>>> mm/init-mm.c | 3 +
>>> mm/internal.h | 20 ++
>>> mm/khugepaged.c | 5 +
>>> mm/madvise.c | 6 +-
>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>> mm/mempolicy.c | 51 ++-
>>> mm/migrate.c | 6 +-
>>> mm/mlock.c | 13 +-
>>> mm/mmap.c | 229 ++++++++++---
>>> mm/mprotect.c | 4 +-
>>> mm/mremap.c | 13 +
>>> mm/nommu.c | 2 +-
>>> mm/rmap.c | 5 +-
>>> mm/swap.c | 6 +-
>>> mm/swap_state.c | 8 +-
>>> mm/vmstat.c | 5 +-
>>> tools/include/uapi/linux/perf_event.h | 1 +
>>> tools/perf/util/evsel.c | 1 +
>>> tools/perf/util/parse-events.c | 4 +
>>> tools/perf/util/parse-events.l | 1 +
>>> tools/perf/util/python.c | 1 +
>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>> create mode 100644 include/trace/events/pagefault.h
>>>
>>> --
>>> 2.7.4
>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
@ 2018-06-11 7:49 ` Song, HaiyanX
0 siblings, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-06-11 7:49 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Laurent,
Regression test for v11 patch serials have been run, some regression is fou=
nd by LKP-tools (linux kernel performance)
tested on Intel 4s skylake platform. This time only test the cases which ha=
ve been run and found regressions on
V9 patch serials.
The regression result is sorted by the metric will-it-scale.per_thread_ops.
branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
commit id:
head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
Benchmark: will-it-scale
Download link: https://github.com/antonblanchard/will-it-scale/tree/master
Metrics:
will-it-scale.per_process_ops=3Dprocesses/nr_cpu
will-it-scale.per_thread_ops=3Dthreads/nr_cpu
test box: lkp-skl-4sp1(nr_cpu=3D192,memory=3D768G)
THP: enable / disable
nr_task:100%
1. Regressions:
a). Enable THP
testcase base change head =
metric
page_fault3/enable THP 10519 -20.5% 836 will=
-it-scale.per_thread_ops
page_fault2/enalbe THP 8281 -18.8% 6728 will=
-it-scale.per_thread_ops
brk1/eanble THP 998475 -2.2% 976893 will=
-it-scale.per_process_ops
context_switch1/enable THP 223910 -1.3% 220930 will=
-it-scale.per_process_ops
context_switch1/enable THP 233722 -1.0% 231288 will=
-it-scale.per_thread_ops
b). Disable THP
page_fault3/disable THP 10856 -23.1% 8344 will=
-it-scale.per_thread_ops
page_fault2/disable THP 8147 -18.8% 6613 will=
-it-scale.per_thread_ops
brk1/disable THP 957 -7.9% 881 will=
-it-scale.per_thread_ops
context_switch1/disable THP 237006 -2.2% 231907 will=
-it-scale.per_thread_ops
brk1/disable THP 997317 -2.0% 977778 will=
-it-scale.per_process_ops
page_fault3/disable THP 467454 -1.8% 459251 will=
-it-scale.per_process_ops
context_switch1/disable THP 224431 -1.3% 221567 will=
-it-scale.per_process_ops
Notes: for the above values of test result, the higher is better.
2. Improvement: not found improvement based on the selected test cases.
Best regards
Haiyan Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laur=
ent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Monday, May 28, 2018 4:54 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kir=
ill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Mat=
thew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; =
benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Glei=
xner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.s=
enozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi=
; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan K=
im; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; l=
inux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora=
@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs=
.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.
Do you plan to give this V11 a run ?
>
>
> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regression and improvements is found by LKP-tools(linux kernel per=
formance) on V9 patch series
>>> tested on Intel 4s Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting this benchmark results, but you mentioned the "V9 p=
atch
>> series" while responding to the v11 header series...
>> Were these tests done on v9 or v11 ?
>>
>> Cheers,
>> Laurent.
>>
>>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_=
ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patc=
h series)
>>> Commit id:
>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link:
>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>> will-it-scale.per_process_ops=3Dprocesses/nr_cpu
>>> will-it-scale.per_thread_ops=3Dthreads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=3D192,memory=3D768G)
>>> THP: enable / disable
>>> nr_task: 100%
>>>
>>> 1. Regressions:
>>> a) THP enabled:
>>> testcase base change head =
metric
>>> page_fault3/ enable THP 10092 -17.5% 8323 =
will-it-scale.per_thread_ops
>>> page_fault2/ enable THP 8300 -17.2% 6869 =
will-it-scale.per_thread_ops
>>> brk1/ enable THP 957.67 -7.6% 885 =
will-it-scale.per_thread_ops
>>> page_fault3/ enable THP 172821 -5.3% 163692 =
will-it-scale.per_process_ops
>>> signal1/ enable THP 9125 -3.2% 8834 =
will-it-scale.per_process_ops
>>>
>>> b) THP disabled:
>>> testcase base change head =
metric
>>> page_fault3/ disable THP 10107 -19.1% 8180 =
will-it-scale.per_thread_ops
>>> page_fault2/ disable THP 8432 -17.8% 6931 =
will-it-scale.per_thread_ops
>>> context_switch1/ disable THP 215389 -6.8% 200776 =
will-it-scale.per_thread_ops
>>> brk1/ disable THP 939.67 -6.6% 877.33=
will-it-scale.per_thread_ops
>>> page_fault3/ disable THP 173145 -4.7% 165064 =
will-it-scale.per_process_ops
>>> signal1/ disable THP 9162 -3.9% 8802 =
will-it-scale.per_process_ops
>>>
>>> 2. Improvements:
>>> a) THP enabled:
>>> testcase base change head =
metric
>>> malloc1/ enable THP 66.33 +469.8% 383.67=
will-it-scale.per_thread_ops
>>> writeseek3/ enable THP 2531 +4.5% 2646 =
will-it-scale.per_thread_ops
>>> signal1/ enable THP 989.33 +2.8% 1016 =
will-it-scale.per_thread_ops
>>>
>>> b) THP disabled:
>>> testcase base change head =
metric
>>> malloc1/ disable THP 90.33 +417.3% 467.33=
will-it-scale.per_thread_ops
>>> read2/ disable THP 58934 +39.2% 82060 =
will-it-scale.per_thread_ops
>>> page_fault1/ disable THP 8607 +36.4% 11736 =
will-it-scale.per_thread_ops
>>> read1/ disable THP 314063 +12.7% 353934 =
will-it-scale.per_thread_ops
>>> writeseek3/ disable THP 2452 +12.5% 2759 =
will-it-scale.per_thread_ops
>>> signal1/ disable THP 971.33 +5.5% 1024 =
will-it-scale.per_thread_ops
>>>
>>> Notes: for above values in column "change", the higher value means that=
the related testcase result
>>> on head commit is better than that on base commit for this benchmark.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>>
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of =
Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Thursday, May 17, 2018 7:06 PM
>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org;=
kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz;=
Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.c=
om; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas =
Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; serg=
ey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, =
Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minch=
an Kim; Punit Agrawal; vinayak menon; Yang Shi
>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.=
ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.c=
om; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>
>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to han=
dle
>>> page fault without holding the mm semaphore [1].
>>>
>>> The idea is to try to handle user space page faults without holding the
>>> mmap_sem. This should allow better concurrency for massively threaded
>>> process since the page fault handler will not wait for other threads me=
mory
>>> layout change to be done, assuming that this change is done in another =
part
>>> of the process's memory space. This type page fault is named speculativ=
e
>>> page fault. If the speculative page fault fails because of a concurrenc=
y is
>>> detected or because underlying PMD or PTE tables are not yet allocating=
, it
>>> is failing its processing and a classic page fault is then tried.
>>>
>>> The speculative page fault (SPF) has to look for the VMA matching the f=
ault
>>> address without holding the mmap_sem, this is done by introducing a rwl=
ock
>>> which protects the access to the mm_rb tree. Previously this was done u=
sing
>>> SRCU but it was introducing a lot of scheduling to process the VMA's
>>> freeing operation which was hitting the performance by 20% as reported =
by
>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree is
>>> limiting the locking contention to these operations which are expected =
to
>>> be in a O(log n) order. In addition to ensure that the VMA is not freed=
in
>>> our back a reference count is added and 2 services (get_vma() and
>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>> fetched from the RB tree using get_vma(), it must be later freed using
>>> put_vma(). I can't see anymore the overhead I got while will-it-scale
>>> benchmark anymore.
>>>
>>> The VMA's attributes checked during the speculative page fault
>>> processing have to be protected against parallel changes. This is done
>>> by using a per-VMA sequence lock. This sequence lock allows the
>>> speculative page fault handler to quickly check for parallel changes in
>>> progress and to abort the speculative page fault in that case.
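The per-VMA sequence count check can be modelled with C11 atomics, in the spirit of the kernel's seqcount API (read_seqcount_begin()/read_seqcount_retry()). This is a simplified userspace sketch, not the series' code: an odd or moved counter means the VMA changed underneath us and the speculative path must abort.

```c
#include <stdatomic.h>

/* Writers bump the counter to odd on entry and back to even on exit;
 * a speculative reader aborts if the counter was odd or has moved. */
struct vma_seq {
	atomic_uint seq;
};

unsigned int vma_read_begin(struct vma_seq *v)
{
	return atomic_load_explicit(&v->seq, memory_order_acquire);
}

int vma_read_retry(struct vma_seq *v, unsigned int start)
{
	/* Odd start means a write was in flight; a changed value means the
	 * VMA was modified in between.  Either way the SPF path aborts. */
	return (start & 1) ||
	       atomic_load_explicit(&v->seq, memory_order_acquire) != start;
}

void vma_write_begin(struct vma_seq *v)
{
	atomic_fetch_add_explicit(&v->seq, 1, memory_order_acq_rel); /* odd */
}

void vma_write_end(struct vma_seq *v)
{
	atomic_fetch_add_explicit(&v->seq, 1, memory_order_release); /* even */
}
```

The cheap read side is what lets the fault handler validate the VMA both before and after walking the page tables, as the pseudo code below shows.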
>>>
>>> Once the VMA has been found, the speculative page fault handler checks
>>> the VMA's attributes to verify whether the page fault can be handled
>>> this way. The VMA is protected through a sequence lock which allows fast
>>> detection of concurrent VMA changes. If such a change is detected, the
>>> speculative page fault is aborted and a *classic* page fault is tried
>>> instead. VMA sequence locking is added wherever the VMA attributes which
>>> are checked during the page fault are modified.
>>>
>>> When the PTE is fetched, the VMA is checked to see if it has been
>>> changed, so once the page table is locked the VMA is known to be valid.
>>> Any other change touching this PTE will need to take the page table
>>> lock, so no parallel change is possible at this time.
>>>
>>> The PTE is locked with interrupts disabled; this allows checking the
>>> PMD to ensure that there is no ongoing collapsing operation. Since
>>> khugepaged first sets the PMD to pmd_none and then waits for the other
>>> CPUs to have caught the IPI, if the PMD is valid at the time the PTE is
>>> locked, we have the guarantee that the collapsing operation will have to
>>> wait on the PTE lock to move forward. This allows the SPF handler to map
>>> the PTE safely. If the PMD value differs from the one recorded at the
>>> beginning of the SPF operation, the classic page fault handler is called
>>> to handle the fault while holding the mmap_sem. As the PTE is locked
>>> with interrupts disabled, the lock is taken using spin_trylock() to
>>> avoid a deadlock when handling a page fault while a TLB invalidation is
>>> requested by another CPU holding the PTE lock.
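The trylock rule can be modelled in userspace with an atomic flag standing in for the PTE spinlock; interrupt disabling and the pmd/sequence re-checks are elided. This is a sketch of the fallback pattern only, not the series' pte_map_lock() implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag pte_lock = ATOMIC_FLAG_INIT;

/* Try-acquire: on contention give up immediately so the caller can fall
 * back to the classic, mmap_sem-holding fault path instead of spinning
 * with interrupts disabled (which could deadlock against a CPU that
 * holds the lock while waiting for our IPI acknowledgement). */
bool pte_map_trylock(void)
{
	return !atomic_flag_test_and_set_explicit(&pte_lock,
						  memory_order_acquire);
}

void pte_unlock(void)
{
	atomic_flag_clear_explicit(&pte_lock, memory_order_release);
}
```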
>>>
>>> In pseudo code, this could be seen as:
>>>
>>> speculative_page_fault()
>>> {
>>>         vma = get_vma()
>>>         check vma sequence count
>>>         check vma's support
>>>         disable interrupt
>>>                 check pgd,p4d,...,pte
>>>                 save pmd and pte in vmf
>>>                 save vma sequence counter in vmf
>>>         enable interrupt
>>>         check vma sequence count
>>>         handle_pte_fault(vma)
>>>                 ..
>>>                 page = alloc_page()
>>>                 pte_map_lock()
>>>                         disable interrupt
>>>                                 abort if sequence counter has changed
>>>                                 abort if pmd or pte has changed
>>>                                 pte map and lock
>>>                         enable interrupt
>>>                 if abort
>>>                         free page
>>>                         abort
>>>         ...
>>> }
>>>
>>> arch_fault_handler()
>>> {
>>>         if (speculative_page_fault(&vma))
>>>                 goto done
>>> again:
>>>         lock(mmap_sem)
>>>         vma = find_vma();
>>>         handle_pte_fault(vma);
>>>         if retry
>>>                 unlock(mmap_sem)
>>>                 goto again;
>>> done:
>>>         handle fault error
>>> }
>>>
>>> Support for THP is not done because, when checking the PMD, we can be
>>> confused by an in-progress collapsing operation done by khugepaged. The
>>> issue is that pmd_none() could be true either because the PMD is not yet
>>> populated or because the underlying PTEs are in the process of being
>>> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.
>>>
>>> This series adds a new software performance event named
>>> 'speculative-faults' or 'spf'. It counts the number of page faults
>>> successfully handled speculatively. When recording 'faults,spf' events,
>>> 'faults' counts the total number of page fault events while 'spf' counts
>>> only the part of the faults processed speculatively.
>>>
>>> This series also introduces some trace events. They allow identifying
>>> why a page fault was not processed speculatively. This does not take
>>> into account the faults generated by a single-threaded process, which
>>> are processed directly while holding the mmap_sem. These trace events
>>> are grouped in a system named 'pagefault'; they are:
>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>   our back
>>>
>>> To record all the related events, the easiest way is to run perf with
>>> the following arguments:
>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>
>>> There is also a dedicated vmstat counter showing the number of
>>> successful page faults handled speculatively. It can be seen this way:
>>> $ grep speculative_pgfault /proc/vmstat
>>>
>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is
>>> functional on x86, PowerPC and arm64.
>>>
>>> ---------------------
>>> Real Workload results
>>>
>>> As mentioned in a previous email, we did unofficial runs using a
>>> "popular in-memory multithreaded database product" on a 176-core SMT8
>>> Power system which showed a 30% improvement in the number of
>>> transactions processed per second. This run was done on the v6 series,
>>> but the changes introduced in this new version should not impact the
>>> performance boost seen.
>>>
>>> Here are the perf data captured during 2 of these runs on top of the v8
>>> series:
>>> vanilla spf
>>> faults 89.418 101.364 +13%
>>> spf n/a 97.989
>>>
>>> With the SPF kernel, most of the page faults were processed in a
>>> speculative way.
>>>
>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>> it a try on an Android device. He reported that the application launch
>>> time was improved on average by 6%, and for large applications (~100
>>> threads) by 20%.
>>>
>>> Here are the launch times Ganesh measured on Android 8.0 on top of a
>>> Qcom MSM845 (8 cores) with 6GB of memory (lower is better):
>>>
>>> Application 4.9 4.9+spf delta
>>> com.tencent.mm 416 389 -7%
>>> com.eg.android.AlipayGphone 1135 986 -13%
>>> com.tencent.mtt 455 454 0%
>>> com.qqgame.hlddz 1497 1409 -6%
>>> com.autonavi.minimap 711 701 -1%
>>> com.tencent.tmgp.sgame 788 748 -5%
>>> com.immomo.momo 501 487 -3%
>>> com.tencent.peng 2145 2112 -2%
>>> com.smile.gifmaker 491 461 -6%
>>> com.baidu.BaiduMap 479 366 -23%
>>> com.taobao.taobao 1341 1198 -11%
>>> com.baidu.searchbox 333 314 -6%
>>> com.tencent.mobileqq 394 384 -3%
>>> com.sina.weibo 907 906 0%
>>> com.youku.phone 816 731 -11%
>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>> com.UCMobile 415 411 -1%
>>> com.tencent.tmgp.ak 1464 1431 -2%
>>> com.tencent.qqmusic 336 329 -2%
>>> com.sankuai.meituan 1661 1302 -22%
>>> com.netease.cloudmusic 1193 1200 1%
>>> air.tv.douyu.android 4257 4152 -2%
>>>
>>> ------------------
>>> Benchmarks results
>>>
>>> Base kernel is v4.17.0-rc4-mm1
>>> SPF is BASE + this series
>>>
>>> Kernbench:
>>> ----------
>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>> kernel (the kernel is built 5 times):
>>>
>>> Average Half load -j 8
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>
>>> Average Optimal load -j 16
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>
>>>
>>> During a run on the SPF, perf events were captured:
>>> Performance counter stats for '../kernbench -M':
>>> 526743764 faults
>>> 210 spf
>>> 3 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 2278 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> Very few speculative page faults were recorded, as most of the
>>> processes involved are single-threaded (it seems that on this
>>> architecture some threads were created during the kernel build process).
>>>
>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>
>>> Average Half load -j 40
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>
>>> Average Optimal load -j 80
>>> Run (std deviation)
>>> BASE SPF
>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>
>>> During a run on the SPF, perf events were captured:
>>> Performance counter stats for '../kernbench -M':
>>> 116730856 faults
>>> 0 spf
>>> 3 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 476 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> Most of the processes involved are single-threaded, so SPF is not
>>> activated, but there is no impact on the performance.
>>>
>>> Ebizzy:
>>> -------
>>> The test counts the number of records per second it can manage; higher
>>> is better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
>>> consistent results I repeated the test 100 times and measured the
>>> average, i.e. the number of records processed per second.
>>>
>>> BASE SPF delta
>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>
>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>> Performance counter stats for './ebizzy -mTt 16':
>>> 1706379 faults
>>> 1674599 spf
>>> 30588 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 363 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> And the ones captured during a run on a 80 CPUs Power node:
>>> Performance counter stats for './ebizzy -mTt 80':
>>> 1874773 faults
>>> 1461153 spf
>>> 413293 pagefault:spf_vma_changed
>>> 0 pagefault:spf_vma_noanon
>>> 200 pagefault:spf_vma_notsup
>>> 0 pagefault:spf_vma_access
>>> 0 pagefault:spf_pmd_changed
>>>
>>> In ebizzy's case most of the page faults were handled in a speculative
>>> way, leading to the ebizzy performance boost.
>>>
>>> ------------------
>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>  - Accounted for all review feedback from Punit Agrawal, Ganesh
>>>    Mahendran and Minchan Kim, hopefully.
>>>  - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>    __do_page_fault().
>>>  - Loop in pte_spinlock() and pte_map_lock() when the pte try lock
>>>    fails instead of aborting the speculative page fault handling,
>>>    dropping the now useless trace event pagefault:spf_pte_lock.
>>>  - No longer try to reuse the fetched VMA during the speculative page
>>>    fault handling when retrying is needed. This added a lot of
>>>    complexity and additional tests didn't show a significant
>>>    performance improvement.
>>>  - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build
>>>    error.
>>>
>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>
>>>
>>> Laurent Dufour (20):
>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>> mm: make pte_unmap_same compatible with SPF
>>> mm: introduce INIT_VMA()
>>> mm: protect VMA modifications using VMA sequence count
>>> mm: protect mremap() against SPF hanlder
>>> mm: protect SPF handler against anon_vma changes
>>> mm: cache some VMA fields in the vm_fault structure
>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>> mm: introduce __lru_cache_add_active_or_unevictable
>>> mm: introduce __vm_normal_page()
>>> mm: introduce __page_add_new_anon_rmap()
>>> mm: protect mm_rb tree with a rwlock
>>> mm: adding speculative page fault failure trace events
>>> perf: add a speculative page fault sw event
>>> perf tools: add support for the SPF perf event
>>> mm: add speculative page fault vmstats
>>> powerpc/mm: add speculative page fault
>>>
>>> Mahendran Ganesh (2):
>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>> arm64/mm: add speculative page fault
>>>
>>> Peter Zijlstra (4):
>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>> mm: VMA sequence count
>>> mm: provide speculative fault infrastructure
>>> x86/mm: add speculative pagefault handling
>>>
>>> arch/arm64/Kconfig | 1 +
>>> arch/arm64/mm/fault.c | 12 +
>>> arch/powerpc/Kconfig | 1 +
>>> arch/powerpc/mm/fault.c | 16 +
>>> arch/x86/Kconfig | 1 +
>>> arch/x86/mm/fault.c | 27 +-
>>> fs/exec.c | 2 +-
>>> fs/proc/task_mmu.c | 5 +-
>>> fs/userfaultfd.c | 17 +-
>>> include/linux/hugetlb_inline.h | 2 +-
>>> include/linux/migrate.h | 4 +-
>>> include/linux/mm.h | 136 +++++++-
>>> include/linux/mm_types.h | 7 +
>>> include/linux/pagemap.h | 4 +-
>>> include/linux/rmap.h | 12 +-
>>> include/linux/swap.h | 10 +-
>>> include/linux/vm_event_item.h | 3 +
>>> include/trace/events/pagefault.h | 80 +++++
>>> include/uapi/linux/perf_event.h | 1 +
>>> kernel/fork.c | 5 +-
>>> mm/Kconfig | 22 ++
>>> mm/huge_memory.c | 6 +-
>>> mm/hugetlb.c | 2 +
>>> mm/init-mm.c | 3 +
>>> mm/internal.h | 20 ++
>>> mm/khugepaged.c | 5 +
>>> mm/madvise.c | 6 +-
>>> mm/memory.c | 612 ++++++++++++++++++++++++++++++-----
>>> mm/mempolicy.c | 51 ++-
>>> mm/migrate.c | 6 +-
>>> mm/mlock.c | 13 +-
>>> mm/mmap.c | 229 ++++++++++---
>>> mm/mprotect.c | 4 +-
>>> mm/mremap.c | 13 +
>>> mm/nommu.c | 2 +-
>>> mm/rmap.c | 5 +-
>>> mm/swap.c | 6 +-
>>> mm/swap_state.c | 8 +-
>>> mm/vmstat.c | 5 +-
>>> tools/include/uapi/linux/perf_event.h | 1 +
>>> tools/perf/util/evsel.c | 1 +
>>> tools/perf/util/parse-events.c | 4 +
>>> tools/perf/util/parse-events.l | 1 +
>>> tools/perf/util/python.c | 1 +
>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>> create mode 100644 include/trace/events/pagefault.h
>>>
>>> --
>>> 2.7.4
>>>
>>>
>>
>
* Re: [PATCH v11 00/26] Speculative page faults
2018-06-11 7:49 ` Song, HaiyanX
@ 2018-06-11 15:15 ` Laurent Dufour
2018-06-19 9:16 ` Haiyan Song
-1 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-06-11 15:15 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Haiyan,
I don't have access to the same hardware you ran the tests on, but I gave
those tests a try on a Power8 system (2 sockets, 5 cores per socket, 8
threads per core, 80 CPUs, 32GB of memory). I ran each will-it-scale test
10 times and computed the average.
test THP enabled 4.17.0-rc4-mm1 spf delta
page_fault3_threads 2697.7 2683.5 -0.53%
page_fault2_threads 170660.6 169574.1 -0.64%
context_switch1_threads 6915269.2 6877507.3 -0.55%
context_switch1_processes 6478076.2 6529493.5 0.79%
brk1 243391.2 238527.5 -2.00%
Tests were launched with the arguments '-t 80 -s 5'; only the average report
is taken into account. Note that the page size is 64K by default on ppc64.
It would be nice if you could capture some perf data to figure out why the
page_fault2/3 are showing such a performance regression.
Thanks,
Laurent.
On 11/06/2018 09:49, Song, HaiyanX wrote:
> Hi Laurent,
>
> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux Kernel Performance)
> on an Intel 4s Skylake platform. This time we only tested the cases which had previously been run and shown regressions on
> the v9 patch series.
>
> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> commit id:
> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
> Benchmark: will-it-scale
> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>
> Metrics:
> will-it-scale.per_process_ops=processes/nr_cpu
> will-it-scale.per_thread_ops=threads/nr_cpu
> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> THP: enable / disable
> nr_task:100%
>
> 1. Regressions:
>
> a). Enable THP
> testcase base change head metric
> page_fault3/enable THP 10519 -20.5% 836 will-it-scale.per_thread_ops
> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>
> b). Disable THP
> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>
> Notes: for the above values of test result, the higher is better.
>
> 2. Improvement: not found improvement based on the selected test cases.
>
>
> Best regards
> Haiyan Song
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Monday, May 28, 2018 4:54 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 28/05/2018 10:22, Haiyan Song wrote:
>> Hi Laurent,
>>
>> Yes, these tests are done on V9 patch.
>
> Do you plan to give this V11 a run ?
>
>>
>>
>> Best regards,
>> Haiyan Song
>>
>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>
>>>> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the v9 patch series
>>>> tested on an Intel 4s Skylake platform.
>>>
>>> Hi,
>>>
>>> Thanks for reporting this benchmark results, but you mentioned the "V9 patch
>>> series" while responding to the v11 header series...
>>> Were these tests done on v9 or v11 ?
>>>
>>> Cheers,
>>> Laurent.
>>>
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>> Commit id:
>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>> Benchmark suite: will-it-scale
>>>> Download link:
>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task: 100%
>>>>
>>>> 1. Regressions:
>>>> a) THP enabled:
>>>> testcase base change head metric
>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>
>>>> b) THP disabled:
>>>> testcase base change head metric
>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>
>>>> 2. Improvements:
>>>> a) THP enabled:
>>>> testcase base change head metric
>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>
>>>> b) THP disabled:
>>>> testcase base change head metric
>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>
>>>> Notes: for above values in column "change", the higher value means that the related testcase result
>>>> on head commit is better than that on base commit for this benchmark.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>>
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>> page fault without holding the mm semaphore [1].
>>>>
>>>> The idea is to try to handle user space page faults without holding the
>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>> process since the page fault handler will not wait for other threads memory
>>>> layout change to be done, assuming that this change is done in another part
>>>> of the process's memory space. This type page fault is named speculative
>>>> page fault. If the speculative page fault fails because of a concurrency is
>>>> detected or because underlying PMD or PTE tables are not yet allocating, it
>>>> is failing its processing and a classic page fault is then tried.
>>>>
>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>> address without holding the mmap_sem, this is done by introducing a rwlock
>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>> SRCU but it was introducing a lot of scheduling to process the VMA's
>>>> freeing operation which was hitting the performance by 20% as reported by
>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree is
>>>> limiting the locking contention to these operations which are expected to
>>>> be in a O(log n) order. In addition to ensure that the VMA is not freed in
>>>> our back a reference count is added and 2 services (get_vma() and
>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>> fetched from the RB tree using get_vma(), it must be later freed using
>>>> put_vma(). I can't see anymore the overhead I got while will-it-scale
>>>> benchmark anymore.
>>>>
>>>> The VMA's attributes checked during the speculative page fault processing
>>>> have to be protected against parallel changes. This is done by using a per
>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>> handler to fast check for parallel changes in progress and to abort the
>>>> speculative page fault in that case.
>>>>
>>>> Once the VMA has been found, the speculative page fault handler would check
>>>> for the VMA's attributes to verify that the page fault has to be handled
>>>> correctly or not. Thus, the VMA is protected through a sequence lock which
>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>> is tried. VMA sequence lockings are added when VMA attributes which are
>>>> checked during the page fault are modified.
>>>>
>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>> so once the page table is locked, the VMA is valid, so any other changes
>>>> leading to touching this PTE will need to lock the page table, so no
>>>> parallel change is possible at this time.
>>>>
>>>> The locking of the PTE is done with interrupts disabled, this allows
>>>> checking for the PMD to ensure that there is not an ongoing collapsing
>>>> operation. Since khugepaged is firstly set the PMD to pmd_none and then is
>>>> waiting for the other CPU to have caught the IPI interrupt, if the pmd is
>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>> different from the one recorded at the beginning of the SPF operation, the
>>>> classic page fault handler will be called to handle the operation while
>>>> holding the mmap_sem. As the PTE lock is done with the interrupts disabled,
>>>> the lock is done using spin_trylock() to avoid dead lock when handling a
>>>> page fault while a TLB invalidate is requested by another CPU holding the
>>>> PTE.
>>>>
>>>> In pseudo code, this could be seen as:
>>>> speculative_page_fault()
>>>> {
>>>> vma = get_vma()
>>>> check vma sequence count
>>>> check vma's support
>>>> disable interrupt
>>>> check pgd,p4d,...,pte
>>>> save pmd and pte in vmf
>>>> save vma sequence counter in vmf
>>>> enable interrupt
>>>> check vma sequence count
>>>> handle_pte_fault(vma)
>>>> ..
>>>> page = alloc_page()
>>>> pte_map_lock()
>>>> disable interrupt
>>>> abort if sequence counter has changed
>>>> abort if pmd or pte has changed
>>>> pte map and lock
>>>> enable interrupt
>>>> if abort
>>>> free page
>>>> abort
>>>> ...
>>>> }
>>>>
>>>> arch_fault_handler()
>>>> {
>>>> if (speculative_page_fault(&vma))
>>>> goto done
>>>> again:
>>>> lock(mmap_sem)
>>>> vma = find_vma();
>>>> handle_pte_fault(vma);
>>>> if retry
>>>> unlock(mmap_sem)
>>>> goto again;
>>>> done:
>>>> handle fault error
>>>> }
>>>>
>>>> Support for THP is not done because when checking for the PMD, we can be
>>>> confused by an in progress collapsing operation done by khugepaged. The
>>>> issue is that pmd_none() could be true either if the PMD is not already
>>>> populated or if the underlying PTE are in the way to be collapsed. So we
>>>> cannot safely allocate a PMD if pmd_none() is true.
>>>>
>>>> This series add a new software performance event named 'speculative-faults'
>>>> or 'spf'. It counts the number of successful page fault event handled
>>>> speculatively. When recording 'faults,spf' events, the faults one is
>>>> counting the total number of page fault events while 'spf' is only counting
>>>> the part of the faults processed speculatively.
>>>>
>>>> There are some trace events introduced by this series. They allow
>>>> identifying why the page faults were not processed speculatively. This
>>>> doesn't take in account the faults generated by a monothreaded process
>>>> which directly processed while holding the mmap_sem. This trace events are
>>>> grouped in a system named 'pagefault', they are:
>>>> - pagefault:spf_vma_changed : if the VMA has been changed in our back
>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set.
>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>> - pagefault:spf_vma_access : the VMA's access right are not respected
>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed in our
>>>> back.
>>>>
>>>> To record all the related events, the easier is to run perf with the
>>>> following arguments :
>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>
>>>> There is also a dedicated vmstat counter showing the number of successful
>>>> page fault handled speculatively. I can be seen this way:
>>>> $ grep speculative_pgfault /proc/vmstat
>>>>
>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>> on x86, PowerPC and arm64.
>>>>
>>>> ---------------------
>>>> Real Workload results
>>>>
>>>> As mentioned in previous email, we did non official runs using a "popular
>>>> in memory multithreaded database product" on 176 cores SMT8 Power system
>>>> which showed a 30% improvements in the number of transaction processed per
>>>> second. This run has been done on the v6 series, but changes introduced in
>>>> this new version should not impact the performance boost seen.
>>>>
>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>> series:
>>>> vanilla spf
>>>> faults 89.418 101.364 +13%
>>>> spf n/a 97.989
>>>>
>>>> With the SPF kernel, most of the page fault were processed in a speculative
>>>> way.
>>>>
>>>> Ganesh Mahendran had backported the series on top of a 4.9 kernel and gave
>>>> it a try on an android device. He reported that the application launch time
>>>> was improved in average by 6%, and for large applications (~100 threads) by
>>>> 20%.
>>>>
>>>> Here are the launch time Ganesh mesured on Android 8.0 on top of a Qcom
>>>> MSM845 (8 cores) with 6GB (the less is better):
>>>>
>>>> Application 4.9 4.9+spf delta
>>>> com.tencent.mm 416 389 -7%
>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>> com.tencent.mtt 455 454 0%
>>>> com.qqgame.hlddz 1497 1409 -6%
>>>> com.autonavi.minimap 711 701 -1%
>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>> com.immomo.momo 501 487 -3%
>>>> com.tencent.peng 2145 2112 -2%
>>>> com.smile.gifmaker 491 461 -6%
>>>> com.baidu.BaiduMap 479 366 -23%
>>>> com.taobao.taobao 1341 1198 -11%
>>>> com.baidu.searchbox 333 314 -6%
>>>> com.tencent.mobileqq 394 384 -3%
>>>> com.sina.weibo 907 906 0%
>>>> com.youku.phone 816 731 -11%
>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>> com.UCMobile 415 411 -1%
>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>> com.tencent.qqmusic 336 329 -2%
>>>> com.sankuai.meituan 1661 1302 -22%
>>>> com.netease.cloudmusic 1193 1200 1%
>>>> air.tv.douyu.android 4257 4152 -2%
>>>>
>>>> ------------------
>>>> Benchmarks results
>>>>
>>>> Base kernel is v4.17.0-rc4-mm1
>>>> SPF is BASE + this series
>>>>
>>>> Kernbench:
>>>> ----------
>>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>>> kernel (the kernel is built 5 times):
>>>>
>>>> Average Half load -j 8
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>
>>>> Average Optimal load -j 16
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>
>>>>
>>>> During a run on the SPF kernel, perf events were captured:
>>>> Performance counter stats for '../kernbench -M':
>>>> 526743764 faults
>>>> 210 spf
>>>> 3 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 2278 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> Very few speculative page faults were recorded, as most of the processes
>>>> involved are single-threaded (it seems that on this architecture some
>>>> threads were created during the kernel build process).
>>>>
>>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>>
>>>> Average Half load -j 40
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>
>>>> Average Optimal load -j 80
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>
>>>> During a run on the SPF kernel, perf events were captured:
>>>> Performance counter stats for '../kernbench -M':
>>>> 116730856 faults
>>>> 0 spf
>>>> 3 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 476 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> Most of the processes involved are single-threaded, so SPF is not
>>>> activated, but there is no impact on the performance.
>>>>
>>>> Ebizzy:
>>>> -------
>>>> The test counts the number of records per second it can manage; the
>>>> higher the better. I ran it as 'ebizzy -mTt <nrcpus>'. To get consistent
>>>> results I repeated the test 100 times and measured the average. The
>>>> number reported is records processed per second; the higher the better.
>>>>
>>>> BASE SPF delta
>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>
>>>> Here are the performance counters read during a run on a 16-CPU x86 VM:
>>>> Performance counter stats for './ebizzy -mTt 16':
>>>> 1706379 faults
>>>> 1674599 spf
>>>> 30588 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 363 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> And the ones captured during a run on an 80-CPU Power node:
>>>> Performance counter stats for './ebizzy -mTt 80':
>>>> 1874773 faults
>>>> 1461153 spf
>>>> 413293 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 200 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> In ebizzy's case most of the page faults were handled in a speculative
>>>> way, leading to the ebizzy performance boost.
>>>>
>>>> ------------------
>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>> - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>>> and Minchan Kim, hopefully.
>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>> __do_page_fault().
>>>> - Loop in pte_spinlock() and pte_map_lock() when the pte try lock fails
>>>> instead of aborting the speculative page fault handling. Dropped the
>>>> now useless trace event pagefault:spf_pte_lock.
>>>> - No longer try to reuse the fetched VMA during the speculative page fault
>>>> handling when retrying is needed. This added a lot of complexity and
>>>> additional tests done didn't show a significant performance improvement.
>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>
>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>
>>>>
>>>> Laurent Dufour (20):
>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>> mm: make pte_unmap_same compatible with SPF
>>>> mm: introduce INIT_VMA()
>>>> mm: protect VMA modifications using VMA sequence count
>>>> mm: protect mremap() against SPF hanlder
>>>> mm: protect SPF handler against anon_vma changes
>>>> mm: cache some VMA fields in the vm_fault structure
>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>> mm: introduce __vm_normal_page()
>>>> mm: introduce __page_add_new_anon_rmap()
>>>> mm: protect mm_rb tree with a rwlock
>>>> mm: adding speculative page fault failure trace events
>>>> perf: add a speculative page fault sw event
>>>> perf tools: add support for the SPF perf event
>>>> mm: add speculative page fault vmstats
>>>> powerpc/mm: add speculative page fault
>>>>
>>>> Mahendran Ganesh (2):
>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> arm64/mm: add speculative page fault
>>>>
>>>> Peter Zijlstra (4):
>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>> mm: VMA sequence count
>>>> mm: provide speculative fault infrastructure
>>>> x86/mm: add speculative pagefault handling
>>>>
>>>> arch/arm64/Kconfig | 1 +
>>>> arch/arm64/mm/fault.c | 12 +
>>>> arch/powerpc/Kconfig | 1 +
>>>> arch/powerpc/mm/fault.c | 16 +
>>>> arch/x86/Kconfig | 1 +
>>>> arch/x86/mm/fault.c | 27 +-
>>>> fs/exec.c | 2 +-
>>>> fs/proc/task_mmu.c | 5 +-
>>>> fs/userfaultfd.c | 17 +-
>>>> include/linux/hugetlb_inline.h | 2 +-
>>>> include/linux/migrate.h | 4 +-
>>>> include/linux/mm.h | 136 +++++++-
>>>> include/linux/mm_types.h | 7 +
>>>> include/linux/pagemap.h | 4 +-
>>>> include/linux/rmap.h | 12 +-
>>>> include/linux/swap.h | 10 +-
>>>> include/linux/vm_event_item.h | 3 +
>>>> include/trace/events/pagefault.h | 80 +++++
>>>> include/uapi/linux/perf_event.h | 1 +
>>>> kernel/fork.c | 5 +-
>>>> mm/Kconfig | 22 ++
>>>> mm/huge_memory.c | 6 +-
>>>> mm/hugetlb.c | 2 +
>>>> mm/init-mm.c | 3 +
>>>> mm/internal.h | 20 ++
>>>> mm/khugepaged.c | 5 +
>>>> mm/madvise.c | 6 +-
>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>> mm/mempolicy.c | 51 ++-
>>>> mm/migrate.c | 6 +-
>>>> mm/mlock.c | 13 +-
>>>> mm/mmap.c | 229 ++++++++++---
>>>> mm/mprotect.c | 4 +-
>>>> mm/mremap.c | 13 +
>>>> mm/nommu.c | 2 +-
>>>> mm/rmap.c | 5 +-
>>>> mm/swap.c | 6 +-
>>>> mm/swap_state.c | 8 +-
>>>> mm/vmstat.c | 5 +-
>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>> tools/perf/util/evsel.c | 1 +
>>>> tools/perf/util/parse-events.c | 4 +
>>>> tools/perf/util/parse-events.l | 1 +
>>>> tools/perf/util/python.c | 1 +
>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>> create mode 100644 include/trace/events/pagefault.h
>>>>
>>>> --
>>>> 2.7.4
>>>>
>>>>
>>>
>>
>
* Re: [PATCH v11 00/26] Speculative page faults
2018-06-11 15:15 ` Laurent Dufour
@ 2018-06-19 9:16 ` Haiyan Song
0 siblings, 0 replies; 106+ messages in thread
From: Haiyan Song @ 2018-06-19 9:16 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 31691 bytes --]
On Mon, Jun 11, 2018 at 05:15:22PM +0200, Laurent Dufour wrote:
Hi Laurent,
For the perf data tested on the Intel 4-socket Skylake platform, attached is the comparison
result between the base and head commits, which includes the perf-profile comparison information.
Also attached are some perf-profile.json files captured from the test results for page_fault2 and
page_fault3, for checking the regression. Thanks.
Best regards,
Haiyan Song
> Hi Haiyan,
>
> I don't have access to the same hardware you ran the tests on, but I gave those tests a try
> on a Power8 system (2 sockets, 5 cores/socket, 8 threads/core, 80 CPUs, 32GB).
> I ran each will-it-scale test 10 times and computed the average.
>
> test THP enabled 4.17.0-rc4-mm1 spf delta
> page_fault3_threads 2697.7 2683.5 -0.53%
> page_fault2_threads 170660.6 169574.1 -0.64%
> context_switch1_threads 6915269.2 6877507.3 -0.55%
> context_switch1_processes 6478076.2 6529493.5 0.79%
> brk1 243391.2 238527.5 -2.00%
>
> Tests were launched with the arguments '-t 80 -s 5'; only the average report is
> taken into account. Note that the page size is 64K by default on ppc64.
>
> It would be nice if you could capture some perf data to figure out why the
> page_fault2/3 are showing such a performance regression.
>
> Thanks,
> Laurent.
>
> On 11/06/2018 09:49, Song, HaiyanX wrote:
> > Hi Laurent,
> >
> > Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux Kernel Performance)
> > tested on the Intel 4-socket Skylake platform. This time only the cases which had been run and showed regressions on
> > the v9 patch series were tested.
> >
> > The regression result is sorted by the metric will-it-scale.per_thread_ops.
> > branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> > commit id:
> > head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
> > base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
> > Benchmark: will-it-scale
> > Download link: https://github.com/antonblanchard/will-it-scale/tree/master
> >
> > Metrics:
> > will-it-scale.per_process_ops=processes/nr_cpu
> > will-it-scale.per_thread_ops=threads/nr_cpu
> > test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> > THP: enable / disable
> > nr_task:100%
> >
> > 1. Regressions:
> >
> > a). Enable THP
> > testcase base change head metric
> > page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
> > page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
> > brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
> > context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> > context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
> >
> > b). Disable THP
> > page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> > page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> > brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> > context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> > brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> > page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> > context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
> >
> > Notes: for the above test result values, the higher the better.
> >
> > 2. Improvements: no improvement was found in the selected test cases.
> >
> >
> > Best regards
> > Haiyan Song
> > ________________________________________
> > From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> > Sent: Monday, May 28, 2018 4:54 PM
> > To: Song, HaiyanX
> > Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> > Subject: Re: [PATCH v11 00/26] Speculative page faults
> >
> > On 28/05/2018 10:22, Haiyan Song wrote:
> >> Hi Laurent,
> >>
> >> Yes, these tests are done on V9 patch.
> >
> > Do you plan to give this V11 a run ?
> >
> >>
> >>
> >> Best regards,
> >> Haiyan Song
> >>
> >> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
> >>> On 28/05/2018 07:23, Song, HaiyanX wrote:
> >>>>
> >>>> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series
> >>>> tested on the Intel 4-socket Skylake platform.
> >>>
> >>> Hi,
> >>>
> >>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
> >>> series" while responding to the v11 series header...
> >>> Were these tests done on v9 or v11?
> >>>
> >>> Cheers,
> >>> Laurent.
> >>>
> >>>>
> >>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> >>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
> >>>> Commit id:
> >>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
> >>>> head commit: 0355322b3577eeab7669066df42c550a56801110
> >>>> Benchmark suite: will-it-scale
> >>>> Download link:
> >>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
> >>>> Metrics:
> >>>> will-it-scale.per_process_ops=processes/nr_cpu
> >>>> will-it-scale.per_thread_ops=threads/nr_cpu
> >>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> >>>> THP: enable / disable
> >>>> nr_task: 100%
> >>>>
> >>>> 1. Regressions:
> >>>> a) THP enabled:
> >>>> testcase base change head metric
> >>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
> >>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
> >>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
> >>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
> >>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
> >>>>
> >>>> b) THP disabled:
> >>>> testcase base change head metric
> >>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
> >>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
> >>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
> >>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
> >>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
> >>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
> >>>>
> >>>> 2. Improvements:
> >>>> a) THP enabled:
> >>>> testcase base change head metric
> >>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
> >>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
> >>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
> >>>>
> >>>> b) THP disabled:
> >>>> testcase base change head metric
> >>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
> >>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
> >>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
> >>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
> >>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
> >>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
> >>>>
> >>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result
> >>>> on the head commit is better than that on the base commit for this benchmark.
> >>>>
> >>>>
> >>>> Best regards
> >>>> Haiyan Song
> >>>>
> >>>> ________________________________________
> >>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> >>>> Sent: Thursday, May 17, 2018 7:06 PM
> >>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
> >>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> >>>> Subject: [PATCH v11 00/26] Speculative page faults
> >>>>
> >>>> This is a port to kernel 4.17 of the work done by Peter Zijlstra to handle
> >>>> page faults without holding the mm semaphore [1].
> >>>>
> >>>> The idea is to try to handle user space page faults without holding the
> >>>> mmap_sem. This should allow better concurrency for massively threaded
> >>>> processes, since the page fault handler will not wait for other threads'
> >>>> memory layout changes to complete, assuming those changes are done in
> >>>> another part of the process's memory space. This type of page fault is
> >>>> named a speculative page fault. If the speculative page fault fails because
> >>>> concurrency is detected or because the underlying PMD or PTE tables are
> >>>> not yet allocated, its processing is aborted and a classic page fault is
> >>>> then tried.
> >>>>
> >>>> The speculative page fault (SPF) handler has to look for the VMA matching
> >>>> the fault address without holding the mmap_sem; this is done by introducing
> >>>> a rwlock which protects the access to the mm_rb tree. Previously this was
> >>>> done using SRCU, but it was introducing a lot of scheduling to process the
> >>>> VMA freeing operations, which was hurting performance by 20% as reported by
> >>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
> >>>> the locking contention to these operations, which are expected to be
> >>>> O(log n). In addition, to ensure that the VMA is not freed behind our
> >>>> back, a reference count is added and 2 services (get_vma() and
> >>>> put_vma()) are introduced to handle the reference count. Once a VMA is
> >>>> fetched from the RB tree using get_vma(), it must be later freed using
> >>>> put_vma(). With this scheme I can no longer see the overhead I previously
> >>>> got with the will-it-scale benchmark.
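The get_vma()/put_vma() lifetime rule described above can be sketched as a small user-space model. This is illustrative only, assuming the semantics stated in the paragraph (the tree holds one reference, a speculative reader takes another, the VMA is freed when the last reference is dropped); it is not the kernel implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative user-space model of the get_vma()/put_vma() reference
 * counting described above; names mirror the series, the implementation
 * here only demonstrates the lifetime rule. */
struct vma {
	atomic_int refcount;		/* starts at 1: reference held by the tree */
	unsigned long vm_start, vm_end;
};

static struct vma *vma_alloc(unsigned long start, unsigned long end)
{
	struct vma *v = malloc(sizeof(*v));
	atomic_init(&v->refcount, 1);
	v->vm_start = start;
	v->vm_end = end;
	return v;
}

/* Called while the mm_rb rwlock is held, so the VMA cannot vanish here. */
static struct vma *get_vma(struct vma *v)
{
	atomic_fetch_add(&v->refcount, 1);
	return v;
}

/* Drops one reference; returns 1 when the last reference is dropped
 * and the VMA is actually freed, 0 otherwise. */
static int put_vma(struct vma *v)
{
	if (atomic_fetch_sub(&v->refcount, 1) == 1) {
		free(v);
		return 1;
	}
	return 0;
}
```

The point is that the speculative handler can keep using the VMA even if a concurrent unmap removes it from the tree: the removal only drops the tree's reference, and the memory survives until the handler's put_vma().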
> >>>>
> >>>> The VMA's attributes checked during the speculative page fault processing
> >>>> have to be protected against parallel changes. This is done by using a per
> >>>> VMA sequence lock. This sequence lock allows the speculative page fault
> >>>> handler to fast check for parallel changes in progress and to abort the
> >>>> speculative page fault in that case.
> >>>>
> >>>> Once the VMA has been found, the speculative page fault handler checks
> >>>> the VMA's attributes to verify whether the page fault can be handled
> >>>> correctly or not. Thus, the VMA is protected through a sequence lock which
> >>>> allows fast detection of concurrent VMA changes. If such a change is
> >>>> detected, the speculative page fault is aborted and a *classic* page fault
> >>>> is tried instead. VMA sequence locking is added where VMA attributes which
> >>>> are checked during the page fault are modified.
> >>>>
> >>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
> >>>> so once the page table is locked, the VMA is known to be valid. Any other
> >>>> change leading to touching this PTE will need to lock the page table, so
> >>>> no parallel change is possible at this time.
> >>>>
> >>>> The locking of the PTE is done with interrupts disabled; this allows
> >>>> checking the PMD to ensure that there is no ongoing collapsing
> >>>> operation. Since khugepaged first sets the PMD to pmd_none and then
> >>>> waits for the other CPUs to have caught the IPI, if the pmd is
> >>>> valid at the time the PTE is locked, we have the guarantee that the
> >>>> collapsing operation will have to wait on the PTE lock to move forward.
> >>>> This allows the SPF handler to map the PTE safely. If the PMD value is
> >>>> different from the one recorded at the beginning of the SPF operation, the
> >>>> classic page fault handler will be called to handle the operation while
> >>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
> >>>> the lock is acquired using spin_trylock() to avoid deadlock when handling
> >>>> a page fault while a TLB invalidate is requested by another CPU holding
> >>>> the PTE lock.
> >>>>
> >>>> In pseudo code, this could be seen as:
> >>>> speculative_page_fault()
> >>>> {
> >>>>         vma = get_vma()
> >>>>         check vma sequence count
> >>>>         check vma's support
> >>>>         disable interrupt
> >>>>                 check pgd,p4d,...,pte
> >>>>                 save pmd and pte in vmf
> >>>>                 save vma sequence counter in vmf
> >>>>         enable interrupt
> >>>>         check vma sequence count
> >>>>         handle_pte_fault(vma)
> >>>>                 ..
> >>>>                 page = alloc_page()
> >>>>                 pte_map_lock()
> >>>>                         disable interrupt
> >>>>                                 abort if sequence counter has changed
> >>>>                                 abort if pmd or pte has changed
> >>>>                                 pte map and lock
> >>>>                         enable interrupt
> >>>>                 if abort
> >>>>                         free page
> >>>>                         abort
> >>>>                 ...
> >>>> }
> >>>>
> >>>> arch_fault_handler()
> >>>> {
> >>>>         if (speculative_page_fault(&vma))
> >>>>                 goto done
> >>>> again:
> >>>>         lock(mmap_sem)
> >>>>         vma = find_vma();
> >>>>         handle_pte_fault(vma);
> >>>>         if retry
> >>>>                 unlock(mmap_sem)
> >>>>                 goto again;
> >>>> done:
> >>>>         handle fault error
> >>>> }
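The "check vma sequence count" steps in the pseudo code above follow the classic sequence-counter pattern. A minimal user-space model of that check (illustrative only; the kernel uses its own seqcount/seqlock API, and the names below are hypothetical) could look like:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Minimal user-space model of the per-VMA sequence count checks in the
 * pseudo code above: a writer makes the counter odd while modifying the
 * VMA and even again when done; a speculative reader samples the counter
 * and aborts (falls back to the classic path) if it has moved. */
struct vma_seq {
	atomic_uint seq;
};

/* Reader entry: wait until no writer is mid-update, return the snapshot. */
static unsigned vma_read_begin(struct vma_seq *s)
{
	unsigned v;
	while ((v = atomic_load(&s->seq)) & 1)
		;	/* odd value: a VMA modification is in progress */
	return v;
}

/* Reader exit: true means the VMA changed, so the speculative work done
 * since vma_read_begin() must be thrown away. */
static bool vma_read_retry(struct vma_seq *s, unsigned start)
{
	return atomic_load(&s->seq) != start;
}

static void vma_write_begin(struct vma_seq *s)
{
	atomic_fetch_add(&s->seq, 1);	/* counter becomes odd */
}

static void vma_write_end(struct vma_seq *s)
{
	atomic_fetch_add(&s->seq, 1);	/* counter becomes even again */
}
```

This is why the handler rechecks the counter both after sampling the page tables and again under the PTE lock: any mprotect/mremap-style change bumps the counter and forces the speculative path to abort.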
> >>>>
> >>>> Support for THP is not done because when checking for the PMD, we can be
> >>>> confused by an in-progress collapsing operation done by khugepaged. The
> >>>> issue is that pmd_none() could be true either if the PMD is not already
> >>>> populated or if the underlying PTEs are in the process of being collapsed.
> >>>> So we cannot safely allocate a PMD if pmd_none() is true.
> >>>>
> >>>> This series adds a new software performance event named
> >>>> 'speculative-faults' or 'spf'. It counts the number of page fault events
> >>>> successfully handled speculatively. When recording 'faults,spf' events,
> >>>> the 'faults' one counts the total number of page fault events while 'spf'
> >>>> counts only the part of the faults processed speculatively.
> >>>>
> >>>> There are some trace events introduced by this series. They allow
> >>>> identifying why the page faults were not processed speculatively. This
> >>>> doesn't take into account the faults generated by a single-threaded
> >>>> process, which are directly processed while holding the mmap_sem. These
> >>>> trace events are grouped in a system named 'pagefault'; they are:
> >>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
> >>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
> >>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
> >>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
> >>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
> >>>> our back.
> >>>>
> >>>> To record all the related events, the easiest way is to run perf with the
> >>>> following arguments:
> >>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
> >>>>
> >>>> There is also a dedicated vmstat counter showing the number of page
> >>>> faults successfully handled speculatively. It can be seen this way:
> >>>> $ grep speculative_pgfault /proc/vmstat
> >>>>
> >>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
> >>>> on x86, PowerPC and arm64.
> >>>>
> >>>> ---------------------
> >>>> Real Workload results
> >>>>
> >>>> As mentioned in a previous email, we did unofficial runs using a "popular
> >>>> in-memory multithreaded database product" on a 176-core SMT8 Power system,
> >>>> which showed a 30% improvement in the number of transactions processed per
> >>>> second. This run was done on the v6 series, but the changes introduced in
> >>>> this new version should not impact the performance boost seen.
> >>>>
> >>>> Here are the perf data captured during 2 of these runs on top of the v8
> >>>> series:
> >>>> vanilla spf
> >>>> faults 89.418 101.364 +13%
> >>>> spf n/a 97.989
> >>>>
> >>>> With the SPF kernel, most of the page faults were processed in a
> >>>> speculative way.
> >>>>
> >>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
> >>>> it a try on an Android device. He reported that the application launch time
> >>>> was improved on average by 6%, and for large applications (~100 threads) by
> >>>> 20%.
> >>>>
> >>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
> >>>> MSM845 (8 cores) with 6GB of RAM (lower is better):
> >>>>
> >>>> Application 4.9 4.9+spf delta
> >>>> com.tencent.mm 416 389 -7%
> >>>> com.eg.android.AlipayGphone 1135 986 -13%
> >>>> com.tencent.mtt 455 454 0%
> >>>> com.qqgame.hlddz 1497 1409 -6%
> >>>> com.autonavi.minimap 711 701 -1%
> >>>> com.tencent.tmgp.sgame 788 748 -5%
> >>>> com.immomo.momo 501 487 -3%
> >>>> com.tencent.peng 2145 2112 -2%
> >>>> com.smile.gifmaker 491 461 -6%
> >>>> com.baidu.BaiduMap 479 366 -23%
> >>>> com.taobao.taobao 1341 1198 -11%
> >>>> com.baidu.searchbox 333 314 -6%
> >>>> com.tencent.mobileqq 394 384 -3%
> >>>> com.sina.weibo 907 906 0%
> >>>> com.youku.phone 816 731 -11%
> >>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
> >>>> com.UCMobile 415 411 -1%
> >>>> com.tencent.tmgp.ak 1464 1431 -2%
> >>>> com.tencent.qqmusic 336 329 -2%
> >>>> com.sankuai.meituan 1661 1302 -22%
> >>>> com.netease.cloudmusic 1193 1200 1%
> >>>> air.tv.douyu.android 4257 4152 -2%
> >>>>
> >>>> ------------------
> >>>> Benchmarks results
> >>>>
> >>>> Base kernel is v4.17.0-rc4-mm1
> >>>> SPF is BASE + this series
> >>>>
> >>>> Kernbench:
> >>>> ----------
> >>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
> >>>> kernel (the kernel is built 5 times):
> >>>>
> >>>> Average Half load -j 8
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
> >>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
> >>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
> >>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
> >>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
> >>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
> >>>>
> >>>> Average Optimal load -j 16
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
> >>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
> >>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
> >>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
> >>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
> >>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
> >>>>
> >>>>
> >>>> During a run on the SPF kernel, perf events were captured:
> >>>> Performance counter stats for '../kernbench -M':
> >>>> 526743764 faults
> >>>> 210 spf
> >>>> 3 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 2278 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> Very few speculative page faults were recorded, as most of the processes
> >>>> involved are single-threaded (it seems that on this architecture some
> >>>> threads were created during the kernel build process).
> >>>>
> >>>> Here are the kernbench results on an 80-CPU Power8 system:
> >>>>
> >>>> Average Half load -j 40
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
> >>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
> >>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
> >>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
> >>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
> >>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
> >>>>
> >>>> Average Optimal load -j 80
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
> >>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
> >>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
> >>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
> >>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
> >>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
> >>>>
> >>>> During a run on the SPF kernel, perf events were captured:
> >>>> Performance counter stats for '../kernbench -M':
> >>>> 116730856 faults
> >>>> 0 spf
> >>>> 3 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 476 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> Most of the processes involved are single-threaded, so SPF is not
> >>>> activated, but there is no impact on the performance.
> >>>>
> >>>> Ebizzy:
> >>>> -------
> >>>> The test counts the number of records per second it can manage; the
> >>>> higher the better. I ran it as 'ebizzy -mTt <nrcpus>'. To get consistent
> >>>> results I repeated the test 100 times and measured the average. The
> >>>> number reported is records processed per second; the higher the better.
> >>>>
> >>>> BASE SPF delta
> >>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
> >>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
> >>>>
> >>>> Here are the performance counters read during a run on a 16-CPU x86 VM:
> >>>> Performance counter stats for './ebizzy -mTt 16':
> >>>> 1706379 faults
> >>>> 1674599 spf
> >>>> 30588 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 363 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> And the ones captured during a run on an 80-CPU Power node:
> >>>> Performance counter stats for './ebizzy -mTt 80':
> >>>> 1874773 faults
> >>>> 1461153 spf
> >>>> 413293 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 200 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> In ebizzy's case most of the page faults were handled in a speculative
> >>>> way, leading to the ebizzy performance boost.
> >>>>
> >>>> ------------------
> >>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
> >>>> - Addressed all review feedback from Punit Agrawal, Ganesh Mahendran
> >>>> and Minchan Kim, hopefully.
> >>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
> >>>> __do_page_fault().
> >>>> - Loop in pte_spinlock() and pte_map_lock() when the pte try lock
> >>>> fails, instead of aborting the speculative page fault handling.
> >>>> Drop the now useless trace event pagefault:spf_pte_lock.
> >>>> - No longer try to reuse the fetched VMA during the speculative page
> >>>> fault handling when retrying is needed. This added a lot of
> >>>> complexity, and additional tests did not show a significant
> >>>> performance improvement.
> >>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
> >>>>
> >>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
> >>>> [2] https://patchwork.kernel.org/patch/9999687/
> >>>>
> >>>>
> >>>> Laurent Dufour (20):
> >>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
> >>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> >>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> >>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
> >>>> mm: make pte_unmap_same compatible with SPF
> >>>> mm: introduce INIT_VMA()
> >>>> mm: protect VMA modifications using VMA sequence count
> >>>> mm: protect mremap() against SPF handler
> >>>> mm: protect SPF handler against anon_vma changes
> >>>> mm: cache some VMA fields in the vm_fault structure
> >>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
> >>>> mm: introduce __lru_cache_add_active_or_unevictable
> >>>> mm: introduce __vm_normal_page()
> >>>> mm: introduce __page_add_new_anon_rmap()
> >>>> mm: protect mm_rb tree with a rwlock
> >>>> mm: adding speculative page fault failure trace events
> >>>> perf: add a speculative page fault sw event
> >>>> perf tools: add support for the SPF perf event
> >>>> mm: add speculative page fault vmstats
> >>>> powerpc/mm: add speculative page fault
> >>>>
> >>>> Mahendran Ganesh (2):
> >>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> >>>> arm64/mm: add speculative page fault
> >>>>
> >>>> Peter Zijlstra (4):
> >>>> mm: prepare for FAULT_FLAG_SPECULATIVE
> >>>> mm: VMA sequence count
> >>>> mm: provide speculative fault infrastructure
> >>>> x86/mm: add speculative pagefault handling
> >>>>
> >>>> arch/arm64/Kconfig | 1 +
> >>>> arch/arm64/mm/fault.c | 12 +
> >>>> arch/powerpc/Kconfig | 1 +
> >>>> arch/powerpc/mm/fault.c | 16 +
> >>>> arch/x86/Kconfig | 1 +
> >>>> arch/x86/mm/fault.c | 27 +-
> >>>> fs/exec.c | 2 +-
> >>>> fs/proc/task_mmu.c | 5 +-
> >>>> fs/userfaultfd.c | 17 +-
> >>>> include/linux/hugetlb_inline.h | 2 +-
> >>>> include/linux/migrate.h | 4 +-
> >>>> include/linux/mm.h | 136 +++++++-
> >>>> include/linux/mm_types.h | 7 +
> >>>> include/linux/pagemap.h | 4 +-
> >>>> include/linux/rmap.h | 12 +-
> >>>> include/linux/swap.h | 10 +-
> >>>> include/linux/vm_event_item.h | 3 +
> >>>> include/trace/events/pagefault.h | 80 +++++
> >>>> include/uapi/linux/perf_event.h | 1 +
> >>>> kernel/fork.c | 5 +-
> >>>> mm/Kconfig | 22 ++
> >>>> mm/huge_memory.c | 6 +-
> >>>> mm/hugetlb.c | 2 +
> >>>> mm/init-mm.c | 3 +
> >>>> mm/internal.h | 20 ++
> >>>> mm/khugepaged.c | 5 +
> >>>> mm/madvise.c | 6 +-
> >>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
> >>>> mm/mempolicy.c | 51 ++-
> >>>> mm/migrate.c | 6 +-
> >>>> mm/mlock.c | 13 +-
> >>>> mm/mmap.c | 229 ++++++++++---
> >>>> mm/mprotect.c | 4 +-
> >>>> mm/mremap.c | 13 +
> >>>> mm/nommu.c | 2 +-
> >>>> mm/rmap.c | 5 +-
> >>>> mm/swap.c | 6 +-
> >>>> mm/swap_state.c | 8 +-
> >>>> mm/vmstat.c | 5 +-
> >>>> tools/include/uapi/linux/perf_event.h | 1 +
> >>>> tools/perf/util/evsel.c | 1 +
> >>>> tools/perf/util/parse-events.c | 4 +
> >>>> tools/perf/util/parse-events.l | 1 +
> >>>> tools/perf/util/python.c | 1 +
> >>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
> >>>> create mode 100644 include/trace/events/pagefault.h
> >>>>
> >>>> --
> >>>> 2.7.4
> >>>>
> >>>>
> >>>
> >>
> >
>
[-- Attachment #2: compare-result.txt --]
[-- Type: text/plain, Size: 185207 bytes --]
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/page_fault3/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
44:3 -13% 43:3 perf-profile.calltrace.cycles-pp.error_entry
22:3 -6% 22:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
44:3 -13% 44:3 perf-profile.children.cycles-pp.error_entry
21:3 -7% 21:3 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
10519 +- 3% -20.5% 8368 +- 6% will-it-scale.per_thread_ops
118098 +11.2% 131287 +- 2% will-it-scale.time.involuntary_context_switches
6.084e+08 +- 3% -20.4% 4.845e+08 +- 6% will-it-scale.time.minor_page_faults
7467 +5.0% 7841 will-it-scale.time.percent_of_cpu_this_job_got
44922 +5.0% 47176 will-it-scale.time.system_time
7126337 +- 3% -15.4% 6025689 +- 6% will-it-scale.time.voluntary_context_switches
91905646 -1.3% 90673935 will-it-scale.workload
27.15 +- 6% -8.7% 24.80 +- 10% boot-time.boot
2516213 +- 6% +8.3% 2726303 interrupts.CAL:Function_call_interrupts
388.00 +- 9% +60.2% 621.67 +- 20% irq_exception_noise.softirq_nr
11.28 +- 2% -1.9 9.37 +- 4% mpstat.cpu.idle%
10065 +-140% +243.4% 34559 +- 4% numa-numastat.node0.other_node
18739 -11.6% 16573 +- 3% uptime.idle
29406 +- 2% -11.8% 25929 +- 5% vmstat.system.cs
329614 +- 8% +17.0% 385618 +- 10% meminfo.DirectMap4k
237851 +21.2% 288160 +- 5% meminfo.Inactive
237615 +21.2% 287924 +- 5% meminfo.Inactive(anon)
7917847 -10.7% 7071860 softirqs.RCU
4784181 +- 3% -14.5% 4089039 +- 4% softirqs.SCHED
45666107 +- 7% +12.9% 51535472 +- 3% softirqs.TIMER
2.617e+09 +- 2% -13.9% 2.253e+09 +- 6% cpuidle.C1E.time
6688774 +- 2% -12.8% 5835101 +- 5% cpuidle.C1E.usage
1.022e+10 +- 2% -18.0% 8.376e+09 +- 3% cpuidle.C6.time
13440993 +- 2% -16.3% 11243794 +- 4% cpuidle.C6.usage
54781 +- 16% +37.5% 75347 +- 12% numa-meminfo.node0.Inactive
54705 +- 16% +37.7% 75347 +- 12% numa-meminfo.node0.Inactive(anon)
52522 +35.0% 70886 +- 6% numa-meminfo.node2.Inactive
52443 +34.7% 70653 +- 6% numa-meminfo.node2.Inactive(anon)
31046 +- 6% +30.3% 40457 +- 11% numa-meminfo.node2.SReclaimable
58563 +21.1% 70945 +- 6% proc-vmstat.nr_inactive_anon
58564 +21.1% 70947 +- 6% proc-vmstat.nr_zone_inactive_anon
69701118 -1.2% 68842151 proc-vmstat.pgalloc_normal
2.765e+10 -1.3% 2.729e+10 proc-vmstat.pgfault
69330418 -1.2% 68466824 proc-vmstat.pgfree
118098 +11.2% 131287 +- 2% time.involuntary_context_switches
6.084e+08 +- 3% -20.4% 4.845e+08 +- 6% time.minor_page_faults
7467 +5.0% 7841 time.percent_of_cpu_this_job_got
44922 +5.0% 47176 time.system_time
7126337 +- 3% -15.4% 6025689 +- 6% time.voluntary_context_switches
13653 +- 16% +33.5% 18225 +- 12% numa-vmstat.node0.nr_inactive_anon
13651 +- 16% +33.5% 18224 +- 12% numa-vmstat.node0.nr_zone_inactive_anon
13069 +- 3% +30.1% 17001 +- 4% numa-vmstat.node2.nr_inactive_anon
134.67 +- 42% -49.5% 68.00 +- 31% numa-vmstat.node2.nr_mlock
7758 +- 6% +30.4% 10112 +- 11% numa-vmstat.node2.nr_slab_reclaimable
13066 +- 3% +30.1% 16998 +- 4% numa-vmstat.node2.nr_zone_inactive_anon
1039 +- 11% -17.5% 857.33 slabinfo.Acpi-ParseExt.active_objs
1039 +- 11% -17.5% 857.33 slabinfo.Acpi-ParseExt.num_objs
2566 +- 6% -8.8% 2340 +- 5% slabinfo.biovec-64.active_objs
2566 +- 6% -8.8% 2340 +- 5% slabinfo.biovec-64.num_objs
898.33 +- 3% -9.5% 813.33 +- 3% slabinfo.kmem_cache_node.active_objs
1066 +- 2% -8.0% 981.33 +- 3% slabinfo.kmem_cache_node.num_objs
1940 +2.3% 1984 turbostat.Avg_MHz
6679037 +- 2% -12.7% 5830270 +- 5% turbostat.C1E
2.25 +- 2% -0.3 1.94 +- 6% turbostat.C1E%
13418115 -16.3% 11234510 +- 4% turbostat.C6
8.75 +- 2% -1.6 7.18 +- 3% turbostat.C6%
5.99 +- 2% -14.4% 5.13 +- 4% turbostat.CPU%c1
5.01 +- 3% -20.1% 4.00 +- 4% turbostat.CPU%c6
1.77 +- 3% -34.7% 1.15 turbostat.Pkg%pc2
1.378e+13 +1.2% 1.394e+13 perf-stat.branch-instructions
0.98 -0.0 0.94 perf-stat.branch-miss-rate%
1.344e+11 -2.3% 1.313e+11 perf-stat.branch-misses
1.076e+11 -1.8% 1.057e+11 perf-stat.cache-misses
2.258e+11 -2.1% 2.21e+11 perf-stat.cache-references
17788064 +- 2% -11.9% 15674207 +- 6% perf-stat.context-switches
2.241e+14 +2.4% 2.294e+14 perf-stat.cpu-cycles
1.929e+13 +2.2% 1.971e+13 perf-stat.dTLB-loads
4.01 -0.2 3.83 perf-stat.dTLB-store-miss-rate%
4.519e+11 -1.3% 4.461e+11 perf-stat.dTLB-store-misses
1.082e+13 +3.6% 1.121e+13 perf-stat.dTLB-stores
3.02e+10 +23.2% 3.721e+10 +- 3% perf-stat.iTLB-load-misses
2.721e+08 +- 8% -8.8% 2.481e+08 +- 3% perf-stat.iTLB-loads
6.985e+13 +1.8% 7.111e+13 perf-stat.instructions
2313 -17.2% 1914 +- 3% perf-stat.instructions-per-iTLB-miss
2.764e+10 -1.3% 2.729e+10 perf-stat.minor-faults
1.421e+09 +- 2% -16.4% 1.188e+09 +- 9% perf-stat.node-load-misses
1.538e+10 -9.3% 1.395e+10 perf-stat.node-loads
9.75 +1.4 11.10 perf-stat.node-store-miss-rate%
3.012e+09 +14.1% 3.437e+09 perf-stat.node-store-misses
2.789e+10 -1.3% 2.753e+10 perf-stat.node-stores
2.764e+10 -1.3% 2.729e+10 perf-stat.page-faults
760059 +3.2% 784235 perf-stat.path-length
193545 +- 25% -57.8% 81757 +- 46% sched_debug.cfs_rq:/.MIN_vruntime.avg
26516863 +- 19% -49.7% 13338070 +- 33% sched_debug.cfs_rq:/.MIN_vruntime.max
2202271 +- 21% -53.2% 1029581 +- 38% sched_debug.cfs_rq:/.MIN_vruntime.stddev
193545 +- 25% -57.8% 81757 +- 46% sched_debug.cfs_rq:/.max_vruntime.avg
26516863 +- 19% -49.7% 13338070 +- 33% sched_debug.cfs_rq:/.max_vruntime.max
2202271 +- 21% -53.2% 1029581 +- 38% sched_debug.cfs_rq:/.max_vruntime.stddev
0.32 +- 70% +253.2% 1.14 +- 54% sched_debug.cfs_rq:/.removed.load_avg.avg
4.44 +- 70% +120.7% 9.80 +- 27% sched_debug.cfs_rq:/.removed.load_avg.stddev
14.90 +- 70% +251.0% 52.31 +- 53% sched_debug.cfs_rq:/.removed.runnable_sum.avg
205.71 +- 70% +119.5% 451.60 +- 27% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
0.16 +- 70% +237.9% 0.54 +- 50% sched_debug.cfs_rq:/.removed.util_avg.avg
2.23 +- 70% +114.2% 4.77 +- 24% sched_debug.cfs_rq:/.removed.util_avg.stddev
573.70 +- 5% -9.7% 518.06 +- 6% sched_debug.cfs_rq:/.util_avg.min
114.87 +- 8% +14.1% 131.04 +- 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
64.42 +- 54% -63.9% 23.27 +- 68% sched_debug.cpu.cpu_load[1].max
5.05 +- 48% -55.2% 2.26 +- 51% sched_debug.cpu.cpu_load[1].stddev
57.58 +- 59% -60.3% 22.88 +- 70% sched_debug.cpu.cpu_load[2].max
21019 +- 3% -15.1% 17841 +- 6% sched_debug.cpu.nr_switches.min
20797 +- 3% -15.0% 17670 +- 6% sched_debug.cpu.sched_count.min
10287 +- 3% -15.1% 8736 +- 6% sched_debug.cpu.sched_goidle.avg
13693 +- 2% -10.7% 12233 +- 5% sched_debug.cpu.sched_goidle.max
9976 +- 3% -16.0% 8381 +- 7% sched_debug.cpu.sched_goidle.min
0.00 +- 26% +98.9% 0.00 +- 28% sched_debug.rt_rq:/.rt_time.min
4230 +-141% -100.0% 0.00 latency_stats.avg.trace_module_notify.notifier_call_chain.blocking_notifier_call_chain.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
28498 +-141% -100.0% 0.00 latency_stats.avg.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4065 +-138% -92.2% 315.33 +- 91% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
0.00 +3.6e+105% 3641 +-141% latency_stats.avg.down.console_lock.console_device.tty_lookup_driver.tty_open.chrdev_open.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.5e+106% 25040 +-141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.4e+106% 34015 +-141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open
0.00 +4.8e+106% 47686 +-141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
4230 +-141% -100.0% 0.00 latency_stats.max.trace_module_notify.notifier_call_chain.blocking_notifier_call_chain.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
28498 +-141% -100.0% 0.00 latency_stats.max.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4065 +-138% -92.2% 315.33 +- 91% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
4254 +-134% -88.0% 511.67 +- 90% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
43093 +- 35% +76.6% 76099 +-115% latency_stats.max.blk_execute_rq.scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
24139 +- 70% +228.5% 79285 +-105% latency_stats.max.blk_execute_rq.scsi_execute.scsi_test_unit_ready.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.disk_clear_events.check_disk_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get
0.00 +3.6e+105% 3641 +-141% latency_stats.max.down.console_lock.console_device.tty_lookup_driver.tty_open.chrdev_open.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.5e+106% 25040 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.4e+106% 34015 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open
0.00 +6.5e+106% 64518 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
4230 +-141% -100.0% 0.00 latency_stats.sum.trace_module_notify.notifier_call_chain.blocking_notifier_call_chain.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
28498 +-141% -100.0% 0.00 latency_stats.sum.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4065 +-138% -92.2% 315.33 +- 91% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
57884 +- 9% +47.3% 85264 +-118% latency_stats.sum.blk_execute_rq.scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
0.00 +3.6e+105% 3641 +-141% latency_stats.sum.down.console_lock.console_device.tty_lookup_driver.tty_open.chrdev_open.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.5e+106% 25040 +-141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.4e+106% 34015 +-141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open
0.00 +9.5e+106% 95373 +-141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
11.70 -11.7 0.00 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
11.52 -11.5 0.00 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
10.44 -10.4 0.00 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
9.83 -9.8 0.00 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
9.55 -9.5 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
9.35 -9.3 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
6.81 -6.8 0.00 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
7.71 -0.3 7.45 perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.59 +- 7% -0.2 0.35 +- 70% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.__do_page_fault.do_page_fault.page_fault
0.59 +- 7% -0.2 0.35 +- 70% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.__do_page_fault.do_page_fault.page_fault
10.41 -0.2 10.24 perf-profile.calltrace.cycles-pp.native_irq_return_iret
7.68 -0.1 7.60 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.76 -0.1 0.70 perf-profile.calltrace.cycles-pp.down_read_trylock.__do_page_fault.do_page_fault.page_fault
1.38 -0.0 1.34 perf-profile.calltrace.cycles-pp.do_page_fault
1.05 -0.0 1.02 perf-profile.calltrace.cycles-pp.trace_graph_entry.do_page_fault
0.92 +0.0 0.94 perf-profile.calltrace.cycles-pp.find_vma.__do_page_fault.do_page_fault.page_fault
0.91 +0.0 0.93 perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.__do_page_fault.do_page_fault.page_fault
0.65 +0.0 0.67 perf-profile.calltrace.cycles-pp.set_page_dirty.unmap_page_range.unmap_vmas.unmap_region.do_munmap
0.62 +0.0 0.66 perf-profile.calltrace.cycles-pp.page_mapping.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
4.15 +0.1 4.27 perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region.do_munmap
10.17 +0.2 10.39 perf-profile.calltrace.cycles-pp.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.vm_munmap.__x64_sys_munmap.do_syscall_64
9.54 +0.2 9.76 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.vm_munmap
9.54 +0.2 9.76 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.vm_munmap.__x64_sys_munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.do_munmap.vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
0.00 +0.6 0.56 +- 2% perf-profile.calltrace.cycles-pp.lock_page_memcg.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.6 0.59 perf-profile.calltrace.cycles-pp.page_mapping.set_page_dirty.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault
0.00 +0.6 0.60 perf-profile.calltrace.cycles-pp.current_time.file_update_time.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.7 0.68 perf-profile.calltrace.cycles-pp.___might_sleep.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +0.7 0.74 perf-profile.calltrace.cycles-pp.unlock_page.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.8 0.80 perf-profile.calltrace.cycles-pp.set_page_dirty.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.88 perf-profile.calltrace.cycles-pp._raw_spin_lock.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.9 0.91 perf-profile.calltrace.cycles-pp.__set_page_dirty_no_writeback.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +1.3 1.27 perf-profile.calltrace.cycles-pp.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.3 1.30 perf-profile.calltrace.cycles-pp.file_update_time.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +2.8 2.76 perf-profile.calltrace.cycles-pp.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +6.8 6.81 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +9.4 9.39 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +9.6 9.59 perf-profile.calltrace.cycles-pp.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +9.8 9.77 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
0.00 +10.4 10.37 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
0.00 +11.5 11.46 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +11.6 11.60 perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +26.6 26.62 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.88 -0.3 7.61 perf-profile.children.cycles-pp.find_get_entry
1.34 +- 8% -0.2 1.16 +- 2% perf-profile.children.cycles-pp.hrtimer_interrupt
10.41 -0.2 10.24 perf-profile.children.cycles-pp.native_irq_return_iret
0.38 +- 28% -0.1 0.26 +- 4% perf-profile.children.cycles-pp.tick_sched_timer
11.80 -0.1 11.68 perf-profile.children.cycles-pp.__do_fault
0.55 +- 15% -0.1 0.43 +- 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.60 -0.1 0.51 perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.38 +- 13% -0.1 0.29 +- 4% perf-profile.children.cycles-pp.ktime_get
7.68 -0.1 7.60 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
5.18 -0.1 5.12 perf-profile.children.cycles-pp.trace_graph_entry
0.79 -0.1 0.73 perf-profile.children.cycles-pp.down_read_trylock
7.83 -0.1 7.76 perf-profile.children.cycles-pp.sync_regs
3.01 -0.1 2.94 perf-profile.children.cycles-pp.fault_dirty_shared_page
1.02 -0.1 0.96 perf-profile.children.cycles-pp._raw_spin_lock
4.66 -0.1 4.61 perf-profile.children.cycles-pp.prepare_ftrace_return
0.37 +- 8% -0.1 0.32 +- 3% perf-profile.children.cycles-pp.current_kernel_time64
5.26 -0.1 5.21 perf-profile.children.cycles-pp.ftrace_graph_caller
0.66 +- 5% -0.1 0.61 perf-profile.children.cycles-pp.current_time
0.18 +- 5% -0.0 0.15 +- 3% perf-profile.children.cycles-pp.update_process_times
0.27 -0.0 0.26 perf-profile.children.cycles-pp._cond_resched
0.16 -0.0 0.15 +- 3% perf-profile.children.cycles-pp.rcu_all_qs
0.94 +0.0 0.95 perf-profile.children.cycles-pp.vmacache_find
0.48 +0.0 0.50 perf-profile.children.cycles-pp.__mod_node_page_state
0.17 +0.0 0.19 +- 2% perf-profile.children.cycles-pp.__unlock_page_memcg
1.07 +0.0 1.10 perf-profile.children.cycles-pp.find_vma
0.79 +- 3% +0.1 0.86 +- 2% perf-profile.children.cycles-pp.lock_page_memcg
4.29 +0.1 4.40 perf-profile.children.cycles-pp.page_remove_rmap
1.39 +- 2% +0.1 1.52 perf-profile.children.cycles-pp.file_update_time
0.00 +0.2 0.16 perf-profile.children.cycles-pp.__vm_normal_page
9.63 +0.2 9.84 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
9.63 +0.2 9.84 perf-profile.children.cycles-pp.do_syscall_64
9.63 +0.2 9.84 perf-profile.children.cycles-pp.unmap_page_range
10.17 +0.2 10.39 perf-profile.children.cycles-pp.munmap
9.56 +0.2 9.78 perf-profile.children.cycles-pp.unmap_region
9.56 +0.2 9.78 perf-profile.children.cycles-pp.do_munmap
9.56 +0.2 9.78 perf-profile.children.cycles-pp.vm_munmap
9.56 +0.2 9.78 perf-profile.children.cycles-pp.__x64_sys_munmap
9.54 +0.2 9.77 perf-profile.children.cycles-pp.unmap_vmas
1.01 +0.2 1.25 perf-profile.children.cycles-pp.___might_sleep
0.00 +1.6 1.59 perf-profile.children.cycles-pp.pte_map_lock
0.00 +26.9 26.89 perf-profile.children.cycles-pp.handle_pte_fault
4.25 -1.0 3.24 perf-profile.self.cycles-pp.__handle_mm_fault
1.42 -0.3 1.11 perf-profile.self.cycles-pp.alloc_set_pte
4.87 -0.3 4.59 perf-profile.self.cycles-pp.find_get_entry
10.41 -0.2 10.24 perf-profile.self.cycles-pp.native_irq_return_iret
0.37 +- 13% -0.1 0.28 +- 4% perf-profile.self.cycles-pp.ktime_get
0.60 -0.1 0.51 perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
7.50 -0.1 7.42 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
7.83 -0.1 7.76 perf-profile.self.cycles-pp.sync_regs
4.85 -0.1 4.79 perf-profile.self.cycles-pp.trace_graph_entry
1.01 -0.1 0.95 perf-profile.self.cycles-pp._raw_spin_lock
0.78 -0.1 0.73 perf-profile.self.cycles-pp.down_read_trylock
0.36 +- 9% -0.1 0.31 +- 4% perf-profile.self.cycles-pp.current_kernel_time64
0.28 -0.0 0.23 +- 2% perf-profile.self.cycles-pp.__do_fault
1.04 -0.0 1.00 perf-profile.self.cycles-pp.find_lock_entry
0.30 -0.0 0.28 +- 3% perf-profile.self.cycles-pp.fault_dirty_shared_page
0.70 -0.0 0.67 perf-profile.self.cycles-pp.prepare_ftrace_return
0.44 -0.0 0.42 perf-profile.self.cycles-pp.do_page_fault
0.16 -0.0 0.14 perf-profile.self.cycles-pp.rcu_all_qs
0.78 -0.0 0.77 perf-profile.self.cycles-pp.shmem_getpage_gfp
0.20 -0.0 0.19 perf-profile.self.cycles-pp._cond_resched
0.50 +0.0 0.51 perf-profile.self.cycles-pp.set_page_dirty
0.93 +0.0 0.95 perf-profile.self.cycles-pp.vmacache_find
0.36 +- 2% +0.0 0.38 perf-profile.self.cycles-pp.__might_sleep
0.47 +0.0 0.50 perf-profile.self.cycles-pp.__mod_node_page_state
0.17 +0.0 0.19 +- 2% perf-profile.self.cycles-pp.__unlock_page_memcg
2.34 +0.0 2.38 perf-profile.self.cycles-pp.unmap_page_range
0.78 +- 3% +0.1 0.85 +- 2% perf-profile.self.cycles-pp.lock_page_memcg
2.17 +0.1 2.24 perf-profile.self.cycles-pp.__do_page_fault
0.00 +0.2 0.16 +- 3% perf-profile.self.cycles-pp.__vm_normal_page
1.00 +0.2 1.24 perf-profile.self.cycles-pp.___might_sleep
0.00 +0.7 0.70 perf-profile.self.cycles-pp.pte_map_lock
0.00 +1.4 1.42 +- 2% perf-profile.self.cycles-pp.handle_pte_fault
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/context_switch1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 dmesg.WARNING:at#for_ip_interrupt_entry/0x
2:3 -67% :3 kmsg.pstore:crypto_comp_decompress_failed,ret=
2:3 -67% :3 kmsg.pstore:decompression_failed
%stddev %change %stddev
\ | \
224431 -1.3% 221567 will-it-scale.per_process_ops
237006 -2.2% 231907 will-it-scale.per_thread_ops
1.601e+09 +- 29% -46.9% 8.501e+08 +- 12% will-it-scale.time.involuntary_context_switches
5429 -1.6% 5344 will-it-scale.time.user_time
88596221 -1.7% 87067269 will-it-scale.workload
6863 +- 6% -9.7% 6200 boot-time.idle
144908 +- 40% -66.8% 48173 +- 93% meminfo.CmaFree
0.00 +- 70% +0.0 0.00 mpstat.cpu.iowait%
448336 +- 14% -34.8% 292125 +- 3% turbostat.C1
7684 +- 6% -9.5% 6957 uptime.idle
1.601e+09 +- 29% -46.9% 8.501e+08 +- 12% time.involuntary_context_switches
5429 -1.6% 5344 time.user_time
44013162 -1.7% 43243125 vmstat.system.cs
207684 -1.1% 205485 vmstat.system.in
2217033 +- 15% -15.8% 1866876 +- 2% cpuidle.C1.time
451218 +- 14% -34.7% 294841 +- 2% cpuidle.C1.usage
24839 +- 10% -19.9% 19896 cpuidle.POLL.time
7656 +- 11% -38.9% 4676 +- 8% cpuidle.POLL.usage
5.48 +- 49% -67.3% 1.79 +-100% irq_exception_noise.__do_page_fault.95th
9.46 +- 21% -58.2% 3.95 +- 64% irq_exception_noise.__do_page_fault.99th
35.67 +- 8% +1394.4% 533.00 +- 96% irq_exception_noise.irq_nr
52109 +- 3% -16.0% 43784 +- 4% irq_exception_noise.softirq_time
36226 +- 40% -66.7% 12048 +- 93% proc-vmstat.nr_free_cma
25916 -1.0% 25659 proc-vmstat.nr_slab_reclaimable
16279 +- 8% +2646.1% 447053 +- 82% proc-vmstat.pgalloc_movable
2231117 -18.4% 1820828 +- 20% proc-vmstat.pgalloc_normal
1109316 +- 46% -86.9% 145207 +-109% numa-numastat.node1.local_node
1114700 +- 45% -84.5% 172877 +- 85% numa-numastat.node1.numa_hit
5523 +-140% +402.8% 27768 +- 39% numa-numastat.node1.other_node
29013 +- 29% +3048.1% 913379 +- 73% numa-numastat.node3.local_node
65032 +- 13% +1335.1% 933270 +- 70% numa-numastat.node3.numa_hit
36018 -44.8% 19897 +- 75% numa-numastat.node3.other_node
12.79 +- 21% +7739.1% 1002 +-136% sched_debug.cpu.cpu_load[1].max
1.82 +- 10% +3901.1% 72.92 +-135% sched_debug.cpu.cpu_load[1].stddev
1.71 +- 4% +5055.8% 88.08 +-137% sched_debug.cpu.cpu_load[2].stddev
12.33 +- 23% +9061.9% 1129 +-139% sched_debug.cpu.cpu_load[3].max
1.78 +- 10% +4514.8% 82.18 +-137% sched_debug.cpu.cpu_load[3].stddev
4692 +- 72% +154.5% 11945 +- 29% sched_debug.cpu.max_idle_balance_cost.stddev
23979 -8.3% 21983 slabinfo.kmalloc-96.active_objs
1358 +- 6% -17.9% 1114 +- 3% slabinfo.nsproxy.active_objs
1358 +- 6% -17.9% 1114 +- 3% slabinfo.nsproxy.num_objs
15229 +12.4% 17119 slabinfo.pde_opener.active_objs
15229 +12.4% 17119 slabinfo.pde_opener.num_objs
59541 +- 8% -10.1% 53537 +- 8% slabinfo.vm_area_struct.active_objs
59612 +- 8% -10.1% 53604 +- 8% slabinfo.vm_area_struct.num_objs
4.163e+13 -1.4% 4.105e+13 perf-stat.branch-instructions
6.537e+11 -1.2% 6.459e+11 perf-stat.branch-misses
2.667e+10 -1.7% 2.621e+10 perf-stat.context-switches
1.21 +1.3% 1.22 perf-stat.cpi
150508 -9.8% 135825 +- 3% perf-stat.cpu-migrations
5.75 +- 33% +5.4 11.11 +- 26% perf-stat.iTLB-load-miss-rate%
3.619e+09 +- 36% +100.9% 7.272e+09 +- 30% perf-stat.iTLB-load-misses
2.089e+14 -1.3% 2.062e+14 perf-stat.instructions
64607 +- 29% -50.5% 31964 +- 37% perf-stat.instructions-per-iTLB-miss
0.83 -1.3% 0.82 perf-stat.ipc
3972 +- 4% -14.7% 3388 +- 8% numa-meminfo.node0.PageTables
207919 +- 25% -57.2% 88989 +- 74% numa-meminfo.node1.Active
207715 +- 26% -57.3% 88785 +- 74% numa-meminfo.node1.Active(anon)
356529 -34.3% 234069 +- 2% numa-meminfo.node1.FilePages
789129 +- 5% -19.8% 633161 +- 12% numa-meminfo.node1.MemUsed
34777 +- 8% -48.2% 18010 +- 30% numa-meminfo.node1.SReclaimable
69641 +- 4% -20.7% 55250 +- 12% numa-meminfo.node1.SUnreclaim
125526 +- 4% -96.3% 4602 +- 41% numa-meminfo.node1.Shmem
104419 -29.8% 73261 +- 16% numa-meminfo.node1.Slab
103661 +- 17% -72.0% 29029 +- 99% numa-meminfo.node2.Active
103661 +- 17% -72.2% 28829 +-101% numa-meminfo.node2.Active(anon)
103564 +- 18% -72.0% 29007 +-100% numa-meminfo.node2.AnonPages
671654 +- 7% -14.6% 573598 +- 4% numa-meminfo.node2.MemUsed
44206 +-127% +301.4% 177465 +- 42% numa-meminfo.node3.Active
44206 +-127% +301.0% 177263 +- 42% numa-meminfo.node3.Active(anon)
8738 +12.2% 9805 +- 8% numa-meminfo.node3.KernelStack
603605 +- 9% +27.8% 771554 +- 14% numa-meminfo.node3.MemUsed
14438 +- 6% +122.9% 32181 +- 42% numa-meminfo.node3.SReclaimable
2786 +-137% +3302.0% 94792 +- 71% numa-meminfo.node3.Shmem
71461 +- 7% +45.2% 103771 +- 29% numa-meminfo.node3.Slab
247197 +- 4% -7.8% 227843 numa-meminfo.node3.Unevictable
991.67 +- 4% -14.7% 846.00 +- 8% numa-vmstat.node0.nr_page_table_pages
51926 +- 26% -57.3% 22196 +- 74% numa-vmstat.node1.nr_active_anon
89137 -34.4% 58516 +- 2% numa-vmstat.node1.nr_file_pages
1679 +- 5% -10.8% 1498 +- 4% numa-vmstat.node1.nr_mapped
31386 +- 4% -96.3% 1150 +- 41% numa-vmstat.node1.nr_shmem
8694 +- 8% -48.2% 4502 +- 30% numa-vmstat.node1.nr_slab_reclaimable
17410 +- 4% -20.7% 13812 +- 12% numa-vmstat.node1.nr_slab_unreclaimable
51926 +- 26% -57.3% 22196 +- 74% numa-vmstat.node1.nr_zone_active_anon
1037174 +- 24% -57.0% 446205 +- 35% numa-vmstat.node1.numa_hit
961611 +- 26% -65.8% 328687 +- 50% numa-vmstat.node1.numa_local
75563 +- 44% +55.5% 117517 +- 9% numa-vmstat.node1.numa_other
25914 +- 17% -72.2% 7206 +-101% numa-vmstat.node2.nr_active_anon
25891 +- 18% -72.0% 7251 +-100% numa-vmstat.node2.nr_anon_pages
25914 +- 17% -72.2% 7206 +-101% numa-vmstat.node2.nr_zone_active_anon
11051 +-127% +301.0% 44309 +- 42% numa-vmstat.node3.nr_active_anon
36227 +- 40% -66.7% 12049 +- 93% numa-vmstat.node3.nr_free_cma
0.33 +-141% +25000.0% 83.67 +- 81% numa-vmstat.node3.nr_inactive_file
8739 +12.2% 9806 +- 8% numa-vmstat.node3.nr_kernel_stack
696.67 +-137% +3299.7% 23684 +- 71% numa-vmstat.node3.nr_shmem
3609 +- 6% +122.9% 8044 +- 42% numa-vmstat.node3.nr_slab_reclaimable
61799 +- 4% -7.8% 56960 numa-vmstat.node3.nr_unevictable
11053 +-127% +301.4% 44361 +- 42% numa-vmstat.node3.nr_zone_active_anon
0.33 +-141% +25000.0% 83.67 +- 81% numa-vmstat.node3.nr_zone_inactive_file
61799 +- 4% -7.8% 56960 numa-vmstat.node3.nr_zone_unevictable
217951 +- 8% +280.8% 829976 +- 65% numa-vmstat.node3.numa_hit
91303 +- 19% +689.3% 720647 +- 77% numa-vmstat.node3.numa_local
126648 -13.7% 109329 +- 13% numa-vmstat.node3.numa_other
8.54 -0.1 8.40 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
5.04 -0.1 4.94 perf-profile.calltrace.cycles-pp.__switch_to.read
3.43 -0.1 3.35 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
2.77 -0.1 2.72 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
1.99 -0.0 1.94 perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.ksys_read
0.60 +- 2% -0.0 0.57 +- 2% perf-profile.calltrace.cycles-pp.find_next_bit.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up
0.81 -0.0 0.78 perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.78 +0.0 0.80 perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.73 +0.0 0.75 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.92 +0.0 0.95 perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
2.11 +0.0 2.15 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.00 -0.1 6.86 perf-profile.children.cycles-pp.syscall_return_via_sysret
5.26 -0.1 5.14 perf-profile.children.cycles-pp.__switch_to
5.65 -0.1 5.56 perf-profile.children.cycles-pp.reweight_entity
2.17 -0.1 2.12 perf-profile.children.cycles-pp.copy_page_to_iter
2.94 -0.0 2.90 perf-profile.children.cycles-pp.update_cfs_group
3.11 -0.0 3.07 perf-profile.children.cycles-pp.pick_next_task_fair
2.59 -0.0 2.55 perf-profile.children.cycles-pp.load_new_mm_cr3
1.92 -0.0 1.88 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.11 -0.0 1.08 +- 2% perf-profile.children.cycles-pp.find_next_bit
0.59 -0.0 0.56 perf-profile.children.cycles-pp.finish_task_switch
0.14 +- 15% -0.0 0.11 +- 16% perf-profile.children.cycles-pp.write@plt
1.21 -0.0 1.18 perf-profile.children.cycles-pp.set_next_entity
0.85 -0.0 0.82 perf-profile.children.cycles-pp.___perf_sw_event
0.13 +- 3% -0.0 0.11 +- 4% perf-profile.children.cycles-pp.timespec_trunc
0.47 +- 2% -0.0 0.45 perf-profile.children.cycles-pp.anon_pipe_buf_release
0.38 +- 2% -0.0 0.36 perf-profile.children.cycles-pp.file_update_time
0.74 -0.0 0.73 perf-profile.children.cycles-pp.copyout
0.41 +- 2% -0.0 0.39 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.32 -0.0 0.30 perf-profile.children.cycles-pp.__x64_sys_read
0.14 -0.0 0.12 +- 3% perf-profile.children.cycles-pp.current_kernel_time64
0.91 +0.0 0.92 perf-profile.children.cycles-pp.touch_atime
0.40 +0.0 0.41 perf-profile.children.cycles-pp._cond_resched
0.18 +- 2% +0.0 0.20 perf-profile.children.cycles-pp.activate_task
0.05 +0.0 0.07 +- 6% perf-profile.children.cycles-pp.default_wake_function
0.24 +0.0 0.27 +- 3% perf-profile.children.cycles-pp.rcu_all_qs
0.60 +- 2% +0.0 0.64 +- 2% perf-profile.children.cycles-pp.update_min_vruntime
0.42 +- 4% +0.0 0.46 +- 4% perf-profile.children.cycles-pp.probe_sched_switch
1.33 +0.0 1.38 perf-profile.children.cycles-pp.__fget_light
0.53 +- 2% +0.1 0.58 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.31 +0.1 0.36 +- 2% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
4.35 +0.1 4.41 perf-profile.children.cycles-pp.switch_mm_irqs_off
2.52 +0.1 2.58 perf-profile.children.cycles-pp.selinux_file_permission
0.00 +0.1 0.07 +- 11% perf-profile.children.cycles-pp.hrtick_update
7.00 -0.1 6.86 perf-profile.self.cycles-pp.syscall_return_via_sysret
5.26 -0.1 5.14 perf-profile.self.cycles-pp.__switch_to
0.29 -0.1 0.19 +- 2% perf-profile.self.cycles-pp.ksys_read
1.49 -0.1 1.43 perf-profile.self.cycles-pp.dequeue_task_fair
2.41 -0.1 2.35 perf-profile.self.cycles-pp.__schedule
1.46 -0.0 1.41 perf-profile.self.cycles-pp.select_task_rq_fair
2.94 -0.0 2.90 perf-profile.self.cycles-pp.update_cfs_group
0.44 -0.0 0.40 perf-profile.self.cycles-pp.dequeue_entity
0.48 -0.0 0.44 perf-profile.self.cycles-pp.finish_task_switch
2.59 -0.0 2.55 perf-profile.self.cycles-pp.load_new_mm_cr3
1.11 -0.0 1.08 +- 2% perf-profile.self.cycles-pp.find_next_bit
1.91 -0.0 1.88 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.78 -0.0 0.75 perf-profile.self.cycles-pp.___perf_sw_event
0.14 +- 15% -0.0 0.11 +- 16% perf-profile.self.cycles-pp.write@plt
0.37 -0.0 0.35 +- 2% perf-profile.self.cycles-pp.__wake_up_common_lock
0.20 +- 2% -0.0 0.17 +- 2% perf-profile.self.cycles-pp.__fdget_pos
0.47 +- 2% -0.0 0.44 perf-profile.self.cycles-pp.anon_pipe_buf_release
0.87 -0.0 0.85 perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.13 +- 3% -0.0 0.11 +- 4% perf-profile.self.cycles-pp.timespec_trunc
0.41 +- 2% -0.0 0.39 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.38 -0.0 0.36 perf-profile.self.cycles-pp.__wake_up_common
0.32 -0.0 0.30 perf-profile.self.cycles-pp.__x64_sys_read
0.14 +- 3% -0.0 0.12 +- 3% perf-profile.self.cycles-pp.current_kernel_time64
0.30 -0.0 0.28 perf-profile.self.cycles-pp.set_next_entity
0.28 +- 3% +0.0 0.30 perf-profile.self.cycles-pp._cond_resched
0.18 +- 2% +0.0 0.20 perf-profile.self.cycles-pp.activate_task
0.17 +- 2% +0.0 0.19 perf-profile.self.cycles-pp.__might_fault
0.05 +0.0 0.07 +- 6% perf-profile.self.cycles-pp.default_wake_function
0.17 +- 2% +0.0 0.20 perf-profile.self.cycles-pp.ttwu_do_activate
0.66 +0.0 0.69 perf-profile.self.cycles-pp.write
0.24 +0.0 0.27 +- 3% perf-profile.self.cycles-pp.rcu_all_qs
0.67 +0.0 0.70 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.60 +- 2% +0.0 0.64 +- 2% perf-profile.self.cycles-pp.update_min_vruntime
0.42 +- 4% +0.0 0.46 +- 4% perf-profile.self.cycles-pp.probe_sched_switch
1.33 +0.0 1.37 perf-profile.self.cycles-pp.__fget_light
1.61 +0.0 1.66 perf-profile.self.cycles-pp.pipe_read
0.53 +- 2% +0.1 0.58 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.31 +0.1 0.36 +- 2% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
1.04 +0.1 1.11 perf-profile.self.cycles-pp.pipe_write
0.00 +0.1 0.07 +- 11% perf-profile.self.cycles-pp.hrtick_update
2.00 +0.1 2.08 perf-profile.self.cycles-pp.switch_mm_irqs_off
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/page_fault3/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:3 -33% :3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=file_update_time/0x
:3 33% 1:3 stderr.mount.nfs:Connection_timed_out
34:3 -401% 22:3 perf-profile.calltrace.cycles-pp.error_entry.testcase
17:3 -207% 11:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.testcase
34:3 -404% 22:3 perf-profile.children.cycles-pp.error_entry
0:3 -2% 0:3 perf-profile.children.cycles-pp.error_exit
16:3 -196% 11:3 perf-profile.self.cycles-pp.error_entry
0:3 -2% 0:3 perf-profile.self.cycles-pp.error_exit
%stddev %change %stddev
\ | \
467454 -1.8% 459251 will-it-scale.per_process_ops
10856 +- 4% -23.1% 8344 +- 7% will-it-scale.per_thread_ops
118134 +- 2% +11.7% 131943 will-it-scale.time.involuntary_context_switches
6.277e+08 +- 4% -23.1% 4.827e+08 +- 7% will-it-scale.time.minor_page_faults
7406 +5.8% 7839 will-it-scale.time.percent_of_cpu_this_job_got
44526 +5.8% 47106 will-it-scale.time.system_time
7351468 +- 5% -18.3% 6009014 +- 7% will-it-scale.time.voluntary_context_switches
91835846 -2.2% 89778599 will-it-scale.workload
2534640 +4.3% 2643005 +- 2% interrupts.CAL:Function_call_interrupts
2819 +- 5% +22.9% 3464 +- 18% kthread_noise.total_time
30273 +- 4% -12.7% 26415 +- 5% vmstat.system.cs
1.52 +- 2% +15.2% 1.75 +- 2% irq_exception_noise.__do_page_fault.99th
296.67 +- 12% -36.7% 187.67 +- 12% irq_exception_noise.softirq_time
230900 +- 3% +30.3% 300925 +- 3% meminfo.Inactive
230184 +- 3% +30.4% 300180 +- 3% meminfo.Inactive(anon)
11.62 +- 3% -2.2 9.40 +- 5% mpstat.cpu.idle%
0.00 +- 14% -0.0 0.00 +- 4% mpstat.cpu.iowait%
7992174 -11.1% 7101976 +- 3% softirqs.RCU
4973624 +- 2% -12.9% 4333370 +- 2% softirqs.SCHED
118134 +- 2% +11.7% 131943 time.involuntary_context_switches
6.277e+08 +- 4% -23.1% 4.827e+08 +- 7% time.minor_page_faults
7406 +5.8% 7839 time.percent_of_cpu_this_job_got
44526 +5.8% 47106 time.system_time
7351468 +- 5% -18.3% 6009014 +- 7% time.voluntary_context_switches
2.702e+09 +- 5% -16.7% 2.251e+09 +- 7% cpuidle.C1E.time
6834329 +- 5% -15.8% 5756243 +- 7% cpuidle.C1E.usage
1.046e+10 +- 3% -19.8% 8.389e+09 +- 4% cpuidle.C6.time
13961845 +- 3% -19.3% 11265555 +- 4% cpuidle.C6.usage
1309307 +- 7% -14.8% 1116168 +- 8% cpuidle.POLL.time
19774 +- 6% -13.7% 17063 +- 7% cpuidle.POLL.usage
2523 +- 4% -11.1% 2243 +- 4% slabinfo.biovec-64.active_objs
2523 +- 4% -11.1% 2243 +- 4% slabinfo.biovec-64.num_objs
2610 +- 8% -33.7% 1731 +- 22% slabinfo.dmaengine-unmap-16.active_objs
2610 +- 8% -33.7% 1731 +- 22% slabinfo.dmaengine-unmap-16.num_objs
5118 +- 17% -22.6% 3962 +- 9% slabinfo.eventpoll_pwq.active_objs
5118 +- 17% -22.6% 3962 +- 9% slabinfo.eventpoll_pwq.num_objs
4583 +- 3% -14.0% 3941 +- 4% slabinfo.sock_inode_cache.active_objs
4583 +- 3% -14.0% 3941 +- 4% slabinfo.sock_inode_cache.num_objs
1933 +2.6% 1984 turbostat.Avg_MHz
6832021 +- 5% -15.8% 5754156 +- 7% turbostat.C1E
2.32 +- 5% -0.4 1.94 +- 7% turbostat.C1E%
13954211 +- 3% -19.3% 11259436 +- 4% turbostat.C6
8.97 +- 3% -1.8 7.20 +- 4% turbostat.C6%
6.18 +- 4% -17.1% 5.13 +- 5% turbostat.CPU%c1
5.12 +- 3% -21.7% 4.01 +- 4% turbostat.CPU%c6
1.76 +- 2% -34.7% 1.15 +- 2% turbostat.Pkg%pc2
57314 +- 4% +30.4% 74717 +- 4% proc-vmstat.nr_inactive_anon
57319 +- 4% +30.4% 74719 +- 4% proc-vmstat.nr_zone_inactive_anon
24415 +- 19% -62.2% 9236 +- 7% proc-vmstat.numa_hint_faults
69661453 -1.8% 68405712 proc-vmstat.numa_hit
69553390 -1.8% 68297790 proc-vmstat.numa_local
8792 +- 29% -92.6% 654.33 +- 23% proc-vmstat.numa_pages_migrated
40251 +- 32% -76.5% 9474 +- 3% proc-vmstat.numa_pte_updates
69522532 -1.6% 68383074 proc-vmstat.pgalloc_normal
2.762e+10 -2.2% 2.701e+10 proc-vmstat.pgfault
68825100 -1.5% 67772256 proc-vmstat.pgfree
8792 +- 29% -92.6% 654.33 +- 23% proc-vmstat.pgmigrate_success
57992 +- 6% +56.2% 90591 +- 3% numa-meminfo.node0.Inactive
57916 +- 6% +56.3% 90513 +- 3% numa-meminfo.node0.Inactive(anon)
37285 +- 12% +36.0% 50709 +- 5% numa-meminfo.node0.SReclaimable
110971 +- 8% +22.7% 136209 +- 8% numa-meminfo.node0.Slab
23601 +- 55% +559.5% 155651 +- 36% numa-meminfo.node1.AnonPages
62484 +- 12% +17.5% 73417 +- 3% numa-meminfo.node1.Inactive
62323 +- 12% +17.2% 73023 +- 4% numa-meminfo.node1.Inactive(anon)
109714 +- 63% -85.6% 15832 +- 96% numa-meminfo.node2.AnonPages
52236 +- 13% +22.7% 64074 +- 3% numa-meminfo.node2.Inactive
51922 +- 12% +23.2% 63963 +- 3% numa-meminfo.node2.Inactive(anon)
60241 +- 11% +21.9% 73442 +- 8% numa-meminfo.node3.Inactive
60077 +- 12% +22.0% 73279 +- 8% numa-meminfo.node3.Inactive(anon)
14093 +- 6% +55.9% 21977 +- 3% numa-vmstat.node0.nr_inactive_anon
9321 +- 12% +36.0% 12675 +- 5% numa-vmstat.node0.nr_slab_reclaimable
14090 +- 6% +56.0% 21977 +- 3% numa-vmstat.node0.nr_zone_inactive_anon
5900 +- 55% +559.4% 38909 +- 36% numa-vmstat.node1.nr_anon_pages
15413 +- 12% +14.8% 17688 +- 4% numa-vmstat.node1.nr_inactive_anon
15413 +- 12% +14.8% 17688 +- 4% numa-vmstat.node1.nr_zone_inactive_anon
27430 +- 63% -85.6% 3960 +- 96% numa-vmstat.node2.nr_anon_pages
12928 +- 12% +20.0% 15508 +- 3% numa-vmstat.node2.nr_inactive_anon
12927 +- 12% +20.0% 15507 +- 3% numa-vmstat.node2.nr_zone_inactive_anon
6229 +- 10% +117.5% 13547 +- 44% numa-vmstat.node3
14669 +- 11% +19.6% 17537 +- 7% numa-vmstat.node3.nr_inactive_anon
14674 +- 11% +19.5% 17541 +- 7% numa-vmstat.node3.nr_zone_inactive_anon
24617 +-141% -100.0% 0.00 latency_stats.avg.io_schedule.nfs_lock_and_join_requests.nfs_updatepage.nfs_write_end.generic_perform_write.nfs_file_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
5049 +-105% -99.4% 28.33 +- 82% latency_stats.avg.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
152457 +- 27% +233.6% 508656 +- 92% latency_stats.avg.max
0.00 +3.9e+107% 390767 +-141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat
24617 +-141% -100.0% 0.00 latency_stats.max.io_schedule.nfs_lock_and_join_requests.nfs_updatepage.nfs_write_end.generic_perform_write.nfs_file_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4240 +-141% -100.0% 0.00 latency_stats.max.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
8565 +- 70% -99.1% 80.33 +-115% latency_stats.max.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
204835 +- 6% +457.6% 1142244 +-114% latency_stats.max.max
0.00 +5.1e+105% 5057 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 +1e+108% 995083 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat
13175 +- 4% -100.0% 0.00 latency_stats.sum.io_schedule.__lock_page_or_retry.filemap_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
24617 +-141% -100.0% 0.00 latency_stats.sum.io_schedule.nfs_lock_and_join_requests.nfs_updatepage.nfs_write_end.generic_perform_write.nfs_file_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4260 +-141% -100.0% 0.00 latency_stats.sum.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
8640 +- 70% -97.5% 216.33 +-108% latency_stats.sum.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
6673 +- 89% -92.8% 477.67 +- 74% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
0.00 +4.2e+105% 4228 +-130% latency_stats.sum.io_schedule.__lock_page_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +7.5e+105% 7450 +- 98% latency_stats.sum.io_schedule.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +1.3e+106% 13050 +-141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 +1.5e+110% 1.508e+08 +-141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat
0.97 -0.0 0.94 perf-stat.branch-miss-rate%
1.329e+11 -2.6% 1.294e+11 perf-stat.branch-misses
2.254e+11 -1.9% 2.21e+11 perf-stat.cache-references
18308779 +- 4% -12.8% 15969618 +- 5% perf-stat.context-switches
3.20 +1.8% 3.26 perf-stat.cpi
2.233e+14 +2.7% 2.293e+14 perf-stat.cpu-cycles
4.01 -0.2 3.83 perf-stat.dTLB-store-miss-rate%
4.51e+11 -2.2% 4.41e+11 perf-stat.dTLB-store-misses
1.08e+13 +2.6% 1.109e+13 perf-stat.dTLB-stores
3.158e+10 +- 5% +16.8% 3.689e+10 +- 2% perf-stat.iTLB-load-misses
2214 +- 5% -13.8% 1907 +- 2% perf-stat.instructions-per-iTLB-miss
0.31 -1.8% 0.31 perf-stat.ipc
2.762e+10 -2.2% 2.701e+10 perf-stat.minor-faults
1.535e+10 -11.2% 1.362e+10 perf-stat.node-loads
9.75 +1.1 10.89 perf-stat.node-store-miss-rate%
3.012e+09 +10.6% 3.332e+09 +- 2% perf-stat.node-store-misses
2.787e+10 -2.2% 2.725e+10 perf-stat.node-stores
2.762e+10 -2.2% 2.701e+10 perf-stat.page-faults
759458 +3.2% 783404 perf-stat.path-length
246.39 +- 15% -20.4% 196.12 +- 6% sched_debug.cfs_rq:/.load_avg.max
0.21 +- 3% +9.0% 0.23 +- 4% sched_debug.cfs_rq:/.nr_running.stddev
16.64 +- 27% +61.0% 26.79 +- 17% sched_debug.cfs_rq:/.nr_spread_over.max
75.15 -14.4% 64.30 +- 4% sched_debug.cfs_rq:/.util_avg.stddev
178.80 +- 3% +25.4% 224.12 +- 7% sched_debug.cfs_rq:/.util_est_enqueued.avg
1075 +- 5% -12.3% 943.36 +- 2% sched_debug.cfs_rq:/.util_est_enqueued.max
2093630 +- 27% -36.1% 1337941 +- 16% sched_debug.cpu.avg_idle.max
297057 +- 11% +37.8% 409294 +- 14% sched_debug.cpu.avg_idle.min
293240 +- 55% -62.3% 110571 +- 13% sched_debug.cpu.avg_idle.stddev
770075 +- 9% -19.3% 621136 +- 12% sched_debug.cpu.max_idle_balance_cost.max
48919 +- 46% -66.9% 16190 +- 81% sched_debug.cpu.max_idle_balance_cost.stddev
21716 +- 5% -16.8% 18061 +- 7% sched_debug.cpu.nr_switches.min
21519 +- 5% -17.7% 17700 +- 7% sched_debug.cpu.sched_count.min
10586 +- 5% -18.1% 8669 +- 7% sched_debug.cpu.sched_goidle.avg
14183 +- 3% -17.6% 11693 +- 5% sched_debug.cpu.sched_goidle.max
10322 +- 5% -18.6% 8407 +- 7% sched_debug.cpu.sched_goidle.min
400.99 +- 8% -13.0% 348.75 +- 3% sched_debug.cpu.sched_goidle.stddev
5459 +- 8% +10.0% 6006 +- 3% sched_debug.cpu.ttwu_local.avg
8.47 +- 42% +345.8% 37.73 +- 77% sched_debug.rt_rq:/.rt_time.max
0.61 +- 42% +343.0% 2.72 +- 77% sched_debug.rt_rq:/.rt_time.stddev
91.98 -30.9 61.11 +- 70% perf-profile.calltrace.cycles-pp.testcase
9.05 -9.1 0.00 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
8.91 -8.9 0.00 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
8.06 -8.1 0.00 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
7.59 -7.6 0.00 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
7.44 -7.4 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.28 -7.3 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.31 -5.3 0.00 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
8.08 -2.8 5.30 +- 70% perf-profile.calltrace.cycles-pp.native_irq_return_iret.testcase
5.95 -2.1 3.83 +- 70% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
5.95 -2.0 3.93 +- 70% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode.testcase
3.10 -1.1 2.01 +- 70% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault.testcase
2.36 -0.8 1.55 +- 70% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.08 -0.4 0.70 +- 70% perf-profile.calltrace.cycles-pp.do_page_fault.testcase
0.82 -0.3 0.54 +- 70% perf-profile.calltrace.cycles-pp.trace_graph_entry.do_page_fault.testcase
0.77 -0.3 0.50 +- 70% perf-profile.calltrace.cycles-pp.ftrace_graph_caller.__do_page_fault.do_page_fault.page_fault.testcase
0.59 -0.2 0.37 +- 70% perf-profile.calltrace.cycles-pp.down_read_trylock.__do_page_fault.do_page_fault.page_fault.testcase
91.98 -30.9 61.11 +- 70% perf-profile.children.cycles-pp.testcase
9.14 -3.2 5.99 +- 70% perf-profile.children.cycles-pp.__do_fault
8.20 -2.8 5.40 +- 70% perf-profile.children.cycles-pp.shmem_getpage_gfp
8.08 -2.8 5.31 +- 70% perf-profile.children.cycles-pp.native_irq_return_iret
6.08 -2.2 3.92 +- 70% perf-profile.children.cycles-pp.find_get_entry
6.08 -2.1 3.96 +- 70% perf-profile.children.cycles-pp.sync_regs
5.95 -2.0 3.93 +- 70% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
4.12 -1.4 2.73 +- 70% perf-profile.children.cycles-pp.ftrace_graph_caller
3.65 -1.2 2.42 +- 70% perf-profile.children.cycles-pp.prepare_ftrace_return
3.18 -1.1 2.07 +- 70% perf-profile.children.cycles-pp.__perf_sw_event
2.34 -0.8 1.52 +- 70% perf-profile.children.cycles-pp.fault_dirty_shared_page
0.80 -0.3 0.50 +- 70% perf-profile.children.cycles-pp._raw_spin_lock
0.76 -0.3 0.50 +- 70% perf-profile.children.cycles-pp.tlb_flush_mmu_free
0.61 -0.2 0.39 +- 70% perf-profile.children.cycles-pp.down_read_trylock
0.48 +- 2% -0.2 0.28 +- 70% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.26 +- 6% -0.1 0.15 +- 71% perf-profile.children.cycles-pp.ktime_get
0.20 +- 2% -0.1 0.12 +- 70% perf-profile.children.cycles-pp.perf_exclude_event
0.22 +- 2% -0.1 0.13 +- 70% perf-profile.children.cycles-pp._cond_resched
0.17 -0.1 0.11 +- 70% perf-profile.children.cycles-pp.page_rmapping
0.13 -0.1 0.07 +- 70% perf-profile.children.cycles-pp.rcu_all_qs
0.07 -0.0 0.04 +- 70% perf-profile.children.cycles-pp.ftrace_lookup_ip
22.36 -7.8 14.59 +- 70% perf-profile.self.cycles-pp.testcase
8.08 -2.8 5.31 +- 70% perf-profile.self.cycles-pp.native_irq_return_iret
6.08 -2.1 3.96 +- 70% perf-profile.self.cycles-pp.sync_regs
5.81 -2.0 3.84 +- 70% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.27 -1.6 1.65 +- 70% perf-profile.self.cycles-pp.__handle_mm_fault
3.79 -1.4 2.36 +- 70% perf-profile.self.cycles-pp.find_get_entry
3.80 -1.3 2.53 +- 70% perf-profile.self.cycles-pp.trace_graph_entry
1.10 -0.5 0.57 +- 70% perf-profile.self.cycles-pp.alloc_set_pte
1.24 -0.4 0.81 +- 70% perf-profile.self.cycles-pp.shmem_fault
0.80 -0.3 0.50 +- 70% perf-profile.self.cycles-pp._raw_spin_lock
0.81 -0.3 0.51 +- 70% perf-profile.self.cycles-pp.find_lock_entry
0.80 +- 2% -0.3 0.51 +- 70% perf-profile.self.cycles-pp.__perf_sw_event
0.61 -0.2 0.38 +- 70% perf-profile.self.cycles-pp.down_read_trylock
0.60 -0.2 0.39 +- 70% perf-profile.self.cycles-pp.shmem_getpage_gfp
0.48 -0.2 0.27 +- 70% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.47 -0.2 0.30 +- 70% perf-profile.self.cycles-pp.file_update_time
0.34 -0.1 0.22 +- 70% perf-profile.self.cycles-pp.do_page_fault
0.22 +- 4% -0.1 0.11 +- 70% perf-profile.self.cycles-pp.__do_fault
0.25 +- 5% -0.1 0.14 +- 71% perf-profile.self.cycles-pp.ktime_get
0.21 +- 2% -0.1 0.12 +- 70% perf-profile.self.cycles-pp.finish_fault
0.23 +- 2% -0.1 0.14 +- 70% perf-profile.self.cycles-pp.fault_dirty_shared_page
0.22 +- 2% -0.1 0.14 +- 70% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.20 +- 2% -0.1 0.12 +- 70% perf-profile.self.cycles-pp.perf_exclude_event
0.16 -0.1 0.10 +- 70% perf-profile.self.cycles-pp._cond_resched
0.13 -0.1 0.07 +- 70% perf-profile.self.cycles-pp.rcu_all_qs
0.07 -0.0 0.04 +- 70% perf-profile.self.cycles-pp.ftrace_lookup_ip
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/context_switch1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:3 33% 1:3 dmesg.WARNING:at#for_ip_ret_from_intr/0x
:3 67% 2:3 kmsg.pstore:crypto_comp_decompress_failed,ret=
:3 67% 2:3 kmsg.pstore:decompression_failed
%stddev %change %stddev
\ | \
223910 -1.3% 220930 will-it-scale.per_process_ops
233722 -1.0% 231288 will-it-scale.per_thread_ops
6.001e+08 +- 13% +31.4% 7.887e+08 +- 4% will-it-scale.time.involuntary_context_switches
18003 +- 4% +10.9% 19956 will-it-scale.time.minor_page_faults
1.29e+10 -2.5% 1.258e+10 will-it-scale.time.voluntary_context_switches
87865617 -1.2% 86826277 will-it-scale.workload
2880329 +- 2% +5.4% 3034904 interrupts.CAL:Function_call_interrupts
7695018 -23.3% 5905066 +- 8% meminfo.DirectMap2M
0.00 +- 39% -0.0 0.00 +- 78% mpstat.cpu.iowait%
4621 +- 12% +13.4% 5241 proc-vmstat.numa_hint_faults_local
715714 +27.6% 913142 +- 13% softirqs.SCHED
515653 +- 6% -20.0% 412650 +- 15% turbostat.C1
43643516 -1.2% 43127031 vmstat.system.cs
2893393 +- 4% -23.6% 2210524 +- 10% cpuidle.C1.time
518051 +- 6% -19.9% 415081 +- 15% cpuidle.C1.usage
23.10 +22.9% 28.38 +- 9% boot-time.boot
18.38 +23.2% 22.64 +- 12% boot-time.dhcp
5216 +5.0% 5478 +- 2% boot-time.idle
963.76 +- 44% +109.7% 2021 +- 34% irq_exception_noise.__do_page_fault.sum
6.33 +- 14% +726.3% 52.33 +- 62% irq_exception_noise.irq_time
56524 +- 7% -18.8% 45915 +- 4% irq_exception_noise.softirq_time
6.001e+08 +- 13% +31.4% 7.887e+08 +- 4% time.involuntary_context_switches
18003 +- 4% +10.9% 19956 time.minor_page_faults
1.29e+10 -2.5% 1.258e+10 time.voluntary_context_switches
1386 +- 7% +15.4% 1600 +- 11% slabinfo.scsi_sense_cache.active_objs
1386 +- 7% +15.4% 1600 +- 11% slabinfo.scsi_sense_cache.num_objs
1427 +- 5% -8.9% 1299 +- 2% slabinfo.task_group.active_objs
1427 +- 5% -8.9% 1299 +- 2% slabinfo.task_group.num_objs
65519 +- 12% +20.6% 79014 +- 16% numa-meminfo.node0.SUnreclaim
8484 -11.9% 7475 +- 7% numa-meminfo.node1.KernelStack
9264 +- 26% -33.7% 6146 +- 7% numa-meminfo.node1.Mapped
2138 +- 61% +373.5% 10127 +- 92% numa-meminfo.node3.Inactive
2059 +- 61% +387.8% 10046 +- 93% numa-meminfo.node3.Inactive(anon)
16379 +- 12% +20.6% 19752 +- 16% numa-vmstat.node0.nr_slab_unreclaimable
8483 -11.9% 7474 +- 7% numa-vmstat.node1.nr_kernel_stack
6250 +- 29% -42.8% 3575 +- 24% numa-vmstat.node2
3798 +- 17% +63.7% 6218 +- 5% numa-vmstat.node3
543.00 +- 61% +368.1% 2541 +- 91% numa-vmstat.node3.nr_inactive_anon
543.33 +- 61% +367.8% 2541 +- 91% numa-vmstat.node3.nr_zone_inactive_anon
4.138e+13 -1.1% 4.09e+13 perf-stat.branch-instructions
6.569e+11 -2.0% 6.441e+11 perf-stat.branch-misses
2.645e+10 -1.2% 2.613e+10 perf-stat.context-switches
1.21 +1.2% 1.23 perf-stat.cpi
153343 +- 2% -12.1% 134776 perf-stat.cpu-migrations
5.966e+13 -1.3% 5.889e+13 perf-stat.dTLB-loads
3.736e+13 -1.2% 3.69e+13 perf-stat.dTLB-stores
5.85 +- 15% +8.8 14.67 +- 9% perf-stat.iTLB-load-miss-rate%
3.736e+09 +- 17% +161.3% 9.76e+09 +- 11% perf-stat.iTLB-load-misses
5.987e+10 -5.4% 5.667e+10 perf-stat.iTLB-loads
2.079e+14 -1.2% 2.054e+14 perf-stat.instructions
57547 +- 18% -62.9% 21340 +- 11% perf-stat.instructions-per-iTLB-miss
0.82 -1.2% 0.81 perf-stat.ipc
27502531 +- 8% +9.5% 30122136 +- 3% perf-stat.node-store-misses
1449 +- 27% -34.6% 948.85 sched_debug.cfs_rq:/.load.min
319416 +-115% -188.5% -282549 sched_debug.cfs_rq:/.spread0.avg
657044 +- 55% -88.3% 76887 +- 23% sched_debug.cfs_rq:/.spread0.max
-1525243 +54.6% -2357898 sched_debug.cfs_rq:/.spread0.min
101614 +- 6% +30.6% 132713 +- 19% sched_debug.cpu.avg_idle.stddev
11.54 +- 41% -61.2% 4.48 sched_debug.cpu.cpu_load[1].avg
1369 +- 67% -98.5% 20.67 +- 48% sched_debug.cpu.cpu_load[1].max
99.29 +- 67% -97.6% 2.35 +- 26% sched_debug.cpu.cpu_load[1].stddev
9.58 +- 38% -55.2% 4.29 sched_debug.cpu.cpu_load[2].avg
1024 +- 68% -98.5% 15.27 +- 36% sched_debug.cpu.cpu_load[2].max
74.51 +- 67% -97.3% 1.99 +- 15% sched_debug.cpu.cpu_load[2].stddev
7.37 +- 29% -42.0% 4.28 sched_debug.cpu.cpu_load[3].avg
600.58 +- 68% -97.9% 12.48 +- 20% sched_debug.cpu.cpu_load[3].max
43.98 +- 66% -95.8% 1.83 +- 5% sched_debug.cpu.cpu_load[3].stddev
5.95 +- 19% -28.1% 4.28 sched_debug.cpu.cpu_load[4].avg
325.39 +- 67% -96.4% 11.67 +- 10% sched_debug.cpu.cpu_load[4].max
24.19 +- 65% -92.5% 1.81 +- 3% sched_debug.cpu.cpu_load[4].stddev
907.23 +- 4% -14.1% 779.70 +- 10% sched_debug.cpu.nr_load_updates.stddev
0.00 +- 83% +122.5% 0.00 sched_debug.rt_rq:/.rt_time.min
8.49 +- 2% -0.3 8.21 +- 2% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
57.28 -0.3 57.01 perf-profile.calltrace.cycles-pp.read
5.06 -0.2 4.85 perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
4.98 -0.2 4.78 perf-profile.calltrace.cycles-pp.__switch_to.read
3.55 -0.2 3.39 +- 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.read
2.72 -0.1 2.60 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
2.67 -0.1 2.57 +- 2% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
3.40 -0.1 3.31 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
3.77 -0.1 3.68 perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.95 -0.1 1.88 perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.ksys_read
2.19 -0.1 2.13 perf-profile.calltrace.cycles-pp.__switch_to_asm.read
1.30 -0.1 1.25 perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
1.27 -0.1 1.22 +- 2% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
2.29 -0.0 2.24 perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.pipe_wait
0.96 -0.0 0.92 perf-profile.calltrace.cycles-pp.__calc_delta.update_curr.reweight_entity.dequeue_task_fair.__schedule
0.85 -0.0 0.81 +- 3% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
1.63 -0.0 1.59 perf-profile.calltrace.cycles-pp.native_write_msr.read
0.72 -0.0 0.69 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.__vfs_read.vfs_read
0.65 +- 2% -0.0 0.62 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.61 -0.0 0.58 +- 2% perf-profile.calltrace.cycles-pp.find_next_bit.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up
0.88 -0.0 0.85 perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.ksys_read
0.80 -0.0 0.77 +- 2% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.82 -0.0 0.79 perf-profile.calltrace.cycles-pp.prepare_to_wait.pipe_wait.pipe_read.__vfs_read.vfs_read
0.72 -0.0 0.70 perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.__vfs_write.vfs_write.ksys_write
0.56 +- 2% -0.0 0.53 perf-profile.calltrace.cycles-pp.update_rq_clock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.83 -0.0 0.81 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_read.__vfs_read.vfs_read.ksys_read
42.40 +0.3 42.69 perf-profile.calltrace.cycles-pp.write
31.80 +0.4 32.18 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.35 +0.5 24.84 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.ksys_read
20.36 +0.6 20.92 +- 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
22.01 +0.6 22.58 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
21.87 +0.6 22.46 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
3.15 +- 11% +1.0 4.12 +- 14% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.07 +- 34% +1.1 2.12 +- 31% perf-profile.calltrace.cycles-pp.tracing_record_taskinfo_sched_switch.__schedule.schedule.pipe_wait.pipe_read
0.66 +- 75% +1.1 1.72 +- 37% perf-profile.calltrace.cycles-pp.trace_save_cmdline.tracing_record_taskinfo.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.75 +- 74% +1.1 1.88 +- 34% perf-profile.calltrace.cycles-pp.tracing_record_taskinfo.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.69 +- 76% +1.2 1.85 +- 36% perf-profile.calltrace.cycles-pp.trace_save_cmdline.tracing_record_taskinfo_sched_switch.__schedule.schedule.pipe_wait
8.73 +- 2% -0.3 8.45 perf-profile.children.cycles-pp.dequeue_task_fair
57.28 -0.3 57.01 perf-profile.children.cycles-pp.read
6.95 -0.2 6.70 perf-profile.children.cycles-pp.syscall_return_via_sysret
5.57 -0.2 5.35 perf-profile.children.cycles-pp.reweight_entity
5.26 -0.2 5.05 perf-profile.children.cycles-pp.select_task_rq_fair
5.19 -0.2 4.99 perf-profile.children.cycles-pp.__switch_to
4.90 -0.2 4.73 +- 2% perf-profile.children.cycles-pp.update_curr
1.27 -0.1 1.13 +- 8% perf-profile.children.cycles-pp.fsnotify
3.92 -0.1 3.83 perf-profile.children.cycles-pp.select_idle_sibling
2.01 -0.1 1.93 perf-profile.children.cycles-pp.__calc_delta
2.14 -0.1 2.06 perf-profile.children.cycles-pp.copy_page_to_iter
1.58 -0.1 1.51 perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
2.90 -0.1 2.84 perf-profile.children.cycles-pp.update_cfs_group
1.93 -0.1 1.87 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
2.35 -0.1 2.29 perf-profile.children.cycles-pp.__switch_to_asm
1.33 -0.1 1.27 +- 3% perf-profile.children.cycles-pp.cpumask_next_wrap
2.57 -0.1 2.52 perf-profile.children.cycles-pp.load_new_mm_cr3
1.53 -0.1 1.47 +- 2% perf-profile.children.cycles-pp.__fdget_pos
1.11 -0.0 1.07 +- 2% perf-profile.children.cycles-pp.find_next_bit
1.18 -0.0 1.14 perf-profile.children.cycles-pp.update_rq_clock
0.88 -0.0 0.83 perf-profile.children.cycles-pp.copy_user_generic_unrolled
1.70 -0.0 1.65 perf-profile.children.cycles-pp.native_write_msr
0.97 -0.0 0.93 +- 2% perf-profile.children.cycles-pp.account_entity_dequeue
0.59 -0.0 0.56 perf-profile.children.cycles-pp.finish_task_switch
0.91 -0.0 0.88 perf-profile.children.cycles-pp.touch_atime
0.69 -0.0 0.65 perf-profile.children.cycles-pp.account_entity_enqueue
2.13 -0.0 2.09 perf-profile.children.cycles-pp.mutex_lock
0.32 +- 3% -0.0 0.29 +- 4% perf-profile.children.cycles-pp.__sb_start_write
0.84 -0.0 0.81 +- 2% perf-profile.children.cycles-pp.___perf_sw_event
0.89 -0.0 0.87 perf-profile.children.cycles-pp.prepare_to_wait
0.73 -0.0 0.71 perf-profile.children.cycles-pp.copyout
0.31 +- 2% -0.0 0.28 +- 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.46 +- 2% -0.0 0.44 perf-profile.children.cycles-pp.anon_pipe_buf_release
0.38 -0.0 0.36 +- 3% perf-profile.children.cycles-pp.idle_cpu
0.32 -0.0 0.30 +- 2% perf-profile.children.cycles-pp.__x64_sys_read
0.21 +- 2% -0.0 0.20 +- 2% perf-profile.children.cycles-pp.deactivate_task
0.13 -0.0 0.12 +- 4% perf-profile.children.cycles-pp.timespec_trunc
0.09 -0.0 0.08 perf-profile.children.cycles-pp.iov_iter_init
0.08 -0.0 0.07 perf-profile.children.cycles-pp.native_load_tls
0.11 +- 4% +0.0 0.12 perf-profile.children.cycles-pp.tick_sched_timer
0.08 +- 5% +0.0 0.10 +- 4% perf-profile.children.cycles-pp.finish_wait
0.38 +- 2% +0.0 0.40 +- 2% perf-profile.children.cycles-pp.file_update_time
0.31 +0.0 0.33 +- 2% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.24 +- 3% +0.0 0.26 +- 3% perf-profile.children.cycles-pp.rcu_all_qs
0.39 +0.0 0.41 perf-profile.children.cycles-pp._cond_resched
0.05 +0.0 0.07 +- 6% perf-profile.children.cycles-pp.default_wake_function
0.23 +- 2% +0.0 0.26 +- 3% perf-profile.children.cycles-pp.current_time
0.30 +0.0 0.35 +- 2% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
0.52 +0.1 0.58 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.00 +0.1 0.08 +- 5% perf-profile.children.cycles-pp.hrtick_update
42.40 +0.3 42.69 perf-profile.children.cycles-pp.write
31.86 +0.4 32.26 perf-profile.children.cycles-pp.__vfs_read
24.40 +0.5 24.89 perf-profile.children.cycles-pp.pipe_wait
20.40 +0.6 20.96 +- 2% perf-profile.children.cycles-pp.try_to_wake_up
22.30 +0.6 22.89 perf-profile.children.cycles-pp.schedule
22.22 +0.6 22.84 perf-profile.children.cycles-pp.__schedule
0.99 +- 36% +0.9 1.94 +- 32% perf-profile.children.cycles-pp.tracing_record_taskinfo
3.30 +- 10% +1.0 4.27 +- 13% perf-profile.children.cycles-pp.ttwu_do_wakeup
1.14 +- 31% +1.1 2.24 +- 29% perf-profile.children.cycles-pp.tracing_record_taskinfo_sched_switch
1.59 +- 46% +2.0 3.60 +- 36% perf-profile.children.cycles-pp.trace_save_cmdline
6.95 -0.2 6.70 perf-profile.self.cycles-pp.syscall_return_via_sysret
5.19 -0.2 4.99 perf-profile.self.cycles-pp.__switch_to
1.27 -0.1 1.12 +- 8% perf-profile.self.cycles-pp.fsnotify
1.49 -0.1 1.36 perf-profile.self.cycles-pp.select_task_rq_fair
2.47 -0.1 2.37 +- 2% perf-profile.self.cycles-pp.reweight_entity
0.29 -0.1 0.19 +- 2% perf-profile.self.cycles-pp.ksys_read
1.50 -0.1 1.42 perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
2.01 -0.1 1.93 perf-profile.self.cycles-pp.__calc_delta
1.93 -0.1 1.86 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.47 -0.1 1.40 perf-profile.self.cycles-pp.dequeue_task_fair
2.90 -0.1 2.84 perf-profile.self.cycles-pp.update_cfs_group
1.29 -0.1 1.23 perf-profile.self.cycles-pp.do_syscall_64
2.57 -0.1 2.52 perf-profile.self.cycles-pp.load_new_mm_cr3
2.28 -0.1 2.23 perf-profile.self.cycles-pp.__switch_to_asm
1.80 -0.1 1.75 perf-profile.self.cycles-pp.select_idle_sibling
1.11 -0.0 1.07 +- 2% perf-profile.self.cycles-pp.find_next_bit
0.87 -0.0 0.83 perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.43 -0.0 0.39 +- 2% perf-profile.self.cycles-pp.dequeue_entity
1.70 -0.0 1.65 perf-profile.self.cycles-pp.native_write_msr
0.92 -0.0 0.88 +- 2% perf-profile.self.cycles-pp.account_entity_dequeue
0.48 -0.0 0.44 perf-profile.self.cycles-pp.finish_task_switch
0.77 -0.0 0.74 perf-profile.self.cycles-pp.___perf_sw_event
0.66 -0.0 0.63 perf-profile.self.cycles-pp.account_entity_enqueue
0.46 +- 2% -0.0 0.43 +- 2% perf-profile.self.cycles-pp.anon_pipe_buf_release
0.32 +- 3% -0.0 0.29 +- 4% perf-profile.self.cycles-pp.__sb_start_write
0.31 +- 2% -0.0 0.28 +- 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.38 -0.0 0.36 +- 3% perf-profile.self.cycles-pp.idle_cpu
0.19 +- 4% -0.0 0.17 +- 2% perf-profile.self.cycles-pp.__fdget_pos
0.50 -0.0 0.48 perf-profile.self.cycles-pp.__atime_needs_update
0.23 +- 2% -0.0 0.21 +- 3% perf-profile.self.cycles-pp.touch_atime
0.31 -0.0 0.30 perf-profile.self.cycles-pp.__x64_sys_read
0.21 +- 2% -0.0 0.20 +- 2% perf-profile.self.cycles-pp.deactivate_task
0.21 +- 2% -0.0 0.19 perf-profile.self.cycles-pp.check_preempt_curr
0.40 -0.0 0.39 perf-profile.self.cycles-pp.autoremove_wake_function
0.40 -0.0 0.38 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.27 -0.0 0.26 perf-profile.self.cycles-pp.pipe_wait
0.13 -0.0 0.12 +- 4% perf-profile.self.cycles-pp.timespec_trunc
0.22 +- 2% -0.0 0.20 +- 2% perf-profile.self.cycles-pp.put_prev_entity
0.09 -0.0 0.08 perf-profile.self.cycles-pp.iov_iter_init
0.08 -0.0 0.07 perf-profile.self.cycles-pp.native_load_tls
0.11 -0.0 0.10 perf-profile.self.cycles-pp.schedule
0.12 +- 4% +0.0 0.13 perf-profile.self.cycles-pp.copyin
0.08 +- 5% +0.0 0.10 +- 4% perf-profile.self.cycles-pp.finish_wait
0.18 +0.0 0.20 +- 2% perf-profile.self.cycles-pp.ttwu_do_activate
0.28 +- 2% +0.0 0.30 +- 2% perf-profile.self.cycles-pp._cond_resched
0.24 +- 3% +0.0 0.26 +- 3% perf-profile.self.cycles-pp.rcu_all_qs
0.05 +0.0 0.07 +- 6% perf-profile.self.cycles-pp.default_wake_function
0.08 +- 14% +0.0 0.11 +- 14% perf-profile.self.cycles-pp.tracing_record_taskinfo_sched_switch
0.51 +0.0 0.55 +- 4% perf-profile.self.cycles-pp.vfs_write
0.30 +0.0 0.35 +- 2% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
0.52 +0.1 0.58 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.00 +0.1 0.08 +- 5% perf-profile.self.cycles-pp.hrtick_update
1.97 +0.1 2.07 +- 2% perf-profile.self.cycles-pp.switch_mm_irqs_off
1.59 +- 46% +2.0 3.60 +- 36% perf-profile.self.cycles-pp.trace_save_cmdline
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/brk1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 kmsg.pstore:crypto_comp_decompress_failed,ret=
:3 33% 1:3 kmsg.pstore:decompression_failed
%stddev %change %stddev
\ | \
997317 -2.0% 977778 will-it-scale.per_process_ops
957.00 -7.9% 881.00 +- 3% will-it-scale.per_thread_ops
18.42 +- 3% -8.2% 16.90 will-it-scale.time.user_time
1.917e+08 -2.0% 1.879e+08 will-it-scale.workload
18.42 +- 3% -8.2% 16.90 time.user_time
0.30 +- 11% -36.7% 0.19 +- 11% turbostat.Pkg%pc2
57539 +- 51% +140.6% 138439 +- 31% meminfo.CmaFree
410877 +- 11% -22.1% 320082 +- 22% meminfo.DirectMap4k
343575 +- 27% +71.3% 588703 +- 31% numa-numastat.node0.local_node
374176 +- 24% +63.3% 611007 +- 27% numa-numastat.node0.numa_hit
1056347 +- 4% -39.9% 634843 +- 38% numa-numastat.node3.local_node
1060682 +- 4% -39.0% 646862 +- 35% numa-numastat.node3.numa_hit
14383 +- 51% +140.6% 34608 +- 31% proc-vmstat.nr_free_cma
179.00 +2.4% 183.33 proc-vmstat.nr_inactive_file
179.00 +2.4% 183.33 proc-vmstat.nr_zone_inactive_file
564483 +- 3% -38.0% 350064 +- 36% proc-vmstat.pgalloc_movable
1811959 +10.8% 2008488 +- 5% proc-vmstat.pgalloc_normal
7153 +- 42% -94.0% 431.33 +-119% latency_stats.max.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6627 +-141% +380.5% 31843 +-110% latency_stats.max.call_rwsem_down_write_failed_killable.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
15244 +- 31% -99.9% 15.00 +-141% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.__get_user_8.exit_robust_list.mm_release.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
4301 +-117% -83.7% 700.33 +- 6% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
12153 +- 28% -83.1% 2056 +- 70% latency_stats.sum.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6772 +-141% +1105.8% 81665 +-127% latency_stats.sum.call_rwsem_down_write_failed_killable.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.465e+13 -1.3% 2.434e+13 perf-stat.branch-instructions
2.691e+11 -2.1% 2.635e+11 perf-stat.branch-misses
3.402e+13 -1.4% 3.355e+13 perf-stat.dTLB-loads
1.694e+13 +1.4% 1.718e+13 perf-stat.dTLB-stores
1.75 +- 50% +4.7 6.45 +- 11% perf-stat.iTLB-load-miss-rate%
4.077e+08 +- 48% +232.3% 1.355e+09 +- 11% perf-stat.iTLB-load-misses
2.31e+10 +- 2% -14.9% 1.965e+10 +- 3% perf-stat.iTLB-loads
1.163e+14 -1.6% 1.144e+14 perf-stat.instructions
346171 +- 36% -75.3% 85575 +- 11% perf-stat.instructions-per-iTLB-miss
6.174e+08 +- 2% -9.5% 5.589e+08 perf-stat.node-store-misses
595.00 +- 10% +31.4% 782.00 +- 3% slabinfo.Acpi-State.active_objs
595.00 +- 10% +31.4% 782.00 +- 3% slabinfo.Acpi-State.num_objs
2831 +- 3% -14.0% 2434 +- 5% slabinfo.avtab_node.active_objs
2831 +- 3% -14.0% 2434 +- 5% slabinfo.avtab_node.num_objs
934.00 -10.9% 832.33 +- 5% slabinfo.inotify_inode_mark.active_objs
934.00 -10.9% 832.33 +- 5% slabinfo.inotify_inode_mark.num_objs
1232 +- 4% +13.4% 1397 +- 6% slabinfo.nsproxy.active_objs
1232 +- 4% +13.4% 1397 +- 6% slabinfo.nsproxy.num_objs
499.67 +- 12% +24.8% 623.67 +- 10% slabinfo.secpath_cache.active_objs
499.67 +- 12% +24.8% 623.67 +- 10% slabinfo.secpath_cache.num_objs
31393 +- 84% +220.1% 100477 +- 21% numa-meminfo.node0.Active
31393 +- 84% +220.1% 100477 +- 21% numa-meminfo.node0.Active(anon)
30013 +- 85% +232.1% 99661 +- 21% numa-meminfo.node0.AnonPages
21603 +- 34% -85.0% 3237 +-100% numa-meminfo.node0.Inactive
21528 +- 34% -85.0% 3237 +-100% numa-meminfo.node0.Inactive(anon)
10247 +- 35% -46.4% 5495 numa-meminfo.node0.Mapped
35388 +- 14% -41.6% 20670 +- 15% numa-meminfo.node0.SReclaimable
22911 +- 29% -82.3% 4057 +- 84% numa-meminfo.node0.Shmem
117387 +- 9% -22.5% 90986 +- 12% numa-meminfo.node0.Slab
68863 +- 67% +77.7% 122351 +- 13% numa-meminfo.node1.Active
68863 +- 67% +77.7% 122351 +- 13% numa-meminfo.node1.Active(anon)
228376 +22.3% 279406 +- 17% numa-meminfo.node1.FilePages
1481 +-116% +1062.1% 17218 +- 39% numa-meminfo.node1.Inactive
1481 +-116% +1062.0% 17216 +- 39% numa-meminfo.node1.Inactive(anon)
6593 +- 2% +11.7% 7367 +- 3% numa-meminfo.node1.KernelStack
596227 +- 8% +18.0% 703748 +- 4% numa-meminfo.node1.MemUsed
15298 +- 12% +88.5% 28843 +- 36% numa-meminfo.node1.SReclaimable
52718 +- 9% +21.0% 63810 +- 11% numa-meminfo.node1.SUnreclaim
1808 +- 97% +2723.8% 51054 +- 97% numa-meminfo.node1.Shmem
68017 +- 5% +36.2% 92654 +- 18% numa-meminfo.node1.Slab
125541 +- 29% -64.9% 44024 +- 98% numa-meminfo.node3.Active
125137 +- 29% -65.0% 43823 +- 98% numa-meminfo.node3.Active(anon)
93173 +- 25% -87.8% 11381 +- 20% numa-meminfo.node3.AnonPages
9150 +- 5% -9.3% 8301 +- 8% numa-meminfo.node3.KernelStack
7848 +- 84% +220.0% 25118 +- 21% numa-vmstat.node0.nr_active_anon
7503 +- 85% +232.1% 24914 +- 21% numa-vmstat.node0.nr_anon_pages
5381 +- 34% -85.0% 809.00 +-100% numa-vmstat.node0.nr_inactive_anon
2559 +- 35% -46.4% 1372 numa-vmstat.node0.nr_mapped
5727 +- 29% -82.3% 1014 +- 84% numa-vmstat.node0.nr_shmem
8846 +- 14% -41.6% 5167 +- 15% numa-vmstat.node0.nr_slab_reclaimable
7848 +- 84% +220.0% 25118 +- 21% numa-vmstat.node0.nr_zone_active_anon
5381 +- 34% -85.0% 809.00 +-100% numa-vmstat.node0.nr_zone_inactive_anon
4821 +- 2% +30.3% 6283 +- 15% numa-vmstat.node1
17215 +- 67% +77.7% 30591 +- 13% numa-vmstat.node1.nr_active_anon
57093 +22.3% 69850 +- 17% numa-vmstat.node1.nr_file_pages
370.00 +-116% +1061.8% 4298 +- 39% numa-vmstat.node1.nr_inactive_anon
6593 +- 2% +11.7% 7366 +- 3% numa-vmstat.node1.nr_kernel_stack
451.67 +- 97% +2725.6% 12762 +- 97% numa-vmstat.node1.nr_shmem
3824 +- 12% +88.6% 7211 +- 36% numa-vmstat.node1.nr_slab_reclaimable
13179 +- 9% +21.0% 15952 +- 11% numa-vmstat.node1.nr_slab_unreclaimable
17215 +- 67% +77.7% 30591 +- 13% numa-vmstat.node1.nr_zone_active_anon
370.00 +-116% +1061.8% 4298 +- 39% numa-vmstat.node1.nr_zone_inactive_anon
364789 +- 12% +62.8% 593926 +- 34% numa-vmstat.node1.numa_hit
239539 +- 19% +95.4% 468113 +- 43% numa-vmstat.node1.numa_local
71.00 +- 28% +42.3% 101.00 numa-vmstat.node2.nr_mlock
31285 +- 29% -65.0% 10960 +- 98% numa-vmstat.node3.nr_active_anon
23292 +- 25% -87.8% 2844 +- 19% numa-vmstat.node3.nr_anon_pages
14339 +- 52% +141.1% 34566 +- 32% numa-vmstat.node3.nr_free_cma
9151 +- 5% -9.3% 8299 +- 8% numa-vmstat.node3.nr_kernel_stack
31305 +- 29% -64.9% 10975 +- 98% numa-vmstat.node3.nr_zone_active_anon
930131 +- 3% -35.9% 596006 +- 34% numa-vmstat.node3.numa_hit
836455 +- 3% -40.9% 493947 +- 44% numa-vmstat.node3.numa_local
75182 +- 58% -83.8% 12160 +- 2% sched_debug.cfs_rq:/.load.max
6.65 +- 5% -10.6% 5.94 +- 6% sched_debug.cfs_rq:/.load_avg.avg
0.16 +- 7% +22.6% 0.20 +- 12% sched_debug.cfs_rq:/.nr_running.stddev
5.58 +- 24% +427.7% 29.42 +- 93% sched_debug.cfs_rq:/.nr_spread_over.max
0.54 +- 15% +306.8% 2.19 +- 86% sched_debug.cfs_rq:/.nr_spread_over.stddev
1.05 +- 25% -65.1% 0.37 +- 71% sched_debug.cfs_rq:/.removed.load_avg.avg
9.62 +- 11% -50.7% 4.74 +- 70% sched_debug.cfs_rq:/.removed.load_avg.stddev
48.70 +- 25% -65.1% 17.02 +- 71% sched_debug.cfs_rq:/.removed.runnable_sum.avg
444.31 +- 11% -50.7% 219.26 +- 70% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
0.47 +- 13% -60.9% 0.19 +- 71% sched_debug.cfs_rq:/.removed.util_avg.avg
4.47 +- 4% -46.5% 2.39 +- 70% sched_debug.cfs_rq:/.removed.util_avg.stddev
1.64 +- 7% +22.1% 2.00 +- 13% sched_debug.cfs_rq:/.runnable_load_avg.stddev
74653 +- 59% -84.4% 11676 sched_debug.cfs_rq:/.runnable_weight.max
-119169 -491.3% 466350 +- 27% sched_debug.cfs_rq:/.spread0.avg
517161 +- 30% +145.8% 1271292 +- 23% sched_debug.cfs_rq:/.spread0.max
624.79 +- 5% -14.2% 535.76 +- 7% sched_debug.cfs_rq:/.util_est_enqueued.avg
247.91 +- 32% -99.8% 0.48 +- 8% sched_debug.cfs_rq:/.util_est_enqueued.min
179704 +- 3% +30.4% 234297 +- 16% sched_debug.cpu.avg_idle.stddev
1.56 +- 9% +24.4% 1.94 +- 14% sched_debug.cpu.cpu_load[0].stddev
1.50 +- 6% +27.7% 1.91 +- 14% sched_debug.cpu.cpu_load[1].stddev
1.45 +- 3% +30.8% 1.90 +- 14% sched_debug.cpu.cpu_load[2].stddev
1.43 +- 3% +36.1% 1.95 +- 11% sched_debug.cpu.cpu_load[3].stddev
1.55 +- 7% +43.5% 2.22 +- 7% sched_debug.cpu.cpu_load[4].stddev
10004 +- 3% -11.6% 8839 +- 3% sched_debug.cpu.curr->pid.avg
1146 +- 26% +52.2% 1745 +- 7% sched_debug.cpu.curr->pid.min
3162 +- 6% +25.4% 3966 +- 11% sched_debug.cpu.curr->pid.stddev
403738 +- 3% -11.7% 356696 +- 7% sched_debug.cpu.nr_switches.max
0.08 +- 21% +78.2% 0.14 +- 14% sched_debug.cpu.nr_uninterruptible.avg
404435 +- 3% -11.8% 356732 +- 7% sched_debug.cpu.sched_count.max
4.17 -0.3 3.87 perf-profile.calltrace.cycles-pp.kmem_cache_alloc.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.40 -0.2 2.17 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
7.58 -0.2 7.36 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.00 -0.2 14.81 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.brk
7.83 -0.2 7.66 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
28.66 -0.1 28.51 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.15 -0.1 2.03 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.07 -0.1 0.99 perf-profile.calltrace.cycles-pp.memcpy_erms.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk
1.03 -0.1 0.95 perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
7.33 -0.1 7.25 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk
0.76 -0.1 0.69 perf-profile.calltrace.cycles-pp.__vm_enough_memory.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.85 -0.1 11.77 perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.64 -0.1 1.57 perf-profile.calltrace.cycles-pp.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.06 -0.1 0.99 perf-profile.calltrace.cycles-pp.__indirect_thunk_start.brk
0.73 -0.1 0.67 perf-profile.calltrace.cycles-pp.sync_mm_rss.unmap_page_range.unmap_vmas.unmap_region.do_munmap
4.59 -0.1 4.52 perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.82 -0.1 2.76 perf-profile.calltrace.cycles-pp.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64
2.89 -0.1 2.84 perf-profile.calltrace.cycles-pp.down_write_killable.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.37 -0.1 3.32 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.99 -0.0 1.94 perf-profile.calltrace.cycles-pp.cred_has_capability.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk
2.32 -0.0 2.27 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.88 -0.0 1.84 perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64
0.77 -0.0 0.73 perf-profile.calltrace.cycles-pp._raw_spin_lock.unmap_page_range.unmap_vmas.unmap_region.do_munmap
1.62 -0.0 1.59 perf-profile.calltrace.cycles-pp.memset_erms.kmem_cache_alloc.do_brk_flags.__x64_sys_brk.do_syscall_64
0.81 -0.0 0.79 perf-profile.calltrace.cycles-pp.___might_sleep.down_write_killable.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 -0.0 0.64 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.brk
0.72 +0.0 0.74 perf-profile.calltrace.cycles-pp.do_munmap.brk
0.90 +0.0 0.93 perf-profile.calltrace.cycles-pp.___might_sleep.unmap_page_range.unmap_vmas.unmap_region.do_munmap
4.40 +0.1 4.47 perf-profile.calltrace.cycles-pp.find_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.96 +0.1 2.09 perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.52 +- 2% +0.2 0.68 perf-profile.calltrace.cycles-pp.__vma_link_rb.brk
0.35 +- 70% +0.2 0.54 +- 2% perf-profile.calltrace.cycles-pp.find_vma.brk
2.20 +0.3 2.50 perf-profile.calltrace.cycles-pp.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.62 +0.3 64.94 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
60.53 +0.4 60.92 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
63.20 +0.4 63.60 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.73 +0.5 4.26 perf-profile.calltrace.cycles-pp.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.56 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
24.54 +0.6 25.14 perf-profile.calltrace.cycles-pp.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.put_vma.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.71 +0.6 1.36 perf-profile.calltrace.cycles-pp.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.70 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64
3.10 +0.7 3.82 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64
0.00 +0.8 0.76 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
0.00 +0.8 0.85 perf-profile.calltrace.cycles-pp.__vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.09 -0.5 4.62 perf-profile.children.cycles-pp.vma_compute_subtree_gap
4.54 -0.3 4.21 perf-profile.children.cycles-pp.kmem_cache_alloc
8.11 -0.2 7.89 perf-profile.children.cycles-pp.perf_event_mmap
8.05 -0.2 7.85 perf-profile.children.cycles-pp.unmap_vmas
15.01 -0.2 14.81 perf-profile.children.cycles-pp.syscall_return_via_sysret
29.20 -0.1 29.06 perf-profile.children.cycles-pp.do_brk_flags
1.11 -0.1 1.00 perf-profile.children.cycles-pp.kmem_cache_free
12.28 -0.1 12.17 perf-profile.children.cycles-pp.unmap_region
7.83 -0.1 7.74 perf-profile.children.cycles-pp.unmap_page_range
0.87 +- 3% -0.1 0.79 perf-profile.children.cycles-pp.__vm_enough_memory
1.29 -0.1 1.22 perf-profile.children.cycles-pp.__indirect_thunk_start
1.81 -0.1 1.74 perf-profile.children.cycles-pp.strlcpy
4.65 -0.1 4.58 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
3.08 -0.1 3.02 perf-profile.children.cycles-pp.down_write_killable
2.88 -0.1 2.82 perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.73 -0.1 0.67 perf-profile.children.cycles-pp.sync_mm_rss
3.65 -0.1 3.59 perf-profile.children.cycles-pp.get_unmapped_area
2.26 -0.1 2.20 perf-profile.children.cycles-pp.cred_has_capability
1.12 -0.1 1.07 perf-profile.children.cycles-pp.memcpy_erms
0.39 -0.0 0.35 perf-profile.children.cycles-pp.__rb_insert_augmented
2.52 -0.0 2.48 perf-profile.children.cycles-pp.perf_iterate_sb
2.13 -0.0 2.09 perf-profile.children.cycles-pp.security_mmap_addr
0.55 +- 2% -0.0 0.52 perf-profile.children.cycles-pp.unmap_single_vma
1.62 -0.0 1.59 perf-profile.children.cycles-pp.memset_erms
0.13 +- 3% -0.0 0.11 +- 4% perf-profile.children.cycles-pp.__vma_link_file
0.80 -0.0 0.77 perf-profile.children.cycles-pp._raw_spin_lock
0.43 -0.0 0.41 perf-profile.children.cycles-pp.strlen
0.07 +- 6% -0.0 0.06 +- 8% perf-profile.children.cycles-pp.should_failslab
0.43 -0.0 0.42 perf-profile.children.cycles-pp.may_expand_vm
0.15 +0.0 0.16 perf-profile.children.cycles-pp.__vma_link_list
0.45 +0.0 0.47 perf-profile.children.cycles-pp.rcu_all_qs
0.81 +0.1 0.89 perf-profile.children.cycles-pp.free_pgtables
6.35 +0.1 6.49 perf-profile.children.cycles-pp.find_vma
2.28 +0.2 2.45 perf-profile.children.cycles-pp.vmacache_find
64.66 +0.3 64.98 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.42 +0.3 2.76 perf-profile.children.cycles-pp.remove_vma
61.77 +0.4 62.13 perf-profile.children.cycles-pp.__x64_sys_brk
63.40 +0.4 63.79 perf-profile.children.cycles-pp.do_syscall_64
1.27 +0.4 1.72 perf-profile.children.cycles-pp.__vma_rb_erase
4.02 +0.5 4.53 perf-profile.children.cycles-pp.vma_link
25.26 +0.6 25.89 perf-profile.children.cycles-pp.do_munmap
0.00 +0.7 0.70 perf-profile.children.cycles-pp.put_vma
3.80 +0.7 4.53 perf-profile.children.cycles-pp.__vma_link_rb
0.00 +1.2 1.24 perf-profile.children.cycles-pp.__vma_merge
0.00 +1.5 1.51 perf-profile.children.cycles-pp._raw_write_lock
5.07 -0.5 4.60 perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.59 -0.2 0.38 perf-profile.self.cycles-pp.remove_vma
15.01 -0.2 14.81 perf-profile.self.cycles-pp.syscall_return_via_sysret
3.15 -0.2 2.96 perf-profile.self.cycles-pp.do_munmap
0.98 -0.1 0.87 perf-profile.self.cycles-pp.__vma_rb_erase
1.10 -0.1 0.99 perf-profile.self.cycles-pp.kmem_cache_free
0.68 -0.1 0.58 perf-profile.self.cycles-pp.__vm_enough_memory
0.42 -0.1 0.33 perf-profile.self.cycles-pp.unmap_vmas
3.62 -0.1 3.53 perf-profile.self.cycles-pp.perf_event_mmap
1.41 -0.1 1.34 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.29 -0.1 1.22 perf-profile.self.cycles-pp.__indirect_thunk_start
0.73 -0.1 0.66 perf-profile.self.cycles-pp.sync_mm_rss
2.96 -0.1 2.90 perf-profile.self.cycles-pp.__x64_sys_brk
3.24 -0.1 3.19 perf-profile.self.cycles-pp.brk
1.11 -0.0 1.07 perf-profile.self.cycles-pp.memcpy_erms
0.53 +- 3% -0.0 0.49 +- 2% perf-profile.self.cycles-pp.vma_link
0.73 -0.0 0.69 perf-profile.self.cycles-pp.unmap_region
1.66 -0.0 1.61 perf-profile.self.cycles-pp.down_write_killable
0.39 -0.0 0.35 perf-profile.self.cycles-pp.__rb_insert_augmented
1.74 -0.0 1.71 perf-profile.self.cycles-pp.kmem_cache_alloc
0.55 +- 2% -0.0 0.52 perf-profile.self.cycles-pp.unmap_single_vma
1.61 -0.0 1.59 perf-profile.self.cycles-pp.memset_erms
0.80 -0.0 0.77 perf-profile.self.cycles-pp._raw_spin_lock
0.13 -0.0 0.11 +- 4% perf-profile.self.cycles-pp.__vma_link_file
0.43 -0.0 0.41 perf-profile.self.cycles-pp.strlen
0.07 +- 6% -0.0 0.06 +- 8% perf-profile.self.cycles-pp.should_failslab
0.81 -0.0 0.79 perf-profile.self.cycles-pp.tlb_finish_mmu
0.15 +0.0 0.16 perf-profile.self.cycles-pp.__vma_link_list
0.45 +0.0 0.47 perf-profile.self.cycles-pp.rcu_all_qs
0.71 +0.0 0.72 perf-profile.self.cycles-pp.strlcpy
0.51 +0.1 0.56 perf-profile.self.cycles-pp.free_pgtables
1.41 +0.1 1.48 perf-profile.self.cycles-pp.__vma_link_rb
2.27 +0.2 2.44 perf-profile.self.cycles-pp.vmacache_find
0.00 +0.7 0.69 perf-profile.self.cycles-pp.put_vma
0.00 +1.2 1.23 perf-profile.self.cycles-pp.__vma_merge
0.00 +1.5 1.50 perf-profile.self.cycles-pp._raw_write_lock
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/brk1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
:3 33% 1:3 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
%stddev %change %stddev
\ | \
998475 -2.2% 976893 will-it-scale.per_process_ops
625.87 -2.3% 611.42 will-it-scale.time.elapsed_time
625.87 -2.3% 611.42 will-it-scale.time.elapsed_time.max
8158 -1.9% 8000 will-it-scale.time.maximum_resident_set_size
18.42 +- 2% -11.9% 16.24 will-it-scale.time.user_time
34349225 +- 13% -14.5% 29371024 +- 17% will-it-scale.time.voluntary_context_switches
1.919e+08 -2.2% 1.877e+08 will-it-scale.workload
1639 +- 23% -18.4% 1337 +- 30% meminfo.Mlocked
17748 +- 82% +103.1% 36051 numa-numastat.node3.other_node
33410486 +- 14% -14.8% 28449258 +- 18% cpuidle.C1.usage
698749 +- 15% -18.0% 573307 +- 20% cpuidle.POLL.usage
3013702 +- 14% -15.1% 2559405 +- 17% softirqs.SCHED
54361293 +- 2% -19.0% 44044816 +- 2% softirqs.TIMER
33408303 +- 14% -14.9% 28447123 +- 18% turbostat.C1
0.34 +- 16% -52.0% 0.16 +- 15% turbostat.Pkg%pc2
1310 +- 74% +412.1% 6710 +- 58% irq_exception_noise.__do_page_fault.samples
3209 +- 74% +281.9% 12258 +- 53% irq_exception_noise.__do_page_fault.sum
600.67 +-132% -96.0% 24.00 +- 23% irq_exception_noise.irq_nr
99557 +- 7% -24.0% 75627 +- 7% irq_exception_noise.softirq_nr
41424 +- 9% -24.6% 31253 +- 6% irq_exception_noise.softirq_time
625.87 -2.3% 611.42 time.elapsed_time
625.87 -2.3% 611.42 time.elapsed_time.max
8158 -1.9% 8000 time.maximum_resident_set_size
18.42 +- 2% -11.9% 16.24 time.user_time
34349225 +- 13% -14.5% 29371024 +- 17% time.voluntary_context_switches
988.00 +- 8% +14.5% 1131 +- 2% slabinfo.Acpi-ParseExt.active_objs
988.00 +- 8% +14.5% 1131 +- 2% slabinfo.Acpi-ParseExt.num_objs
2384 +- 3% +21.1% 2888 +- 11% slabinfo.pool_workqueue.active_objs
2474 +- 2% +20.4% 2979 +- 11% slabinfo.pool_workqueue.num_objs
490.33 +- 10% -19.2% 396.00 +- 11% slabinfo.secpath_cache.active_objs
490.33 +- 10% -19.2% 396.00 +- 11% slabinfo.secpath_cache.num_objs
1123 +- 7% +14.2% 1282 +- 3% slabinfo.skbuff_fclone_cache.active_objs
1123 +- 7% +14.2% 1282 +- 3% slabinfo.skbuff_fclone_cache.num_objs
1.09 -0.0 1.07 perf-stat.branch-miss-rate%
2.691e+11 -2.4% 2.628e+11 perf-stat.branch-misses
71981351 +- 12% -13.8% 62013509 +- 16% perf-stat.context-switches
1.697e+13 +1.1% 1.715e+13 perf-stat.dTLB-stores
2.36 +- 29% +4.4 6.76 +- 11% perf-stat.iTLB-load-miss-rate%
5.21e+08 +- 28% +194.8% 1.536e+09 +- 10% perf-stat.iTLB-load-misses
239983 +- 24% -68.4% 75819 +- 11% perf-stat.instructions-per-iTLB-miss
3295653 +- 2% -6.3% 3088753 +- 3% perf-stat.node-stores
606239 +1.1% 612799 perf-stat.path-length
3755 +- 28% -37.5% 2346 +- 52% sched_debug.cfs_rq:/.exec_clock.stddev
10.45 +- 4% +24.3% 12.98 +- 18% sched_debug.cfs_rq:/.load_avg.stddev
6243 +- 46% -38.6% 3831 +- 78% sched_debug.cpu.load.stddev
867.80 +- 7% +25.3% 1087 +- 6% sched_debug.cpu.nr_load_updates.stddev
395898 +- 3% -11.1% 352071 +- 7% sched_debug.cpu.nr_switches.max
-13.33 -21.1% -10.52 sched_debug.cpu.nr_uninterruptible.min
395674 +- 3% -11.1% 351762 +- 7% sched_debug.cpu.sched_count.max
33152 +- 4% -12.8% 28899 sched_debug.cpu.ttwu_count.min
0.03 +- 20% +77.7% 0.05 +- 15% sched_debug.rt_rq:/.rt_time.max
89523 +1.8% 91099 proc-vmstat.nr_active_anon
409.67 +- 23% -18.4% 334.33 +- 30% proc-vmstat.nr_mlock
89530 +1.8% 91117 proc-vmstat.nr_zone_active_anon
2337130 -2.2% 2286775 proc-vmstat.numa_hit
2229090 -2.3% 2178626 proc-vmstat.numa_local
8460 +- 39% -75.5% 2076 +- 53% proc-vmstat.numa_pages_migrated
28643 +- 55% -83.5% 4727 +- 58% proc-vmstat.numa_pte_updates
2695806 -1.8% 2646639 proc-vmstat.pgfault
2330191 -2.1% 2281197 proc-vmstat.pgfree
8460 +- 39% -75.5% 2076 +- 53% proc-vmstat.pgmigrate_success
237651 +- 2% +31.3% 312092 +- 16% numa-meminfo.node0.FilePages
8059 +- 2% +10.7% 8925 +- 7% numa-meminfo.node0.KernelStack
6830 +- 25% +48.8% 10164 +- 35% numa-meminfo.node0.Mapped
1612 +- 21% +70.0% 2740 +- 19% numa-meminfo.node0.PageTables
10772 +- 65% +679.4% 83962 +- 59% numa-meminfo.node0.Shmem
163195 +- 15% -36.9% 103036 +- 32% numa-meminfo.node1.Active
163195 +- 15% -36.9% 103036 +- 32% numa-meminfo.node1.Active(anon)
1730 +- 4% +33.9% 2317 +- 14% numa-meminfo.node1.PageTables
55778 +- 19% +32.5% 73910 +- 8% numa-meminfo.node1.SUnreclaim
2671 +- 16% -45.0% 1469 +- 15% numa-meminfo.node2.PageTables
61537 +- 13% -17.7% 50647 +- 3% numa-meminfo.node2.SUnreclaim
48644 +- 94% +149.8% 121499 +- 11% numa-meminfo.node3.Active
48440 +- 94% +150.4% 121295 +- 11% numa-meminfo.node3.Active(anon)
11832 +- 79% -91.5% 1008 +- 67% numa-meminfo.node3.Inactive
11597 +- 82% -93.3% 772.00 +- 82% numa-meminfo.node3.Inactive(anon)
10389 +- 32% -43.0% 5921 +- 6% numa-meminfo.node3.Mapped
33704 +- 24% -44.2% 18792 +- 15% numa-meminfo.node3.SReclaimable
104733 +- 14% -25.3% 78275 +- 8% numa-meminfo.node3.Slab
139329 +-133% -99.8% 241.67 +- 79% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
5403 +-139% -97.5% 137.67 +- 71% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
165968 +-101% -61.9% 63304 +- 58% latency_stats.avg.max
83.00 +12810.4% 10715 +-140% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
102.67 +- 6% +18845.5% 19450 +-140% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
136.33 +- 16% +25043.5% 34279 +-141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.lookup_slow.walk_component.path_lookupat.filename_lookup
18497 +-141% -100.0% 0.00 latency_stats.max.call_rwsem_down_write_failed_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
140500 +-131% -99.8% 247.00 +- 78% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
5403 +-139% -97.5% 137.67 +- 71% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
87.33 +- 5% +23963.0% 21015 +-140% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
136.33 +- 16% +25043.5% 34279 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.lookup_slow.walk_component.path_lookupat.filename_lookup
149.33 +- 14% +25485.9% 38208 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
18761 +-141% -100.0% 0.00 latency_stats.sum.call_rwsem_down_write_failed_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
23363 +-114% -100.0% 0.00 latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.__get_user_8.exit_robust_list.mm_release.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
144810 +-125% -99.8% 326.67 +- 70% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
5403 +-139% -97.5% 137.67 +- 71% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
59698 +- 98% -78.0% 13110 +-141% latency_stats.sum.call_rwsem_down_read_failed.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
166.33 +12768.5% 21404 +-140% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
825.00 +- 6% +18761.7% 155609 +-140% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
136.33 +- 16% +25043.5% 34279 +-141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.lookup_slow.walk_component.path_lookupat.filename_lookup
59412 +- 2% +31.3% 78021 +- 16% numa-vmstat.node0.nr_file_pages
8059 +- 2% +10.7% 8923 +- 7% numa-vmstat.node0.nr_kernel_stack
1701 +- 25% +49.1% 2536 +- 35% numa-vmstat.node0.nr_mapped
402.33 +- 21% +70.0% 684.00 +- 19% numa-vmstat.node0.nr_page_table_pages
2692 +- 65% +679.5% 20988 +- 59% numa-vmstat.node0.nr_shmem
622587 +- 36% +37.7% 857545 +- 13% numa-vmstat.node0.numa_local
40797 +- 15% -36.9% 25757 +- 32% numa-vmstat.node1.nr_active_anon
432.00 +- 4% +33.9% 578.33 +- 14% numa-vmstat.node1.nr_page_table_pages
13944 +- 19% +32.5% 18477 +- 8% numa-vmstat.node1.nr_slab_unreclaimable
40797 +- 15% -36.9% 25757 +- 32% numa-vmstat.node1.nr_zone_active_anon
625073 +- 26% +29.4% 808657 +- 18% numa-vmstat.node1.numa_hit
503969 +- 34% +39.2% 701446 +- 23% numa-vmstat.node1.numa_local
137.33 +- 40% -49.0% 70.00 +- 29% numa-vmstat.node2.nr_mlock
667.67 +- 17% -45.1% 366.33 +- 15% numa-vmstat.node2.nr_page_table_pages
15384 +- 13% -17.7% 12662 +- 3% numa-vmstat.node2.nr_slab_unreclaimable
12114 +- 94% +150.3% 30326 +- 11% numa-vmstat.node3.nr_active_anon
2887 +- 83% -93.4% 190.00 +- 82% numa-vmstat.node3.nr_inactive_anon
2632 +- 30% -39.2% 1600 +- 5% numa-vmstat.node3.nr_mapped
101.00 -30.0% 70.67 +- 29% numa-vmstat.node3.nr_mlock
8425 +- 24% -44.2% 4697 +- 15% numa-vmstat.node3.nr_slab_reclaimable
12122 +- 94% +150.3% 30346 +- 11% numa-vmstat.node3.nr_zone_active_anon
2887 +- 83% -93.4% 190.00 +- 82% numa-vmstat.node3.nr_zone_inactive_anon
106945 +- 13% +17.4% 125554 numa-vmstat.node3.numa_other
4.17 -0.3 3.82 perf-profile.calltrace.cycles-pp.kmem_cache_alloc.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.02 -0.3 14.77 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.brk
2.42 -0.2 2.18 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
7.60 -0.2 7.39 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.79 -0.2 7.63 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
0.82 +- 9% -0.1 0.68 perf-profile.calltrace.cycles-pp.__vm_enough_memory.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.13 -0.1 2.00 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 -0.1 0.95 perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
7.31 -0.1 7.21 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk
0.74 -0.1 0.67 perf-profile.calltrace.cycles-pp.sync_mm_rss.unmap_page_range.unmap_vmas.unmap_region.do_munmap
1.06 -0.1 1.00 perf-profile.calltrace.cycles-pp.memcpy_erms.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk
3.38 -0.1 3.33 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 -0.0 1.00 +- 2% perf-profile.calltrace.cycles-pp.__indirect_thunk_start.brk
2.34 -0.0 2.29 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.64 -0.0 1.59 perf-profile.calltrace.cycles-pp.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.89 -0.0 1.86 perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64
0.76 -0.0 0.73 perf-profile.calltrace.cycles-pp._raw_spin_lock.unmap_page_range.unmap_vmas.unmap_region.do_munmap
0.57 +- 2% -0.0 0.55 perf-profile.calltrace.cycles-pp.selinux_mmap_addr.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk
0.54 +- 2% +0.0 0.56 perf-profile.calltrace.cycles-pp.do_brk_flags.brk
0.72 +0.0 0.76 +- 2% perf-profile.calltrace.cycles-pp.do_munmap.brk
4.38 +0.1 4.43 perf-profile.calltrace.cycles-pp.find_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.96 +0.1 2.04 perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.53 +0.2 0.68 perf-profile.calltrace.cycles-pp.__vma_link_rb.brk
2.21 +0.3 2.51 perf-profile.calltrace.cycles-pp.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.44 +0.5 64.90 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
63.04 +0.5 63.54 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
60.37 +0.5 60.88 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.75 +0.5 4.29 perf-profile.calltrace.cycles-pp.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.put_vma.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.72 +0.7 1.37 perf-profile.calltrace.cycles-pp.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.42 +0.7 25.08 perf-profile.calltrace.cycles-pp.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.00 +0.7 0.71 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64
3.12 +0.7 3.84 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64
0.00 +0.8 0.77 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
0.00 +0.9 0.85 perf-profile.calltrace.cycles-pp.__vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.10 -0.5 4.60 perf-profile.children.cycles-pp.vma_compute_subtree_gap
4.53 -0.3 4.18 perf-profile.children.cycles-pp.kmem_cache_alloc
15.03 -0.3 14.77 perf-profile.children.cycles-pp.syscall_return_via_sysret
8.13 -0.2 7.92 perf-profile.children.cycles-pp.perf_event_mmap
8.01 -0.2 7.81 perf-profile.children.cycles-pp.unmap_vmas
0.97 +- 14% -0.2 0.78 perf-profile.children.cycles-pp.__vm_enough_memory
1.13 -0.1 1.00 perf-profile.children.cycles-pp.kmem_cache_free
7.82 -0.1 7.70 perf-profile.children.cycles-pp.unmap_page_range
12.23 -0.1 12.13 perf-profile.children.cycles-pp.unmap_region
0.74 -0.1 0.67 perf-profile.children.cycles-pp.sync_mm_rss
3.06 -0.1 3.00 perf-profile.children.cycles-pp.down_write_killable
0.40 +- 2% -0.1 0.34 perf-profile.children.cycles-pp.__rb_insert_augmented
1.29 -0.1 1.23 perf-profile.children.cycles-pp.__indirect_thunk_start
2.54 -0.1 2.49 perf-profile.children.cycles-pp.perf_iterate_sb
3.66 -0.0 3.61 perf-profile.children.cycles-pp.get_unmapped_area
1.80 -0.0 1.75 perf-profile.children.cycles-pp.strlcpy
0.53 +- 2% -0.0 0.49 +- 2% perf-profile.children.cycles-pp.cap_capable
1.57 -0.0 1.53 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
1.11 -0.0 1.08 perf-profile.children.cycles-pp.memcpy_erms
0.13 -0.0 0.10 perf-profile.children.cycles-pp.__vma_link_file
0.55 -0.0 0.52 perf-profile.children.cycles-pp.unmap_single_vma
1.47 -0.0 1.44 perf-profile.children.cycles-pp.cap_vm_enough_memory
2.14 -0.0 2.12 perf-profile.children.cycles-pp.security_mmap_addr
0.32 -0.0 0.30 perf-profile.children.cycles-pp.userfaultfd_unmap_complete
1.25 -0.0 1.23 perf-profile.children.cycles-pp.up_write
0.50 -0.0 0.49 perf-profile.children.cycles-pp.userfaultfd_unmap_prep
0.27 -0.0 0.26 perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.14 -0.0 1.12 perf-profile.children.cycles-pp.__might_sleep
0.07 -0.0 0.06 perf-profile.children.cycles-pp.should_failslab
0.72 +0.0 0.74 perf-profile.children.cycles-pp._cond_resched
0.45 +0.0 0.47 perf-profile.children.cycles-pp.rcu_all_qs
0.15 +- 3% +0.0 0.17 +- 4% perf-profile.children.cycles-pp.__vma_link_list
0.15 +- 5% +0.0 0.18 +- 5% perf-profile.children.cycles-pp.tick_sched_timer
0.05 +- 8% +0.1 0.12 +- 17% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.80 +0.1 0.89 perf-profile.children.cycles-pp.free_pgtables
0.22 +- 7% +0.1 0.31 +- 9% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.11 +- 15% perf-profile.children.cycles-pp.clockevents_program_event
6.34 +0.1 6.47 perf-profile.children.cycles-pp.find_vma
2.27 +0.1 2.40 perf-profile.children.cycles-pp.vmacache_find
0.40 +- 4% +0.2 0.58 +- 5% perf-profile.children.cycles-pp.apic_timer_interrupt
0.40 +- 4% +0.2 0.58 +- 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.37 +- 4% +0.2 0.54 +- 5% perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.2 0.19 +- 12% perf-profile.children.cycles-pp.ktime_get
2.42 +0.3 2.77 perf-profile.children.cycles-pp.remove_vma
64.49 +0.5 64.94 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.27 +0.5 1.73 perf-profile.children.cycles-pp.__vma_rb_erase
61.62 +0.5 62.10 perf-profile.children.cycles-pp.__x64_sys_brk
63.24 +0.5 63.74 perf-profile.children.cycles-pp.do_syscall_64
4.03 +0.5 4.56 perf-profile.children.cycles-pp.vma_link
0.00 +0.7 0.69 perf-profile.children.cycles-pp.put_vma
25.13 +0.7 25.84 perf-profile.children.cycles-pp.do_munmap
3.83 +0.7 4.56 perf-profile.children.cycles-pp.__vma_link_rb
0.00 +1.2 1.25 perf-profile.children.cycles-pp.__vma_merge
0.00 +1.5 1.53 perf-profile.children.cycles-pp._raw_write_lock
5.08 -0.5 4.58 perf-profile.self.cycles-pp.vma_compute_subtree_gap
15.03 -0.3 14.77 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.59 -0.2 0.39 perf-profile.self.cycles-pp.remove_vma
0.72 +- 7% -0.1 0.58 perf-profile.self.cycles-pp.__vm_enough_memory
1.12 -0.1 0.99 perf-profile.self.cycles-pp.kmem_cache_free
3.11 -0.1 2.99 perf-profile.self.cycles-pp.do_munmap
0.99 -0.1 0.88 perf-profile.self.cycles-pp.__vma_rb_erase
3.63 -0.1 3.52 perf-profile.self.cycles-pp.perf_event_mmap
3.26 -0.1 3.17 perf-profile.self.cycles-pp.brk
0.41 +- 2% -0.1 0.33 perf-profile.self.cycles-pp.unmap_vmas
0.74 -0.1 0.67 perf-profile.self.cycles-pp.sync_mm_rss
1.75 -0.1 1.68 perf-profile.self.cycles-pp.kmem_cache_alloc
0.40 +- 2% -0.1 0.34 perf-profile.self.cycles-pp.__rb_insert_augmented
1.29 +- 2% -0.1 1.23 perf-profile.self.cycles-pp.__indirect_thunk_start
0.73 -0.0 0.68 +- 2% perf-profile.self.cycles-pp.unmap_region
0.53 -0.0 0.49 perf-profile.self.cycles-pp.vma_link
1.40 -0.0 1.35 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
5.22 -0.0 5.18 perf-profile.self.cycles-pp.unmap_page_range
0.53 +- 2% -0.0 0.49 +- 2% perf-profile.self.cycles-pp.cap_capable
1.11 -0.0 1.07 perf-profile.self.cycles-pp.memcpy_erms
1.86 -0.0 1.82 perf-profile.self.cycles-pp.perf_iterate_sb
1.30 -0.0 1.27 perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.13 -0.0 0.10 perf-profile.self.cycles-pp.__vma_link_file
0.55 -0.0 0.52 perf-profile.self.cycles-pp.unmap_single_vma
0.74 -0.0 0.72 perf-profile.self.cycles-pp.selinux_mmap_addr
0.32 -0.0 0.30 perf-profile.self.cycles-pp.userfaultfd_unmap_complete
1.13 -0.0 1.12 perf-profile.self.cycles-pp.__might_sleep
1.24 -0.0 1.23 perf-profile.self.cycles-pp.up_write
0.50 -0.0 0.49 perf-profile.self.cycles-pp.userfaultfd_unmap_prep
0.27 -0.0 0.26 perf-profile.self.cycles-pp.tlb_flush_mmu_free
0.07 -0.0 0.06 perf-profile.self.cycles-pp.should_failslab
0.45 +0.0 0.47 perf-profile.self.cycles-pp.rcu_all_qs
0.71 +0.0 0.73 perf-profile.self.cycles-pp.strlcpy
0.15 +- 3% +0.0 0.17 +- 4% perf-profile.self.cycles-pp.__vma_link_list
0.51 +0.1 0.57 perf-profile.self.cycles-pp.free_pgtables
1.40 +0.1 1.49 perf-profile.self.cycles-pp.__vma_link_rb
2.27 +0.1 2.39 perf-profile.self.cycles-pp.vmacache_find
0.00 +0.2 0.18 +- 12% perf-profile.self.cycles-pp.ktime_get
0.00 +0.7 0.69 perf-profile.self.cycles-pp.put_vma
0.00 +1.2 1.24 perf-profile.self.cycles-pp.__vma_merge
0.00 +1.5 1.52 perf-profile.self.cycles-pp._raw_write_lock
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/page_fault2/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 dmesg.WARNING:at#for_ip_native_iret/0x
1:3 -33% :3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__schedule/0x
:3 33% 1:3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
1:3 -33% :3 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
3:3 -100% :3 kmsg.pstore:crypto_comp_decompress_failed,ret=
3:3 -100% :3 kmsg.pstore:decompression_failed
2:3 4% 2:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
5:3 7% 5:3 perf-profile.calltrace.cycles-pp.error_entry
5:3 7% 5:3 perf-profile.children.cycles-pp.error_entry
2:3 3% 2:3 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8281 +- 2% -18.8% 6728 will-it-scale.per_thread_ops
92778 +- 2% +17.6% 109080 will-it-scale.time.involuntary_context_switches
21954366 +- 3% +4.1% 22857988 +- 2% will-it-scale.time.maximum_resident_set_size
4.81e+08 +- 2% -18.9% 3.899e+08 will-it-scale.time.minor_page_faults
5804 +12.2% 6512 will-it-scale.time.percent_of_cpu_this_job_got
34918 +12.2% 39193 will-it-scale.time.system_time
5638528 +- 2% -15.3% 4778392 will-it-scale.time.voluntary_context_switches
15846405 -2.0% 15531034 will-it-scale.workload
2818137 +1.5% 2861500 interrupts.CAL:Function_call_interrupts
3.33 +- 28% -60.0% 1.33 +- 93% irq_exception_noise.irq_time
2866 +23.9% 3552 +- 2% kthread_noise.total_time
5589674 +- 14% +31.4% 7344810 +- 6% meminfo.DirectMap2M
31169 -16.9% 25906 uptime.idle
25242 +- 4% -14.2% 21654 +- 6% vmstat.system.cs
7055 -11.6% 6237 boot-time.idle
21.12 +19.3% 25.19 +- 9% boot-time.kernel_boot
20.03 +- 2% -3.7 16.38 mpstat.cpu.idle%
0.00 +- 8% -0.0 0.00 +- 4% mpstat.cpu.iowait%
7284147 +- 2% -16.4% 6092495 softirqs.RCU
5350756 +- 2% -10.9% 4769417 +- 4% softirqs.SCHED
42933 +- 21% -28.2% 30807 +- 7% numa-meminfo.node2.SReclaimable
63219 +- 13% -16.6% 52717 +- 6% numa-meminfo.node2.SUnreclaim
106153 +- 16% -21.3% 83525 +- 5% numa-meminfo.node2.Slab
247154 +- 4% -7.6% 228415 numa-meminfo.node3.Unevictable
11904 +- 4% +17.1% 13945 +- 8% numa-vmstat.node0
2239 +- 22% -26.6% 1644 +- 2% numa-vmstat.node2.nr_mapped
10728 +- 21% -28.2% 7701 +- 7% numa-vmstat.node2.nr_slab_reclaimable
15803 +- 13% -16.6% 13179 +- 6% numa-vmstat.node2.nr_slab_unreclaimable
61788 +- 4% -7.6% 57103 numa-vmstat.node3.nr_unevictable
61788 +- 4% -7.6% 57103 numa-vmstat.node3.nr_zone_unevictable
92778 +- 2% +17.6% 109080 time.involuntary_context_switches
21954366 +- 3% +4.1% 22857988 +- 2% time.maximum_resident_set_size
4.81e+08 +- 2% -18.9% 3.899e+08 time.minor_page_faults
5804 +12.2% 6512 time.percent_of_cpu_this_job_got
34918 +12.2% 39193 time.system_time
5638528 +- 2% -15.3% 4778392 time.voluntary_context_switches
3942289 +- 2% -10.5% 3528902 +- 2% cpuidle.C1.time
242290 -14.2% 207992 cpuidle.C1.usage
1.64e+09 +- 2% -15.7% 1.381e+09 cpuidle.C1E.time
4621281 +- 2% -14.7% 3939757 cpuidle.C1E.usage
2.115e+10 +- 2% -18.5% 1.723e+10 cpuidle.C6.time
24771099 +- 2% -18.0% 20305766 cpuidle.C6.usage
1210810 +- 4% -17.6% 997270 +- 2% cpuidle.POLL.time
18742 +- 3% -17.0% 15559 +- 2% cpuidle.POLL.usage
4135 +-141% -100.0% 0.00 latency_stats.avg.x86_reserve_hardware.x86_pmu_event_init.perf_try_init_event.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
33249 +-129% -100.0% 0.00 latency_stats.max.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4135 +-141% -100.0% 0.00 latency_stats.max.x86_reserve_hardware.x86_pmu_event_init.perf_try_init_event.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
65839 +-116% -100.0% 0.00 latency_stats.sum.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4135 +-141% -100.0% 0.00 latency_stats.sum.x86_reserve_hardware.x86_pmu_event_init.perf_try_init_event.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
8387 +-122% -90.9% 767.00 +- 13% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
263970 +- 10% -68.6% 82994 +- 3% latency_stats.sum.do_syslog.kmsg_read.proc_reg_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6173 +- 77% +173.3% 16869 +- 98% latency_stats.sum.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
101.33 -4.6% 96.67 proc-vmstat.nr_anon_transparent_hugepages
39967 -1.8% 39241 proc-vmstat.nr_slab_reclaimable
67166 -2.4% 65522 proc-vmstat.nr_slab_unreclaimable
237743 -3.9% 228396 proc-vmstat.nr_unevictable
237743 -3.9% 228396 proc-vmstat.nr_zone_unevictable
4.807e+09 -2.0% 4.71e+09 proc-vmstat.numa_hit
4.807e+09 -2.0% 4.71e+09 proc-vmstat.numa_local
4.791e+09 -2.1% 4.69e+09 proc-vmstat.pgalloc_normal
4.783e+09 -2.0% 4.685e+09 proc-vmstat.pgfault
4.807e+09 -2.0% 4.709e+09 proc-vmstat.pgfree
1753 +4.6% 1833 turbostat.Avg_MHz
239445 -14.1% 205783 turbostat.C1
4617105 +- 2% -14.8% 3934693 turbostat.C1E
1.40 +- 2% -0.2 1.18 turbostat.C1E%
24764661 +- 2% -18.0% 20297643 turbostat.C6
18.09 +- 2% -3.4 14.74 turbostat.C6%
7.53 +- 2% -17.1% 6.24 turbostat.CPU%c1
11.88 +- 2% -19.1% 9.61 turbostat.CPU%c6
7.62 +- 3% -20.8% 6.04 turbostat.Pkg%pc2
388.30 +1.5% 393.93 turbostat.PkgWatt
390974 +- 8% +35.8% 530867 +- 11% sched_debug.cfs_rq:/.min_vruntime.stddev
-1754042 +75.7% -3081270 sched_debug.cfs_rq:/.spread0.min
388140 +- 8% +36.2% 528494 +- 11% sched_debug.cfs_rq:/.spread0.stddev
542.30 +- 3% -10.0% 488.21 +- 3% sched_debug.cfs_rq:/.util_avg.min
53.35 +- 16% +48.7% 79.35 +- 12% sched_debug.cfs_rq:/.util_est_enqueued.avg
30520 +- 6% -15.2% 25883 +- 12% sched_debug.cpu.nr_switches.avg
473770 +- 27% -37.4% 296623 +- 32% sched_debug.cpu.nr_switches.max
17077 +- 2% -15.1% 14493 sched_debug.cpu.nr_switches.min
30138 +- 6% -15.0% 25606 +- 12% sched_debug.cpu.sched_count.avg
472345 +- 27% -37.2% 296419 +- 32% sched_debug.cpu.sched_count.max
16858 +- 2% -15.2% 14299 sched_debug.cpu.sched_count.min
8358 +- 2% -15.5% 7063 sched_debug.cpu.sched_goidle.avg
12225 -13.6% 10565 sched_debug.cpu.sched_goidle.max
8032 +- 2% -16.0% 6749 sched_debug.cpu.sched_goidle.min
14839 +- 6% -15.3% 12568 +- 12% sched_debug.cpu.ttwu_count.avg
235115 +- 28% -38.3% 145175 +- 31% sched_debug.cpu.ttwu_count.max
7627 +- 3% -15.9% 6413 +- 2% sched_debug.cpu.ttwu_count.min
226299 +- 29% -39.5% 136827 +- 32% sched_debug.cpu.ttwu_local.max
0.85 -0.0 0.81 perf-stat.branch-miss-rate%
3.675e+10 -4.1% 3.523e+10 perf-stat.branch-misses
4.052e+11 -2.3% 3.958e+11 perf-stat.cache-misses
7.008e+11 -2.5% 6.832e+11 perf-stat.cache-references
15320995 +- 4% -14.3% 13136557 +- 6% perf-stat.context-switches
9.16 +4.8% 9.59 perf-stat.cpi
2.03e+14 +4.6% 2.124e+14 perf-stat.cpu-cycles
44508 -1.7% 43743 perf-stat.cpu-migrations
1.30 -0.1 1.24 perf-stat.dTLB-store-miss-rate%
4.064e+10 -3.5% 3.922e+10 perf-stat.dTLB-store-misses
3.086e+12 +1.1% 3.119e+12 perf-stat.dTLB-stores
3.611e+08 +- 6% -8.5% 3.304e+08 +- 5% perf-stat.iTLB-loads
0.11 -4.6% 0.10 perf-stat.ipc
4.783e+09 -2.0% 4.685e+09 perf-stat.minor-faults
1.53 +- 2% -0.3 1.22 +- 8% perf-stat.node-load-miss-rate%
1.389e+09 +- 3% -22.1% 1.083e+09 +- 9% perf-stat.node-load-misses
8.922e+10 -1.9% 8.75e+10 perf-stat.node-loads
5.06 +1.7 6.77 +- 3% perf-stat.node-store-miss-rate%
1.204e+09 +29.3% 1.556e+09 +- 3% perf-stat.node-store-misses
2.256e+10 -5.1% 2.142e+10 +- 2% perf-stat.node-stores
4.783e+09 -2.0% 4.685e+09 perf-stat.page-faults
1399242 +1.9% 1425404 perf-stat.path-length
1144 +- 8% -13.6% 988.00 +- 8% slabinfo.Acpi-ParseExt.active_objs
1144 +- 8% -13.6% 988.00 +- 8% slabinfo.Acpi-ParseExt.num_objs
1878 +- 17% +29.0% 2422 +- 16% slabinfo.dmaengine-unmap-16.active_objs
1878 +- 17% +29.0% 2422 +- 16% slabinfo.dmaengine-unmap-16.num_objs
1085 +- 5% -24.1% 823.33 +- 9% slabinfo.file_lock_cache.active_objs
1085 +- 5% -24.1% 823.33 +- 9% slabinfo.file_lock_cache.num_objs
61584 +- 4% -16.6% 51381 +- 5% slabinfo.filp.active_objs
967.00 +- 4% -16.5% 807.67 +- 5% slabinfo.filp.active_slabs
61908 +- 4% -16.5% 51713 +- 5% slabinfo.filp.num_objs
967.00 +- 4% -16.5% 807.67 +- 5% slabinfo.filp.num_slabs
1455 -15.4% 1232 +- 4% slabinfo.nsproxy.active_objs
1455 -15.4% 1232 +- 4% slabinfo.nsproxy.num_objs
84720 +- 6% -18.3% 69210 +- 4% slabinfo.pid.active_objs
1324 +- 6% -18.2% 1083 +- 4% slabinfo.pid.active_slabs
84820 +- 5% -18.2% 69386 +- 4% slabinfo.pid.num_objs
1324 +- 6% -18.2% 1083 +- 4% slabinfo.pid.num_slabs
2112 +- 18% -26.3% 1557 +- 5% slabinfo.scsi_sense_cache.active_objs
2112 +- 18% -26.3% 1557 +- 5% slabinfo.scsi_sense_cache.num_objs
5018 +- 5% -7.6% 4635 +- 4% slabinfo.sock_inode_cache.active_objs
5018 +- 5% -7.6% 4635 +- 4% slabinfo.sock_inode_cache.num_objs
1193 +- 4% +13.8% 1358 +- 4% slabinfo.task_group.active_objs
1193 +- 4% +13.8% 1358 +- 4% slabinfo.task_group.num_objs
62807 +- 3% -14.4% 53757 +- 3% slabinfo.vm_area_struct.active_objs
1571 +- 3% -12.1% 1381 +- 3% slabinfo.vm_area_struct.active_slabs
62877 +- 3% -14.3% 53880 +- 3% slabinfo.vm_area_struct.num_objs
1571 +- 3% -12.1% 1381 +- 3% slabinfo.vm_area_struct.num_slabs
47.45 -47.4 0.00 perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
47.16 -47.2 0.00 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
46.99 -47.0 0.00 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
44.95 -44.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
7.42 +- 2% -7.4 0.00 perf-profile.calltrace.cycles-pp.copy_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
6.32 +- 10% -6.3 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
6.28 +- 10% -6.3 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +0.9 0.85 +- 11% perf-profile.calltrace.cycles-pp._raw_spin_lock.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.9 0.92 +- 4% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +1.1 1.13 +- 7% perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
0.00 +1.2 1.19 +- 7% perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.2 1.22 +- 5% perf-profile.calltrace.cycles-pp.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.3 1.34 +- 7% perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +1.4 1.36 +- 7% perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +4.5 4.54 +- 19% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +4.6 4.64 +- 19% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +6.6 6.64 +- 15% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +6.7 6.68 +- 15% perf-profile.calltrace.cycles-pp.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +7.5 7.54 +- 5% perf-profile.calltrace.cycles-pp.copy_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +44.6 44.55 +- 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +46.6 46.63 +- 3% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
0.00 +46.8 46.81 +- 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +47.1 47.10 +- 3% perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +63.1 63.15 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.39 +- 3% +0.0 0.42 +- 3% perf-profile.children.cycles-pp.radix_tree_lookup_slot
0.21 +- 3% +0.0 0.25 +- 5% perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.06 +- 8% perf-profile.children.cycles-pp.get_vma_policy
0.00 +0.1 0.08 +- 5% perf-profile.children.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.18 +- 6% perf-profile.children.cycles-pp.__page_add_new_anon_rmap
0.00 +1.4 1.35 +- 5% perf-profile.children.cycles-pp.pte_map_lock
0.00 +63.2 63.21 perf-profile.children.cycles-pp.handle_pte_fault
1.40 +- 2% -0.4 1.03 +- 10% perf-profile.self.cycles-pp._raw_spin_lock
0.56 +- 3% -0.2 0.35 +- 6% perf-profile.self.cycles-pp.__handle_mm_fault
0.22 +- 3% -0.0 0.18 +- 7% perf-profile.self.cycles-pp.alloc_set_pte
0.09 +0.0 0.10 +- 4% perf-profile.self.cycles-pp.vmacache_find
0.39 +- 2% +0.0 0.41 +- 3% perf-profile.self.cycles-pp.__radix_tree_lookup
0.18 +0.0 0.20 +- 6% perf-profile.self.cycles-pp.mem_cgroup_charge_statistics
0.17 +- 2% +0.0 0.20 +- 7% perf-profile.self.cycles-pp.___might_sleep
0.33 +- 2% +0.0 0.36 +- 6% perf-profile.self.cycles-pp.handle_mm_fault
0.20 +- 2% +0.0 0.24 +- 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 perf-profile.self.cycles-pp.finish_fault
0.00 +0.1 0.05 perf-profile.self.cycles-pp.get_vma_policy
0.00 +0.1 0.08 +- 10% perf-profile.self.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.25 +- 5% perf-profile.self.cycles-pp.handle_pte_fault
0.00 +0.5 0.49 +- 8% perf-profile.self.cycles-pp.pte_map_lock
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/page_fault2/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:3 -33% :3 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
:3 33% 1:3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
1:3 -33% :3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
1:3 24% 2:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
3:3 46% 5:3 perf-profile.calltrace.cycles-pp.error_entry
5:3 -9% 5:3 perf-profile.children.cycles-pp.error_entry
2:3 -4% 2:3 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8147 -18.8% 6613 will-it-scale.per_thread_ops
93113 +17.0% 108982 will-it-scale.time.involuntary_context_switches
4.732e+08 -19.0% 3.833e+08 will-it-scale.time.minor_page_faults
5854 +12.0% 6555 will-it-scale.time.percent_of_cpu_this_job_got
35247 +12.1% 39495 will-it-scale.time.system_time
5546661 -15.5% 4689314 will-it-scale.time.voluntary_context_switches
15801637 -1.9% 15504487 will-it-scale.workload
1.43 +- 11% -59.7% 0.58 +- 28% irq_exception_noise.__do_page_fault.min
2811 +- 3% +23.7% 3477 +- 3% kthread_noise.total_time
292776 +- 5% +39.6% 408829 +- 21% meminfo.DirectMap4k
19.80 -3.7 16.12 mpstat.cpu.idle%
29940 -14.5% 25593 uptime.idle
24064 +- 3% -8.5% 22016 vmstat.system.cs
34.86 -1.9% 34.19 boot-time.boot
26.95 -2.8% 26.19 +- 2% boot-time.kernel_boot
7190569 +- 2% -15.2% 6100136 +- 3% softirqs.RCU
5513663 -13.8% 4751548 softirqs.SCHED
18064 +- 2% +24.3% 22461 +- 7% numa-vmstat.node0.nr_slab_unreclaimable
8507 +- 12% -16.8% 7075 +- 4% numa-vmstat.node2.nr_slab_reclaimable
18719 +- 9% -19.6% 15043 +- 4% numa-vmstat.node3.nr_slab_unreclaimable
72265 +- 2% +24.3% 89855 +- 7% numa-meminfo.node0.SUnreclaim
115980 +- 4% +22.6% 142233 +- 12% numa-meminfo.node0.Slab
34035 +- 12% -16.8% 28307 +- 4% numa-meminfo.node2.SReclaimable
74888 +- 9% -19.7% 60162 +- 4% numa-meminfo.node3.SUnreclaim
93113 +17.0% 108982 time.involuntary_context_switches
4.732e+08 -19.0% 3.833e+08 time.minor_page_faults
5854 +12.0% 6555 time.percent_of_cpu_this_job_got
35247 +12.1% 39495 time.system_time
5546661 -15.5% 4689314 time.voluntary_context_switches
4.792e+09 -1.9% 4.699e+09 proc-vmstat.numa_hit
4.791e+09 -1.9% 4.699e+09 proc-vmstat.numa_local
40447 +- 11% +13.2% 45804 +- 6% proc-vmstat.pgactivate
4.778e+09 -1.9% 4.688e+09 proc-vmstat.pgalloc_normal
4.767e+09 -1.9% 4.675e+09 proc-vmstat.pgfault
4.791e+09 -1.9% 4.699e+09 proc-vmstat.pgfree
230178 +- 2% -10.1% 206883 +- 3% cpuidle.C1.usage
1.617e+09 -15.0% 1.375e+09 cpuidle.C1E.time
4514401 -14.1% 3878206 cpuidle.C1E.usage
2.087e+10 -18.5% 1.701e+10 cpuidle.C6.time
24458365 -18.0% 20045336 cpuidle.C6.usage
1163758 -16.1% 976094 +- 4% cpuidle.POLL.time
17907 -14.6% 15294 +- 4% cpuidle.POLL.usage
1758 +4.5% 1838 turbostat.Avg_MHz
227522 +- 2% -10.2% 204426 +- 3% turbostat.C1
4512700 -14.2% 3873264 turbostat.C1E
1.39 -0.2 1.18 turbostat.C1E%
24452583 -18.0% 20039031 turbostat.C6
17.85 -3.3 14.55 turbostat.C6%
7.44 -16.8% 6.19 turbostat.CPU%c1
11.72 -19.3% 9.45 turbostat.CPU%c6
7.51 -21.3% 5.91 turbostat.Pkg%pc2
389.33 +1.6% 395.59 turbostat.PkgWatt
559.33 +- 13% -17.9% 459.33 +- 20% slabinfo.dmaengine-unmap-128.active_objs
559.33 +- 13% -17.9% 459.33 +- 20% slabinfo.dmaengine-unmap-128.num_objs
57734 +- 3% -5.7% 54421 +- 4% slabinfo.filp.active_objs
905.67 +- 3% -5.6% 854.67 +- 4% slabinfo.filp.active_slabs
57981 +- 3% -5.6% 54720 +- 4% slabinfo.filp.num_objs
905.67 +- 3% -5.6% 854.67 +- 4% slabinfo.filp.num_slabs
1378 -12.0% 1212 +- 7% slabinfo.nsproxy.active_objs
1378 -12.0% 1212 +- 7% slabinfo.nsproxy.num_objs
507.33 +- 7% -26.8% 371.33 +- 2% slabinfo.secpath_cache.active_objs
507.33 +- 7% -26.8% 371.33 +- 2% slabinfo.secpath_cache.num_objs
4788 +- 5% -8.3% 4391 +- 2% slabinfo.sock_inode_cache.active_objs
4788 +- 5% -8.3% 4391 +- 2% slabinfo.sock_inode_cache.num_objs
1431 +- 8% -12.3% 1255 +- 3% slabinfo.task_group.active_objs
1431 +- 8% -12.3% 1255 +- 3% slabinfo.task_group.num_objs
4.27 +- 17% +27.0% 5.42 +- 7% sched_debug.cfs_rq:/.runnable_load_avg.avg
13.44 +- 62% +73.6% 23.33 +- 24% sched_debug.cfs_rq:/.runnable_load_avg.stddev
772.55 +- 21% -32.7% 520.27 +- 4% sched_debug.cfs_rq:/.util_est_enqueued.max
4.39 +- 15% +29.0% 5.66 +- 11% sched_debug.cpu.cpu_load[0].avg
152.09 +- 72% +83.9% 279.67 +- 33% sched_debug.cpu.cpu_load[0].max
13.84 +- 58% +78.7% 24.72 +- 29% sched_debug.cpu.cpu_load[0].stddev
4.53 +- 14% +25.8% 5.70 +- 10% sched_debug.cpu.cpu_load[1].avg
156.58 +- 66% +76.6% 276.58 +- 33% sched_debug.cpu.cpu_load[1].max
14.02 +- 55% +72.4% 24.17 +- 28% sched_debug.cpu.cpu_load[1].stddev
4.87 +- 11% +17.3% 5.72 +- 9% sched_debug.cpu.cpu_load[2].avg
1.58 +- 2% +13.5% 1.79 +- 6% sched_debug.cpu.nr_running.max
16694 -14.6% 14259 sched_debug.cpu.nr_switches.min
31989 +- 13% +20.6% 38584 +- 6% sched_debug.cpu.nr_switches.stddev
16505 -14.8% 14068 sched_debug.cpu.sched_count.min
32084 +- 13% +19.9% 38482 +- 6% sched_debug.cpu.sched_count.stddev
8185 -15.0% 6957 sched_debug.cpu.sched_goidle.avg
12151 +- 2% -13.5% 10507 sched_debug.cpu.sched_goidle.max
7867 -15.7% 6631 sched_debug.cpu.sched_goidle.min
7595 -16.1% 6375 sched_debug.cpu.ttwu_count.min
15873 +- 13% +21.2% 19239 +- 6% sched_debug.cpu.ttwu_count.stddev
5244 +- 17% +17.0% 6134 +- 5% sched_debug.cpu.ttwu_local.avg
15646 +- 12% +21.5% 19008 +- 6% sched_debug.cpu.ttwu_local.stddev
0.85 -0.0 0.81 perf-stat.branch-miss-rate%
3.689e+10 -4.6% 3.518e+10 perf-stat.branch-misses
57.39 +0.6 58.00 perf-stat.cache-miss-rate%
4.014e+11 -1.2% 3.967e+11 perf-stat.cache-misses
6.994e+11 -2.2% 6.84e+11 perf-stat.cache-references
14605393 +- 3% -8.5% 13369913 perf-stat.context-switches
9.21 +4.5% 9.63 perf-stat.cpi
2.037e+14 +4.6% 2.13e+14 perf-stat.cpu-cycles
44424 -2.0% 43541 perf-stat.cpu-migrations
1.29 -0.1 1.24 perf-stat.dTLB-store-miss-rate%
4.018e+10 -2.8% 3.905e+10 perf-stat.dTLB-store-misses
3.071e+12 +1.4% 3.113e+12 perf-stat.dTLB-stores
93.04 +1.5 94.51 perf-stat.iTLB-load-miss-rate%
4.946e+09 +19.3% 5.903e+09 +- 5% perf-stat.iTLB-load-misses
3.702e+08 -7.5% 3.423e+08 +- 2% perf-stat.iTLB-loads
4470 -15.9% 3760 +- 5% perf-stat.instructions-per-iTLB-miss
0.11 -4.3% 0.10 perf-stat.ipc
4.767e+09 -1.9% 4.675e+09 perf-stat.minor-faults
1.46 +- 4% -0.1 1.33 +- 9% perf-stat.node-load-miss-rate%
4.91 +1.7 6.65 +- 2% perf-stat.node-store-miss-rate%
1.195e+09 +32.8% 1.587e+09 +- 2% perf-stat.node-store-misses
2.313e+10 -3.7% 2.227e+10 perf-stat.node-stores
4.767e+09 -1.9% 4.675e+09 perf-stat.page-faults
1399047 +2.0% 1427115 perf-stat.path-length
8908 +- 73% -100.0% 0.00 latency_stats.avg.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3604 +-141% -100.0% 0.00 latency_stats.avg.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
61499 +-130% -92.6% 4534 +- 16% latency_stats.avg.expand_files.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4391 +-138% -70.9% 1277 +-129% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
67311 +-112% -48.5% 34681 +- 36% latency_stats.avg.max
3956 +-138% +320.4% 16635 +-140% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
164.67 +- 30% +7264.0% 12126 +-138% latency_stats.avg.flush_work.fsnotify_destroy_group.inotify_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +5.4e+105% 5367 +-141% latency_stats.avg.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
36937 +-119% -100.0% 0.00 latency_stats.max.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3604 +-141% -100.0% 0.00 latency_stats.max.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
84146 +-107% -72.5% 23171 +- 31% latency_stats.max.expand_files.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4391 +-138% -70.9% 1277 +-129% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
5817 +- 83% -69.7% 1760 +- 67% latency_stats.max.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6720 +-137% +1628.2% 116147 +-141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
164.67 +- 30% +7264.0% 12126 +-138% latency_stats.max.flush_work.fsnotify_destroy_group.inotify_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.2e+106% 12153 +-141% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
110122 +-120% -100.0% 0.00 latency_stats.sum.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3604 +-141% -100.0% 0.00 latency_stats.sum.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
12078828 +-139% -99.3% 89363 +- 29% latency_stats.sum.expand_files.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
144453 +-120% -80.9% 27650 +- 19% latency_stats.sum.poll_schedule_timeout.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
4391 +-138% -70.9% 1277 +-129% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
9438 +- 86% -68.4% 2980 +- 35% latency_stats.sum.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
31656 +-138% +320.4% 133084 +-140% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
164.67 +- 30% +7264.0% 12126 +-138% latency_stats.sum.flush_work.fsnotify_destroy_group.inotify_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +8.8e+105% 8760 +-141% latency_stats.sum.msleep_interruptible.uart_wait_until_sent.tty_wait_until_sent.tty_port_close_start.tty_port_close.tty_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.3e+106% 12897 +-141% latency_stats.sum.tty_wait_until_sent.tty_port_close_start.tty_port_close.tty_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.2e+106% 32207 +-141% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.43 +- 3% -44.4 0.00 perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
44.13 +- 3% -44.1 0.00 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
43.95 +- 3% -43.9 0.00 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
41.85 +- 4% -41.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
7.74 +- 8% -7.7 0.00 perf-profile.calltrace.cycles-pp.copy_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.19 +- 4% -7.2 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.15 +- 4% -7.2 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.09 +- 3% -5.1 0.00 perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
4.99 +- 3% -5.0 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
0.93 +- 6% -0.1 0.81 +- 2% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +0.8 0.84 perf-profile.calltrace.cycles-pp._raw_spin_lock.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.9 0.92 +- 3% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +1.1 1.08 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
0.00 +1.1 1.14 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.2 1.17 perf-profile.calltrace.cycles-pp.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.3 1.29 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +1.3 1.31 perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
61.62 +1.7 63.33 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
41.73 +- 4% +3.0 44.75 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.00 +4.6 4.55 +- 15% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +4.6 4.65 +- 14% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +6.6 6.57 +- 10% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +6.6 6.61 +- 10% perf-profile.calltrace.cycles-pp.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +7.2 7.25 +- 2% perf-profile.calltrace.cycles-pp.copy_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
41.41 +- 70% +22.3 63.67 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
42.19 +- 70% +22.6 64.75 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
42.20 +- 70% +22.6 64.76 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
42.27 +- 70% +22.6 64.86 perf-profile.calltrace.cycles-pp.page_fault
0.00 +44.9 44.88 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +46.9 46.92 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
0.00 +47.1 47.10 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +47.4 47.37 perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +63.0 63.00 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.97 +- 6% -0.1 0.84 +- 2% perf-profile.children.cycles-pp.find_get_entry
1.23 +- 6% -0.1 1.11 perf-profile.children.cycles-pp.find_lock_entry
0.09 +- 10% -0.0 0.07 +- 6% perf-profile.children.cycles-pp.unlock_page
0.19 +- 4% +0.0 0.21 +- 2% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.21 +- 2% +0.0 0.25 perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 +- 8% perf-profile.children.cycles-pp.get_vma_policy
0.00 +0.1 0.08 perf-profile.children.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.18 +- 2% perf-profile.children.cycles-pp.__page_add_new_anon_rmap
0.00 +1.3 1.30 perf-profile.children.cycles-pp.pte_map_lock
63.40 +1.6 64.97 perf-profile.children.cycles-pp.__do_page_fault
63.19 +1.6 64.83 perf-profile.children.cycles-pp.do_page_fault
61.69 +1.7 63.36 perf-profile.children.cycles-pp.__handle_mm_fault
63.19 +1.7 64.86 perf-profile.children.cycles-pp.page_fault
61.99 +1.7 63.70 perf-profile.children.cycles-pp.handle_mm_fault
72.27 +2.2 74.52 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
67.51 +2.4 69.87 perf-profile.children.cycles-pp._raw_spin_lock
44.49 +- 3% +3.0 47.45 perf-profile.children.cycles-pp.alloc_pages_vma
44.28 +- 3% +3.0 47.26 perf-profile.children.cycles-pp.__alloc_pages_nodemask
44.13 +- 3% +3.0 47.12 perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +63.1 63.06 perf-profile.children.cycles-pp.handle_pte_fault
1.46 +- 7% -0.5 1.01 perf-profile.self.cycles-pp._raw_spin_lock
0.58 +- 6% -0.2 0.34 perf-profile.self.cycles-pp.__handle_mm_fault
0.55 +- 6% -0.1 0.44 +- 2% perf-profile.self.cycles-pp.find_get_entry
0.22 +- 5% -0.1 0.16 +- 2% perf-profile.self.cycles-pp.alloc_set_pte
0.10 +- 8% -0.0 0.08 perf-profile.self.cycles-pp.down_read_trylock
0.09 +- 5% -0.0 0.07 perf-profile.self.cycles-pp.unlock_page
0.06 -0.0 0.05 perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.20 +- 2% +0.0 0.24 +- 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 perf-profile.self.cycles-pp.finish_fault
0.00 +0.1 0.05 perf-profile.self.cycles-pp.get_vma_policy
0.00 +0.1 0.08 +- 6% perf-profile.self.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.25 perf-profile.self.cycles-pp.handle_pte_fault
0.00 +0.5 0.46 +- 7% perf-profile.self.cycles-pp.pte_map_lock
72.26 +2.3 74.52 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
[-- Attachment #3: perf-profile.zip --]
[-- Type: application/zip, Size: 19025 bytes --]
* Re: [PATCH v11 00/26] Speculative page faults
@ 2018-06-19 9:16 ` Haiyan Song
0 siblings, 0 replies; 106+ messages in thread
From: Haiyan Song @ 2018-06-19 9:16 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 31691 bytes --]
On Mon, Jun 11, 2018 at 05:15:22PM +0200, Laurent Dufour wrote:
Hi Laurent,
For the perf data collected on the Intel 4s Skylake platform, please find attached the comparison
between the base and head commits, which includes the perf-profile comparison information.
I have also attached some perf-profile.json files captured from the page_fault2 and page_fault3 test
results for checking the regression, thanks.
Best regards,
Haiyan Song
> Hi Haiyan,
>
> I don't have access to the same hardware you ran the tests on, but I gave those
> tests a try on a Power8 system (2 sockets, 5 cores/socket, 8 threads/core, 80 CPUs, 32GB).
> I ran each will-it-scale test 10 times and computed the average.
>
> test THP enabled 4.17.0-rc4-mm1 spf delta
> page_fault3_threads 2697.7 2683.5 -0.53%
> page_fault2_threads 170660.6 169574.1 -0.64%
> context_switch1_threads 6915269.2 6877507.3 -0.55%
> context_switch1_processes 6478076.2 6529493.5 0.79%
> rk1 243391.2 238527.5 -2.00%
>
> Tests were launched with the arguments '-t 80 -s 5', and only the average report is
> taken into account. Note that the page size is 64K by default on ppc64.
>
> It would be nice if you could capture some perf data to figure out why the
> page_fault2/3 are showing such a performance regression.
>
> Thanks,
> Laurent.
>
> On 11/06/2018 09:49, Song, HaiyanX wrote:
> > Hi Laurent,
> >
> > Regression tests for the v11 patch series have been run, and some regressions were found by LKP-tools (Linux Kernel Performance)
> > on the Intel 4s Skylake platform. This time we only tested the cases which had been run and shown regressions on
> > the v9 patch series.
> >
> > The regression result is sorted by the metric will-it-scale.per_thread_ops.
> > branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> > commit id:
> > head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
> > base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
> > Benchmark: will-it-scale
> > Download link: https://github.com/antonblanchard/will-it-scale/tree/master
> >
> > Metrics:
> > will-it-scale.per_process_ops=processes/nr_cpu
> > will-it-scale.per_thread_ops=threads/nr_cpu
> > test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> > THP: enable / disable
> > nr_task:100%
> >
> > 1. Regressions:
> >
> > a). Enable THP
> > testcase base change head metric
> > page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
> > page_fault2/enable THP         8281        -18.8%       6728      will-it-scale.per_thread_ops
> > brk1/enable THP              998475         -2.2%     976893      will-it-scale.per_process_ops
> > context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> > context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
> >
> > b). Disable THP
> > page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> > page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> > brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> > context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> > brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> > page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> > context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
> >
> > Note: for the above test result values, higher is better.
> >
> > 2. Improvements: no improvements were found in the selected test cases.
> >
> >
> > Best regards
> > Haiyan Song
> > ________________________________________
> > From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> > Sent: Monday, May 28, 2018 4:54 PM
> > To: Song, HaiyanX
> > Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> > Subject: Re: [PATCH v11 00/26] Speculative page faults
> >
> > On 28/05/2018 10:22, Haiyan Song wrote:
> >> Hi Laurent,
> >>
> >> Yes, these tests are done on V9 patch.
> >
> > Do you plan to give this V11 a run ?
> >
> >>
> >>
> >> Best regards,
> >> Haiyan Song
> >>
> >> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
> >>> On 28/05/2018 07:23, Song, HaiyanX wrote:
> >>>>
> >>>> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series
> >>>> tested on the Intel 4s Skylake platform.
> >>>
> >>> Hi,
> >>>
> >>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
> >>> series" while responding to the v11 series header...
> >>> Were these tests done on v9 or v11?
> >>>
> >>> Cheers,
> >>> Laurent.
> >>>
> >>>>
> >>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> >>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
> >>>> Commit id:
> >>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
> >>>> head commit: 0355322b3577eeab7669066df42c550a56801110
> >>>> Benchmark suite: will-it-scale
> >>>> Download link:
> >>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
> >>>> Metrics:
> >>>> will-it-scale.per_process_ops=processes/nr_cpu
> >>>> will-it-scale.per_thread_ops=threads/nr_cpu
> >>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> >>>> THP: enable / disable
> >>>> nr_task: 100%
> >>>>
> >>>> 1. Regressions:
> >>>> a) THP enabled:
> >>>> testcase base change head metric
> >>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
> >>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
> >>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
> >>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
> >>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
> >>>>
> >>>> b) THP disabled:
> >>>> testcase base change head metric
> >>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
> >>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
> >>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
> >>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
> >>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
> >>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
> >>>>
> >>>> 2. Improvements:
> >>>> a) THP enabled:
> >>>> testcase base change head metric
> >>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
> >>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
> >>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
> >>>>
> >>>> b) THP disabled:
> >>>> testcase base change head metric
> >>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
> >>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
> >>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
> >>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
> >>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
> >>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
> >>>>
> >>>> Note: for the values in the "change" column above, a higher value means that the related testcase result
> >>>> on the head commit is better than that on the base commit for this benchmark.
> >>>>
> >>>>
> >>>> Best regards
> >>>> Haiyan Song
> >>>>
> >>>> ________________________________________
> >>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> >>>> Sent: Thursday, May 17, 2018 7:06 PM
> >>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
> >>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> >>>> Subject: [PATCH v11 00/26] Speculative page faults
> >>>>
> >>>> This is a port to kernel 4.17 of the work done by Peter Zijlstra to handle
> >>>> page faults without holding the mm semaphore [1].
> >>>>
> >>>> The idea is to try to handle user space page faults without holding the
> >>>> mmap_sem. This should allow better concurrency for massively threaded
> >>>> processes, since the page fault handler will no longer wait for other
> >>>> threads' memory layout changes to complete, assuming those changes are
> >>>> made in another part of the process's memory space. This type of page
> >>>> fault is named a speculative page fault. If the speculative page fault
> >>>> fails, because concurrency is detected or because the underlying PMD or
> >>>> PTE tables are not yet allocated, its processing is aborted and a
> >>>> classic page fault is tried instead.
> >>>>
> >>>> The speculative page fault (SPF) has to look for the VMA matching the fault
> >>>> address without holding the mmap_sem. This is done by introducing a rwlock
> >>>> which protects access to the mm_rb tree. Previously this was done using
> >>>> SRCU, but that introduced a lot of scheduling work to process the VMA
> >>>> freeing operations, which hurt performance by 20% as reported by
> >>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
> >>>> the locking contention to these operations, which are expected to be
> >>>> O(log n). In addition, to ensure that the VMA is not freed behind our
> >>>> back, a reference count is added, and two services (get_vma() and
> >>>> put_vma()) are introduced to handle the reference count. Once a VMA is
> >>>> fetched from the RB tree using get_vma(), it must later be freed using
> >>>> put_vma(). With this scheme, I can no longer see the overhead I
> >>>> previously got in the will-it-scale benchmark.
> >>>>
> >>>> The VMA's attributes checked during the speculative page fault processing
> >>>> have to be protected against parallel changes. This is done by using a per
> >>>> VMA sequence lock. This sequence lock allows the speculative page fault
> >>>> handler to fast check for parallel changes in progress and to abort the
> >>>> speculative page fault in that case.
> >>>>
> >>>> Once the VMA has been found, the speculative page fault handler checks
> >>>> the VMA's attributes to verify whether the page fault can be handled
> >>>> correctly. The VMA is protected through a sequence lock which allows
> >>>> fast detection of concurrent VMA changes. If such a change is detected,
> >>>> the speculative page fault is aborted and a *classic* page fault is
> >>>> tried instead. VMA sequence locking is added around modifications of
> >>>> the VMA attributes which are checked during the page fault.
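A minimal model of this per-VMA sequence-count check (illustrative only: the names and the plain-C11-atomics implementation below are simplified stand-ins, not the kernel's seqcount API) shows the abort-on-change behavior: writers bump the counter to an odd value while modifying and back to even when done, and a speculative reader aborts if it saw an odd count or the count changed under it:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical model of the per-VMA sequence count. */
struct vma_model {
    atomic_uint seq;          /* odd while a writer is modifying */
    unsigned long vm_flags;   /* one of the attributes being protected */
};

static void vma_write_begin(struct vma_model *v)
{
    atomic_fetch_add_explicit(&v->seq, 1, memory_order_release); /* even -> odd */
}

static void vma_write_end(struct vma_model *v)
{
    atomic_fetch_add_explicit(&v->seq, 1, memory_order_release); /* odd -> even */
}

/* Returns 0 on a consistent snapshot of vm_flags, -1 if the
 * speculative path must abort and fall back to a classic fault. */
static int vma_read_flags(struct vma_model *v, unsigned long *flags)
{
    unsigned int before = atomic_load_explicit(&v->seq, memory_order_acquire);

    if (before & 1)
        return -1;            /* writer in progress */
    *flags = v->vm_flags;
    if (atomic_load_explicit(&v->seq, memory_order_acquire) != before)
        return -1;            /* VMA changed under us */
    return 0;
}
```

This mirrors the cover letter's description: the reader never blocks, it simply detects that a concurrent modification happened and abandons the speculative path.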
> >>>>
> >>>> When the PTE is fetched, the VMA is checked to see whether it has been
> >>>> changed, so once the page table is locked, the VMA is known to be valid.
> >>>> Any other change that would touch this PTE will need to lock the page
> >>>> table, so no parallel change is possible at this time.
> >>>>
> >>>> The PTE is locked with interrupts disabled; this allows checking the
> >>>> PMD to ensure that there is no collapse operation in progress. Since
> >>>> khugepaged first sets the PMD to pmd_none and then waits for the other
> >>>> CPUs to acknowledge the IPI, if the PMD is valid at the time the PTE is
> >>>> locked, we have the guarantee that the collapse operation will have to
> >>>> wait on the PTE lock to move forward. This allows the SPF handler to
> >>>> map the PTE safely. If the PMD value differs from the one recorded at
> >>>> the beginning of the SPF operation, the classic page fault handler is
> >>>> called to handle the fault while holding the mmap_sem. Because the PTE
> >>>> is locked with interrupts disabled, the lock is taken using
> >>>> spin_trylock() to avoid a deadlock when handling a page fault while a
> >>>> TLB invalidation is requested by another CPU holding the PTE lock.
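The fail-fast locking can be sketched in user space with pthread_mutex_trylock() standing in for spin_trylock(). This is a simplified model: the real pte_map_lock() path additionally re-validates the PMD and the VMA sequence count while interrupts are off, and the function names here are hypothetical:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for the PTE spinlock. */
static pthread_mutex_t pte_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mimics the SPF locking rule: never spin on the PTE lock while
 * "interrupts" are disabled, because the current lock holder may be
 * waiting for this CPU to service a TLB invalidation IPI. Fail fast
 * and let the caller retry or fall back to the classic fault path. */
static bool pte_map_trylock(void)
{
	return pthread_mutex_trylock(&pte_lock) == 0;
}

static void pte_unlock(void)
{
	pthread_mutex_unlock(&pte_lock);
}
```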
> >>>>
> >>>> In pseudo code, this could be seen as:
> >>>> speculative_page_fault()
> >>>> {
> >>>> vma = get_vma()
> >>>> check vma sequence count
> >>>> check vma's support
> >>>> disable interrupt
> >>>> check pgd,p4d,...,pte
> >>>> save pmd and pte in vmf
> >>>> save vma sequence counter in vmf
> >>>> enable interrupt
> >>>> check vma sequence count
> >>>> handle_pte_fault(vma)
> >>>> ..
> >>>> page = alloc_page()
> >>>> pte_map_lock()
> >>>> disable interrupt
> >>>> abort if sequence counter has changed
> >>>> abort if pmd or pte has changed
> >>>> pte map and lock
> >>>> enable interrupt
> >>>> if abort
> >>>> free page
> >>>> abort
> >>>> ...
> >>>> }
> >>>>
> >>>> arch_fault_handler()
> >>>> {
> >>>> if (speculative_page_fault(&vma))
> >>>> goto done
> >>>> again:
> >>>> lock(mmap_sem)
> >>>> vma = find_vma();
> >>>> handle_pte_fault(vma);
> >>>> if retry
> >>>> unlock(mmap_sem)
> >>>> goto again;
> >>>> done:
> >>>> handle fault error
> >>>> }
> >>>>
> >>>> Support for THP is not done because, when checking the PMD, we can be
> >>>> confused by an in-progress collapse operation done by khugepaged. The
> >>>> issue is that pmd_none() could be true either if the PMD is not yet
> >>>> populated or if the underlying PTEs are in the process of being
> >>>> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.
> >>>>
> >>>> This series adds a new software performance event named
> >>>> 'speculative-faults' or 'spf'. It counts the number of page faults
> >>>> successfully handled speculatively. When recording 'faults,spf' events,
> >>>> 'faults' counts the total number of page fault events while 'spf'
> >>>> counts only the faults processed speculatively.
> >>>>
> >>>> There are some trace events introduced by this series. They allow
> >>>> identifying why page faults were not processed speculatively. They do
> >>>> not account for the faults generated by a single-threaded process,
> >>>> which are processed directly while holding the mmap_sem. These trace
> >>>> events are grouped in a system named 'pagefault':
> >>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
> >>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
> >>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
> >>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
> >>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
> >>>> our back
> >>>>
> >>>> To record all the related events, the easiest way is to run perf with
> >>>> the following arguments:
> >>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
> >>>>
> >>>> There is also a dedicated vmstat counter showing the number of page
> >>>> faults successfully handled speculatively. It can be seen this way:
> >>>> $ grep speculative_pgfault /proc/vmstat
> >>>>
> >>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
> >>>> on x86, PowerPC and arm64.
> >>>>
> >>>> ---------------------
> >>>> Real Workload results
> >>>>
> >>>> As mentioned in a previous email, we did unofficial runs using a
> >>>> "popular in-memory multithreaded database product" on a 176-core SMT8
> >>>> Power system, which showed a 30% improvement in the number of
> >>>> transactions processed per second. This run was done on the v6 series,
> >>>> but the changes introduced in this new version should not impact the
> >>>> performance boost seen.
> >>>>
> >>>> Here are the perf data captured during 2 of these runs on top of the v8
> >>>> series:
> >>>> vanilla spf
> >>>> faults 89.418 101.364 +13%
> >>>> spf n/a 97.989
> >>>>
> >>>> With the SPF kernel, most of the page faults were processed
> >>>> speculatively.
> >>>>
> >>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
> >>>> it a try on an Android device. He reported that the application launch
> >>>> time was improved on average by 6%, and for large applications (~100
> >>>> threads) by 20%.
> >>>>
> >>>> Here are the launch times Ganesh measured on Android 8.0 on top of a
> >>>> Qcom MSM845 (8 cores) with 6GB of RAM (lower is better):
> >>>>
> >>>> Application 4.9 4.9+spf delta
> >>>> com.tencent.mm 416 389 -7%
> >>>> com.eg.android.AlipayGphone 1135 986 -13%
> >>>> com.tencent.mtt 455 454 0%
> >>>> com.qqgame.hlddz 1497 1409 -6%
> >>>> com.autonavi.minimap 711 701 -1%
> >>>> com.tencent.tmgp.sgame 788 748 -5%
> >>>> com.immomo.momo 501 487 -3%
> >>>> com.tencent.peng 2145 2112 -2%
> >>>> com.smile.gifmaker 491 461 -6%
> >>>> com.baidu.BaiduMap 479 366 -23%
> >>>> com.taobao.taobao 1341 1198 -11%
> >>>> com.baidu.searchbox 333 314 -6%
> >>>> com.tencent.mobileqq 394 384 -3%
> >>>> com.sina.weibo 907 906 0%
> >>>> com.youku.phone 816 731 -11%
> >>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
> >>>> com.UCMobile 415 411 -1%
> >>>> com.tencent.tmgp.ak 1464 1431 -2%
> >>>> com.tencent.qqmusic 336 329 -2%
> >>>> com.sankuai.meituan 1661 1302 -22%
> >>>> com.netease.cloudmusic 1193 1200 1%
> >>>> air.tv.douyu.android 4257 4152 -2%
> >>>>
> >>>> ------------------
> >>>> Benchmarks results
> >>>>
> >>>> Base kernel is v4.17.0-rc4-mm1
> >>>> SPF is BASE + this series
> >>>>
> >>>> Kernbench:
> >>>> ----------
> >>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
> >>>> kernel (the kernel is built 5 times):
> >>>>
> >>>> Average Half load -j 8
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
> >>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
> >>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
> >>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
> >>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
> >>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
> >>>>
> >>>> Average Optimal load -j 16
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
> >>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
> >>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
> >>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
> >>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
> >>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
> >>>>
> >>>>
> >>>> During a run on the SPF, perf events were captured:
> >>>> Performance counter stats for '../kernbench -M':
> >>>> 526743764 faults
> >>>> 210 spf
> >>>> 3 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 2278 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> Very few speculative page faults were recorded, as most of the
> >>>> processes involved are single-threaded (it seems that on this
> >>>> architecture some threads were created during the kernel build).
> >>>>
> >>>> Here are the kernbench results on an 80-CPU Power8 system:
> >>>>
> >>>> Average Half load -j 40
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
> >>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
> >>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
> >>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
> >>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
> >>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
> >>>>
> >>>> Average Optimal load -j 80
> >>>> Run (std deviation)
> >>>> BASE SPF
> >>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
> >>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
> >>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
> >>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
> >>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
> >>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
> >>>>
> >>>> During a run on the SPF, perf events were captured:
> >>>> Performance counter stats for '../kernbench -M':
> >>>> 116730856 faults
> >>>> 0 spf
> >>>> 3 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 476 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> Most of the processes involved are single-threaded, so SPF is not
> >>>> activated, but there is no impact on performance.
> >>>>
> >>>> Ebizzy:
> >>>> -------
> >>>> The test counts the number of records per second it can manage; higher
> >>>> is better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
> >>>> consistent results, I repeated the test 100 times and measured the
> >>>> average.
> >>>>
> >>>> BASE SPF delta
> >>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
> >>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
> >>>>
> >>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
> >>>> Performance counter stats for './ebizzy -mTt 16':
> >>>> 1706379 faults
> >>>> 1674599 spf
> >>>> 30588 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 363 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> And the ones captured during a run on a 80 CPUs Power node:
> >>>> Performance counter stats for './ebizzy -mTt 80':
> >>>> 1874773 faults
> >>>> 1461153 spf
> >>>> 413293 pagefault:spf_vma_changed
> >>>> 0 pagefault:spf_vma_noanon
> >>>> 200 pagefault:spf_vma_notsup
> >>>> 0 pagefault:spf_vma_access
> >>>> 0 pagefault:spf_pmd_changed
> >>>>
> >>>> In ebizzy's case most of the page faults were handled speculatively,
> >>>> leading to the ebizzy performance boost.
> >>>>
> >>>> ------------------
> >>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
> >>>> - Addressed all review feedback from Punit Agrawal, Ganesh Mahendran
> >>>> and Minchan Kim, hopefully.
> >>>> - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
> >>>> __do_page_fault().
> >>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try lock
> >>>> fails, instead of aborting the speculative page fault handling.
> >>>> Dropped the now useless trace event pagefault:spf_pte_lock.
> >>>> - No longer try to reuse the fetched VMA during the speculative page
> >>>> fault handling when retrying is needed. This added a lot of
> >>>> complexity, and additional tests didn't show a significant
> >>>> performance improvement.
> >>>> - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build
> >>>> error.
> >>>>
> >>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
> >>>> [2] https://patchwork.kernel.org/patch/9999687/
> >>>>
> >>>>
> >>>> Laurent Dufour (20):
> >>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
> >>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> >>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> >>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
> >>>> mm: make pte_unmap_same compatible with SPF
> >>>> mm: introduce INIT_VMA()
> >>>> mm: protect VMA modifications using VMA sequence count
> >>>> mm: protect mremap() against SPF handler
> >>>> mm: protect SPF handler against anon_vma changes
> >>>> mm: cache some VMA fields in the vm_fault structure
> >>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
> >>>> mm: introduce __lru_cache_add_active_or_unevictable
> >>>> mm: introduce __vm_normal_page()
> >>>> mm: introduce __page_add_new_anon_rmap()
> >>>> mm: protect mm_rb tree with a rwlock
> >>>> mm: adding speculative page fault failure trace events
> >>>> perf: add a speculative page fault sw event
> >>>> perf tools: add support for the SPF perf event
> >>>> mm: add speculative page fault vmstats
> >>>> powerpc/mm: add speculative page fault
> >>>>
> >>>> Mahendran Ganesh (2):
> >>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> >>>> arm64/mm: add speculative page fault
> >>>>
> >>>> Peter Zijlstra (4):
> >>>> mm: prepare for FAULT_FLAG_SPECULATIVE
> >>>> mm: VMA sequence count
> >>>> mm: provide speculative fault infrastructure
> >>>> x86/mm: add speculative pagefault handling
> >>>>
> >>>> arch/arm64/Kconfig | 1 +
> >>>> arch/arm64/mm/fault.c | 12 +
> >>>> arch/powerpc/Kconfig | 1 +
> >>>> arch/powerpc/mm/fault.c | 16 +
> >>>> arch/x86/Kconfig | 1 +
> >>>> arch/x86/mm/fault.c | 27 +-
> >>>> fs/exec.c | 2 +-
> >>>> fs/proc/task_mmu.c | 5 +-
> >>>> fs/userfaultfd.c | 17 +-
> >>>> include/linux/hugetlb_inline.h | 2 +-
> >>>> include/linux/migrate.h | 4 +-
> >>>> include/linux/mm.h | 136 +++++++-
> >>>> include/linux/mm_types.h | 7 +
> >>>> include/linux/pagemap.h | 4 +-
> >>>> include/linux/rmap.h | 12 +-
> >>>> include/linux/swap.h | 10 +-
> >>>> include/linux/vm_event_item.h | 3 +
> >>>> include/trace/events/pagefault.h | 80 +++++
> >>>> include/uapi/linux/perf_event.h | 1 +
> >>>> kernel/fork.c | 5 +-
> >>>> mm/Kconfig | 22 ++
> >>>> mm/huge_memory.c | 6 +-
> >>>> mm/hugetlb.c | 2 +
> >>>> mm/init-mm.c | 3 +
> >>>> mm/internal.h | 20 ++
> >>>> mm/khugepaged.c | 5 +
> >>>> mm/madvise.c | 6 +-
> >>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
> >>>> mm/mempolicy.c | 51 ++-
> >>>> mm/migrate.c | 6 +-
> >>>> mm/mlock.c | 13 +-
> >>>> mm/mmap.c | 229 ++++++++++---
> >>>> mm/mprotect.c | 4 +-
> >>>> mm/mremap.c | 13 +
> >>>> mm/nommu.c | 2 +-
> >>>> mm/rmap.c | 5 +-
> >>>> mm/swap.c | 6 +-
> >>>> mm/swap_state.c | 8 +-
> >>>> mm/vmstat.c | 5 +-
> >>>> tools/include/uapi/linux/perf_event.h | 1 +
> >>>> tools/perf/util/evsel.c | 1 +
> >>>> tools/perf/util/parse-events.c | 4 +
> >>>> tools/perf/util/parse-events.l | 1 +
> >>>> tools/perf/util/python.c | 1 +
> >>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
> >>>> create mode 100644 include/trace/events/pagefault.h
> >>>>
> >>>> --
> >>>> 2.7.4
> >>>>
> >>>>
> >>>
> >>
> >
>
[-- Attachment #2: compare-result.txt --]
[-- Type: text/plain, Size: 183477 bytes --]
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/page_fault3/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
44:3 -13% 43:3 perf-profile.calltrace.cycles-pp.error_entry
22:3 -6% 22:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
44:3 -13% 44:3 perf-profile.children.cycles-pp.error_entry
21:3 -7% 21:3 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
10519 ± 3% -20.5% 8368 ± 6% will-it-scale.per_thread_ops
118098 +11.2% 131287 ± 2% will-it-scale.time.involuntary_context_switches
6.084e+08 ± 3% -20.4% 4.845e+08 ± 6% will-it-scale.time.minor_page_faults
7467 +5.0% 7841 will-it-scale.time.percent_of_cpu_this_job_got
44922 +5.0% 47176 will-it-scale.time.system_time
7126337 ± 3% -15.4% 6025689 ± 6% will-it-scale.time.voluntary_context_switches
91905646 -1.3% 90673935 will-it-scale.workload
27.15 ± 6% -8.7% 24.80 ± 10% boot-time.boot
2516213 ± 6% +8.3% 2726303 interrupts.CAL:Function_call_interrupts
388.00 ± 9% +60.2% 621.67 ± 20% irq_exception_noise.softirq_nr
11.28 ± 2% -1.9 9.37 ± 4% mpstat.cpu.idle%
10065 ±140% +243.4% 34559 ± 4% numa-numastat.node0.other_node
18739 -11.6% 16573 ± 3% uptime.idle
29406 ± 2% -11.8% 25929 ± 5% vmstat.system.cs
329614 ± 8% +17.0% 385618 ± 10% meminfo.DirectMap4k
237851 +21.2% 288160 ± 5% meminfo.Inactive
237615 +21.2% 287924 ± 5% meminfo.Inactive(anon)
7917847 -10.7% 7071860 softirqs.RCU
4784181 ± 3% -14.5% 4089039 ± 4% softirqs.SCHED
45666107 ± 7% +12.9% 51535472 ± 3% softirqs.TIMER
2.617e+09 ± 2% -13.9% 2.253e+09 ± 6% cpuidle.C1E.time
6688774 ± 2% -12.8% 5835101 ± 5% cpuidle.C1E.usage
1.022e+10 ± 2% -18.0% 8.376e+09 ± 3% cpuidle.C6.time
13440993 ± 2% -16.3% 11243794 ± 4% cpuidle.C6.usage
54781 ± 16% +37.5% 75347 ± 12% numa-meminfo.node0.Inactive
54705 ± 16% +37.7% 75347 ± 12% numa-meminfo.node0.Inactive(anon)
52522 +35.0% 70886 ± 6% numa-meminfo.node2.Inactive
52443 +34.7% 70653 ± 6% numa-meminfo.node2.Inactive(anon)
31046 ± 6% +30.3% 40457 ± 11% numa-meminfo.node2.SReclaimable
58563 +21.1% 70945 ± 6% proc-vmstat.nr_inactive_anon
58564 +21.1% 70947 ± 6% proc-vmstat.nr_zone_inactive_anon
69701118 -1.2% 68842151 proc-vmstat.pgalloc_normal
2.765e+10 -1.3% 2.729e+10 proc-vmstat.pgfault
69330418 -1.2% 68466824 proc-vmstat.pgfree
118098 +11.2% 131287 ± 2% time.involuntary_context_switches
6.084e+08 ± 3% -20.4% 4.845e+08 ± 6% time.minor_page_faults
7467 +5.0% 7841 time.percent_of_cpu_this_job_got
44922 +5.0% 47176 time.system_time
7126337 ± 3% -15.4% 6025689 ± 6% time.voluntary_context_switches
13653 ± 16% +33.5% 18225 ± 12% numa-vmstat.node0.nr_inactive_anon
13651 ± 16% +33.5% 18224 ± 12% numa-vmstat.node0.nr_zone_inactive_anon
13069 ± 3% +30.1% 17001 ± 4% numa-vmstat.node2.nr_inactive_anon
134.67 ± 42% -49.5% 68.00 ± 31% numa-vmstat.node2.nr_mlock
7758 ± 6% +30.4% 10112 ± 11% numa-vmstat.node2.nr_slab_reclaimable
13066 ± 3% +30.1% 16998 ± 4% numa-vmstat.node2.nr_zone_inactive_anon
1039 ± 11% -17.5% 857.33 slabinfo.Acpi-ParseExt.active_objs
1039 ± 11% -17.5% 857.33 slabinfo.Acpi-ParseExt.num_objs
2566 ± 6% -8.8% 2340 ± 5% slabinfo.biovec-64.active_objs
2566 ± 6% -8.8% 2340 ± 5% slabinfo.biovec-64.num_objs
898.33 ± 3% -9.5% 813.33 ± 3% slabinfo.kmem_cache_node.active_objs
1066 ± 2% -8.0% 981.33 ± 3% slabinfo.kmem_cache_node.num_objs
1940 +2.3% 1984 turbostat.Avg_MHz
6679037 ± 2% -12.7% 5830270 ± 5% turbostat.C1E
2.25 ± 2% -0.3 1.94 ± 6% turbostat.C1E%
13418115 -16.3% 11234510 ± 4% turbostat.C6
8.75 ± 2% -1.6 7.18 ± 3% turbostat.C6%
5.99 ± 2% -14.4% 5.13 ± 4% turbostat.CPU%c1
5.01 ± 3% -20.1% 4.00 ± 4% turbostat.CPU%c6
1.77 ± 3% -34.7% 1.15 turbostat.Pkg%pc2
1.378e+13 +1.2% 1.394e+13 perf-stat.branch-instructions
0.98 -0.0 0.94 perf-stat.branch-miss-rate%
1.344e+11 -2.3% 1.313e+11 perf-stat.branch-misses
1.076e+11 -1.8% 1.057e+11 perf-stat.cache-misses
2.258e+11 -2.1% 2.21e+11 perf-stat.cache-references
17788064 ± 2% -11.9% 15674207 ± 6% perf-stat.context-switches
2.241e+14 +2.4% 2.294e+14 perf-stat.cpu-cycles
1.929e+13 +2.2% 1.971e+13 perf-stat.dTLB-loads
4.01 -0.2 3.83 perf-stat.dTLB-store-miss-rate%
4.519e+11 -1.3% 4.461e+11 perf-stat.dTLB-store-misses
1.082e+13 +3.6% 1.121e+13 perf-stat.dTLB-stores
3.02e+10 +23.2% 3.721e+10 ± 3% perf-stat.iTLB-load-misses
2.721e+08 ± 8% -8.8% 2.481e+08 ± 3% perf-stat.iTLB-loads
6.985e+13 +1.8% 7.111e+13 perf-stat.instructions
2313 -17.2% 1914 ± 3% perf-stat.instructions-per-iTLB-miss
2.764e+10 -1.3% 2.729e+10 perf-stat.minor-faults
1.421e+09 ± 2% -16.4% 1.188e+09 ± 9% perf-stat.node-load-misses
1.538e+10 -9.3% 1.395e+10 perf-stat.node-loads
9.75 +1.4 11.10 perf-stat.node-store-miss-rate%
3.012e+09 +14.1% 3.437e+09 perf-stat.node-store-misses
2.789e+10 -1.3% 2.753e+10 perf-stat.node-stores
2.764e+10 -1.3% 2.729e+10 perf-stat.page-faults
760059 +3.2% 784235 perf-stat.path-length
193545 ± 25% -57.8% 81757 ± 46% sched_debug.cfs_rq:/.MIN_vruntime.avg
26516863 ± 19% -49.7% 13338070 ± 33% sched_debug.cfs_rq:/.MIN_vruntime.max
2202271 ± 21% -53.2% 1029581 ± 38% sched_debug.cfs_rq:/.MIN_vruntime.stddev
193545 ± 25% -57.8% 81757 ± 46% sched_debug.cfs_rq:/.max_vruntime.avg
26516863 ± 19% -49.7% 13338070 ± 33% sched_debug.cfs_rq:/.max_vruntime.max
2202271 ± 21% -53.2% 1029581 ± 38% sched_debug.cfs_rq:/.max_vruntime.stddev
0.32 ± 70% +253.2% 1.14 ± 54% sched_debug.cfs_rq:/.removed.load_avg.avg
4.44 ± 70% +120.7% 9.80 ± 27% sched_debug.cfs_rq:/.removed.load_avg.stddev
14.90 ± 70% +251.0% 52.31 ± 53% sched_debug.cfs_rq:/.removed.runnable_sum.avg
205.71 ± 70% +119.5% 451.60 ± 27% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
0.16 ± 70% +237.9% 0.54 ± 50% sched_debug.cfs_rq:/.removed.util_avg.avg
2.23 ± 70% +114.2% 4.77 ± 24% sched_debug.cfs_rq:/.removed.util_avg.stddev
573.70 ± 5% -9.7% 518.06 ± 6% sched_debug.cfs_rq:/.util_avg.min
114.87 ± 8% +14.1% 131.04 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
64.42 ± 54% -63.9% 23.27 ± 68% sched_debug.cpu.cpu_load[1].max
5.05 ± 48% -55.2% 2.26 ± 51% sched_debug.cpu.cpu_load[1].stddev
57.58 ± 59% -60.3% 22.88 ± 70% sched_debug.cpu.cpu_load[2].max
21019 ± 3% -15.1% 17841 ± 6% sched_debug.cpu.nr_switches.min
20797 ± 3% -15.0% 17670 ± 6% sched_debug.cpu.sched_count.min
10287 ± 3% -15.1% 8736 ± 6% sched_debug.cpu.sched_goidle.avg
13693 ± 2% -10.7% 12233 ± 5% sched_debug.cpu.sched_goidle.max
9976 ± 3% -16.0% 8381 ± 7% sched_debug.cpu.sched_goidle.min
0.00 ± 26% +98.9% 0.00 ± 28% sched_debug.rt_rq:/.rt_time.min
4230 ±141% -100.0% 0.00 latency_stats.avg.trace_module_notify.notifier_call_chain.blocking_notifier_call_chain.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
28498 ±141% -100.0% 0.00 latency_stats.avg.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4065 ±138% -92.2% 315.33 ± 91% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
0.00 +3.6e+105% 3641 ±141% latency_stats.avg.down.console_lock.console_device.tty_lookup_driver.tty_open.chrdev_open.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.5e+106% 25040 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.4e+106% 34015 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open
0.00 +4.8e+106% 47686 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
4230 ±141% -100.0% 0.00 latency_stats.max.trace_module_notify.notifier_call_chain.blocking_notifier_call_chain.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
28498 ±141% -100.0% 0.00 latency_stats.max.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4065 ±138% -92.2% 315.33 ± 91% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
4254 ±134% -88.0% 511.67 ± 90% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
43093 ± 35% +76.6% 76099 ±115% latency_stats.max.blk_execute_rq.scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
24139 ± 70% +228.5% 79285 ±105% latency_stats.max.blk_execute_rq.scsi_execute.scsi_test_unit_ready.sr_check_events.[sr_mod].cdrom_check_events.[cdrom].sr_block_check_events.[sr_mod].disk_check_events.disk_clear_events.check_disk_change.sr_block_open.[sr_mod].__blkdev_get.blkdev_get
0.00 +3.6e+105% 3641 ±141% latency_stats.max.down.console_lock.console_device.tty_lookup_driver.tty_open.chrdev_open.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.5e+106% 25040 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.4e+106% 34015 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open
0.00 +6.5e+106% 64518 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
4230 ±141% -100.0% 0.00 latency_stats.sum.trace_module_notify.notifier_call_chain.blocking_notifier_call_chain.do_init_module.load_module.__do_sys_finit_module.do_syscall_64.entry_SYSCALL_64_after_hwframe
28498 ±141% -100.0% 0.00 latency_stats.sum.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4065 ±138% -92.2% 315.33 ± 91% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
57884 ± 9% +47.3% 85264 ±118% latency_stats.sum.blk_execute_rq.scsi_execute.ioctl_internal_command.scsi_set_medium_removal.cdrom_release.[cdrom].sr_block_release.[sr_mod].__blkdev_put.blkdev_close.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
0.00 +3.6e+105% 3641 ±141% latency_stats.sum.down.console_lock.console_device.tty_lookup_driver.tty_open.chrdev_open.do_dentry_open.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.5e+106% 25040 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.4e+106% 34015 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_get_acl.get_acl.posix_acl_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open
0.00 +9.5e+106% 95373 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
11.70 -11.7 0.00 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
11.52 -11.5 0.00 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
10.44 -10.4 0.00 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
9.83 -9.8 0.00 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
9.55 -9.5 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
9.35 -9.3 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
6.81 -6.8 0.00 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
7.71 -0.3 7.45 perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.59 ± 7% -0.2 0.35 ± 70% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.__do_page_fault.do_page_fault.page_fault
0.59 ± 7% -0.2 0.35 ± 70% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.__do_page_fault.do_page_fault.page_fault
10.41 -0.2 10.24 perf-profile.calltrace.cycles-pp.native_irq_return_iret
7.68 -0.1 7.60 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.76 -0.1 0.70 perf-profile.calltrace.cycles-pp.down_read_trylock.__do_page_fault.do_page_fault.page_fault
1.38 -0.0 1.34 perf-profile.calltrace.cycles-pp.do_page_fault
1.05 -0.0 1.02 perf-profile.calltrace.cycles-pp.trace_graph_entry.do_page_fault
0.92 +0.0 0.94 perf-profile.calltrace.cycles-pp.find_vma.__do_page_fault.do_page_fault.page_fault
0.91 +0.0 0.93 perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.__do_page_fault.do_page_fault.page_fault
0.65 +0.0 0.67 perf-profile.calltrace.cycles-pp.set_page_dirty.unmap_page_range.unmap_vmas.unmap_region.do_munmap
0.62 +0.0 0.66 perf-profile.calltrace.cycles-pp.page_mapping.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
4.15 +0.1 4.27 perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region.do_munmap
10.17 +0.2 10.39 perf-profile.calltrace.cycles-pp.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.vm_munmap.__x64_sys_munmap.do_syscall_64
9.54 +0.2 9.76 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.vm_munmap
9.54 +0.2 9.76 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.vm_munmap.__x64_sys_munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.do_munmap.vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
9.56 +0.2 9.78 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
0.00 +0.6 0.56 ± 2% perf-profile.calltrace.cycles-pp.lock_page_memcg.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.6 0.59 perf-profile.calltrace.cycles-pp.page_mapping.set_page_dirty.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault
0.00 +0.6 0.60 perf-profile.calltrace.cycles-pp.current_time.file_update_time.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.7 0.68 perf-profile.calltrace.cycles-pp.___might_sleep.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +0.7 0.74 perf-profile.calltrace.cycles-pp.unlock_page.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.8 0.80 perf-profile.calltrace.cycles-pp.set_page_dirty.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.88 perf-profile.calltrace.cycles-pp._raw_spin_lock.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.9 0.91 perf-profile.calltrace.cycles-pp.__set_page_dirty_no_writeback.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +1.3 1.27 perf-profile.calltrace.cycles-pp.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.3 1.30 perf-profile.calltrace.cycles-pp.file_update_time.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +2.8 2.76 perf-profile.calltrace.cycles-pp.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +6.8 6.81 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +9.4 9.39 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +9.6 9.59 perf-profile.calltrace.cycles-pp.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +9.8 9.77 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
0.00 +10.4 10.37 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
0.00 +11.5 11.46 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +11.6 11.60 perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +26.6 26.62 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.88 -0.3 7.61 perf-profile.children.cycles-pp.find_get_entry
1.34 ± 8% -0.2 1.16 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
10.41 -0.2 10.24 perf-profile.children.cycles-pp.native_irq_return_iret
0.38 ± 28% -0.1 0.26 ± 4% perf-profile.children.cycles-pp.tick_sched_timer
11.80 -0.1 11.68 perf-profile.children.cycles-pp.__do_fault
0.55 ± 15% -0.1 0.43 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.60 -0.1 0.51 perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.38 ± 13% -0.1 0.29 ± 4% perf-profile.children.cycles-pp.ktime_get
7.68 -0.1 7.60 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
5.18 -0.1 5.12 perf-profile.children.cycles-pp.trace_graph_entry
0.79 -0.1 0.73 perf-profile.children.cycles-pp.down_read_trylock
7.83 -0.1 7.76 perf-profile.children.cycles-pp.sync_regs
3.01 -0.1 2.94 perf-profile.children.cycles-pp.fault_dirty_shared_page
1.02 -0.1 0.96 perf-profile.children.cycles-pp._raw_spin_lock
4.66 -0.1 4.61 perf-profile.children.cycles-pp.prepare_ftrace_return
0.37 ± 8% -0.1 0.32 ± 3% perf-profile.children.cycles-pp.current_kernel_time64
5.26 -0.1 5.21 perf-profile.children.cycles-pp.ftrace_graph_caller
0.66 ± 5% -0.1 0.61 perf-profile.children.cycles-pp.current_time
0.18 ± 5% -0.0 0.15 ± 3% perf-profile.children.cycles-pp.update_process_times
0.27 -0.0 0.26 perf-profile.children.cycles-pp._cond_resched
0.16 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.rcu_all_qs
0.94 +0.0 0.95 perf-profile.children.cycles-pp.vmacache_find
0.48 +0.0 0.50 perf-profile.children.cycles-pp.__mod_node_page_state
0.17 +0.0 0.19 ± 2% perf-profile.children.cycles-pp.__unlock_page_memcg
1.07 +0.0 1.10 perf-profile.children.cycles-pp.find_vma
0.79 ± 3% +0.1 0.86 ± 2% perf-profile.children.cycles-pp.lock_page_memcg
4.29 +0.1 4.40 perf-profile.children.cycles-pp.page_remove_rmap
1.39 ± 2% +0.1 1.52 perf-profile.children.cycles-pp.file_update_time
0.00 +0.2 0.16 perf-profile.children.cycles-pp.__vm_normal_page
9.63 +0.2 9.84 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
9.63 +0.2 9.84 perf-profile.children.cycles-pp.do_syscall_64
9.63 +0.2 9.84 perf-profile.children.cycles-pp.unmap_page_range
10.17 +0.2 10.39 perf-profile.children.cycles-pp.munmap
9.56 +0.2 9.78 perf-profile.children.cycles-pp.unmap_region
9.56 +0.2 9.78 perf-profile.children.cycles-pp.do_munmap
9.56 +0.2 9.78 perf-profile.children.cycles-pp.vm_munmap
9.56 +0.2 9.78 perf-profile.children.cycles-pp.__x64_sys_munmap
9.54 +0.2 9.77 perf-profile.children.cycles-pp.unmap_vmas
1.01 +0.2 1.25 perf-profile.children.cycles-pp.___might_sleep
0.00 +1.6 1.59 perf-profile.children.cycles-pp.pte_map_lock
0.00 +26.9 26.89 perf-profile.children.cycles-pp.handle_pte_fault
4.25 -1.0 3.24 perf-profile.self.cycles-pp.__handle_mm_fault
1.42 -0.3 1.11 perf-profile.self.cycles-pp.alloc_set_pte
4.87 -0.3 4.59 perf-profile.self.cycles-pp.find_get_entry
10.41 -0.2 10.24 perf-profile.self.cycles-pp.native_irq_return_iret
0.37 ± 13% -0.1 0.28 ± 4% perf-profile.self.cycles-pp.ktime_get
0.60 -0.1 0.51 perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
7.50 -0.1 7.42 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
7.83 -0.1 7.76 perf-profile.self.cycles-pp.sync_regs
4.85 -0.1 4.79 perf-profile.self.cycles-pp.trace_graph_entry
1.01 -0.1 0.95 perf-profile.self.cycles-pp._raw_spin_lock
0.78 -0.1 0.73 perf-profile.self.cycles-pp.down_read_trylock
0.36 ± 9% -0.1 0.31 ± 4% perf-profile.self.cycles-pp.current_kernel_time64
0.28 -0.0 0.23 ± 2% perf-profile.self.cycles-pp.__do_fault
1.04 -0.0 1.00 perf-profile.self.cycles-pp.find_lock_entry
0.30 -0.0 0.28 ± 3% perf-profile.self.cycles-pp.fault_dirty_shared_page
0.70 -0.0 0.67 perf-profile.self.cycles-pp.prepare_ftrace_return
0.44 -0.0 0.42 perf-profile.self.cycles-pp.do_page_fault
0.16 -0.0 0.14 perf-profile.self.cycles-pp.rcu_all_qs
0.78 -0.0 0.77 perf-profile.self.cycles-pp.shmem_getpage_gfp
0.20 -0.0 0.19 perf-profile.self.cycles-pp._cond_resched
0.50 +0.0 0.51 perf-profile.self.cycles-pp.set_page_dirty
0.93 +0.0 0.95 perf-profile.self.cycles-pp.vmacache_find
0.36 ± 2% +0.0 0.38 perf-profile.self.cycles-pp.__might_sleep
0.47 +0.0 0.50 perf-profile.self.cycles-pp.__mod_node_page_state
0.17 +0.0 0.19 ± 2% perf-profile.self.cycles-pp.__unlock_page_memcg
2.34 +0.0 2.38 perf-profile.self.cycles-pp.unmap_page_range
0.78 ± 3% +0.1 0.85 ± 2% perf-profile.self.cycles-pp.lock_page_memcg
2.17 +0.1 2.24 perf-profile.self.cycles-pp.__do_page_fault
0.00 +0.2 0.16 ± 3% perf-profile.self.cycles-pp.__vm_normal_page
1.00 +0.2 1.24 perf-profile.self.cycles-pp.___might_sleep
0.00 +0.7 0.70 perf-profile.self.cycles-pp.pte_map_lock
0.00 +1.4 1.42 ± 2% perf-profile.self.cycles-pp.handle_pte_fault
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/context_switch1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 dmesg.WARNING:at#for_ip_interrupt_entry/0x
2:3 -67% :3 kmsg.pstore:crypto_comp_decompress_failed,ret=
2:3 -67% :3 kmsg.pstore:decompression_failed
%stddev %change %stddev
\ | \
224431 -1.3% 221567 will-it-scale.per_process_ops
237006 -2.2% 231907 will-it-scale.per_thread_ops
1.601e+09 ± 29% -46.9% 8.501e+08 ± 12% will-it-scale.time.involuntary_context_switches
5429 -1.6% 5344 will-it-scale.time.user_time
88596221 -1.7% 87067269 will-it-scale.workload
6863 ± 6% -9.7% 6200 boot-time.idle
144908 ± 40% -66.8% 48173 ± 93% meminfo.CmaFree
0.00 ± 70% +0.0 0.00 mpstat.cpu.iowait%
448336 ± 14% -34.8% 292125 ± 3% turbostat.C1
7684 ± 6% -9.5% 6957 uptime.idle
1.601e+09 ± 29% -46.9% 8.501e+08 ± 12% time.involuntary_context_switches
5429 -1.6% 5344 time.user_time
44013162 -1.7% 43243125 vmstat.system.cs
207684 -1.1% 205485 vmstat.system.in
2217033 ± 15% -15.8% 1866876 ± 2% cpuidle.C1.time
451218 ± 14% -34.7% 294841 ± 2% cpuidle.C1.usage
24839 ± 10% -19.9% 19896 cpuidle.POLL.time
7656 ± 11% -38.9% 4676 ± 8% cpuidle.POLL.usage
5.48 ± 49% -67.3% 1.79 ±100% irq_exception_noise.__do_page_fault.95th
9.46 ± 21% -58.2% 3.95 ± 64% irq_exception_noise.__do_page_fault.99th
35.67 ± 8% +1394.4% 533.00 ± 96% irq_exception_noise.irq_nr
52109 ± 3% -16.0% 43784 ± 4% irq_exception_noise.softirq_time
36226 ± 40% -66.7% 12048 ± 93% proc-vmstat.nr_free_cma
25916 -1.0% 25659 proc-vmstat.nr_slab_reclaimable
16279 ± 8% +2646.1% 447053 ± 82% proc-vmstat.pgalloc_movable
2231117 -18.4% 1820828 ± 20% proc-vmstat.pgalloc_normal
1109316 ± 46% -86.9% 145207 ±109% numa-numastat.node1.local_node
1114700 ± 45% -84.5% 172877 ± 85% numa-numastat.node1.numa_hit
5523 ±140% +402.8% 27768 ± 39% numa-numastat.node1.other_node
29013 ± 29% +3048.1% 913379 ± 73% numa-numastat.node3.local_node
65032 ± 13% +1335.1% 933270 ± 70% numa-numastat.node3.numa_hit
36018 -44.8% 19897 ± 75% numa-numastat.node3.other_node
12.79 ± 21% +7739.1% 1002 ±136% sched_debug.cpu.cpu_load[1].max
1.82 ± 10% +3901.1% 72.92 ±135% sched_debug.cpu.cpu_load[1].stddev
1.71 ± 4% +5055.8% 88.08 ±137% sched_debug.cpu.cpu_load[2].stddev
12.33 ± 23% +9061.9% 1129 ±139% sched_debug.cpu.cpu_load[3].max
1.78 ± 10% +4514.8% 82.18 ±137% sched_debug.cpu.cpu_load[3].stddev
4692 ± 72% +154.5% 11945 ± 29% sched_debug.cpu.max_idle_balance_cost.stddev
23979 -8.3% 21983 slabinfo.kmalloc-96.active_objs
1358 ± 6% -17.9% 1114 ± 3% slabinfo.nsproxy.active_objs
1358 ± 6% -17.9% 1114 ± 3% slabinfo.nsproxy.num_objs
15229 +12.4% 17119 slabinfo.pde_opener.active_objs
15229 +12.4% 17119 slabinfo.pde_opener.num_objs
59541 ± 8% -10.1% 53537 ± 8% slabinfo.vm_area_struct.active_objs
59612 ± 8% -10.1% 53604 ± 8% slabinfo.vm_area_struct.num_objs
4.163e+13 -1.4% 4.105e+13 perf-stat.branch-instructions
6.537e+11 -1.2% 6.459e+11 perf-stat.branch-misses
2.667e+10 -1.7% 2.621e+10 perf-stat.context-switches
1.21 +1.3% 1.22 perf-stat.cpi
150508 -9.8% 135825 ± 3% perf-stat.cpu-migrations
5.75 ± 33% +5.4 11.11 ± 26% perf-stat.iTLB-load-miss-rate%
3.619e+09 ± 36% +100.9% 7.272e+09 ± 30% perf-stat.iTLB-load-misses
2.089e+14 -1.3% 2.062e+14 perf-stat.instructions
64607 ± 29% -50.5% 31964 ± 37% perf-stat.instructions-per-iTLB-miss
0.83 -1.3% 0.82 perf-stat.ipc
3972 ± 4% -14.7% 3388 ± 8% numa-meminfo.node0.PageTables
207919 ± 25% -57.2% 88989 ± 74% numa-meminfo.node1.Active
207715 ± 26% -57.3% 88785 ± 74% numa-meminfo.node1.Active(anon)
356529 -34.3% 234069 ± 2% numa-meminfo.node1.FilePages
789129 ± 5% -19.8% 633161 ± 12% numa-meminfo.node1.MemUsed
34777 ± 8% -48.2% 18010 ± 30% numa-meminfo.node1.SReclaimable
69641 ± 4% -20.7% 55250 ± 12% numa-meminfo.node1.SUnreclaim
125526 ± 4% -96.3% 4602 ± 41% numa-meminfo.node1.Shmem
104419 -29.8% 73261 ± 16% numa-meminfo.node1.Slab
103661 ± 17% -72.0% 29029 ± 99% numa-meminfo.node2.Active
103661 ± 17% -72.2% 28829 ±101% numa-meminfo.node2.Active(anon)
103564 ± 18% -72.0% 29007 ±100% numa-meminfo.node2.AnonPages
671654 ± 7% -14.6% 573598 ± 4% numa-meminfo.node2.MemUsed
44206 ±127% +301.4% 177465 ± 42% numa-meminfo.node3.Active
44206 ±127% +301.0% 177263 ± 42% numa-meminfo.node3.Active(anon)
8738 +12.2% 9805 ± 8% numa-meminfo.node3.KernelStack
603605 ± 9% +27.8% 771554 ± 14% numa-meminfo.node3.MemUsed
14438 ± 6% +122.9% 32181 ± 42% numa-meminfo.node3.SReclaimable
2786 ±137% +3302.0% 94792 ± 71% numa-meminfo.node3.Shmem
71461 ± 7% +45.2% 103771 ± 29% numa-meminfo.node3.Slab
247197 ± 4% -7.8% 227843 numa-meminfo.node3.Unevictable
991.67 ± 4% -14.7% 846.00 ± 8% numa-vmstat.node0.nr_page_table_pages
51926 ± 26% -57.3% 22196 ± 74% numa-vmstat.node1.nr_active_anon
89137 -34.4% 58516 ± 2% numa-vmstat.node1.nr_file_pages
1679 ± 5% -10.8% 1498 ± 4% numa-vmstat.node1.nr_mapped
31386 ± 4% -96.3% 1150 ± 41% numa-vmstat.node1.nr_shmem
8694 ± 8% -48.2% 4502 ± 30% numa-vmstat.node1.nr_slab_reclaimable
17410 ± 4% -20.7% 13812 ± 12% numa-vmstat.node1.nr_slab_unreclaimable
51926 ± 26% -57.3% 22196 ± 74% numa-vmstat.node1.nr_zone_active_anon
1037174 ± 24% -57.0% 446205 ± 35% numa-vmstat.node1.numa_hit
961611 ± 26% -65.8% 328687 ± 50% numa-vmstat.node1.numa_local
75563 ± 44% +55.5% 117517 ± 9% numa-vmstat.node1.numa_other
25914 ± 17% -72.2% 7206 ±101% numa-vmstat.node2.nr_active_anon
25891 ± 18% -72.0% 7251 ±100% numa-vmstat.node2.nr_anon_pages
25914 ± 17% -72.2% 7206 ±101% numa-vmstat.node2.nr_zone_active_anon
11051 ±127% +301.0% 44309 ± 42% numa-vmstat.node3.nr_active_anon
36227 ± 40% -66.7% 12049 ± 93% numa-vmstat.node3.nr_free_cma
0.33 ±141% +25000.0% 83.67 ± 81% numa-vmstat.node3.nr_inactive_file
8739 +12.2% 9806 ± 8% numa-vmstat.node3.nr_kernel_stack
696.67 ±137% +3299.7% 23684 ± 71% numa-vmstat.node3.nr_shmem
3609 ± 6% +122.9% 8044 ± 42% numa-vmstat.node3.nr_slab_reclaimable
61799 ± 4% -7.8% 56960 numa-vmstat.node3.nr_unevictable
11053 ±127% +301.4% 44361 ± 42% numa-vmstat.node3.nr_zone_active_anon
0.33 ±141% +25000.0% 83.67 ± 81% numa-vmstat.node3.nr_zone_inactive_file
61799 ± 4% -7.8% 56960 numa-vmstat.node3.nr_zone_unevictable
217951 ± 8% +280.8% 829976 ± 65% numa-vmstat.node3.numa_hit
91303 ± 19% +689.3% 720647 ± 77% numa-vmstat.node3.numa_local
126648 -13.7% 109329 ± 13% numa-vmstat.node3.numa_other
8.54 -0.1 8.40 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
5.04 -0.1 4.94 perf-profile.calltrace.cycles-pp.__switch_to.read
3.43 -0.1 3.35 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
2.77 -0.1 2.72 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
1.99 -0.0 1.94 perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.ksys_read
0.60 ± 2% -0.0 0.57 ± 2% perf-profile.calltrace.cycles-pp.find_next_bit.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up
0.81 -0.0 0.78 perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.78 +0.0 0.80 perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.73 +0.0 0.75 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.92 +0.0 0.95 perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
2.11 +0.0 2.15 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.00 -0.1 6.86 perf-profile.children.cycles-pp.syscall_return_via_sysret
5.26 -0.1 5.14 perf-profile.children.cycles-pp.__switch_to
5.65 -0.1 5.56 perf-profile.children.cycles-pp.reweight_entity
2.17 -0.1 2.12 perf-profile.children.cycles-pp.copy_page_to_iter
2.94 -0.0 2.90 perf-profile.children.cycles-pp.update_cfs_group
3.11 -0.0 3.07 perf-profile.children.cycles-pp.pick_next_task_fair
2.59 -0.0 2.55 perf-profile.children.cycles-pp.load_new_mm_cr3
1.92 -0.0 1.88 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.11 -0.0 1.08 ± 2% perf-profile.children.cycles-pp.find_next_bit
0.59 -0.0 0.56 perf-profile.children.cycles-pp.finish_task_switch
0.14 ± 15% -0.0 0.11 ± 16% perf-profile.children.cycles-pp.write@plt
1.21 -0.0 1.18 perf-profile.children.cycles-pp.set_next_entity
0.85 -0.0 0.82 perf-profile.children.cycles-pp.___perf_sw_event
0.13 ± 3% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.timespec_trunc
0.47 ± 2% -0.0 0.45 perf-profile.children.cycles-pp.anon_pipe_buf_release
0.38 ± 2% -0.0 0.36 perf-profile.children.cycles-pp.file_update_time
0.74 -0.0 0.73 perf-profile.children.cycles-pp.copyout
0.41 ± 2% -0.0 0.39 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.32 -0.0 0.30 perf-profile.children.cycles-pp.__x64_sys_read
0.14 -0.0 0.12 ± 3% perf-profile.children.cycles-pp.current_kernel_time64
0.91 +0.0 0.92 perf-profile.children.cycles-pp.touch_atime
0.40 +0.0 0.41 perf-profile.children.cycles-pp._cond_resched
0.18 ± 2% +0.0 0.20 perf-profile.children.cycles-pp.activate_task
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.default_wake_function
0.24 +0.0 0.27 ± 3% perf-profile.children.cycles-pp.rcu_all_qs
0.60 ± 2% +0.0 0.64 ± 2% perf-profile.children.cycles-pp.update_min_vruntime
0.42 ± 4% +0.0 0.46 ± 4% perf-profile.children.cycles-pp.probe_sched_switch
1.33 +0.0 1.38 perf-profile.children.cycles-pp.__fget_light
0.53 ± 2% +0.1 0.58 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.31 +0.1 0.36 ± 2% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
4.35 +0.1 4.41 perf-profile.children.cycles-pp.switch_mm_irqs_off
2.52 +0.1 2.58 perf-profile.children.cycles-pp.selinux_file_permission
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.hrtick_update
7.00 -0.1 6.86 perf-profile.self.cycles-pp.syscall_return_via_sysret
5.26 -0.1 5.14 perf-profile.self.cycles-pp.__switch_to
0.29 -0.1 0.19 ± 2% perf-profile.self.cycles-pp.ksys_read
1.49 -0.1 1.43 perf-profile.self.cycles-pp.dequeue_task_fair
2.41 -0.1 2.35 perf-profile.self.cycles-pp.__schedule
1.46 -0.0 1.41 perf-profile.self.cycles-pp.select_task_rq_fair
2.94 -0.0 2.90 perf-profile.self.cycles-pp.update_cfs_group
0.44 -0.0 0.40 perf-profile.self.cycles-pp.dequeue_entity
0.48 -0.0 0.44 perf-profile.self.cycles-pp.finish_task_switch
2.59 -0.0 2.55 perf-profile.self.cycles-pp.load_new_mm_cr3
1.11 -0.0 1.08 ± 2% perf-profile.self.cycles-pp.find_next_bit
1.91 -0.0 1.88 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.78 -0.0 0.75 perf-profile.self.cycles-pp.___perf_sw_event
0.14 ± 15% -0.0 0.11 ± 16% perf-profile.self.cycles-pp.write@plt
0.37 -0.0 0.35 ± 2% perf-profile.self.cycles-pp.__wake_up_common_lock
0.20 ± 2% -0.0 0.17 ± 2% perf-profile.self.cycles-pp.__fdget_pos
0.47 ± 2% -0.0 0.44 perf-profile.self.cycles-pp.anon_pipe_buf_release
0.87 -0.0 0.85 perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.13 ± 3% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.timespec_trunc
0.41 ± 2% -0.0 0.39 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.38 -0.0 0.36 perf-profile.self.cycles-pp.__wake_up_common
0.32 -0.0 0.30 perf-profile.self.cycles-pp.__x64_sys_read
0.14 ± 3% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.current_kernel_time64
0.30 -0.0 0.28 perf-profile.self.cycles-pp.set_next_entity
0.28 ± 3% +0.0 0.30 perf-profile.self.cycles-pp._cond_resched
0.18 ± 2% +0.0 0.20 perf-profile.self.cycles-pp.activate_task
0.17 ± 2% +0.0 0.19 perf-profile.self.cycles-pp.__might_fault
0.05 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.default_wake_function
0.17 ± 2% +0.0 0.20 perf-profile.self.cycles-pp.ttwu_do_activate
0.66 +0.0 0.69 perf-profile.self.cycles-pp.write
0.24 +0.0 0.27 ± 3% perf-profile.self.cycles-pp.rcu_all_qs
0.67 +0.0 0.70 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.60 ± 2% +0.0 0.64 ± 2% perf-profile.self.cycles-pp.update_min_vruntime
0.42 ± 4% +0.0 0.46 ± 4% perf-profile.self.cycles-pp.probe_sched_switch
1.33 +0.0 1.37 perf-profile.self.cycles-pp.__fget_light
1.61 +0.0 1.66 perf-profile.self.cycles-pp.pipe_read
0.53 ± 2% +0.1 0.58 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.31 +0.1 0.36 ± 2% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
1.04 +0.1 1.11 perf-profile.self.cycles-pp.pipe_write
0.00 +0.1 0.07 ± 11% perf-profile.self.cycles-pp.hrtick_update
2.00 +0.1 2.08 perf-profile.self.cycles-pp.switch_mm_irqs_off
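For anyone scripting over these reports, the per-metric comparison rows above follow a fixed shape: base value, optional "± N%" stddev, signed percent change, head value, optional stddev, metric name. A minimal parsing sketch is below; the `parse_row` helper and its regex are illustrative assumptions, not part of the lkp tooling:

```python
import re

# Matches one lkp comparison row, e.g.:
#   "224431   -1.3%   221567   will-it-scale.per_process_ops"
#   "1358 ± 6%  -17.9%  1114 ± 3%  slabinfo.nsproxy.active_objs"
# The "± N%" stddev annotations after either value are optional and skipped.
ROW = re.compile(
    r"^\s*([0-9.e+]+)\s+(?:±\s*\d+%\s+)?"   # base value, optional stddev
    r"([+-][0-9.]+)%\s+"                    # signed percent change
    r"([0-9.e+]+)\s+(?:±\s*\d+%\s+)?"       # head value, optional stddev
    r"(\S+)\s*$"                            # metric name
)

def parse_row(line):
    """Return (base, change_pct, head, metric) or None for non-data lines."""
    m = ROW.match(line)
    if not m:
        return None
    base, change, head, metric = m.groups()
    return float(base), float(change), float(head), metric

print(parse_row("   224431   -1.3%   221567   will-it-scale.per_process_ops"))
```

Rows using the `fail:runs %reproduction fail:runs` layout (e.g. the dmesg/kmsg lines) intentionally fall through to `None`, as do the `====` separators and header lines.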
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/page_fault3/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:3 -33% :3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=file_update_time/0x
:3 33% 1:3 stderr.mount.nfs:Connection_timed_out
34:3 -401% 22:3 perf-profile.calltrace.cycles-pp.error_entry.testcase
17:3 -207% 11:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.testcase
34:3 -404% 22:3 perf-profile.children.cycles-pp.error_entry
0:3 -2% 0:3 perf-profile.children.cycles-pp.error_exit
16:3 -196% 11:3 perf-profile.self.cycles-pp.error_entry
0:3 -2% 0:3 perf-profile.self.cycles-pp.error_exit
%stddev %change %stddev
\ | \
467454 -1.8% 459251 will-it-scale.per_process_ops
10856 ± 4% -23.1% 8344 ± 7% will-it-scale.per_thread_ops
118134 ± 2% +11.7% 131943 will-it-scale.time.involuntary_context_switches
6.277e+08 ± 4% -23.1% 4.827e+08 ± 7% will-it-scale.time.minor_page_faults
7406 +5.8% 7839 will-it-scale.time.percent_of_cpu_this_job_got
44526 +5.8% 47106 will-it-scale.time.system_time
7351468 ± 5% -18.3% 6009014 ± 7% will-it-scale.time.voluntary_context_switches
91835846 -2.2% 89778599 will-it-scale.workload
2534640 +4.3% 2643005 ± 2% interrupts.CAL:Function_call_interrupts
2819 ± 5% +22.9% 3464 ± 18% kthread_noise.total_time
30273 ± 4% -12.7% 26415 ± 5% vmstat.system.cs
1.52 ± 2% +15.2% 1.75 ± 2% irq_exception_noise.__do_page_fault.99th
296.67 ± 12% -36.7% 187.67 ± 12% irq_exception_noise.softirq_time
230900 ± 3% +30.3% 300925 ± 3% meminfo.Inactive
230184 ± 3% +30.4% 300180 ± 3% meminfo.Inactive(anon)
11.62 ± 3% -2.2 9.40 ± 5% mpstat.cpu.idle%
0.00 ± 14% -0.0 0.00 ± 4% mpstat.cpu.iowait%
7992174 -11.1% 7101976 ± 3% softirqs.RCU
4973624 ± 2% -12.9% 4333370 ± 2% softirqs.SCHED
118134 ± 2% +11.7% 131943 time.involuntary_context_switches
6.277e+08 ± 4% -23.1% 4.827e+08 ± 7% time.minor_page_faults
7406 +5.8% 7839 time.percent_of_cpu_this_job_got
44526 +5.8% 47106 time.system_time
7351468 ± 5% -18.3% 6009014 ± 7% time.voluntary_context_switches
2.702e+09 ± 5% -16.7% 2.251e+09 ± 7% cpuidle.C1E.time
6834329 ± 5% -15.8% 5756243 ± 7% cpuidle.C1E.usage
1.046e+10 ± 3% -19.8% 8.389e+09 ± 4% cpuidle.C6.time
13961845 ± 3% -19.3% 11265555 ± 4% cpuidle.C6.usage
1309307 ± 7% -14.8% 1116168 ± 8% cpuidle.POLL.time
19774 ± 6% -13.7% 17063 ± 7% cpuidle.POLL.usage
2523 ± 4% -11.1% 2243 ± 4% slabinfo.biovec-64.active_objs
2523 ± 4% -11.1% 2243 ± 4% slabinfo.biovec-64.num_objs
2610 ± 8% -33.7% 1731 ± 22% slabinfo.dmaengine-unmap-16.active_objs
2610 ± 8% -33.7% 1731 ± 22% slabinfo.dmaengine-unmap-16.num_objs
5118 ± 17% -22.6% 3962 ± 9% slabinfo.eventpoll_pwq.active_objs
5118 ± 17% -22.6% 3962 ± 9% slabinfo.eventpoll_pwq.num_objs
4583 ± 3% -14.0% 3941 ± 4% slabinfo.sock_inode_cache.active_objs
4583 ± 3% -14.0% 3941 ± 4% slabinfo.sock_inode_cache.num_objs
1933 +2.6% 1984 turbostat.Avg_MHz
6832021 ± 5% -15.8% 5754156 ± 7% turbostat.C1E
2.32 ± 5% -0.4 1.94 ± 7% turbostat.C1E%
13954211 ± 3% -19.3% 11259436 ± 4% turbostat.C6
8.97 ± 3% -1.8 7.20 ± 4% turbostat.C6%
6.18 ± 4% -17.1% 5.13 ± 5% turbostat.CPU%c1
5.12 ± 3% -21.7% 4.01 ± 4% turbostat.CPU%c6
1.76 ± 2% -34.7% 1.15 ± 2% turbostat.Pkg%pc2
57314 ± 4% +30.4% 74717 ± 4% proc-vmstat.nr_inactive_anon
57319 ± 4% +30.4% 74719 ± 4% proc-vmstat.nr_zone_inactive_anon
24415 ± 19% -62.2% 9236 ± 7% proc-vmstat.numa_hint_faults
69661453 -1.8% 68405712 proc-vmstat.numa_hit
69553390 -1.8% 68297790 proc-vmstat.numa_local
8792 ± 29% -92.6% 654.33 ± 23% proc-vmstat.numa_pages_migrated
40251 ± 32% -76.5% 9474 ± 3% proc-vmstat.numa_pte_updates
69522532 -1.6% 68383074 proc-vmstat.pgalloc_normal
2.762e+10 -2.2% 2.701e+10 proc-vmstat.pgfault
68825100 -1.5% 67772256 proc-vmstat.pgfree
8792 ± 29% -92.6% 654.33 ± 23% proc-vmstat.pgmigrate_success
57992 ± 6% +56.2% 90591 ± 3% numa-meminfo.node0.Inactive
57916 ± 6% +56.3% 90513 ± 3% numa-meminfo.node0.Inactive(anon)
37285 ± 12% +36.0% 50709 ± 5% numa-meminfo.node0.SReclaimable
110971 ± 8% +22.7% 136209 ± 8% numa-meminfo.node0.Slab
23601 ± 55% +559.5% 155651 ± 36% numa-meminfo.node1.AnonPages
62484 ± 12% +17.5% 73417 ± 3% numa-meminfo.node1.Inactive
62323 ± 12% +17.2% 73023 ± 4% numa-meminfo.node1.Inactive(anon)
109714 ± 63% -85.6% 15832 ± 96% numa-meminfo.node2.AnonPages
52236 ± 13% +22.7% 64074 ± 3% numa-meminfo.node2.Inactive
51922 ± 12% +23.2% 63963 ± 3% numa-meminfo.node2.Inactive(anon)
60241 ± 11% +21.9% 73442 ± 8% numa-meminfo.node3.Inactive
60077 ± 12% +22.0% 73279 ± 8% numa-meminfo.node3.Inactive(anon)
14093 ± 6% +55.9% 21977 ± 3% numa-vmstat.node0.nr_inactive_anon
9321 ± 12% +36.0% 12675 ± 5% numa-vmstat.node0.nr_slab_reclaimable
14090 ± 6% +56.0% 21977 ± 3% numa-vmstat.node0.nr_zone_inactive_anon
5900 ± 55% +559.4% 38909 ± 36% numa-vmstat.node1.nr_anon_pages
15413 ± 12% +14.8% 17688 ± 4% numa-vmstat.node1.nr_inactive_anon
15413 ± 12% +14.8% 17688 ± 4% numa-vmstat.node1.nr_zone_inactive_anon
27430 ± 63% -85.6% 3960 ± 96% numa-vmstat.node2.nr_anon_pages
12928 ± 12% +20.0% 15508 ± 3% numa-vmstat.node2.nr_inactive_anon
12927 ± 12% +20.0% 15507 ± 3% numa-vmstat.node2.nr_zone_inactive_anon
6229 ± 10% +117.5% 13547 ± 44% numa-vmstat.node3
14669 ± 11% +19.6% 17537 ± 7% numa-vmstat.node3.nr_inactive_anon
14674 ± 11% +19.5% 17541 ± 7% numa-vmstat.node3.nr_zone_inactive_anon
24617 ±141% -100.0% 0.00 latency_stats.avg.io_schedule.nfs_lock_and_join_requests.nfs_updatepage.nfs_write_end.generic_perform_write.nfs_file_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
5049 ±105% -99.4% 28.33 ± 82% latency_stats.avg.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
152457 ± 27% +233.6% 508656 ± 92% latency_stats.avg.max
0.00 +3.9e+107% 390767 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat
24617 ±141% -100.0% 0.00 latency_stats.max.io_schedule.nfs_lock_and_join_requests.nfs_updatepage.nfs_write_end.generic_perform_write.nfs_file_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4240 ±141% -100.0% 0.00 latency_stats.max.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
8565 ± 70% -99.1% 80.33 ±115% latency_stats.max.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
204835 ± 6% +457.6% 1142244 ±114% latency_stats.max.max
0.00 +5.1e+105% 5057 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 +1e+108% 995083 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat
13175 ± 4% -100.0% 0.00 latency_stats.sum.io_schedule.__lock_page_or_retry.filemap_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
24617 ±141% -100.0% 0.00 latency_stats.sum.io_schedule.nfs_lock_and_join_requests.nfs_updatepage.nfs_write_end.generic_perform_write.nfs_file_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4260 ±141% -100.0% 0.00 latency_stats.sum.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
8640 ± 70% -97.5% 216.33 ±108% latency_stats.sum.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
6673 ± 89% -92.8% 477.67 ± 74% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
0.00 +4.2e+105% 4228 ±130% latency_stats.sum.io_schedule.__lock_page_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +7.5e+105% 7450 ± 98% latency_stats.sum.io_schedule.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +1.3e+106% 13050 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat.do_filp_open
0.00 +1.5e+110% 1.508e+08 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_openat
0.97 -0.0 0.94 perf-stat.branch-miss-rate%
1.329e+11 -2.6% 1.294e+11 perf-stat.branch-misses
2.254e+11 -1.9% 2.21e+11 perf-stat.cache-references
18308779 ± 4% -12.8% 15969618 ± 5% perf-stat.context-switches
3.20 +1.8% 3.26 perf-stat.cpi
2.233e+14 +2.7% 2.293e+14 perf-stat.cpu-cycles
4.01 -0.2 3.83 perf-stat.dTLB-store-miss-rate%
4.51e+11 -2.2% 4.41e+11 perf-stat.dTLB-store-misses
1.08e+13 +2.6% 1.109e+13 perf-stat.dTLB-stores
3.158e+10 ± 5% +16.8% 3.689e+10 ± 2% perf-stat.iTLB-load-misses
2214 ± 5% -13.8% 1907 ± 2% perf-stat.instructions-per-iTLB-miss
0.31 -1.8% 0.31 perf-stat.ipc
2.762e+10 -2.2% 2.701e+10 perf-stat.minor-faults
1.535e+10 -11.2% 1.362e+10 perf-stat.node-loads
9.75 +1.1 10.89 perf-stat.node-store-miss-rate%
3.012e+09 +10.6% 3.332e+09 ± 2% perf-stat.node-store-misses
2.787e+10 -2.2% 2.725e+10 perf-stat.node-stores
2.762e+10 -2.2% 2.701e+10 perf-stat.page-faults
759458 +3.2% 783404 perf-stat.path-length
246.39 ± 15% -20.4% 196.12 ± 6% sched_debug.cfs_rq:/.load_avg.max
0.21 ± 3% +9.0% 0.23 ± 4% sched_debug.cfs_rq:/.nr_running.stddev
16.64 ± 27% +61.0% 26.79 ± 17% sched_debug.cfs_rq:/.nr_spread_over.max
75.15 -14.4% 64.30 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
178.80 ± 3% +25.4% 224.12 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.avg
1075 ± 5% -12.3% 943.36 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.max
2093630 ± 27% -36.1% 1337941 ± 16% sched_debug.cpu.avg_idle.max
297057 ± 11% +37.8% 409294 ± 14% sched_debug.cpu.avg_idle.min
293240 ± 55% -62.3% 110571 ± 13% sched_debug.cpu.avg_idle.stddev
770075 ± 9% -19.3% 621136 ± 12% sched_debug.cpu.max_idle_balance_cost.max
48919 ± 46% -66.9% 16190 ± 81% sched_debug.cpu.max_idle_balance_cost.stddev
21716 ± 5% -16.8% 18061 ± 7% sched_debug.cpu.nr_switches.min
21519 ± 5% -17.7% 17700 ± 7% sched_debug.cpu.sched_count.min
10586 ± 5% -18.1% 8669 ± 7% sched_debug.cpu.sched_goidle.avg
14183 ± 3% -17.6% 11693 ± 5% sched_debug.cpu.sched_goidle.max
10322 ± 5% -18.6% 8407 ± 7% sched_debug.cpu.sched_goidle.min
400.99 ± 8% -13.0% 348.75 ± 3% sched_debug.cpu.sched_goidle.stddev
5459 ± 8% +10.0% 6006 ± 3% sched_debug.cpu.ttwu_local.avg
8.47 ± 42% +345.8% 37.73 ± 77% sched_debug.rt_rq:/.rt_time.max
0.61 ± 42% +343.0% 2.72 ± 77% sched_debug.rt_rq:/.rt_time.stddev
91.98 -30.9 61.11 ± 70% perf-profile.calltrace.cycles-pp.testcase
9.05 -9.1 0.00 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
8.91 -8.9 0.00 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
8.06 -8.1 0.00 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
7.59 -7.6 0.00 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
7.44 -7.4 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.28 -7.3 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.31 -5.3 0.00 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
8.08 -2.8 5.30 ± 70% perf-profile.calltrace.cycles-pp.native_irq_return_iret.testcase
5.95 -2.1 3.83 ± 70% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
5.95 -2.0 3.93 ± 70% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode.testcase
3.10 -1.1 2.01 ± 70% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault.testcase
2.36 -0.8 1.55 ± 70% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.08 -0.4 0.70 ± 70% perf-profile.calltrace.cycles-pp.do_page_fault.testcase
0.82 -0.3 0.54 ± 70% perf-profile.calltrace.cycles-pp.trace_graph_entry.do_page_fault.testcase
0.77 -0.3 0.50 ± 70% perf-profile.calltrace.cycles-pp.ftrace_graph_caller.__do_page_fault.do_page_fault.page_fault.testcase
0.59 -0.2 0.37 ± 70% perf-profile.calltrace.cycles-pp.down_read_trylock.__do_page_fault.do_page_fault.page_fault.testcase
91.98 -30.9 61.11 ± 70% perf-profile.children.cycles-pp.testcase
9.14 -3.2 5.99 ± 70% perf-profile.children.cycles-pp.__do_fault
8.20 -2.8 5.40 ± 70% perf-profile.children.cycles-pp.shmem_getpage_gfp
8.08 -2.8 5.31 ± 70% perf-profile.children.cycles-pp.native_irq_return_iret
6.08 -2.2 3.92 ± 70% perf-profile.children.cycles-pp.find_get_entry
6.08 -2.1 3.96 ± 70% perf-profile.children.cycles-pp.sync_regs
5.95 -2.0 3.93 ± 70% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
4.12 -1.4 2.73 ± 70% perf-profile.children.cycles-pp.ftrace_graph_caller
3.65 -1.2 2.42 ± 70% perf-profile.children.cycles-pp.prepare_ftrace_return
3.18 -1.1 2.07 ± 70% perf-profile.children.cycles-pp.__perf_sw_event
2.34 -0.8 1.52 ± 70% perf-profile.children.cycles-pp.fault_dirty_shared_page
0.80 -0.3 0.50 ± 70% perf-profile.children.cycles-pp._raw_spin_lock
0.76 -0.3 0.50 ± 70% perf-profile.children.cycles-pp.tlb_flush_mmu_free
0.61 -0.2 0.39 ± 70% perf-profile.children.cycles-pp.down_read_trylock
0.48 ± 2% -0.2 0.28 ± 70% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.26 ± 6% -0.1 0.15 ± 71% perf-profile.children.cycles-pp.ktime_get
0.20 ± 2% -0.1 0.12 ± 70% perf-profile.children.cycles-pp.perf_exclude_event
0.22 ± 2% -0.1 0.13 ± 70% perf-profile.children.cycles-pp._cond_resched
0.17 -0.1 0.11 ± 70% perf-profile.children.cycles-pp.page_rmapping
0.13 -0.1 0.07 ± 70% perf-profile.children.cycles-pp.rcu_all_qs
0.07 -0.0 0.04 ± 70% perf-profile.children.cycles-pp.ftrace_lookup_ip
22.36 -7.8 14.59 ± 70% perf-profile.self.cycles-pp.testcase
8.08 -2.8 5.31 ± 70% perf-profile.self.cycles-pp.native_irq_return_iret
6.08 -2.1 3.96 ± 70% perf-profile.self.cycles-pp.sync_regs
5.81 -2.0 3.84 ± 70% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.27 -1.6 1.65 ± 70% perf-profile.self.cycles-pp.__handle_mm_fault
3.79 -1.4 2.36 ± 70% perf-profile.self.cycles-pp.find_get_entry
3.80 -1.3 2.53 ± 70% perf-profile.self.cycles-pp.trace_graph_entry
1.10 -0.5 0.57 ± 70% perf-profile.self.cycles-pp.alloc_set_pte
1.24 -0.4 0.81 ± 70% perf-profile.self.cycles-pp.shmem_fault
0.80 -0.3 0.50 ± 70% perf-profile.self.cycles-pp._raw_spin_lock
0.81 -0.3 0.51 ± 70% perf-profile.self.cycles-pp.find_lock_entry
0.80 ± 2% -0.3 0.51 ± 70% perf-profile.self.cycles-pp.__perf_sw_event
0.61 -0.2 0.38 ± 70% perf-profile.self.cycles-pp.down_read_trylock
0.60 -0.2 0.39 ± 70% perf-profile.self.cycles-pp.shmem_getpage_gfp
0.48 -0.2 0.27 ± 70% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.47 -0.2 0.30 ± 70% perf-profile.self.cycles-pp.file_update_time
0.34 -0.1 0.22 ± 70% perf-profile.self.cycles-pp.do_page_fault
0.22 ± 4% -0.1 0.11 ± 70% perf-profile.self.cycles-pp.__do_fault
0.25 ± 5% -0.1 0.14 ± 71% perf-profile.self.cycles-pp.ktime_get
0.21 ± 2% -0.1 0.12 ± 70% perf-profile.self.cycles-pp.finish_fault
0.23 ± 2% -0.1 0.14 ± 70% perf-profile.self.cycles-pp.fault_dirty_shared_page
0.22 ± 2% -0.1 0.14 ± 70% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.20 ± 2% -0.1 0.12 ± 70% perf-profile.self.cycles-pp.perf_exclude_event
0.16 -0.1 0.10 ± 70% perf-profile.self.cycles-pp._cond_resched
0.13 -0.1 0.07 ± 70% perf-profile.self.cycles-pp.rcu_all_qs
0.07 -0.0 0.04 ± 70% perf-profile.self.cycles-pp.ftrace_lookup_ip
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/context_switch1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:3 33% 1:3 dmesg.WARNING:at#for_ip_ret_from_intr/0x
:3 67% 2:3 kmsg.pstore:crypto_comp_decompress_failed,ret=
:3 67% 2:3 kmsg.pstore:decompression_failed
%stddev %change %stddev
\ | \
223910 -1.3% 220930 will-it-scale.per_process_ops
233722 -1.0% 231288 will-it-scale.per_thread_ops
6.001e+08 ± 13% +31.4% 7.887e+08 ± 4% will-it-scale.time.involuntary_context_switches
18003 ± 4% +10.9% 19956 will-it-scale.time.minor_page_faults
1.29e+10 -2.5% 1.258e+10 will-it-scale.time.voluntary_context_switches
87865617 -1.2% 86826277 will-it-scale.workload
2880329 ± 2% +5.4% 3034904 interrupts.CAL:Function_call_interrupts
7695018 -23.3% 5905066 ± 8% meminfo.DirectMap2M
0.00 ± 39% -0.0 0.00 ± 78% mpstat.cpu.iowait%
4621 ± 12% +13.4% 5241 proc-vmstat.numa_hint_faults_local
715714 +27.6% 913142 ± 13% softirqs.SCHED
515653 ± 6% -20.0% 412650 ± 15% turbostat.C1
43643516 -1.2% 43127031 vmstat.system.cs
2893393 ± 4% -23.6% 2210524 ± 10% cpuidle.C1.time
518051 ± 6% -19.9% 415081 ± 15% cpuidle.C1.usage
23.10 +22.9% 28.38 ± 9% boot-time.boot
18.38 +23.2% 22.64 ± 12% boot-time.dhcp
5216 +5.0% 5478 ± 2% boot-time.idle
963.76 ± 44% +109.7% 2021 ± 34% irq_exception_noise.__do_page_fault.sum
6.33 ± 14% +726.3% 52.33 ± 62% irq_exception_noise.irq_time
56524 ± 7% -18.8% 45915 ± 4% irq_exception_noise.softirq_time
6.001e+08 ± 13% +31.4% 7.887e+08 ± 4% time.involuntary_context_switches
18003 ± 4% +10.9% 19956 time.minor_page_faults
1.29e+10 -2.5% 1.258e+10 time.voluntary_context_switches
1386 ± 7% +15.4% 1600 ± 11% slabinfo.scsi_sense_cache.active_objs
1386 ± 7% +15.4% 1600 ± 11% slabinfo.scsi_sense_cache.num_objs
1427 ± 5% -8.9% 1299 ± 2% slabinfo.task_group.active_objs
1427 ± 5% -8.9% 1299 ± 2% slabinfo.task_group.num_objs
65519 ± 12% +20.6% 79014 ± 16% numa-meminfo.node0.SUnreclaim
8484 -11.9% 7475 ± 7% numa-meminfo.node1.KernelStack
9264 ± 26% -33.7% 6146 ± 7% numa-meminfo.node1.Mapped
2138 ± 61% +373.5% 10127 ± 92% numa-meminfo.node3.Inactive
2059 ± 61% +387.8% 10046 ± 93% numa-meminfo.node3.Inactive(anon)
16379 ± 12% +20.6% 19752 ± 16% numa-vmstat.node0.nr_slab_unreclaimable
8483 -11.9% 7474 ± 7% numa-vmstat.node1.nr_kernel_stack
6250 ± 29% -42.8% 3575 ± 24% numa-vmstat.node2
3798 ± 17% +63.7% 6218 ± 5% numa-vmstat.node3
543.00 ± 61% +368.1% 2541 ± 91% numa-vmstat.node3.nr_inactive_anon
543.33 ± 61% +367.8% 2541 ± 91% numa-vmstat.node3.nr_zone_inactive_anon
4.138e+13 -1.1% 4.09e+13 perf-stat.branch-instructions
6.569e+11 -2.0% 6.441e+11 perf-stat.branch-misses
2.645e+10 -1.2% 2.613e+10 perf-stat.context-switches
1.21 +1.2% 1.23 perf-stat.cpi
153343 ± 2% -12.1% 134776 perf-stat.cpu-migrations
5.966e+13 -1.3% 5.889e+13 perf-stat.dTLB-loads
3.736e+13 -1.2% 3.69e+13 perf-stat.dTLB-stores
5.85 ± 15% +8.8 14.67 ± 9% perf-stat.iTLB-load-miss-rate%
3.736e+09 ± 17% +161.3% 9.76e+09 ± 11% perf-stat.iTLB-load-misses
5.987e+10 -5.4% 5.667e+10 perf-stat.iTLB-loads
2.079e+14 -1.2% 2.054e+14 perf-stat.instructions
57547 ± 18% -62.9% 21340 ± 11% perf-stat.instructions-per-iTLB-miss
0.82 -1.2% 0.81 perf-stat.ipc
27502531 ± 8% +9.5% 30122136 ± 3% perf-stat.node-store-misses
1449 ± 27% -34.6% 948.85 sched_debug.cfs_rq:/.load.min
319416 ±115% -188.5% -282549 sched_debug.cfs_rq:/.spread0.avg
657044 ± 55% -88.3% 76887 ± 23% sched_debug.cfs_rq:/.spread0.max
-1525243 +54.6% -2357898 sched_debug.cfs_rq:/.spread0.min
101614 ± 6% +30.6% 132713 ± 19% sched_debug.cpu.avg_idle.stddev
11.54 ± 41% -61.2% 4.48 sched_debug.cpu.cpu_load[1].avg
1369 ± 67% -98.5% 20.67 ± 48% sched_debug.cpu.cpu_load[1].max
99.29 ± 67% -97.6% 2.35 ± 26% sched_debug.cpu.cpu_load[1].stddev
9.58 ± 38% -55.2% 4.29 sched_debug.cpu.cpu_load[2].avg
1024 ± 68% -98.5% 15.27 ± 36% sched_debug.cpu.cpu_load[2].max
74.51 ± 67% -97.3% 1.99 ± 15% sched_debug.cpu.cpu_load[2].stddev
7.37 ± 29% -42.0% 4.28 sched_debug.cpu.cpu_load[3].avg
600.58 ± 68% -97.9% 12.48 ± 20% sched_debug.cpu.cpu_load[3].max
43.98 ± 66% -95.8% 1.83 ± 5% sched_debug.cpu.cpu_load[3].stddev
5.95 ± 19% -28.1% 4.28 sched_debug.cpu.cpu_load[4].avg
325.39 ± 67% -96.4% 11.67 ± 10% sched_debug.cpu.cpu_load[4].max
24.19 ± 65% -92.5% 1.81 ± 3% sched_debug.cpu.cpu_load[4].stddev
907.23 ± 4% -14.1% 779.70 ± 10% sched_debug.cpu.nr_load_updates.stddev
0.00 ± 83% +122.5% 0.00 sched_debug.rt_rq:/.rt_time.min
8.49 ± 2% -0.3 8.21 ± 2% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
57.28 -0.3 57.01 perf-profile.calltrace.cycles-pp.read
5.06 -0.2 4.85 perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
4.98 -0.2 4.78 perf-profile.calltrace.cycles-pp.__switch_to.read
3.55 -0.2 3.39 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.read
2.72 -0.1 2.60 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
2.67 -0.1 2.57 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
3.40 -0.1 3.31 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
3.77 -0.1 3.68 perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.95 -0.1 1.88 perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.ksys_read
2.19 -0.1 2.13 perf-profile.calltrace.cycles-pp.__switch_to_asm.read
1.30 -0.1 1.25 perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
1.27 -0.1 1.22 ± 2% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
2.29 -0.0 2.24 perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.pipe_wait
0.96 -0.0 0.92 perf-profile.calltrace.cycles-pp.__calc_delta.update_curr.reweight_entity.dequeue_task_fair.__schedule
0.85 -0.0 0.81 ± 3% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
1.63 -0.0 1.59 perf-profile.calltrace.cycles-pp.native_write_msr.read
0.72 -0.0 0.69 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.__vfs_read.vfs_read
0.65 ± 2% -0.0 0.62 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.61 -0.0 0.58 ± 2% perf-profile.calltrace.cycles-pp.find_next_bit.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up
0.88 -0.0 0.85 perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.ksys_read
0.80 -0.0 0.77 ± 2% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.82 -0.0 0.79 perf-profile.calltrace.cycles-pp.prepare_to_wait.pipe_wait.pipe_read.__vfs_read.vfs_read
0.72 -0.0 0.70 perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.__vfs_write.vfs_write.ksys_write
0.56 ± 2% -0.0 0.53 perf-profile.calltrace.cycles-pp.update_rq_clock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.83 -0.0 0.81 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_read.__vfs_read.vfs_read.ksys_read
42.40 +0.3 42.69 perf-profile.calltrace.cycles-pp.write
31.80 +0.4 32.18 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.35 +0.5 24.84 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.ksys_read
20.36 +0.6 20.92 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
22.01 +0.6 22.58 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
21.87 +0.6 22.46 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
3.15 ± 11% +1.0 4.12 ± 14% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.07 ± 34% +1.1 2.12 ± 31% perf-profile.calltrace.cycles-pp.tracing_record_taskinfo_sched_switch.__schedule.schedule.pipe_wait.pipe_read
0.66 ± 75% +1.1 1.72 ± 37% perf-profile.calltrace.cycles-pp.trace_save_cmdline.tracing_record_taskinfo.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.75 ± 74% +1.1 1.88 ± 34% perf-profile.calltrace.cycles-pp.tracing_record_taskinfo.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.69 ± 76% +1.2 1.85 ± 36% perf-profile.calltrace.cycles-pp.trace_save_cmdline.tracing_record_taskinfo_sched_switch.__schedule.schedule.pipe_wait
8.73 ± 2% -0.3 8.45 perf-profile.children.cycles-pp.dequeue_task_fair
57.28 -0.3 57.01 perf-profile.children.cycles-pp.read
6.95 -0.2 6.70 perf-profile.children.cycles-pp.syscall_return_via_sysret
5.57 -0.2 5.35 perf-profile.children.cycles-pp.reweight_entity
5.26 -0.2 5.05 perf-profile.children.cycles-pp.select_task_rq_fair
5.19 -0.2 4.99 perf-profile.children.cycles-pp.__switch_to
4.90 -0.2 4.73 ± 2% perf-profile.children.cycles-pp.update_curr
1.27 -0.1 1.13 ± 8% perf-profile.children.cycles-pp.fsnotify
3.92 -0.1 3.83 perf-profile.children.cycles-pp.select_idle_sibling
2.01 -0.1 1.93 perf-profile.children.cycles-pp.__calc_delta
2.14 -0.1 2.06 perf-profile.children.cycles-pp.copy_page_to_iter
1.58 -0.1 1.51 perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
2.90 -0.1 2.84 perf-profile.children.cycles-pp.update_cfs_group
1.93 -0.1 1.87 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
2.35 -0.1 2.29 perf-profile.children.cycles-pp.__switch_to_asm
1.33 -0.1 1.27 ± 3% perf-profile.children.cycles-pp.cpumask_next_wrap
2.57 -0.1 2.52 perf-profile.children.cycles-pp.load_new_mm_cr3
1.53 -0.1 1.47 ± 2% perf-profile.children.cycles-pp.__fdget_pos
1.11 -0.0 1.07 ± 2% perf-profile.children.cycles-pp.find_next_bit
1.18 -0.0 1.14 perf-profile.children.cycles-pp.update_rq_clock
0.88 -0.0 0.83 perf-profile.children.cycles-pp.copy_user_generic_unrolled
1.70 -0.0 1.65 perf-profile.children.cycles-pp.native_write_msr
0.97 -0.0 0.93 ± 2% perf-profile.children.cycles-pp.account_entity_dequeue
0.59 -0.0 0.56 perf-profile.children.cycles-pp.finish_task_switch
0.91 -0.0 0.88 perf-profile.children.cycles-pp.touch_atime
0.69 -0.0 0.65 perf-profile.children.cycles-pp.account_entity_enqueue
2.13 -0.0 2.09 perf-profile.children.cycles-pp.mutex_lock
0.32 ± 3% -0.0 0.29 ± 4% perf-profile.children.cycles-pp.__sb_start_write
0.84 -0.0 0.81 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.89 -0.0 0.87 perf-profile.children.cycles-pp.prepare_to_wait
0.73 -0.0 0.71 perf-profile.children.cycles-pp.copyout
0.31 ± 2% -0.0 0.28 ± 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.46 ± 2% -0.0 0.44 perf-profile.children.cycles-pp.anon_pipe_buf_release
0.38 -0.0 0.36 ± 3% perf-profile.children.cycles-pp.idle_cpu
0.32 -0.0 0.30 ± 2% perf-profile.children.cycles-pp.__x64_sys_read
0.21 ± 2% -0.0 0.20 ± 2% perf-profile.children.cycles-pp.deactivate_task
0.13 -0.0 0.12 ± 4% perf-profile.children.cycles-pp.timespec_trunc
0.09 -0.0 0.08 perf-profile.children.cycles-pp.iov_iter_init
0.08 -0.0 0.07 perf-profile.children.cycles-pp.native_load_tls
0.11 ± 4% +0.0 0.12 perf-profile.children.cycles-pp.tick_sched_timer
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.finish_wait
0.38 ± 2% +0.0 0.40 ± 2% perf-profile.children.cycles-pp.file_update_time
0.31 +0.0 0.33 ± 2% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.24 ± 3% +0.0 0.26 ± 3% perf-profile.children.cycles-pp.rcu_all_qs
0.39 +0.0 0.41 perf-profile.children.cycles-pp._cond_resched
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.default_wake_function
0.23 ± 2% +0.0 0.26 ± 3% perf-profile.children.cycles-pp.current_time
0.30 +0.0 0.35 ± 2% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
0.52 +0.1 0.58 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.00 +0.1 0.08 ± 5% perf-profile.children.cycles-pp.hrtick_update
42.40 +0.3 42.69 perf-profile.children.cycles-pp.write
31.86 +0.4 32.26 perf-profile.children.cycles-pp.__vfs_read
24.40 +0.5 24.89 perf-profile.children.cycles-pp.pipe_wait
20.40 +0.6 20.96 ± 2% perf-profile.children.cycles-pp.try_to_wake_up
22.30 +0.6 22.89 perf-profile.children.cycles-pp.schedule
22.22 +0.6 22.84 perf-profile.children.cycles-pp.__schedule
0.99 ± 36% +0.9 1.94 ± 32% perf-profile.children.cycles-pp.tracing_record_taskinfo
3.30 ± 10% +1.0 4.27 ± 13% perf-profile.children.cycles-pp.ttwu_do_wakeup
1.14 ± 31% +1.1 2.24 ± 29% perf-profile.children.cycles-pp.tracing_record_taskinfo_sched_switch
1.59 ± 46% +2.0 3.60 ± 36% perf-profile.children.cycles-pp.trace_save_cmdline
6.95 -0.2 6.70 perf-profile.self.cycles-pp.syscall_return_via_sysret
5.19 -0.2 4.99 perf-profile.self.cycles-pp.__switch_to
1.27 -0.1 1.12 ± 8% perf-profile.self.cycles-pp.fsnotify
1.49 -0.1 1.36 perf-profile.self.cycles-pp.select_task_rq_fair
2.47 -0.1 2.37 ± 2% perf-profile.self.cycles-pp.reweight_entity
0.29 -0.1 0.19 ± 2% perf-profile.self.cycles-pp.ksys_read
1.50 -0.1 1.42 perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
2.01 -0.1 1.93 perf-profile.self.cycles-pp.__calc_delta
1.93 -0.1 1.86 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.47 -0.1 1.40 perf-profile.self.cycles-pp.dequeue_task_fair
2.90 -0.1 2.84 perf-profile.self.cycles-pp.update_cfs_group
1.29 -0.1 1.23 perf-profile.self.cycles-pp.do_syscall_64
2.57 -0.1 2.52 perf-profile.self.cycles-pp.load_new_mm_cr3
2.28 -0.1 2.23 perf-profile.self.cycles-pp.__switch_to_asm
1.80 -0.1 1.75 perf-profile.self.cycles-pp.select_idle_sibling
1.11 -0.0 1.07 ± 2% perf-profile.self.cycles-pp.find_next_bit
0.87 -0.0 0.83 perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.43 -0.0 0.39 ± 2% perf-profile.self.cycles-pp.dequeue_entity
1.70 -0.0 1.65 perf-profile.self.cycles-pp.native_write_msr
0.92 -0.0 0.88 ± 2% perf-profile.self.cycles-pp.account_entity_dequeue
0.48 -0.0 0.44 perf-profile.self.cycles-pp.finish_task_switch
0.77 -0.0 0.74 perf-profile.self.cycles-pp.___perf_sw_event
0.66 -0.0 0.63 perf-profile.self.cycles-pp.account_entity_enqueue
0.46 ± 2% -0.0 0.43 ± 2% perf-profile.self.cycles-pp.anon_pipe_buf_release
0.32 ± 3% -0.0 0.29 ± 4% perf-profile.self.cycles-pp.__sb_start_write
0.31 ± 2% -0.0 0.28 ± 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.38 -0.0 0.36 ± 3% perf-profile.self.cycles-pp.idle_cpu
0.19 ± 4% -0.0 0.17 ± 2% perf-profile.self.cycles-pp.__fdget_pos
0.50 -0.0 0.48 perf-profile.self.cycles-pp.__atime_needs_update
0.23 ± 2% -0.0 0.21 ± 3% perf-profile.self.cycles-pp.touch_atime
0.31 -0.0 0.30 perf-profile.self.cycles-pp.__x64_sys_read
0.21 ± 2% -0.0 0.20 ± 2% perf-profile.self.cycles-pp.deactivate_task
0.21 ± 2% -0.0 0.19 perf-profile.self.cycles-pp.check_preempt_curr
0.40 -0.0 0.39 perf-profile.self.cycles-pp.autoremove_wake_function
0.40 -0.0 0.38 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.27 -0.0 0.26 perf-profile.self.cycles-pp.pipe_wait
0.13 -0.0 0.12 ± 4% perf-profile.self.cycles-pp.timespec_trunc
0.22 ± 2% -0.0 0.20 ± 2% perf-profile.self.cycles-pp.put_prev_entity
0.09 -0.0 0.08 perf-profile.self.cycles-pp.iov_iter_init
0.08 -0.0 0.07 perf-profile.self.cycles-pp.native_load_tls
0.11 -0.0 0.10 perf-profile.self.cycles-pp.schedule
0.12 ± 4% +0.0 0.13 perf-profile.self.cycles-pp.copyin
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.finish_wait
0.18 +0.0 0.20 ± 2% perf-profile.self.cycles-pp.ttwu_do_activate
0.28 ± 2% +0.0 0.30 ± 2% perf-profile.self.cycles-pp._cond_resched
0.24 ± 3% +0.0 0.26 ± 3% perf-profile.self.cycles-pp.rcu_all_qs
0.05 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.default_wake_function
0.08 ± 14% +0.0 0.11 ± 14% perf-profile.self.cycles-pp.tracing_record_taskinfo_sched_switch
0.51 +0.0 0.55 ± 4% perf-profile.self.cycles-pp.vfs_write
0.30 +0.0 0.35 ± 2% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
0.52 +0.1 0.58 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.hrtick_update
1.97 +0.1 2.07 ± 2% perf-profile.self.cycles-pp.switch_mm_irqs_off
1.59 ± 46% +2.0 3.60 ± 36% perf-profile.self.cycles-pp.trace_save_cmdline
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/brk1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:3 33% 1:3 kmsg.pstore:crypto_comp_decompress_failed,ret=
:3 33% 1:3 kmsg.pstore:decompression_failed
%stddev %change %stddev
\ | \
997317 -2.0% 977778 will-it-scale.per_process_ops
957.00 -7.9% 881.00 ± 3% will-it-scale.per_thread_ops
18.42 ± 3% -8.2% 16.90 will-it-scale.time.user_time
1.917e+08 -2.0% 1.879e+08 will-it-scale.workload
18.42 ± 3% -8.2% 16.90 time.user_time
0.30 ± 11% -36.7% 0.19 ± 11% turbostat.Pkg%pc2
57539 ± 51% +140.6% 138439 ± 31% meminfo.CmaFree
410877 ± 11% -22.1% 320082 ± 22% meminfo.DirectMap4k
343575 ± 27% +71.3% 588703 ± 31% numa-numastat.node0.local_node
374176 ± 24% +63.3% 611007 ± 27% numa-numastat.node0.numa_hit
1056347 ± 4% -39.9% 634843 ± 38% numa-numastat.node3.local_node
1060682 ± 4% -39.0% 646862 ± 35% numa-numastat.node3.numa_hit
14383 ± 51% +140.6% 34608 ± 31% proc-vmstat.nr_free_cma
179.00 +2.4% 183.33 proc-vmstat.nr_inactive_file
179.00 +2.4% 183.33 proc-vmstat.nr_zone_inactive_file
564483 ± 3% -38.0% 350064 ± 36% proc-vmstat.pgalloc_movable
1811959 +10.8% 2008488 ± 5% proc-vmstat.pgalloc_normal
7153 ± 42% -94.0% 431.33 ±119% latency_stats.max.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6627 ±141% +380.5% 31843 ±110% latency_stats.max.call_rwsem_down_write_failed_killable.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
15244 ± 31% -99.9% 15.00 ±141% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.__get_user_8.exit_robust_list.mm_release.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
4301 ±117% -83.7% 700.33 ± 6% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
12153 ± 28% -83.1% 2056 ± 70% latency_stats.sum.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6772 ±141% +1105.8% 81665 ±127% latency_stats.sum.call_rwsem_down_write_failed_killable.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.465e+13 -1.3% 2.434e+13 perf-stat.branch-instructions
2.691e+11 -2.1% 2.635e+11 perf-stat.branch-misses
3.402e+13 -1.4% 3.355e+13 perf-stat.dTLB-loads
1.694e+13 +1.4% 1.718e+13 perf-stat.dTLB-stores
1.75 ± 50% +4.7 6.45 ± 11% perf-stat.iTLB-load-miss-rate%
4.077e+08 ± 48% +232.3% 1.355e+09 ± 11% perf-stat.iTLB-load-misses
2.31e+10 ± 2% -14.9% 1.965e+10 ± 3% perf-stat.iTLB-loads
1.163e+14 -1.6% 1.144e+14 perf-stat.instructions
346171 ± 36% -75.3% 85575 ± 11% perf-stat.instructions-per-iTLB-miss
6.174e+08 ± 2% -9.5% 5.589e+08 perf-stat.node-store-misses
595.00 ± 10% +31.4% 782.00 ± 3% slabinfo.Acpi-State.active_objs
595.00 ± 10% +31.4% 782.00 ± 3% slabinfo.Acpi-State.num_objs
2831 ± 3% -14.0% 2434 ± 5% slabinfo.avtab_node.active_objs
2831 ± 3% -14.0% 2434 ± 5% slabinfo.avtab_node.num_objs
934.00 -10.9% 832.33 ± 5% slabinfo.inotify_inode_mark.active_objs
934.00 -10.9% 832.33 ± 5% slabinfo.inotify_inode_mark.num_objs
1232 ± 4% +13.4% 1397 ± 6% slabinfo.nsproxy.active_objs
1232 ± 4% +13.4% 1397 ± 6% slabinfo.nsproxy.num_objs
499.67 ± 12% +24.8% 623.67 ± 10% slabinfo.secpath_cache.active_objs
499.67 ± 12% +24.8% 623.67 ± 10% slabinfo.secpath_cache.num_objs
31393 ± 84% +220.1% 100477 ± 21% numa-meminfo.node0.Active
31393 ± 84% +220.1% 100477 ± 21% numa-meminfo.node0.Active(anon)
30013 ± 85% +232.1% 99661 ± 21% numa-meminfo.node0.AnonPages
21603 ± 34% -85.0% 3237 ±100% numa-meminfo.node0.Inactive
21528 ± 34% -85.0% 3237 ±100% numa-meminfo.node0.Inactive(anon)
10247 ± 35% -46.4% 5495 numa-meminfo.node0.Mapped
35388 ± 14% -41.6% 20670 ± 15% numa-meminfo.node0.SReclaimable
22911 ± 29% -82.3% 4057 ± 84% numa-meminfo.node0.Shmem
117387 ± 9% -22.5% 90986 ± 12% numa-meminfo.node0.Slab
68863 ± 67% +77.7% 122351 ± 13% numa-meminfo.node1.Active
68863 ± 67% +77.7% 122351 ± 13% numa-meminfo.node1.Active(anon)
228376 +22.3% 279406 ± 17% numa-meminfo.node1.FilePages
1481 ±116% +1062.1% 17218 ± 39% numa-meminfo.node1.Inactive
1481 ±116% +1062.0% 17216 ± 39% numa-meminfo.node1.Inactive(anon)
6593 ± 2% +11.7% 7367 ± 3% numa-meminfo.node1.KernelStack
596227 ± 8% +18.0% 703748 ± 4% numa-meminfo.node1.MemUsed
15298 ± 12% +88.5% 28843 ± 36% numa-meminfo.node1.SReclaimable
52718 ± 9% +21.0% 63810 ± 11% numa-meminfo.node1.SUnreclaim
1808 ± 97% +2723.8% 51054 ± 97% numa-meminfo.node1.Shmem
68017 ± 5% +36.2% 92654 ± 18% numa-meminfo.node1.Slab
125541 ± 29% -64.9% 44024 ± 98% numa-meminfo.node3.Active
125137 ± 29% -65.0% 43823 ± 98% numa-meminfo.node3.Active(anon)
93173 ± 25% -87.8% 11381 ± 20% numa-meminfo.node3.AnonPages
9150 ± 5% -9.3% 8301 ± 8% numa-meminfo.node3.KernelStack
7848 ± 84% +220.0% 25118 ± 21% numa-vmstat.node0.nr_active_anon
7503 ± 85% +232.1% 24914 ± 21% numa-vmstat.node0.nr_anon_pages
5381 ± 34% -85.0% 809.00 ±100% numa-vmstat.node0.nr_inactive_anon
2559 ± 35% -46.4% 1372 numa-vmstat.node0.nr_mapped
5727 ± 29% -82.3% 1014 ± 84% numa-vmstat.node0.nr_shmem
8846 ± 14% -41.6% 5167 ± 15% numa-vmstat.node0.nr_slab_reclaimable
7848 ± 84% +220.0% 25118 ± 21% numa-vmstat.node0.nr_zone_active_anon
5381 ± 34% -85.0% 809.00 ±100% numa-vmstat.node0.nr_zone_inactive_anon
4821 ± 2% +30.3% 6283 ± 15% numa-vmstat.node1
17215 ± 67% +77.7% 30591 ± 13% numa-vmstat.node1.nr_active_anon
57093 +22.3% 69850 ± 17% numa-vmstat.node1.nr_file_pages
370.00 ±116% +1061.8% 4298 ± 39% numa-vmstat.node1.nr_inactive_anon
6593 ± 2% +11.7% 7366 ± 3% numa-vmstat.node1.nr_kernel_stack
451.67 ± 97% +2725.6% 12762 ± 97% numa-vmstat.node1.nr_shmem
3824 ± 12% +88.6% 7211 ± 36% numa-vmstat.node1.nr_slab_reclaimable
13179 ± 9% +21.0% 15952 ± 11% numa-vmstat.node1.nr_slab_unreclaimable
17215 ± 67% +77.7% 30591 ± 13% numa-vmstat.node1.nr_zone_active_anon
370.00 ±116% +1061.8% 4298 ± 39% numa-vmstat.node1.nr_zone_inactive_anon
364789 ± 12% +62.8% 593926 ± 34% numa-vmstat.node1.numa_hit
239539 ± 19% +95.4% 468113 ± 43% numa-vmstat.node1.numa_local
71.00 ± 28% +42.3% 101.00 numa-vmstat.node2.nr_mlock
31285 ± 29% -65.0% 10960 ± 98% numa-vmstat.node3.nr_active_anon
23292 ± 25% -87.8% 2844 ± 19% numa-vmstat.node3.nr_anon_pages
14339 ± 52% +141.1% 34566 ± 32% numa-vmstat.node3.nr_free_cma
9151 ± 5% -9.3% 8299 ± 8% numa-vmstat.node3.nr_kernel_stack
31305 ± 29% -64.9% 10975 ± 98% numa-vmstat.node3.nr_zone_active_anon
930131 ± 3% -35.9% 596006 ± 34% numa-vmstat.node3.numa_hit
836455 ± 3% -40.9% 493947 ± 44% numa-vmstat.node3.numa_local
75182 ± 58% -83.8% 12160 ± 2% sched_debug.cfs_rq:/.load.max
6.65 ± 5% -10.6% 5.94 ± 6% sched_debug.cfs_rq:/.load_avg.avg
0.16 ± 7% +22.6% 0.20 ± 12% sched_debug.cfs_rq:/.nr_running.stddev
5.58 ± 24% +427.7% 29.42 ± 93% sched_debug.cfs_rq:/.nr_spread_over.max
0.54 ± 15% +306.8% 2.19 ± 86% sched_debug.cfs_rq:/.nr_spread_over.stddev
1.05 ± 25% -65.1% 0.37 ± 71% sched_debug.cfs_rq:/.removed.load_avg.avg
9.62 ± 11% -50.7% 4.74 ± 70% sched_debug.cfs_rq:/.removed.load_avg.stddev
48.70 ± 25% -65.1% 17.02 ± 71% sched_debug.cfs_rq:/.removed.runnable_sum.avg
444.31 ± 11% -50.7% 219.26 ± 70% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
0.47 ± 13% -60.9% 0.19 ± 71% sched_debug.cfs_rq:/.removed.util_avg.avg
4.47 ± 4% -46.5% 2.39 ± 70% sched_debug.cfs_rq:/.removed.util_avg.stddev
1.64 ± 7% +22.1% 2.00 ± 13% sched_debug.cfs_rq:/.runnable_load_avg.stddev
74653 ± 59% -84.4% 11676 sched_debug.cfs_rq:/.runnable_weight.max
-119169 -491.3% 466350 ± 27% sched_debug.cfs_rq:/.spread0.avg
517161 ± 30% +145.8% 1271292 ± 23% sched_debug.cfs_rq:/.spread0.max
624.79 ± 5% -14.2% 535.76 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.avg
247.91 ± 32% -99.8% 0.48 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.min
179704 ± 3% +30.4% 234297 ± 16% sched_debug.cpu.avg_idle.stddev
1.56 ± 9% +24.4% 1.94 ± 14% sched_debug.cpu.cpu_load[0].stddev
1.50 ± 6% +27.7% 1.91 ± 14% sched_debug.cpu.cpu_load[1].stddev
1.45 ± 3% +30.8% 1.90 ± 14% sched_debug.cpu.cpu_load[2].stddev
1.43 ± 3% +36.1% 1.95 ± 11% sched_debug.cpu.cpu_load[3].stddev
1.55 ± 7% +43.5% 2.22 ± 7% sched_debug.cpu.cpu_load[4].stddev
10004 ± 3% -11.6% 8839 ± 3% sched_debug.cpu.curr->pid.avg
1146 ± 26% +52.2% 1745 ± 7% sched_debug.cpu.curr->pid.min
3162 ± 6% +25.4% 3966 ± 11% sched_debug.cpu.curr->pid.stddev
403738 ± 3% -11.7% 356696 ± 7% sched_debug.cpu.nr_switches.max
0.08 ± 21% +78.2% 0.14 ± 14% sched_debug.cpu.nr_uninterruptible.avg
404435 ± 3% -11.8% 356732 ± 7% sched_debug.cpu.sched_count.max
4.17 -0.3 3.87 perf-profile.calltrace.cycles-pp.kmem_cache_alloc.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.40 -0.2 2.17 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
7.58 -0.2 7.36 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.00 -0.2 14.81 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.brk
7.83 -0.2 7.66 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
28.66 -0.1 28.51 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.15 -0.1 2.03 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.07 -0.1 0.99 perf-profile.calltrace.cycles-pp.memcpy_erms.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk
1.03 -0.1 0.95 perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
7.33 -0.1 7.25 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk
0.76 -0.1 0.69 perf-profile.calltrace.cycles-pp.__vm_enough_memory.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.85 -0.1 11.77 perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.64 -0.1 1.57 perf-profile.calltrace.cycles-pp.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.06 -0.1 0.99 perf-profile.calltrace.cycles-pp.__indirect_thunk_start.brk
0.73 -0.1 0.67 perf-profile.calltrace.cycles-pp.sync_mm_rss.unmap_page_range.unmap_vmas.unmap_region.do_munmap
4.59 -0.1 4.52 perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.82 -0.1 2.76 perf-profile.calltrace.cycles-pp.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64
2.89 -0.1 2.84 perf-profile.calltrace.cycles-pp.down_write_killable.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.37 -0.1 3.32 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.99 -0.0 1.94 perf-profile.calltrace.cycles-pp.cred_has_capability.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk
2.32 -0.0 2.27 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.88 -0.0 1.84 perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64
0.77 -0.0 0.73 perf-profile.calltrace.cycles-pp._raw_spin_lock.unmap_page_range.unmap_vmas.unmap_region.do_munmap
1.62 -0.0 1.59 perf-profile.calltrace.cycles-pp.memset_erms.kmem_cache_alloc.do_brk_flags.__x64_sys_brk.do_syscall_64
0.81 -0.0 0.79 perf-profile.calltrace.cycles-pp.___might_sleep.down_write_killable.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 -0.0 0.64 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.brk
0.72 +0.0 0.74 perf-profile.calltrace.cycles-pp.do_munmap.brk
0.90 +0.0 0.93 perf-profile.calltrace.cycles-pp.___might_sleep.unmap_page_range.unmap_vmas.unmap_region.do_munmap
4.40 +0.1 4.47 perf-profile.calltrace.cycles-pp.find_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.96 +0.1 2.09 perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.52 ± 2% +0.2 0.68 perf-profile.calltrace.cycles-pp.__vma_link_rb.brk
0.35 ± 70% +0.2 0.54 ± 2% perf-profile.calltrace.cycles-pp.find_vma.brk
2.20 +0.3 2.50 perf-profile.calltrace.cycles-pp.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.62 +0.3 64.94 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
60.53 +0.4 60.92 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
63.20 +0.4 63.60 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.73 +0.5 4.26 perf-profile.calltrace.cycles-pp.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.56 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
24.54 +0.6 25.14 perf-profile.calltrace.cycles-pp.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.put_vma.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.71 +0.6 1.36 perf-profile.calltrace.cycles-pp.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.7 0.70 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64
3.10 +0.7 3.82 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64
0.00 +0.8 0.76 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
0.00 +0.8 0.85 perf-profile.calltrace.cycles-pp.__vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.09 -0.5 4.62 perf-profile.children.cycles-pp.vma_compute_subtree_gap
4.54 -0.3 4.21 perf-profile.children.cycles-pp.kmem_cache_alloc
8.11 -0.2 7.89 perf-profile.children.cycles-pp.perf_event_mmap
8.05 -0.2 7.85 perf-profile.children.cycles-pp.unmap_vmas
15.01 -0.2 14.81 perf-profile.children.cycles-pp.syscall_return_via_sysret
29.20 -0.1 29.06 perf-profile.children.cycles-pp.do_brk_flags
1.11 -0.1 1.00 perf-profile.children.cycles-pp.kmem_cache_free
12.28 -0.1 12.17 perf-profile.children.cycles-pp.unmap_region
7.83 -0.1 7.74 perf-profile.children.cycles-pp.unmap_page_range
0.87 ± 3% -0.1 0.79 perf-profile.children.cycles-pp.__vm_enough_memory
1.29 -0.1 1.22 perf-profile.children.cycles-pp.__indirect_thunk_start
1.81 -0.1 1.74 perf-profile.children.cycles-pp.strlcpy
4.65 -0.1 4.58 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
3.08 -0.1 3.02 perf-profile.children.cycles-pp.down_write_killable
2.88 -0.1 2.82 perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.73 -0.1 0.67 perf-profile.children.cycles-pp.sync_mm_rss
3.65 -0.1 3.59 perf-profile.children.cycles-pp.get_unmapped_area
2.26 -0.1 2.20 perf-profile.children.cycles-pp.cred_has_capability
1.12 -0.1 1.07 perf-profile.children.cycles-pp.memcpy_erms
0.39 -0.0 0.35 perf-profile.children.cycles-pp.__rb_insert_augmented
2.52 -0.0 2.48 perf-profile.children.cycles-pp.perf_iterate_sb
2.13 -0.0 2.09 perf-profile.children.cycles-pp.security_mmap_addr
0.55 ± 2% -0.0 0.52 perf-profile.children.cycles-pp.unmap_single_vma
1.62 -0.0 1.59 perf-profile.children.cycles-pp.memset_erms
0.13 ± 3% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__vma_link_file
0.80 -0.0 0.77 perf-profile.children.cycles-pp._raw_spin_lock
0.43 -0.0 0.41 perf-profile.children.cycles-pp.strlen
0.07 ± 6% -0.0 0.06 ± 8% perf-profile.children.cycles-pp.should_failslab
0.43 -0.0 0.42 perf-profile.children.cycles-pp.may_expand_vm
0.15 +0.0 0.16 perf-profile.children.cycles-pp.__vma_link_list
0.45 +0.0 0.47 perf-profile.children.cycles-pp.rcu_all_qs
0.81 +0.1 0.89 perf-profile.children.cycles-pp.free_pgtables
6.35 +0.1 6.49 perf-profile.children.cycles-pp.find_vma
2.28 +0.2 2.45 perf-profile.children.cycles-pp.vmacache_find
64.66 +0.3 64.98 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.42 +0.3 2.76 perf-profile.children.cycles-pp.remove_vma
61.77 +0.4 62.13 perf-profile.children.cycles-pp.__x64_sys_brk
63.40 +0.4 63.79 perf-profile.children.cycles-pp.do_syscall_64
1.27 +0.4 1.72 perf-profile.children.cycles-pp.__vma_rb_erase
4.02 +0.5 4.53 perf-profile.children.cycles-pp.vma_link
25.26 +0.6 25.89 perf-profile.children.cycles-pp.do_munmap
0.00 +0.7 0.70 perf-profile.children.cycles-pp.put_vma
3.80 +0.7 4.53 perf-profile.children.cycles-pp.__vma_link_rb
0.00 +1.2 1.24 perf-profile.children.cycles-pp.__vma_merge
0.00 +1.5 1.51 perf-profile.children.cycles-pp._raw_write_lock
5.07 -0.5 4.60 perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.59 -0.2 0.38 perf-profile.self.cycles-pp.remove_vma
15.01 -0.2 14.81 perf-profile.self.cycles-pp.syscall_return_via_sysret
3.15 -0.2 2.96 perf-profile.self.cycles-pp.do_munmap
0.98 -0.1 0.87 perf-profile.self.cycles-pp.__vma_rb_erase
1.10 -0.1 0.99 perf-profile.self.cycles-pp.kmem_cache_free
0.68 -0.1 0.58 perf-profile.self.cycles-pp.__vm_enough_memory
0.42 -0.1 0.33 perf-profile.self.cycles-pp.unmap_vmas
3.62 -0.1 3.53 perf-profile.self.cycles-pp.perf_event_mmap
1.41 -0.1 1.34 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.29 -0.1 1.22 perf-profile.self.cycles-pp.__indirect_thunk_start
0.73 -0.1 0.66 perf-profile.self.cycles-pp.sync_mm_rss
2.96 -0.1 2.90 perf-profile.self.cycles-pp.__x64_sys_brk
3.24 -0.1 3.19 perf-profile.self.cycles-pp.brk
1.11 -0.0 1.07 perf-profile.self.cycles-pp.memcpy_erms
0.53 ± 3% -0.0 0.49 ± 2% perf-profile.self.cycles-pp.vma_link
0.73 -0.0 0.69 perf-profile.self.cycles-pp.unmap_region
1.66 -0.0 1.61 perf-profile.self.cycles-pp.down_write_killable
0.39 -0.0 0.35 perf-profile.self.cycles-pp.__rb_insert_augmented
1.74 -0.0 1.71 perf-profile.self.cycles-pp.kmem_cache_alloc
0.55 ± 2% -0.0 0.52 perf-profile.self.cycles-pp.unmap_single_vma
1.61 -0.0 1.59 perf-profile.self.cycles-pp.memset_erms
0.80 -0.0 0.77 perf-profile.self.cycles-pp._raw_spin_lock
0.13 -0.0 0.11 ± 4% perf-profile.self.cycles-pp.__vma_link_file
0.43 -0.0 0.41 perf-profile.self.cycles-pp.strlen
0.07 ± 6% -0.0 0.06 ± 8% perf-profile.self.cycles-pp.should_failslab
0.81 -0.0 0.79 perf-profile.self.cycles-pp.tlb_finish_mmu
0.15 +0.0 0.16 perf-profile.self.cycles-pp.__vma_link_list
0.45 +0.0 0.47 perf-profile.self.cycles-pp.rcu_all_qs
0.71 +0.0 0.72 perf-profile.self.cycles-pp.strlcpy
0.51 +0.1 0.56 perf-profile.self.cycles-pp.free_pgtables
1.41 +0.1 1.48 perf-profile.self.cycles-pp.__vma_link_rb
2.27 +0.2 2.44 perf-profile.self.cycles-pp.vmacache_find
0.00 +0.7 0.69 perf-profile.self.cycles-pp.put_vma
0.00 +1.2 1.23 perf-profile.self.cycles-pp.__vma_merge
0.00 +1.5 1.50 perf-profile.self.cycles-pp._raw_write_lock
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/brk1/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
       fail:runs          %reproduction           fail:runs
           |                    |                     |
:3 33% 1:3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
:3 33% 1:3 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
         %stddev     %change         %stddev
             \          |                \
998475 -2.2% 976893 will-it-scale.per_process_ops
625.87 -2.3% 611.42 will-it-scale.time.elapsed_time
625.87 -2.3% 611.42 will-it-scale.time.elapsed_time.max
8158 -1.9% 8000 will-it-scale.time.maximum_resident_set_size
18.42 ± 2% -11.9% 16.24 will-it-scale.time.user_time
34349225 ± 13% -14.5% 29371024 ± 17% will-it-scale.time.voluntary_context_switches
1.919e+08 -2.2% 1.877e+08 will-it-scale.workload
1639 ± 23% -18.4% 1337 ± 30% meminfo.Mlocked
17748 ± 82% +103.1% 36051 numa-numastat.node3.other_node
33410486 ± 14% -14.8% 28449258 ± 18% cpuidle.C1.usage
698749 ± 15% -18.0% 573307 ± 20% cpuidle.POLL.usage
3013702 ± 14% -15.1% 2559405 ± 17% softirqs.SCHED
54361293 ± 2% -19.0% 44044816 ± 2% softirqs.TIMER
33408303 ± 14% -14.9% 28447123 ± 18% turbostat.C1
0.34 ± 16% -52.0% 0.16 ± 15% turbostat.Pkg%pc2
1310 ± 74% +412.1% 6710 ± 58% irq_exception_noise.__do_page_fault.samples
3209 ± 74% +281.9% 12258 ± 53% irq_exception_noise.__do_page_fault.sum
600.67 ±132% -96.0% 24.00 ± 23% irq_exception_noise.irq_nr
99557 ± 7% -24.0% 75627 ± 7% irq_exception_noise.softirq_nr
41424 ± 9% -24.6% 31253 ± 6% irq_exception_noise.softirq_time
625.87 -2.3% 611.42 time.elapsed_time
625.87 -2.3% 611.42 time.elapsed_time.max
8158 -1.9% 8000 time.maximum_resident_set_size
18.42 ± 2% -11.9% 16.24 time.user_time
34349225 ± 13% -14.5% 29371024 ± 17% time.voluntary_context_switches
988.00 ± 8% +14.5% 1131 ± 2% slabinfo.Acpi-ParseExt.active_objs
988.00 ± 8% +14.5% 1131 ± 2% slabinfo.Acpi-ParseExt.num_objs
2384 ± 3% +21.1% 2888 ± 11% slabinfo.pool_workqueue.active_objs
2474 ± 2% +20.4% 2979 ± 11% slabinfo.pool_workqueue.num_objs
490.33 ± 10% -19.2% 396.00 ± 11% slabinfo.secpath_cache.active_objs
490.33 ± 10% -19.2% 396.00 ± 11% slabinfo.secpath_cache.num_objs
1123 ± 7% +14.2% 1282 ± 3% slabinfo.skbuff_fclone_cache.active_objs
1123 ± 7% +14.2% 1282 ± 3% slabinfo.skbuff_fclone_cache.num_objs
1.09 -0.0 1.07 perf-stat.branch-miss-rate%
2.691e+11 -2.4% 2.628e+11 perf-stat.branch-misses
71981351 ± 12% -13.8% 62013509 ± 16% perf-stat.context-switches
1.697e+13 +1.1% 1.715e+13 perf-stat.dTLB-stores
2.36 ± 29% +4.4 6.76 ± 11% perf-stat.iTLB-load-miss-rate%
5.21e+08 ± 28% +194.8% 1.536e+09 ± 10% perf-stat.iTLB-load-misses
239983 ± 24% -68.4% 75819 ± 11% perf-stat.instructions-per-iTLB-miss
3295653 ± 2% -6.3% 3088753 ± 3% perf-stat.node-stores
606239 +1.1% 612799 perf-stat.path-length
3755 ± 28% -37.5% 2346 ± 52% sched_debug.cfs_rq:/.exec_clock.stddev
10.45 ± 4% +24.3% 12.98 ± 18% sched_debug.cfs_rq:/.load_avg.stddev
6243 ± 46% -38.6% 3831 ± 78% sched_debug.cpu.load.stddev
867.80 ± 7% +25.3% 1087 ± 6% sched_debug.cpu.nr_load_updates.stddev
395898 ± 3% -11.1% 352071 ± 7% sched_debug.cpu.nr_switches.max
-13.33 -21.1% -10.52 sched_debug.cpu.nr_uninterruptible.min
395674 ± 3% -11.1% 351762 ± 7% sched_debug.cpu.sched_count.max
33152 ± 4% -12.8% 28899 sched_debug.cpu.ttwu_count.min
0.03 ± 20% +77.7% 0.05 ± 15% sched_debug.rt_rq:/.rt_time.max
89523 +1.8% 91099 proc-vmstat.nr_active_anon
409.67 ± 23% -18.4% 334.33 ± 30% proc-vmstat.nr_mlock
89530 +1.8% 91117 proc-vmstat.nr_zone_active_anon
2337130 -2.2% 2286775 proc-vmstat.numa_hit
2229090 -2.3% 2178626 proc-vmstat.numa_local
8460 ± 39% -75.5% 2076 ± 53% proc-vmstat.numa_pages_migrated
28643 ± 55% -83.5% 4727 ± 58% proc-vmstat.numa_pte_updates
2695806 -1.8% 2646639 proc-vmstat.pgfault
2330191 -2.1% 2281197 proc-vmstat.pgfree
8460 ± 39% -75.5% 2076 ± 53% proc-vmstat.pgmigrate_success
237651 ± 2% +31.3% 312092 ± 16% numa-meminfo.node0.FilePages
8059 ± 2% +10.7% 8925 ± 7% numa-meminfo.node0.KernelStack
6830 ± 25% +48.8% 10164 ± 35% numa-meminfo.node0.Mapped
1612 ± 21% +70.0% 2740 ± 19% numa-meminfo.node0.PageTables
10772 ± 65% +679.4% 83962 ± 59% numa-meminfo.node0.Shmem
163195 ± 15% -36.9% 103036 ± 32% numa-meminfo.node1.Active
163195 ± 15% -36.9% 103036 ± 32% numa-meminfo.node1.Active(anon)
1730 ± 4% +33.9% 2317 ± 14% numa-meminfo.node1.PageTables
55778 ± 19% +32.5% 73910 ± 8% numa-meminfo.node1.SUnreclaim
2671 ± 16% -45.0% 1469 ± 15% numa-meminfo.node2.PageTables
61537 ± 13% -17.7% 50647 ± 3% numa-meminfo.node2.SUnreclaim
48644 ± 94% +149.8% 121499 ± 11% numa-meminfo.node3.Active
48440 ± 94% +150.4% 121295 ± 11% numa-meminfo.node3.Active(anon)
11832 ± 79% -91.5% 1008 ± 67% numa-meminfo.node3.Inactive
11597 ± 82% -93.3% 772.00 ± 82% numa-meminfo.node3.Inactive(anon)
10389 ± 32% -43.0% 5921 ± 6% numa-meminfo.node3.Mapped
33704 ± 24% -44.2% 18792 ± 15% numa-meminfo.node3.SReclaimable
104733 ± 14% -25.3% 78275 ± 8% numa-meminfo.node3.Slab
139329 ±133% -99.8% 241.67 ± 79% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
5403 ±139% -97.5% 137.67 ± 71% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
165968 ±101% -61.9% 63304 ± 58% latency_stats.avg.max
83.00 +12810.4% 10715 ±140% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
102.67 ± 6% +18845.5% 19450 ±140% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
136.33 ± 16% +25043.5% 34279 ±141% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.lookup_slow.walk_component.path_lookupat.filename_lookup
18497 ±141% -100.0% 0.00 latency_stats.max.call_rwsem_down_write_failed_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
140500 ±131% -99.8% 247.00 ± 78% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
5403 ±139% -97.5% 137.67 ± 71% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
87.33 ± 5% +23963.0% 21015 ±140% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
136.33 ± 16% +25043.5% 34279 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.lookup_slow.walk_component.path_lookupat.filename_lookup
149.33 ± 14% +25485.9% 38208 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
18761 ±141% -100.0% 0.00 latency_stats.sum.call_rwsem_down_write_failed_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
23363 ±114% -100.0% 0.00 latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.__get_user_8.exit_robust_list.mm_release.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
144810 ±125% -99.8% 326.67 ± 70% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_do_create.nfs3_proc_create.nfs_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
5403 ±139% -97.5% 137.67 ± 71% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
59698 ± 98% -78.0% 13110 ±141% latency_stats.sum.call_rwsem_down_read_failed.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
166.33 +12768.5% 21404 ±140% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_access.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat.filename_lookup
825.00 ± 6% +18761.7% 155609 ±140% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
136.33 ± 16% +25043.5% 34279 ±141% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup.__lookup_slow.lookup_slow.walk_component.path_lookupat.filename_lookup
59412 ± 2% +31.3% 78021 ± 16% numa-vmstat.node0.nr_file_pages
8059 ± 2% +10.7% 8923 ± 7% numa-vmstat.node0.nr_kernel_stack
1701 ± 25% +49.1% 2536 ± 35% numa-vmstat.node0.nr_mapped
402.33 ± 21% +70.0% 684.00 ± 19% numa-vmstat.node0.nr_page_table_pages
2692 ± 65% +679.5% 20988 ± 59% numa-vmstat.node0.nr_shmem
622587 ± 36% +37.7% 857545 ± 13% numa-vmstat.node0.numa_local
40797 ± 15% -36.9% 25757 ± 32% numa-vmstat.node1.nr_active_anon
432.00 ± 4% +33.9% 578.33 ± 14% numa-vmstat.node1.nr_page_table_pages
13944 ± 19% +32.5% 18477 ± 8% numa-vmstat.node1.nr_slab_unreclaimable
40797 ± 15% -36.9% 25757 ± 32% numa-vmstat.node1.nr_zone_active_anon
625073 ± 26% +29.4% 808657 ± 18% numa-vmstat.node1.numa_hit
503969 ± 34% +39.2% 701446 ± 23% numa-vmstat.node1.numa_local
137.33 ± 40% -49.0% 70.00 ± 29% numa-vmstat.node2.nr_mlock
667.67 ± 17% -45.1% 366.33 ± 15% numa-vmstat.node2.nr_page_table_pages
15384 ± 13% -17.7% 12662 ± 3% numa-vmstat.node2.nr_slab_unreclaimable
12114 ± 94% +150.3% 30326 ± 11% numa-vmstat.node3.nr_active_anon
2887 ± 83% -93.4% 190.00 ± 82% numa-vmstat.node3.nr_inactive_anon
2632 ± 30% -39.2% 1600 ± 5% numa-vmstat.node3.nr_mapped
101.00 -30.0% 70.67 ± 29% numa-vmstat.node3.nr_mlock
8425 ± 24% -44.2% 4697 ± 15% numa-vmstat.node3.nr_slab_reclaimable
12122 ± 94% +150.3% 30346 ± 11% numa-vmstat.node3.nr_zone_active_anon
2887 ± 83% -93.4% 190.00 ± 82% numa-vmstat.node3.nr_zone_inactive_anon
106945 ± 13% +17.4% 125554 numa-vmstat.node3.numa_other
4.17 -0.3 3.82 perf-profile.calltrace.cycles-pp.kmem_cache_alloc.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.02 -0.3 14.77 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.brk
2.42 -0.2 2.18 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
7.60 -0.2 7.39 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.79 -0.2 7.63 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
0.82 ± 9% -0.1 0.68 perf-profile.calltrace.cycles-pp.__vm_enough_memory.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.13 -0.1 2.00 perf-profile.calltrace.cycles-pp.vma_compute_subtree_gap.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 -0.1 0.95 perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
7.31 -0.1 7.21 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.__x64_sys_brk
0.74 -0.1 0.67 perf-profile.calltrace.cycles-pp.sync_mm_rss.unmap_page_range.unmap_vmas.unmap_region.do_munmap
1.06 -0.1 1.00 perf-profile.calltrace.cycles-pp.memcpy_erms.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk
3.38 -0.1 3.33 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 -0.0 1.00 ± 2% perf-profile.calltrace.cycles-pp.__indirect_thunk_start.brk
2.34 -0.0 2.29 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.64 -0.0 1.59 perf-profile.calltrace.cycles-pp.strlcpy.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.89 -0.0 1.86 perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64
0.76 -0.0 0.73 perf-profile.calltrace.cycles-pp._raw_spin_lock.unmap_page_range.unmap_vmas.unmap_region.do_munmap
0.57 ± 2% -0.0 0.55 perf-profile.calltrace.cycles-pp.selinux_mmap_addr.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk
0.54 ± 2% +0.0 0.56 perf-profile.calltrace.cycles-pp.do_brk_flags.brk
0.72 +0.0 0.76 ± 2% perf-profile.calltrace.cycles-pp.do_munmap.brk
4.38 +0.1 4.43 perf-profile.calltrace.cycles-pp.find_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.96 +0.1 2.04 perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.53 +0.2 0.68 perf-profile.calltrace.cycles-pp.__vma_link_rb.brk
2.21 +0.3 2.51 perf-profile.calltrace.cycles-pp.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.44 +0.5 64.90 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
63.04 +0.5 63.54 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
60.37 +0.5 60.88 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
3.75 +0.5 4.29 perf-profile.calltrace.cycles-pp.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.do_munmap.__x64_sys_brk.do_syscall_64
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.put_vma.remove_vma.do_munmap.__x64_sys_brk.do_syscall_64
0.72 +0.7 1.37 perf-profile.calltrace.cycles-pp.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.42 +0.7 25.08 perf-profile.calltrace.cycles-pp.do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.00 +0.7 0.71 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_rb_erase.do_munmap.__x64_sys_brk.do_syscall_64
3.12 +0.7 3.84 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk.do_syscall_64
0.00 +0.8 0.77 perf-profile.calltrace.cycles-pp._raw_write_lock.__vma_link_rb.vma_link.do_brk_flags.__x64_sys_brk
0.00 +0.9 0.85 perf-profile.calltrace.cycles-pp.__vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.10 -0.5 4.60 perf-profile.children.cycles-pp.vma_compute_subtree_gap
4.53 -0.3 4.18 perf-profile.children.cycles-pp.kmem_cache_alloc
15.03 -0.3 14.77 perf-profile.children.cycles-pp.syscall_return_via_sysret
8.13 -0.2 7.92 perf-profile.children.cycles-pp.perf_event_mmap
8.01 -0.2 7.81 perf-profile.children.cycles-pp.unmap_vmas
0.97 ± 14% -0.2 0.78 perf-profile.children.cycles-pp.__vm_enough_memory
1.13 -0.1 1.00 perf-profile.children.cycles-pp.kmem_cache_free
7.82 -0.1 7.70 perf-profile.children.cycles-pp.unmap_page_range
12.23 -0.1 12.13 perf-profile.children.cycles-pp.unmap_region
0.74 -0.1 0.67 perf-profile.children.cycles-pp.sync_mm_rss
3.06 -0.1 3.00 perf-profile.children.cycles-pp.down_write_killable
0.40 ± 2% -0.1 0.34 perf-profile.children.cycles-pp.__rb_insert_augmented
1.29 -0.1 1.23 perf-profile.children.cycles-pp.__indirect_thunk_start
2.54 -0.1 2.49 perf-profile.children.cycles-pp.perf_iterate_sb
3.66 -0.0 3.61 perf-profile.children.cycles-pp.get_unmapped_area
1.80 -0.0 1.75 perf-profile.children.cycles-pp.strlcpy
0.53 ± 2% -0.0 0.49 ± 2% perf-profile.children.cycles-pp.cap_capable
1.57 -0.0 1.53 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
1.11 -0.0 1.08 perf-profile.children.cycles-pp.memcpy_erms
0.13 -0.0 0.10 perf-profile.children.cycles-pp.__vma_link_file
0.55 -0.0 0.52 perf-profile.children.cycles-pp.unmap_single_vma
1.47 -0.0 1.44 perf-profile.children.cycles-pp.cap_vm_enough_memory
2.14 -0.0 2.12 perf-profile.children.cycles-pp.security_mmap_addr
0.32 -0.0 0.30 perf-profile.children.cycles-pp.userfaultfd_unmap_complete
1.25 -0.0 1.23 perf-profile.children.cycles-pp.up_write
0.50 -0.0 0.49 perf-profile.children.cycles-pp.userfaultfd_unmap_prep
0.27 -0.0 0.26 perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.14 -0.0 1.12 perf-profile.children.cycles-pp.__might_sleep
0.07 -0.0 0.06 perf-profile.children.cycles-pp.should_failslab
0.72 +0.0 0.74 perf-profile.children.cycles-pp._cond_resched
0.45 +0.0 0.47 perf-profile.children.cycles-pp.rcu_all_qs
0.15 ± 3% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.__vma_link_list
0.15 ± 5% +0.0 0.18 ± 5% perf-profile.children.cycles-pp.tick_sched_timer
0.05 ± 8% +0.1 0.12 ± 17% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.80 +0.1 0.89 perf-profile.children.cycles-pp.free_pgtables
0.22 ± 7% +0.1 0.31 ± 9% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.11 ± 15% perf-profile.children.cycles-pp.clockevents_program_event
6.34 +0.1 6.47 perf-profile.children.cycles-pp.find_vma
2.27 +0.1 2.40 perf-profile.children.cycles-pp.vmacache_find
0.40 ± 4% +0.2 0.58 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
0.40 ± 4% +0.2 0.58 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.37 ± 4% +0.2 0.54 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.2 0.19 ± 12% perf-profile.children.cycles-pp.ktime_get
2.42 +0.3 2.77 perf-profile.children.cycles-pp.remove_vma
64.49 +0.5 64.94 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.27 +0.5 1.73 perf-profile.children.cycles-pp.__vma_rb_erase
61.62 +0.5 62.10 perf-profile.children.cycles-pp.__x64_sys_brk
63.24 +0.5 63.74 perf-profile.children.cycles-pp.do_syscall_64
4.03 +0.5 4.56 perf-profile.children.cycles-pp.vma_link
0.00 +0.7 0.69 perf-profile.children.cycles-pp.put_vma
25.13 +0.7 25.84 perf-profile.children.cycles-pp.do_munmap
3.83 +0.7 4.56 perf-profile.children.cycles-pp.__vma_link_rb
0.00 +1.2 1.25 perf-profile.children.cycles-pp.__vma_merge
0.00 +1.5 1.53 perf-profile.children.cycles-pp._raw_write_lock
5.08 -0.5 4.58 perf-profile.self.cycles-pp.vma_compute_subtree_gap
15.03 -0.3 14.77 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.59 -0.2 0.39 perf-profile.self.cycles-pp.remove_vma
0.72 ± 7% -0.1 0.58 perf-profile.self.cycles-pp.__vm_enough_memory
1.12 -0.1 0.99 perf-profile.self.cycles-pp.kmem_cache_free
3.11 -0.1 2.99 perf-profile.self.cycles-pp.do_munmap
0.99 -0.1 0.88 perf-profile.self.cycles-pp.__vma_rb_erase
3.63 -0.1 3.52 perf-profile.self.cycles-pp.perf_event_mmap
3.26 -0.1 3.17 perf-profile.self.cycles-pp.brk
0.41 ± 2% -0.1 0.33 perf-profile.self.cycles-pp.unmap_vmas
0.74 -0.1 0.67 perf-profile.self.cycles-pp.sync_mm_rss
1.75 -0.1 1.68 perf-profile.self.cycles-pp.kmem_cache_alloc
0.40 ± 2% -0.1 0.34 perf-profile.self.cycles-pp.__rb_insert_augmented
1.29 ± 2% -0.1 1.23 perf-profile.self.cycles-pp.__indirect_thunk_start
0.73 -0.0 0.68 ± 2% perf-profile.self.cycles-pp.unmap_region
0.53 -0.0 0.49 perf-profile.self.cycles-pp.vma_link
1.40 -0.0 1.35 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
5.22 -0.0 5.18 perf-profile.self.cycles-pp.unmap_page_range
0.53 ± 2% -0.0 0.49 ± 2% perf-profile.self.cycles-pp.cap_capable
1.11 -0.0 1.07 perf-profile.self.cycles-pp.memcpy_erms
1.86 -0.0 1.82 perf-profile.self.cycles-pp.perf_iterate_sb
1.30 -0.0 1.27 perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.13 -0.0 0.10 perf-profile.self.cycles-pp.__vma_link_file
0.55 -0.0 0.52 perf-profile.self.cycles-pp.unmap_single_vma
0.74 -0.0 0.72 perf-profile.self.cycles-pp.selinux_mmap_addr
0.32 -0.0 0.30 perf-profile.self.cycles-pp.userfaultfd_unmap_complete
1.13 -0.0 1.12 perf-profile.self.cycles-pp.__might_sleep
1.24 -0.0 1.23 perf-profile.self.cycles-pp.up_write
0.50 -0.0 0.49 perf-profile.self.cycles-pp.userfaultfd_unmap_prep
0.27 -0.0 0.26 perf-profile.self.cycles-pp.tlb_flush_mmu_free
0.07 -0.0 0.06 perf-profile.self.cycles-pp.should_failslab
0.45 +0.0 0.47 perf-profile.self.cycles-pp.rcu_all_qs
0.71 +0.0 0.73 perf-profile.self.cycles-pp.strlcpy
0.15 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.__vma_link_list
0.51 +0.1 0.57 perf-profile.self.cycles-pp.free_pgtables
1.40 +0.1 1.49 perf-profile.self.cycles-pp.__vma_link_rb
2.27 +0.1 2.39 perf-profile.self.cycles-pp.vmacache_find
0.00 +0.2 0.18 ± 12% perf-profile.self.cycles-pp.ktime_get
0.00 +0.7 0.69 perf-profile.self.cycles-pp.put_vma
0.00 +1.2 1.24 perf-profile.self.cycles-pp.__vma_merge
0.00 +1.5 1.52 perf-profile.self.cycles-pp._raw_write_lock
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/always/page_fault2/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
       fail:runs          %reproduction           fail:runs
           |                    |                     |
:3 33% 1:3 dmesg.WARNING:at#for_ip_native_iret/0x
1:3 -33% :3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__schedule/0x
:3 33% 1:3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
1:3 -33% :3 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
3:3 -100% :3 kmsg.pstore:crypto_comp_decompress_failed,ret=
3:3 -100% :3 kmsg.pstore:decompression_failed
2:3 4% 2:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
5:3 7% 5:3 perf-profile.calltrace.cycles-pp.error_entry
5:3 7% 5:3 perf-profile.children.cycles-pp.error_entry
2:3 3% 2:3 perf-profile.self.cycles-pp.error_entry
         %stddev     %change         %stddev
             \          |                \
8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
92778 ± 2% +17.6% 109080 will-it-scale.time.involuntary_context_switches
21954366 ± 3% +4.1% 22857988 ± 2% will-it-scale.time.maximum_resident_set_size
4.81e+08 ± 2% -18.9% 3.899e+08 will-it-scale.time.minor_page_faults
5804 +12.2% 6512 will-it-scale.time.percent_of_cpu_this_job_got
34918 +12.2% 39193 will-it-scale.time.system_time
5638528 ± 2% -15.3% 4778392 will-it-scale.time.voluntary_context_switches
15846405 -2.0% 15531034 will-it-scale.workload
2818137 +1.5% 2861500 interrupts.CAL:Function_call_interrupts
3.33 ± 28% -60.0% 1.33 ± 93% irq_exception_noise.irq_time
2866 +23.9% 3552 ± 2% kthread_noise.total_time
5589674 ± 14% +31.4% 7344810 ± 6% meminfo.DirectMap2M
31169 -16.9% 25906 uptime.idle
25242 ± 4% -14.2% 21654 ± 6% vmstat.system.cs
7055 -11.6% 6237 boot-time.idle
21.12 +19.3% 25.19 ± 9% boot-time.kernel_boot
20.03 ± 2% -3.7 16.38 mpstat.cpu.idle%
0.00 ± 8% -0.0 0.00 ± 4% mpstat.cpu.iowait%
7284147 ± 2% -16.4% 6092495 softirqs.RCU
5350756 ± 2% -10.9% 4769417 ± 4% softirqs.SCHED
42933 ± 21% -28.2% 30807 ± 7% numa-meminfo.node2.SReclaimable
63219 ± 13% -16.6% 52717 ± 6% numa-meminfo.node2.SUnreclaim
106153 ± 16% -21.3% 83525 ± 5% numa-meminfo.node2.Slab
247154 ± 4% -7.6% 228415 numa-meminfo.node3.Unevictable
11904 ± 4% +17.1% 13945 ± 8% numa-vmstat.node0
2239 ± 22% -26.6% 1644 ± 2% numa-vmstat.node2.nr_mapped
10728 ± 21% -28.2% 7701 ± 7% numa-vmstat.node2.nr_slab_reclaimable
15803 ± 13% -16.6% 13179 ± 6% numa-vmstat.node2.nr_slab_unreclaimable
61788 ± 4% -7.6% 57103 numa-vmstat.node3.nr_unevictable
61788 ± 4% -7.6% 57103 numa-vmstat.node3.nr_zone_unevictable
92778 ± 2% +17.6% 109080 time.involuntary_context_switches
21954366 ± 3% +4.1% 22857988 ± 2% time.maximum_resident_set_size
4.81e+08 ± 2% -18.9% 3.899e+08 time.minor_page_faults
5804 +12.2% 6512 time.percent_of_cpu_this_job_got
34918 +12.2% 39193 time.system_time
5638528 ± 2% -15.3% 4778392 time.voluntary_context_switches
3942289 ± 2% -10.5% 3528902 ± 2% cpuidle.C1.time
242290 -14.2% 207992 cpuidle.C1.usage
1.64e+09 ± 2% -15.7% 1.381e+09 cpuidle.C1E.time
4621281 ± 2% -14.7% 3939757 cpuidle.C1E.usage
2.115e+10 ± 2% -18.5% 1.723e+10 cpuidle.C6.time
24771099 ± 2% -18.0% 20305766 cpuidle.C6.usage
1210810 ± 4% -17.6% 997270 ± 2% cpuidle.POLL.time
18742 ± 3% -17.0% 15559 ± 2% cpuidle.POLL.usage
4135 ±141% -100.0% 0.00 latency_stats.avg.x86_reserve_hardware.x86_pmu_event_init.perf_try_init_event.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
33249 ±129% -100.0% 0.00 latency_stats.max.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4135 ±141% -100.0% 0.00 latency_stats.max.x86_reserve_hardware.x86_pmu_event_init.perf_try_init_event.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
65839 ±116% -100.0% 0.00 latency_stats.sum.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4135 ±141% -100.0% 0.00 latency_stats.sum.x86_reserve_hardware.x86_pmu_event_init.perf_try_init_event.perf_event_alloc.__do_sys_perf_event_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
8387 ±122% -90.9% 767.00 ± 13% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
263970 ± 10% -68.6% 82994 ± 3% latency_stats.sum.do_syslog.kmsg_read.proc_reg_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6173 ± 77% +173.3% 16869 ± 98% latency_stats.sum.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
101.33 -4.6% 96.67 proc-vmstat.nr_anon_transparent_hugepages
39967 -1.8% 39241 proc-vmstat.nr_slab_reclaimable
67166 -2.4% 65522 proc-vmstat.nr_slab_unreclaimable
237743 -3.9% 228396 proc-vmstat.nr_unevictable
237743 -3.9% 228396 proc-vmstat.nr_zone_unevictable
4.807e+09 -2.0% 4.71e+09 proc-vmstat.numa_hit
4.807e+09 -2.0% 4.71e+09 proc-vmstat.numa_local
4.791e+09 -2.1% 4.69e+09 proc-vmstat.pgalloc_normal
4.783e+09 -2.0% 4.685e+09 proc-vmstat.pgfault
4.807e+09 -2.0% 4.709e+09 proc-vmstat.pgfree
1753 +4.6% 1833 turbostat.Avg_MHz
239445 -14.1% 205783 turbostat.C1
4617105 ± 2% -14.8% 3934693 turbostat.C1E
1.40 ± 2% -0.2 1.18 turbostat.C1E%
24764661 ± 2% -18.0% 20297643 turbostat.C6
18.09 ± 2% -3.4 14.74 turbostat.C6%
7.53 ± 2% -17.1% 6.24 turbostat.CPU%c1
11.88 ± 2% -19.1% 9.61 turbostat.CPU%c6
7.62 ± 3% -20.8% 6.04 turbostat.Pkg%pc2
388.30 +1.5% 393.93 turbostat.PkgWatt
390974 ± 8% +35.8% 530867 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
-1754042 +75.7% -3081270 sched_debug.cfs_rq:/.spread0.min
388140 ± 8% +36.2% 528494 ± 11% sched_debug.cfs_rq:/.spread0.stddev
542.30 ± 3% -10.0% 488.21 ± 3% sched_debug.cfs_rq:/.util_avg.min
53.35 ± 16% +48.7% 79.35 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.avg
30520 ± 6% -15.2% 25883 ± 12% sched_debug.cpu.nr_switches.avg
473770 ± 27% -37.4% 296623 ± 32% sched_debug.cpu.nr_switches.max
17077 ± 2% -15.1% 14493 sched_debug.cpu.nr_switches.min
30138 ± 6% -15.0% 25606 ± 12% sched_debug.cpu.sched_count.avg
472345 ± 27% -37.2% 296419 ± 32% sched_debug.cpu.sched_count.max
16858 ± 2% -15.2% 14299 sched_debug.cpu.sched_count.min
8358 ± 2% -15.5% 7063 sched_debug.cpu.sched_goidle.avg
12225 -13.6% 10565 sched_debug.cpu.sched_goidle.max
8032 ± 2% -16.0% 6749 sched_debug.cpu.sched_goidle.min
14839 ± 6% -15.3% 12568 ± 12% sched_debug.cpu.ttwu_count.avg
235115 ± 28% -38.3% 145175 ± 31% sched_debug.cpu.ttwu_count.max
7627 ± 3% -15.9% 6413 ± 2% sched_debug.cpu.ttwu_count.min
226299 ± 29% -39.5% 136827 ± 32% sched_debug.cpu.ttwu_local.max
0.85 -0.0 0.81 perf-stat.branch-miss-rate%
3.675e+10 -4.1% 3.523e+10 perf-stat.branch-misses
4.052e+11 -2.3% 3.958e+11 perf-stat.cache-misses
7.008e+11 -2.5% 6.832e+11 perf-stat.cache-references
15320995 ± 4% -14.3% 13136557 ± 6% perf-stat.context-switches
9.16 +4.8% 9.59 perf-stat.cpi
2.03e+14 +4.6% 2.124e+14 perf-stat.cpu-cycles
44508 -1.7% 43743 perf-stat.cpu-migrations
1.30 -0.1 1.24 perf-stat.dTLB-store-miss-rate%
4.064e+10 -3.5% 3.922e+10 perf-stat.dTLB-store-misses
3.086e+12 +1.1% 3.119e+12 perf-stat.dTLB-stores
3.611e+08 ± 6% -8.5% 3.304e+08 ± 5% perf-stat.iTLB-loads
0.11 -4.6% 0.10 perf-stat.ipc
4.783e+09 -2.0% 4.685e+09 perf-stat.minor-faults
1.53 ± 2% -0.3 1.22 ± 8% perf-stat.node-load-miss-rate%
1.389e+09 ± 3% -22.1% 1.083e+09 ± 9% perf-stat.node-load-misses
8.922e+10 -1.9% 8.75e+10 perf-stat.node-loads
5.06 +1.7 6.77 ± 3% perf-stat.node-store-miss-rate%
1.204e+09 +29.3% 1.556e+09 ± 3% perf-stat.node-store-misses
2.256e+10 -5.1% 2.142e+10 ± 2% perf-stat.node-stores
4.783e+09 -2.0% 4.685e+09 perf-stat.page-faults
1399242 +1.9% 1425404 perf-stat.path-length
1144 ± 8% -13.6% 988.00 ± 8% slabinfo.Acpi-ParseExt.active_objs
1144 ± 8% -13.6% 988.00 ± 8% slabinfo.Acpi-ParseExt.num_objs
1878 ± 17% +29.0% 2422 ± 16% slabinfo.dmaengine-unmap-16.active_objs
1878 ± 17% +29.0% 2422 ± 16% slabinfo.dmaengine-unmap-16.num_objs
1085 ± 5% -24.1% 823.33 ± 9% slabinfo.file_lock_cache.active_objs
1085 ± 5% -24.1% 823.33 ± 9% slabinfo.file_lock_cache.num_objs
61584 ± 4% -16.6% 51381 ± 5% slabinfo.filp.active_objs
967.00 ± 4% -16.5% 807.67 ± 5% slabinfo.filp.active_slabs
61908 ± 4% -16.5% 51713 ± 5% slabinfo.filp.num_objs
967.00 ± 4% -16.5% 807.67 ± 5% slabinfo.filp.num_slabs
1455 -15.4% 1232 ± 4% slabinfo.nsproxy.active_objs
1455 -15.4% 1232 ± 4% slabinfo.nsproxy.num_objs
84720 ± 6% -18.3% 69210 ± 4% slabinfo.pid.active_objs
1324 ± 6% -18.2% 1083 ± 4% slabinfo.pid.active_slabs
84820 ± 5% -18.2% 69386 ± 4% slabinfo.pid.num_objs
1324 ± 6% -18.2% 1083 ± 4% slabinfo.pid.num_slabs
2112 ± 18% -26.3% 1557 ± 5% slabinfo.scsi_sense_cache.active_objs
2112 ± 18% -26.3% 1557 ± 5% slabinfo.scsi_sense_cache.num_objs
5018 ± 5% -7.6% 4635 ± 4% slabinfo.sock_inode_cache.active_objs
5018 ± 5% -7.6% 4635 ± 4% slabinfo.sock_inode_cache.num_objs
1193 ± 4% +13.8% 1358 ± 4% slabinfo.task_group.active_objs
1193 ± 4% +13.8% 1358 ± 4% slabinfo.task_group.num_objs
62807 ± 3% -14.4% 53757 ± 3% slabinfo.vm_area_struct.active_objs
1571 ± 3% -12.1% 1381 ± 3% slabinfo.vm_area_struct.active_slabs
62877 ± 3% -14.3% 53880 ± 3% slabinfo.vm_area_struct.num_objs
1571 ± 3% -12.1% 1381 ± 3% slabinfo.vm_area_struct.num_slabs
47.45 -47.4 0.00 perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
47.16 -47.2 0.00 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
46.99 -47.0 0.00 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
44.95 -44.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
7.42 ± 2% -7.4 0.00 perf-profile.calltrace.cycles-pp.copy_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
6.32 ± 10% -6.3 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
6.28 ± 10% -6.3 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +0.9 0.85 ± 11% perf-profile.calltrace.cycles-pp._raw_spin_lock.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.9 0.92 ± 4% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +1.1 1.13 ± 7% perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
0.00 +1.2 1.19 ± 7% perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.2 1.22 ± 5% perf-profile.calltrace.cycles-pp.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.3 1.34 ± 7% perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +1.4 1.36 ± 7% perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +4.5 4.54 ± 19% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +4.6 4.64 ± 19% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +6.6 6.64 ± 15% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +6.7 6.68 ± 15% perf-profile.calltrace.cycles-pp.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +7.5 7.54 ± 5% perf-profile.calltrace.cycles-pp.copy_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +44.6 44.55 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +46.6 46.63 ± 3% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
0.00 +46.8 46.81 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +47.1 47.10 ± 3% perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +63.1 63.15 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.39 ± 3% +0.0 0.42 ± 3% perf-profile.children.cycles-pp.radix_tree_lookup_slot
0.21 ± 3% +0.0 0.25 ± 5% perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.06 ± 8% perf-profile.children.cycles-pp.get_vma_policy
0.00 +0.1 0.08 ± 5% perf-profile.children.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.18 ± 6% perf-profile.children.cycles-pp.__page_add_new_anon_rmap
0.00 +1.4 1.35 ± 5% perf-profile.children.cycles-pp.pte_map_lock
0.00 +63.2 63.21 perf-profile.children.cycles-pp.handle_pte_fault
1.40 ± 2% -0.4 1.03 ± 10% perf-profile.self.cycles-pp._raw_spin_lock
0.56 ± 3% -0.2 0.35 ± 6% perf-profile.self.cycles-pp.__handle_mm_fault
0.22 ± 3% -0.0 0.18 ± 7% perf-profile.self.cycles-pp.alloc_set_pte
0.09 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.vmacache_find
0.39 ± 2% +0.0 0.41 ± 3% perf-profile.self.cycles-pp.__radix_tree_lookup
0.18 +0.0 0.20 ± 6% perf-profile.self.cycles-pp.mem_cgroup_charge_statistics
0.17 ± 2% +0.0 0.20 ± 7% perf-profile.self.cycles-pp.___might_sleep
0.33 ± 2% +0.0 0.36 ± 6% perf-profile.self.cycles-pp.handle_mm_fault
0.20 ± 2% +0.0 0.24 ± 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 perf-profile.self.cycles-pp.finish_fault
0.00 +0.1 0.05 perf-profile.self.cycles-pp.get_vma_policy
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.25 ± 5% perf-profile.self.cycles-pp.handle_pte_fault
0.00 +0.5 0.49 ± 8% perf-profile.self.cycles-pp.pte_map_lock
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_task/thp_enabled/test/cpufreq_governor:
lkp-skl-4sp1/will-it-scale/debian-x86_64-2018-04-03.cgz/x86_64-rhel-7.2/gcc-7/100%/never/page_fault2/performance
commit:
ba98a1cdad71d259a194461b3a61471b49b14df1
a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
ba98a1cdad71d259 a7a8993bfe3ccb54ad468b9f17
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:3 -33% :3 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
:3 33% 1:3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
1:3 -33% :3 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
1:3 24% 2:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
3:3 46% 5:3 perf-profile.calltrace.cycles-pp.error_entry
5:3 -9% 5:3 perf-profile.children.cycles-pp.error_entry
2:3 -4% 2:3 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8147 -18.8% 6613 will-it-scale.per_thread_ops
93113 +17.0% 108982 will-it-scale.time.involuntary_context_switches
4.732e+08 -19.0% 3.833e+08 will-it-scale.time.minor_page_faults
5854 +12.0% 6555 will-it-scale.time.percent_of_cpu_this_job_got
35247 +12.1% 39495 will-it-scale.time.system_time
5546661 -15.5% 4689314 will-it-scale.time.voluntary_context_switches
15801637 -1.9% 15504487 will-it-scale.workload
1.43 ± 11% -59.7% 0.58 ± 28% irq_exception_noise.__do_page_fault.min
2811 ± 3% +23.7% 3477 ± 3% kthread_noise.total_time
292776 ± 5% +39.6% 408829 ± 21% meminfo.DirectMap4k
19.80 -3.7 16.12 mpstat.cpu.idle%
29940 -14.5% 25593 uptime.idle
24064 ± 3% -8.5% 22016 vmstat.system.cs
34.86 -1.9% 34.19 boot-time.boot
26.95 -2.8% 26.19 ± 2% boot-time.kernel_boot
7190569 ± 2% -15.2% 6100136 ± 3% softirqs.RCU
5513663 -13.8% 4751548 softirqs.SCHED
18064 ± 2% +24.3% 22461 ± 7% numa-vmstat.node0.nr_slab_unreclaimable
8507 ± 12% -16.8% 7075 ± 4% numa-vmstat.node2.nr_slab_reclaimable
18719 ± 9% -19.6% 15043 ± 4% numa-vmstat.node3.nr_slab_unreclaimable
72265 ± 2% +24.3% 89855 ± 7% numa-meminfo.node0.SUnreclaim
115980 ± 4% +22.6% 142233 ± 12% numa-meminfo.node0.Slab
34035 ± 12% -16.8% 28307 ± 4% numa-meminfo.node2.SReclaimable
74888 ± 9% -19.7% 60162 ± 4% numa-meminfo.node3.SUnreclaim
93113 +17.0% 108982 time.involuntary_context_switches
4.732e+08 -19.0% 3.833e+08 time.minor_page_faults
5854 +12.0% 6555 time.percent_of_cpu_this_job_got
35247 +12.1% 39495 time.system_time
5546661 -15.5% 4689314 time.voluntary_context_switches
4.792e+09 -1.9% 4.699e+09 proc-vmstat.numa_hit
4.791e+09 -1.9% 4.699e+09 proc-vmstat.numa_local
40447 ± 11% +13.2% 45804 ± 6% proc-vmstat.pgactivate
4.778e+09 -1.9% 4.688e+09 proc-vmstat.pgalloc_normal
4.767e+09 -1.9% 4.675e+09 proc-vmstat.pgfault
4.791e+09 -1.9% 4.699e+09 proc-vmstat.pgfree
230178 ± 2% -10.1% 206883 ± 3% cpuidle.C1.usage
1.617e+09 -15.0% 1.375e+09 cpuidle.C1E.time
4514401 -14.1% 3878206 cpuidle.C1E.usage
2.087e+10 -18.5% 1.701e+10 cpuidle.C6.time
24458365 -18.0% 20045336 cpuidle.C6.usage
1163758 -16.1% 976094 ± 4% cpuidle.POLL.time
17907 -14.6% 15294 ± 4% cpuidle.POLL.usage
1758 +4.5% 1838 turbostat.Avg_MHz
227522 ± 2% -10.2% 204426 ± 3% turbostat.C1
4512700 -14.2% 3873264 turbostat.C1E
1.39 -0.2 1.18 turbostat.C1E%
24452583 -18.0% 20039031 turbostat.C6
17.85 -3.3 14.55 turbostat.C6%
7.44 -16.8% 6.19 turbostat.CPU%c1
11.72 -19.3% 9.45 turbostat.CPU%c6
7.51 -21.3% 5.91 turbostat.Pkg%pc2
389.33 +1.6% 395.59 turbostat.PkgWatt
559.33 ± 13% -17.9% 459.33 ± 20% slabinfo.dmaengine-unmap-128.active_objs
559.33 ± 13% -17.9% 459.33 ± 20% slabinfo.dmaengine-unmap-128.num_objs
57734 ± 3% -5.7% 54421 ± 4% slabinfo.filp.active_objs
905.67 ± 3% -5.6% 854.67 ± 4% slabinfo.filp.active_slabs
57981 ± 3% -5.6% 54720 ± 4% slabinfo.filp.num_objs
905.67 ± 3% -5.6% 854.67 ± 4% slabinfo.filp.num_slabs
1378 -12.0% 1212 ± 7% slabinfo.nsproxy.active_objs
1378 -12.0% 1212 ± 7% slabinfo.nsproxy.num_objs
507.33 ± 7% -26.8% 371.33 ± 2% slabinfo.secpath_cache.active_objs
507.33 ± 7% -26.8% 371.33 ± 2% slabinfo.secpath_cache.num_objs
4788 ± 5% -8.3% 4391 ± 2% slabinfo.sock_inode_cache.active_objs
4788 ± 5% -8.3% 4391 ± 2% slabinfo.sock_inode_cache.num_objs
1431 ± 8% -12.3% 1255 ± 3% slabinfo.task_group.active_objs
1431 ± 8% -12.3% 1255 ± 3% slabinfo.task_group.num_objs
4.27 ± 17% +27.0% 5.42 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.avg
13.44 ± 62% +73.6% 23.33 ± 24% sched_debug.cfs_rq:/.runnable_load_avg.stddev
772.55 ± 21% -32.7% 520.27 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.max
4.39 ± 15% +29.0% 5.66 ± 11% sched_debug.cpu.cpu_load[0].avg
152.09 ± 72% +83.9% 279.67 ± 33% sched_debug.cpu.cpu_load[0].max
13.84 ± 58% +78.7% 24.72 ± 29% sched_debug.cpu.cpu_load[0].stddev
4.53 ± 14% +25.8% 5.70 ± 10% sched_debug.cpu.cpu_load[1].avg
156.58 ± 66% +76.6% 276.58 ± 33% sched_debug.cpu.cpu_load[1].max
14.02 ± 55% +72.4% 24.17 ± 28% sched_debug.cpu.cpu_load[1].stddev
4.87 ± 11% +17.3% 5.72 ± 9% sched_debug.cpu.cpu_load[2].avg
1.58 ± 2% +13.5% 1.79 ± 6% sched_debug.cpu.nr_running.max
16694 -14.6% 14259 sched_debug.cpu.nr_switches.min
31989 ± 13% +20.6% 38584 ± 6% sched_debug.cpu.nr_switches.stddev
16505 -14.8% 14068 sched_debug.cpu.sched_count.min
32084 ± 13% +19.9% 38482 ± 6% sched_debug.cpu.sched_count.stddev
8185 -15.0% 6957 sched_debug.cpu.sched_goidle.avg
12151 ± 2% -13.5% 10507 sched_debug.cpu.sched_goidle.max
7867 -15.7% 6631 sched_debug.cpu.sched_goidle.min
7595 -16.1% 6375 sched_debug.cpu.ttwu_count.min
15873 ± 13% +21.2% 19239 ± 6% sched_debug.cpu.ttwu_count.stddev
5244 ± 17% +17.0% 6134 ± 5% sched_debug.cpu.ttwu_local.avg
15646 ± 12% +21.5% 19008 ± 6% sched_debug.cpu.ttwu_local.stddev
0.85 -0.0 0.81 perf-stat.branch-miss-rate%
3.689e+10 -4.6% 3.518e+10 perf-stat.branch-misses
57.39 +0.6 58.00 perf-stat.cache-miss-rate%
4.014e+11 -1.2% 3.967e+11 perf-stat.cache-misses
6.994e+11 -2.2% 6.84e+11 perf-stat.cache-references
14605393 ± 3% -8.5% 13369913 perf-stat.context-switches
9.21 +4.5% 9.63 perf-stat.cpi
2.037e+14 +4.6% 2.13e+14 perf-stat.cpu-cycles
44424 -2.0% 43541 perf-stat.cpu-migrations
1.29 -0.1 1.24 perf-stat.dTLB-store-miss-rate%
4.018e+10 -2.8% 3.905e+10 perf-stat.dTLB-store-misses
3.071e+12 +1.4% 3.113e+12 perf-stat.dTLB-stores
93.04 +1.5 94.51 perf-stat.iTLB-load-miss-rate%
4.946e+09 +19.3% 5.903e+09 ± 5% perf-stat.iTLB-load-misses
3.702e+08 -7.5% 3.423e+08 ± 2% perf-stat.iTLB-loads
4470 -15.9% 3760 ± 5% perf-stat.instructions-per-iTLB-miss
0.11 -4.3% 0.10 perf-stat.ipc
4.767e+09 -1.9% 4.675e+09 perf-stat.minor-faults
1.46 ± 4% -0.1 1.33 ± 9% perf-stat.node-load-miss-rate%
4.91 +1.7 6.65 ± 2% perf-stat.node-store-miss-rate%
1.195e+09 +32.8% 1.587e+09 ± 2% perf-stat.node-store-misses
2.313e+10 -3.7% 2.227e+10 perf-stat.node-stores
4.767e+09 -1.9% 4.675e+09 perf-stat.page-faults
1399047 +2.0% 1427115 perf-stat.path-length
8908 ± 73% -100.0% 0.00 latency_stats.avg.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3604 ±141% -100.0% 0.00 latency_stats.avg.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
61499 ±130% -92.6% 4534 ± 16% latency_stats.avg.expand_files.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4391 ±138% -70.9% 1277 ±129% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
67311 ±112% -48.5% 34681 ± 36% latency_stats.avg.max
3956 ±138% +320.4% 16635 ±140% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
164.67 ± 30% +7264.0% 12126 ±138% latency_stats.avg.flush_work.fsnotify_destroy_group.inotify_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +5.4e+105% 5367 ±141% latency_stats.avg.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
36937 ±119% -100.0% 0.00 latency_stats.max.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3604 ±141% -100.0% 0.00 latency_stats.max.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
84146 ±107% -72.5% 23171 ± 31% latency_stats.max.expand_files.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4391 ±138% -70.9% 1277 ±129% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
5817 ± 83% -69.7% 1760 ± 67% latency_stats.max.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6720 ±137% +1628.2% 116147 ±141% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
164.67 ± 30% +7264.0% 12126 ±138% latency_stats.max.flush_work.fsnotify_destroy_group.inotify_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.2e+106% 12153 ±141% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
110122 ±120% -100.0% 0.00 latency_stats.sum.call_rwsem_down_read_failed.m_start.seq_read.__vfs_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3604 ±141% -100.0% 0.00 latency_stats.sum.call_rwsem_down_write_failed.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
12078828 ±139% -99.3% 89363 ± 29% latency_stats.sum.expand_files.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
144453 ±120% -80.9% 27650 ± 19% latency_stats.sum.poll_schedule_timeout.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
4391 ±138% -70.9% 1277 ±129% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_lookup.nfs_lookup_revalidate.lookup_fast.walk_component.link_path_walk.path_lookupat.filename_lookup
9438 ± 86% -68.4% 2980 ± 35% latency_stats.sum.pipe_write.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
31656 ±138% +320.4% 133084 ±140% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_run_task.rpc_call_sync.nfs3_rpc_wrapper.nfs3_proc_getattr.__nfs_revalidate_inode.nfs_do_access.nfs_permission.inode_permission.link_path_walk.path_lookupat
164.67 ± 30% +7264.0% 12126 ±138% latency_stats.sum.flush_work.fsnotify_destroy_group.inotify_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +8.8e+105% 8760 ±141% latency_stats.sum.msleep_interruptible.uart_wait_until_sent.tty_wait_until_sent.tty_port_close_start.tty_port_close.tty_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.3e+106% 12897 ±141% latency_stats.sum.tty_wait_until_sent.tty_port_close_start.tty_port_close.tty_release.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.2e+106% 32207 ±141% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.43 ± 3% -44.4 0.00 perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
44.13 ± 3% -44.1 0.00 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
43.95 ± 3% -43.9 0.00 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
41.85 ± 4% -41.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
7.74 ± 8% -7.7 0.00 perf-profile.calltrace.cycles-pp.copy_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.19 ± 4% -7.2 0.00 perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
7.15 ± 4% -7.2 0.00 perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.09 ± 3% -5.1 0.00 perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
4.99 ± 3% -5.0 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
0.93 ± 6% -0.1 0.81 ± 2% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +0.8 0.84 perf-profile.calltrace.cycles-pp._raw_spin_lock.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.9 0.92 ± 3% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +1.1 1.08 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
0.00 +1.1 1.14 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.2 1.17 perf-profile.calltrace.cycles-pp.pte_map_lock.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +1.3 1.29 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +1.3 1.31 perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
61.62 +1.7 63.33 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
41.73 ± 4% +3.0 44.75 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.00 +4.6 4.55 ± 15% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +4.6 4.65 ± 14% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +6.6 6.57 ± 10% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +6.6 6.61 ± 10% perf-profile.calltrace.cycles-pp.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +7.2 7.25 ± 2% perf-profile.calltrace.cycles-pp.copy_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
41.41 ± 70% +22.3 63.67 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
42.19 ± 70% +22.6 64.75 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
42.20 ± 70% +22.6 64.76 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
42.27 ± 70% +22.6 64.86 perf-profile.calltrace.cycles-pp.page_fault
0.00 +44.9 44.88 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
0.00 +46.9 46.92 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
0.00 +47.1 47.10 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +47.4 47.37 perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +63.0 63.00 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.97 ± 6% -0.1 0.84 ± 2% perf-profile.children.cycles-pp.find_get_entry
1.23 ± 6% -0.1 1.11 perf-profile.children.cycles-pp.find_lock_entry
0.09 ± 10% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.unlock_page
0.19 ± 4% +0.0 0.21 ± 2% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.21 ± 2% +0.0 0.25 perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.get_vma_policy
0.00 +0.1 0.08 perf-profile.children.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.18 ± 2% perf-profile.children.cycles-pp.__page_add_new_anon_rmap
0.00 +1.3 1.30 perf-profile.children.cycles-pp.pte_map_lock
63.40 +1.6 64.97 perf-profile.children.cycles-pp.__do_page_fault
63.19 +1.6 64.83 perf-profile.children.cycles-pp.do_page_fault
61.69 +1.7 63.36 perf-profile.children.cycles-pp.__handle_mm_fault
63.19 +1.7 64.86 perf-profile.children.cycles-pp.page_fault
61.99 +1.7 63.70 perf-profile.children.cycles-pp.handle_mm_fault
72.27 +2.2 74.52 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
67.51 +2.4 69.87 perf-profile.children.cycles-pp._raw_spin_lock
44.49 ± 3% +3.0 47.45 perf-profile.children.cycles-pp.alloc_pages_vma
44.28 ± 3% +3.0 47.26 perf-profile.children.cycles-pp.__alloc_pages_nodemask
44.13 ± 3% +3.0 47.12 perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +63.1 63.06 perf-profile.children.cycles-pp.handle_pte_fault
1.46 ± 7% -0.5 1.01 perf-profile.self.cycles-pp._raw_spin_lock
0.58 ± 6% -0.2 0.34 perf-profile.self.cycles-pp.__handle_mm_fault
0.55 ± 6% -0.1 0.44 ± 2% perf-profile.self.cycles-pp.find_get_entry
0.22 ± 5% -0.1 0.16 ± 2% perf-profile.self.cycles-pp.alloc_set_pte
0.10 ± 8% -0.0 0.08 perf-profile.self.cycles-pp.down_read_trylock
0.09 ± 5% -0.0 0.07 perf-profile.self.cycles-pp.unlock_page
0.06 -0.0 0.05 perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.20 ± 2% +0.0 0.24 ± 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 perf-profile.self.cycles-pp.finish_fault
0.00 +0.1 0.05 perf-profile.self.cycles-pp.get_vma_policy
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.__lru_cache_add_active_or_unevictable
0.00 +0.2 0.25 perf-profile.self.cycles-pp.handle_pte_fault
0.00 +0.5 0.46 ± 7% perf-profile.self.cycles-pp.pte_map_lock
72.26 +2.3 74.52 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
[-- Attachment #3: perf-profile.zip --]
[-- Type: application/zip, Size: 19025 bytes --]
* Re: [PATCH v11 00/26] Speculative page faults
2018-06-11 7:49 ` Song, HaiyanX
@ 2018-07-02 8:59 ` Laurent Dufour
2018-07-04 3:23 ` Song, HaiyanX
-1 siblings, 1 reply; 106+ messages in thread
From: Laurent Dufour @ 2018-07-02 8:59 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 11/06/2018 09:49, Song, HaiyanX wrote:
> Hi Laurent,
>
> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux Kernel Performance)
> on an Intel 4-socket Skylake platform. This time we only ran the cases which had shown regressions on
> the v9 patch series.
>
> The regression results are sorted by the metric will-it-scale.per_thread_ops.
> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> commit id:
> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
> Benchmark: will-it-scale
> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>
> Metrics:
> will-it-scale.per_process_ops=processes/nr_cpu
> will-it-scale.per_thread_ops=threads/nr_cpu
> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> THP: enable / disable
> nr_task:100%
>
> 1. Regressions:
>
> a). Enable THP
> testcase base change head metric
> page_fault3/enable THP 10519 -20.5% 836 will-it-scale.per_thread_ops
> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>
> b). Disable THP
> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>
> Note: for the above test result values, higher is better.
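The %change column in these tables is a plain relative delta of the head commit against the base commit. A minimal sketch (the helper name is mine, not part of the LKP tooling):

```python
def pct_change(base, head):
    """Relative change of head vs. base, in percent (negative = regression)."""
    return (head - base) / base * 100.0

# page_fault2 with THP disabled, from the table above
print(round(pct_change(8147, 6613), 1))  # -18.8
```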
I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
get reproducible results. The results show huge variation, even on the vanilla
kernel, so I can't draw any conclusions from them.
I tried on a smaller node (80 CPUs, 32G), where the tests ran better, but I didn't
measure any change between the vanilla and the SPF-patched kernels:
test THP enabled 4.17.0-rc4-mm1 spf delta
page_fault3_threads 2697.7 2683.5 -0.53%
page_fault2_threads 170660.6 169574.1 -0.64%
context_switch1_threads 6915269.2 6877507.3 -0.55%
context_switch1_processes 6478076.2 6529493.5 0.79%
brk1 243391.2 238527.5 -2.00%
Tests were run 10 times, no high variation detected.
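Whether run-to-run variation counts as "high" can be made concrete with the coefficient of variation (sample stddev relative to the mean) across the repeated runs — an illustrative sketch with made-up per-run numbers, not the actual LKP tooling:

```python
import statistics

def cv_percent(samples):
    """Coefficient of variation: sample stddev as a percentage of the mean."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100.0

# Hypothetical throughput values from 10 repeated runs of one test
runs = [243391, 244102, 242870, 243955, 243120,
        243600, 242995, 243810, 243300, 243507]
cv = cv_percent(runs)
print(f"CV = {cv:.2f}%")  # well under 1% here, i.e. a stable result
```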
Did you see high variation on your side? How many times were the tests run to
compute the average values?
Thanks,
Laurent.
>
> 2. Improvement: not found improvement based on the selected test cases.
>
>
> Best regards
> Haiyan Song
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Monday, May 28, 2018 4:54 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 28/05/2018 10:22, Haiyan Song wrote:
>> Hi Laurent,
>>
>> Yes, these tests are done on V9 patch.
>
> Do you plan to give this V11 a run ?
>
>>
>>
>> Best regards,
>> Haiyan Song
>>
>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>
>>>> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series
>>>> tested on the Intel 4s Skylake platform.
>>>
>>> Hi,
>>>
>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>> series" while responding to the v11 header series...
>>> Were these tests done on v9 or v11 ?
>>>
>>> Cheers,
>>> Laurent.
>>>
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>> Commit id:
>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>> Benchmark suite: will-it-scale
>>>> Download link:
>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task: 100%
>>>>
>>>> 1. Regressions:
>>>> a) THP enabled:
>>>> testcase base change head metric
>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>
>>>> b) THP disabled:
>>>> testcase base change head metric
>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>
>>>> 2. Improvements:
>>>> a) THP enabled:
>>>> testcase base change head metric
>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>
>>>> b) THP disabled:
>>>> testcase base change head metric
>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>
>>>> Notes: for the values in the "change" column above, a higher value means that the related testcase result
>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>>
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>> page faults without holding the mm semaphore [1].
>>>>
>>>> The idea is to try to handle user space page faults without holding the
>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>> processes since the page fault handler will not wait for other threads'
>>>> memory layout changes to complete, assuming those changes happen in another
>>>> part of the process's memory space. This type of page fault is named a
>>>> speculative page fault. If the speculative page fault fails because
>>>> concurrency is detected or because the underlying PMD or PTE tables are not
>>>> yet allocated, its processing is aborted and a classic page fault is tried.
>>>>
>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>> which protects access to the mm_rb tree. Previously this was done using
>>>> SRCU, but it introduced a lot of scheduling work to process the VMAs'
>>>> freeing, which hit performance by 20% as reported by Kemi Wang [2]. Using a
>>>> rwlock to protect access to the mm_rb tree limits the locking contention to
>>>> these operations, which are expected to be O(log n). In addition, to ensure
>>>> that the VMA is not freed behind our back, a reference count is added and
>>>> two services (get_vma() and put_vma()) are introduced to handle it. Once a
>>>> VMA is fetched from the RB tree using get_vma(), it must later be released
>>>> using put_vma(). With this, I no longer see the overhead I observed with
>>>> the will-it-scale benchmark.
>>>>
>>>> The VMA's attributes checked during the speculative page fault processing
>>>> have to be protected against parallel changes. This is done by using a per
>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>> handler to quickly check for parallel changes in progress and to abort the
>>>> speculative page fault in that case.
>>>>
>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>> VMA's attributes to verify whether the page fault can be handled this way.
>>>> The VMA is protected through a sequence lock which allows fast detection of
>>>> concurrent VMA changes. If such a change is detected, the speculative page
>>>> fault is aborted and a *classic* page fault is tried instead. VMA sequence
>>>> locking is added where the VMA attributes which are checked during the page
>>>> fault are modified.
>>>>
>>>> When the PTE is fetched, the VMA is checked to see if it has been changed.
>>>> Once the page table is locked, the VMA is known to be valid, and any other
>>>> change touching this PTE will need to take the page table lock, so no
>>>> parallel change is possible at this time.
>>>>
>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>> checking the PMD to ensure that there is no ongoing collapsing operation.
>>>> Since khugepaged first sets the PMD to pmd_none and then waits for the
>>>> other CPUs to have caught the IPI interrupt, if the PMD is valid at the
>>>> time the PTE is locked, we have the guarantee that the collapsing operation
>>>> will have to wait on the PTE lock to move forward. This allows the SPF
>>>> handler to map the PTE safely. If the PMD value is different from the one
>>>> recorded at the beginning of the SPF operation, the classic page fault
>>>> handler will be called to handle the operation while holding the mmap_sem.
>>>> As the PTE lock is taken with interrupts disabled, the lock is taken using
>>>> spin_trylock() to avoid a deadlock when handling a page fault while a TLB
>>>> invalidation is requested by another CPU holding the PTE lock.
>>>>
>>>> In pseudo code, this could be seen as:
>>>> speculative_page_fault()
>>>> {
>>>>         vma = get_vma()
>>>>         check vma sequence count
>>>>         check vma's support
>>>>         disable interrupt
>>>>                 check pgd,p4d,...,pte
>>>>                 save pmd and pte in vmf
>>>>                 save vma sequence counter in vmf
>>>>         enable interrupt
>>>>         check vma sequence count
>>>>         handle_pte_fault(vma)
>>>>                 ..
>>>>                 page = alloc_page()
>>>>                 pte_map_lock()
>>>>                         disable interrupt
>>>>                                 abort if sequence counter has changed
>>>>                                 abort if pmd or pte has changed
>>>>                                 pte map and lock
>>>>                         enable interrupt
>>>>                 if abort
>>>>                         free page
>>>>                         abort
>>>>                 ...
>>>> }
>>>>
>>>> arch_fault_handler()
>>>> {
>>>>         if (speculative_page_fault(&vma))
>>>>                 goto done
>>>> again:
>>>>         lock(mmap_sem)
>>>>         vma = find_vma();
>>>>         handle_pte_fault(vma);
>>>>         if retry
>>>>                 unlock(mmap_sem)
>>>>                 goto again;
>>>> done:
>>>>         handle fault error
>>>> }
>>>>
>>>> Support for THP is not done because when checking for the PMD, we can be
>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>> populated or if the underlying PTEs are in the process of being collapsed.
>>>> So we cannot safely allocate a PMD if pmd_none() is true.
>>>>
>>>> This series adds a new software performance event named 'speculative-faults'
>>>> or 'spf'. It counts the number of page fault events handled speculatively.
>>>> When recording 'faults,spf' events, the 'faults' event counts the total
>>>> number of page fault events while 'spf' only counts the part of the faults
>>>> processed speculatively.
>>>>
>>>> This series also introduces some trace events which allow identifying why
>>>> page faults were not processed speculatively. They don't take into account
>>>> the faults generated by a single-threaded process, which are directly
>>>> processed while holding the mmap_sem. These trace events are grouped in a
>>>> system named 'pagefault':
>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind our
>>>> back
>>>>
>>>> To record all the related events, the easiest way is to run perf with the
>>>> following arguments:
>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>
>>>> There is also a dedicated vmstat counter showing the number of successful
>>>> page faults handled speculatively. It can be seen this way:
>>>> $ grep speculative_pgfault /proc/vmstat
>>>>
>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>> on x86, PowerPC and arm64.
>>>>
>>>> ---------------------
>>>> Real Workload results
>>>>
>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>> in-memory multithreaded database product" on a 176-core SMT8 Power system
>>>> which showed a 30% improvement in the number of transactions processed per
>>>> second. This run was done on the v6 series, but the changes introduced in
>>>> this new version should not impact the performance boost seen.
>>>>
>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>> series:
>>>> vanilla spf
>>>> faults 89.418 101.364 +13%
>>>> spf n/a 97.989
>>>>
>>>> With the SPF kernel, most of the page faults were processed in a
>>>> speculative way.
>>>>
>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave it
>>>> a try on an Android device. He reported that the application launch time
>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>> 20%.
>>>>
>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>> MSM845 (8 cores) with 6GB of memory (lower is better):
>>>>
>>>> Application 4.9 4.9+spf delta
>>>> com.tencent.mm 416 389 -7%
>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>> com.tencent.mtt 455 454 0%
>>>> com.qqgame.hlddz 1497 1409 -6%
>>>> com.autonavi.minimap 711 701 -1%
>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>> com.immomo.momo 501 487 -3%
>>>> com.tencent.peng 2145 2112 -2%
>>>> com.smile.gifmaker 491 461 -6%
>>>> com.baidu.BaiduMap 479 366 -23%
>>>> com.taobao.taobao 1341 1198 -11%
>>>> com.baidu.searchbox 333 314 -6%
>>>> com.tencent.mobileqq 394 384 -3%
>>>> com.sina.weibo 907 906 0%
>>>> com.youku.phone 816 731 -11%
>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>> com.UCMobile 415 411 -1%
>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>> com.tencent.qqmusic 336 329 -2%
>>>> com.sankuai.meituan 1661 1302 -22%
>>>> com.netease.cloudmusic 1193 1200 1%
>>>> air.tv.douyu.android 4257 4152 -2%
>>>>
>>>> ------------------
>>>> Benchmarks results
>>>>
>>>> Base kernel is v4.17.0-rc4-mm1
>>>> SPF is BASE + this series
>>>>
>>>> Kernbench:
>>>> ----------
>>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>>> kernel (the kernel is built 5 times):
>>>>
>>>> Average Half load -j 8
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>
>>>> Average Optimal load -j 16
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>
>>>>
>>>> During a run on the SPF, perf events were captured:
>>>> Performance counter stats for '../kernbench -M':
>>>> 526743764 faults
>>>> 210 spf
>>>> 3 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 2278 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> Very few speculative page faults were recorded as most of the processes
>>>> involved are single-threaded (it seems that on this architecture some
>>>> threads were created during the kernel build process).
>>>>
>>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>>
>>>> Average Half load -j 40
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>
>>>> Average Optimal load -j 80
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>
>>>> During a run on the SPF, perf events were captured:
>>>> Performance counter stats for '../kernbench -M':
>>>> 116730856 faults
>>>> 0 spf
>>>> 3 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 476 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> Most of the processes involved are single-threaded, so SPF is not
>>>> activated, but there is no impact on performance.
>>>>
>>>> Ebizzy:
>>>> -------
>>>> The test counts the number of records per second it can manage; the
>>>> higher, the better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
>>>> consistent results, I repeated the test 100 times and measured the average.
>>>> The reported number is records processed per second.
>>>>
>>>> BASE SPF delta
>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>
>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>> Performance counter stats for './ebizzy -mTt 16':
>>>> 1706379 faults
>>>> 1674599 spf
>>>> 30588 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 363 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>> Performance counter stats for './ebizzy -mTt 80':
>>>> 1874773 faults
>>>> 1461153 spf
>>>> 413293 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 200 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> In ebizzy's case most of the page faults were handled in a speculative
>>>> way, leading to the ebizzy performance boost.
>>>>
>>>> ------------------
>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>> - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>>> and Minchan Kim, hopefully.
>>>> - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>> __do_page_fault().
>>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try lock fails
>>>> instead of aborting the speculative page fault handling. Dropped the now
>>>> useless trace event pagefault:spf_pte_lock.
>>>> - No longer try to reuse the fetched VMA during the speculative page fault
>>>> handling when retrying is needed. This added a lot of complexity and
>>>> additional tests didn't show a significant performance improvement.
>>>> - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>
>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>
>>>>
>>>> Laurent Dufour (20):
>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>> mm: make pte_unmap_same compatible with SPF
>>>> mm: introduce INIT_VMA()
>>>> mm: protect VMA modifications using VMA sequence count
>>>> mm: protect mremap() against SPF handler
>>>> mm: protect SPF handler against anon_vma changes
>>>> mm: cache some VMA fields in the vm_fault structure
>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>> mm: introduce __vm_normal_page()
>>>> mm: introduce __page_add_new_anon_rmap()
>>>> mm: protect mm_rb tree with a rwlock
>>>> mm: adding speculative page fault failure trace events
>>>> perf: add a speculative page fault sw event
>>>> perf tools: add support for the SPF perf event
>>>> mm: add speculative page fault vmstats
>>>> powerpc/mm: add speculative page fault
>>>>
>>>> Mahendran Ganesh (2):
>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> arm64/mm: add speculative page fault
>>>>
>>>> Peter Zijlstra (4):
>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>> mm: VMA sequence count
>>>> mm: provide speculative fault infrastructure
>>>> x86/mm: add speculative pagefault handling
>>>>
>>>> arch/arm64/Kconfig | 1 +
>>>> arch/arm64/mm/fault.c | 12 +
>>>> arch/powerpc/Kconfig | 1 +
>>>> arch/powerpc/mm/fault.c | 16 +
>>>> arch/x86/Kconfig | 1 +
>>>> arch/x86/mm/fault.c | 27 +-
>>>> fs/exec.c | 2 +-
>>>> fs/proc/task_mmu.c | 5 +-
>>>> fs/userfaultfd.c | 17 +-
>>>> include/linux/hugetlb_inline.h | 2 +-
>>>> include/linux/migrate.h | 4 +-
>>>> include/linux/mm.h | 136 +++++++-
>>>> include/linux/mm_types.h | 7 +
>>>> include/linux/pagemap.h | 4 +-
>>>> include/linux/rmap.h | 12 +-
>>>> include/linux/swap.h | 10 +-
>>>> include/linux/vm_event_item.h | 3 +
>>>> include/trace/events/pagefault.h | 80 +++++
>>>> include/uapi/linux/perf_event.h | 1 +
>>>> kernel/fork.c | 5 +-
>>>> mm/Kconfig | 22 ++
>>>> mm/huge_memory.c | 6 +-
>>>> mm/hugetlb.c | 2 +
>>>> mm/init-mm.c | 3 +
>>>> mm/internal.h | 20 ++
>>>> mm/khugepaged.c | 5 +
>>>> mm/madvise.c | 6 +-
>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>> mm/mempolicy.c | 51 ++-
>>>> mm/migrate.c | 6 +-
>>>> mm/mlock.c | 13 +-
>>>> mm/mmap.c | 229 ++++++++++---
>>>> mm/mprotect.c | 4 +-
>>>> mm/mremap.c | 13 +
>>>> mm/nommu.c | 2 +-
>>>> mm/rmap.c | 5 +-
>>>> mm/swap.c | 6 +-
>>>> mm/swap_state.c | 8 +-
>>>> mm/vmstat.c | 5 +-
>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>> tools/perf/util/evsel.c | 1 +
>>>> tools/perf/util/parse-events.c | 4 +
>>>> tools/perf/util/parse-events.l | 1 +
>>>> tools/perf/util/python.c | 1 +
>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>> create mode 100644 include/trace/events/pagefault.h
>>>>
>>>> --
>>>> 2.7.4
>>>>
>>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
2018-07-02 8:59 ` Laurent Dufour
@ 2018-07-04 3:23 ` Song, HaiyanX
0 siblings, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-07-04 3:23 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Laurent,
For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the test cases below were all run 3 times.
Checking the test results, only page_fault3_threads/enable THP has a 6% stddev for the head commit; the other tests have lower stddev.
I did not find any other high variation in the test case results.
a). Enable THP
testcase base stddev change head stddev metric
page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
b). Disable THP
page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
Best regards,
Haiyan Song
________________________________________
From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Monday, July 02, 2018 4:59 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 11/06/2018 09:49, Song, HaiyanX wrote:
> Hi Laurent,
>
> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux Kernel Performance)
> on the Intel 4s Skylake platform. This time only the cases which had been run and shown regressions on the
> V9 patch series were tested.
>
> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> commit id:
> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
> Benchmark: will-it-scale
> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>
> Metrics:
> will-it-scale.per_process_ops=processes/nr_cpu
> will-it-scale.per_thread_ops=threads/nr_cpu
> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> THP: enable / disable
> nr_task:100%
>
> 1. Regressions:
>
> a). Enable THP
> testcase base change head metric
> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>
> b). Disable THP
> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>
> Notes: for the above test result values, higher is better.
I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
get reproducible results. The results show huge variation, even on the vanilla
kernel, so I can't draw any conclusion from them.
I tried on a smaller node (80 CPUs, 32G) and the tests ran more consistently,
but I didn't measure any change between the vanilla and the SPF-patched kernels:
test THP enabled 4.17.0-rc4-mm1 spf delta
page_fault3_threads 2697.7 2683.5 -0.53%
page_fault2_threads 170660.6 169574.1 -0.64%
context_switch1_threads 6915269.2 6877507.3 -0.55%
context_switch1_processes 6478076.2 6529493.5 0.79%
brk1 243391.2 238527.5 -2.00%
Tests were run 10 times, no high variation detected.
Did you see high variation on your side? How many times were the tests run to
compute the average values?
Thanks,
Laurent.
>
> 2. Improvements: no improvement was found based on the selected test cases.
>
>
> Best regards
> Haiyan Song
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Monday, May 28, 2018 4:54 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 28/05/2018 10:22, Haiyan Song wrote:
>> Hi Laurent,
>>
>> Yes, these tests are done on V9 patch.
>
> Do you plan to give this V11 a run ?
>
>>
>>
>> Best regards,
>> Haiyan Song
>>
>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>
>>>> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the V9 patch series
>>>> tested on the Intel 4s Skylake platform.
>>>
>>> Hi,
>>>
>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>> series" while responding to the v11 header series...
>>> Were these tests done on v9 or v11 ?
>>>
>>> Cheers,
>>> Laurent.
>>>
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>> Commit id:
>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>> Benchmark suite: will-it-scale
>>>> Download link:
>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task: 100%
>>>>
>>>> 1. Regressions:
>>>> a) THP enabled:
>>>> testcase base change head metric
>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>
>>>> b) THP disabled:
>>>> testcase base change head metric
>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>
>>>> 2. Improvements:
>>>> a) THP enabled:
>>>> testcase base change head metric
>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>
>>>> b) THP disabled:
>>>> testcase base change head metric
>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>
>>>> Notes: for the values in the "change" column above, a higher value means that the related testcase result
>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>>
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>> page faults without holding the mm semaphore [1].
>>>>
>>>> The idea is to try to handle user space page faults without holding the
>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>> processes, since the page fault handler will not wait for other threads'
>>>> memory layout changes to complete, assuming that those changes happen in
>>>> another part of the process's memory space. This type of page fault is
>>>> named a speculative page fault. If the speculative page fault fails
>>>> because concurrency is detected or because the underlying PMD or PTE
>>>> tables are not yet allocated, its processing is aborted and a classic
>>>> page fault is tried instead.
>>>>
>>>> The speculative page fault (SPF) handler has to look for the VMA matching
>>>> the fault address without holding the mmap_sem; this is done by introducing
>>>> a rwlock which protects access to the mm_rb tree. Previously this was done
>>>> using SRCU, but it introduced a lot of scheduling to process the VMA
>>>> freeing operations, which hit performance by 20% as reported by
>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>> the locking contention to these operations, which are expected to
>>>> be O(log n). In addition, to ensure that the VMA is not freed behind
>>>> our back, a reference count is added and 2 services (get_vma() and
>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>> fetched from the RB tree using get_vma(), it must later be released using
>>>> put_vma(). With this scheme I can no longer see the overhead I previously
>>>> got with the will-it-scale benchmark.
>>>>
>>>> The VMA's attributes checked during the speculative page fault processing
>>>> have to be protected against parallel changes. This is done by using a
>>>> per-VMA sequence lock, which allows the speculative page fault
>>>> handler to quickly check for parallel changes in progress and to abort the
>>>> speculative page fault in that case.
>>>>
>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>> VMA's attributes to verify whether the page fault can be handled this way.
>>>> Thus, the VMA is protected through a sequence lock which allows fast
>>>> detection of concurrent VMA changes. If such a change is detected, the
>>>> speculative page fault is aborted and a *classic* page fault is tried
>>>> instead. VMA sequence locking is added wherever the VMA attributes that
>>>> are checked during the page fault are modified.
>>>>
>>>> When the PTE is fetched, the VMA is re-checked for changes, so once the
>>>> page table is locked, the VMA is known to be valid; any other change
>>>> touching this PTE would need to take the page table lock, so no parallel
>>>> change is possible at this point.
>>>>
>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>> checking the PMD to ensure that there is no ongoing collapse
>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>> waits for the other CPUs to have caught the IPI interrupt, if the PMD is
>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>> collapse operation will have to wait on the PTE lock to move forward.
>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>> different from the one recorded at the beginning of the SPF operation, the
>>>> classic page fault handler will be called to handle the operation while
>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>> it is acquired using spin_trylock() to avoid deadlock when handling a
>>>> page fault while a TLB invalidation is requested by another CPU holding
>>>> the PTE lock.
>>>>
>>>> In pseudo code, this could be seen as:
>>>> speculative_page_fault()
>>>> {
>>>>         vma = get_vma()
>>>>         check vma sequence count
>>>>         check vma's support
>>>>         disable interrupt
>>>>                 check pgd,p4d,...,pte
>>>>                 save pmd and pte in vmf
>>>>                 save vma sequence counter in vmf
>>>>         enable interrupt
>>>>         check vma sequence count
>>>>         handle_pte_fault(vma)
>>>>                 ..
>>>>                 page = alloc_page()
>>>>                 pte_map_lock()
>>>>                         disable interrupt
>>>>                                 abort if sequence counter has changed
>>>>                                 abort if pmd or pte has changed
>>>>                                 pte map and lock
>>>>                         enable interrupt
>>>>                 if abort
>>>>                         free page
>>>>                         abort
>>>>         ...
>>>> }
>>>>
>>>> arch_fault_handler()
>>>> {
>>>>         if (speculative_page_fault(&vma))
>>>>                 goto done
>>>> again:
>>>>         lock(mmap_sem)
>>>>         vma = find_vma();
>>>>         handle_pte_fault(vma);
>>>>         if retry
>>>>                 unlock(mmap_sem)
>>>>                 goto again;
>>>> done:
>>>>         handle fault error
>>>> }
>>>>
>>>> Support for THP is not done because when checking the PMD, we can be
>>>> confused by an in-progress collapse operation done by khugepaged. The
>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>> populated or if the underlying PTEs are in the process of being collapsed,
>>>> so we cannot safely allocate a PMD if pmd_none() is true.
>>>>
>>>> This series adds a new software performance event named 'speculative-faults'
>>>> or 'spf'. It counts the number of successful page fault events handled
>>>> speculatively. When recording 'faults,spf' events, 'faults' counts the
>>>> total number of page fault events while 'spf' counts only the part of the
>>>> faults processed speculatively.
>>>>
>>>> There are some trace events introduced by this series. They allow
>>>> identifying why the page faults were not processed speculatively. This
>>>> doesn't take into account the faults generated by a monothreaded process,
>>>> which are directly processed while holding the mmap_sem. These trace events
>>>> are grouped in a system named 'pagefault'; they are:
>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>> our back
>>>>
>>>> To record all the related events, the easiest way is to run perf with the
>>>> following arguments:
>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>
>>>> There is also a dedicated vmstat counter showing the number of successful
>>>> page faults handled speculatively. It can be seen this way:
>>>> $ grep speculative_pgfault /proc/vmstat
>>>>
>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>> on x86, PowerPC and arm64.
>>>>
>>>> ---------------------
>>>> Real Workload results
>>>>
>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>> in-memory multithreaded database product" on a 176-core SMT8 Power system,
>>>> which showed a 30% improvement in the number of transactions processed per
>>>> second. This run was done on the v6 series, but the changes introduced in
>>>> this new version should not impact the performance boost seen.
>>>>
>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>> series:
>>>> vanilla spf
>>>> faults 89.418 101.364 +13%
>>>> spf n/a 97.989
>>>>
>>>> With the SPF kernel, most of the page faults were processed in a
>>>> speculative way.
>>>>
>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>> it a try on an Android device. He reported that the application launch time
>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>> 20%.
>>>>
>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>> MSM845 (8 cores) with 6GB of memory (lower is better):
>>>>
>>>> Application 4.9 4.9+spf delta
>>>> com.tencent.mm 416 389 -7%
>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>> com.tencent.mtt 455 454 0%
>>>> com.qqgame.hlddz 1497 1409 -6%
>>>> com.autonavi.minimap 711 701 -1%
>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>> com.immomo.momo 501 487 -3%
>>>> com.tencent.peng 2145 2112 -2%
>>>> com.smile.gifmaker 491 461 -6%
>>>> com.baidu.BaiduMap 479 366 -23%
>>>> com.taobao.taobao 1341 1198 -11%
>>>> com.baidu.searchbox 333 314 -6%
>>>> com.tencent.mobileqq 394 384 -3%
>>>> com.sina.weibo 907 906 0%
>>>> com.youku.phone 816 731 -11%
>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>> com.UCMobile 415 411 -1%
>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>> com.tencent.qqmusic 336 329 -2%
>>>> com.sankuai.meituan 1661 1302 -22%
>>>> com.netease.cloudmusic 1193 1200 1%
>>>> air.tv.douyu.android 4257 4152 -2%
>>>>
>>>> ------------------
>>>> Benchmarks results
>>>>
>>>> Base kernel is v4.17.0-rc4-mm1
>>>> SPF is BASE + this series
>>>>
>>>> Kernbench:
>>>> ----------
>>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>>> kernel (the kernel is built 5 times):
>>>>
>>>> Average Half load -j 8
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>
>>>> Average Optimal load -j 16
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>
>>>>
>>>> During a run on the SPF, perf events were captured:
>>>> Performance counter stats for '../kernbench -M':
>>>> 526743764 faults
>>>> 210 spf
>>>> 3 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 2278 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> Very few speculative page faults were recorded, as most of the processes
>>>> involved are monothreaded (it seems that on this architecture some threads
>>>> were created during the kernel build processing).
>>>>
>>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>>
>>>> Average Half load -j 40
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>
>>>> Average Optimal load -j 80
>>>> Run (std deviation)
>>>> BASE SPF
>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>
>>>> During a run on the SPF, perf events were captured:
>>>> Performance counter stats for '../kernbench -M':
>>>> 116730856 faults
>>>> 0 spf
>>>> 3 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 476 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> Most of the processes involved are monothreaded, so SPF is not activated,
>>>> but there is no impact on the performance.
>>>>
>>>> Ebizzy:
>>>> -------
>>>> The test counts the number of records per second it can manage; the
>>>> higher the better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
>>>> consistent results, I repeated the test 100 times and measured the
>>>> average, in records processed per second.
>>>>
>>>> BASE SPF delta
>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>
>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>> Performance counter stats for './ebizzy -mTt 16':
>>>> 1706379 faults
>>>> 1674599 spf
>>>> 30588 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 363 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>> Performance counter stats for './ebizzy -mTt 80':
>>>> 1874773 faults
>>>> 1461153 spf
>>>> 413293 pagefault:spf_vma_changed
>>>> 0 pagefault:spf_vma_noanon
>>>> 200 pagefault:spf_vma_notsup
>>>> 0 pagefault:spf_vma_access
>>>> 0 pagefault:spf_pmd_changed
>>>>
>>>> In ebizzy's case most of the page faults were handled in a speculative
>>>> way, leading to the ebizzy performance boost.
>>>>
>>>> ------------------
>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>  - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>>>    and Minchan Kim, hopefully.
>>>>  - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>    __do_page_fault().
>>>>  - Loop in pte_spinlock() and pte_map_lock() when the PTE try-lock fails
>>>>    instead of aborting the speculative page fault handling. Dropped the
>>>>    now useless trace event pagefault:spf_pte_lock.
>>>>  - No longer try to reuse the fetched VMA when the speculative page fault
>>>>    handling needs to be retried. This added a lot of complexity and
>>>>    additional tests didn't show a significant performance improvement.
>>>>  - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>
>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>
>>>>
>>>> Laurent Dufour (20):
>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>> mm: make pte_unmap_same compatible with SPF
>>>> mm: introduce INIT_VMA()
>>>> mm: protect VMA modifications using VMA sequence count
>>>> mm: protect mremap() against SPF hanlder
>>>> mm: protect SPF handler against anon_vma changes
>>>> mm: cache some VMA fields in the vm_fault structure
>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>> mm: introduce __vm_normal_page()
>>>> mm: introduce __page_add_new_anon_rmap()
>>>> mm: protect mm_rb tree with a rwlock
>>>> mm: adding speculative page fault failure trace events
>>>> perf: add a speculative page fault sw event
>>>> perf tools: add support for the SPF perf event
>>>> mm: add speculative page fault vmstats
>>>> powerpc/mm: add speculative page fault
>>>>
>>>> Mahendran Ganesh (2):
>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>> arm64/mm: add speculative page fault
>>>>
>>>> Peter Zijlstra (4):
>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>> mm: VMA sequence count
>>>> mm: provide speculative fault infrastructure
>>>> x86/mm: add speculative pagefault handling
>>>>
>>>> arch/arm64/Kconfig | 1 +
>>>> arch/arm64/mm/fault.c | 12 +
>>>> arch/powerpc/Kconfig | 1 +
>>>> arch/powerpc/mm/fault.c | 16 +
>>>> arch/x86/Kconfig | 1 +
>>>> arch/x86/mm/fault.c | 27 +-
>>>> fs/exec.c | 2 +-
>>>> fs/proc/task_mmu.c | 5 +-
>>>> fs/userfaultfd.c | 17 +-
>>>> include/linux/hugetlb_inline.h | 2 +-
>>>> include/linux/migrate.h | 4 +-
>>>> include/linux/mm.h | 136 +++++++-
>>>> include/linux/mm_types.h | 7 +
>>>> include/linux/pagemap.h | 4 +-
>>>> include/linux/rmap.h | 12 +-
>>>> include/linux/swap.h | 10 +-
>>>> include/linux/vm_event_item.h | 3 +
>>>> include/trace/events/pagefault.h | 80 +++++
>>>> include/uapi/linux/perf_event.h | 1 +
>>>> kernel/fork.c | 5 +-
>>>> mm/Kconfig | 22 ++
>>>> mm/huge_memory.c | 6 +-
>>>> mm/hugetlb.c | 2 +
>>>> mm/init-mm.c | 3 +
>>>> mm/internal.h | 20 ++
>>>> mm/khugepaged.c | 5 +
>>>> mm/madvise.c | 6 +-
>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>> mm/mempolicy.c | 51 ++-
>>>> mm/migrate.c | 6 +-
>>>> mm/mlock.c | 13 +-
>>>> mm/mmap.c | 229 ++++++++++---
>>>> mm/mprotect.c | 4 +-
>>>> mm/mremap.c | 13 +
>>>> mm/nommu.c | 2 +-
>>>> mm/rmap.c | 5 +-
>>>> mm/swap.c | 6 +-
>>>> mm/swap_state.c | 8 +-
>>>> mm/vmstat.c | 5 +-
>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>> tools/perf/util/evsel.c | 1 +
>>>> tools/perf/util/parse-events.c | 4 +
>>>> tools/perf/util/parse-events.l | 1 +
>>>> tools/perf/util/python.c | 1 +
>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>> create mode 100644 include/trace/events/pagefault.h
>>>>
>>>> --
>>>> 2.7.4
>>>>
>>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
@ 2018-07-04 3:23 ` Song, HaiyanX
0 siblings, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-07-04 3:23 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Laurent,


For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.

And I did not find any other high variation in the test case results.

a). Enable THP
testcase                        base      stddev   change    head      stddev   metric
page_fault3/enable THP          10519     ±3%      -20.5%    8368      ±6%      will-it-scale.per_thread_ops
page_fault2/enable THP          8281      ±2%      -18.8%    6728               will-it-scale.per_thread_ops
brk1/enable THP                 998475             -2.2%     976893             will-it-scale.per_process_ops
context_switch1/enable THP      223910             -1.3%     220930             will-it-scale.per_process_ops
context_switch1/enable THP      233722             -1.0%     231288             will-it-scale.per_thread_ops

b). Disable THP
page_fault3/disable THP         10856              -23.1%    8344               will-it-scale.per_thread_ops
page_fault2/disable THP         8147               -18.8%    6613               will-it-scale.per_thread_ops
brk1/disable THP                957                -7.9%     881                will-it-scale.per_thread_ops
context_switch1/disable THP     237006             -2.2%     231907             will-it-scale.per_thread_ops
brk1/disable THP                997317             -2.0%     977778             will-it-scale.per_process_ops
page_fault3/disable THP         467454             -1.8%     459251             will-it-scale.per_process_ops
context_switch1/disable THP     224431             -1.3%     221567             will-it-scale.per_process_ops


Best regards,
Haiyan Song
________________________________________
From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Monday, July 02, 2018 4:59 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 11/06/2018 09:49, Song, HaiyanX wrote:
> Hi Laurent,
>
> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux Kernel Performance)
> tested on the Intel 4s Skylake platform. This time only the cases which had been run and found regressions on
> the v9 patch series were tested.
>
> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> commit id:
>   head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>   base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
> Benchmark: will-it-scale
> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>
> Metrics:
>   will-it-scale.per_process_ops=processes/nr_cpu
>   will-it-scale.per_thread_ops=threads/nr_cpu
> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
> THP: enable / disable
> nr_task: 100%
>
> 1. Regressions:
>
> a). Enable THP
> testcase                        base      change    head      metric
> page_fault3/enable THP          10519     -20.5%    8368      will-it-scale.per_thread_ops
> page_fault2/enable THP          8281      -18.8%    6728      will-it-scale.per_thread_ops
> brk1/enable THP                 998475    -2.2%     976893    will-it-scale.per_process_ops
> context_switch1/enable THP      223910    -1.3%     220930    will-it-scale.per_process_ops
> context_switch1/enable THP      233722    -1.0%     231288    will-it-scale.per_thread_ops
>
> b). Disable THP
> page_fault3/disable THP         10856     -23.1%    8344      will-it-scale.per_thread_ops
> page_fault2/disable THP         8147      -18.8%    6613      will-it-scale.per_thread_ops
> brk1/disable THP                957       -7.9%     881       will-it-scale.per_thread_ops
> context_switch1/disable THP     237006    -2.2%     231907    will-it-scale.per_thread_ops
> brk1/disable THP                997317    -2.0%     977778    will-it-scale.per_process_ops
> page_fault3/disable THP         467454    -1.8%     459251    will-it-scale.per_process_ops
> context_switch1/disable THP     224431    -1.3%     221567    will-it-scale.per_process_ops
>
> Notes: for the above values of the test results, higher is better.
I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
get reproducible results. The results have huge variation, even on the vanilla
kernel, so I can't state anything about the changes due to that.

I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
measure any changes between the vanilla and the SPF patched ones:

test THP enabled                4.17.0-rc4-mm1    spf         delta
page_fault3_threads             2697.7            2683.5      -0.53%
page_fault2_threads             170660.6          169574.1    -0.64%
context_switch1_threads         6915269.2         6877507.3   -0.55%
context_switch1_processes       6478076.2         6529493.5   0.79%
brk1                            243391.2          238527.5    -2.00%

Tests were run 10 times, no high variation detected.

Did you see high variation on your side? How many times were the tests run to
compute the average values?

Thanks,
Laurent.
>
> 2. Improvement: no improvement found based on the selected test cases.
>
>
> Best regards
> Haiyan Song
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Monday, May 28, 2018 4:54 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 28/05/2018 10:22, Haiyan Song wrote:
>> Hi Laurent,
>>
>> Yes, these tests were done on the v9 patch.
>
> Do you plan to give this v11 a run?
>
>>
>>
>> Best regards,
>> Haiyan Song
>>
>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>
>>>> Some regressions and improvements were found by LKP-tools (Linux Kernel Performance) on the v9 patch series
>>>> tested on the Intel 4s Skylake platform.
>>>
>>> Hi,
>>>
>>> Thanks for reporting these benchmark results, but you mentioned the "v9 patch
>>> series" while responding to the v11 header series...
>>> Were these tests done on v9 or v11?
>>>
>>> Cheers,
>>> Laurent.
>>>
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (v9 patch series)
>>>> Commit id:
>>>>   base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>   head commit: 0355322b3577eeab7669066df42c550a56801110
>>>> Benchmark suite: will-it-scale
>>>> Download link:
>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>> Metrics:
>>>>   will-it-scale.per_process_ops=processes/nr_cpu
>>>>   will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task: 100%
>>>>
>>>> 1. Regressions:
>>>> a) THP enabled:
>>>> testcase                       base      change    head      metric
>>>> page_fault3/ enable THP        10092     -17.5%    8323      will-it-scale.per_thread_ops
>>>> page_fault2/ enable THP        8300      -17.2%    6869      will-it-scale.per_thread_ops
>>>> brk1/ enable THP               957.67    -7.6%     885       will-it-scale.per_thread_ops
>>>> page_fault3/ enable THP        172821    -5.3%     163692    will-it-scale.per_process_ops
>>>> signal1/ enable THP            9125      -3.2%     8834      will-it-scale.per_process_ops
>>>>
>>>> b) THP disabled:
>>>> testcase                       base      change    head      metric
>>>> page_fault3/ disable THP       10107     -19.1%    8180      will-it-scale.per_thread_ops
>>>> page_fault2/ disable THP       8432      -17.8%    6931      will-it-scale.per_thread_ops
>>>> context_switch1/ disable THP   215389    -6.8%     200776    will-it-scale.per_thread_ops
>>>> brk1/ disable THP              939.67    -6.6%     877.33    will-it-scale.per_thread_ops
>>>> page_fault3/ disable THP       173145    -4.7%     165064    will-it-scale.per_process_ops
>>>> signal1/ disable THP           9162      -3.9%     8802      will-it-scale.per_process_ops
>>>>
>>>> 2. Improvements:
>>>> a) THP enabled:
>>>> testcase                       base      change    head      metric
>>>> malloc1/ enable THP            66.33     +469.8%   383.67    will-it-scale.per_thread_ops
>>>> writeseek3/ enable THP         2531      +4.5%     2646      will-it-scale.per_thread_ops
>>>> signal1/ enable THP            989.33    +2.8%     1016      will-it-scale.per_thread_ops
>>>>
>>>> b) THP disabled:
>>>> testcase                       base      change    head      metric
>>>> malloc1/ disable THP           90.33     +417.3%   467.33    will-it-scale.per_thread_ops
>>>> read2/ disable THP             58934     +39.2%    82060     will-it-scale.per_thread_ops
>>>> page_fault1/ disable THP       8607      +36.4%    11736     will-it-scale.per_thread_ops
>>>> read1/ disable THP             314063    +12.7%    353934    will-it-scale.per_thread_ops
>>>> writeseek3/ disable THP        2452      +12.5%    2759      will-it-scale.per_thread_ops
>>>> signal1/ disable THP           971.33    +5.5%     1024      will-it-scale.per_thread_ops
>>>>
>>>> Notes: for above values in column "change", the higher value means that the related testcase result
>>>> on head commit is better than that on base commit for this benchmark.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>>
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>> page fault without holding the mm semaphore [1].
>>>>
>>>> The idea is to try to handle user space page faults without holding the
>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>> process since the page fault handler will not wait for other threads memory
>>>> layout change to be done, assuming that this change is done in another part
>>>> of the process's memory space. This type page fault is named speculative
>>>> page fault. If the speculative page fault fails because of a concurrency is
>>>> detected or because underlying PMD or PTE tables are not yet allocating, it
>>>> is failing its processing and a classic page fault is then tried.
>>>>
>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>> address without holding the mmap_sem, this is done by introducing a rwlock
>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>> SRCU but it was introducing a lot of scheduling to process the VMA's
>>>> freeing operation which was hitting the performance by 20% as reported by
>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree is
>>>> limiting the locking contention to these operations which are expected to
>>>> be in a O(log n) order. In addition to ensure that the VMA is not freed in
>>>> our back a reference count is added and 2 services (get_vma() and
>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>> fetched from the RB tree using get_vma(), it must be later freed using
>>>> put_vma(). I can't see anymore the overhead I got while will-it-scale
>>>> benchmark anymore.
>>>>
>>>> The VMA's attributes checked during the speculative page fault processing
>>>> have to be protected against parallel changes. This is done by using a per
>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>> handler to fast check for parallel changes in progress and to abort the
>>>> speculative page fault in that case.
>>>>
>>>> Once the VMA has been found, the speculative page fault handler would check
>>>> for the VMA's attributes to verify that the page fault has to be handled
>>>> correctly or not. Thus, the VMA is protected through a sequence lock which
>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>> is tried. VMA sequence lockings are added when VMA attributes which are
>>>> checked during the page fault are modified.
>>>>
>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>> so once the page table is locked, the VMA is valid, so any other changes
>>>> leading to touching this PTE will need to lock the page table, so no
>>>> parallel change is possible at this time.
>>>>
>>>> The locking of the PTE is done with interrupts disabled, this allows
>>>> checking for the PMD to ensure that there is not an ongoing collapsing
>>>> operation. Since khugepaged is firstly set the PMD to pmd_none and then is
>>>> waiting for the other CPU to have caught the IPI interrupt, if the pmd is
>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>> different from the one recorded at the beginning of the SPF operation, the
>>>> classic page fault handler will be called to handle the operation while
>>>> holding the mmap_sem. As the PTE lock is done with the interrupts disabled,
>>>> the lock is done using spin_trylock() to avoid dead lock when handling a
>>>> page fault while a TLB invalidate is requested by another CPU holding the
>>>> PTE.
>>>>
>>>> In pseudo code, this could be seen as:
>>>> speculative_page_fault()
>>>> {
>>>>         vma = get_vma()
>>>>         check vma sequence count
>>>>         check vma's support
>>>>         disable interrupt
>>>>                 check pgd,p4d,...,pte
>>>>                 save pmd and pte in vmf
>>>>                 save vma sequence counter in vmf
>>>>         enable interrupt
>>>>         check vma sequence count
>>>>         handle_pte_fault(vma)
>>>>                 ..
>>>>                 page = alloc_page()
>>>> pte_map_lock()=0A=
>>>> disable interrupt=0A=
>>>> abort if sequence counter has chan=
ged=0A=
>>>> abort if pmd or pte has changed=0A=
>>>> pte map and lock=0A=
>>>> enable interrupt=0A=
>>>> if abort=0A=
>>>> free page=0A=
>>>> abort=0A=
>>>> ...=0A=
>>>> }=0A=
>>>>=0A=
>>>> arch_fault_handler()=0A=
>>>> {=0A=
>>>> if (speculative_page_fault(&vma))=0A=
>>>> goto done=0A=
>>>> again:=0A=
>>>> lock(mmap_sem)=0A=
>>>> vma =3D find_vma();=0A=
>>>> handle_pte_fault(vma);=0A=
>>>> if retry=0A=
>>>> unlock(mmap_sem)=0A=
>>>> goto again;=0A=
>>>> done:=0A=
>>>> handle fault error=0A=
>>>> }=0A=
>>>>=0A=
>>>> Support for THP is not done because when checking for the PMD, we can =
be=0A=
>>>> confused by an in progress collapsing operation done by khugepaged. Th=
e=0A=
>>>> issue is that pmd_none() could be true either if the PMD is not alread=
y=0A=
>>>> populated or if the underlying PTE are in the way to be collapsed. So =
we=0A=
>>>> cannot safely allocate a PMD if pmd_none() is true.=0A=
>>>>=0A=
>>>> This series add a new software performance event named 'speculative-fa=
ults'=0A=
>>>> or 'spf'. It counts the number of successful page fault event handled=
=0A=
>>>> speculatively. When recording 'faults,spf' events, the faults one is=
=0A=
>>>> counting the total number of page fault events while 'spf' is only cou=
nting=0A=
>>>> the part of the faults processed speculatively.=0A=
>>>>=0A=
>>>> There are some trace events introduced by this series. They allow=0A=
>>>> identifying why the page faults were not processed speculatively. This=
=0A=
>>>> doesn't take in account the faults generated by a monothreaded process=
=0A=
>>>> which directly processed while holding the mmap_sem. This trace events=
are=0A=
>>>> grouped in a system named 'pagefault', they are:=0A=
>>>> - pagefault:spf_vma_changed : if the VMA has been changed in our back=
=0A=
>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set.=
=0A=
>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported=0A=
>>>> - pagefault:spf_vma_access : the VMA's access right are not respected=
=0A=
>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed in ou=
r=0A=
>>>> back.=0A=
>>>>=0A=
>>>> To record all the related events, the easier is to run perf with the=
=0A=
>>>> following arguments :=0A=
>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>=0A=
>>>>=0A=
>>>> There is also a dedicated vmstat counter showing the number of success=
ful=0A=
>>>> page fault handled speculatively. I can be seen this way:=0A=
>>>> $ grep speculative_pgfault /proc/vmstat=0A=
>>>>=0A=
>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is funct=
ional=0A=
>>>> on x86, PowerPC and arm64.=0A=
>>>>=0A=
>>>> ---------------------=0A=
>>>> Real Workload results=0A=
>>>>=0A=
>>>> As mentioned in previous email, we did non official runs using a "popu=
lar=0A=
>>>> in memory multithreaded database product" on 176 cores SMT8 Power syst=
em=0A=
>>>> which showed a 30% improvements in the number of transaction processed=
per=0A=
>>>> second. This run has been done on the v6 series, but changes introduce=
d in=0A=
>>>> this new version should not impact the performance boost seen.=0A=
>>>>=0A=
>>>> Here are the perf data captured during 2 of these runs on top of the v=
8=0A=
>>>> series:=0A=
>>>> vanilla spf=0A=
>>>> faults 89.418 101.364 +13%=0A=
>>>> spf n/a 97.989=0A=
>>>>=0A=
>>>> With the SPF kernel, most of the page fault were processed in a specul=
ative=0A=
>>>> way.=0A=
>>>>=0A=
>>>> Ganesh Mahendran had backported the series on top of a 4.9 kernel and =
gave=0A=
>>>> it a try on an android device. He reported that the application launch=
time=0A=
>>>> was improved in average by 6%, and for large applications (~100 thread=
s) by=0A=
>>>> 20%.=0A=
>>>>=0A=
>>>> Here are the launch time Ganesh mesured on Android 8.0 on top of a Qco=
m=0A=
>>>> MSM845 (8 cores) with 6GB (the less is better):=0A=
>>>>=0A=
>>>> Application 4.9 4.9+spf delta=0A=
>>>> com.tencent.mm 416 389 -7%=0A=
>>>> com.eg.android.AlipayGphone 1135 986 -13%=0A=
>>>> com.tencent.mtt 455 454 0%=0A=
>>>> com.qqgame.hlddz 1497 1409 -6%=0A=
>>>> com.autonavi.minimap 711 701 -1%=0A=
>>>> com.tencent.tmgp.sgame 788 748 -5%=0A=
>>>> com.immomo.momo 501 487 -3%=0A=
>>>> com.tencent.peng 2145 2112 -2%=0A=
>>>> com.smile.gifmaker 491 461 -6%=0A=
>>>> com.baidu.BaiduMap 479 366 -23%=0A=
>>>> com.taobao.taobao 1341 1198 -11%=0A=
>>>> com.baidu.searchbox 333 314 -6%=0A=
>>>> com.tencent.mobileqq 394 384 -3%=0A=
>>>> com.sina.weibo 907 906 0%=0A=
>>>> com.youku.phone 816 731 -11%=0A=
>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%=0A=
>>>> com.UCMobile 415 411 -1%=0A=
>>>> com.tencent.tmgp.ak 1464 1431 -2%=0A=
>>>> com.tencent.qqmusic 336 329 -2%=0A=
>>>> com.sankuai.meituan 1661 1302 -22%=0A=
>>>> com.netease.cloudmusic 1193 1200 1%=0A=
>>>> air.tv.douyu.android 4257 4152 -2%=0A=
>>>>=0A=
>>>> ------------------=0A=
>>>> Benchmarks results=0A=
>>>>=0A=
>>>> Base kernel is v4.17.0-rc4-mm1=0A=
>>>> SPF is BASE + this series=0A=
>>>>=0A=
>>>> Kernbench:=0A=
>>>> ----------=0A=
>>>> Here are the results on a 16 CPUs X86 guest using kernbench on a 4.15=
=0A=
>>>> kernel (kernel is build 5 times):=0A=
>>>>=0A=
>>>> Average Half load -j 8=0A=
>>>> Run (std deviation)=0A=
>>>> BASE SPF=0A=
>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%=
=0A=
>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%=
=0A=
>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%=
=0A=
>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%=
=0A=
>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%=
=0A=
>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%=
=0A=
>>>>=0A=
>>>> Average Optimal load -j 16=0A=
>>>> Run (std deviation)=0A=
>>>> BASE SPF=0A=
>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%=
=0A=
>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%=
=0A=
>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%=
=0A=
>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%=
=0A=
>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%=
=0A=
>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%=
=0A=
>>>>=0A=
>>>>=0A=
>>>> During a run on the SPF, perf events were captured:=0A=
>>>> Performance counter stats for '../kernbench -M':=0A=
>>>> 526743764 faults=0A=
>>>> 210 spf=0A=
>>>> 3 pagefault:spf_vma_changed=0A=
>>>> 0 pagefault:spf_vma_noanon=0A=
>>>> 2278 pagefault:spf_vma_notsup=0A=
>>>> 0 pagefault:spf_vma_access=0A=
>>>> 0 pagefault:spf_pmd_changed=0A=
>>>>=0A=
>>>> Very few speculative page faults were recorded as most of the processe=
s=0A=
>>>> involved are monothreaded (sounds that on this architecture some threa=
ds=0A=
>>>> were created during the kernel build processing).=0A=
>>>>=0A=
>>>> Here are the kerbench results on a 80 CPUs Power8 system:=0A=
>>>>=0A=
>>>> Average Half load -j 40=0A=
>>>> Run (std deviation)=0A=
>>>> BASE SPF=0A=
>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%=
=0A=
>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%=
=0A=
>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%=
=0A=
>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%=
=0A=
>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%=
=0A=
>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%=
=0A=
>>>>=0A=
>>>> Average Optimal load -j 80=0A=
>>>> Run (std deviation)=0A=
>>>> BASE SPF=0A=
>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%=
=0A=
>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%=
=0A=
>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%=
=0A=
>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%=
=0A=
>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%=
=0A=
>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%=
=0A=
>>>>=0A=
>>>> During a run on the SPF, perf events were captured:=0A=
>>>> Performance counter stats for '../kernbench -M':=0A=
>>>> 116730856 faults=0A=
>>>> 0 spf=0A=
>>>> 3 pagefault:spf_vma_changed=0A=
>>>> 0 pagefault:spf_vma_noanon=0A=
>>>> 476 pagefault:spf_vma_notsup=0A=
>>>> 0 pagefault:spf_vma_access=0A=
>>>> 0 pagefault:spf_pmd_changed=0A=
>>>>=0A=
>>>> Most of the processes involved are monothreaded so SPF is not activate=
d but=0A=
>>>> there is no impact on the performance.=0A=
>>>>=0A=
>>>> Ebizzy:=0A=
>>>> -------=0A=
>>>> The test is counting the number of records per second it can manage, t=
he=0A=
>>>> higher is the best. I run it like this 'ebizzy -mTt <nrcpus>'. To get=
=0A=
>>>> consistent result I repeated the test 100 times and measure the averag=
e=0A=
>>>> result. The number is the record processes per second, the higher is t=
he=0A=
>>>> best.=0A=
>>>>=0A=
>>>> BASE SPF delta=0A=
>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%=0A=
>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%=0A=
>>>>=0A=
>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM=
:=0A=
>>>> Performance counter stats for './ebizzy -mTt 16':=0A=
>>>> 1706379 faults=0A=
>>>> 1674599 spf=0A=
>>>> 30588 pagefault:spf_vma_changed=0A=
>>>> 0 pagefault:spf_vma_noanon=0A=
>>>> 363 pagefault:spf_vma_notsup=0A=
>>>> 0 pagefault:spf_vma_access=0A=
>>>> 0 pagefault:spf_pmd_changed=0A=
>>>>=0A=
>>>> And the ones captured during a run on a 80 CPUs Power node:=0A=
>>>> Performance counter stats for './ebizzy -mTt 80':=0A=
>>>> 1874773 faults=0A=
>>>> 1461153 spf=0A=
>>>> 413293 pagefault:spf_vma_changed=0A=
>>>> 0 pagefault:spf_vma_noanon=0A=
>>>> 200 pagefault:spf_vma_notsup=0A=
>>>> 0 pagefault:spf_vma_access=0A=
>>>> 0 pagefault:spf_pmd_changed=0A=
>>>>=0A=
>>>> In ebizzy's case most of the page fault were handled in a speculative =
way,=0A=
>>>> leading the ebizzy performance boost.=0A=
>>>>=0A=
>>>> ------------------=0A=
>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):=0A=
>>>> - Accounted for all review feedbacks from Punit Agrawal, Ganesh Mahen=
dran=0A=
>>>> and Minchan Kim, hopefully.=0A=
>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in=0A=
>>>> __do_page_fault().=0A=
>>>> - Loop in pte_spinlock() and pte_map_lock() when pte try lock fails=
=0A=
>>>> instead=0A=
>>>> of aborting the speculative page fault handling. Dropping the now=
=0A=
>>>> useless=0A=
>>>> trace event pagefault:spf_pte_lock.=0A=
>>>> - No more try to reuse the fetched VMA during the speculative page fa=
ult=0A=
>>>> handling when retrying is needed. This adds a lot of complexity and=
=0A=
>>>> additional tests done didn't show a significant performance improve=
ment.=0A=
>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.=
=0A=
>>>>=0A=
>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at=
-speculative-page-faults-tt965642.html#none=0A=
>>>> [2] https://patchwork.kernel.org/patch/9999687/=0A=
>>>>=0A=
>>>>=0A=
>>>> Laurent Dufour (20):=0A=
>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT=0A=
>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT=0A=
>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT=0A=
>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE=0A=
>>>> mm: make pte_unmap_same compatible with SPF=0A=
>>>> mm: introduce INIT_VMA()=0A=
>>>> mm: protect VMA modifications using VMA sequence count=0A=
>>>> mm: protect mremap() against SPF hanlder=0A=
>>>> mm: protect SPF handler against anon_vma changes=0A=
>>>> mm: cache some VMA fields in the vm_fault structure=0A=
>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()=0A=
>>>> mm: introduce __lru_cache_add_active_or_unevictable=0A=
>>>> mm: introduce __vm_normal_page()=0A=
>>>> mm: introduce __page_add_new_anon_rmap()=0A=
>>>> mm: protect mm_rb tree with a rwlock=0A=
>>>> mm: adding speculative page fault failure trace events=0A=
>>>> perf: add a speculative page fault sw event=0A=
>>>> perf tools: add support for the SPF perf event=0A=
>>>> mm: add speculative page fault vmstats=0A=
>>>> powerpc/mm: add speculative page fault=0A=
>>>>=0A=
>>>> Mahendran Ganesh (2):=0A=
>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT=0A=
>>>> arm64/mm: add speculative page fault=0A=
>>>>=0A=
>>>> Peter Zijlstra (4):=0A=
>>>> mm: prepare for FAULT_FLAG_SPECULATIVE=0A=
>>>> mm: VMA sequence count=0A=
>>>> mm: provide speculative fault infrastructure=0A=
>>>> x86/mm: add speculative pagefault handling=0A=
>>>>=0A=
>>>> arch/arm64/Kconfig | 1 +=0A=
>>>> arch/arm64/mm/fault.c | 12 +=0A=
>>>> arch/powerpc/Kconfig | 1 +=0A=
>>>> arch/powerpc/mm/fault.c | 16 +=0A=
>>>> arch/x86/Kconfig | 1 +=0A=
>>>> arch/x86/mm/fault.c | 27 +-=0A=
>>>> fs/exec.c | 2 +-=0A=
>>>> fs/proc/task_mmu.c | 5 +-=0A=
>>>> fs/userfaultfd.c | 17 +-=0A=
>>>> include/linux/hugetlb_inline.h | 2 +-=0A=
>>>> include/linux/migrate.h | 4 +-=0A=
>>>> include/linux/mm.h | 136 +++++++-=0A=
>>>> include/linux/mm_types.h | 7 +=0A=
>>>> include/linux/pagemap.h | 4 +-=0A=
>>>> include/linux/rmap.h | 12 +-=0A=
>>>> include/linux/swap.h | 10 +-=0A=
>>>> include/linux/vm_event_item.h | 3 +=0A=
>>>> include/trace/events/pagefault.h | 80 +++++=0A=
>>>> include/uapi/linux/perf_event.h | 1 +=0A=
>>>> kernel/fork.c | 5 +-=0A=
>>>> mm/Kconfig | 22 ++=0A=
>>>> mm/huge_memory.c | 6 +-=0A=
>>>> mm/hugetlb.c | 2 +=0A=
>>>> mm/init-mm.c | 3 +=0A=
>>>> mm/internal.h | 20 ++=0A=
>>>> mm/khugepaged.c | 5 +=0A=
>>>> mm/madvise.c | 6 +-=0A=
>>>> mm/memory.c | 612 +++++++++++++++++++++++++=
++++-----=0A=
>>>> mm/mempolicy.c | 51 ++-=0A=
>>>> mm/migrate.c | 6 +-=0A=
>>>> mm/mlock.c | 13 +-=0A=
>>>> mm/mmap.c | 229 ++++++++++---=0A=
>>>> mm/mprotect.c | 4 +-=0A=
>>>> mm/mremap.c | 13 +=0A=
>>>> mm/nommu.c | 2 +-=0A=
>>>> mm/rmap.c | 5 +-=0A=
>>>> mm/swap.c | 6 +-=0A=
>>>> mm/swap_state.c | 8 +-=0A=
>>>> mm/vmstat.c | 5 +-=0A=
>>>> tools/include/uapi/linux/perf_event.h | 1 +=0A=
>>>> tools/perf/util/evsel.c | 1 +=0A=
>>>> tools/perf/util/parse-events.c | 4 +=0A=
>>>> tools/perf/util/parse-events.l | 1 +=0A=
>>>> tools/perf/util/python.c | 1 +=0A=
>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)=0A=
>>>> create mode 100644 include/trace/events/pagefault.h=0A=
>>>>=0A=
>>>> --=0A=
>>>> 2.7.4=0A=
>>>>=0A=
>>>>=0A=
>>>=0A=
>>=0A=
>=0A=
=0A=
* Re: [PATCH v11 00/26] Speculative page faults
2018-07-04 3:23 ` Song, HaiyanX
@ 2018-07-04 7:51 ` Laurent Dufour
-1 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-07-04 7:51 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 04/07/2018 05:23, Song, HaiyanX wrote:
> Hi Laurent,
>
>
> For the test result on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.
Repeating the test only 3 times seems a bit too low to me.
I'll focus on the larger changes for the moment, but I don't have access to such
hardware.
Would it be possible to provide a diff between base and SPF of the performance
cycles measured when running page_fault3 and page_fault2 where the 20% change is
detected?
Please stay focused on the test case process to see exactly where the series is
having an impact.
Thanks,
Laurent.
>
> And I did not find other high variation on test case result.
>
> a). Enable THP
> testcase base stddev change head stddev metric
> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ± 6% will-it-scale.per_thread_ops
> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>
> b). Disable THP
> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>
>
> Best regards,
> Haiyan Song
> ________________________________________
> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Monday, July 02, 2018 4:59 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 11/06/2018 09:49, Song, HaiyanX wrote:
>> Hi Laurent,
>>
>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux kernel performance)
>> tested on an Intel 4s Skylake platform. This time we only tested the cases which had been run and shown regressions on
>> the v9 patch series.
>>
>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>> commit id:
>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>> Benchmark: will-it-scale
>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>
>> Metrics:
>> will-it-scale.per_process_ops=processes/nr_cpu
>> will-it-scale.per_thread_ops=threads/nr_cpu
>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>> THP: enable / disable
>> nr_task:100%
>>
>> 1. Regressions:
>>
>> a). Enable THP
>> testcase base change head metric
>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>
>> b). Disable THP
>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>
>> Notes: for the above test result values, higher is better.
>
> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
> get reproducible results. The results show huge variation, even on the vanilla
> kernel, so I can't draw any conclusion from them.
>
> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
> measure any change between the vanilla and the SPF-patched kernels:
>
> test THP enabled 4.17.0-rc4-mm1 spf delta
> page_fault3_threads 2697.7 2683.5 -0.53%
> page_fault2_threads 170660.6 169574.1 -0.64%
> context_switch1_threads 6915269.2 6877507.3 -0.55%
> context_switch1_processes 6478076.2 6529493.5 0.79%
> brk1 243391.2 238527.5 -2.00%
>
> Tests were run 10 times, no high variation detected.
>
> Did you see high variation on your side? How many times were the tests run to
> compute the average values?
>
> Thanks,
> Laurent.
>
>
>>
>> 2. Improvement: not found improvement based on the selected test cases.
>>
>>
>> Best regards
>> Haiyan Song
>> ________________________________________
>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Monday, May 28, 2018 4:54 PM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> On 28/05/2018 10:22, Haiyan Song wrote:
>>> Hi Laurent,
>>>
>>> Yes, these tests are done on V9 patch.
>>
>> Do you plan to give this V11 a run ?
>>
>>>
>>>
>>> Best regards,
>>> Haiyan Song
>>>
>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>
>>>>> Some regressions and improvements were found by LKP-tools (Linux kernel
>>>>> performance) on the V9 patch series tested on an Intel 4s Skylake platform.
>>>>
>>>> Hi,
>>>>
>>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>>> series" while responding to the v11 series header...
>>>> Were these tests done on v9 or v11 ?
>>>>
>>>> Cheers,
>>>> Laurent.
>>>>
>>>>>
>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>> Commit id:
>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>> Benchmark suite: will-it-scale
>>>>> Download link:
>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>> Metrics:
>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>> THP: enable / disable
>>>>> nr_task: 100%
>>>>>
>>>>> 1. Regressions:
>>>>> a) THP enabled:
>>>>> testcase base change head metric
>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>
>>>>> b) THP disabled:
>>>>> testcase base change head metric
>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>
>>>>> 2. Improvements:
>>>>> a) THP enabled:
>>>>> testcase base change head metric
>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>
>>>>> b) THP disabled:
>>>>> testcase base change head metric
>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>
>>>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result
>>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>>
>>>>>
>>>>> Best regards
>>>>> Haiyan Song
>>>>>
>>>>> ________________________________________
>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>
>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>> page faults without holding the mm semaphore [1].
>>>>>
>>>>> The idea is to try to handle user space page faults without holding the
>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>> processes since the page fault handler will not wait for other threads'
>>>>> memory layout changes to be done, assuming that such a change is done in
>>>>> another part of the process's memory space. This type of page fault is named
>>>>> a speculative page fault. If the speculative page fault fails because a
>>>>> concurrent change is detected or because the underlying PMD or PTE tables
>>>>> are not yet allocated, its processing is aborted and a classic page fault
>>>>> is tried instead.
>>>>>
>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA
>>>>> freeing operations, which was hitting the performance by 20% as reported by
>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>> the locking contention to these operations, which are expected to
>>>>> be O(log n). In addition, to ensure that the VMA is not freed
>>>>> behind our back, a reference count is added and 2 services (get_vma() and
>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>> fetched from the RB tree using get_vma(), it must later be freed using
>>>>> put_vma(). The overhead previously seen with the will-it-scale benchmark
>>>>> is no longer visible.
>>>>>
>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>> have to be protected against parallel changes. This is done by using a per
>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>> handler to quickly check for parallel changes in progress and to abort the
>>>>> speculative page fault in that case.
>>>>>
>>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>>> VMA's attributes to verify whether the page fault can be handled
>>>>> correctly or not. Thus, the VMA is protected through a sequence lock which
>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>> is tried. VMA sequence locking is added where VMA attributes which are
>>>>> checked during the page fault are modified.
>>>>>
>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>>> so once the page table is locked the VMA is known to be valid. Any other
>>>>> change touching this PTE will need to lock the page table, so no
>>>>> parallel change is possible at this time.
>>>>>
>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>> checking the PMD to ensure that there is no ongoing collapsing
>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then waits
>>>>> for the other CPUs to have caught the IPI interrupt, if the pmd is
>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>> classic page fault handler will be called to handle the operation while
>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>> the lock is taken using spin_trylock() to avoid deadlock when handling a
>>>>> page fault while a TLB invalidate is requested by another CPU holding the
>>>>> PTE.
>>>>>
>>>>> In pseudo code, this could be seen as:
>>>>> speculative_page_fault()
>>>>> {
>>>>>         vma = get_vma()
>>>>>         check vma sequence count
>>>>>         check vma's support
>>>>>         disable interrupt
>>>>>                 check pgd,p4d,...,pte
>>>>>                 save pmd and pte in vmf
>>>>>                 save vma sequence counter in vmf
>>>>>         enable interrupt
>>>>>         check vma sequence count
>>>>>         handle_pte_fault(vma)
>>>>>                 ..
>>>>>                 page = alloc_page()
>>>>>                 pte_map_lock()
>>>>>                         disable interrupt
>>>>>                         abort if sequence counter has changed
>>>>>                         abort if pmd or pte has changed
>>>>>                         pte map and lock
>>>>>                         enable interrupt
>>>>>                 if abort
>>>>>                         free page
>>>>>                         abort
>>>>>                 ...
>>>>> }
>>>>>
>>>>> arch_fault_handler()
>>>>> {
>>>>>         if (speculative_page_fault(&vma))
>>>>>                 goto done
>>>>> again:
>>>>>         lock(mmap_sem)
>>>>>         vma = find_vma();
>>>>>         handle_pte_fault(vma);
>>>>>         if retry
>>>>>                 unlock(mmap_sem)
>>>>>                 goto again;
>>>>> done:
>>>>>         handle fault error
>>>>> }
>>>>>
>>>>> Support for THP is not done because, when checking the PMD, we could be
>>>>> confused by an in-progress collapse operation done by khugepaged. The
>>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>>> populated or if the underlying PTEs are about to be collapsed. So we
>>>>> cannot safely allocate a PMD if pmd_none() is true.
>>>>>
>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>> or 'spf'. It counts the number of page fault events handled
>>>>> speculatively. When recording 'faults,spf' events, the 'faults' one
>>>>> counts the total number of page fault events while 'spf' only counts
>>>>> the part of the faults processed speculatively.
>>>>>
>>>>> There are some trace events introduced by this series. They allow
>>>>> identifying why the page faults were not processed speculatively. This
>>>>> doesn't take into account the faults generated by a monothreaded process,
>>>>> which are directly processed while holding the mmap_sem. These trace
>>>>> events are grouped in a system named 'pagefault'; they are:
>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>> our back
>>>>>
>>>>> To record all the related events, the easiest way is to run perf with the
>>>>> following arguments:
>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>
>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>> page faults handled speculatively. It can be seen this way:
>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>
>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>> on x86, PowerPC and arm64.
>>>>>
>>>>> ---------------------
>>>>> Real Workload results
>>>>>
>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>> in-memory multithreaded database product" on a 176-core SMT8 Power system
>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>> second. This run was done on the v6 series, but changes introduced in
>>>>> this new version should not impact the performance boost seen.
>>>>>
>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>> series:
>>>>> vanilla spf
>>>>> faults 89.418 101.364 +13%
>>>>> spf n/a 97.989
>>>>>
>>>>> With the SPF kernel, most of the page faults were processed
>>>>> speculatively.
>>>>>
>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>> it a try on an Android device. He reported that the application launch
>>>>> time was improved on average by 6%, and for large applications
>>>>> (~100 threads) by 20%.
>>>>>
>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>> MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>>>
>>>>> Application 4.9 4.9+spf delta
>>>>> com.tencent.mm 416 389 -7%
>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>> com.tencent.mtt 455 454 0%
>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>> com.autonavi.minimap 711 701 -1%
>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>> com.immomo.momo 501 487 -3%
>>>>> com.tencent.peng 2145 2112 -2%
>>>>> com.smile.gifmaker 491 461 -6%
>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>> com.taobao.taobao 1341 1198 -11%
>>>>> com.baidu.searchbox 333 314 -6%
>>>>> com.tencent.mobileqq 394 384 -3%
>>>>> com.sina.weibo 907 906 0%
>>>>> com.youku.phone 816 731 -11%
>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>> com.UCMobile 415 411 -1%
>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>> com.tencent.qqmusic 336 329 -2%
>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>
>>>>> ------------------
>>>>> Benchmarks results
>>>>>
>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>> SPF is BASE + this series
>>>>>
>>>>> Kernbench:
>>>>> ----------
>>>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>>>> kernel (the kernel is built 5 times):
>>>>>
>>>>> Average Half load -j 8
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>
>>>>> Average Optimal load -j 16
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>
>>>>>
>>>>> During a run on the SPF, perf events were captured:
>>>>> Performance counter stats for '../kernbench -M':
>>>>> 526743764 faults
>>>>> 210 spf
>>>>> 3 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 2278 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> Very few speculative page faults were recorded, as most of the processes
>>>>> involved are monothreaded (it seems that on this architecture some threads
>>>>> were created during the kernel build processing).
>>>>>
>>>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>>>
>>>>> Average Half load -j 40
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>
>>>>> Average Optimal load -j 80
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>
>>>>> During a run on the SPF, perf events were captured:
>>>>> Performance counter stats for '../kernbench -M':
>>>>> 116730856 faults
>>>>> 0 spf
>>>>> 3 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 476 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> Most of the processes involved are monothreaded, so SPF is not activated,
>>>>> but there is no impact on performance.
>>>>>
>>>>> Ebizzy:
>>>>> -------
>>>>> The test counts the number of records per second it can manage; the
>>>>> higher the better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
>>>>> consistent results I repeated the test 100 times and measured the
>>>>> average number of records processed per second.
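The 100-run averaging described above can be scripted in a few lines. This is a sketch only: the availability of `ebizzy` and the exact format of its per-run output ("NNN records/s") are assumptions, so the awk pattern may need adjusting:

```shell
#!/bin/sh
# Repeat the benchmark and print the mean records/s across all runs.
# Assumptions: 'ebizzy' is installed and prints one "... records/s" line per run.
BENCH="ebizzy -mTt $(nproc)"
RUNS=100
for i in $(seq "$RUNS"); do
    $BENCH
done | awk '/records/ { sum += $1; n++ }
            END { printf "average over %d runs: %.2f records/s\n", n, sum / n }'
```

Any benchmark printing a single number per run can be substituted for BENCH.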
>>>>>
>>>>> BASE SPF delta
>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>
>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>> 1706379 faults
>>>>> 1674599 spf
>>>>> 30588 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 363 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>> 1874773 faults
>>>>> 1461153 spf
>>>>> 413293 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 200 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> In ebizzy's case most of the page faults were handled speculatively,
>>>>> leading to the ebizzy performance boost.
>>>>>
>>>>> ------------------
>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>> - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>>>>   and Minchan Kim, hopefully.
>>>>> - Remove an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>   __do_page_fault().
>>>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try lock fails
>>>>>   instead of aborting the speculative page fault handling. Drop the now
>>>>>   useless trace event pagefault:spf_pte_lock.
>>>>> - No more trying to reuse the fetched VMA during the speculative page
>>>>>   fault handling when retrying is needed. This adds a lot of complexity
>>>>>   and additional tests done didn't show a significant performance
>>>>>   improvement.
>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>>
>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>
>>>>>
>>>>> Laurent Dufour (20):
>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>> mm: make pte_unmap_same compatible with SPF
>>>>> mm: introduce INIT_VMA()
>>>>> mm: protect VMA modifications using VMA sequence count
>>>>> mm: protect mremap() against SPF handler
>>>>> mm: protect SPF handler against anon_vma changes
>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>> mm: introduce __vm_normal_page()
>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>> mm: protect mm_rb tree with a rwlock
>>>>> mm: adding speculative page fault failure trace events
>>>>> perf: add a speculative page fault sw event
>>>>> perf tools: add support for the SPF perf event
>>>>> mm: add speculative page fault vmstats
>>>>> powerpc/mm: add speculative page fault
>>>>>
>>>>> Mahendran Ganesh (2):
>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>> arm64/mm: add speculative page fault
>>>>>
>>>>> Peter Zijlstra (4):
>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>> mm: VMA sequence count
>>>>> mm: provide speculative fault infrastructure
>>>>> x86/mm: add speculative pagefault handling
>>>>>
>>>>> arch/arm64/Kconfig | 1 +
>>>>> arch/arm64/mm/fault.c | 12 +
>>>>> arch/powerpc/Kconfig | 1 +
>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>> arch/x86/Kconfig | 1 +
>>>>> arch/x86/mm/fault.c | 27 +-
>>>>> fs/exec.c | 2 +-
>>>>> fs/proc/task_mmu.c | 5 +-
>>>>> fs/userfaultfd.c | 17 +-
>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>> include/linux/migrate.h | 4 +-
>>>>> include/linux/mm.h | 136 +++++++-
>>>>> include/linux/mm_types.h | 7 +
>>>>> include/linux/pagemap.h | 4 +-
>>>>> include/linux/rmap.h | 12 +-
>>>>> include/linux/swap.h | 10 +-
>>>>> include/linux/vm_event_item.h | 3 +
>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>> kernel/fork.c | 5 +-
>>>>> mm/Kconfig | 22 ++
>>>>> mm/huge_memory.c | 6 +-
>>>>> mm/hugetlb.c | 2 +
>>>>> mm/init-mm.c | 3 +
>>>>> mm/internal.h | 20 ++
>>>>> mm/khugepaged.c | 5 +
>>>>> mm/madvise.c | 6 +-
>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>> mm/mempolicy.c | 51 ++-
>>>>> mm/migrate.c | 6 +-
>>>>> mm/mlock.c | 13 +-
>>>>> mm/mmap.c | 229 ++++++++++---
>>>>> mm/mprotect.c | 4 +-
>>>>> mm/mremap.c | 13 +
>>>>> mm/nommu.c | 2 +-
>>>>> mm/rmap.c | 5 +-
>>>>> mm/swap.c | 6 +-
>>>>> mm/swap_state.c | 8 +-
>>>>> mm/vmstat.c | 5 +-
>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>> tools/perf/util/evsel.c | 1 +
>>>>> tools/perf/util/parse-events.c | 4 +
>>>>> tools/perf/util/parse-events.l | 1 +
>>>>> tools/perf/util/python.c | 1 +
>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>
>>>>> --
>>>>> 2.7.4
>>>>>
>>>>>
>>>>
>>>
>>
>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
@ 2018-07-04 7:51 ` Laurent Dufour
0 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-07-04 7:51 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 04/07/2018 05:23, Song, HaiyanX wrote:
> Hi Laurent,
>
>
> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
> I checked the test results: only page_fault3_thread/enable THP has 6% stddev for the head commit; the other tests have lower stddev.
Repeating the test only 3 times seems a bit too low to me.
I'll focus on the larger change for the moment, but I don't have access to such
hardware.
Is it possible to provide a diff between base and SPF of the performance cycles
measured when running page_fault3 and page_fault2 where the 20% change is detected?
Please focus on the test case process to see exactly where the series has
an impact.
Thanks,
Laurent.
>
> And I did not find any other high variation in the test case results.
>
> a). Enable THP
> testcase base stddev change head stddev metric
> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>
> b). Disable THP
> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>
>
> Best regards,
> Haiyan Song
> ________________________________________
> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Monday, July 02, 2018 4:59 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 11/06/2018 09:49, Song, HaiyanX wrote:
>> Hi Laurent,
>>
>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux kernel performance)
>> tested on the Intel 4s Skylake platform. This time we only tested the cases which had been run and shown regressions on
>> the v9 patch series.
>>
>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>> commit id:
>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>> Benchmark: will-it-scale
>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>
>> Metrics:
>> will-it-scale.per_process_ops=processes/nr_cpu
>> will-it-scale.per_thread_ops=threads/nr_cpu
>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>> THP: enable / disable
>> nr_task:100%
>>
>> 1. Regressions:
>>
>> a). Enable THP
>> testcase base change head metric
>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>
>> b). Disable THP
>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>
>> Notes: for the above values of test result, the higher is better.
>
> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
> get reproducible results. The results show huge variation, even on the vanilla
> kernel, and I can't draw any conclusion from that.
>
> I tried on smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
> measure any changes between the vanilla and the SPF patched ones:
>
> test THP enabled 4.17.0-rc4-mm1 spf delta
> page_fault3_threads 2697.7 2683.5 -0.53%
> page_fault2_threads 170660.6 169574.1 -0.64%
> context_switch1_threads 6915269.2 6877507.3 -0.55%
> context_switch1_processes 6478076.2 6529493.5 0.79%
> brk1 243391.2 238527.5 -2.00%
>
> Tests were run 10 times, no high variation detected.
>
> Did you see high variation on your side ? How many times the test were run to
> compute the average values ?
>
> Thanks,
> Laurent.
>
>
>>
>> 2. Improvement: not found improvement based on the selected test cases.
>>
>>
>> Best regards
>> Haiyan Song
>> ________________________________________
>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Monday, May 28, 2018 4:54 PM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> On 28/05/2018 10:22, Haiyan Song wrote:
>>> Hi Laurent,
>>>
>>> Yes, these tests are done on V9 patch.
>>
>> Do you plan to give this V11 a run ?
>>
>>>
>>>
>>> Best regards,
>>> Haiyan Song
>>>
>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>
>>>>> Some regressions and improvements were found by LKP-tools (Linux kernel performance) on the V9 patch series
>>>>> tested on the Intel 4s Skylake platform.
>>>>
>>>> Hi,
>>>>
>>>> Thanks for reporting this benchmark results, but you mentioned the "V9 patch
>>>> series" while responding to the v11 header series...
>>>> Were these tests done on v9 or v11 ?
>>>>
>>>> Cheers,
>>>> Laurent.
>>>>
>>>>>
>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>> Commit id:
>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>> Benchmark suite: will-it-scale
>>>>> Download link:
>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>> Metrics:
>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>> THP: enable / disable
>>>>> nr_task: 100%
>>>>>
>>>>> 1. Regressions:
>>>>> a) THP enabled:
>>>>> testcase base change head metric
>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>
>>>>> b) THP disabled:
>>>>> testcase base change head metric
>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>
>>>>> 2. Improvements:
>>>>> a) THP enabled:
>>>>> testcase base change head metric
>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>
>>>>> b) THP disabled:
>>>>> testcase base change head metric
>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>
>>>>> Notes: for above values in column "change", the higher value means that the related testcase result
>>>>> on head commit is better than that on base commit for this benchmark.
>>>>>
>>>>>
>>>>> Best regards
>>>>> Haiyan Song
>>>>>
>>>>> ________________________________________
>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>
>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>> page fault without holding the mm semaphore [1].
>>>>>
>>>>> The idea is to try to handle user space page faults without holding the
>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>> processes since the page fault handler will not wait for other threads'
>>>>> memory layout changes to be done, assuming that those changes are done in
>>>>> another part of the process's memory space. This type of page fault is
>>>>> named a speculative page fault. If the speculative page fault fails
>>>>> because concurrency is detected or because the underlying PMD or PTE
>>>>> tables are not yet allocated, the speculative handling fails and a
>>>>> classic page fault is then tried.
>>>>>
>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>> address without holding the mmap_sem, this is done by introducing a rwlock
>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>> SRCU but it was introducing a lot of scheduling to process the VMA's
>>>>> freeing operation which was hitting the performance by 20% as reported by
>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree is
>>>>> limiting the locking contention to these operations which are expected to
>>>>> be in O(log n) order. In addition, to ensure that the VMA is not freed
>>>>> behind our back, a reference count is added and two services (get_vma()
>>>>> and put_vma()) are introduced to handle the reference count. Once a VMA
>>>>> is fetched from the RB tree using get_vma(), it must later be freed using
>>>>> put_vma(). I can no longer see the overhead I previously measured with
>>>>> the will-it-scale benchmark.
>>>>>
>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>> have to be protected against parallel changes. This is done by using a per
>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>> handler to fast check for parallel changes in progress and to abort the
>>>>> speculative page fault in that case.
>>>>>
>>>>> Once the VMA has been found, the speculative page fault handler would check
>>>>> for the VMA's attributes to verify that the page fault has to be handled
>>>>> correctly or not. Thus, the VMA is protected through a sequence lock which
>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>> is tried. VMA sequence lockings are added when VMA attributes which are
>>>>> checked during the page fault are modified.
>>>>>
>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>>> so once the page table is locked, the VMA is valid, so any other changes
>>>>> leading to touching this PTE will need to lock the page table, so no
>>>>> parallel change is possible at this time.
>>>>>
>>>>> The locking of the PTE is done with interrupts disabled, this allows
>>>>> checking for the PMD to ensure that there is not an ongoing collapsing
>>>>> operation. Since khugepaged is firstly set the PMD to pmd_none and then is
>>>>> waiting for the other CPU to have caught the IPI interrupt, if the pmd is
>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>> classic page fault handler will be called to handle the operation while
>>>>> holding the mmap_sem. As the PTE lock is done with the interrupts disabled,
>>>>> the lock is done using spin_trylock() to avoid dead lock when handling a
>>>>> page fault while a TLB invalidate is requested by another CPU holding the
>>>>> PTE.
>>>>>
>>>>> In pseudo code, this could be seen as:
>>>>> speculative_page_fault()
>>>>> {
>>>>> vma = get_vma()
>>>>> check vma sequence count
>>>>> check vma's support
>>>>> disable interrupt
>>>>> check pgd,p4d,...,pte
>>>>> save pmd and pte in vmf
>>>>> save vma sequence counter in vmf
>>>>> enable interrupt
>>>>> check vma sequence count
>>>>> handle_pte_fault(vma)
>>>>> ..
>>>>> page = alloc_page()
>>>>> pte_map_lock()
>>>>> disable interrupt
>>>>> abort if sequence counter has changed
>>>>> abort if pmd or pte has changed
>>>>> pte map and lock
>>>>> enable interrupt
>>>>> if abort
>>>>> free page
>>>>> abort
>>>>> ...
>>>>> }
>>>>>
>>>>> arch_fault_handler()
>>>>> {
>>>>> if (speculative_page_fault(&vma))
>>>>> goto done
>>>>> again:
>>>>> lock(mmap_sem)
>>>>> vma = find_vma();
>>>>> handle_pte_fault(vma);
>>>>> if retry
>>>>> unlock(mmap_sem)
>>>>> goto again;
>>>>> done:
>>>>> handle fault error
>>>>> }
>>>>>
>>>>> Support for THP is not done because when checking for the PMD, we can be
>>>>> confused by an in progress collapsing operation done by khugepaged. The
>>>>> issue is that pmd_none() could be true either if the PMD is not already
>>>>> populated or if the underlying PTE are in the way to be collapsed. So we
>>>>> cannot safely allocate a PMD if pmd_none() is true.
>>>>>
>>>>> This series add a new software performance event named 'speculative-faults'
>>>>> or 'spf'. It counts the number of successful page fault event handled
>>>>> speculatively. When recording 'faults,spf' events, the faults one is
>>>>> counting the total number of page fault events while 'spf' is only counting
>>>>> the part of the faults processed speculatively.
>>>>>
>>>>> There are some trace events introduced by this series. They allow
>>>>> identifying why the page faults were not processed speculatively. This
>>>>> doesn't take in account the faults generated by a monothreaded process
>>>>> which directly processed while holding the mmap_sem. This trace events are
>>>>> grouped in a system named 'pagefault', they are:
>>>>> - pagefault:spf_vma_changed : if the VMA has been changed in our back
>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set.
>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>> - pagefault:spf_vma_access : the VMA's access right are not respected
>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed in our
>>>>> back.
>>>>>
>>>>> To record all the related events, the easier is to run perf with the
>>>>> following arguments :
>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>
>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>> page fault handled speculatively. I can be seen this way:
>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>
>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>> on x86, PowerPC and arm64.
>>>>>
>>>>> ---------------------
>>>>> Real Workload results
>>>>>
>>>>> As mentioned in a previous email, we did non-official runs using a
>>>>> "popular in-memory multithreaded database product" on a 176-core SMT8
>>>>> Power system which showed a 30% improvement in the number of
>>>>> transactions processed per second. This run was done on the v6 series,
>>>>> but the changes introduced in this new version should not impact the
>>>>> performance boost seen.
>>>>>
>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>> series:
>>>>> vanilla spf
>>>>> faults 89.418 101.364 +13%
>>>>> spf n/a 97.989
>>>>>
>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>> speculative way.
>>>>>
>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>> it a try on an Android device. He reported that application launch time
>>>>> was improved on average by 6%, and for large applications (~100
>>>>> threads) by 20%.
>>>>>
>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a
>>>>> Qcom MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>>>
>>>>> Application 4.9 4.9+spf delta
>>>>> com.tencent.mm 416 389 -7%
>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>> com.tencent.mtt 455 454 0%
>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>> com.autonavi.minimap 711 701 -1%
>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>> com.immomo.momo 501 487 -3%
>>>>> com.tencent.peng 2145 2112 -2%
>>>>> com.smile.gifmaker 491 461 -6%
>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>> com.taobao.taobao 1341 1198 -11%
>>>>> com.baidu.searchbox 333 314 -6%
>>>>> com.tencent.mobileqq 394 384 -3%
>>>>> com.sina.weibo 907 906 0%
>>>>> com.youku.phone 816 731 -11%
>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>> com.UCMobile 415 411 -1%
>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>> com.tencent.qqmusic 336 329 -2%
>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>
>>>>> ------------------
>>>>> Benchmarks results
>>>>>
>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>> SPF is BASE + this series
>>>>>
>>>>> Kernbench:
>>>>> ----------
>>>>> Here are the results on a 16 CPUs x86 guest using kernbench on a 4.15
>>>>> kernel (the kernel is built 5 times):
>>>>>
>>>>> Average Half load -j 8
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>
>>>>> Average Optimal load -j 16
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>
>>>>>
>>>>> During a run on the SPF, perf events were captured:
>>>>> Performance counter stats for '../kernbench -M':
>>>>> 526743764 faults
>>>>> 210 spf
>>>>> 3 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 2278 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> Very few speculative page faults were recorded, as most of the
>>>>> processes involved are single-threaded (it seems that on this
>>>>> architecture some threads were created during the kernel build
>>>>> process).
>>>>>
>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>
>>>>> Average Half load -j 40
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>
>>>>> Average Optimal load -j 80
>>>>> Run (std deviation)
>>>>> BASE SPF
>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>
>>>>> During a run on the SPF, perf events were captured:
>>>>> Performance counter stats for '../kernbench -M':
>>>>> 116730856 faults
>>>>> 0 spf
>>>>> 3 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 476 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> Most of the processes involved are single-threaded, so SPF is not
>>>>> activated, but there is no impact on performance.
>>>>>
>>>>> Ebizzy:
>>>>> -------
>>>>> The test counts the number of records per second it can manage; higher
>>>>> is better. I ran it this way: 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>> results, I repeated the test 100 times and measured the average.
>>>>>
>>>>> BASE SPF delta
>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>
>>>>> Here are the performance counters read during a run on a 16 CPUs x86 VM:
>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>> 1706379 faults
>>>>> 1674599 spf
>>>>> 30588 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 363 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> And the ones captured during a run on an 80 CPUs Power node:
>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>> 1874773 faults
>>>>> 1461153 spf
>>>>> 413293 pagefault:spf_vma_changed
>>>>> 0 pagefault:spf_vma_noanon
>>>>> 200 pagefault:spf_vma_notsup
>>>>> 0 pagefault:spf_vma_access
>>>>> 0 pagefault:spf_pmd_changed
>>>>>
>>>>> In ebizzy's case most of the page faults were handled in a speculative
>>>>> way, leading to the ebizzy performance boost.
>>>>>
>>>>> ------------------
>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>> - Accounted for all review feedback from Punit Agrawal, Ganesh
>>>>> Mahendran and Minchan Kim, hopefully.
>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>> __do_page_fault().
>>>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try lock
>>>>> fails instead of aborting the speculative page fault handling. Drop
>>>>> the now useless trace event pagefault:spf_pte_lock.
>>>>> - No longer try to reuse the fetched VMA during the speculative page
>>>>> fault handling when retrying is needed. This added a lot of complexity
>>>>> and additional tests didn't show a significant performance improvement.
>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>>
>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>
>>>>>
>>>>> Laurent Dufour (20):
>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>> mm: make pte_unmap_same compatible with SPF
>>>>> mm: introduce INIT_VMA()
>>>>> mm: protect VMA modifications using VMA sequence count
>>>>> mm: protect mremap() against SPF handler
>>>>> mm: protect SPF handler against anon_vma changes
>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>> mm: introduce __vm_normal_page()
>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>> mm: protect mm_rb tree with a rwlock
>>>>> mm: adding speculative page fault failure trace events
>>>>> perf: add a speculative page fault sw event
>>>>> perf tools: add support for the SPF perf event
>>>>> mm: add speculative page fault vmstats
>>>>> powerpc/mm: add speculative page fault
>>>>>
>>>>> Mahendran Ganesh (2):
>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>> arm64/mm: add speculative page fault
>>>>>
>>>>> Peter Zijlstra (4):
>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>> mm: VMA sequence count
>>>>> mm: provide speculative fault infrastructure
>>>>> x86/mm: add speculative pagefault handling
>>>>>
>>>>> arch/arm64/Kconfig | 1 +
>>>>> arch/arm64/mm/fault.c | 12 +
>>>>> arch/powerpc/Kconfig | 1 +
>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>> arch/x86/Kconfig | 1 +
>>>>> arch/x86/mm/fault.c | 27 +-
>>>>> fs/exec.c | 2 +-
>>>>> fs/proc/task_mmu.c | 5 +-
>>>>> fs/userfaultfd.c | 17 +-
>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>> include/linux/migrate.h | 4 +-
>>>>> include/linux/mm.h | 136 +++++++-
>>>>> include/linux/mm_types.h | 7 +
>>>>> include/linux/pagemap.h | 4 +-
>>>>> include/linux/rmap.h | 12 +-
>>>>> include/linux/swap.h | 10 +-
>>>>> include/linux/vm_event_item.h | 3 +
>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>> kernel/fork.c | 5 +-
>>>>> mm/Kconfig | 22 ++
>>>>> mm/huge_memory.c | 6 +-
>>>>> mm/hugetlb.c | 2 +
>>>>> mm/init-mm.c | 3 +
>>>>> mm/internal.h | 20 ++
>>>>> mm/khugepaged.c | 5 +
>>>>> mm/madvise.c | 6 +-
>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>> mm/mempolicy.c | 51 ++-
>>>>> mm/migrate.c | 6 +-
>>>>> mm/mlock.c | 13 +-
>>>>> mm/mmap.c | 229 ++++++++++---
>>>>> mm/mprotect.c | 4 +-
>>>>> mm/mremap.c | 13 +
>>>>> mm/nommu.c | 2 +-
>>>>> mm/rmap.c | 5 +-
>>>>> mm/swap.c | 6 +-
>>>>> mm/swap_state.c | 8 +-
>>>>> mm/vmstat.c | 5 +-
>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>> tools/perf/util/evsel.c | 1 +
>>>>> tools/perf/util/parse-events.c | 4 +
>>>>> tools/perf/util/parse-events.l | 1 +
>>>>> tools/perf/util/python.c | 1 +
>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>
>>>>> --
>>>>> 2.7.4
>>>>>
>>>>>
>>>>
>>>
>>
>
>
* Re: [PATCH v11 00/26] Speculative page faults
2018-07-04 7:51 ` Laurent Dufour
@ 2018-07-11 17:05 ` Laurent Dufour
-1 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-07-11 17:05 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Haiyan,
Did you get a chance to capture some performance data on your system?
I still can't reproduce these numbers on my hardware.
Thanks,
Laurent.
On 04/07/2018 09:51, Laurent Dufour wrote:
> On 04/07/2018 05:23, Song, HaiyanX wrote:
>> Hi Laurent,
>>
>>
>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
>> I checked the test results; only page_fault3_thread/enable THP has 6% stddev for the head commit, the other tests have lower stddev.
>
> Repeating the test only 3 times seems a bit too low to me.
>
> I'll focus on the bigger change for the moment, but I don't have access to such
> hardware.
>
> Is it possible to provide a diff of the performance cycles between base and SPF,
> measured when running page_fault3 and page_fault2 where the ~20% change is detected?
>
> Please focus on the test case process to see exactly where the series has an
> impact.
>
> Thanks,
> Laurent.
>
>>
>> And I did not find any other high variation in the test case results.
>>
>> a). Enable THP
>> testcase base stddev change head stddev metric
>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>
>> b). Disable THP
>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>
>>
>> Best regards,
>> Haiyan Song
>> ________________________________________
>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Monday, July 02, 2018 4:59 PM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>> Regression tests for the v11 patch series have been run; some regressions were
>>> found by LKP-tools (Linux kernel performance) on an Intel 4s Skylake platform.
>>> This time only the cases which had been run and showed regressions on the
>>> V9 patch series were tested.
>>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>> commit id:
>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>> Benchmark: will-it-scale
>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>
>>> Metrics:
>>> will-it-scale.per_process_ops=processes/nr_cpu
>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>> THP: enable / disable
>>> nr_task:100%
>>>
>>> 1. Regressions:
>>>
>>> a). Enable THP
>>> testcase base change head metric
>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>> Notes: for the above test result values, higher is better.
>>
>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>> get reproducible results. The results show huge variation, even on the vanilla
>> kernel, and I can't draw any conclusions because of that.
>>
>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I
>> didn't measure any change between the vanilla and the SPF-patched kernels:
>>
>> test THP enabled 4.17.0-rc4-mm1 spf delta
>> page_fault3_threads 2697.7 2683.5 -0.53%
>> page_fault2_threads 170660.6 169574.1 -0.64%
>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>> context_switch1_processes 6478076.2 6529493.5 0.79%
>> brk1 243391.2 238527.5 -2.00%
>>
>> Tests were run 10 times, no high variation detected.
>>
>> Did you see high variation on your side? How many times were the tests run to
>> compute the average values?
>>
>> Thanks,
>> Laurent.
>>
>>
>>>
>>> 2. Improvements: no improvement found based on the selected test cases.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, May 28, 2018 4:54 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>> Hi Laurent,
>>>>
>>>> Yes, these tests are done on V9 patch.
>>>
>>> Do you plan to give this V11 a run ?
>>>
>>>>
>>>>
>>>> Best regards,
>>>> Haiyan Song
>>>>
>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>
>>>>>> Some regressions and improvements were found by LKP-tools (Linux kernel
>>>>>> performance) on the V9 patch series tested on an Intel 4s Skylake platform.
>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9
>>>>> patch series" while responding to the v11 header series...
>>>>> Were these tests done on v9 or v11?
>>>>>
>>>>> Cheers,
>>>>> Laurent.
>>>>>
>>>>>>
>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>> Commit id:
>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>> Benchmark suite: will-it-scale
>>>>>> Download link:
>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>> Metrics:
>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>> THP: enable / disable
>>>>>> nr_task: 100%
>>>>>>
>>>>>> 1. Regressions:
>>>>>> a) THP enabled:
>>>>>> testcase base change head metric
>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>
>>>>>> b) THP disabled:
>>>>>> testcase base change head metric
>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>
>>>>>> 2. Improvements:
>>>>>> a) THP enabled:
>>>>>> testcase base change head metric
>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>
>>>>>> b) THP disabled:
>>>>>> testcase base change head metric
>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>
>>>>>> Notes: for the above values in the "change" column, a higher value means
>>>>>> that the related testcase result on the head commit is better than on the
>>>>>> base commit for this benchmark.
>>>>>>
>>>>>>
>>>>>> Best regards
>>>>>> Haiyan Song
>>>>>>
>>>>>> ________________________________________
>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>
>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to
>>>>>> handle page faults without holding the mm semaphore [1].
>>>>>>
>>>>>> The idea is to try to handle user space page faults without holding
>>>>>> the mmap_sem. This should allow better concurrency for massively
>>>>>> threaded processes since the page fault handler will not wait for
>>>>>> other threads' memory layout changes to be done, assuming that the
>>>>>> change is done in another part of the process's memory space. This
>>>>>> type of page fault is named a speculative page fault. If the
>>>>>> speculative page fault fails because a concurrent change is detected
>>>>>> or because the underlying PMD or PTE tables are not yet allocated,
>>>>>> the speculative handling is aborted and a classic page fault is tried
>>>>>> instead.
>>>>>>
>>>>>> The speculative page fault (SPF) handler has to look for the VMA
>>>>>> matching the fault address without holding the mmap_sem; this is done
>>>>>> by introducing a rwlock which protects access to the mm_rb tree.
>>>>>> Previously this was done using SRCU, but it introduced a lot of
>>>>>> scheduling to process the VMA freeing operations, which hit the
>>>>>> performance by 20% as reported by Kemi Wang [2]. Using a rwlock to
>>>>>> protect access to the mm_rb tree limits the locking contention to
>>>>>> these operations, which are expected to be O(log n). In addition, to
>>>>>> ensure that the VMA is not freed behind our back, a reference count
>>>>>> is added and 2 services (get_vma() and put_vma()) are introduced to
>>>>>> handle the reference count. Once a VMA is fetched from the RB tree
>>>>>> using get_vma(), it must later be freed using put_vma(). With this I
>>>>>> no longer see the overhead I got with the will-it-scale benchmark.
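The get_vma()/put_vma() pairing described above can be sketched in userspace C. This is an illustrative stand-in, not the kernel code: the struct layout and the `freed` flag are hypothetical, and atomics stand in for the kernel's refcount primitives.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical userspace stand-in for a reference-counted VMA. */
struct vma {
    atomic_int ref_count;
    bool freed;           /* the kernel would actually free the VMA */
};

/* In the series this is taken under the mm_rb rwlock, so the VMA
 * cannot disappear between the tree lookup and the refcount bump. */
static void get_vma(struct vma *vma)
{
    atomic_fetch_add(&vma->ref_count, 1);
}

/* Drops a reference; the VMA is freed only when the last one goes. */
static void put_vma(struct vma *vma)
{
    if (atomic_fetch_sub(&vma->ref_count, 1) == 1)
        vma->freed = true;
}
```

The key property is that the speculative handler can keep using the fetched VMA after releasing the mm_rb lock, because an unmap running concurrently only drops its own reference and the memory survives until put_vma() releases the last one.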
>>>>>>
>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>> handler to fast check for parallel changes in progress and to abort the
>>>>>> speculative page fault in that case.
>>>>>>
>>>>>> Once the VMA has been found, the speculative page fault handler
>>>>>> checks the VMA's attributes to verify whether the page fault can be
>>>>>> handled correctly. Thus, the VMA is protected through a sequence lock
>>>>>> which allows fast detection of concurrent VMA changes. If such a
>>>>>> change is detected, the speculative page fault is aborted and a
>>>>>> *classic* page fault is tried. VMA sequence locking is added where
>>>>>> VMA attributes which are checked during the page fault are modified.
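The read side of such a sequence lock can be sketched as follows. The kernel uses the seqcount_t helpers (write_seqcount_begin()/end(), read_seqcount_begin()/retry()); this userspace sketch mimics the same even/odd protocol with C11 atomics, and the function names are made up for illustration.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace mock of a per-VMA sequence count. */
static atomic_uint vma_seq;  /* even: stable, odd: write in progress */

/* Writer side: bracket any modification of checked VMA attributes. */
static void vma_write_begin(void) { atomic_fetch_add(&vma_seq, 1); }
static void vma_write_end(void)   { atomic_fetch_add(&vma_seq, 1); }

/* Reader side: snapshot a stable (even) counter value... */
static unsigned vma_read_begin(void)
{
    unsigned seq;
    while ((seq = atomic_load(&vma_seq)) & 1)
        ;  /* odd means a writer is in progress */
    return seq;
}

/* ...and later check it did not move; true means the SPF must abort. */
static bool vma_read_retry(unsigned seq)
{
    return atomic_load(&vma_seq) != seq;
}
```

The even/odd trick lets the speculative handler detect a concurrent VMA change without taking any lock; on a mismatch it simply aborts and falls back to the classic handler instead of blocking.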
>>>>>>
>>>>>> When the PTE is fetched, the VMA is checked to see if it has been
>>>>>> changed, so once the page table is locked, the VMA is known to be
>>>>>> valid. Any other change touching this PTE will need to lock the page
>>>>>> table, so no parallel change is possible at this time.
>>>>>>
>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>> checking the PMD to ensure that there is no ongoing collapse
>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>> waits for the other CPUs to have caught the IPI interrupt, if the PMD
>>>>>> is valid at the time the PTE is locked, we have the guarantee that
>>>>>> the collapse operation will have to wait on the PTE lock to move
>>>>>> forward. This allows the SPF handler to map the PTE safely. If the
>>>>>> PMD value is different from the one recorded at the beginning of the
>>>>>> SPF operation, the classic page fault handler will be called to
>>>>>> handle the operation while holding the mmap_sem. As the PTE locking
>>>>>> is done with interrupts disabled, the lock is taken using
>>>>>> spin_trylock() to avoid a deadlock when handling a page fault while a
>>>>>> TLB invalidation is requested by another CPU holding the PTE lock.
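The back-off behaviour described above can be illustrated with a userspace sketch: with interrupts disabled we must not spin on a lock held by a CPU that is itself waiting for our IPI acknowledgement, so the SPF path tries the lock once and aborts on contention. pthread_mutex_trylock() stands in for spin_trylock() here, and the function names are hypothetical.

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace stand-in for the PTE spinlock. */
static pthread_mutex_t pte_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns true if the PTE lock was taken; false means the SPF
 * must abort and fall back to the classic, mmap_sem-holding
 * handler rather than spin with interrupts disabled. */
static bool spf_pte_trylock(void)
{
    return pthread_mutex_trylock(&pte_lock) == 0;
}

static void spf_pte_unlock(void)
{
    pthread_mutex_unlock(&pte_lock);
}
```

In the real series the abort path re-enables interrupts and retries the fault the classic way, which is why a failed trylock is cheap: it only costs one extra walk through the normal page fault path.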
>>>>>>
>>>>>> In pseudo code, this could be seen as:
>>>>>> speculative_page_fault()
>>>>>> {
>>>>>> vma = get_vma()
>>>>>> check vma sequence count
>>>>>> check vma's support
>>>>>> disable interrupt
>>>>>> check pgd,p4d,...,pte
>>>>>> save pmd and pte in vmf
>>>>>> save vma sequence counter in vmf
>>>>>> enable interrupt
>>>>>> check vma sequence count
>>>>>> handle_pte_fault(vma)
>>>>>> ..
>>>>>> page = alloc_page()
>>>>>> pte_map_lock()
>>>>>> disable interrupt
>>>>>> abort if sequence counter has changed
>>>>>> abort if pmd or pte has changed
>>>>>> pte map and lock
>>>>>> enable interrupt
>>>>>> if abort
>>>>>> free page
>>>>>> abort
>>>>>> ...
>>>>>> }
>>>>>>
>>>>>> arch_fault_handler()
>>>>>> {
>>>>>> if (speculative_page_fault(&vma))
>>>>>> goto done
>>>>>> again:
>>>>>> lock(mmap_sem)
>>>>>> vma = find_vma();
>>>>>> handle_pte_fault(vma);
>>>>>> if retry
>>>>>> unlock(mmap_sem)
>>>>>> goto again;
>>>>>> done:
>>>>>> handle fault error
>>>>>> }
>>>>>>
>>>>>> Support for THP is not implemented because, when checking the PMD, we
>>>>>> can be confused by an in-progress collapse operation run by
>>>>>> khugepaged. The issue is that pmd_none() can be true either when the
>>>>>> PMD is not yet populated or when the underlying PTEs are about to be
>>>>>> collapsed, so we cannot safely allocate a PMD when pmd_none() is true.
>>>>>>
>>>>>> This series adds a new software performance event named
>>>>>> 'speculative-faults' or 'spf'. It counts the number of page faults
>>>>>> successfully handled in a speculative way. When recording 'faults,spf'
>>>>>> events, 'faults' counts the total number of page fault events while
>>>>>> 'spf' counts only the part of the faults processed speculatively.
>>>>>>
>>>>>> This series also introduces some trace events. They allow identifying
>>>>>> why page faults were not processed speculatively. They do not account
>>>>>> for the faults generated by a single-threaded process, which are
>>>>>> directly processed while holding the mmap_sem. These trace events are
>>>>>> grouped in a system named 'pagefault'; they are:
>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>> - pagefault:spf_vma_access : the VMA's access rights do not allow the
>>>>>> access
>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>> our back
>>>>>>
>>>>>> To record all the related events, the easiest way is to run perf with
>>>>>> the following arguments:
>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>
>>>>>> There is also a dedicated vmstat counter showing the number of page
>>>>>> faults successfully handled speculatively. It can be seen this way:
>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>
>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>> on x86, PowerPC and arm64.
>>>>>>
>>>>>> ---------------------
>>>>>> Real Workload results
>>>>>>
>>>>>> As mentioned in a previous email, we did non-official runs using a
>>>>>> "popular in-memory multithreaded database product" on a 176-core SMT8
>>>>>> Power system which showed a 30% improvement in the number of
>>>>>> transactions processed per second. This run was done on the v6
>>>>>> series, but the changes introduced in this new version should not
>>>>>> impact the performance boost seen.
>>>>>>
>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>> series:
>>>>>> vanilla spf
>>>>>> faults 89.418 101.364 +13%
>>>>>> spf n/a 97.989
>>>>>>
>>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>>> speculative way.
>>>>>>
>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and
>>>>>> gave it a try on an Android device. He reported that application
>>>>>> launch time was improved on average by 6%, and for large applications
>>>>>> (~100 threads) by 20%.
>>>>>>
>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a
>>>>>> Qcom MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>>>>
>>>>>> Application 4.9 4.9+spf delta
>>>>>> com.tencent.mm 416 389 -7%
>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>> com.tencent.mtt 455 454 0%
>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>> com.immomo.momo 501 487 -3%
>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>> com.sina.weibo 907 906 0%
>>>>>> com.youku.phone 816 731 -11%
>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>> com.UCMobile 415 411 -1%
>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>
>>>>>> ------------------
>>>>>> Benchmarks results
>>>>>>
>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>> SPF is BASE + this series
>>>>>>
>>>>>> Kernbench:
>>>>>> ----------
>>>>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>>>>> kernel (the kernel is built 5 times):
>>>>>>
>>>>>> Average Half load -j 8
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>
>>>>>> Average Optimal load -j 16
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>
>>>>>>
>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>> Performance counter stats for '../kernbench -M':
>>>>>> 526743764 faults
>>>>>> 210 spf
>>>>>> 3 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> Very few speculative page faults were recorded, as most of the processes
>>>>>> involved are single-threaded (it seems that on this architecture some
>>>>>> threads were created during the kernel build process).
>>>>>>
>>>>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>>>>
>>>>>> Average Half load -j 40
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>
>>>>>> Average Optimal load -j 80
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>
>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>> Performance counter stats for '../kernbench -M':
>>>>>> 116730856 faults
>>>>>> 0 spf
>>>>>> 3 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 476 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> Most of the processes involved are single-threaded, so SPF is not
>>>>>> activated, but there is no impact on performance.
>>>>>>
>>>>>> Ebizzy:
>>>>>> -------
>>>>>> The test counts the number of records per second it can manage; higher is
>>>>>> better. I ran it as 'ebizzy -mTt <nrcpus>'. To get consistent results I
>>>>>> repeated the test 100 times and measured the average. The number reported
>>>>>> is records processed per second, and higher is better.
>>>>>>
>>>>>> BASE SPF delta
>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>
>>>>>> Here are the performance counters read during a run on a 16-CPU x86 VM:
>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>> 1706379 faults
>>>>>> 1674599 spf
>>>>>> 30588 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 363 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> And the ones captured during a run on an 80-CPU Power node:
>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>> 1874773 faults
>>>>>> 1461153 spf
>>>>>> 413293 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 200 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> In ebizzy's case most of the page faults were handled speculatively,
>>>>>> leading to the ebizzy performance boost.
>>>>>>
>>>>>> ------------------
>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>> - Hopefully accounted for all review feedback from Punit Agrawal,
>>>>>> Ganesh Mahendran and Minchan Kim.
>>>>>> - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>> __do_page_fault().
>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try lock fails
>>>>>> instead of aborting the speculative page fault handling. Dropped the
>>>>>> now useless trace event pagefault:spf_pte_lock.
>>>>>> - No longer try to reuse the fetched VMA when the speculative page fault
>>>>>> handling needs to be retried. This added a lot of complexity and
>>>>>> additional tests didn't show a significant performance improvement.
>>>>>> - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>>>
>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>
>>>>>>
>>>>>> Laurent Dufour (20):
>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>> mm: introduce INIT_VMA()
>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>> mm: protect mremap() against SPF hanlder
>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>> mm: introduce __vm_normal_page()
>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>> mm: adding speculative page fault failure trace events
>>>>>> perf: add a speculative page fault sw event
>>>>>> perf tools: add support for the SPF perf event
>>>>>> mm: add speculative page fault vmstats
>>>>>> powerpc/mm: add speculative page fault
>>>>>>
>>>>>> Mahendran Ganesh (2):
>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> arm64/mm: add speculative page fault
>>>>>>
>>>>>> Peter Zijlstra (4):
>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>> mm: VMA sequence count
>>>>>> mm: provide speculative fault infrastructure
>>>>>> x86/mm: add speculative pagefault handling
>>>>>>
>>>>>> arch/arm64/Kconfig | 1 +
>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>> arch/x86/Kconfig | 1 +
>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>> fs/exec.c | 2 +-
>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>> fs/userfaultfd.c | 17 +-
>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>> include/linux/migrate.h | 4 +-
>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>> include/linux/mm_types.h | 7 +
>>>>>> include/linux/pagemap.h | 4 +-
>>>>>> include/linux/rmap.h | 12 +-
>>>>>> include/linux/swap.h | 10 +-
>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>> kernel/fork.c | 5 +-
>>>>>> mm/Kconfig | 22 ++
>>>>>> mm/huge_memory.c | 6 +-
>>>>>> mm/hugetlb.c | 2 +
>>>>>> mm/init-mm.c | 3 +
>>>>>> mm/internal.h | 20 ++
>>>>>> mm/khugepaged.c | 5 +
>>>>>> mm/madvise.c | 6 +-
>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>> mm/mempolicy.c | 51 ++-
>>>>>> mm/migrate.c | 6 +-
>>>>>> mm/mlock.c | 13 +-
>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>> mm/mprotect.c | 4 +-
>>>>>> mm/mremap.c | 13 +
>>>>>> mm/nommu.c | 2 +-
>>>>>> mm/rmap.c | 5 +-
>>>>>> mm/swap.c | 6 +-
>>>>>> mm/swap_state.c | 8 +-
>>>>>> mm/vmstat.c | 5 +-
>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>> tools/perf/util/python.c | 1 +
>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>
>>>>>> --
>>>>>> 2.7.4
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
@ 2018-07-11 17:05 ` Laurent Dufour
0 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-07-11 17:05 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
Hi Haiyan,
Did you get a chance to capture some performance cycles on your system?
I still can't get these numbers on my hardware.
Thanks,
Laurent.
On 04/07/2018 09:51, Laurent Dufour wrote:
> On 04/07/2018 05:23, Song, HaiyanX wrote:
>> Hi Laurent,
>>
>>
>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G Memory), the below test cases were all run 3 times.
>> Checking the test results, only page_fault3_thread/enable THP has a 6% stddev for the head commit; the other tests have lower stddev.
>
> Repeating the test only 3 times seems a bit too low to me.
>
> I'll focus on the higher change for the moment, but I don't have access to such
> a hardware.
>
> Is it possible to provide a diff between base and SPF of the performance cycles
> measured when running page_fault3 and page_fault2 where the 20% change is detected?
>
> Please stay focused on the test case process to see exactly where the series
> has an impact.
>
> Thanks,
> Laurent.
>
>>
>> And I did not find other high variation on test case result.
>>
>> a). Enable THP
>> testcase base stddev change head stddev metric
>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>> page_fault2/enable THP           8281 ± 2%   -18.8%       6728            will-it-scale.per_thread_ops
>> brk1/enable THP                998475        -2.2%     976893            will-it-scale.per_process_ops
>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>
>> b). Disable THP
>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>
>>
>> Best regards,
>> Haiyan Song
>> ________________________________________
>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Monday, July 02, 2018 4:59 PM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux kernel performance)
>>> on an Intel 4s Skylake platform. This time only the cases which had been run and showed regressions on the
>>> v9 patch series were tested.
>>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>> commit id:
>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>> Benchmark: will-it-scale
>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>
>>> Metrics:
>>> will-it-scale.per_process_ops=processes/nr_cpu
>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>> THP: enable / disable
>>> nr_task:100%
>>>
>>> 1. Regressions:
>>>
>>> a). Enable THP
>>> testcase base change head metric
>>> page_fault3/enable THP          10519       -20.5%       8368            will-it-scale.per_thread_ops
>>> page_fault2/enable THP           8281       -18.8%       6728            will-it-scale.per_thread_ops
>>> brk1/enable THP                998475        -2.2%     976893            will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>> Note: for the above test result values, higher is better.
>>
>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>> get reproducible results. The results have huge variation, even on the vanilla
>> kernel, and I can't state anything about the changes because of that.
>>
>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I
>> didn't measure any change between the vanilla and the SPF-patched kernels:
>>
>> test THP enabled 4.17.0-rc4-mm1 spf delta
>> page_fault3_threads 2697.7 2683.5 -0.53%
>> page_fault2_threads 170660.6 169574.1 -0.64%
>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>> context_switch1_processes 6478076.2 6529493.5 0.79%
>> brk1 243391.2 238527.5 -2.00%
>>
>> Tests were run 10 times, no high variation detected.
>>
>> Did you see high variation on your side? How many times were the tests run to
>> compute the average values?
>>
>> Thanks,
>> Laurent.
>>
>>
>>>
>>> 2. Improvement: not found improvement based on the selected test cases.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, May 28, 2018 4:54 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>> Hi Laurent,
>>>>
>>>> Yes, these tests are done on V9 patch.
>>>
>>> Do you plan to give this V11 a run ?
>>>
>>>>
>>>>
>>>> Best regards,
>>>> Haiyan Song
>>>>
>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>
>>>>>> Some regressions and improvements were found by LKP-tools (Linux kernel performance) on the v9 patch series
>>>>>> tested on an Intel 4s Skylake platform.
>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for reporting these benchmark results, but you mentioned the "v9 patch
>>>>> series" while responding to the v11 header series...
>>>>> Were these tests done on v9 or v11?
>>>>>
>>>>> Cheers,
>>>>> Laurent.
>>>>>
>>>>>>
>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>> Commit id:
>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>> Benchmark suite: will-it-scale
>>>>>> Download link:
>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>> Metrics:
>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>> THP: enable / disable
>>>>>> nr_task: 100%
>>>>>>
>>>>>> 1. Regressions:
>>>>>> a) THP enabled:
>>>>>> testcase base change head metric
>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>
>>>>>> b) THP disabled:
>>>>>> testcase base change head metric
>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>
>>>>>> 2. Improvements:
>>>>>> a) THP enabled:
>>>>>> testcase base change head metric
>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>
>>>>>> b) THP disabled:
>>>>>> testcase base change head metric
>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>
>>>>>> Notes: for the above values in the "change" column, a higher value means that the testcase result
>>>>>> on the head commit is better than on the base commit for this benchmark.
>>>>>>
>>>>>>
>>>>>> Best regards
>>>>>> Haiyan Song
>>>>>>
>>>>>> ________________________________________
>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>
>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>> page faults without holding the mm semaphore [1].
>>>>>>
>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>> processes, since the page fault handler will no longer wait for other
>>>>>> threads' memory layout changes to be done, assuming that such a change is
>>>>>> done in another part of the process's memory space. This type of page
>>>>>> fault is named a speculative page fault. If the speculative page fault
>>>>>> fails because concurrency is detected or because the underlying PMD or PTE
>>>>>> tables are not yet allocated, it aborts and a classic page fault is tried.
>>>>>>
>>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA
>>>>>> freeing operations, which was hurting performance by 20% as reported by
>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>> the locking contention to these operations, which are expected to be
>>>>>> O(log n). In addition, to ensure that the VMA is not freed behind
>>>>>> our back, a reference count is added and 2 services (get_vma() and
>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>> fetched from the RB tree using get_vma(), it must later be freed using
>>>>>> put_vma(). With this scheme I no longer see the overhead I previously
>>>>>> observed with the will-it-scale benchmark.
>>>>>>
>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>> handler to fast check for parallel changes in progress and to abort the
>>>>>> speculative page fault in that case.
>>>>>>
>>>>>> Once the VMA has been found, the speculative page fault handler would check
>>>>>> for the VMA's attributes to verify that the page fault has to be handled
>>>>>> correctly or not. Thus, the VMA is protected through a sequence lock which
>>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>>> is tried. VMA sequence locking is added wherever a VMA attribute that is
>>>>>> checked during the page fault is modified.
>>>>>>
>>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed;
>>>>>> once the page table is locked, the VMA is known to be valid, and any other
>>>>>> change touching this PTE will need to take the page table lock, so no
>>>>>> parallel change is possible at this time.
>>>>>>
>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>> checking the PMD to ensure that there is no ongoing collapsing
>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>> waits for the other CPUs to have caught the IPI, if the PMD is
>>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>>> classic page fault handler will be called to handle the operation while
>>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>>> the lock is acquired using spin_trylock() to avoid deadlocking when
>>>>>> handling a page fault while a TLB invalidate is requested by another CPU
>>>>>> holding the PTE lock.
>>>>>>
>>>>>> In pseudo code, this could be seen as:
>>>>>> speculative_page_fault()
>>>>>> {
>>>>>> vma = get_vma()
>>>>>> check vma sequence count
>>>>>> check vma's support
>>>>>> disable interrupt
>>>>>> check pgd,p4d,...,pte
>>>>>> save pmd and pte in vmf
>>>>>> save vma sequence counter in vmf
>>>>>> enable interrupt
>>>>>> check vma sequence count
>>>>>> handle_pte_fault(vma)
>>>>>> ..
>>>>>> page = alloc_page()
>>>>>> pte_map_lock()
>>>>>> disable interrupt
>>>>>> abort if sequence counter has changed
>>>>>> abort if pmd or pte has changed
>>>>>> pte map and lock
>>>>>> enable interrupt
>>>>>> if abort
>>>>>> free page
>>>>>> abort
>>>>>> ...
>>>>>> }
>>>>>>
>>>>>> arch_fault_handler()
>>>>>> {
>>>>>> if (speculative_page_fault(&vma))
>>>>>> goto done
>>>>>> again:
>>>>>> lock(mmap_sem)
>>>>>> vma = find_vma();
>>>>>> handle_pte_fault(vma);
>>>>>> if retry
>>>>>> unlock(mmap_sem)
>>>>>> goto again;
>>>>>> done:
>>>>>> handle fault error
>>>>>> }
>>>>>>
>>>>>> Support for THP is not done because, when checking the PMD, we can be
>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>> issue is that pmd_none() could be true either if the PMD is not already
>>>>>> populated or if the underlying PTEs are about to be collapsed. So we
>>>>>> cannot safely allocate a PMD if pmd_none() is true.
>>>>>>
>>>>>> This series adds a new software performance event named
>>>>>> 'speculative-faults' or 'spf'. It counts the number of page fault events
>>>>>> successfully handled speculatively. When recording 'faults,spf' events,
>>>>>> the 'faults' one counts the total number of page fault events while 'spf'
>>>>>> only counts the part of the faults processed speculatively.
>>>>>>
>>>>>> There are some trace events introduced by this series. They help identify
>>>>>> why page faults were not processed speculatively. They don't take into
>>>>>> account the faults generated by a single-threaded process, which are
>>>>>> directly processed while holding the mmap_sem. These trace events are
>>>>>> grouped in a system named 'pagefault'; they are:
>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>> our back
>>>>>>
>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>> following arguments:
>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>
>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>
>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>> on x86, PowerPC and arm64.
>>>>>>
>>>>>> ---------------------
>>>>>> Real Workload results
>>>>>>
>>>>>> As mentioned in previous email, we did non official runs using a "popular
>>>>>> in memory multithreaded database product" on 176 cores SMT8 Power system
>>>>>> which showed a 30% improvements in the number of transaction processed per
>>>>>> second. This run has been done on the v6 series, but changes introduced in
>>>>>> this new version should not impact the performance boost seen.
>>>>>>
>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>> series:
>>>>>> vanilla spf
>>>>>> faults 89.418 101.364 +13%
>>>>>> spf n/a 97.989
>>>>>>
>>>>>> With the SPF kernel, most of the page fault were processed in a speculative
>>>>>> way.
>>>>>>
>>>>>> Ganesh Mahendran had backported the series on top of a 4.9 kernel and gave
>>>>>> it a try on an android device. He reported that the application launch time
>>>>>> was improved in average by 6%, and for large applications (~100 threads) by
>>>>>> 20%.
>>>>>>
>>>>>> Here are the launch time Ganesh mesured on Android 8.0 on top of a Qcom
>>>>>> MSM845 (8 cores) with 6GB (the less is better):
>>>>>>
>>>>>> Application 4.9 4.9+spf delta
>>>>>> com.tencent.mm 416 389 -7%
>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>> com.tencent.mtt 455 454 0%
>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>> com.immomo.momo 501 487 -3%
>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>> com.sina.weibo 907 906 0%
>>>>>> com.youku.phone 816 731 -11%
>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>> com.UCMobile 415 411 -1%
>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>
>>>>>> ------------------
>>>>>> Benchmarks results
>>>>>>
>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>> SPF is BASE + this series
>>>>>>
>>>>>> Kernbench:
>>>>>> ----------
>>>>>> Here are the results on a 16 CPUs X86 guest using kernbench on a 4.15
>>>>>> kernel (kernel is build 5 times):
>>>>>>
>>>>>> Average Half load -j 8
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>
>>>>>> Average Optimal load -j 16
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>
>>>>>>
>>>>>> During a run on the SPF, perf events were captured:
>>>>>> Performance counter stats for '../kernbench -M':
>>>>>> 526743764 faults
>>>>>> 210 spf
>>>>>> 3 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> Very few speculative page faults were recorded as most of the processes
>>>>>> involved are monothreaded (sounds that on this architecture some threads
>>>>>> were created during the kernel build processing).
>>>>>>
>>>>>> Here are the kerbench results on a 80 CPUs Power8 system:
>>>>>>
>>>>>> Average Half load -j 40
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>
>>>>>> Average Optimal load -j 80
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>
>>>>>> During a run on the SPF, perf events were captured:
>>>>>> Performance counter stats for '../kernbench -M':
>>>>>> 116730856 faults
>>>>>> 0 spf
>>>>>> 3 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 476 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> Most of the processes involved are single-threaded, so SPF is not
>>>>>> activated, but there is no impact on performance.
>>>>>>
>>>>>> Ebizzy:
>>>>>> -------
>>>>>> The test counts the number of records per second it can manage; the
>>>>>> higher the better. I ran it as 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>>> results I repeated the test 100 times and measured the average.
>>>>>>
>>>>>> BASE SPF delta
>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
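For reference, the delta column above can be recomputed from the BASE and SPF averages; a trivial sanity-check helper (illustrative only, `delta_pct` is a made-up name, not part of the series):

```c
/* Percentage gain of SPF over BASE, as used in the delta column above. */
double delta_pct(double base, double spf)
{
	return (spf - base) / base * 100.0;
}
```

delta_pct(742.57, 1490.24) yields ~100.69 and delta_pct(13105.4, 24174.23) yields ~84.46, matching the table.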
>>>>>>
>>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>> 1706379 faults
>>>>>> 1674599 spf
>>>>>> 30588 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 363 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>> 1874773 faults
>>>>>> 1461153 spf
>>>>>> 413293 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 200 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> In ebizzy's case most of the page faults were handled speculatively,
>>>>>> which explains the ebizzy performance boost.
>>>>>>
>>>>>> ------------------
>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>> - Addressed all review feedback from Punit Agrawal, Ganesh Mahendran
>>>>>> and Minchan Kim, hopefully.
>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>> __do_page_fault().
>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try lock fails
>>>>>> instead of aborting the speculative page fault handling. Drop the now
>>>>>> useless trace event pagefault:spf_pte_lock.
>>>>>> - No longer try to reuse the fetched VMA when the speculative page fault
>>>>>> handling needs to be retried. That added a lot of complexity, and
>>>>>> additional tests didn't show a significant performance improvement.
>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>
>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>
>>>>>>
>>>>>> Laurent Dufour (20):
>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>> mm: introduce INIT_VMA()
>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>> mm: protect mremap() against SPF handler
>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>> mm: introduce __vm_normal_page()
>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>> mm: adding speculative page fault failure trace events
>>>>>> perf: add a speculative page fault sw event
>>>>>> perf tools: add support for the SPF perf event
>>>>>> mm: add speculative page fault vmstats
>>>>>> powerpc/mm: add speculative page fault
>>>>>>
>>>>>> Mahendran Ganesh (2):
>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> arm64/mm: add speculative page fault
>>>>>>
>>>>>> Peter Zijlstra (4):
>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>> mm: VMA sequence count
>>>>>> mm: provide speculative fault infrastructure
>>>>>> x86/mm: add speculative pagefault handling
>>>>>>
>>>>>> arch/arm64/Kconfig | 1 +
>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>> arch/x86/Kconfig | 1 +
>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>> fs/exec.c | 2 +-
>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>> fs/userfaultfd.c | 17 +-
>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>> include/linux/migrate.h | 4 +-
>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>> include/linux/mm_types.h | 7 +
>>>>>> include/linux/pagemap.h | 4 +-
>>>>>> include/linux/rmap.h | 12 +-
>>>>>> include/linux/swap.h | 10 +-
>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>> kernel/fork.c | 5 +-
>>>>>> mm/Kconfig | 22 ++
>>>>>> mm/huge_memory.c | 6 +-
>>>>>> mm/hugetlb.c | 2 +
>>>>>> mm/init-mm.c | 3 +
>>>>>> mm/internal.h | 20 ++
>>>>>> mm/khugepaged.c | 5 +
>>>>>> mm/madvise.c | 6 +-
>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>> mm/mempolicy.c | 51 ++-
>>>>>> mm/migrate.c | 6 +-
>>>>>> mm/mlock.c | 13 +-
>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>> mm/mprotect.c | 4 +-
>>>>>> mm/mremap.c | 13 +
>>>>>> mm/nommu.c | 2 +-
>>>>>> mm/rmap.c | 5 +-
>>>>>> mm/swap.c | 6 +-
>>>>>> mm/swap_state.c | 8 +-
>>>>>> mm/vmstat.c | 5 +-
>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>> tools/perf/util/python.c | 1 +
>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>
>>>>>> --
>>>>>> 2.7.4
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
2018-07-11 17:05 ` Laurent Dufour
(?)
@ 2018-07-13 3:56 ` Song, HaiyanX
2018-07-17 9:36 ` Laurent Dufour
-1 siblings, 1 reply; 106+ messages in thread
From: Song, HaiyanX @ 2018-07-13 3:56 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 37281 bytes --]
Hi Laurent,
I attached the perf-profile.gz files for the page_fault2 and page_fault3 cases. These files were captured while running the related test cases.
Please check whether these data help you track down the larger regression. Thanks.
The file name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result was collected from page_fault2
tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP always configuration.
Best regards,
Haiyan Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Thursday, July 12, 2018 1:05 AM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
Hi Haiyan,
Did you get a chance to capture some performance cycles on your system?
I still can't get these numbers on my hardware.
Thanks,
Laurent.
On 04/07/2018 09:51, Laurent Dufour wrote:
> On 04/07/2018 05:23, Song, HaiyanX wrote:
>> Hi Laurent,
>>
>>
>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
>> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.
>
> Repeating the test only 3 times seems a bit too low to me.
>
> I'll focus on the higher change for the moment, but I don't have access to such
> a hardware.
>
> Is it possible to provide a diff between base and SPF of the performance cycles
> measured when running page_fault3 and page_fault2 when the 20% change is detected?
>
> Please stay focused on the test case's process to see exactly where the series
> is impacting.
>
> Thanks,
> Laurent.
>
>>
>> And I did not find any other high variation in the test case results.
>>
>> a). Enable THP
>> testcase base stddev change head stddev metric
>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>
>> b). Disable THP
>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>
>>
>> Best regards,
>> Haiyan Song
>> ________________________________________
>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Monday, July 02, 2018 4:59 PM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (linux kernel performance)
>>> tested on the Intel 4s Skylake platform. This time only the cases which had been run and had shown regressions on the
>>> V9 patch series were tested.
>>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>> commit id:
>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>> Benchmark: will-it-scale
>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>
>>> Metrics:
>>> will-it-scale.per_process_ops=processes/nr_cpu
>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>> THP: enable / disable
>>> nr_task:100%
>>>
>>> 1. Regressions:
>>>
>>> a). Enable THP
>>> testcase base change head metric
>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>> Notes: for the above test result values, higher is better.
>>
>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>> get reproducible results. The results have huge variation, even on the vanilla
>> kernel, so I can't draw any conclusion about changes.
>>
>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>> measure any change between the vanilla and the SPF patched kernels:
>>
>> test THP enabled 4.17.0-rc4-mm1 spf delta
>> page_fault3_threads 2697.7 2683.5 -0.53%
>> page_fault2_threads 170660.6 169574.1 -0.64%
>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>> context_switch1_processes 6478076.2 6529493.5 0.79%
>> brk1 243391.2 238527.5 -2.00%
>>
>> Tests were run 10 times, no high variation detected.
>>
>> Did you see high variation on your side? How many times were the tests run to
>> compute the average values?
>>
>> Thanks,
>> Laurent.
>>
>>
>>>
>>> 2. Improvements: no improvement found based on the selected test cases.
>>>
>>>
>>> Best regards
>>> Haiyan Song
>>> ________________________________________
>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, May 28, 2018 4:54 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>> Hi Laurent,
>>>>
>>>> Yes, these tests are done on V9 patch.
>>>
>>> Do you plan to give this V11 a run ?
>>>
>>>>
>>>>
>>>> Best regards,
>>>> Haiyan Song
>>>>
>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>
>>>>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series
>>>>>> tested on the Intel 4s Skylake platform.
>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>>>> series" while responding to the v11 series header...
>>>>> Were these tests done on v9 or v11?
>>>>>
>>>>> Cheers,
>>>>> Laurent.
>>>>>
>>>>>>
>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>> Commit id:
>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>> Benchmark suite: will-it-scale
>>>>>> Download link:
>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>> Metrics:
>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>> THP: enable / disable
>>>>>> nr_task: 100%
>>>>>>
>>>>>> 1. Regressions:
>>>>>> a) THP enabled:
>>>>>> testcase base change head metric
>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>
>>>>>> b) THP disabled:
>>>>>> testcase base change head metric
>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>
>>>>>> 2. Improvements:
>>>>>> a) THP enabled:
>>>>>> testcase base change head metric
>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>
>>>>>> b) THP disabled:
>>>>>> testcase base change head metric
>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>
>>>>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result
>>>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>>>
>>>>>>
>>>>>> Best regards
>>>>>> Haiyan Song
>>>>>>
>>>>>> ________________________________________
>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>
>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>> page fault without holding the mm semaphore [1].
>>>>>>
>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>> processes, since the page fault handler will not wait for memory layout
>>>>>> changes made by other threads to complete, assuming that the change is
>>>>>> done in another part of the process's memory space. This type of page
>>>>>> fault is named a speculative page fault. If the speculative page fault
>>>>>> fails because concurrency is detected or because the underlying PMD or
>>>>>> PTE tables are not yet allocated, the speculative handling is aborted
>>>>>> and a classic page fault is tried instead.
>>>>>>
>>>>>> The speculative page fault (SPF) has to look up the VMA matching the fault
>>>>>> address without holding the mmap_sem. This is done by introducing a rwlock
>>>>>> which protects access to the mm_rb tree. Previously this was done using
>>>>>> SRCU, but it introduced a lot of scheduling to process the VMA freeing
>>>>>> operations, which was hurting performance by 20% as reported by
>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>> the locking contention to these operations, which are expected to be
>>>>>> O(log n). In addition, to ensure that the VMA is not freed behind our
>>>>>> back, a reference count is added and 2 services (get_vma() and
>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>> fetched from the RB tree using get_vma(), it must later be released
>>>>>> using put_vma(). With this scheme I no longer see the overhead I
>>>>>> observed with the will-it-scale benchmark.
>>>>>>
>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>> have to be protected against parallel changes. This is done by using a
>>>>>> per-VMA sequence lock. This sequence lock allows the speculative page
>>>>>> fault handler to quickly check for parallel changes in progress and to
>>>>>> abort the speculative page fault in that case.
>>>>>>
>>>>>> Once the VMA has been found, the speculative page fault handler checks
>>>>>> the VMA's attributes to verify that the page fault can be handled this
>>>>>> way. The VMA is protected through a sequence lock which allows fast
>>>>>> detection of concurrent VMA changes. If such a change is detected, the
>>>>>> speculative page fault is aborted and a *classic* page fault is tried
>>>>>> instead. VMA sequence locking is added wherever the VMA attributes
>>>>>> checked during the page fault are modified.
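The per-VMA sequence check can be sketched with the classic seqcount pattern (illustrative userspace C using C11 atomics; the kernel series uses seqcount_t, and the vma_read_*/vma_write_* names here are made up):

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Writers bump the counter to an odd value before modifying the VMA
 * attributes and back to even afterwards.  Speculative readers abort if
 * they started on an odd value or if the value changed under them.
 */
struct vma_seq {
	atomic_uint seq;
};

unsigned vma_read_begin(struct vma_seq *s)
{
	return atomic_load_explicit(&s->seq, memory_order_acquire);
}

bool vma_read_retry(struct vma_seq *s, unsigned start)
{
	return (start & 1) ||
	       atomic_load_explicit(&s->seq, memory_order_acquire) != start;
}

void vma_write_begin(struct vma_seq *s)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release); /* odd: change in progress */
}

void vma_write_end(struct vma_seq *s)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release); /* even: stable again */
}
```

A speculative reader samples the counter with vma_read_begin(), does its checks, and calls vma_read_retry() at validation points; any hit means a writer raced with it and the classic path must be taken.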
>>>>>>
>>>>>> When the PTE is fetched, the VMA is checked to see whether it has been
>>>>>> changed. Thus, once the page table is locked, the VMA is known to be
>>>>>> valid; any other change touching this PTE would need to take the page
>>>>>> table lock, so no parallel change is possible at this time.
>>>>>>
>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>> checking the PMD to ensure that there is no ongoing collapsing
>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>> waits for the other CPUs to have caught the IPI interrupt, if the PMD is
>>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>>> different from the one recorded at the beginning of the SPF operation,
>>>>>> the classic page fault handler is called to handle the fault while
>>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>>> the lock is acquired using spin_trylock() to avoid a deadlock when
>>>>>> handling a page fault while a TLB invalidate is requested by another
>>>>>> CPU holding the PTE lock.
>>>>>>
>>>>>> In pseudo code, this could be seen as:
>>>>>> speculative_page_fault()
>>>>>> {
>>>>>> vma = get_vma()
>>>>>> check vma sequence count
>>>>>> check vma's support
>>>>>> disable interrupt
>>>>>> check pgd,p4d,...,pte
>>>>>> save pmd and pte in vmf
>>>>>> save vma sequence counter in vmf
>>>>>> enable interrupt
>>>>>> check vma sequence count
>>>>>> handle_pte_fault(vma)
>>>>>> ..
>>>>>> page = alloc_page()
>>>>>> pte_map_lock()
>>>>>> disable interrupt
>>>>>> abort if sequence counter has changed
>>>>>> abort if pmd or pte has changed
>>>>>> pte map and lock
>>>>>> enable interrupt
>>>>>> if abort
>>>>>> free page
>>>>>> abort
>>>>>> ...
>>>>>> }
>>>>>>
>>>>>> arch_fault_handler()
>>>>>> {
>>>>>> if (speculative_page_fault(&vma))
>>>>>> goto done
>>>>>> again:
>>>>>> lock(mmap_sem)
>>>>>> vma = find_vma();
>>>>>> handle_pte_fault(vma);
>>>>>> if retry
>>>>>> unlock(mmap_sem)
>>>>>> goto again;
>>>>>> done:
>>>>>> handle fault error
>>>>>> }
>>>>>>
>>>>>> Support for THP is not done because when checking the PMD, we could be
>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>>>> populated or if the underlying PTEs are in the process of being
>>>>>> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>
>>>>>> This series adds a new software performance event named
>>>>>> 'speculative-faults' or 'spf'. It counts the number of page fault events
>>>>>> successfully handled speculatively. When recording 'faults,spf' events,
>>>>>> the 'faults' one counts the total number of page fault events while
>>>>>> 'spf' only counts the part of the faults processed speculatively.
>>>>>>
>>>>>> There are some trace events introduced by this series. They allow
>>>>>> identifying why a page fault was not processed speculatively. This
>>>>>> doesn't take into account the faults generated by a single-threaded
>>>>>> process, which are directly processed while holding the mmap_sem. These
>>>>>> trace events are grouped in a system named 'pagefault'; they are:
>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>> our back
>>>>>>
>>>>>> To record all the related events, the easiest way is to run perf with
>>>>>> the following arguments:
>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>
>>>>>> There is also a dedicated vmstat counter showing the number of page
>>>>>> faults successfully handled speculatively. It can be seen this way:
>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>
>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>> on x86, PowerPC and arm64.
>>>>>>
>>>>>> ---------------------
>>>>>> Real Workload results
>>>>>>
>>>>>> As mentioned in a previous email, we did unofficial runs using a
>>>>>> "popular in-memory multithreaded database product" on a 176 cores SMT8
>>>>>> Power system which showed a 30% improvement in the number of
>>>>>> transactions processed per second. This run was done on the v6 series,
>>>>>> but the changes introduced in this new version should not impact the
>>>>>> performance boost seen.
>>>>>>
>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>> series:
>>>>>> vanilla spf
>>>>>> faults 89.418 101.364 +13%
>>>>>> spf n/a 97.989
>>>>>>
>>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>>> speculative way.
>>>>>>
>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>>> it a try on an Android device. He reported that the application launch
>>>>>> time was improved on average by 6%, and for large applications (~100
>>>>>> threads) by 20%.
>>>>>>
>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a
>>>>>> Qcom MSM845 (8 cores) with 6GB of memory (lower is better):
>>>>>>
>>>>>> Application 4.9 4.9+spf delta
>>>>>> com.tencent.mm 416 389 -7%
>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>> com.tencent.mtt 455 454 0%
>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>> com.immomo.momo 501 487 -3%
>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>> com.sina.weibo 907 906 0%
>>>>>> com.youku.phone 816 731 -11%
>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>> com.UCMobile 415 411 -1%
>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>
>>>>>> ------------------
>>>>>> Benchmarks results
>>>>>>
>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>> SPF is BASE + this series
>>>>>>
>>>>>> Kernbench:
>>>>>> ----------
>>>>>> Here are the results on a 16 CPUs X86 guest using kernbench on a 4.15
>>>>>> kernel (kernel is build 5 times):
>>>>>>
>>>>>> Average Half load -j 8
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>
>>>>>> Average Optimal load -j 16
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>
>>>>>>
>>>>>> During a run on the SPF, perf events were captured:
>>>>>> Performance counter stats for '../kernbench -M':
>>>>>> 526743764 faults
>>>>>> 210 spf
>>>>>> 3 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> Very few speculative page faults were recorded as most of the processes
>>>>>> involved are single-threaded (it seems that on this architecture some
>>>>>> threads are created during the kernel build processing).
>>>>>>
>>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>>
>>>>>> Average Half load -j 40
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>
>>>>>> Average Optimal load -j 80
>>>>>> Run (std deviation)
>>>>>> BASE SPF
>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>
>>>>>> During a run on the SPF, perf events were captured:
>>>>>> Performance counter stats for '../kernbench -M':
>>>>>> 116730856 faults
>>>>>> 0 spf
>>>>>> 3 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 476 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> Most of the processes involved are single-threaded, so SPF is not activated,
>>>>>> but there is no impact on the performance.
>>>>>>
>>>>>> Ebizzy:
>>>>>> -------
>>>>>> The test counts the number of records per second it can manage; the
>>>>>> higher, the better. I ran it as 'ebizzy -mTt <nrcpus>'. To get
>>>>>> consistent results I repeated the test 100 times and measured the
>>>>>> average result.
>>>>>>
>>>>>> BASE SPF delta
>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>
>>>>>> Here are the performance counters read during a run on a 16 CPUs x86 VM:
>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>> 1706379 faults
>>>>>> 1674599 spf
>>>>>> 30588 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 363 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> And the ones captured during a run on an 80 CPUs Power node:
>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>> 1874773 faults
>>>>>> 1461153 spf
>>>>>> 413293 pagefault:spf_vma_changed
>>>>>> 0 pagefault:spf_vma_noanon
>>>>>> 200 pagefault:spf_vma_notsup
>>>>>> 0 pagefault:spf_vma_access
>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>
>>>>>> In ebizzy's case most of the page faults were handled in a speculative way,
>>>>>> leading to the ebizzy performance boost.
>>>>>>
>>>>>> ------------------
>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>> - Addressed, hopefully, all the review feedback from Punit Agrawal,
>>>>>> Ganesh Mahendran and Minchan Kim.
>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>> __do_page_fault().
>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the pte try lock fails,
>>>>>> instead of aborting the speculative page fault handling. Dropped the now
>>>>>> useless trace event pagefault:spf_pte_lock.
>>>>>> - No longer try to reuse the fetched VMA during the speculative page fault
>>>>>> handling when a retry is needed. This added a lot of complexity and
>>>>>> additional tests didn't show a significant performance improvement.
>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>
>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>
>>>>>>
>>>>>> Laurent Dufour (20):
>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>> mm: introduce INIT_VMA()
>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>> mm: protect mremap() against SPF handler
>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>> mm: introduce __vm_normal_page()
>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>> mm: adding speculative page fault failure trace events
>>>>>> perf: add a speculative page fault sw event
>>>>>> perf tools: add support for the SPF perf event
>>>>>> mm: add speculative page fault vmstats
>>>>>> powerpc/mm: add speculative page fault
>>>>>>
>>>>>> Mahendran Ganesh (2):
>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>> arm64/mm: add speculative page fault
>>>>>>
>>>>>> Peter Zijlstra (4):
>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>> mm: VMA sequence count
>>>>>> mm: provide speculative fault infrastructure
>>>>>> x86/mm: add speculative pagefault handling
>>>>>>
>>>>>> arch/arm64/Kconfig | 1 +
>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>> arch/x86/Kconfig | 1 +
>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>> fs/exec.c | 2 +-
>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>> fs/userfaultfd.c | 17 +-
>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>> include/linux/migrate.h | 4 +-
>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>> include/linux/mm_types.h | 7 +
>>>>>> include/linux/pagemap.h | 4 +-
>>>>>> include/linux/rmap.h | 12 +-
>>>>>> include/linux/swap.h | 10 +-
>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>> kernel/fork.c | 5 +-
>>>>>> mm/Kconfig | 22 ++
>>>>>> mm/huge_memory.c | 6 +-
>>>>>> mm/hugetlb.c | 2 +
>>>>>> mm/init-mm.c | 3 +
>>>>>> mm/internal.h | 20 ++
>>>>>> mm/khugepaged.c | 5 +
>>>>>> mm/madvise.c | 6 +-
>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>> mm/mempolicy.c | 51 ++-
>>>>>> mm/migrate.c | 6 +-
>>>>>> mm/mlock.c | 13 +-
>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>> mm/mprotect.c | 4 +-
>>>>>> mm/mremap.c | 13 +
>>>>>> mm/nommu.c | 2 +-
>>>>>> mm/rmap.c | 5 +-
>>>>>> mm/swap.c | 6 +-
>>>>>> mm/swap_state.c | 8 +-
>>>>>> mm/vmstat.c | 5 +-
>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>> tools/perf/util/python.c | 1 +
>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>
>>>>>> --
>>>>>> 2.7.4
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>
[-- Attachment #2: perf-profile_page_fault2_base_THP-Alwasys.gz --]
[-- Type: application/gzip, Size: 10171 bytes --]
[-- Attachment #3: perf-profile_page_fault2_base_thp_never.gz --]
[-- Type: application/gzip, Size: 11474 bytes --]
[-- Attachment #4: perf-profile_page_fault2_head_THP-Always.gz --]
[-- Type: application/gzip, Size: 10374 bytes --]
[-- Attachment #5: perf-profile_page_fault2_head_thp_never.gz --]
[-- Type: application/gzip, Size: 11327 bytes --]
[-- Attachment #6: perf-profile_page_fault3_base_THP-Always.gz --]
[-- Type: application/gzip, Size: 9503 bytes --]
[-- Attachment #7: perf-profile_page_fault3_base_thp_never.gz --]
[-- Type: application/gzip, Size: 9843 bytes --]
[-- Attachment #8: perf-profile_page_fault3_head_THP-Always.gz --]
[-- Type: application/gzip, Size: 9596 bytes --]
[-- Attachment #9: perf-profile_page_fault3_head_thp_never.gz --]
[-- Type: application/gzip, Size: 10137 bytes --]
* Re: [PATCH v11 00/26] Speculative page faults
2018-07-13 3:56 ` Song, HaiyanX
@ 2018-07-17 9:36 ` Laurent Dufour
0 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-07-17 9:36 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 13/07/2018 05:56, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Thanks a lot for sharing these perf reports.
I looked at them closely, and I have to admit that I was not able to find a
major difference between the base and the head reports, except that
handle_pte_fault() is no longer inlined in the head one.
As expected, __handle_speculative_fault() is never traced since these tests are
dealing with file mappings, which are not handled in the speculative way.
When running these tests, did you see a major difference in the results between
base and head?
From the number of cycles counted, the biggest difference is page_fault3 when
run with THP enabled:
BASE HEAD Delta
page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
page_fault2_base_THP-Alwasys 1124844374523 1076312228927 -4.31%
page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
The very weird thing is the difference in the delta cycles reported between
thp never and thp always, because the speculative way is aborted when checking
the vma->ops field, which is the same in both cases, and THP is never
checked. So there is no code coverage difference, on the speculative path,
between these 2 cases. This leads me to think that there are other
interactions interfering with the measure.
Looking at the perf-profile_page_fault3_*_THP-Always reports, the major
difference at the top of the perf reports is the 92% testcase entry, which is
weirdly not reported on the head side:
92.02% 22.33% page_fault3_processes [.] testcase
92.02% testcase
Then the base reported 37.67% for __do_page_fault() where the head reported
48.41%, but the only difference in this function, between base and head, is
the call to handle_speculative_fault(). But this is a macro checking the fault
flags and mm->users, and then calling __handle_speculative_fault() if needed.
So this can't explain the difference, unless __handle_speculative_fault()
is inlined in __do_page_fault().
Is this the case in your build?
Haiyan, do you still have the output of the tests so we can check those numbers too?
Cheers,
Laurent
> I attached the perf-profile.gz files for the page_fault2 and page_fault3 cases. These files were captured while running the related test case.
> Please check these data to see if they help you find the larger change. Thanks.
>
> The file name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result obtained from page_fault2
> tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP_always configuration.
>
> Best regards,
> Haiyan Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Thursday, July 12, 2018 1:05 AM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> Hi Haiyan,
>
> Did you get a chance to capture some performance cycles on your system?
> I still can't get these numbers on my hardware.
>
> Thanks,
> Laurent.
>
> On 04/07/2018 09:51, Laurent Dufour wrote:
>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>>
>>> For the test result on Intel 4s skylake platform (192 CPUs, 768G Memory), the below test cases all were run 3 times.
>>> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev on the head commit, the other tests have lower stddev.
>>
>> Repeating the test only 3 times seems a bit too low to me.
>>
>> I'll focus on the higher change for the moment, but I don't have access to such
>> a hardware.
>>
>> Is it possible to provide a diff between base and SPF of the performance cycles
>> measured when running page_fault3 and page_fault2, where the 20% change is detected?
>>
>> Please stay focused on the test case's process to see exactly where the series
>> has an impact.
>>
>> Thanks,
>> Laurent.
>>
>>>
>>> And I did not find other high variation on test case result.
>>>
>>> a). Enable THP
>>> testcase base stddev change head stddev metric
>>> page_fault3/enable THP 10519 +- 3% -20.5% 8368 +-6% will-it-scale.per_thread_ops
>>> page_fault2/enable THP 8281 +- 2% -18.8% 6728 will-it-scale.per_thread_ops
>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>>
>>> Best regards,
>>> Haiyan Song
>>> ________________________________________
>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, July 02, 2018 4:59 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>> Regression tests for the v11 patch series have been run; some regressions were
>>>> found by LKP-tools (linux kernel performance) on an Intel 4s Skylake platform.
>>>> This time only the cases which had been run and shown regressions on the
>>>> V9 patch series were tested.
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>> commit id:
>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>> Benchmark: will-it-scale
>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task:100%
>>>>
>>>> 1. Regressions:
>>>>
>>>> a). Enable THP
>>>> testcase base change head metric
>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>> Notes: for the above values of test result, the higher is better.
>>>
>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>>> get reproducible results. The results have huge variation, even on the vanilla
>>> kernel, and I can't draw conclusions about any changes because of that.
>>>
>>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>>> measure any change between the vanilla and the SPF-patched kernels:
>>>
>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>> brk1 243391.2 238527.5 -2.00%
>>>
>>> Tests were run 10 times, no high variation detected.
>>>
>>> Did you see high variation on your side? How many times were the tests run to
>>> compute the average values?
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>
>>>>
>>>> 2. Improvement: not found improvement based on the selected test cases.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Yes, these tests are done on V9 patch.
>>>>
>>>> Do you plan to give this V11 a run ?
>>>>
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Haiyan Song
>>>>>
>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>
>>>>>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series
>>>>>>> tested on an Intel 4s Skylake platform.
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thanks for reporting this benchmark results, but you mentioned the "V9 patch
>>>>>> series" while responding to the v11 header series...
>>>>>> Were these tests done on v9 or v11 ?
>>>>>>
>>>>>> Cheers,
>>>>>> Laurent.
>>>>>>
>>>>>>>
>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>> Commit id:
>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>> Benchmark suite: will-it-scale
>>>>>>> Download link:
>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>> Metrics:
>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>> THP: enable / disable
>>>>>>> nr_task: 100%
>>>>>>>
>>>>>>> 1. Regressions:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> 2. Improvements:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> Notes: for above values in column "change", the higher value means that the related testcase result
>>>>>>> on head commit is better than that on base commit for this benchmark.
>>>>>>>
>>>>>>>
>>>>>>> Best regards
>>>>>>> Haiyan Song
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>
>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>> page faults without holding the mm semaphore [1].
>>>>>>>
>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>> processes since the page fault handler will no longer wait for other threads'
>>>>>>> memory layout changes to be done, assuming that those changes are done in
>>>>>>> another part of the process's memory space. This type of page fault is named
>>>>>>> a speculative page fault. If the speculative page fault fails because
>>>>>>> concurrency is detected or because the underlying PMD or PTE tables are not
>>>>>>> yet allocated, its processing is aborted and a classic page fault is tried instead.
>>>>>>>
>>>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA
>>>>>>> freeing operations, which was hurting the performance by 20% as reported by
>>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>>> the locking contention to these operations, which are expected to
>>>>>>> be in O(log n) order. In addition, to ensure that the VMA is not freed
>>>>>>> behind our back, a reference count is added and 2 services (get_vma() and
>>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>>> fetched from the RB tree using get_vma(), it must later be freed using
>>>>>>> put_vma(). I can no longer see the overhead I previously got with the
>>>>>>> will-it-scale benchmark.
>>>>>>>
>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>> have to be protected against parallel changes. This is done by using a
>>>>>>> per-VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>> handler to quickly check for parallel changes in progress and to abort the
>>>>>>> speculative page fault in that case.
>>>>>>>
>>>>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>>>>> VMA's attributes to verify whether the page fault can be handled this way
>>>>>>> or not. Thus, the VMA is protected through a sequence lock which
>>>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>>>> is tried instead. VMA sequence write-locking is added around modifications
>>>>>>> of the VMA attributes which are checked during the page fault.
>>>>>>>
>>>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>>>>> so once the page table is locked, the VMA is known to be valid. Any other
>>>>>>> change touching this PTE will need to take the page table lock, so no
>>>>>>> parallel change is possible at this time.
>>>>>>>
>>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>>> checking the PMD to ensure that there is no ongoing collapse
>>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>>> waits for the other CPUs to have caught the IPI interrupt, if the pmd is
>>>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>>>> collapse operation will have to wait on the PTE lock to move forward.
>>>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>>>> classic page fault handler will be called to handle the operation while
>>>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>>>> the lock is taken using spin_trylock() to avoid a deadlock when handling a
>>>>>>> page fault while a TLB invalidation is requested by another CPU holding the
>>>>>>> PTE lock.
>>>>>>>
>>>>>>> In pseudo code, this could be seen as:
>>>>>>> speculative_page_fault()
>>>>>>> {
>>>>>>> vma = get_vma()
>>>>>>> check vma sequence count
>>>>>>> check vma's support
>>>>>>> disable interrupt
>>>>>>> check pgd,p4d,...,pte
>>>>>>> save pmd and pte in vmf
>>>>>>> save vma sequence counter in vmf
>>>>>>> enable interrupt
>>>>>>> check vma sequence count
>>>>>>> handle_pte_fault(vma)
>>>>>>> ..
>>>>>>> page = alloc_page()
>>>>>>> pte_map_lock()
>>>>>>> disable interrupt
>>>>>>> abort if sequence counter has changed
>>>>>>> abort if pmd or pte has changed
>>>>>>> pte map and lock
>>>>>>> enable interrupt
>>>>>>> if abort
>>>>>>> free page
>>>>>>> abort
>>>>>>> ...
>>>>>>> }
>>>>>>>
>>>>>>> arch_fault_handler()
>>>>>>> {
>>>>>>> if (speculative_page_fault(&vma))
>>>>>>> goto done
>>>>>>> again:
>>>>>>> lock(mmap_sem)
>>>>>>> vma = find_vma();
>>>>>>> handle_pte_fault(vma);
>>>>>>> if retry
>>>>>>> unlock(mmap_sem)
>>>>>>> goto again;
>>>>>>> done:
>>>>>>> handle fault error
>>>>>>> }
>>>>>>>
>>>>>>> Support for THP is not done because when checking the PMD, we can be
>>>>>>> confused by an in-progress collapse operation done by khugepaged. The
>>>>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>>>>> populated or if the underlying PTEs are in the process of being collapsed.
>>>>>>> So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>
>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>> or 'spf'. It counts the number of page fault events handled
>>>>>>> speculatively. When recording 'faults,spf' events, 'faults' counts
>>>>>>> the total number of page fault events while 'spf' only counts
>>>>>>> the part of the faults processed speculatively.
>>>>>>>
>>>>>>> There are some trace events introduced by this series. They allow
>>>>>>> identifying why the page faults were not processed speculatively. This
>>>>>>> doesn't take into account the faults generated by a single-threaded process,
>>>>>>> which are directly processed while holding the mmap_sem. These trace events
>>>>>>> are grouped in a system named 'pagefault'; they are:
>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>> our back.
>>>>>>>
>>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>>> following arguments:
>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>
>>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>
>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>> on x86, PowerPC and arm64.
>>>>>>>
>>>>>>> ---------------------
>>>>>>> Real Workload results
>>>>>>>
>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>> in-memory multithreaded database product" on a 176 cores SMT8 Power system
>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>> second. This run was done on the v6 series, but the changes introduced in
>>>>>>> this new version should not impact the performance boost seen.
>>>>>>>
>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>> series:
>>>>>>> vanilla spf
>>>>>>> faults 89.418 101.364 +13%
>>>>>>> spf n/a 97.989
>>>>>>>
>>>>>>> With the SPF kernel, most of the page faults were processed in a speculative
>>>>>>> way.
>>>>>>>
>>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>>>> it a try on an Android device. He reported that the application launch time
>>>>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>>>>> 20%.
>>>>>>>
>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>> MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>>>>>
>>>>>>> Application 4.9 4.9+spf delta
>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>> com.sina.weibo 907 906 0%
>>>>>>> com.youku.phone 816 731 -11%
>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>> com.UCMobile 415 411 -1%
>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>
>>>>>>> ------------------
>>>>>>> Benchmarks results
>>>>>>>
>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>> SPF is BASE + this series
>>>>>>>
>>>>>>> Kernbench:
>>>>>>> ----------
>>>>>>> Here are the results on a 16 CPUs x86 guest using kernbench on a 4.15
>>>>>>> kernel (the kernel is built 5 times):
>>>>>>>
>>>>>>> Average Half load -j 8
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>
>>>>>>> Average Optimal load -j 16
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>
>>>>>>>
>>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 526743764 faults
>>>>>>> 210 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Very few speculative page faults were recorded as most of the processes
>>>>>>> involved are monothreaded (it seems that on this architecture some threads
>>>>>>> were created during the kernel build process).
>>>>>>>
>>>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>>>
>>>>>>> Average Half load -j 40
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>
>>>>>>> Average Optimal load -j 80
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>
>>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 116730856 faults
>>>>>>> 0 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Most of the processes involved are monothreaded, so SPF is not activated, but
>>>>>>> there is no impact on the performance.
>>>>>>>
>>>>>>> Ebizzy:
>>>>>>> -------
>>>>>>> The test counts the number of records per second it can manage; higher is
>>>>>>> better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>>>> results I repeated the test 100 times and measured the average. The
>>>>>>> reported number is records processed per second.
>>>>>>>
>>>>>>> BASE SPF delta
>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>
>>>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>> 1706379 faults
>>>>>>> 1674599 spf
>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>> 1874773 faults
>>>>>>> 1461153 spf
>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> In ebizzy's case most of the page faults were handled in a speculative way,
>>>>>>> leading to the ebizzy performance boost.
>>>>>>>
>>>>>>> ------------------
>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>> - Accounted for all review feedbacks from Punit Agrawal, Ganesh Mahendran
>>>>>>> and Minchan Kim, hopefully.
>>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>> __do_page_fault().
>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when pte try lock fails
>>>>>>> instead of aborting the speculative page fault handling, dropping the
>>>>>>> now useless trace event pagefault:spf_pte_lock.
>>>>>>> - No more try to reuse the fetched VMA during the speculative page fault
>>>>>>> handling when retrying is needed. This adds a lot of complexity and
>>>>>>> additional tests done didn't show a significant performance improvement.
>>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>>
>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>
>>>>>>>
>>>>>>> Laurent Dufour (20):
>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>> mm: introduce INIT_VMA()
>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>> mm: protect mremap() against SPF handler
>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>> mm: introduce __vm_normal_page()
>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>> perf: add a speculative page fault sw event
>>>>>>> perf tools: add support for the SPF perf event
>>>>>>> mm: add speculative page fault vmstats
>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>
>>>>>>> Mahendran Ganesh (2):
>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> arm64/mm: add speculative page fault
>>>>>>>
>>>>>>> Peter Zijlstra (4):
>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: VMA sequence count
>>>>>>> mm: provide speculative fault infrastructure
>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>
>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>> fs/exec.c | 2 +-
>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>> include/linux/swap.h | 10 +-
>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>> kernel/fork.c | 5 +-
>>>>>>> mm/Kconfig | 22 ++
>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>> mm/hugetlb.c | 2 +
>>>>>>> mm/init-mm.c | 3 +
>>>>>>> mm/internal.h | 20 ++
>>>>>>> mm/khugepaged.c | 5 +
>>>>>>> mm/madvise.c | 6 +-
>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>> mm/migrate.c | 6 +-
>>>>>>> mm/mlock.c | 13 +-
>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>> mm/mprotect.c | 4 +-
>>>>>>> mm/mremap.c | 13 +
>>>>>>> mm/nommu.c | 2 +-
>>>>>>> mm/rmap.c | 5 +-
>>>>>>> mm/swap.c | 6 +-
>>>>>>> mm/swap_state.c | 8 +-
>>>>>>> mm/vmstat.c | 5 +-
>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>
>>>>>>> --
>>>>>>> 2.7.4
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
@ 2018-07-17 9:36 ` Laurent Dufour
0 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-07-17 9:36 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
On 13/07/2018 05:56, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Thanks a lot for sharing these perf reports.
I looked at them closely, and I have to admit that I was not able to find a
major difference between the base and the head report, except that
handle_pte_fault() is no longer inlined in the head one.
As expected, __handle_speculative_fault() is never traced since these tests are
dealing with file mapping, not handled in the speculative way.
When running these tests, did you see a major difference in the test results
between base and head?
From the number of cycles counted, the biggest difference is page_fault3 when
run with the THP enabled:
BASE HEAD Delta
page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
page_fault2_base_THP-Always 1124844374523 1076312228927 -4.31%
page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
The very weird thing is the difference in delta cycles reported between
THP never and THP always, because the speculative path is aborted when checking
the vma->ops field, which is the same in both cases, and THP is never
checked. So there is no code coverage difference, on the speculative path,
between these 2 cases. This leads me to think that there are other interactions
interfering with the measurement.
Looking at the perf-profile_page_fault3_*_THP-Always reports, the major
difference at the top of the perf report is the 92% testcase entry, which is
weirdly not reported on the head side:
92.02% 22.33% page_fault3_processes [.] testcase
92.02% testcase
Then the base reports 37.67% for __do_page_fault() where the head reports
48.41%, but the only difference in this function, between base and head, is the
call to handle_speculative_fault(). But this is a macro checking the fault
flags and mm->users, and then calling __handle_speculative_fault() if needed.
So this can't explain the difference, unless __handle_speculative_fault()
is inlined in __do_page_fault().
Is this the case on your build?
Haiyan, do you still have the output of the tests to check those numbers too?
Cheers,
Laurent
> I attached the perf-profile.gz files for cases page_fault2 and page_fault3. These files were captured while running the related test cases.
> Please help to check these data to see if they can help you find the bigger change. Thanks.
>
> File name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result was obtained from page_fault2
> tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP-always configuration.
>
> Best regards,
> Haiyan Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Thursday, July 12, 2018 1:05 AM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> Hi Haiyan,
>
> Do you get a chance to capture some performance cycles on your system ?
> I still can't get these numbers on my hardware.
>
> Thanks,
> Laurent.
>
> On 04/07/2018 09:51, Laurent Dufour wrote:
>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>>
>>> For the test result on Intel 4s skylake platform (192 CPUs, 768G Memory), the below test cases all were run 3 times.
>>> I check the test results, only page_fault3_thread/enable THP have 6% stddev for head commit, other tests have lower stddev.
>>
>> Repeating the test only 3 times seems a bit too low to me.
>>
>> I'll focus on the higher change for the moment, but I don't have access to such
>> a hardware.
>>
>> Is it possible to provide a diff between base and SPF of the performance cycles
>> measured when running page_fault3 and page_fault2 where the 20% change is detected?
>>
>> Please stay focused on the test case process to see exactly where the series is
>> having an impact.
>>
>> Thanks,
>> Laurent.
>>
>>>
>>> And I did not find other high variation on test case result.
>>>
>>> a). Enable THP
>>> testcase base stddev change head stddev metric
>>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>>
>>> Best regards,
>>> Haiyan Song
>>> ________________________________________
>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, July 02, 2018 4:59 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux kernel performance)
>>>> tested on an Intel 4s Skylake platform. This time only the cases which had been run and found regressions on
>>>> the v9 patch series were tested.
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>> commit id:
>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>> Benchmark: will-it-scale
>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task:100%
>>>>
>>>> 1. Regressions:
>>>>
>>>> a). Enable THP
>>>> testcase base change head metric
>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>> Notes: for the above values of test result, the higher is better.
>>>
>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>>> get reproducible results. The results have huge variation, even on the vanilla
>>> kernel, so I can't draw conclusions about any changes.
>>>
>>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>>> measure any change between the vanilla and the SPF patched kernels:
>>>
>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>> brk1 243391.2 238527.5 -2.00%
>>>
>>> Tests were run 10 times, no high variation detected.
>>>
>>> Did you see high variation on your side? How many times were the tests run to
>>> compute the average values?
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>
>>>>
>>>> 2. Improvement: not found improvement based on the selected test cases.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Yes, these tests are done on V9 patch.
>>>>
>>>> Do you plan to give this V11 a run ?
>>>>
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Haiyan Song
>>>>>
>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>
>>>>>>> Some regressions and improvements were found by LKP-tools (Linux kernel performance) on the v9 patch series
>>>>>>> tested on an Intel 4s Skylake platform.
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>>>>> series" while responding to the v11 header series...
>>>>>> Were these tests done on v9 or v11 ?
>>>>>>
>>>>>> Cheers,
>>>>>> Laurent.
>>>>>>
>>>>>>>
>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>> Commit id:
>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>> Benchmark suite: will-it-scale
>>>>>>> Download link:
>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>> Metrics:
>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>> THP: enable / disable
>>>>>>> nr_task: 100%
>>>>>>>
>>>>>>> 1. Regressions:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> 2. Improvements:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result
>>>>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>>>>
>>>>>>>
>>>>>>> Best regards
>>>>>>> Haiyan Song
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>
>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>> page fault without holding the mm semaphore [1].
>>>>>>>
>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>> processes since the page fault handler will not wait for other threads'
>>>>>>> memory layout changes to be done, assuming that such a change is done in
>>>>>>> another part of the process's memory space. This type of page fault is
>>>>>>> named speculative page fault. If the speculative page fault fails because
>>>>>>> concurrency is detected or because the underlying PMD or PTE tables are
>>>>>>> not yet allocated, its processing is aborted and a classic page fault is
>>>>>>> then tried.
>>>>>>>
>>>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA's
>>>>>>> freeing operations, which was hurting the performance by 20% as reported by
>>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>>> the locking contention to these operations, which are expected to
>>>>>>> be of O(log n) order. In addition, to ensure that the VMA is not freed
>>>>>>> behind our back, a reference count is added and 2 services (get_vma() and
>>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>>> fetched from the RB tree using get_vma(), it must be later freed using
>>>>>>> put_vma(). I no longer see the overhead I got with the will-it-scale
>>>>>>> benchmark.
>>>>>>>
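[Editor's note: the get_vma()/put_vma() pairing described above can be sketched in plain userspace C. This is only an illustrative model of the reference-counting discipline, not the kernel code; the struct layout and the `freed` flag are made up for the sketch.]

```c
#include <stdatomic.h>

/* Illustrative userspace model of the get_vma()/put_vma() pairing:
 * the fault handler takes a reference while the RB-tree rwlock is
 * held, so the VMA cannot be freed until the last put_vma(). */
struct vma_model {
    atomic_int refcount;
    int freed; /* stand-in for the actual freeing of the VMA */
};

static void get_vma(struct vma_model *vma)
{
    /* Called under the rwlock protecting the mm_rb tree, so the VMA
     * cannot vanish between the lookup and this increment. */
    atomic_fetch_add(&vma->refcount, 1);
}

static void put_vma(struct vma_model *vma)
{
    /* Dropping the last reference allows the VMA to be freed. */
    if (atomic_fetch_sub(&vma->refcount, 1) == 1)
        vma->freed = 1;
}
```

With this discipline, an unmap racing with a speculative fault only marks the VMA for freeing once both the mapper's and the fault handler's references are gone.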
>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>> handler to fast check for parallel changes in progress and to abort the
>>>>>>> speculative page fault in that case.
>>>>>>>
>>>>>>> Once the VMA has been found, the speculative page fault handler checks
>>>>>>> the VMA's attributes to verify whether the page fault can be handled
>>>>>>> this way or not. Thus, the VMA is protected through a sequence lock which
>>>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>>>> is tried. VMA sequence lockings are added where VMA attributes which are
>>>>>>> checked during the page fault are modified.
>>>>>>>
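[Editor's note: the per-VMA sequence count check follows the classic seqcount pattern. A minimal userspace sketch of that pattern, with made-up names (not the kernel's seqcount API), looks like this: writers bump the counter before and after a modification, so a reader that sees the same even value before and after its accesses knows no change raced with it.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy sequence counter: odd value means a write is in progress. */
static atomic_uint vma_seq;

static unsigned read_begin(void)
{
    unsigned seq;
    /* Wait until no write is in progress (counter is even). */
    while ((seq = atomic_load(&vma_seq)) & 1)
        ;
    return seq;
}

static bool read_validate(unsigned seq)
{
    /* The read section is valid only if no writer ran meanwhile. */
    return atomic_load(&vma_seq) == seq;
}

static void write_begin(void) { atomic_fetch_add(&vma_seq, 1); }
static void write_end(void)   { atomic_fetch_add(&vma_seq, 1); }
```

In SPF terms, `read_begin()`/`read_validate()` bracket the speculative handler's accesses to the VMA attributes, and `write_begin()`/`write_end()` bracket the paths that modify those attributes under mmap_sem; a failed validation aborts the speculative fault.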
>>>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>>>>> so once the page table is locked, the VMA is known to be valid. Any other
>>>>>>> change touching this PTE will need to lock the page table, so no parallel
>>>>>>> change is possible at this time.
>>>>>>>
>>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>>> checking the PMD to ensure that there is no ongoing collapsing
>>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>>> waits for the other CPUs to have caught the IPI interrupt, if the pmd is
>>>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>>>> classic page fault handler will be called to handle the operation while
>>>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>>>> the lock is taken using spin_trylock() to avoid deadlock when handling a
>>>>>>> page fault while a TLB invalidation is requested by another CPU holding the
>>>>>>> PTE lock.
>>>>>>>
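[Editor's note: the trylock-or-abort pattern above can be modeled in a few lines of userspace C, with the PTE spinlock replaced by an atomic flag. All names here are made up for the sketch; the point is only that the speculative path never spins on a contended lock with interrupts off, it gives up and falls back to the classic path instead.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-in for the PTE spinlock. */
static atomic_flag pte_lock = ATOMIC_FLAG_INIT;

static bool spf_pte_trylock(void)
{
    /* Returns false instead of spinning when the lock is already
     * held (e.g. by a CPU waiting on a TLB invalidation IPI). */
    return !atomic_flag_test_and_set(&pte_lock);
}

static void spf_pte_unlock(void)
{
    atomic_flag_clear(&pte_lock);
}
```

A failed `spf_pte_trylock()` corresponds to aborting the speculative fault and retrying the fault classically under mmap_sem, which is what prevents the deadlock described above.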
>>>>>>> In pseudo code, this could be seen as:
>>>>>>> speculative_page_fault()
>>>>>>> {
>>>>>>> vma = get_vma()
>>>>>>> check vma sequence count
>>>>>>> check vma's support
>>>>>>> disable interrupt
>>>>>>> check pgd,p4d,...,pte
>>>>>>> save pmd and pte in vmf
>>>>>>> save vma sequence counter in vmf
>>>>>>> enable interrupt
>>>>>>> check vma sequence count
>>>>>>> handle_pte_fault(vma)
>>>>>>> ..
>>>>>>> page = alloc_page()
>>>>>>> pte_map_lock()
>>>>>>> disable interrupt
>>>>>>> abort if sequence counter has changed
>>>>>>> abort if pmd or pte has changed
>>>>>>> pte map and lock
>>>>>>> enable interrupt
>>>>>>> if abort
>>>>>>> free page
>>>>>>> abort
>>>>>>> ...
>>>>>>> }
>>>>>>>
>>>>>>> arch_fault_handler()
>>>>>>> {
>>>>>>> if (speculative_page_fault(&vma))
>>>>>>> goto done
>>>>>>> again:
>>>>>>> lock(mmap_sem)
>>>>>>> vma = find_vma();
>>>>>>> handle_pte_fault(vma);
>>>>>>> if retry
>>>>>>> unlock(mmap_sem)
>>>>>>> goto again;
>>>>>>> done:
>>>>>>> handle fault error
>>>>>>> }
>>>>>>>
>>>>>>> Support for THP is not done because, when checking the PMD, we can be
>>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>>> issue is that pmd_none() could be true either if the PMD is not already
>>>>>>> populated or if the underlying PTEs are in the process of being collapsed.
>>>>>>> So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>
>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>> or 'spf'. It counts the number of successful page fault events handled
>>>>>>> speculatively. When recording 'faults,spf' events, the 'faults' one counts
>>>>>>> the total number of page fault events while 'spf' only counts
>>>>>>> the part of the faults processed speculatively.
>>>>>>>
>>>>>>> This series also introduces some trace events. They allow identifying why
>>>>>>> page faults were not processed speculatively. This doesn't take into
>>>>>>> account the faults generated by a single-threaded process, which are
>>>>>>> directly processed while holding the mmap_sem. These trace events are
>>>>>>> grouped in a system named 'pagefault'; they are:
>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>> our back
>>>>>>>
>>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>>> following arguments:
>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>
>>>>>>> There is also a dedicated vmstat counter showing the number of page faults
>>>>>>> successfully handled speculatively. It can be seen this way:
>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>
>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>> on x86, PowerPC and arm64.
>>>>>>>
>>>>>>> ---------------------
>>>>>>> Real Workload results
>>>>>>>
>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>> in-memory multithreaded database product" on a 176-core SMT8 Power system,
>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>> second. That run was done on the v6 series, but the changes introduced in
>>>>>>> this new version should not impact the performance boost seen.
>>>>>>>
>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>> series:
>>>>>>> vanilla spf
>>>>>>> faults 89.418 101.364 +13%
>>>>>>> spf n/a 97.989
>>>>>>>
>>>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>>>> speculative way.
>>>>>>>
>>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave it
>>>>>>> a try on an Android device. He reported that the application launch time
>>>>>>> was improved on average by 6%, and for large applications (~100 threads)
>>>>>>> by 20%.
>>>>>>>
>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>> MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>>>>>
>>>>>>> Application 4.9 4.9+spf delta
>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>> com.sina.weibo 907 906 0%
>>>>>>> com.youku.phone 816 731 -11%
>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>> com.UCMobile 415 411 -1%
>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>
>>>>>>> ------------------
>>>>>>> Benchmarks results
>>>>>>>
>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>> SPF is BASE + this series
>>>>>>>
>>>>>>> Kernbench:
>>>>>>> ----------
>>>>>>> Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
>>>>>>> kernel (the kernel is built 5 times):
>>>>>>>
>>>>>>> Average Half load -j 8
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>
>>>>>>> Average Optimal load -j 16
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>
>>>>>>>
>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 526743764 faults
>>>>>>> 210 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Very few speculative page faults were recorded, as most of the processes
>>>>>>> involved are single-threaded (it seems that on this architecture some
>>>>>>> threads were created during the kernel build processing).
>>>>>>>
>>>>>>> Here are the kernbench results on an 80-CPU Power8 system:
>>>>>>>
>>>>>>> Average Half load -j 40
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>
>>>>>>> Average Optimal load -j 80
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>
>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 116730856 faults
>>>>>>> 0 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Most of the processes involved are single-threaded, so SPF is not
>>>>>>> activated, but there is no impact on the performance.
>>>>>>>
>>>>>>> Ebizzy:
>>>>>>> -------
>>>>>>> The test counts the number of records per second it can manage; higher is
>>>>>>> better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>>>> results I repeated the test 100 times and report the average. The numbers
>>>>>>> below are records processed per second.
>>>>>>>
>>>>>>> BASE SPF delta
>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>
>>>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>> 1706379 faults
>>>>>>> 1674599 spf
>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>> 1874773 faults
>>>>>>> 1461153 spf
>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> In ebizzy's case most of the page faults were handled in a speculative
>>>>>>> way, leading to the ebizzy performance boost.
>>>>>>>
>>>>>>> ------------------
>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>> - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>>>>>> and Minchan Kim, hopefully.
>>>>>>> - Remove an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>> __do_page_fault().
>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the pte try lock fails,
>>>>>>> instead of aborting the speculative page fault handling. Drop the now
>>>>>>> useless trace event pagefault:spf_pte_lock.
>>>>>>> - No longer try to reuse the fetched VMA during the speculative page fault
>>>>>>> handling when retrying is needed. This added a lot of complexity and
>>>>>>> additional tests didn't show a significant performance improvement.
>>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>>>>
>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>
>>>>>>>
>>>>>>> Laurent Dufour (20):
>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>> mm: introduce INIT_VMA()
>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>> mm: protect mremap() against SPF handler
>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>> mm: introduce __vm_normal_page()
>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>> perf: add a speculative page fault sw event
>>>>>>> perf tools: add support for the SPF perf event
>>>>>>> mm: add speculative page fault vmstats
>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>
>>>>>>> Mahendran Ganesh (2):
>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> arm64/mm: add speculative page fault
>>>>>>>
>>>>>>> Peter Zijlstra (4):
>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: VMA sequence count
>>>>>>> mm: provide speculative fault infrastructure
>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>
>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>> fs/exec.c | 2 +-
>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>> include/linux/swap.h | 10 +-
>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>> kernel/fork.c | 5 +-
>>>>>>> mm/Kconfig | 22 ++
>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>> mm/hugetlb.c | 2 +
>>>>>>> mm/init-mm.c | 3 +
>>>>>>> mm/internal.h | 20 ++
>>>>>>> mm/khugepaged.c | 5 +
>>>>>>> mm/madvise.c | 6 +-
>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>> mm/migrate.c | 6 +-
>>>>>>> mm/mlock.c | 13 +-
>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>> mm/mprotect.c | 4 +-
>>>>>>> mm/mremap.c | 13 +
>>>>>>> mm/nommu.c | 2 +-
>>>>>>> mm/rmap.c | 5 +-
>>>>>>> mm/swap.c | 6 +-
>>>>>>> mm/swap_state.c | 8 +-
>>>>>>> mm/vmstat.c | 5 +-
>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>
>>>>>>> --
>>>>>>> 2.7.4
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 106+ messages in thread
* RE: [PATCH v11 00/26] Speculative page faults
2018-07-17 9:36 ` Laurent Dufour
(?)
@ 2018-08-03 6:36 ` Song, HaiyanX
2018-08-03 6:45 ` Song, HaiyanX
2018-08-22 14:23 ` Laurent Dufour
-1 siblings, 2 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-08-03 6:36 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 42188 bytes --]
Hi Laurent,
Thanks for your analysis of the last perf results.
You mentioned that "the major differences at the head of the perf report is the 92% testcase which is weirdly not reported
on the head side"; this turned out to be a bug in 0-day, and it caused the item not to be counted in perf.
I triggered the page_fault2 and page_fault3 tests again with only the thread mode of will-it-scale on 0-day (on the same test box, every case tested 3 times),
and checked that the perf reports no longer have the above-mentioned problem.
I compared them and found that some items differ, such as in the cases below:
page_fault2-thp-always: handle_mm_fault, base: 45.22% head: 29.41%
page_fault3-thp-always: handle_mm_fault, base: 22.95% head: 14.15%
So I attached the perf results to this mail again; could you have another look to check the difference between the base and head commits?
Thanks,
Haiyan, Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Tuesday, July 17, 2018 5:36 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 13/07/2018 05:56, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Thanks a lot for sharing this perf reports.
I looked at them closely, and I have to admit that I was not able to find a
major difference between the base and the head report, except that
handle_pte_fault() is no longer inlined in the head one.
As expected, __handle_speculative_fault() is never traced since these tests
deal with file mappings, which are not handled in the speculative way.
When running these tests, did you see a major difference in the test results
between base and head?
From the number of cycles counted, the biggest difference is page_fault3 when
run with the THP enabled:
BASE HEAD Delta
page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
page_fault2_base_THP-Alwasys 1124844374523 1076312228927 -4.31%
page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
The very weird thing is the difference in the delta cycles reported between
thp never and thp always, because the speculative way is aborted when checking
the vma->ops field, which is the same in both cases, and the THP is never
checked. So there is no code coverage difference, on the speculative path,
between these 2 cases. This leads me to think that there are other
interactions interfering with the measurement.
Looking at the perf-profile_page_fault3_*_THP-Always reports, the major
difference at the head of the perf report is the 92% testcase entry, which is
weirdly not reported on the head side:
92.02% 22.33% page_fault3_processes [.] testcase
92.02% testcase
Then the base reports 37.67% for __do_page_fault() where the head reports
48.41%, but the only difference in this function, between base and head, is
the call to handle_speculative_fault(). But this is a macro checking the fault
flags and mm->users and then calling __handle_speculative_fault() if needed.
So this can't explain the difference, unless __handle_speculative_fault()
is inlined in __do_page_fault().
Is this the case on your build?
Haiyan, do you still have the output of the tests, to check those numbers too?
Cheers,
Laurent
> I attached the perf-profile.gz files for cases page_fault2 and page_fault3. These files were captured while testing the related test cases.
> Please check these data to see if they can help you find the big change. Thanks.
>
> The file name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result was obtained from page_fault2
> tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP_always configuration.
>
> Best regards,
> Haiyan Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Thursday, July 12, 2018 1:05 AM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> Hi Haiyan,
>
> Do you get a chance to capture some performance cycles on your system ?
> I still can't get these numbers on my hardware.
>
> Thanks,
> Laurent.
>
> On 04/07/2018 09:51, Laurent Dufour wrote:
>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>>
>>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the test cases below were all run 3 times.
>>> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.
>>
>> Repeating the test only 3 times seems a bit too low to me.
>>
>> I'll focus on the biggest change for the moment, but I don't have access to
>> such hardware.
>>
>> Is it possible to provide a diff, between base and SPF, of the performance
>> cycles measured when running page_fault3 and page_fault2 when the 20% change is detected?
>>
>> Please stay focused on the test case process to see exactly where the series
>> is impacting.
>>
>> Thanks,
>> Laurent.
>>
>>>
>>> And I did not find any other high variation in the test case results.
>>>
>>> a). Enable THP
>>> testcase base stddev change head stddev metric
>>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>>
>>> Best regards,
>>> Haiyan Song
>>> ________________________________________
>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, July 02, 2018 4:59 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>> Regression tests for the v11 patch series have been run; some regressions were
>>>> found by LKP-tools (Linux Kernel Performance) on an Intel 4s Skylake platform.
>>>> This time we only tested the cases which had been run and had shown regressions
>>>> on the v9 patch series.
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>> commit id:
>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>> Benchmark: will-it-scale
>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task:100%
>>>>
>>>> 1. Regressions:
>>>>
>>>> a). Enable THP
>>>> testcase base change head metric
>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>> Notes: for the above test result values, higher is better.
>>>
>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I
>>> can't get reproducible results. The results have huge variation, even on
>>> the vanilla kernel, and I can't draw any conclusion because of that.
>>>
>>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I
>>> didn't measure any change between the vanilla and the SPF-patched kernels:
>>>
>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>> brk1 243391.2 238527.5 -2.00%
>>>
>>> Tests were run 10 times, no high variation detected.
>>>
>>> Did you see high variation on your side? How many times were the tests run
>>> to compute the average values?
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>
>>>>
>>>> 2. Improvements: no improvement found based on the selected test cases.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Yes, these tests are done on V9 patch.
>>>>
>>>> Do you plan to give this V11 a run ?
>>>>
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Haiyan Song
>>>>>
>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>
>>>>>>> Some regressions and improvements were found by LKP-tools (Linux Kernel
>>>>>>> Performance) on the v9 patch series, tested on an Intel 4s Skylake platform.
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9
>>>>>> patch series" while responding to the v11 series header...
>>>>>> Were these tests done on v9 or v11?
>>>>>>
>>>>>> Cheers,
>>>>>> Laurent.
>>>>>>
>>>>>>>
>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>> Commit id:
>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>> Benchmark suite: will-it-scale
>>>>>>> Download link:
>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>> Metrics:
>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>> THP: enable / disable
>>>>>>> nr_task: 100%
>>>>>>>
>>>>>>> 1. Regressions:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> 2. Improvements:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> Notes: for the above values in the "change" column, a higher value means
>>>>>>> that the related testcase result on the head commit is better than that
>>>>>>> on the base commit for this benchmark.
>>>>>>>
>>>>>>>
>>>>>>> Best regards
>>>>>>> Haiyan Song
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>
>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>> page faults without holding the mm semaphore [1].
>>>>>>>
>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>> processes since the page fault handler will no longer wait for other
>>>>>>> threads' memory layout changes to be done, assuming that such a change is
>>>>>>> done in another part of the process's memory space. This type of page fault
>>>>>>> is named speculative page fault. If the speculative page fault fails
>>>>>>> because a concurrent change is detected or because the underlying PMD or
>>>>>>> PTE tables are not yet allocated, the speculative handling fails and a
>>>>>>> classic page fault is then tried.
>>>>>>>
>>>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA's
>>>>>>> freeing operations, which was hurting the performance by 20% as reported by
>>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>>> the locking contention to these operations, which are expected to be
>>>>>>> O(log n). In addition, to ensure that the VMA is not freed behind our
>>>>>>> back, a reference count is added and 2 services (get_vma() and put_vma())
>>>>>>> are introduced to handle the reference count. Once a VMA is fetched from
>>>>>>> the RB tree using get_vma(), it must later be freed using put_vma(). With
>>>>>>> this, I can no longer see the overhead I got while running the
>>>>>>> will-it-scale benchmark.
>>>>>>>
>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>> handler to quickly check for parallel changes in progress and to abort the
>>>>>>> speculative page fault in that case.
>>>>>>>
>>>>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>>>>> VMA's attributes to verify whether the page fault can be handled
>>>>>>> speculatively. The VMA is protected through the sequence lock, which
>>>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>>>> is tried instead. Write sequence locking is added around modifications of
>>>>>>> the VMA attributes that are checked during the page fault.
>>>>>>>
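[Editor's note: the per-VMA sequence lock protocol above can be sketched with C11 atomics. This is a minimal userspace model of the even/odd seqcount idea, not the kernel's actual vm_sequence implementation; all names are illustrative.]

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical stand-in for the per-VMA sequence count. */
struct vma_seq {
	atomic_uint seq;	/* odd value means an update is in progress */
};

/* Writer side: wrapped around every change to the checked VMA attributes
 * (mprotect, mremap, madvise, ...). */
static void vma_write_begin(struct vma_seq *s) { atomic_fetch_add(&s->seq, 1); }
static void vma_write_end(struct vma_seq *s)   { atomic_fetch_add(&s->seq, 1); }

/* Reader side, used by the speculative handler: sample the count before
 * reading the VMA attributes... */
static unsigned vma_read_begin(struct vma_seq *s)
{
	return atomic_load(&s->seq);
}

/* ...and recheck afterwards. Nonzero means the VMA changed (or was in the
 * middle of changing) and the speculative fault must be aborted. */
static int vma_read_retry(struct vma_seq *s, unsigned start)
{
	return (start & 1) || atomic_load(&s->seq) != start;
}
```

The reader never blocks: it either observes a stable, even count on both sides of its attribute reads, or it aborts and falls back to the classic path.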
>>>>>>> When the PTE is fetched, the VMA is checked again to see whether it has
>>>>>>> changed, so once the page table is locked the VMA is known to be valid.
>>>>>>> Any other change touching this PTE would need to take the page table
>>>>>>> lock, so no parallel change is possible at this point.
>>>>>>>
>>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>>> checking the PMD to ensure that there is no ongoing collapsing operation.
>>>>>>> Since khugepaged first sets the PMD to pmd_none and then waits for the
>>>>>>> other CPUs to have caught the IPI, if the PMD is valid at the time the
>>>>>>> PTE is locked, we have the guarantee that the collapsing operation will
>>>>>>> have to wait on the PTE lock to move forward. This allows the SPF
>>>>>>> handler to map the PTE safely. If the PMD value is different from the
>>>>>>> one recorded at the beginning of the SPF operation, the classic page
>>>>>>> fault handler is called to handle the fault while holding the mmap_sem.
>>>>>>> As the PTE lock is taken with interrupts disabled, spin_trylock() is
>>>>>>> used to avoid deadlocking when handling a page fault while a TLB
>>>>>>> invalidation is requested by another CPU holding the PTE lock.
>>>>>>>
>>>>>>> In pseudo code, this could be seen as:
>>>>>>> speculative_page_fault()
>>>>>>> {
>>>>>>> vma = get_vma()
>>>>>>> check vma sequence count
>>>>>>> check vma's support
>>>>>>> disable interrupt
>>>>>>> check pgd,p4d,...,pte
>>>>>>> save pmd and pte in vmf
>>>>>>> save vma sequence counter in vmf
>>>>>>> enable interrupt
>>>>>>> check vma sequence count
>>>>>>> handle_pte_fault(vma)
>>>>>>> ..
>>>>>>> page = alloc_page()
>>>>>>> pte_map_lock()
>>>>>>> disable interrupt
>>>>>>> abort if sequence counter has changed
>>>>>>> abort if pmd or pte has changed
>>>>>>> pte map and lock
>>>>>>> enable interrupt
>>>>>>> if abort
>>>>>>> free page
>>>>>>> abort
>>>>>>> ...
>>>>>>> }
>>>>>>>
>>>>>>> arch_fault_handler()
>>>>>>> {
>>>>>>> if (speculative_page_fault(&vma))
>>>>>>> goto done
>>>>>>> again:
>>>>>>> lock(mmap_sem)
>>>>>>> vma = find_vma();
>>>>>>> handle_pte_fault(vma);
>>>>>>> if retry
>>>>>>> unlock(mmap_sem)
>>>>>>> goto again;
>>>>>>> done:
>>>>>>> handle fault error
>>>>>>> }
>>>>>>>
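[Editor's note: the abort-and-fall-back control flow in the pseudo code above can be modeled in userspace C. This is a simplified illustration under stated assumptions (a pthread mutex standing in for the PTE spinlock, an atomic counter for the VMA sequence count), not the series' implementation.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;	/* "PTE lock" */
static atomic_uint vma_seq;				/* "vm_sequence" */

/* Returns 1 if the fault was handled speculatively, 0 if the caller must
 * fall back to the classic, mmap_sem-protected path. */
static int speculative_fault(unsigned seq_snapshot)
{
	if (atomic_load(&vma_seq) != seq_snapshot)
		return 0;	/* VMA changed behind our back: abort */
	if (pthread_mutex_trylock(&ptl))  /* real code: spin_trylock(), irqs off */
		return 0;	/* contended: abort rather than risk deadlock */
	/* ... recheck pmd/pte, map the PTE, handle the fault ... */
	pthread_mutex_unlock(&ptl);
	return 1;
}

static int handle_fault(void)
{
	if (speculative_fault(atomic_load(&vma_seq)))
		return 1;	/* fast path, no mmap_sem taken */
	/* classic path: lock(mmap_sem); vma = find_vma(); handle_pte_fault(); */
	return 0;
}
```

The key property is that the speculative path never sleeps on a lock: any contention or concurrent change turns into an abort, and correctness is guaranteed by the classic path that follows.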
>>>>>>> Support for THP is not included because, when checking the PMD, we could
>>>>>>> be confused by an in-progress collapsing operation done by khugepaged.
>>>>>>> The issue is that pmd_none() could be true either if the PMD is not yet
>>>>>>> populated or if the underlying PTEs are in the process of being
>>>>>>> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>
>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>> or 'spf'. It counts the number of page faults handled speculatively. When
>>>>>>> recording 'faults,spf' events, the 'faults' event counts the total number
>>>>>>> of page fault events while 'spf' only counts the subset of faults
>>>>>>> processed speculatively.
>>>>>>>
>>>>>>> There are some trace events introduced by this series. They allow
>>>>>>> identifying why a page fault was not processed speculatively. They do not
>>>>>>> account for the faults generated by single-threaded processes, which are
>>>>>>> directly processed while holding the mmap_sem. These trace events are
>>>>>>> grouped in a system named 'pagefault'; they are:
>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>> our back
>>>>>>>
>>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>>> following arguments:
>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>
>>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>
>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>> on x86, PowerPC and arm64.
>>>>>>>
>>>>>>> ---------------------
>>>>>>> Real Workload results
>>>>>>>
>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>> in-memory multithreaded database product" on a 176-core SMT8 Power system
>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>> second. This run was done on the v6 series, but the changes introduced in
>>>>>>> this new version should not impact the performance boost seen.
>>>>>>>
>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>> series:
>>>>>>> vanilla spf
>>>>>>> faults 89.418 101.364 +13%
>>>>>>> spf n/a 97.989
>>>>>>>
>>>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>>>> speculative way.
>>>>>>>
>>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>>>> it a try on an Android device. He reported that the application launch time
>>>>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>>>>> 20%.
>>>>>>>
>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>> MSM845 (8 cores) with 6GB of RAM (lower is better):
>>>>>>>
>>>>>>> Application 4.9 4.9+spf delta
>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>> com.sina.weibo 907 906 0%
>>>>>>> com.youku.phone 816 731 -11%
>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>> com.UCMobile 415 411 -1%
>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>
>>>>>>> ------------------
>>>>>>> Benchmarks results
>>>>>>>
>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>> SPF is BASE + this series
>>>>>>>
>>>>>>> Kernbench:
>>>>>>> ----------
>>>>>>> Here are the results on a 16 CPU x86 guest using kernbench on a 4.15
>>>>>>> kernel (the kernel is built 5 times):
>>>>>>>
>>>>>>> Average Half load -j 8
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>
>>>>>>> Average Optimal load -j 16
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>
>>>>>>>
>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 526743764 faults
>>>>>>> 210 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Very few speculative page faults were recorded as most of the processes
>>>>>>> involved are single-threaded (it seems that on this architecture some
>>>>>>> threads were created during the kernel build process).
>>>>>>>
>>>>>>> Here are the kernbench results on an 80 CPU Power8 system:
>>>>>>>
>>>>>>> Average Half load -j 40
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>
>>>>>>> Average Optimal load -j 80
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>
>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 116730856 faults
>>>>>>> 0 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Most of the processes involved are single-threaded, so SPF is not
>>>>>>> activated, but there is no impact on the performance.
>>>>>>>
>>>>>>> Ebizzy:
>>>>>>> -------
>>>>>>> The test counts the number of records per second it can manage; the higher
>>>>>>> the better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>>>> results I repeated the test 100 times and measured the average number of
>>>>>>> records processed per second.
>>>>>>>
>>>>>>> BASE SPF delta
>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>
>>>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>> 1706379 faults
>>>>>>> 1674599 spf
>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>> 1874773 faults
>>>>>>> 1461153 spf
>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> In ebizzy's case most of the page faults were handled speculatively,
>>>>>>> leading to the ebizzy performance boost.
>>>>>>>
>>>>>>> ------------------
>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>> - Accounted for all review feedback from Punit Agrawal, Ganesh Mahendran
>>>>>>> and Minchan Kim, hopefully.
>>>>>>> - Removed an unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>> __do_page_fault().
>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the PTE try-lock fails,
>>>>>>> instead of aborting the speculative page fault handling, dropping the
>>>>>>> now useless trace event pagefault:spf_pte_lock.
>>>>>>> - No longer try to reuse the fetched VMA during the speculative page fault
>>>>>>> handling when retrying is needed. This added a lot of complexity and
>>>>>>> additional tests didn't show a significant performance improvement.
>>>>>>> - Converted IS_ENABLED(CONFIG_NUMA) back to #ifdef due to a build error.
>>>>>>>
>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>
>>>>>>>
>>>>>>> Laurent Dufour (20):
>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>> mm: introduce INIT_VMA()
>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>> mm: protect mremap() against SPF handler
>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>> mm: introduce __vm_normal_page()
>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>> perf: add a speculative page fault sw event
>>>>>>> perf tools: add support for the SPF perf event
>>>>>>> mm: add speculative page fault vmstats
>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>
>>>>>>> Mahendran Ganesh (2):
>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> arm64/mm: add speculative page fault
>>>>>>>
>>>>>>> Peter Zijlstra (4):
>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: VMA sequence count
>>>>>>> mm: provide speculative fault infrastructure
>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>
>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>> fs/exec.c | 2 +-
>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>> include/linux/swap.h | 10 +-
>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>> kernel/fork.c | 5 +-
>>>>>>> mm/Kconfig | 22 ++
>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>> mm/hugetlb.c | 2 +
>>>>>>> mm/init-mm.c | 3 +
>>>>>>> mm/internal.h | 20 ++
>>>>>>> mm/khugepaged.c | 5 +
>>>>>>> mm/madvise.c | 6 +-
>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>> mm/migrate.c | 6 +-
>>>>>>> mm/mlock.c | 13 +-
>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>> mm/mprotect.c | 4 +-
>>>>>>> mm/mremap.c | 13 +
>>>>>>> mm/nommu.c | 2 +-
>>>>>>> mm/rmap.c | 5 +-
>>>>>>> mm/swap.c | 6 +-
>>>>>>> mm/swap_state.c | 8 +-
>>>>>>> mm/vmstat.c | 5 +-
>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>
>>>>>>> --
>>>>>>> 2.7.4
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
[-- Attachment #2: perf-profile_page_fault2_base_thp_always.gz --]
[-- Type: application/gzip, Size: 12167 bytes --]
[-- Attachment #3: perf-profile_page_fault2_base_thp_never.gz --]
[-- Type: application/gzip, Size: 11543 bytes --]
[-- Attachment #4: perf-profile_page_fault2_head_thp_always.gz --]
[-- Type: application/gzip, Size: 12019 bytes --]
[-- Attachment #5: perf-profile_page_fault3_base_thp_always.gz --]
[-- Type: application/gzip, Size: 12701 bytes --]
[-- Attachment #6: perf-profile_page_fault3_base_thp_always.gz --]
[-- Type: application/gzip, Size: 12701 bytes --]
[-- Attachment #7: perf-profile_page_fault3_base_thp_never.gz --]
[-- Type: application/gzip, Size: 12699 bytes --]
* RE: [PATCH v11 00/26] Speculative page faults
2018-08-03 6:36 ` Song, HaiyanX
@ 2018-08-03 6:45 ` Song, HaiyanX
2018-08-22 14:23 ` Laurent Dufour
1 sibling, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-08-03 6:45 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 43157 bytes --]
Add another 3 perf file.
________________________________________
From: Song, HaiyanX
Sent: Friday, August 03, 2018 2:36 PM
To: Laurent Dufour
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: RE: [PATCH v11 00/26] Speculative page faults
Hi Laurent,
Thanks for your analysis of the last perf results.
You mentioned that "the major differences at the head of the perf report is the 92% testcase which is weirdly not reported
on the head side"; this was caused by a bug in 0-day which prevented that item from being counted in perf.
I triggered the page_fault2 and page_fault3 tests again with only the thread mode of will-it-scale on 0-day (on the same test box, each case was tested 3 times).
I checked that the perf reports no longer have the above-mentioned problem.
I compared them and found that some items differ, such as in the cases below:
page_fault2-thp-always: handle_mm_fault, base: 45.22% head: 29.41%
page_fault3-thp-always: handle_mm_fault, base: 22.95% head: 14.15%
So I attached the perf results to this mail again; could you have another look to check the difference between the base and head commits.
Thanks,
Haiyan, Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Tuesday, July 17, 2018 5:36 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 13/07/2018 05:56, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Thanks a lot for sharing this perf reports.
I looked at them closely, and I have to admit that I was not able to find a
major difference between the base and the head report, except that
handle_pte_fault() is no longer inlined in the head one.
As expected, __handle_speculative_fault() is never traced since these tests are
dealing with file mappings, which are not handled in the speculative way.
When running these tests, did you see a major difference in the test results
between base and head?
From the number of cycles counted, the biggest difference is page_fault3 when
run with THP enabled:
BASE HEAD Delta
page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
page_fault2_base_THP-Alwasys 1124844374523 1076312228927 -4.31%
page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
The very weird thing is the difference in the delta cycles reported between
thp never and thp always, because the speculative path is aborted when checking
the vma->ops field, which is the same in both cases, and THP is never
checked. So there is no code coverage difference, on the speculative path,
between these 2 cases. This leads me to think that there are other interactions
interfering in the measurement.
Looking at the perf-profile_page_fault3_*_THP-Always reports, the major
difference at the head of the perf report is the 92% testcase entry, which is
weirdly not reported on the head side:
92.02% 22.33% page_fault3_processes [.] testcase
92.02% testcase
Then the base reported 37.67% for __do_page_fault() where the head reported
48.41%, but the only difference in this function, between base and head, is the
call to handle_speculative_fault(). But this is a macro checking the fault
flags and mm->users and then calling __handle_speculative_fault() if needed.
So this can't explain the difference, unless __handle_speculative_fault()
is inlined in __do_page_fault().
Is this the case in your build?
Haiyan, do you still have the output of the test to check those numbers too ?
Cheers,
Laurent
> I attached the perf-profile.gz files for the page_fault2 and page_fault3 cases. These files were captured while testing the related test case.
> Please check this data to see if it can help you find the bigger change. Thanks.
>
> File name perf-profile_page_fault2_head_THP-Always.gz, means the perf-profile result get from page_fault2
> tested for head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with THP_always configuration.
>
> Best regards,
> Haiyan Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Thursday, July 12, 2018 1:05 AM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> Hi Haiyan,
>
> Do you get a chance to capture some performance cycles on your system ?
> I still can't get these numbers on my hardware.
>
> Thanks,
> Laurent.
>
> On 04/07/2018 09:51, Laurent Dufour wrote:
>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>> Hi Laurent,
>>>
>>>
>>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the test cases below were all run 3 times.
>>> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.
>>
>> Repeating the test only 3 times seems a bit too low to me.
>>
>> I'll focus on the higher change for the moment, but I don't have access to such
>> a hardware.
>>
>> Is it possible to provide a diff, between base and SPF, of the performance cycles
>> measured when running page_fault3 and page_fault2 when the 20% change is detected?
>>
>> Please stay focused on the test case's process to see exactly where the series
>> has an impact.
>>
>> Thanks,
>> Laurent.
>>
>>>
>>> And I did not find other high variation on test case result.
>>>
>>> a). Enable THP
>>> testcase base stddev change head stddev metric
>>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>
>>> b). Disable THP
>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>
>>>
>>> Best regards,
>>> Haiyan Song
>>> ________________________________________
>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>> Sent: Monday, July 02, 2018 4:59 PM
>>> To: Song, HaiyanX
>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>
>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>> Regression tests for the v11 patch series have been run; some regressions were
>>>> found by LKP-tools (Linux Kernel Performance) on the Intel 4s Skylake platform.
>>>> This time only the cases which had been run and showed regressions on the
>>>> v9 patch series were tested.
>>>>
>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>> commit id:
>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>> Benchmark: will-it-scale
>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>
>>>> Metrics:
>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>> THP: enable / disable
>>>> nr_task:100%
>>>>
>>>> 1. Regressions:
>>>>
>>>> a). Enable THP
>>>> testcase base change head metric
>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>> Notes: for the above values of test result, the higher is better.
>>>
>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>>> get reproducible results. The results have huge variation, even on the vanilla
>>> kernel, and I can't draw conclusions about any changes because of that.
>>>
>>> I tried on smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>>> measure any changes between the vanilla and the SPF patched ones:
>>>
>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>> brk1 243391.2 238527.5 -2.00%
>>>
>>> Tests were run 10 times, no high variation detected.
>>>
>>> Did you see high variation on your side ? How many times the test were run to
>>> compute the average values ?
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>
>>>>
>>>> 2. Improvement: not found improvement based on the selected test cases.
>>>>
>>>>
>>>> Best regards
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Yes, these tests are done on V9 patch.
>>>>
>>>> Do you plan to give this V11 a run ?
>>>>
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Haiyan Song
>>>>>
>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>
>>>>>>> Some regressions and improvements were found by LKP-tools (Linux Kernel
>>>>>>> Performance) on the V9 patch series, tested on an Intel 4s Skylake platform.
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>>>>> series" while responding to the v11 series header...
>>>>>> Were these tests done on v9 or v11?
>>>>>>
>>>>>> Cheers,
>>>>>> Laurent.
>>>>>>
>>>>>>>
>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>> Commit id:
>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>> Benchmark suite: will-it-scale
>>>>>>> Download link:
>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>> Metrics:
>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>> THP: enable / disable
>>>>>>> nr_task: 100%
>>>>>>>
>>>>>>> 1. Regressions:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>
>>>>>>> 2. Improvements:
>>>>>>> a) THP enabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> b) THP disabled:
>>>>>>> testcase base change head metric
>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>
>>>>>>> Notes: for the above values in the "change" column, a higher value means that
>>>>>>> the related testcase result on the head commit is better than on the base
>>>>>>> commit for this benchmark.
>>>>>>>
>>>>>>>
>>>>>>> Best regards
>>>>>>> Haiyan Song
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>
>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>> page faults without holding the mm semaphore [1].
>>>>>>>
>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>> processes, since the page fault handler will not wait for other threads'
>>>>>>> memory layout changes to complete, assuming that those changes are done in
>>>>>>> another part of the process's memory space. This type of page fault is
>>>>>>> named a speculative page fault. If the speculative page fault fails because
>>>>>>> concurrency is detected or because the underlying PMD or PTE tables are not
>>>>>>> yet allocated, its processing is aborted and a classic page fault is tried
>>>>>>> instead.
>>>>>>>
>>>>>>> The speculative page fault (SPF) handler has to look up the VMA matching the
>>>>>>> fault address without holding the mmap_sem; this is done by introducing a
>>>>>>> rwlock which protects access to the mm_rb tree. Previously this was done
>>>>>>> using SRCU, but that introduced a lot of scheduling to process the VMA
>>>>>>> freeing operations, which hurt performance by 20% as reported by
>>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>>> the locking contention to these operations, which are expected to be
>>>>>>> O(log n). In addition, to ensure that the VMA is not freed behind our
>>>>>>> back, a reference count is added and 2 services (get_vma() and
>>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>>> fetched from the RB tree using get_vma(), it must later be released using
>>>>>>> put_vma(). With this scheme I no longer see the overhead I previously got
>>>>>>> with the will-it-scale benchmark.
>>>>>>>
>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>> handler to quickly check for parallel changes in progress and to abort the
>>>>>>> speculative page fault in that case.
>>>>>>>
>>>>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>>>>> VMA's attributes to verify whether the page fault can be handled this way.
>>>>>>> For this, the VMA is protected through a sequence lock which allows fast
>>>>>>> detection of concurrent VMA changes. If such a change is detected, the
>>>>>>> speculative page fault is aborted and a *classic* page fault is tried
>>>>>>> instead. VMA sequence locking is added wherever VMA attributes which are
>>>>>>> checked during the page fault are modified.
>>>>>>>
>>>>>>> When the PTE is fetched, the VMA is checked again to see whether it has
>>>>>>> changed, so once the page table is locked the VMA is known to be valid.
>>>>>>> Any other change touching this PTE would need to lock the page table, so
>>>>>>> no parallel change is possible at this time.
>>>>>>>
>>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>>> checking the PMD to ensure that there is no ongoing collapsing operation.
>>>>>>> Since khugepaged first sets the PMD to pmd_none and then waits for the
>>>>>>> other CPUs to have caught the IPI, if the PMD is valid at the time the
>>>>>>> PTE is locked, we have the guarantee that the collapsing operation will
>>>>>>> have to wait on the PTE lock to move forward. This allows the SPF handler
>>>>>>> to map the PTE safely. If the PMD value is different from the one recorded
>>>>>>> at the beginning of the SPF operation, the classic page fault handler is
>>>>>>> called to handle the fault while holding the mmap_sem. As the PTE lock is
>>>>>>> taken with interrupts disabled, it is acquired using spin_trylock() to
>>>>>>> avoid a deadlock when handling a page fault while a TLB invalidate is
>>>>>>> requested by another CPU holding the PTE lock.
>>>>>>>
>>>>>>> In pseudo code, this could be seen as:
>>>>>>> speculative_page_fault()
>>>>>>> {
>>>>>>> vma = get_vma()
>>>>>>> check vma sequence count
>>>>>>> check vma's support
>>>>>>> disable interrupt
>>>>>>> check pgd,p4d,...,pte
>>>>>>> save pmd and pte in vmf
>>>>>>> save vma sequence counter in vmf
>>>>>>> enable interrupt
>>>>>>> check vma sequence count
>>>>>>> handle_pte_fault(vma)
>>>>>>> ..
>>>>>>> page = alloc_page()
>>>>>>> pte_map_lock()
>>>>>>> disable interrupt
>>>>>>> abort if sequence counter has changed
>>>>>>> abort if pmd or pte has changed
>>>>>>> pte map and lock
>>>>>>> enable interrupt
>>>>>>> if abort
>>>>>>> free page
>>>>>>> abort
>>>>>>> ...
>>>>>>> }
>>>>>>>
>>>>>>> arch_fault_handler()
>>>>>>> {
>>>>>>> if (speculative_page_fault(&vma))
>>>>>>> goto done
>>>>>>> again:
>>>>>>> lock(mmap_sem)
>>>>>>> vma = find_vma();
>>>>>>> handle_pte_fault(vma);
>>>>>>> if retry
>>>>>>> unlock(mmap_sem)
>>>>>>> goto again;
>>>>>>> done:
>>>>>>> handle fault error
>>>>>>> }
>>>>>>>
>>>>>>> Support for THP is not done because, when checking the PMD, we can be
>>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>>>>> populated or if the underlying PTEs are in the process of being collapsed.
>>>>>>> So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>
>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>> or 'spf'. It counts the number of successful page fault events handled
>>>>>>> speculatively. When recording 'faults,spf' events, the 'faults' event counts
>>>>>>> the total number of page fault events while 'spf' only counts the part of
>>>>>>> the faults processed speculatively.
>>>>>>>
>>>>>>> There are some trace events introduced by this series. They allow
>>>>>>> identifying why the page faults were not processed speculatively. This
>>>>>>> doesn't take into account the faults generated by a monothreaded process,
>>>>>>> which are directly processed while holding the mmap_sem. These trace
>>>>>>> events are grouped in a system named 'pagefault'; they are:
>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>> our back
>>>>>>>
>>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>>> following arguments:
>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>
>>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>
>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>> on x86, PowerPC and arm64.
>>>>>>>
>>>>>>> ---------------------
>>>>>>> Real Workload results
>>>>>>>
>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>> in-memory multithreaded database product" on a 176-core SMT8 Power system
>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>> second. This run was done on the v6 series, but the changes introduced in
>>>>>>> this new version should not impact the performance boost seen.
>>>>>>>
>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>> series:
>>>>>>> vanilla spf
>>>>>>> faults 89.418 101.364 +13%
>>>>>>> spf n/a 97.989
>>>>>>>
>>>>>>> With the SPF kernel, most of the page faults were processed in a speculative
>>>>>>> way.
>>>>>>>
>>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>>>> it a try on an Android device. He reported that the application launch time
>>>>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>>>>> 20%.
>>>>>>>
>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>> MSM845 (8 cores) with 6GB (lower is better):
>>>>>>>
>>>>>>> Application 4.9 4.9+spf delta
>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>> com.sina.weibo 907 906 0%
>>>>>>> com.youku.phone 816 731 -11%
>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>> com.UCMobile 415 411 -1%
>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>
>>>>>>> ------------------
>>>>>>> Benchmarks results
>>>>>>>
>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>> SPF is BASE + this series
>>>>>>>
>>>>>>> Kernbench:
>>>>>>> ----------
>>>>>>> Here are the results on a 16 CPUs x86 guest using kernbench on a 4.15
>>>>>>> kernel (the kernel is built 5 times):
>>>>>>>
>>>>>>> Average Half load -j 8
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>
>>>>>>> Average Optimal load -j 16
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>
>>>>>>>
>>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 526743764 faults
>>>>>>> 210 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Very few speculative page faults were recorded, as most of the processes
>>>>>>> involved are monothreaded (it seems that on this architecture some threads
>>>>>>> were created during the kernel build processing).
>>>>>>>
>>>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>>>
>>>>>>> Average Half load -j 40
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>
>>>>>>> Average Optimal load -j 80
>>>>>>> Run (std deviation)
>>>>>>> BASE SPF
>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>
>>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>> 116730856 faults
>>>>>>> 0 spf
>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> Most of the processes involved are monothreaded, so SPF is not activated,
>>>>>>> but there is no impact on the performance.
>>>>>>>
>>>>>>> Ebizzy:
>>>>>>> -------
>>>>>>> The test counts the number of records per second it can manage; the higher,
>>>>>>> the better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>>>> results I repeated the test 100 times and measured the average, in records
>>>>>>> processed per second.
>>>>>>>
>>>>>>> BASE SPF delta
>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>
>>>>>>> Here are the performance counters read during a run on a 16 CPUs x86 VM:
>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>> 1706379 faults
>>>>>>> 1674599 spf
>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>> 1874773 faults
>>>>>>> 1461153 spf
>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>> 0 pagefault:spf_vma_access
>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>
>>>>>>> In ebizzy's case most of the page faults were handled in a speculative way,
>>>>>>> leading to the ebizzy performance boost.
>>>>>>>
>>>>>>> ------------------
>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>> - Accounted for all review feedbacks from Punit Agrawal, Ganesh Mahendran
>>>>>>> and Minchan Kim, hopefully.
>>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>> __do_page_fault().
>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the pte trylock fails,
>>>>>>> instead of aborting the speculative page fault handling. Drop the now
>>>>>>> useless trace event pagefault:spf_pte_lock.
>>>>>>> - No longer try to reuse the fetched VMA during the speculative page fault
>>>>>>> handling when retrying is needed. This added a lot of complexity and
>>>>>>> additional tests didn't show a significant performance improvement.
>>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>>
>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>
>>>>>>>
>>>>>>> Laurent Dufour (20):
>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>> mm: introduce INIT_VMA()
>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>> mm: protect mremap() against SPF hanlder
>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>> mm: introduce __vm_normal_page()
>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>> perf: add a speculative page fault sw event
>>>>>>> perf tools: add support for the SPF perf event
>>>>>>> mm: add speculative page fault vmstats
>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>
>>>>>>> Mahendran Ganesh (2):
>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>> arm64/mm: add speculative page fault
>>>>>>>
>>>>>>> Peter Zijlstra (4):
>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>> mm: VMA sequence count
>>>>>>> mm: provide speculative fault infrastructure
>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>
>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>> fs/exec.c | 2 +-
>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>> include/linux/swap.h | 10 +-
>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>> kernel/fork.c | 5 +-
>>>>>>> mm/Kconfig | 22 ++
>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>> mm/hugetlb.c | 2 +
>>>>>>> mm/init-mm.c | 3 +
>>>>>>> mm/internal.h | 20 ++
>>>>>>> mm/khugepaged.c | 5 +
>>>>>>> mm/madvise.c | 6 +-
>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>> mm/migrate.c | 6 +-
>>>>>>> mm/mlock.c | 13 +-
>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>> mm/mprotect.c | 4 +-
>>>>>>> mm/mremap.c | 13 +
>>>>>>> mm/nommu.c | 2 +-
>>>>>>> mm/rmap.c | 5 +-
>>>>>>> mm/swap.c | 6 +-
>>>>>>> mm/swap_state.c | 8 +-
>>>>>>> mm/vmstat.c | 5 +-
>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>
>>>>>>> --
>>>>>>> 2.7.4
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
[-- Attachment #2: perf-profile_page_fault3_head_thp_always.gz --]
[-- Type: application/gzip, Size: 12909 bytes --]
[-- Attachment #3: perf-profile_page_fault3_head_thp_never.gz --]
[-- Type: application/gzip, Size: 12535 bytes --]
[-- Attachment #4: perf-profile_page_fault2_head_thp_never.gz --]
[-- Type: application/gzip, Size: 11782 bytes --]
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
2018-08-03 6:36 ` Song, HaiyanX
@ 2018-08-22 14:23 ` Laurent Dufour
2018-08-22 14:23 ` Laurent Dufour
1 sibling, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-08-22 14:23 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 43707 bytes --]
On 03/08/2018 08:36, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Sorry for the late answer, I was off a couple of days.
>
> Thanks for your analysis of the last perf results.
> You mentioned "the major differences at the head of the perf report is the 92%
> testcase which is weirdly not reported on the head side"; this was a bug of
> 0-day, and it caused that item not to be counted in perf.
>
> I've triggered the tests page_fault2 and page_fault3 again, only with the
> thread mode of will-it-scale, on 0-day (on the same test box, every case
> tested 3 times). I checked that the perf reports no longer show the
> above-mentioned problem.
>
> I have compared them and found that some items differ, such as the cases below:
> page_fault2-thp-always: handle_mm_fault, base: 45.22% head: 29.41%
> page_fault3-thp-always: handle_mm_fault, base: 22.95% head: 14.15%
These would mean that the system spends less time running handle_mm_fault()
when SPF is in the picture in these 2 cases, which is good. This should lead to
better results with the SPF series, and I can't find any values higher on the
head side.
>
> So I attached the perf results to this mail again; could you have a look again to check the difference between the base and head commits?
I took a close look at all the perf results you sent, but I can't identify any
major difference. However, the compiler optimization is getting rid of the
handle_pte_fault() symbol on the base kernel, which adds complexity to checking
the differences.
To get rid of that, I'm proposing that you apply the attached patch to the
spf kernel. This patch allows turning the SPF handler on and off through
/proc/sys/vm/speculative_page_fault.
This should ease the testing by limiting the reboots and avoiding kernel symbol
mismatches. Obviously there is still a small overhead due to the check, but it
should not be noticeable.
With this patch applied you can simply run
echo 1 > /proc/sys/vm/speculative_page_fault
to run a test with the speculative page fault handler activated. Or run
echo 0 > /proc/sys/vm/speculative_page_fault
to run a test without it.
I'm really sorry to ask that again, but could you please run the test
page_fault3_base_THP-Always with and without SPF and capture the perf output.
I think we should focus on that test which showed the biggest regression.
Thanks,
Laurent.
>
> Thanks,
> Haiyan, Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Tuesday, July 17, 2018 5:36 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 13/07/2018 05:56, Song, HaiyanX wrote:
>> Hi Laurent,
>
> Hi Haiyan,
>
> Thanks a lot for sharing this perf reports.
>
> I looked at them closely, and I have to admit that I was not able to find a
> major difference between the base and the head reports, except that
> handle_pte_fault() is no longer inlined in the head one.
>
> As expected, __handle_speculative_fault() is never traced since these tests
> deal with file mappings, which are not handled the speculative way.
>
> When running these tests did you see a major difference in the test results
> between base and head?
>
> From the number of cycles counted, the biggest difference is page_fault3 when
> run with the THP enabled:
> BASE HEAD Delta
> page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
> page_fault2_base_THP-Always 1124844374523 1076312228927 -4.31%
> page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
> page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
>
>
> The very weird thing is the difference in the delta cycles reported between
> thp never and thp always, because the speculative path is aborted when checking
> the vma->ops field, which is the same in both cases, and THP is never checked.
> So there is no code coverage difference, on the speculative path, between
> these 2 cases. This leads me to think that there are other interactions
> interfering with the measurement.
>
> Looking at the perf-profile_page_fault3_*_THP-Always, the major difference at
> the head of the perf report is the 92% testcase entry, which is weirdly not
> reported on the head side:
> 92.02% 22.33% page_fault3_processes [.] testcase
> 92.02% testcase
>
> Then the base reported 37.67% for __do_page_fault() where the head reported
> 48.41%, but the only difference in this function, between base and head, is the
> call to handle_speculative_fault(). But this is a macro checking the fault
> flags and mm->users and then calling __handle_speculative_fault() if needed.
> So this can't explain the difference, unless __handle_speculative_fault()
> is inlined in __do_page_fault().
> Is this the case in your build?
>
> Haiyan, do you still have the output of the test to check those numbers too?
>
> Cheers,
> Laurent
>
>> I attached the perf-profile.gz files for the cases page_fault2 and page_fault3. These files were captured while running the related test cases.
>> Please check these data to see whether they can help you find the cause of the change. Thanks.
>>
>> The file name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result was obtained from page_fault2
>> tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP always configuration.
>>
>> Best regards,
>> Haiyan Song
>>
>> ________________________________________
>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Thursday, July 12, 2018 1:05 AM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> Hi Haiyan,
>>
>> Did you get a chance to capture some performance cycles on your system?
>> I still can't get these numbers on my hardware.
>>
>> Thanks,
>> Laurent.
>>
>> On 04/07/2018 09:51, Laurent Dufour wrote:
>>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>>
>>>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
>>>> I checked the test results; only page_fault3_thread/enable THP has 6% stddev for the head commit, the other tests have lower stddev.
>>>
>>> Repeating the test only 3 times seems a bit too low to me.
>>>
>>> I'll focus on the higher change for the moment, but I don't have access to such
>>> a hardware.
>>>
>>> Is it possible to provide a diff, between base and SPF, of the performance
>>> cycles measured when running page_fault3 and page_fault2 where the 20% change
>>> is detected?
>>>
>>> Please stay focused on the test case process to see exactly where the series is
>>> impacting.
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>>
>>>> And I did not find any other high variation in the test case results.
>>>>
>>>> a). Enable THP
>>>> testcase base stddev change head stddev metric
>>>> page_fault3/enable THP 10519 +- 3% -20.5% 8368 +-6% will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 +- 2% -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>>
>>>> Best regards,
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, July 02, 2018 4:59 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (linux kernel performance)
>>>>> tested on an Intel 4s Skylake platform. This time only the cases which had been run and showed regressions on
>>>>> the V9 patch series were tested.
>>>>>
>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>>> commit id:
>>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>>> Benchmark: will-it-scale
>>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>>
>>>>> Metrics:
>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>> THP: enable / disable
>>>>> nr_task:100%
>>>>>
>>>>> 1. Regressions:
>>>>>
>>>>> a). Enable THP
>>>>> testcase base change head metric
>>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>>
>>>>> b). Disable THP
>>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>>
>>>>> Notes: for the above test result values, higher is better.
>>>>
>>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>>>> get reproducible results. The results have huge variation, even on the vanilla
>>>> kernel, so I can't draw any conclusions about changes due to that.
>>>>
>>>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>>>> measure any changes between the vanilla and the SPF patched kernels:
>>>>
>>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>>> brk1 243391.2 238527.5 -2.00%
>>>>
>>>> Tests were run 10 times, no high variation detected.
>>>>
>>>> Did you see high variation on your side? How many times were the tests run to
>>>> compute the average values?
>>>>
>>>> Thanks,
>>>> Laurent.
>>>>
>>>>
>>>>>
>>>>> 2. Improvements: no improvement found based on the selected test cases.
>>>>>
>>>>>
>>>>> Best regards
>>>>> Haiyan Song
>>>>> ________________________________________
>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>>> To: Song, HaiyanX
>>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>>
>>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>>> Hi Laurent,
>>>>>>
>>>>>> Yes, these tests are done on V9 patch.
>>>>>
>>>>> Do you plan to give this V11 a run ?
>>>>>
>>>>>>
>>>>>>
>>>>>> Best regards,
>>>>>> Haiyan Song
>>>>>>
>>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>>
>>>>>>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series
>>>>>>>> tested on an Intel 4s Skylake platform.
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>>>>>> series" while responding to the v11 header series...
>>>>>>> Were these tests done on v9 or v11?
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Laurent.
>>>>>>>
>>>>>>>>
>>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>>> Commit id:
>>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>>> Benchmark suite: will-it-scale
>>>>>>>> Download link:
>>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>>> Metrics:
>>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>>> THP: enable / disable
>>>>>>>> nr_task: 100%
>>>>>>>>
>>>>>>>> 1. Regressions:
>>>>>>>> a) THP enabled:
>>>>>>>> testcase base change head metric
>>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>>
>>>>>>>> b) THP disabled:
>>>>>>>> testcase base change head metric
>>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>>
>>>>>>>> 2. Improvements:
>>>>>>>> a) THP enabled:
>>>>>>>> testcase base change head metric
>>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>>
>>>>>>>> b) THP disabled:
>>>>>>>> testcase base change head metric
>>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>>
>>>>>>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result
>>>>>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>>>>>
>>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Haiyan Song
>>>>>>>>
>>>>>>>> ________________________________________
>>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>>
>>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>>> page faults without holding the mm semaphore [1].
>>>>>>>>
>>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>>> processes since the page fault handler will not wait for other threads'
>>>>>>>> memory layout changes to be done, assuming that those changes are done in
>>>>>>>> another part of the process's memory space. This type of page fault is named
>>>>>>>> a speculative page fault. If the speculative page fault fails because
>>>>>>>> concurrency is detected or because the underlying PMD or PTE tables are not
>>>>>>>> yet allocated, it aborts and a classic page fault is then tried.
>>>>>>>>
>>>>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA's
>>>>>>>> freeing operations, which was hitting the performance by 20% as reported by
>>>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>>>> the locking contention to these operations, which are expected to
>>>>>>>> be in O(log n) order. In addition, to ensure that the VMA is not freed
>>>>>>>> behind our back, a reference count is added and 2 services (get_vma() and
>>>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>>>> fetched from the RB tree using get_vma(), it must later be freed using
>>>>>>>> put_vma(). With this scheme I can no longer see the overhead I previously
>>>>>>>> got with the will-it-scale benchmark.
>>>>>>>>
>>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>>> handler to fast check for parallel changes in progress and to abort the
>>>>>>>> speculative page fault in that case.
>>>>>>>>
>>>>>>>> Once the VMA has been found, the speculative page fault handler checks the
>>>>>>>> VMA's attributes to verify whether the page fault can be handled
>>>>>>>> correctly or not. Thus, the VMA is protected through a sequence lock which
>>>>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>>>>> is tried instead. VMA sequence locking is added wherever VMA attributes
>>>>>>>> which are checked during the page fault are modified.
>>>>>>>>
>>>>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>>>>>> so once the page table is locked, the VMA is known to be valid. Any other
>>>>>>>> change leading to touching this PTE will need to lock the page table, so no
>>>>>>>> parallel change is possible at this time.
>>>>>>>>
>>>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>>>> checking the PMD to ensure that there is no ongoing collapsing
>>>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>>>> waits for the other CPUs to have caught the IPI interrupt, if the pmd is
>>>>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>>>>> classic page fault handler will be called to handle the operation while
>>>>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>>>>> the lock is acquired using spin_trylock() to avoid deadlock when handling a
>>>>>>>> page fault while a TLB invalidate is requested by another CPU holding the
>>>>>>>> PTE.
>>>>>>>>
>>>>>>>> In pseudo code, this could be seen as:
>>>>>>>> speculative_page_fault()
>>>>>>>> {
>>>>>>>> vma = get_vma()
>>>>>>>> check vma sequence count
>>>>>>>> check vma's support
>>>>>>>> disable interrupt
>>>>>>>> check pgd,p4d,...,pte
>>>>>>>> save pmd and pte in vmf
>>>>>>>> save vma sequence counter in vmf
>>>>>>>> enable interrupt
>>>>>>>> check vma sequence count
>>>>>>>> handle_pte_fault(vma)
>>>>>>>> ..
>>>>>>>> page = alloc_page()
>>>>>>>> pte_map_lock()
>>>>>>>> disable interrupt
>>>>>>>> abort if sequence counter has changed
>>>>>>>> abort if pmd or pte has changed
>>>>>>>> pte map and lock
>>>>>>>> enable interrupt
>>>>>>>> if abort
>>>>>>>> free page
>>>>>>>> abort
>>>>>>>> ...
>>>>>>>> }
>>>>>>>>
>>>>>>>> arch_fault_handler()
>>>>>>>> {
>>>>>>>> if (speculative_page_fault(&vma))
>>>>>>>> goto done
>>>>>>>> again:
>>>>>>>> lock(mmap_sem)
>>>>>>>> vma = find_vma();
>>>>>>>> handle_pte_fault(vma);
>>>>>>>> if retry
>>>>>>>> unlock(mmap_sem)
>>>>>>>> goto again;
>>>>>>>> done:
>>>>>>>> handle fault error
>>>>>>>> }
>>>>>>>>
>>>>>>>> Support for THP is not done because when checking the PMD, we can be
>>>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>>>>>> populated or if the underlying PTEs are in the process of being collapsed.
>>>>>>>> So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>>
>>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>>> or 'spf'. It counts the number of successful page fault events handled
>>>>>>>> speculatively. When recording 'faults,spf' events, the 'faults' one counts
>>>>>>>> the total number of page fault events while 'spf' only counts
>>>>>>>> the part of the faults processed speculatively.
>>>>>>>>
>>>>>>>> There are some trace events introduced by this series. They allow
>>>>>>>> identifying why the page faults were not processed speculatively. This
>>>>>>>> doesn't take into account the faults generated by a monothreaded process,
>>>>>>>> which are directly processed while holding the mmap_sem. These trace events
>>>>>>>> are grouped in a system named 'pagefault'; they are:
>>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>>> our back
>>>>>>>>
>>>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>>>> following arguments:
>>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>>
>>>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>>
>>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>>> on x86, PowerPC and arm64.
>>>>>>>>
>>>>>>>> ---------------------
>>>>>>>> Real Workload results
>>>>>>>>
>>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>>> in-memory multithreaded database product" on a 176 cores SMT8 Power system
>>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>>> second. This run was done on the v6 series, but changes introduced in
>>>>>>>> this new version should not impact the performance boost seen.
>>>>>>>>
>>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>>> series:
>>>>>>>> vanilla spf
>>>>>>>> faults 89.418 101.364 +13%
>>>>>>>> spf n/a 97.989
>>>>>>>>
>>>>>>>> With the SPF kernel, most of the page faults were processed in a speculative
>>>>>>>> way.
>>>>>>>>
>>>>>>>> Ganesh Mahendran had backported the series on top of a 4.9 kernel and gave
>>>>>>>> it a try on an android device. He reported that the application launch time
>>>>>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>>>>>> 20%.
>>>>>>>>
>>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>>> MSM845 (8 cores) with 6GB of memory (lower is better):
>>>>>>>>
>>>>>>>> Application 4.9 4.9+spf delta
>>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>>> com.sina.weibo 907 906 0%
>>>>>>>> com.youku.phone 816 731 -11%
>>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>>> com.UCMobile 415 411 -1%
>>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Benchmarks results
>>>>>>>>
>>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>>> SPF is BASE + this series
>>>>>>>>
>>>>>>>> Kernbench:
>>>>>>>> ----------
>>>>>>>> Here are the results on a 16 CPUs x86 guest using kernbench on a 4.15
>>>>>>>> kernel (the kernel is built 5 times):
>>>>>>>>
>>>>>>>> Average Half load -j 8
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>>
>>>>>>>> Average Optimal load -j 16
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>>
>>>>>>>>
>>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>>> 526743764 faults
>>>>>>>> 210 spf
>>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> Very few speculative page faults were recorded as most of the processes
>>>>>>>> involved are monothreaded (it seems that on this architecture some threads
>>>>>>>> were created during the kernel build processing).
>>>>>>>>
>>>>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>>>>
>>>>>>>> Average Half load -j 40
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>>
>>>>>>>> Average Optimal load -j 80
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>>
>>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>>> 116730856 faults
>>>>>>>> 0 spf
>>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> Most of the processes involved are monothreaded so SPF is not activated but
>>>>>>>> there is no impact on the performance.
>>>>>>>>
>>>>>>>> Ebizzy:
>>>>>>>> -------
>>>>>>>> The test counts the number of records per second it can manage; higher is
>>>>>>>> better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get a
>>>>>>>> consistent result I repeated the test 100 times and measured the average.
>>>>>>>> The number reported is the records processed per second; higher is
>>>>>>>> better.
>>>>>>>>
>>>>>>>> BASE SPF delta
>>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>>
>>>>>>>> Here are the performance counters read during a run on a 16 CPUs x86 VM:
>>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>>> 1706379 faults
>>>>>>>> 1674599 spf
>>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>>> 1874773 faults
>>>>>>>> 1461153 spf
>>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> In ebizzy's case most of the page faults were handled in a speculative way,
>>>>>>>> leading to the ebizzy performance boost.
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>>> - Accounted for all review feedbacks from Punit Agrawal, Ganesh Mahendran
>>>>>>>> and Minchan Kim, hopefully.
>>>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>>> __do_page_fault().
>>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when pte try lock fails
>>>>>>>> instead
>>>>>>>> of aborting the speculative page fault handling. Dropping the now
>>>>>>>> useless
>>>>>>>> trace event pagefault:spf_pte_lock.
>>>>>>>> - No more try to reuse the fetched VMA during the speculative page fault
>>>>>>>> handling when retrying is needed. This adds a lot of complexity and
>>>>>>>> additional tests done didn't show a significant performance improvement.
>>>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>>>
>>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>>
>>>>>>>>
>>>>>>>> Laurent Dufour (20):
>>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>>> mm: introduce INIT_VMA()
>>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>>> mm: protect mremap() against SPF hanlder
>>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>>> mm: introduce __vm_normal_page()
>>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>>> perf: add a speculative page fault sw event
>>>>>>>> perf tools: add support for the SPF perf event
>>>>>>>> mm: add speculative page fault vmstats
>>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>>
>>>>>>>> Mahendran Ganesh (2):
>>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> arm64/mm: add speculative page fault
>>>>>>>>
>>>>>>>> Peter Zijlstra (4):
>>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>>> mm: VMA sequence count
>>>>>>>> mm: provide speculative fault infrastructure
>>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>>
>>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>>> fs/exec.c | 2 +-
>>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>>> include/linux/swap.h | 10 +-
>>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>>> kernel/fork.c | 5 +-
>>>>>>>> mm/Kconfig | 22 ++
>>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>>> mm/hugetlb.c | 2 +
>>>>>>>> mm/init-mm.c | 3 +
>>>>>>>> mm/internal.h | 20 ++
>>>>>>>> mm/khugepaged.c | 5 +
>>>>>>>> mm/madvise.c | 6 +-
>>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>>> mm/migrate.c | 6 +-
>>>>>>>> mm/mlock.c | 13 +-
>>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>>> mm/mprotect.c | 4 +-
>>>>>>>> mm/mremap.c | 13 +
>>>>>>>> mm/nommu.c | 2 +-
>>>>>>>> mm/rmap.c | 5 +-
>>>>>>>> mm/swap.c | 6 +-
>>>>>>>> mm/swap_state.c | 8 +-
>>>>>>>> mm/vmstat.c | 5 +-
>>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>>
>>>>>>>> --
>>>>>>>> 2.7.4
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
[-- Attachment #2: 0001-mm-Add-a-speculative-page-fault-switch-in-sysctl.patch --]
[-- Type: text/x-patch; name="0001-mm-Add-a-speculative-page-fault-switch-in-sysctl.patch", Size: 0 bytes --]
* Re: [PATCH v11 00/26] Speculative page faults
@ 2018-08-22 14:23 ` Laurent Dufour
0 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-08-22 14:23 UTC (permalink / raw)
To: Song, HaiyanX
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 43704 bytes --]
On 03/08/2018 08:36, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Sorry for the late answer, I was off a couple of days.
>
> Thanks for your analysis for the last perf results.
> You mentioned "the major differences at the head of the perf report is the 92% testcase which is weirdly not reported
> on the head side"; this is a bug of 0-day, and it caused the item not to be counted in perf.
>
> I've triggered the tests page_fault2 and page_fault3 again, only with the thread mode of will-it-scale, on 0-day (on the same test box, every case tested 3 times).
> I checked that the perf reports have none of the above-mentioned problem.
>
> I have compared them, found some items have difference, such as below case:
> page_fault2-thp-always: handle_mm_fault, base: 45.22% head: 29.41%
> page_fault3-thp-always: handle_mm_fault, base: 22.95% head: 14.15%
These would mean that the system spends less time running handle_mm_fault()
when SPF is in the picture in these 2 cases, which is good. This should lead to
better results with the SPF series, and I can't find any values higher on the
head side.
>
> So I attached the perf results in the mail again; could you have a look again to check the difference between the base and head commits?
I took a close look at all the perf results you sent, but I can't identify any
major difference. However, the compiler optimization is getting rid of the
handle_pte_fault() symbol on the base kernel, which adds complexity when checking
the differences.
To get rid of that, I'm proposing that you apply the attached patch to the
SPF kernel. This patch allows turning the SPF handler on/off through
/proc/sys/vm/speculative_page_fault.
This should ease the testing by limiting reboots and avoiding kernel symbol
mismatches. Obviously there is still a small overhead due to the check, but it
should not be noticeable.
With this patch applied you can simply run
echo 1 > /proc/sys/vm/speculative_page_fault
to run a test with the speculative page fault handler activated. Or run
echo 0 > /proc/sys/vm/speculative_page_fault
to run a test without it.
I'm really sorry to be asking this again, but could you please run the test
page_fault3_base_THP-Always with and without SPF and capture the perf output?
I think we should focus on that test, which showed the biggest regression.
Thanks,
Laurent.
>
> Thanks,
> Haiyan, Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Tuesday, July 17, 2018 5:36 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 13/07/2018 05:56, Song, HaiyanX wrote:
>> Hi Laurent,
>
> Hi Haiyan,
>
> Thanks a lot for sharing this perf reports.
>
> I looked at them closely, and I have to admit that I was not able to find a
> major difference between the base and the head reports, except that
> handle_pte_fault() is no longer inlined in the head one.
>
> As expected, __handle_speculative_fault() is never traced since these tests are
> dealing with file mappings, which are not handled in the speculative way.
>
> When running these tests, did you see major differences in the test results
> between base and head ?
>
> From the number of cycles counted, the biggest difference is page_fault3 when
> run with the THP enabled:
> BASE HEAD Delta
> page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
> page_fault2_base_THP-Always 1124844374523 1076312228927 -4.31%
> page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
> page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
>
>
> The very weird thing is the difference of the delta cycles reported between
> thp never and thp always, because the speculative way is aborted when checking
> for the vma->ops field, which is the same in both cases, and the thp is never
> checked. So there is no code coverage difference, on the speculative path,
> between these 2 cases. This leads me to think that there are other interactions
> interfering with the measurement.
>
> Looking at the perf-profile_page_fault3_*_THP-Always, the major differences at
> the head of the perf report is the 92% testcase which is weirdly not reported
> on the head side :
> 92.02% 22.33% page_fault3_processes [.] testcase
> 92.02% testcase
>
> Then the base reported 37.67% for __do_page_fault() whereas the head reported
> 48.41%, but the only difference in this function, between base and head, is the
> call to handle_speculative_fault(). But this is a macro checking the fault
> flags and mm->users, and then calling __handle_speculative_fault() if needed.
> So this can't explain this difference, except if __handle_speculative_fault()
> is inlined in __do_page_fault().
> Is this the case on your build ?
>
> Haiyan, do you still have the output of the test to check those numbers too ?
>
> Cheers,
> Laurent
>
>> I attached the perf-profile.gz files for the cases page_fault2 and page_fault3. These files were captured while running the related test cases.
>> Please help to check these data to see if they help you find the bigger change. Thanks.
>>
>> The file name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result obtained from page_fault2
>> tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP always configuration.
>>
>> Best regards,
>> Haiyan Song
>>
>> ________________________________________
>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Thursday, July 12, 2018 1:05 AM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> Hi Haiyan,
>>
>> Did you get a chance to capture some performance cycles on your system ?
>> I still can't get these numbers on my hardware.
>>
>> Thanks,
>> Laurent.
>>
>> On 04/07/2018 09:51, Laurent Dufour wrote:
>>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>>
>>>> For the test results on the Intel 4s Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
>>>> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.
>>>
>>> Repeating the test only 3 times seems a bit too low to me.
>>>
>>> I'll focus on the biggest change for the moment, but I don't have access to such
>>> hardware.
>>>
>>> Is it possible to provide a diff between base and SPF of the performance cycles
>>> measured when running page_fault3 and page_fault2 where the 20% change is detected ?
>>>
>>> Please stay focused on the test case process to see exactly where the series is
>>> impacting.
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>>
>>>> And I did not find other high variation on test case result.
>>>>
>>>> a). Enable THP
>>>> testcase base stddev change head stddev metric
>>>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>>
>>>> Best regards,
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, July 02, 2018 4:59 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (Linux kernel performance),
>>>>> tested on the Intel 4s Skylake platform. This time only the cases which had been run and found regressions on the
>>>>> V9 patch series were tested.
>>>>>
>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>>> commit id:
>>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>>> Benchmark: will-it-scale
>>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>>
>>>>> Metrics:
>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>> THP: enable / disable
>>>>> nr_task:100%
>>>>>
>>>>> 1. Regressions:
>>>>>
>>>>> a). Enable THP
>>>>> testcase base change head metric
>>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>>
>>>>> b). Disable THP
>>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>>
>>>>> Notes: for the above values of test result, the higher is better.
>>>>
>>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>>>> get reproducible results. The results have huge variation, even on the vanilla
>>>> kernel, and I can't draw any conclusion about changes due to that.
>>>>
>>>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>>>> measure any changes between the vanilla and the SPF patched ones:
>>>>
>>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>>> brk1 243391.2 238527.5 -2.00%
>>>>
>>>> Tests were run 10 times, no high variation detected.
>>>>
>>>> Did you see high variation on your side ? How many times were the tests run to
>>>> compute the average values ?
>>>>
>>>> Thanks,
>>>> Laurent.
>>>>
>>>>
>>>>>
>>>>> 2. Improvement: not found improvement based on the selected test cases.
>>>>>
>>>>>
>>>>> Best regards
>>>>> Haiyan Song
>>>>> ________________________________________
>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>>> To: Song, HaiyanX
>>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>>
>>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>>> Hi Laurent,
>>>>>>
>>>>>> Yes, these tests are done on V9 patch.
>>>>>
>>>>> Do you plan to give this V11 a run ?
>>>>>
>>>>>>
>>>>>>
>>>>>> Best regards,
>>>>>> Haiyan Song
>>>>>>
>>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>>
>>>>>>>> Some regressions and improvements were found by LKP-tools (Linux kernel performance) on the V9 patch series
>>>>>>>> tested on the Intel 4s Skylake platform.
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch
>>>>>>> series" while responding to the v11 header series...
>>>>>>> Were these tests done on v9 or v11 ?
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Laurent.
>>>>>>>
>>>>>>>>
>>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>>> Commit id:
>>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>>> Benchmark suite: will-it-scale
>>>>>>>> Download link:
>>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>>> Metrics:
>>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>>> THP: enable / disable
>>>>>>>> nr_task: 100%
>>>>>>>>
>>>>>>>> 1. Regressions:
>>>>>>>> a) THP enabled:
>>>>>>>> testcase base change head metric
>>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>>
>>>>>>>> b) THP disabled:
>>>>>>>> testcase base change head metric
>>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>>
>>>>>>>> 2. Improvements:
>>>>>>>> a) THP enabled:
>>>>>>>> testcase base change head metric
>>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>>
>>>>>>>> b) THP disabled:
>>>>>>>> testcase base change head metric
>>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>>
>>>>>>>> Notes: for above values in column "change", the higher value means that the related testcase result
>>>>>>>> on head commit is better than that on base commit for this benchmark.
>>>>>>>>
>>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Haiyan Song
>>>>>>>>
>>>>>>>> ________________________________________
>>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>>
>>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>>> page fault without holding the mm semaphore [1].
>>>>>>>>
>>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>>> processes since the page fault handler will not wait for other threads' memory
>>>>>>>> layout changes to be done, assuming that the change is done in another part
>>>>>>>> of the process's memory space. This type of page fault is named speculative
>>>>>>>> page fault. If the speculative page fault fails because a concurrency issue is
>>>>>>>> detected or because the underlying PMD or PTE tables are not yet allocated,
>>>>>>>> its processing fails and a classic page fault is then tried.
>>>>>>>>
>>>>>>>> The speculative page fault (SPF) has to look for the VMA matching the fault
>>>>>>>> address without holding the mmap_sem; this is done by introducing a rwlock
>>>>>>>> which protects the access to the mm_rb tree. Previously this was done using
>>>>>>>> SRCU, but it was introducing a lot of scheduling to process the VMA
>>>>>>>> freeing operations, which was hitting the performance by 20% as reported by
>>>>>>>> Kemi Wang [2]. Using a rwlock to protect access to the mm_rb tree limits
>>>>>>>> the locking contention to these operations, which are expected to
>>>>>>>> be in O(log n) order. In addition, to ensure that the VMA is not freed behind
>>>>>>>> our back, a reference count is added and 2 services (get_vma() and
>>>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>>>> fetched from the RB tree using get_vma(), it must later be freed using
>>>>>>>> put_vma(). With this, I can no longer see the overhead I got with the
>>>>>>>> will-it-scale benchmark.
>>>>>>>>
>>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>>> handler to quickly check for parallel changes in progress and to abort the
>>>>>>>> speculative page fault in that case.
>>>>>>>>
>>>>>>>> Once the VMA has been found, the speculative page fault handler checks
>>>>>>>> the VMA's attributes to verify whether the page fault can be handled
>>>>>>>> this way or not. Thus, the VMA is protected through a sequence lock which
>>>>>>>> allows fast detection of concurrent VMA changes. If such a change is
>>>>>>>> detected, the speculative page fault is aborted and a *classic* page fault
>>>>>>>> is tried. VMA sequence lockings are added where the VMA attributes which are
>>>>>>>> checked during the page fault are modified.
>>>>>>>>
>>>>>>>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>>>>>>>> so once the page table is locked the VMA is known to be valid. Any other
>>>>>>>> change touching this PTE will need to take the page table lock, so no
>>>>>>>> parallel change is possible at this time.
>>>>>>>>
>>>>>>>> The locking of the PTE is done with interrupts disabled; this allows
>>>>>>>> checking the PMD to ensure that there is no ongoing collapsing
>>>>>>>> operation. Since khugepaged first sets the PMD to pmd_none and then
>>>>>>>> waits for the other CPUs to have caught the IPI interrupt, if the pmd is
>>>>>>>> valid at the time the PTE is locked, we have the guarantee that the
>>>>>>>> collapsing operation will have to wait on the PTE lock to move forward.
>>>>>>>> This allows the SPF handler to map the PTE safely. If the PMD value is
>>>>>>>> different from the one recorded at the beginning of the SPF operation, the
>>>>>>>> classic page fault handler will be called to handle the operation while
>>>>>>>> holding the mmap_sem. As the PTE lock is taken with interrupts disabled,
>>>>>>>> the lock is taken using spin_trylock() to avoid deadlock when handling a
>>>>>>>> page fault while a TLB invalidate is requested by another CPU holding the
>>>>>>>> PTE lock.
>>>>>>>>
>>>>>>>> In pseudo code, this could be seen as:
>>>>>>>> speculative_page_fault()
>>>>>>>> {
>>>>>>>>         vma = get_vma()
>>>>>>>>         check vma sequence count
>>>>>>>>         check vma's support
>>>>>>>>         disable interrupt
>>>>>>>>                 check pgd,p4d,...,pte
>>>>>>>>                 save pmd and pte in vmf
>>>>>>>>                 save vma sequence counter in vmf
>>>>>>>>         enable interrupt
>>>>>>>>         check vma sequence count
>>>>>>>>         handle_pte_fault(vma)
>>>>>>>>                 ..
>>>>>>>>                 page = alloc_page()
>>>>>>>>                 pte_map_lock()
>>>>>>>>                         disable interrupt
>>>>>>>>                                 abort if sequence counter has changed
>>>>>>>>                                 abort if pmd or pte has changed
>>>>>>>>                                 pte map and lock
>>>>>>>>                         enable interrupt
>>>>>>>>                 if abort
>>>>>>>>                         free page
>>>>>>>>                         abort
>>>>>>>>         ...
>>>>>>>> }
>>>>>>>>
>>>>>>>> arch_fault_handler()
>>>>>>>> {
>>>>>>>>         if (speculative_page_fault(&vma))
>>>>>>>>                 goto done
>>>>>>>> again:
>>>>>>>>         lock(mmap_sem)
>>>>>>>>         vma = find_vma();
>>>>>>>>         handle_pte_fault(vma);
>>>>>>>>         if retry
>>>>>>>>                 unlock(mmap_sem)
>>>>>>>>                 goto again;
>>>>>>>> done:
>>>>>>>>         handle fault error
>>>>>>>> }
>>>>>>>>
>>>>>>>> Support for THP is not done because, when checking the PMD, we can be
>>>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>>>> issue is that pmd_none() could be true either if the PMD is not already
>>>>>>>> populated or if the underlying PTEs are in the process of being collapsed.
>>>>>>>> So we cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>>
>>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>>> or 'spf'. It counts the number of successful page fault events handled
>>>>>>>> speculatively. When recording 'faults,spf' events, the 'faults' one counts
>>>>>>>> the total number of page fault events while 'spf' only counts
>>>>>>>> the part of the faults processed speculatively.
>>>>>>>>
>>>>>>>> There are some trace events introduced by this series. They allow
>>>>>>>> identifying why the page faults were not processed speculatively. This
>>>>>>>> doesn't take into account the faults generated by a monothreaded process,
>>>>>>>> which are directly processed while holding the mmap_sem. These trace events
>>>>>>>> are grouped in a system named 'pagefault'; they are:
>>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set.
>>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>>> our back.
>>>>>>>>
>>>>>>>> To record all the related events, the easiest is to run perf with the
>>>>>>>> following arguments :
>>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>>
>>>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>>
>>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>>> on x86, PowerPC and arm64.
>>>>>>>>
>>>>>>>> ---------------------
>>>>>>>> Real Workload results
>>>>>>>>
>>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>>> in-memory multithreaded database product" on a 176 cores SMT8 Power system
>>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>>> second. This run was done on the v6 series, but changes introduced in
>>>>>>>> this new version should not impact the performance boost seen.
>>>>>>>>
>>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>>> series:
>>>>>>>> vanilla spf
>>>>>>>> faults 89.418 101.364 +13%
>>>>>>>> spf n/a 97.989
>>>>>>>>
>>>>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>>>>> speculative way.
>>>>>>>>
>>>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>>>>> it a try on an Android device. He reported that the application launch time
>>>>>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>>>>>> 20%.
>>>>>>>>
>>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>>> MSM845 (8 cores) with 6GB of memory (lower is better):
>>>>>>>>
>>>>>>>> Application 4.9 4.9+spf delta
>>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>>> com.sina.weibo 907 906 0%
>>>>>>>> com.youku.phone 816 731 -11%
>>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>>> com.UCMobile 415 411 -1%
>>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Benchmarks results
>>>>>>>>
>>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>>> SPF is BASE + this series
>>>>>>>>
>>>>>>>> Kernbench:
>>>>>>>> ----------
>>>>>>>> Here are the results on a 16 CPUs X86 guest using kernbench on a 4.15
>>>>>>>> kernel (kernel is build 5 times):
>>>>>>>>
>>>>>>>> Average Half load -j 8
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>>
>>>>>>>> Average Optimal load -j 16
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>>
>>>>>>>>
>>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>>> 526743764 faults
>>>>>>>> 210 spf
>>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> Very few speculative page faults were recorded, as most of the processes
>>>>>>>> involved are monothreaded (it seems that on this architecture some threads
>>>>>>>> were created during the kernel build processing).
>>>>>>>>
>>>>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>>>>
>>>>>>>> Average Half load -j 40
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>>
>>>>>>>> Average Optimal load -j 80
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>>
>>>>>>>> During a run on the SPF, perf events were captured:
>>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>>> 116730856 faults
>>>>>>>> 0 spf
>>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> Most of the processes involved are monothreaded, so SPF is not activated,
>>>>>>>> but there is no impact on the performance.
>>>>>>>>
>>>>>>>> Ebizzy:
>>>>>>>> -------
>>>>>>>> The test counts the number of records per second it can manage; the
>>>>>>>> higher the better. I ran it like this: 'ebizzy -mTt <nrcpus>'. To get
>>>>>>>> consistent results I repeated the test 100 times and measured the average
>>>>>>>> result. The number is the records processed per second; higher is
>>>>>>>> better.
>>>>>>>>
>>>>>>>> BASE SPF delta
>>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>>
>>>>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>>> 1706379 faults
>>>>>>>> 1674599 spf
>>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>>> 1874773 faults
>>>>>>>> 1461153 spf
>>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> In ebizzy's case most of the page faults were handled speculatively,
>>>>>>>> leading to the ebizzy performance boost.
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>>> - Addressed all the review feedback from Punit Agrawal, Ganesh Mahendran
>>>>>>>> and Minchan Kim, hopefully.
>>>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>>> __do_page_fault().
>>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when the pte try lock fails,
>>>>>>>> instead of aborting the speculative page fault handling. Dropped the
>>>>>>>> now useless trace event pagefault:spf_pte_lock.
>>>>>>>> - No longer try to reuse the fetched VMA during the speculative page
>>>>>>>> fault handling when retrying is needed. This added a lot of complexity
>>>>>>>> and additional tests didn't show a significant performance improvement.
>>>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>>>
>>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>>
>>>>>>>>
>>>>>>>> Laurent Dufour (20):
>>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>>> mm: introduce INIT_VMA()
>>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>>> mm: protect mremap() against SPF handler
>>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>>> mm: introduce __vm_normal_page()
>>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>>> perf: add a speculative page fault sw event
>>>>>>>> perf tools: add support for the SPF perf event
>>>>>>>> mm: add speculative page fault vmstats
>>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>>
>>>>>>>> Mahendran Ganesh (2):
>>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> arm64/mm: add speculative page fault
>>>>>>>>
>>>>>>>> Peter Zijlstra (4):
>>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>>> mm: VMA sequence count
>>>>>>>> mm: provide speculative fault infrastructure
>>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>>
>>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>>> fs/exec.c | 2 +-
>>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>>> include/linux/swap.h | 10 +-
>>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>>> kernel/fork.c | 5 +-
>>>>>>>> mm/Kconfig | 22 ++
>>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>>> mm/hugetlb.c | 2 +
>>>>>>>> mm/init-mm.c | 3 +
>>>>>>>> mm/internal.h | 20 ++
>>>>>>>> mm/khugepaged.c | 5 +
>>>>>>>> mm/madvise.c | 6 +-
>>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>>> mm/migrate.c | 6 +-
>>>>>>>> mm/mlock.c | 13 +-
>>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>>> mm/mprotect.c | 4 +-
>>>>>>>> mm/mremap.c | 13 +
>>>>>>>> mm/nommu.c | 2 +-
>>>>>>>> mm/rmap.c | 5 +-
>>>>>>>> mm/swap.c | 6 +-
>>>>>>>> mm/swap_state.c | 8 +-
>>>>>>>> mm/vmstat.c | 5 +-
>>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>>
>>>>>>>> --
>>>>>>>> 2.7.4
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
[-- Attachment #2: 0001-mm-Add-a-speculative-page-fault-switch-in-sysctl.patch --]
[-- Type: text/x-patch, Size: 2326 bytes --]
From b6c7fa413f25b8574edf8c764b136715c40299c2 Mon Sep 17 00:00:00 2001
From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Date: Mon, 20 Aug 2018 17:51:26 +0200
Subject: [PATCH] mm: Add a speculative page fault switch in sysctl
This allows turning the use of the speculative page fault handler on and off.
By default it's turned on.
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
include/linux/mm.h | 3 +++
kernel/sysctl.c | 9 +++++++++
mm/memory.c | 3 +++
3 files changed, 15 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 31acf98a7d92..ac102efc4c86 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1422,6 +1422,7 @@ extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
unsigned int flags);
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern int sysctl_speculative_page_fault;
extern int __handle_speculative_fault(struct mm_struct *mm,
unsigned long address,
unsigned int flags);
@@ -1429,6 +1430,8 @@ static inline int handle_speculative_fault(struct mm_struct *mm,
unsigned long address,
unsigned int flags)
{
+ if (unlikely(!sysctl_speculative_page_fault))
+ return VM_FAULT_RETRY;
/*
* Try speculative page fault for multithreaded user space task only.
*/
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index f45ed9e696eb..0fb81edd22c1 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1243,6 +1243,15 @@ static struct ctl_table vm_table[] = {
.extra1 = &zero,
.extra2 = &two,
},
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+ {
+ .procname = "speculative_page_fault",
+ .data = &sysctl_speculative_page_fault,
+ .maxlen = sizeof(sysctl_speculative_page_fault),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
+#endif
{
.procname = "panic_on_oom",
.data = &sysctl_panic_on_oom,
diff --git a/mm/memory.c b/mm/memory.c
index 48e1cf0a54ef..c3db3bc4347b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -82,6 +82,9 @@
#define CREATE_TRACE_POINTS
#include <trace/events/pagefault.h>
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+int sysctl_speculative_page_fault = 1;
+#endif
#if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
#warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
--
2.7.4
* RE: [PATCH v11 00/26] Speculative page faults
2018-08-22 14:23 ` Laurent Dufour
@ 2018-09-18 6:42 ` Song, HaiyanX
-1 siblings, 0 replies; 106+ messages in thread
From: Song, HaiyanX @ 2018-09-18 6:42 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
Wang, Kemi, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, bsingharora,
paulmck, Tim Chen, linuxppc-dev, x86
[-- Attachment #1: Type: text/plain, Size: 46630 bytes --]
Hi Laurent,
I am sorry for replying so late.
The previous LKP tests for this case were run on the same Intel Skylake 4S platform, but it has recently been under maintenance.
So I switched to another test box to run the page_fault3 test case: an Intel Skylake 2S platform (nr_cpu: 104, memory: 64G).
I applied your patch on top of the SPF kernel (commit: a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12), then ran the 2 test cases below.
a) Turn on the SPF handler with the command below, then run the page_fault3-thp-always test.
echo 1 > /proc/sys/vm/speculative_page_fault
b) Turn off the SPF handler with the command below, then run the page_fault3-thp-always test.
echo 0 > /proc/sys/vm/speculative_page_fault
Each test was run 3 times, and the results and perf data were then captured.
Here is the average result for will-it-scale.per_thread_ops:
                                                     SPF_turn_off    SPF_turn_on
page_fault3-THP-Always.will-it-scale.per_thread_ops  31963           26285
Best regards,
Haiyan Song
________________________________________
From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
Sent: Wednesday, August 22, 2018 10:23 PM
To: Song, HaiyanX
Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults
On 03/08/2018 08:36, Song, HaiyanX wrote:
> Hi Laurent,
Hi Haiyan,
Sorry for the late answer, I was off a couple of days.
>
> Thanks for your analysis of the last perf results.
> You mentioned "the major differences at the head of the perf report is the 92% testcase which is weirdly not reported
> on the head side"; that is a bug in 0-day, and it caused the item not to be counted in perf.
>
> I triggered the page_fault2 and page_fault3 tests again with only the thread mode of will-it-scale on 0-day (on the same test box, every case tested 3 times).
> I checked that the perf reports no longer have the above-mentioned problem.
>
> I compared them and found that some items differ, such as the cases below:
> page_fault2-thp-always: handle_mm_fault, base: 45.22% head: 29.41%
> page_fault3-thp-always: handle_mm_fault, base: 22.95% head: 14.15%
This would mean that the system spends less time running handle_mm_fault()
when SPF is in the picture in these 2 cases, which is good. This should lead to
better results with the SPF series, and I can't find any higher values on the
head side.
>
> So I attached the perf results in this mail again; could you have a look again to check the difference between the base and head commits?
I took a close look at all the perf results you sent, but I can't identify any
major difference. However, the compiler optimization is getting rid of the
handle_pte_fault() symbol on the base kernel, which adds complexity to
checking the differences.
To get rid of that, I propose that you apply the attached patch to the
SPF kernel. This patch allows turning the SPF handler on/off through
/proc/sys/vm/speculative_page_fault.
This should ease the testing by limiting reboots and avoiding kernel symbol
mismatches. Obviously there is still a small overhead due to the check, but it
should not be noticeable.
With this patch applied you can simply run
echo 1 > /proc/sys/vm/speculative_page_fault
to run a test with the speculative page fault handler activated. Or run
echo 0 > /proc/sys/vm/speculative_page_fault
to run a test without it.
I'm really sorry to ask that again, but could you please run the test
page_fault3_base_THP-Always with and without SPF and capture the perf output?
I think we should focus on that test, which showed the biggest regression.
Thanks,
Laurent.
>
> Thanks,
> Haiyan, Song
>
> ________________________________________
> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
> Sent: Tuesday, July 17, 2018 5:36 PM
> To: Song, HaiyanX
> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 13/07/2018 05:56, Song, HaiyanX wrote:
>> Hi Laurent,
>
> Hi Haiyan,
>
> Thanks a lot for sharing this perf reports.
>
> I looked at them closely, and I have to admit that I was not able to find a
> major difference between the base and the head reports, except that
> handle_pte_fault() is no longer inlined in the head one.
>
> As expected, __handle_speculative_fault() is never traced since these tests are
> dealing with file mapping, not handled in the speculative way.
>
> When running these tests, did you see a major difference in the test results
> between base and head?
>
> From the number of cycles counted, the biggest difference is page_fault3 when
> run with THP enabled:
> BASE HEAD Delta
> page_fault2_base_thp_never 1142252426747 1065866197589 -6.69%
> page_fault2_base_THP-Alwasys 1124844374523 1076312228927 -4.31%
> page_fault3_base_thp_never 1099387298152 1134118402345 3.16%
> page_fault3_base_THP-Always 1059370178101 853985561949 -19.39%
>
>
> The very weird thing is the difference in the delta cycles reported between
> thp never and thp always, because the speculative path is aborted when checking
> the vma->ops field, which is the same in both cases, and the THP setting is
> never checked. So there is no code coverage difference, on the speculative
> path, between these 2 cases. This leads me to think that there are other
> interactions interfering with the measure.
>
> Looking at the perf-profile_page_fault3_*_THP-Always, the major differences at
> the head of the perf report is the 92% testcase which is weirdly not reported
> on the head side :
> 92.02% 22.33% page_fault3_processes [.] testcase
> 92.02% testcase
>
> Then the base reported 37.67% for __do_page_fault() where the head reported
> 48.41%, but the only difference in this function, between base and head, is the
> call to handle_speculative_fault(). But this is a macro checking the fault
> flags and mm->users, and then calling __handle_speculative_fault() if needed.
> So this can't explain this difference, unless __handle_speculative_fault()
> is inlined in __do_page_fault().
> Is this the case on your build?
>
> Haiyan, do you still have the output of the test to check those numbers too ?
>
> Cheers,
> Laurent
>
>> I attached the perf-profile.gz files for the page_fault2 and page_fault3 cases. These files were captured while running the related test cases.
>> Please check these data to see if they help you find the bigger change. Thanks.
>>
>> The file name perf-profile_page_fault2_head_THP-Always.gz means the perf-profile result was obtained from page_fault2
>> tested on the head commit (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP-always configuration.
>>
>> Best regards,
>> Haiyan Song
>>
>> ________________________________________
>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>> Sent: Thursday, July 12, 2018 1:05 AM
>> To: Song, HaiyanX
>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>
>> Hi Haiyan,
>>
>> Do you get a chance to capture some performance cycles on your system ?
>> I still can't get these numbers on my hardware.
>>
>> Thanks,
>> Laurent.
>>
>> On 04/07/2018 09:51, Laurent Dufour wrote:
>>> On 04/07/2018 05:23, Song, HaiyanX wrote:
>>>> Hi Laurent,
>>>>
>>>>
>>>> For the test results on the Intel 4S Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
>>>> I checked the test results; only page_fault3_thread/enable THP has a 6% stddev for the head commit, the other tests have lower stddev.
>>>
>>> Repeating the test only 3 times seems a bit too low to me.
>>>
>>> I'll focus on the higher change for the moment, but I don't have access to such
>>> a hardware.
>>>
>>> Is it possible to provide a diff between base and SPF of the performance cycles
>>> measured when running page_fault3 and page_fault2, when the 20% change is detected?
>>>
>>> Please stay focused on the test case's process to see exactly where the series
>>> has an impact.
>>>
>>> Thanks,
>>> Laurent.
>>>
>>>>
>>>> And I did not find any other high variation in the test case results.
>>>>
>>>> a). Enable THP
>>>> testcase base stddev change head stddev metric
>>>> page_fault3/enable THP 10519 ± 3% -20.5% 8368 ±6% will-it-scale.per_thread_ops
>>>> page_fault2/enable THP 8281 ± 2% -18.8% 6728 will-it-scale.per_thread_ops
>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>
>>>> b). Disable THP
>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>
>>>>
>>>> Best regards,
>>>> Haiyan Song
>>>> ________________________________________
>>>> From: Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>> Sent: Monday, July 02, 2018 4:59 PM
>>>> To: Song, HaiyanX
>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>
>>>> On 11/06/2018 09:49, Song, HaiyanX wrote:
>>>>> Hi Laurent,
>>>>>
>>>>> Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (linux kernel performance)
>>>>> tested on the Intel 4S Skylake platform. This time only the cases which had been run and had shown regressions on the
>>>>> V9 patch series were tested.
>>>>>
>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
>>>>> commit id:
>>>>> head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>>>>> base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
>>>>> Benchmark: will-it-scale
>>>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>>>>>
>>>>> Metrics:
>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>> THP: enable / disable
>>>>> nr_task:100%
>>>>>
>>>>> 1. Regressions:
>>>>>
>>>>> a). Enable THP
>>>>> testcase base change head metric
>>>>> page_fault3/enable THP 10519 -20.5% 8368 will-it-scale.per_thread_ops
>>>>> page_fault2/enable THP 8281 -18.8% 6728 will-it-scale.per_thread_ops
>>>>> brk1/enable THP 998475 -2.2% 976893 will-it-scale.per_process_ops
>>>>> context_switch1/enable THP 223910 -1.3% 220930 will-it-scale.per_process_ops
>>>>> context_switch1/enable THP 233722 -1.0% 231288 will-it-scale.per_thread_ops
>>>>>
>>>>> b). Disable THP
>>>>> page_fault3/disable THP 10856 -23.1% 8344 will-it-scale.per_thread_ops
>>>>> page_fault2/disable THP 8147 -18.8% 6613 will-it-scale.per_thread_ops
>>>>> brk1/disable THP 957 -7.9% 881 will-it-scale.per_thread_ops
>>>>> context_switch1/disable THP 237006 -2.2% 231907 will-it-scale.per_thread_ops
>>>>> brk1/disable THP 997317 -2.0% 977778 will-it-scale.per_process_ops
>>>>> page_fault3/disable THP 467454 -1.8% 459251 will-it-scale.per_process_ops
>>>>> context_switch1/disable THP 224431 -1.3% 221567 will-it-scale.per_process_ops
>>>>>
>>>>> Notes: for the above test result values, higher is better.
>>>>
>>>> I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't
>>>> get reproducible results. The results have huge variation, even on the vanilla
>>>> kernel, so I can't state anything about changes due to that.
>>>>
>>>> I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't
>>>> measure any change between the vanilla and the SPF-patched kernels:
>>>>
>>>> test THP enabled 4.17.0-rc4-mm1 spf delta
>>>> page_fault3_threads 2697.7 2683.5 -0.53%
>>>> page_fault2_threads 170660.6 169574.1 -0.64%
>>>> context_switch1_threads 6915269.2 6877507.3 -0.55%
>>>> context_switch1_processes 6478076.2 6529493.5 0.79%
>>>> brk1 243391.2 238527.5 -2.00%
>>>>
>>>> Tests were run 10 times, no high variation detected.
>>>>
>>>> Did you see high variation on your side? How many times were the tests run to
>>>> compute the average values?
>>>>
>>>> Thanks,
>>>> Laurent.
>>>>
>>>>
>>>>>
>>>>> 2. Improvements: no improvement found based on the selected test cases.
>>>>>
>>>>>
>>>>> Best regards
>>>>> Haiyan Song
>>>>> ________________________________________
>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>> Sent: Monday, May 28, 2018 4:54 PM
>>>>> To: Song, HaiyanX
>>>>> Cc: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>> Subject: Re: [PATCH v11 00/26] Speculative page faults
>>>>>
>>>>> On 28/05/2018 10:22, Haiyan Song wrote:
>>>>>> Hi Laurent,
>>>>>>
>>>>>> Yes, these tests are done on V9 patch.
>>>>>
>>>>> Do you plan to give this V11 a run ?
>>>>>
>>>>>>
>>>>>>
>>>>>> Best regards,
>>>>>> Haiyan Song
>>>>>>
>>>>>> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>>>>>>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>>>>>>
>>>>>>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series
>>>>>>>> tested on the Intel 4S Skylake platform.
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Thanks for reporting this benchmark results, but you mentioned the "V9 patch
>>>>>>> series" while responding to the v11 header series...
>>>>>>> Were these tests done on v9 or v11 ?
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Laurent.
>>>>>>>
>>>>>>>>
>>>>>>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>>>>>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>>>>>>> Commit id:
>>>>>>>> base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>>>>>> head commit: 0355322b3577eeab7669066df42c550a56801110
>>>>>>>> Benchmark suite: will-it-scale
>>>>>>>> Download link:
>>>>>>>> https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>>>>>>> Metrics:
>>>>>>>> will-it-scale.per_process_ops=processes/nr_cpu
>>>>>>>> will-it-scale.per_thread_ops=threads/nr_cpu
>>>>>>>> test box: lkp-skl-4sp1(nr_cpu=192,memory=768G)
>>>>>>>> THP: enable / disable
>>>>>>>> nr_task: 100%
>>>>>>>>
>>>>>>>> 1. Regressions:
>>>>>>>> a) THP enabled:
>>>>>>>> testcase base change head metric
>>>>>>>> page_fault3/ enable THP 10092 -17.5% 8323 will-it-scale.per_thread_ops
>>>>>>>> page_fault2/ enable THP 8300 -17.2% 6869 will-it-scale.per_thread_ops
>>>>>>>> brk1/ enable THP 957.67 -7.6% 885 will-it-scale.per_thread_ops
>>>>>>>> page_fault3/ enable THP 172821 -5.3% 163692 will-it-scale.per_process_ops
>>>>>>>> signal1/ enable THP 9125 -3.2% 8834 will-it-scale.per_process_ops
>>>>>>>>
>>>>>>>> b) THP disabled:
>>>>>>>> testcase base change head metric
>>>>>>>> page_fault3/ disable THP 10107 -19.1% 8180 will-it-scale.per_thread_ops
>>>>>>>> page_fault2/ disable THP 8432 -17.8% 6931 will-it-scale.per_thread_ops
>>>>>>>> context_switch1/ disable THP 215389 -6.8% 200776 will-it-scale.per_thread_ops
>>>>>>>> brk1/ disable THP 939.67 -6.6% 877.33 will-it-scale.per_thread_ops
>>>>>>>> page_fault3/ disable THP 173145 -4.7% 165064 will-it-scale.per_process_ops
>>>>>>>> signal1/ disable THP 9162 -3.9% 8802 will-it-scale.per_process_ops
>>>>>>>>
>>>>>>>> 2. Improvements:
>>>>>>>> a) THP enabled:
>>>>>>>> testcase base change head metric
>>>>>>>> malloc1/ enable THP 66.33 +469.8% 383.67 will-it-scale.per_thread_ops
>>>>>>>> writeseek3/ enable THP 2531 +4.5% 2646 will-it-scale.per_thread_ops
>>>>>>>> signal1/ enable THP 989.33 +2.8% 1016 will-it-scale.per_thread_ops
>>>>>>>>
>>>>>>>> b) THP disabled:
>>>>>>>> testcase base change head metric
>>>>>>>> malloc1/ disable THP 90.33 +417.3% 467.33 will-it-scale.per_thread_ops
>>>>>>>> read2/ disable THP 58934 +39.2% 82060 will-it-scale.per_thread_ops
>>>>>>>> page_fault1/ disable THP 8607 +36.4% 11736 will-it-scale.per_thread_ops
>>>>>>>> read1/ disable THP 314063 +12.7% 353934 will-it-scale.per_thread_ops
>>>>>>>> writeseek3/ disable THP 2452 +12.5% 2759 will-it-scale.per_thread_ops
>>>>>>>> signal1/ disable THP 971.33 +5.5% 1024 will-it-scale.per_thread_ops
>>>>>>>>
>>>>>>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result
>>>>>>>> on the head commit is better than that on the base commit for this benchmark.
>>>>>>>>
>>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Haiyan Song
>>>>>>>>
>>>>>>>> ________________________________________
>>>>>>>> From: owner-linux-mm@kvack.org [owner-linux-mm@kvack.org] on behalf of Laurent Dufour [ldufour@linux.vnet.ibm.com]
>>>>>>>> Sent: Thursday, May 17, 2018 7:06 PM
>>>>>>>> To: akpm@linux-foundation.org; mhocko@kernel.org; peterz@infradead.org; kirill@shutemov.name; ak@linux.intel.com; dave@stgolabs.net; jack@suse.cz; Matthew Wilcox; khandual@linux.vnet.ibm.com; aneesh.kumar@linux.vnet.ibm.com; benh@kernel.crashing.org; mpe@ellerman.id.au; paulus@samba.org; Thomas Gleixner; Ingo Molnar; hpa@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.work@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
>>>>>>>> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; haren@linux.vnet.ibm.com; npiggin@gmail.com; bsingharora@gmail.com; paulmck@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x86@kernel.org
>>>>>>>> Subject: [PATCH v11 00/26] Speculative page faults
>>>>>>>>
>>>>>>>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>>>>>>>> page fault without holding the mm semaphore [1].
>>>>>>>>
>>>>>>>> The idea is to try to handle user space page faults without holding the
>>>>>>>> mmap_sem. This should allow better concurrency for massively threaded
>>>>>>>> processes, since the page fault handler will not wait for other threads'
>>>>>>>> memory layout changes to be done, assuming that the change is done in
>>>>>>>> another part of the process's memory space. This type of page fault is
>>>>>>>> named speculative page fault. If the speculative page fault fails because
>>>>>>>> a concurrent change is detected or because the underlying PMD or PTE
>>>>>>>> tables are not yet allocated, its processing fails and a classic page
>>>>>>>> fault is then tried.
>>>>>>>>
>>>>>>>> The speculative page fault (SPF) handler has to look for the VMA matching
>>>>>>>> the fault address without holding the mmap_sem. This is done by
>>>>>>>> introducing a rwlock which protects access to the mm_rb tree. Previously
>>>>>>>> this was done using SRCU, but that introduced a lot of scheduling to
>>>>>>>> process the VMA freeing operations, which hit performance by 20% as
>>>>>>>> reported by Kemi Wang [2]. Using a rwlock to protect access to the mm_rb
>>>>>>>> tree limits the locking contention to these operations, which are
>>>>>>>> expected to be O(log n). In addition, to ensure that the VMA is not freed
>>>>>>>> behind our back, a reference count is added and 2 services (get_vma() and
>>>>>>>> put_vma()) are introduced to handle the reference count. Once a VMA is
>>>>>>>> fetched from the RB tree using get_vma(), it must later be released using
>>>>>>>> put_vma(). With this, I can no longer see the overhead I previously
>>>>>>>> observed with the will-it-scale benchmark.
>>>>>>>>
>>>>>>>> The VMA's attributes checked during the speculative page fault processing
>>>>>>>> have to be protected against parallel changes. This is done by using a per
>>>>>>>> VMA sequence lock. This sequence lock allows the speculative page fault
>>>>>>>> handler to fast check for parallel changes in progress and to abort the
>>>>>>>> speculative page fault in that case.
>>>>>>>>
>>>>>>>> Once the VMA has been found, the speculative page fault handler checks
>>>>>>>> the VMA's attributes to verify whether the page fault can be handled
>>>>>>>> this way. The VMA is protected through the sequence lock, which allows
>>>>>>>> fast detection of concurrent VMA changes. If such a change is detected,
>>>>>>>> the speculative page fault is aborted and a *classic* page fault is
>>>>>>>> tried. VMA sequence locking is added wherever VMA attributes which are
>>>>>>>> checked during the page fault are modified.
>>>>>>>>
>>>>>>>> When the PTE is fetched, the VMA is checked to see if it has changed.
>>>>>>>> Once the page table is locked, the VMA is valid: any other change
>>>>>>>> touching this PTE would need to take the page table lock, so no
>>>>>>>> parallel change is possible at this time.
>>>>>>>>
>>>>>>>> The PTE is locked with interrupts disabled; this allows checking the PMD
>>>>>>>> to ensure that there is no collapsing operation in progress. Since
>>>>>>>> khugepaged first sets the PMD to pmd_none and then waits for the other
>>>>>>>> CPUs to have caught the IPI interrupt, if the PMD is valid at the time
>>>>>>>> the PTE is locked, we have the guarantee that the collapsing operation
>>>>>>>> will have to wait on the PTE lock to move forward. This allows the SPF
>>>>>>>> handler to map the PTE safely. If the PMD value is different from the one
>>>>>>>> recorded at the beginning of the SPF operation, the classic page fault
>>>>>>>> handler is called to handle the fault while holding the mmap_sem. Since
>>>>>>>> the PTE lock is taken with interrupts disabled, it is acquired using
>>>>>>>> spin_trylock() to avoid a deadlock when handling a page fault while a TLB
>>>>>>>> invalidate is requested by another CPU holding the PTE lock.
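The trylock-plus-revalidation step can be sketched in userspace C. This is an assumption-laden model (a pthread mutex standing in for the PTE spinlock, plain integers standing in for the sequence counter and PMD snapshots), not the series' actual pte_map_lock():

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of why pte_map_lock() uses a trylock: with interrupts disabled,
 * spinning on the PTE lock could prevent servicing the TLB-flush IPI
 * sent by the current lock holder, so the fault path backs off (or
 * retries) instead of spinning blindly. Names are illustrative. */

static pthread_mutex_t pte_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 1 if the PTE lock was taken and the VMA sequence counter and
 * PMD still match the values snapshotted at the start of the
 * speculative fault; returns 0 if the caller must abort. */
static int pte_map_lock(unsigned long seq_snap, unsigned long seq_now,
			unsigned long pmd_snap, unsigned long pmd_now)
{
	if (pthread_mutex_trylock(&pte_lock) != 0)
		return 0;		/* avoid deadlock vs. the TLB IPI */
	if (seq_snap != seq_now || pmd_snap != pmd_now) {
		pthread_mutex_unlock(&pte_lock);
		return 0;		/* VMA or page table changed */
	}
	return 1;			/* caller now holds the PTE lock */
}
```

On success the caller proceeds exactly as a classic fault would; on failure it falls back to the mmap_sem-protected path.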
>>>>>>>>
>>>>>>>> In pseudo code, this could be seen as:
>>>>>>>> speculative_page_fault()
>>>>>>>> {
>>>>>>>> vma = get_vma()
>>>>>>>> check vma sequence count
>>>>>>>> check vma's support
>>>>>>>> disable interrupt
>>>>>>>> check pgd,p4d,...,pte
>>>>>>>> save pmd and pte in vmf
>>>>>>>> save vma sequence counter in vmf
>>>>>>>> enable interrupt
>>>>>>>> check vma sequence count
>>>>>>>> handle_pte_fault(vma)
>>>>>>>> ..
>>>>>>>> page = alloc_page()
>>>>>>>> pte_map_lock()
>>>>>>>> disable interrupt
>>>>>>>> abort if sequence counter has changed
>>>>>>>> abort if pmd or pte has changed
>>>>>>>> pte map and lock
>>>>>>>> enable interrupt
>>>>>>>> if abort
>>>>>>>> free page
>>>>>>>> abort
>>>>>>>> ...
>>>>>>>> }
>>>>>>>>
>>>>>>>> arch_fault_handler()
>>>>>>>> {
>>>>>>>> if (speculative_page_fault(&vma))
>>>>>>>> goto done
>>>>>>>> again:
>>>>>>>> lock(mmap_sem)
>>>>>>>> vma = find_vma();
>>>>>>>> handle_pte_fault(vma);
>>>>>>>> if retry
>>>>>>>> unlock(mmap_sem)
>>>>>>>> goto again;
>>>>>>>> done:
>>>>>>>> handle fault error
>>>>>>>> }
>>>>>>>>
>>>>>>>> Support for THP is not done because when checking the PMD, we can be
>>>>>>>> confused by an in-progress collapsing operation done by khugepaged. The
>>>>>>>> issue is that pmd_none() could be true either if the PMD is not yet
>>>>>>>> populated or if the underlying PTEs are about to be collapsed. So we
>>>>>>>> cannot safely allocate a PMD if pmd_none() is true.
>>>>>>>>
>>>>>>>> This series adds a new software performance event named 'speculative-faults'
>>>>>>>> or 'spf'. It counts the number of successful page fault events handled
>>>>>>>> speculatively. When recording 'faults,spf' events, the 'faults' one counts
>>>>>>>> the total number of page fault events while 'spf' only counts the part of
>>>>>>>> the faults processed speculatively.
>>>>>>>>
>>>>>>>> There are some trace events introduced by this series. They allow
>>>>>>>> identifying why the page faults were not processed speculatively. This
>>>>>>>> doesn't take into account the faults generated by a monothreaded process,
>>>>>>>> which are directly processed while holding the mmap_sem. These trace
>>>>>>>> events are grouped in a system named 'pagefault'; they are:
>>>>>>>> - pagefault:spf_vma_changed : the VMA has been changed behind our back
>>>>>>>> - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
>>>>>>>> - pagefault:spf_vma_notsup : the VMA's type is not supported
>>>>>>>> - pagefault:spf_vma_access : the VMA's access rights are not respected
>>>>>>>> - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
>>>>>>>> our back
>>>>>>>>
>>>>>>>> To record all the related events, the easiest way is to run perf with the
>>>>>>>> following arguments:
>>>>>>>> $ perf stat -e 'faults,spf,pagefault:*' <command>
>>>>>>>>
>>>>>>>> There is also a dedicated vmstat counter showing the number of successful
>>>>>>>> page faults handled speculatively. It can be seen this way:
>>>>>>>> $ grep speculative_pgfault /proc/vmstat
>>>>>>>>
>>>>>>>> This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is functional
>>>>>>>> on x86, PowerPC and arm64.
>>>>>>>>
>>>>>>>> ---------------------
>>>>>>>> Real Workload results
>>>>>>>>
>>>>>>>> As mentioned in a previous email, we did unofficial runs using a "popular
>>>>>>>> in-memory multithreaded database product" on a 176 cores SMT8 Power system
>>>>>>>> which showed a 30% improvement in the number of transactions processed per
>>>>>>>> second. This run has been done on the v6 series, but the changes introduced
>>>>>>>> in this new version should not impact the performance boost seen.
>>>>>>>>
>>>>>>>> Here are the perf data captured during 2 of these runs on top of the v8
>>>>>>>> series:
>>>>>>>> vanilla spf
>>>>>>>> faults 89.418 101.364 +13%
>>>>>>>> spf n/a 97.989
>>>>>>>>
>>>>>>>> With the SPF kernel, most of the page faults were processed in a
>>>>>>>> speculative way.
>>>>>>>>
>>>>>>>> Ganesh Mahendran backported the series on top of a 4.9 kernel and gave
>>>>>>>> it a try on an Android device. He reported that the application launch time
>>>>>>>> was improved on average by 6%, and for large applications (~100 threads) by
>>>>>>>> 20%.
>>>>>>>>
>>>>>>>> Here are the launch times Ganesh measured on Android 8.0 on top of a Qcom
>>>>>>>> MSM845 (8 cores) with 6GB (lower is better):
>>>>>>>>
>>>>>>>> Application 4.9 4.9+spf delta
>>>>>>>> com.tencent.mm 416 389 -7%
>>>>>>>> com.eg.android.AlipayGphone 1135 986 -13%
>>>>>>>> com.tencent.mtt 455 454 0%
>>>>>>>> com.qqgame.hlddz 1497 1409 -6%
>>>>>>>> com.autonavi.minimap 711 701 -1%
>>>>>>>> com.tencent.tmgp.sgame 788 748 -5%
>>>>>>>> com.immomo.momo 501 487 -3%
>>>>>>>> com.tencent.peng 2145 2112 -2%
>>>>>>>> com.smile.gifmaker 491 461 -6%
>>>>>>>> com.baidu.BaiduMap 479 366 -23%
>>>>>>>> com.taobao.taobao 1341 1198 -11%
>>>>>>>> com.baidu.searchbox 333 314 -6%
>>>>>>>> com.tencent.mobileqq 394 384 -3%
>>>>>>>> com.sina.weibo 907 906 0%
>>>>>>>> com.youku.phone 816 731 -11%
>>>>>>>> com.happyelements.AndroidAnimal.qq 763 717 -6%
>>>>>>>> com.UCMobile 415 411 -1%
>>>>>>>> com.tencent.tmgp.ak 1464 1431 -2%
>>>>>>>> com.tencent.qqmusic 336 329 -2%
>>>>>>>> com.sankuai.meituan 1661 1302 -22%
>>>>>>>> com.netease.cloudmusic 1193 1200 1%
>>>>>>>> air.tv.douyu.android 4257 4152 -2%
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Benchmarks results
>>>>>>>>
>>>>>>>> Base kernel is v4.17.0-rc4-mm1
>>>>>>>> SPF is BASE + this series
>>>>>>>>
>>>>>>>> Kernbench:
>>>>>>>> ----------
>>>>>>>> Here are the results on a 16 CPUs x86 guest using kernbench on a 4.15
>>>>>>>> kernel (the kernel is built 5 times):
>>>>>>>>
>>>>>>>> Average Half load -j 8
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 1448.65 (5.72312) 1455.84 (4.84951) 0.50%
>>>>>>>> User Time 10135.4 (30.3699) 10148.8 (31.1252) 0.13%
>>>>>>>> System Time 900.47 (2.81131) 923.28 (7.52779) 2.53%
>>>>>>>> Percent CPU 761.4 (1.14018) 760.2 (0.447214) -0.16%
>>>>>>>> Context Switches 85380 (3419.52) 84748 (1904.44) -0.74%
>>>>>>>> Sleeps 105064 (1240.96) 105074 (337.612) 0.01%
>>>>>>>>
>>>>>>>> Average Optimal load -j 16
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 920.528 (10.1212) 927.404 (8.91789) 0.75%
>>>>>>>> User Time 11064.8 (981.142) 11085 (990.897) 0.18%
>>>>>>>> System Time 979.904 (84.0615) 1001.14 (82.5523) 2.17%
>>>>>>>> Percent CPU 1089.5 (345.894) 1086.1 (343.545) -0.31%
>>>>>>>> Context Switches 159488 (78156.4) 158223 (77472.1) -0.79%
>>>>>>>> Sleeps 110566 (5877.49) 110388 (5617.75) -0.16%
>>>>>>>>
>>>>>>>>
>>>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>>> 526743764 faults
>>>>>>>> 210 spf
>>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 2278 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> Very few speculative page faults were recorded as most of the processes
>>>>>>>> involved are monothreaded (it seems that on this architecture some threads
>>>>>>>> were created during the kernel build processing).
>>>>>>>>
>>>>>>>> Here are the kernbench results on an 80 CPUs Power8 system:
>>>>>>>>
>>>>>>>> Average Half load -j 40
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 117.152 (0.774642) 117.166 (0.476057) 0.01%
>>>>>>>> User Time 4478.52 (24.7688) 4479.76 (9.08555) 0.03%
>>>>>>>> System Time 131.104 (0.720056) 134.04 (0.708414) 2.24%
>>>>>>>> Percent CPU 3934 (19.7104) 3937.2 (19.0184) 0.08%
>>>>>>>> Context Switches 92125.4 (576.787) 92581.6 (198.622) 0.50%
>>>>>>>> Sleeps 317923 (652.499) 318469 (1255.59) 0.17%
>>>>>>>>
>>>>>>>> Average Optimal load -j 80
>>>>>>>> Run (std deviation)
>>>>>>>> BASE SPF
>>>>>>>> Elapsed Time 107.73 (0.632416) 107.31 (0.584936) -0.39%
>>>>>>>> User Time 5869.86 (1466.72) 5871.71 (1467.27) 0.03%
>>>>>>>> System Time 153.728 (23.8573) 157.153 (24.3704) 2.23%
>>>>>>>> Percent CPU 5418.6 (1565.17) 5436.7 (1580.91) 0.33%
>>>>>>>> Context Switches 223861 (138865) 225032 (139632) 0.52%
>>>>>>>> Sleeps 330529 (13495.1) 332001 (14746.2) 0.45%
>>>>>>>>
>>>>>>>> During a run on the SPF kernel, perf events were captured:
>>>>>>>> Performance counter stats for '../kernbench -M':
>>>>>>>> 116730856 faults
>>>>>>>> 0 spf
>>>>>>>> 3 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 476 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> Most of the processes involved are monothreaded so SPF is not activated but
>>>>>>>> there is no impact on the performance.
>>>>>>>>
>>>>>>>> Ebizzy:
>>>>>>>> -------
>>>>>>>> The test counts the number of records per second it can manage; higher is
>>>>>>>> better. I run it like this: 'ebizzy -mTt <nrcpus>'. To get consistent
>>>>>>>> results I repeated the test 100 times and measured the average result.
>>>>>>>>
>>>>>>>> BASE SPF delta
>>>>>>>> 16 CPUs x86 VM 742.57 1490.24 100.69%
>>>>>>>> 80 CPUs P8 node 13105.4 24174.23 84.46%
>>>>>>>>
>>>>>>>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>>>>>>> Performance counter stats for './ebizzy -mTt 16':
>>>>>>>> 1706379 faults
>>>>>>>> 1674599 spf
>>>>>>>> 30588 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 363 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> And the ones captured during a run on a 80 CPUs Power node:
>>>>>>>> Performance counter stats for './ebizzy -mTt 80':
>>>>>>>> 1874773 faults
>>>>>>>> 1461153 spf
>>>>>>>> 413293 pagefault:spf_vma_changed
>>>>>>>> 0 pagefault:spf_vma_noanon
>>>>>>>> 200 pagefault:spf_vma_notsup
>>>>>>>> 0 pagefault:spf_vma_access
>>>>>>>> 0 pagefault:spf_pmd_changed
>>>>>>>>
>>>>>>>> In ebizzy's case most of the page faults were handled in a speculative way,
>>>>>>>> leading to the ebizzy performance boost.
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Changes since v10 (https://lkml.org/lkml/2018/4/17/572):
>>>>>>>> - Hopefully accounted for all review feedback from Punit Agrawal,
>>>>>>>> Ganesh Mahendran and Minchan Kim.
>>>>>>>> - Remove unneeded check on CONFIG_SPECULATIVE_PAGE_FAULT in
>>>>>>>> __do_page_fault().
>>>>>>>> - Loop in pte_spinlock() and pte_map_lock() when pte try lock fails
>>>>>>>> instead of aborting the speculative page fault handling. Dropping the
>>>>>>>> now useless trace event pagefault:spf_pte_lock.
>>>>>>>> - No more try to reuse the fetched VMA during the speculative page fault
>>>>>>>> handling when retrying is needed. This adds a lot of complexity and
>>>>>>>> additional tests done didn't show a significant performance improvement.
>>>>>>>> - Convert IS_ENABLED(CONFIG_NUMA) back to #ifdef due to build error.
>>>>>>>>
>>>>>>>> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
>>>>>>>> [2] https://patchwork.kernel.org/patch/9999687/
>>>>>>>>
>>>>>>>>
>>>>>>>> Laurent Dufour (20):
>>>>>>>> mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
>>>>>>>> x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
>>>>>>>> mm: make pte_unmap_same compatible with SPF
>>>>>>>> mm: introduce INIT_VMA()
>>>>>>>> mm: protect VMA modifications using VMA sequence count
>>>>>>>> mm: protect mremap() against SPF handler
>>>>>>>> mm: protect SPF handler against anon_vma changes
>>>>>>>> mm: cache some VMA fields in the vm_fault structure
>>>>>>>> mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
>>>>>>>> mm: introduce __lru_cache_add_active_or_unevictable
>>>>>>>> mm: introduce __vm_normal_page()
>>>>>>>> mm: introduce __page_add_new_anon_rmap()
>>>>>>>> mm: protect mm_rb tree with a rwlock
>>>>>>>> mm: adding speculative page fault failure trace events
>>>>>>>> perf: add a speculative page fault sw event
>>>>>>>> perf tools: add support for the SPF perf event
>>>>>>>> mm: add speculative page fault vmstats
>>>>>>>> powerpc/mm: add speculative page fault
>>>>>>>>
>>>>>>>> Mahendran Ganesh (2):
>>>>>>>> arm64/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>>>>>>>> arm64/mm: add speculative page fault
>>>>>>>>
>>>>>>>> Peter Zijlstra (4):
>>>>>>>> mm: prepare for FAULT_FLAG_SPECULATIVE
>>>>>>>> mm: VMA sequence count
>>>>>>>> mm: provide speculative fault infrastructure
>>>>>>>> x86/mm: add speculative pagefault handling
>>>>>>>>
>>>>>>>> arch/arm64/Kconfig | 1 +
>>>>>>>> arch/arm64/mm/fault.c | 12 +
>>>>>>>> arch/powerpc/Kconfig | 1 +
>>>>>>>> arch/powerpc/mm/fault.c | 16 +
>>>>>>>> arch/x86/Kconfig | 1 +
>>>>>>>> arch/x86/mm/fault.c | 27 +-
>>>>>>>> fs/exec.c | 2 +-
>>>>>>>> fs/proc/task_mmu.c | 5 +-
>>>>>>>> fs/userfaultfd.c | 17 +-
>>>>>>>> include/linux/hugetlb_inline.h | 2 +-
>>>>>>>> include/linux/migrate.h | 4 +-
>>>>>>>> include/linux/mm.h | 136 +++++++-
>>>>>>>> include/linux/mm_types.h | 7 +
>>>>>>>> include/linux/pagemap.h | 4 +-
>>>>>>>> include/linux/rmap.h | 12 +-
>>>>>>>> include/linux/swap.h | 10 +-
>>>>>>>> include/linux/vm_event_item.h | 3 +
>>>>>>>> include/trace/events/pagefault.h | 80 +++++
>>>>>>>> include/uapi/linux/perf_event.h | 1 +
>>>>>>>> kernel/fork.c | 5 +-
>>>>>>>> mm/Kconfig | 22 ++
>>>>>>>> mm/huge_memory.c | 6 +-
>>>>>>>> mm/hugetlb.c | 2 +
>>>>>>>> mm/init-mm.c | 3 +
>>>>>>>> mm/internal.h | 20 ++
>>>>>>>> mm/khugepaged.c | 5 +
>>>>>>>> mm/madvise.c | 6 +-
>>>>>>>> mm/memory.c | 612 +++++++++++++++++++++++++++++-----
>>>>>>>> mm/mempolicy.c | 51 ++-
>>>>>>>> mm/migrate.c | 6 +-
>>>>>>>> mm/mlock.c | 13 +-
>>>>>>>> mm/mmap.c | 229 ++++++++++---
>>>>>>>> mm/mprotect.c | 4 +-
>>>>>>>> mm/mremap.c | 13 +
>>>>>>>> mm/nommu.c | 2 +-
>>>>>>>> mm/rmap.c | 5 +-
>>>>>>>> mm/swap.c | 6 +-
>>>>>>>> mm/swap_state.c | 8 +-
>>>>>>>> mm/vmstat.c | 5 +-
>>>>>>>> tools/include/uapi/linux/perf_event.h | 1 +
>>>>>>>> tools/perf/util/evsel.c | 1 +
>>>>>>>> tools/perf/util/parse-events.c | 4 +
>>>>>>>> tools/perf/util/parse-events.l | 1 +
>>>>>>>> tools/perf/util/python.c | 1 +
>>>>>>>> 44 files changed, 1161 insertions(+), 211 deletions(-)
>>>>>>>> create mode 100644 include/trace/events/pagefault.h
>>>>>>>>
>>>>>>>> --
>>>>>>>> 2.7.4
>>>>>>>>
>>>>>>>>
[-- Attachment #2: perf-profile_page_fault3-head-thp-always-SPF-off.gz --]
[-- Type: application/gzip, Size: 11278 bytes --]
[-- Attachment #3: perf-profile_page_fault3-head-thp-always-SPF-on.gz --]
[-- Type: application/gzip, Size: 11424 bytes --]
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
2018-05-17 11:06 [PATCH v11 00/26] Speculative page faults Laurent Dufour
@ 2018-11-05 10:42 ` Balbir Singh
2018-05-17 11:06 ` [PATCH v11 02/26] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
` (26 subsequent siblings)
27 siblings, 0 replies; 106+ messages in thread
From: Balbir Singh @ 2018-11-05 10:42 UTC (permalink / raw)
To: Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, paulmck,
Tim Chen, linuxppc-dev, x86
On Thu, May 17, 2018 at 01:06:07PM +0200, Laurent Dufour wrote:
> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
> page fault without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
Question -- I presume mmap_sem (rw_semaphore implementation tested against)
was qrwlock?
Balbir Singh.
^ permalink raw reply [flat|nested] 106+ messages in thread
* Re: [PATCH v11 00/26] Speculative page faults
2018-11-05 10:42 ` Balbir Singh
(?)
@ 2018-11-05 16:08 ` Laurent Dufour
-1 siblings, 0 replies; 106+ messages in thread
From: Laurent Dufour @ 2018-11-05 16:08 UTC (permalink / raw)
To: Balbir Singh, Laurent Dufour
Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
khandual, aneesh.kumar, benh, mpe, paulus, Thomas Gleixner,
Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
sergey.senozhatsky.work, Andrea Arcangeli, Alexei Starovoitov,
kemi.wang, Daniel Jordan, David Rientjes, Jerome Glisse,
Ganesh Mahendran, Minchan Kim, Punit Agrawal, vinayak menon,
Yang Shi, linux-kernel, linux-mm, haren, npiggin, paulmck,
Tim Chen, linuxppc-dev, x86
Le 05/11/2018 à 11:42, Balbir Singh a écrit :
> On Thu, May 17, 2018 at 01:06:07PM +0200, Laurent Dufour wrote:
>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle
>> page fault without holding the mm semaphore [1].
>>
>> The idea is to try to handle user space page faults without holding the
>> mmap_sem. This should allow better concurrency for massively threaded
>
> Question -- I presume mmap_sem (rw_semaphore implementation tested against)
> was qrwlock?
I don't think so, this series doesn't change the mmap_sem definition so
it still belongs to the 'struct rw_semaphore'.
Laurent.
^ permalink raw reply [flat|nested] 106+ messages in thread