* [PATCH v10 00/25] Speculative page faults
@ 2018-04-17 14:33 Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
                   ` (26 more replies)
  0 siblings, 27 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

This is a port to kernel 4.16 of the work done by Peter Zijlstra to
handle page faults without holding the mm semaphore [1].

The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes since the page fault handler will not wait for other threads'
memory layout changes to complete, assuming that those changes are done in
another part of the process's memory space. This type of page fault is
named speculative page fault. If the speculative page fault fails because a
concurrent change is detected or because the underlying PMD or PTE tables
are not yet allocated, its processing is aborted and a classic page fault
is tried instead.

The speculative page fault (SPF) has to look up the VMA matching the fault
address without holding the mmap_sem. This is done by introducing a rwlock
which protects access to the mm_rb tree. Previously this was done using
SRCU, but that introduced a lot of scheduling to process the VMA freeing
operations, which hurt performance by 20% as reported by Kemi Wang [2].
Using a rwlock to protect access to the mm_rb tree limits the locking
contention to these operations, which are expected to be O(log n).
In addition, to ensure that the VMA is not freed behind our back, a
reference count is added and two services, get_vma() and put_vma(), are
introduced to handle it. When a VMA is fetched from the RB tree using
get_vma(), it must later be released using put_vma(). Furthermore, to allow
the VMA to be reused by the classic page fault handler, a service named
can_reuse_spf_vma() is introduced. This service is expected to be called
with the mmap_sem held. It checks that the VMA still matches the specified
address and releases its reference; since the mmap_sem is held, the VMA
cannot be freed behind our back. In general, the VMA's reference count may
be decremented while holding the mmap_sem, but it should not be incremented
there since holding the mmap_sem already guarantees that the VMA is stable.
With these changes, the overhead previously seen with the will-it-scale
benchmark is no longer visible.
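
To make this life cycle more concrete, here is a minimal sketch of such a
pinned VMA lookup. Apart from get_vma() and put_vma(), the names used here
(mm_rb_lock, vm_ref_count, find_vma_rb(), __free_vma()) are illustrative
assumptions, not necessarily the ones used in the series:

    /* Sketch only: refcounted, mmap_sem-free VMA lookup. */
    static struct vm_area_struct *get_vma(struct mm_struct *mm,
                                          unsigned long addr)
    {
            struct vm_area_struct *vma;

            read_lock(&mm->mm_rb_lock);             /* rwlock protecting mm_rb */
            vma = find_vma_rb(mm, addr);            /* plain RB tree walk */
            if (vma)
                    atomic_inc(&vma->vm_ref_count); /* pin against freeing */
            read_unlock(&mm->mm_rb_lock);

            return vma;
    }

    static void put_vma(struct vm_area_struct *vma)
    {
            if (atomic_dec_and_test(&vma->vm_ref_count))
                    __free_vma(vma);                /* last reference frees it */
    }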

The VMA's attributes checked during the speculative page fault processing
have to be protected against parallel changes. This is done by using a per
VMA sequence lock. This sequence lock allows the speculative page fault
handler to quickly check for parallel changes in progress and to abort the
speculative page fault in that case.

Once the VMA is found, the speculative page fault handler checks the VMA's
attributes to verify whether the page fault can be handled this way. The
VMA is protected through a sequence lock which allows fast detection of
concurrent VMA changes. If such a change is detected, the speculative page
fault is aborted and a *classic* page fault is tried instead. VMA sequence
locking is added around the modifications of the VMA attributes which are
checked during the page fault.
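
As a rough sketch, assuming the vm_sequence seqcount and the
vm_write_begin()/vm_write_end() wrappers introduced later in this series
(patch 8), and simplifying the surrounding logic, the reader side looks
like:

    unsigned int seq;

    seq = raw_read_seqcount(&vma->vm_sequence);
    if (seq & 1)                            /* a writer is in progress */
            return VM_FAULT_RETRY;

    /* ... snapshot vma->vm_flags, vma->vm_page_prot, ... into the vmf ... */

    if (read_seqcount_retry(&vma->vm_sequence, seq))
            return VM_FAULT_RETRY;          /* the VMA changed, fall back */

while writers simply wrap the VMA updates in vm_write_begin(vma) /
vm_write_end(vma).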

When the PTE is fetched, the VMA is checked again to see if it has been
changed. Thus, once the page table is locked, the VMA is known to be valid,
and any other change touching this PTE will have to take the page table
lock, so no parallel change is possible at that time.

The PTE is locked with interrupts disabled, which allows checking the PMD
to ensure that there is no collapsing operation in progress. Since
khugepaged first sets the PMD to pmd_none and then waits for the other CPUs
to acknowledge the IPI, if the PMD is still valid at the time the PTE is
locked, we have the guarantee that the collapsing operation will have to
wait on the PTE lock to move forward. This allows the SPF handler to map
the PTE safely. If the PMD value is different from the one recorded at the
beginning of the SPF operation, the classic page fault handler is called to
handle the fault while holding the mmap_sem. As the PTE lock is taken with
interrupts disabled, the locking is done using spin_trylock() to avoid a
deadlock when handling a page fault while a TLB invalidation is requested
by another CPU holding the PTE lock (see the sketch after the pseudo code
below).

In pseudo code, this could be seen as:
    speculative_page_fault()
    {
	    vma = get_vma()
	    check vma sequence count
	    check vma's support
	    disable interrupt
		  check pgd,p4d,...,pte
		  save pmd and pte in vmf
		  save vma sequence counter in vmf
	    enable interrupt
	    check vma sequence count
	    handle_pte_fault(vma)
		    ..
		    page = alloc_page()
		    pte_map_lock()
			    disable interrupt
				    abort if sequence counter has changed
				    abort if pmd or pte has changed
				    pte map and lock
			    enable interrupt
		    if abort
		       free page
		       abort
		    ...
    }
    
    arch_fault_handler()
    {
	    if (speculative_page_fault(&vma)) goto done
    again:
	    lock(mmap_sem)
	    if (vma)
	       try_to_reuse(vma)
	    else
	       vma = find_vma();
	    handle_pte_fault(vma);
	    if retry
	       unlock(mmap_sem)
	       vma = NULL;
	       goto again;
    done
	    handle fault error
    }
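
Here is a more concrete sketch of the pte locking step, i.e. what the
speculative variant of pte_map_lock() could look like once the speculative
bits are added (patch 4 only carries the trivial version). vma_has_changed(),
FAULT_FLAG_SPECULATIVE and the pmd value cached in the vm_fault structure
(orig_pmd) are assumptions derived from the description above, and the PMD
check only applies when THP is enabled (see the changes since v8 below):

    static bool pte_map_lock(struct vm_fault *vmf)
    {
            bool ret = false;
            pmd_t pmdval;

            if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
                    vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
                                                   vmf->address, &vmf->ptl);
                    return true;
            }

            /*
             * Disabling interrupts blocks the IPI khugepaged waits for,
             * so a collapse cannot complete while they are held off.
             */
            local_irq_disable();
            if (vma_has_changed(vmf))               /* VMA seqcount re-check */
                    goto out;
            pmdval = READ_ONCE(*vmf->pmd);
            if (!pmd_same(pmdval, vmf->orig_pmd))   /* collapse in progress? */
                    goto out;
            vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
            if (!spin_trylock(vmf->ptl))            /* avoid TLB IPI deadlock */
                    goto out;
            vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
            ret = true;
    out:
            local_irq_enable();
            return ret;
    }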

Support for THP is not done because when checking the PMD, we can be
confused by a collapsing operation in progress in khugepaged. The issue is
that pmd_none() could be true either because the PMD is not yet populated
or because the underlying PTEs are about to be collapsed. So we cannot
safely allocate a PMD if pmd_none() is true.

This series adds a new software performance event named
'speculative-faults' or 'spf'. It counts the number of page fault events
successfully handled in a speculative way. When recording 'faults,spf'
events, 'faults' counts the total number of page fault events while 'spf'
counts only the faults processed in a speculative way.
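
The counter can also be read programmatically. Below is only a sketch: it
assumes the PERF_COUNT_SW_SPF value added to include/uapi/linux/perf_event.h
by this series (so it builds only against the patched headers) and keeps
error handling minimal:

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
            struct perf_event_attr attr;
            long long count = 0;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_SOFTWARE;
            attr.config = PERF_COUNT_SW_SPF;        /* added by this series */

            /* measure the calling thread on any CPU */
            fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0)
                    return 1;

            /* ... run the workload to be measured in this thread ... */

            if (read(fd, &count, sizeof(count)) == sizeof(count))
                    printf("speculative faults: %lld\n", count);
            close(fd);
            return 0;
    }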

This series also introduces some trace events. They allow identifying why
a page fault was not processed in a speculative way. This doesn't take into
account the faults generated by a single-threaded process, which are
directly processed while holding the mmap_sem. These trace events are
grouped in a system named 'pagefault', they are:
 - pagefault:spf_pte_lock : the pte was already locked by another thread
 - pagefault:spf_vma_changed : the VMA has been changed behind our back
 - pagefault:spf_vma_noanon : the vma->anon_vma field was not yet set
 - pagefault:spf_vma_notsup : the VMA's type is not supported
 - pagefault:spf_vma_access : the VMA's access rights are not respected
 - pagefault:spf_pmd_changed : the upper PMD pointer has changed behind
   our back

To record all the related events, the easiest way is to run perf with the
following arguments:
$ perf stat -e 'faults,spf,pagefault:*' <command>

There is also a dedicated vmstat counter showing the number of page faults
successfully handled in a speculative way. It can be seen this way:
$ grep speculative_pgfault /proc/vmstat

This series builds on top of v4.16-mmotm-2018-04-13-17-28 and is
functional on x86 and PowerPC.

---------------------
Real Workload results

As mentioned in a previous email, we did unofficial runs using a "popular
in-memory multithreaded database product" on a 176-core SMT8 Power system,
which showed a 30% improvement in the number of transactions processed per
second. This run was done on the v6 series, but the changes introduced in
this new version should not impact the performance boost seen.

Here are the perf data captured during 2 of these runs on top of the v8
series:
		vanilla		spf
faults		89.418		101.364		
spf                n/a		 97.989

With the SPF kernel, most of the page faults were processed in a
speculative way.

Ganesh Mahendran backported the series on top of a 4.9 kernel and gave it
a try on an Android device. He reported that the application launch time
was improved by 15%, and for large applications (~100 threads) by 20% [3].

------------------
Benchmarks results

Base kernel is v4.16
SPF is BASE + this series

Kernbench:
----------
Here are the results on a 16-CPU x86 guest using kernbench on a 4.15
kernel (the kernel is built 5 times):

Average	Half load -j 8
		 Run	(std deviation)
		 BASE			SPF
Elapsed	Time	 152.5	 (0.631585)	151.406	(0.391446)	-0.72%
User	Time	 1036.4	 (2.42065)	1025.2	(1.9909)	-1.08%
System	Time	 125.688 (0.403695)	126.794	(0.716715)	0.88%
Percent	CPU	 761.4	 (2.07364)	760.4	(1.34164)	-0.13%
Context	Switches 51429	 (804.93)	51435.6	(1108.12)	0.01%
Sleeps		 104625	 (510.468)	105877	(703.774)	1.20%
						
Average	Optimal	load -j	16
		 Run	(std deviation)
		 BASE			SPF
Elapsed	Time	 75.51	 (0.576498)	74.684	(0.279159)	-1.09%
User	Time	 970.701 (69.2768)	964.945	(63.5283)	-0.59%
System	Time	 111.965 (14.4711)	112.465	(15.1159)	0.45%
Percent	CPU	 1044.8	 (298.806)	1051.3	(306.658)	0.62%
Context	Switches 75261.5 (25129.6)	75387.4	(25264.8)	0.17%
Sleeps		 109660	 (5349.62)	110279	(4704.95)	0.56%

During a run on the SPF kernel, perf events were captured:
 Performance counter stats for '../kernbench -M':
         513045402      faults
               202      spf
                 0      pagefault:spf_pte_lock
                 0      pagefault:spf_vma_changed
                 0      pagefault:spf_vma_noanon
              2210      pagefault:spf_vma_notsup
                 0      pagefault:spf_vma_access
                 0      pagefault:spf_pmd_changed

    1837.394054020 seconds time elapsed

Very few speculative page faults were recorded as most of the processes
involved are single-threaded (it seems that on this architecture some
threads were created during the kernel build process).

Here are the kernbench results on an 80-CPU Power8 system:

Average	Half load -j 40
		 Run	(std deviation)
		 BASE			SPF
Elapsed	Time	 117.222 (0.733294)	116.784	(0.452139)	-0.37%
User	Time	 4485.58 (27.1243)	4473.9	(8.0409)	-0.26%
System	Time	 134.228 (0.601764)	134.874	(0.680169)	0.48%
Percent	CPU	 3940.4	 (12.4218)	3945.8	(12.5579)	0.14%
Context	Switches 92414.8 (689.529)	92448.6	(511.846)	0.04%
Sleeps		 318388	 (758.783)	318758	(1758.96)	0.12%
						
Average	Optimal	load -j	80
		 Run	(std deviation)
		 BASE			SPF
Elapsed	Time	 107.102 (0.73605)	107.872	(1.08573)	0.72%
User	Time	 5875.13 (1464.89)	5862.59	(1463.87)	-0.21%
System	Time	 157.006 (24.0146)	157.731	(24.1209)	0.46%
Percent	CPU	 5445.4	 (1587.03)	5417.6	(1552.41)	-0.51%
Context	Switches 221714	 (136312)	221526	(136071)	-0.08%
Sleeps		 332500	 (15173.2)	332037	(14202.1)	-0.14%

During a run on the SPF kernel, perf events were captured:
 Performance counter stats for '../kernbench -M':
         116933988      faults
                 0      spf
                 0      pagefault:spf_pte_lock
                 0      pagefault:spf_vma_changed
                 0      pagefault:spf_vma_noanon
               476      pagefault:spf_vma_notsup
                 0      pagefault:spf_vma_access
                 0      pagefault:spf_pmd_changed

Most of the processes involved are single-threaded, so SPF is not
activated, but there is no impact on performance.

Ebizzy:
-------
The test counts the number of records per second it can manage; the
higher, the better. I ran it as 'ebizzy -mTRp'. To get consistent results,
the test was repeated 100 times and the average result was taken.

  		BASE		SPF		delta	
16 CPUs x86 VM	12405.52	91104.52	634.39%
80 CPUs P8 node 37880.01	76201.05	101.16%

Here are the performance counters read during a run on a 16-CPU x86 VM:
 Performance counter stats for './ebizzy -mRTp':
            860074      faults
            856866      spf
               285      pagefault:spf_pte_lock
              1506      pagefault:spf_vma_changed
                 0      pagefault:spf_vma_noanon
                73      pagefault:spf_vma_notsup
                 0      pagefault:spf_vma_access
                 0      pagefault:spf_pmd_changed

And the ones captured during a run on an 80-CPU Power node:
 Performance counter stats for './ebizzy -mRTp':
            722695      faults
            699402      spf
             16048      pagefault:spf_pte_lock
              6838      pagefault:spf_vma_changed
                 0      pagefault:spf_vma_noanon
               277      pagefault:spf_vma_notsup
                 0      pagefault:spf_vma_access
                 0      pagefault:spf_pmd_changed

In ebizzy's case most of the page faults were handled in a speculative way,
leading to the ebizzy performance boost.

------------------
Changes since v9:
 - Accounted for all review feedback from David Rientjes and Jerome Glisse,
   hopefully
 - Fix a lockdep warning when populate_vma_page_range() is called by
   mprotect_fixup(). The call to vm_write_end(vma) is now made before
   calling populate_vma_page_range() since VMA locking is not required
   there.
 - Introduce INIT_VMA() to move the VMA's sequence and refcount
   initialization out of __vma_link_rb(). This fixes various lockdep
   warnings raised when unmap_region() may be called before vma_link()
   (patches 7 & 8)
 - Allow CONFIG_SPECULATIVE_PAGE_FAULT to be switched off
 - Pass the VMA's flags value to maybe_mkwrite(), allowing use of the
   cached ones (patch 12)
 - Make CONFIG_SPECULATIVE_PAGE_FAULT user configurable
 - Add speculative page fault vmstats
 - Remove #ifdef in arch/*/mm/fault.c
Changes since v8:
 - Don't check PMD when locking the pte when THP is disabled
   Thanks to Daniel Jordan for reporting this.
 - Rebase on 4.16
Changes since v7:
 - move pte_map_lock() and pte_spinlock() higher up in mm/memory.c
   (patches 4 & 5)
 - make pte_unmap_same() compatible with the speculative page fault (patch
   6)
Changes since v6:
 - Rename config variable to CONFIG_SPECULATIVE_PAGE_FAULT (patch 1)
 - Review the way the config variable is set (patch 1 to 3)
 - Introduce mm_rb_write_*lock() in mm/mmap.c (patch 18)
 - Merge patch introducing pte try locking in the patch 18.
Changes since v5:
 - use rwlock against the mm RB tree in place of SRCU
 - add a VMA's reference count to protect VMA while using it without
   holding the mmap_sem.
 - check PMD value to detect collapsing operation
 - don't try speculative page fault for mono threaded processes
 - try to reuse the fetched VMA if VM_RETRY is returned
 - go directly to the error path if an error is detected during the SPF
   path
 - fix race window when moving VMA in move_vma()
Changes since v4:
 - As requested by Andrew Morton, use CONFIG_SPF and define it earlier in
 the series to ease bisection.
Changes since v3:
 - Don't build when CONFIG_SMP is not set
 - Fixed a lock dependency warning in __vma_adjust()
 - Use READ_ONCE to access p*d values in handle_speculative_fault()
 - Call memcp_oom() service in handle_speculative_fault()
Changes since v2:
 - Perf event is renamed in PERF_COUNT_SW_SPF
 - On Power handle do_page_fault()'s cleaning
 - On Power if the VM_FAULT_ERROR is returned by
 handle_speculative_fault(), do not retry but jump to the error path
 - If VMA's flags are not matching the fault, directly returns
 VM_FAULT_SIGSEGV and not VM_FAULT_RETRY
 - Check for pud_trans_huge() to avoid speculative path
 - Handles _vm_normal_page()'s introduced by 6f16211df3bf
 ("mm/device-public-memory: device memory cache coherent with CPU")
 - add and review few comments in the code
Changes since v1:
 - Remove PERF_COUNT_SW_SPF_FAILED perf event.
 - Add tracing events to detail speculative page fault failures.
 - Cache VMA field values which are used once the PTE is unlocked at the
 end of the page fault handling.
 - Ensure that fields read during the speculative path are written and read
 using WRITE_ONCE and READ_ONCE.
 - Add checks at the beginning of the speculative path to abort it if the
 VMA is known to not be supported.
Changes since RFC V5 [5]
 - Port to 4.13 kernel
 - Merging patch fixing lock dependency into the original patch
 - Replace the 2 parameters of vma_has_changed() with the vmf pointer
 - In patch 7, don't call __do_fault() in the speculative path as it may
 want to unlock the mmap_sem.
 - In patch 11-12, don't check for vma boundaries when
 page_add_new_anon_rmap() is called during the spf path and protect against
 anon_vma pointer's update.
 - In patch 13-16, add performance events to report number of successful
 and failed speculative events.

[1]
http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
[2] https://patchwork.kernel.org/patch/9999687/
[3] https://lkml.org/lkml/2018/3/21/894


Laurent Dufour (21):
  mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
  x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
  mm: make pte_unmap_same compatible with SPF
  mm: introduce INIT_VMA()
  mm: protect VMA modifications using VMA sequence count
  mm: protect mremap() against SPF hanlder
  mm: protect SPF handler against anon_vma changes
  mm: cache some VMA fields in the vm_fault structure
  mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
  mm: introduce __lru_cache_add_active_or_unevictable
  mm: introduce __vm_normal_page()
  mm: introduce __page_add_new_anon_rmap()
  mm: protect mm_rb tree with a rwlock
  mm: adding speculative page fault failure trace events
  perf: add a speculative page fault sw event
  perf tools: add support for the SPF perf event
  mm: speculative page fault handler return VMA
  mm: add speculative page fault vmstats
  powerpc/mm: add speculative page fault

Peter Zijlstra (4):
  mm: prepare for FAULT_FLAG_SPECULATIVE
  mm: VMA sequence count
  mm: provide speculative fault infrastructure
  x86/mm: add speculative pagefault handling

 arch/powerpc/Kconfig                  |   1 +
 arch/powerpc/mm/fault.c               |  33 +-
 arch/x86/Kconfig                      |   1 +
 arch/x86/mm/fault.c                   |  42 ++-
 fs/exec.c                             |   2 +-
 fs/proc/task_mmu.c                    |   5 +-
 fs/userfaultfd.c                      |  17 +-
 include/linux/hugetlb_inline.h        |   2 +-
 include/linux/migrate.h               |   4 +-
 include/linux/mm.h                    | 145 +++++++-
 include/linux/mm_types.h              |   7 +
 include/linux/pagemap.h               |   4 +-
 include/linux/rmap.h                  |  12 +-
 include/linux/swap.h                  |  10 +-
 include/linux/vm_event_item.h         |   3 +
 include/trace/events/pagefault.h      |  88 +++++
 include/uapi/linux/perf_event.h       |   1 +
 kernel/fork.c                         |   5 +-
 mm/Kconfig                            |  22 ++
 mm/huge_memory.c                      |   6 +-
 mm/hugetlb.c                          |   2 +
 mm/init-mm.c                          |   3 +
 mm/internal.h                         |  20 ++
 mm/khugepaged.c                       |   5 +
 mm/madvise.c                          |   6 +-
 mm/memory.c                           | 649 +++++++++++++++++++++++++++++-----
 mm/mempolicy.c                        |  51 ++-
 mm/migrate.c                          |   6 +-
 mm/mlock.c                            |  13 +-
 mm/mmap.c                             | 229 +++++++++---
 mm/mprotect.c                         |   4 +-
 mm/mremap.c                           |  13 +
 mm/nommu.c                            |   2 +-
 mm/rmap.c                             |   5 +-
 mm/swap.c                             |   6 +-
 mm/swap_state.c                       |   8 +-
 mm/vmstat.c                           |   5 +-
 tools/include/uapi/linux/perf_event.h |   1 +
 tools/perf/util/evsel.c               |   1 +
 tools/perf/util/parse-events.c        |   4 +
 tools/perf/util/parse-events.l        |   1 +
 tools/perf/util/python.c              |   1 +
 42 files changed, 1231 insertions(+), 214 deletions(-)
 create mode 100644 include/trace/events/pagefault.h

-- 
2.7.4

* [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-23  5:58   ` Minchan Kim
  2018-04-17 14:33 ` [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
                   ` (25 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

This configuration variable will be used to build the code needed to
handle speculative page faults.

By default it is turned off, and it is enabled depending on architecture
support, SMP and MMU.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/Kconfig | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index d5004d82a1d6..5484dca11199 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -752,3 +752,25 @@ config GUP_BENCHMARK
 	  performance of get_user_pages_fast().
 
 	  See tools/testing/selftests/vm/gup_benchmark.c
+
+config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
+       def_bool n
+
+config SPECULATIVE_PAGE_FAULT
+       bool "Speculative page faults"
+       default y
+       depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
+       depends on MMU && SMP
+       help
+         Try to handle user space page faults without holding the mmap_sem.
+
+	 This should allow better concurrency for massively threaded process
+	 since the page fault handler will not wait for other threads memory
+	 layout change to be done, assuming that this change is done in another
+	 part of the process's memory space. This type of page fault is named
+	 speculative page fault.
+
+	 If the speculative page fault fails because of a concurrency is
+	 detected or because underlying PMD or PTE tables are not yet
+	 allocating, it is failing its processing and a classic page fault
+	 is then tried.
-- 
2.7.4

* [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-05-08 11:04     ` Punit Agrawal
  2018-04-17 14:33 ` [PATCH v10 03/25] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
                   ` (24 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT which turns on the
Speculative Page Fault handler when building for 64bit.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d8983df5a2bc..ebdeb48e4a4a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -30,6 +30,7 @@ config X86_64
 	select MODULES_USE_ELF_RELA
 	select X86_DEV_DMA_OPS
 	select ARCH_HAS_SYSCALL_WRAPPER
+	select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
 
 #
 # Arch settings
-- 
2.7.4

* [PATCH v10 03/25] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 04/25] mm: prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for BOOK3S_64. This enables
the Speculative Page Fault handler.

Support is only provided for BOOK3S_64 currently because:
- it requires CONFIG_PPC_STD_MMU because of checks done in
  set_access_flags_filter()
- it requires BOOK3S because we can't support book3e_hugetlb_preload()
  called by update_mmu_cache()

Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c32a181a7cbb..21ef887da7a3 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -233,6 +233,7 @@ config PPC
 	select OLD_SIGACTION			if PPC32
 	select OLD_SIGSUSPEND
 	select SPARSE_IRQ
+	select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT if PPC_BOOK3S_64
 	select SYSCTL_EXCEPTION_TRACE
 	select VIRT_TO_BUS			if !PPC64
 	#
-- 
2.7.4

* [PATCH v10 04/25] mm: prepare for FAULT_FLAG_SPECULATIVE
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (2 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 03/25] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 05/25] mm: introduce pte_spinlock " Laurent Dufour
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

When speculating faults (without holding mmap_sem) we need to validate
that the vma against which we loaded pages is still valid when we're
ready to install the new PTE.

Therefore, replace the pte_offset_map_lock() calls that (re)take the
PTL with pte_map_lock() which can fail in case we find the VMA changed
since we started the fault.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Port to 4.12 kernel]
[Remove the comment about the fault_env structure which has been
 implemented as the vm_fault structure in the kernel]
[move pte_map_lock()'s definition upper in the file]
[move the define of FAULT_FLAG_SPECULATIVE later in the series]
[review error path in do_swap_page(), do_anonymous_page() and
 wp_page_copy()]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 87 ++++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 58 insertions(+), 29 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index a1f990e33e38..4528bd584b7a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+static inline bool pte_map_lock(struct vm_fault *vmf)
+{
+	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+				       vmf->address, &vmf->ptl);
+	return true;
+}
+
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically.  Before making any commitment, on those architectures
@@ -2477,25 +2484,26 @@ static int wp_page_copy(struct vm_fault *vmf)
 	const unsigned long mmun_start = vmf->address & PAGE_MASK;
 	const unsigned long mmun_end = mmun_start + PAGE_SIZE;
 	struct mem_cgroup *memcg;
+	int ret = VM_FAULT_OOM;
 
 	if (unlikely(anon_vma_prepare(vma)))
-		goto oom;
+		goto out;
 
 	if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
 		new_page = alloc_zeroed_user_highpage_movable(vma,
 							      vmf->address);
 		if (!new_page)
-			goto oom;
+			goto out;
 	} else {
 		new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
 				vmf->address);
 		if (!new_page)
-			goto oom;
+			goto out;
 		cow_user_page(new_page, old_page, vmf->address, vma);
 	}
 
 	if (mem_cgroup_try_charge(new_page, mm, GFP_KERNEL, &memcg, false))
-		goto oom_free_new;
+		goto out_free_new;
 
 	__SetPageUptodate(new_page);
 
@@ -2504,7 +2512,10 @@ static int wp_page_copy(struct vm_fault *vmf)
 	/*
 	 * Re-check the pte - we dropped the lock
 	 */
-	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		ret = VM_FAULT_RETRY;
+		goto out_uncharge;
+	}
 	if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
 		if (old_page) {
 			if (!PageAnon(old_page)) {
@@ -2591,12 +2602,14 @@ static int wp_page_copy(struct vm_fault *vmf)
 		put_page(old_page);
 	}
 	return page_copied ? VM_FAULT_WRITE : 0;
-oom_free_new:
+out_uncharge:
+	mem_cgroup_cancel_charge(new_page, memcg, false);
+out_free_new:
 	put_page(new_page);
-oom:
+out:
 	if (old_page)
 		put_page(old_page);
-	return VM_FAULT_OOM;
+	return ret;
 }
 
 /**
@@ -2617,8 +2630,8 @@ static int wp_page_copy(struct vm_fault *vmf)
 int finish_mkwrite_fault(struct vm_fault *vmf)
 {
 	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
-	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
-				       &vmf->ptl);
+	if (!pte_map_lock(vmf))
+		return VM_FAULT_RETRY;
 	/*
 	 * We might have raced with another page fault while we released the
 	 * pte_offset_map_lock.
@@ -2736,8 +2749,11 @@ static int do_wp_page(struct vm_fault *vmf)
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			lock_page(vmf->page);
-			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					vmf->address, &vmf->ptl);
+			if (!pte_map_lock(vmf)) {
+				unlock_page(vmf->page);
+				put_page(vmf->page);
+				return VM_FAULT_RETRY;
+			}
 			if (!pte_same(*vmf->pte, vmf->orig_pte)) {
 				unlock_page(vmf->page);
 				pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2944,11 +2960,15 @@ int do_swap_page(struct vm_fault *vmf)
 
 		if (!page) {
 			/*
-			 * Back out if somebody else faulted in this pte
-			 * while we released the pte lock.
+			 * Back out if the VMA has changed in our back during
+			 * a speculative page fault or if somebody else
+			 * faulted in this pte while we released the pte lock.
 			 */
-			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					vmf->address, &vmf->ptl);
+			if (!pte_map_lock(vmf)) {
+				delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+				ret = VM_FAULT_RETRY;
+				goto out;
+			}
 			if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
 				ret = VM_FAULT_OOM;
 			delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
@@ -3001,10 +3021,13 @@ int do_swap_page(struct vm_fault *vmf)
 	}
 
 	/*
-	 * Back out if somebody else already faulted in this pte.
+	 * Back out if the VMA has changed in our back during a speculative
+	 * page fault or if somebody else already faulted in this pte.
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		ret = VM_FAULT_RETRY;
+		goto out_cancel_cgroup;
+	}
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
 		goto out_nomap;
 
@@ -3082,8 +3105,9 @@ int do_swap_page(struct vm_fault *vmf)
 out:
 	return ret;
 out_nomap:
-	mem_cgroup_cancel_charge(page, memcg, false);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+out_cancel_cgroup:
+	mem_cgroup_cancel_charge(page, memcg, false);
 out_page:
 	unlock_page(page);
 out_release:
@@ -3134,8 +3158,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
 			!mm_forbids_zeropage(vma->vm_mm)) {
 		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
 						vma->vm_page_prot));
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				vmf->address, &vmf->ptl);
+		if (!pte_map_lock(vmf))
+			return VM_FAULT_RETRY;
 		if (!pte_none(*vmf->pte))
 			goto unlock;
 		ret = check_stable_address_space(vma->vm_mm);
@@ -3170,14 +3194,16 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
-	if (!pte_none(*vmf->pte))
+	if (!pte_map_lock(vmf)) {
+		ret = VM_FAULT_RETRY;
 		goto release;
+	}
+	if (!pte_none(*vmf->pte))
+		goto unlock_and_release;
 
 	ret = check_stable_address_space(vma->vm_mm);
 	if (ret)
-		goto release;
+		goto unlock_and_release;
 
 	/* Deliver the page fault to userland, check inside PT lock */
 	if (userfaultfd_missing(vma)) {
@@ -3199,10 +3225,12 @@ static int do_anonymous_page(struct vm_fault *vmf)
 unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return ret;
+unlock_and_release:
+	pte_unmap_unlock(vmf->pte, vmf->ptl);
 release:
 	mem_cgroup_cancel_charge(page, memcg, false);
 	put_page(page);
-	goto unlock;
+	return ret;
 oom_free_page:
 	put_page(page);
 oom:
@@ -3295,8 +3323,9 @@ static int pte_alloc_one_map(struct vm_fault *vmf)
 	 * pte_none() under vmf->ptl protection when we return to
 	 * alloc_set_pte().
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf))
+		return VM_FAULT_RETRY;
+
 	return 0;
 }
 
-- 
2.7.4

* [PATCH v10 05/25] mm: introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (3 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 04/25] mm: prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF Laurent Dufour
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

When handling a page fault without holding the mmap_sem, fetching the
pte lock pointer and taking the lock have to be done while ensuring
that the VMA is not modified behind our back.

So move the fetch and locking operations into a dedicated function.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4528bd584b7a..0b9a51f80e0e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+static inline bool pte_spinlock(struct vm_fault *vmf)
+{
+	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+	spin_lock(vmf->ptl);
+	return true;
+}
+
 static inline bool pte_map_lock(struct vm_fault *vmf)
 {
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
@@ -3804,8 +3811,8 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * validation through pte_unmap_same(). It's of NUMA type but
 	 * the pfn may be screwed if the read is non atomic.
 	 */
-	vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (!pte_spinlock(vmf))
+		return VM_FAULT_RETRY;
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
@@ -3998,8 +4005,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
 	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
 		return do_numa_page(vmf);
 
-	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (!pte_spinlock(vmf))
+		return VM_FAULT_RETRY;
 	entry = vmf->orig_pte;
 	if (unlikely(!pte_same(*vmf->pte, entry)))
 		goto unlock;
-- 
2.7.4

* [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (4 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 05/25] mm: introduce pte_spinlock " Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-23  6:31   ` Minchan Kim
  2018-05-10 16:15   ` vinayak menon
  2018-04-17 14:33 ` [PATCH v10 07/25] mm: introduce INIT_VMA() Laurent Dufour
                   ` (20 subsequent siblings)
  26 siblings, 2 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

pte_unmap_same() makes the assumption that the page tables are still
around because the mmap_sem is held.
This is no longer the case when running a speculative page fault, and an
additional check must be made to ensure that the final page tables are
still there.

This is now done by calling pte_spinlock() to check for the VMA's
consistency while locking the page tables.

This requires passing a vm_fault structure to pte_unmap_same(), which
contains all the needed parameters.

As pte_spinlock() may fail in the case of a speculative page fault, if the
VMA has been touched behind our back, pte_unmap_same() now returns 3
cases :
	1. pte are the same (0)
	2. pte are different (VM_FAULT_PTNOTSAME)
	3. a VMA change has been detected (VM_FAULT_RETRY)

Case 2 is handled by the introduction of a new VM_FAULT flag named
VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
page fault while holding the mmap_sem.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 39 ++++++++++++++++++++++++++++-----------
 2 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4d1aff80669c..714da99d77a3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1208,6 +1208,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
 #define VM_FAULT_NEEDDSYNC  0x2000	/* ->fault did not modify page tables
 					 * and needs fsync() to complete (for
 					 * synchronous page faults in DAX) */
+#define VM_FAULT_PTNOTSAME 0x4000	/* Page table entries have changed */
 
 #define VM_FAULT_ERROR	(VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
 			 VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
diff --git a/mm/memory.c b/mm/memory.c
index 0b9a51f80e0e..f86efcb8e268 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2309,21 +2309,29 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
  * parts, do_swap_page must check under lock before unmapping the pte and
  * proceeding (but do_wp_page is only called after already making such a check;
  * and do_anonymous_page can safely check later on).
+ *
+ * pte_unmap_same() returns:
+ *	0			if the PTE are the same
+ *	VM_FAULT_PTNOTSAME	if the PTE are different
+ *	VM_FAULT_RETRY		if the VMA has changed in our back during
+ *				a speculative page fault handling.
  */
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-				pte_t *page_table, pte_t orig_pte)
+static inline int pte_unmap_same(struct vm_fault *vmf)
 {
-	int same = 1;
+	int ret = 0;
+
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
-		spinlock_t *ptl = pte_lockptr(mm, pmd);
-		spin_lock(ptl);
-		same = pte_same(*page_table, orig_pte);
-		spin_unlock(ptl);
+		if (pte_spinlock(vmf)) {
+			if (!pte_same(*vmf->pte, vmf->orig_pte))
+				ret = VM_FAULT_PTNOTSAME;
+			spin_unlock(vmf->ptl);
+		} else
+			ret = VM_FAULT_RETRY;
 	}
 #endif
-	pte_unmap(page_table);
-	return same;
+	pte_unmap(vmf->pte);
+	return ret;
 }
 
 static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
@@ -2912,10 +2920,19 @@ int do_swap_page(struct vm_fault *vmf)
 	pte_t pte;
 	int locked;
 	int exclusive = 0;
-	int ret = 0;
+	int ret;
 
-	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
+	ret = pte_unmap_same(vmf);
+	if (ret) {
+		/*
+		 * If pte != orig_pte, this means another thread did the
+		 * swap operation in our back.
+		 * So nothing else to do.
+		 */
+		if (ret == VM_FAULT_PTNOTSAME)
+			ret = 0;
 		goto out;
+	}
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
-- 
2.7.4

* [PATCH v10 07/25] mm: introduce INIT_VMA()
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (5 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 08/25] mm: VMA sequence count Laurent Dufour
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Some VMA struct fields need to be initialized once the VMA structure is
allocated.
Currently this only concerns the anon_vma_chain field, but others will be
added to support the speculative page fault.

Instead of spreading the initialization calls all over the code, let's
introduce a dedicated inline function.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 fs/exec.c          |  2 +-
 include/linux/mm.h |  5 +++++
 kernel/fork.c      |  2 +-
 mm/mmap.c          | 10 +++++-----
 mm/nommu.c         |  2 +-
 5 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 32eea4c65909..bd03689aa358 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -311,7 +311,7 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
 	vma->vm_start = vma->vm_end - PAGE_SIZE;
 	vma->vm_flags = VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-	INIT_LIST_HEAD(&vma->anon_vma_chain);
+	INIT_VMA(vma);
 
 	err = insert_vm_struct(mm, vma);
 	if (err)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 714da99d77a3..efc1248b82bd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1261,6 +1261,11 @@ struct zap_details {
 	pgoff_t last_index;			/* Highest page->index to unmap */
 };
 
+static inline void INIT_VMA(struct vm_area_struct *vma)
+{
+	INIT_LIST_HEAD(&vma->anon_vma_chain);
+}
+
 struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte, bool with_public_device);
 #define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
diff --git a/kernel/fork.c b/kernel/fork.c
index b1d877f1a0ac..d937e5945f77 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -451,7 +451,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		if (!tmp)
 			goto fail_nomem;
 		*tmp = *mpnt;
-		INIT_LIST_HEAD(&tmp->anon_vma_chain);
+		INIT_VMA(tmp);
 		retval = vma_dup_policy(mpnt, tmp);
 		if (retval)
 			goto fail_nomem_policy;
diff --git a/mm/mmap.c b/mm/mmap.c
index 188f195883b9..8bd9ae1dfacc 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1700,7 +1700,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	vma->vm_flags = vm_flags;
 	vma->vm_page_prot = vm_get_page_prot(vm_flags);
 	vma->vm_pgoff = pgoff;
-	INIT_LIST_HEAD(&vma->anon_vma_chain);
+	INIT_VMA(vma);
 
 	if (file) {
 		if (vm_flags & VM_DENYWRITE) {
@@ -2586,7 +2586,7 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	/* most fields are the same, copy all, and then fixup */
 	*new = *vma;
 
-	INIT_LIST_HEAD(&new->anon_vma_chain);
+	INIT_VMA(new);
 
 	if (new_below)
 		new->vm_end = addr;
@@ -2956,7 +2956,7 @@ static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long
 		return -ENOMEM;
 	}
 
-	INIT_LIST_HEAD(&vma->anon_vma_chain);
+	INIT_VMA(vma);
 	vma->vm_mm = mm;
 	vma->vm_start = addr;
 	vma->vm_end = addr + len;
@@ -3167,7 +3167,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 		new_vma->vm_pgoff = pgoff;
 		if (vma_dup_policy(vma, new_vma))
 			goto out_free_vma;
-		INIT_LIST_HEAD(&new_vma->anon_vma_chain);
+		INIT_VMA(new_vma);
 		if (anon_vma_clone(new_vma, vma))
 			goto out_free_mempol;
 		if (new_vma->vm_file)
@@ -3310,7 +3310,7 @@ static struct vm_area_struct *__install_special_mapping(
 	if (unlikely(vma == NULL))
 		return ERR_PTR(-ENOMEM);
 
-	INIT_LIST_HEAD(&vma->anon_vma_chain);
+	INIT_VMA(vma);
 	vma->vm_mm = mm;
 	vma->vm_start = addr;
 	vma->vm_end = addr + len;
diff --git a/mm/nommu.c b/mm/nommu.c
index 13723736d38f..6909ea0bf88d 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1212,7 +1212,7 @@ unsigned long do_mmap(struct file *file,
 	region->vm_flags = vm_flags;
 	region->vm_pgoff = pgoff;
 
-	INIT_LIST_HEAD(&vma->anon_vma_chain);
+	INIT_VMA(vma);
 	vma->vm_flags = vm_flags;
 	vma->vm_pgoff = pgoff;
 
-- 
2.7.4

* [PATCH v10 08/25] mm: VMA sequence count
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (6 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 07/25] mm: introduce INIT_VMA() Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-23  6:42   ` Minchan Kim
  2018-04-17 14:33 ` [PATCH v10 09/25] mm: protect VMA modifications using " Laurent Dufour
                   ` (18 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts such that we can easily test if a VMA is changed.

The unmap_page_range() one allows us to make assumptions about
page-tables; when we find the seqcount hasn't changed we can assume
page-tables are still valid.

The flip side is that we cannot distinguish between a vma_adjust() and
the unmap_page_range() -- where with the former we could have
re-checked the vma bounds against the address.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Port to 4.12 kernel]
[Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
[Introduce vm_write_* inline function depending on
 CONFIG_SPECULATIVE_PAGE_FAULT]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
 using vm_raw_write* functions]
[Fix a lock dependency warning in mmap_region() when entering the error
 path]
[move sequence initialisation INIT_VMA()]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h       | 44 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/mm_types.h |  3 +++
 mm/memory.c              |  2 ++
 mm/mmap.c                | 31 +++++++++++++++++++++++++++++++
 4 files changed, 80 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index efc1248b82bd..988daf7030c9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1264,6 +1264,9 @@ struct zap_details {
 static inline void INIT_VMA(struct vm_area_struct *vma)
 {
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	seqcount_init(&vma->vm_sequence);
+#endif
 }
 
 struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
@@ -1386,6 +1389,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
 	unmap_mapping_range(mapping, holebegin, holelen, 0);
 }
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+static inline void vm_write_begin(struct vm_area_struct *vma)
+{
+	write_seqcount_begin(&vma->vm_sequence);
+}
+static inline void vm_write_begin_nested(struct vm_area_struct *vma,
+					 int subclass)
+{
+	write_seqcount_begin_nested(&vma->vm_sequence, subclass);
+}
+static inline void vm_write_end(struct vm_area_struct *vma)
+{
+	write_seqcount_end(&vma->vm_sequence);
+}
+static inline void vm_raw_write_begin(struct vm_area_struct *vma)
+{
+	raw_write_seqcount_begin(&vma->vm_sequence);
+}
+static inline void vm_raw_write_end(struct vm_area_struct *vma)
+{
+	raw_write_seqcount_end(&vma->vm_sequence);
+}
+#else
+static inline void vm_write_begin(struct vm_area_struct *vma)
+{
+}
+static inline void vm_write_begin_nested(struct vm_area_struct *vma,
+					 int subclass)
+{
+}
+static inline void vm_write_end(struct vm_area_struct *vma)
+{
+}
+static inline void vm_raw_write_begin(struct vm_area_struct *vma)
+{
+}
+static inline void vm_raw_write_end(struct vm_area_struct *vma)
+{
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
 extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
 		void *buf, int len, unsigned int gup_flags);
 extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 21612347d311..db5e9d630e7a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -335,6 +335,9 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	seqcount_t vm_sequence;
+#endif
 } __randomize_layout;
 
 struct core_thread {
diff --git a/mm/memory.c b/mm/memory.c
index f86efcb8e268..f7fed053df80 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1503,6 +1503,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 	unsigned long next;
 
 	BUG_ON(addr >= end);
+	vm_write_begin(vma);
 	tlb_start_vma(tlb, vma);
 	pgd = pgd_offset(vma->vm_mm, addr);
 	do {
@@ -1512,6 +1513,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
 	} while (pgd++, addr = next, addr != end);
 	tlb_end_vma(tlb, vma);
+	vm_write_end(vma);
 }
 
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 8bd9ae1dfacc..813e49589ea1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -692,6 +692,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	long adjust_next = 0;
 	int remove_next = 0;
 
+	/*
+	 * Why using vm_raw_write*() functions here to avoid lockdep's warning ?
+	 *
+	 * Locked is complaining about a theoretical lock dependency, involving
+	 * 3 locks:
+	 *   mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
+	 *
+	 * Here are the major path leading to this dependency :
+	 *  1. __vma_adjust() mmap_sem  -> vm_sequence -> i_mmap_rwsem
+	 *  2. move_vmap() mmap_sem -> vm_sequence -> fs_reclaim
+	 *  3. __alloc_pages_nodemask() fs_reclaim -> i_mmap_rwsem
+	 *  4. unmap_mapping_range() i_mmap_rwsem -> vm_sequence
+	 *
+	 * So there is no way to solve this easily, especially because in
+	 * unmap_mapping_range() the i_mmap_rwsem is grab while the impacted
+	 * VMAs are not yet known.
+	 * However, the way the vm_seq is used is guarantying that we will
+	 * never block on it since we just check for its value and never wait
+	 * for it to move, see vma_has_changed() and handle_speculative_fault().
+	 */
+	vm_raw_write_begin(vma);
+	if (next)
+		vm_raw_write_begin(next);
+
 	if (next && !insert) {
 		struct vm_area_struct *exporter = NULL, *importer = NULL;
 
@@ -902,6 +926,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			anon_vma_merge(vma, next);
 		mm->map_count--;
 		mpol_put(vma_policy(next));
+		vm_raw_write_end(next);
 		kmem_cache_free(vm_area_cachep, next);
 		/*
 		 * In mprotect's case 6 (see comments on vma_merge),
@@ -916,6 +941,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			 * "vma->vm_next" gap must be updated.
 			 */
 			next = vma->vm_next;
+			if (next)
+				vm_raw_write_begin(next);
 		} else {
 			/*
 			 * For the scope of the comment "next" and
@@ -962,6 +989,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	if (insert && file)
 		uprobe_mmap(insert);
 
+	if (next && next != vma)
+		vm_raw_write_end(next);
+	vm_raw_write_end(vma);
+
 	validate_mm(mm);
 
 	return 0;
-- 
2.7.4

* [PATCH v10 09/25] mm: protect VMA modifications using VMA sequence count
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (7 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 08/25] mm: VMA sequence count Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-23  7:19   ` Minchan Kim
  2018-04-17 14:33 ` [PATCH v10 10/25] mm: protect mremap() against SPF hanlder Laurent Dufour
                   ` (17 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

The VMA sequence count has been introduced to allow fast detection of
VMA modification when running a page fault handler without holding
the mmap_sem.

This patch provides protection against the VMA modification done in :
	- madvise()
	- mpol_rebind_policy()
	- vma_replace_policy()
	- change_prot_numa()
	- mlock(), munlock()
	- mprotect()
	- mmap_region()
	- collapse_huge_page()
	- userfaultfd registering services

In addition, VMA fields which will be read during the speculative fault
path need to be written using WRITE_ONCE to prevent writes from being
torn and intermediate values from being seen by other CPUs.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 fs/proc/task_mmu.c |  5 ++++-
 fs/userfaultfd.c   | 17 +++++++++++++----
 mm/khugepaged.c    |  3 +++
 mm/madvise.c       |  6 +++++-
 mm/mempolicy.c     | 51 ++++++++++++++++++++++++++++++++++-----------------
 mm/mlock.c         | 13 ++++++++-----
 mm/mmap.c          | 22 +++++++++++++---------
 mm/mprotect.c      |  4 +++-
 mm/swap_state.c    |  8 ++++++--
 9 files changed, 89 insertions(+), 40 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index c486ad4b43f0..aeb417f28839 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 					goto out_mm;
 				}
 				for (vma = mm->mmap; vma; vma = vma->vm_next) {
-					vma->vm_flags &= ~VM_SOFTDIRTY;
+					vm_write_begin(vma);
+					WRITE_ONCE(vma->vm_flags,
+						   vma->vm_flags & ~VM_SOFTDIRTY);
 					vma_set_page_prot(vma);
+					vm_write_end(vma);
 				}
 				downgrade_write(&mm->mmap_sem);
 				break;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index cec550c8468f..b8212ba17695 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -659,8 +659,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 
 	octx = vma->vm_userfaultfd_ctx.ctx;
 	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
+		vm_write_begin(vma);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
-		vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
+		WRITE_ONCE(vma->vm_flags,
+			   vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
+		vm_write_end(vma);
 		return 0;
 	}
 
@@ -885,8 +888,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 			vma = prev;
 		else
 			prev = vma;
-		vma->vm_flags = new_flags;
+		vm_write_begin(vma);
+		WRITE_ONCE(vma->vm_flags, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+		vm_write_end(vma);
 	}
 	up_write(&mm->mmap_sem);
 	mmput(mm);
@@ -1434,8 +1439,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
-		vma->vm_flags = new_flags;
+		vm_write_begin(vma);
+		WRITE_ONCE(vma->vm_flags, new_flags);
 		vma->vm_userfaultfd_ctx.ctx = ctx;
+		vm_write_end(vma);
 
 	skip:
 		prev = vma;
@@ -1592,8 +1599,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
-		vma->vm_flags = new_flags;
+		vm_write_begin(vma);
+		WRITE_ONCE(vma->vm_flags, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+		vm_write_end(vma);
 
 	skip:
 		prev = vma;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d7b2a4bf8671..0b28af4b950d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1011,6 +1011,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	if (mm_find_pmd(mm, address) != pmd)
 		goto out;
 
+	vm_write_begin(vma);
 	anon_vma_lock_write(vma->anon_vma);
 
 	pte = pte_offset_map(pmd, address);
@@ -1046,6 +1047,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
 		spin_unlock(pmd_ptl);
 		anon_vma_unlock_write(vma->anon_vma);
+		vm_write_end(vma);
 		result = SCAN_FAIL;
 		goto out;
 	}
@@ -1080,6 +1082,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);
+	vm_write_end(vma);
 
 	*hpage = NULL;
 
diff --git a/mm/madvise.c b/mm/madvise.c
index 4d3c922ea1a1..e328f7ab5942 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -184,7 +184,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
 	/*
 	 * vm_flags is protected by the mmap_sem held in write mode.
 	 */
-	vma->vm_flags = new_flags;
+	vm_write_begin(vma);
+	WRITE_ONCE(vma->vm_flags, new_flags);
+	vm_write_end(vma);
 out:
 	return error;
 }
@@ -450,9 +452,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
 		.private = tlb,
 	};
 
+	vm_write_begin(vma);
 	tlb_start_vma(tlb, vma);
 	walk_page_range(addr, end, &free_walk);
 	tlb_end_vma(tlb, vma);
+	vm_write_end(vma);
 }
 
 static int madvise_free_single_vma(struct vm_area_struct *vma,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9ac49ef17b4e..898d325c9fea 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -380,8 +380,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 	struct vm_area_struct *vma;
 
 	down_write(&mm->mmap_sem);
-	for (vma = mm->mmap; vma; vma = vma->vm_next)
+	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+		vm_write_begin(vma);
 		mpol_rebind_policy(vma->vm_policy, new);
+		vm_write_end(vma);
+	}
 	up_write(&mm->mmap_sem);
 }
 
@@ -554,9 +557,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 {
 	int nr_updated;
 
+	vm_write_begin(vma);
 	nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
 	if (nr_updated)
 		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
+	vm_write_end(vma);
 
 	return nr_updated;
 }
@@ -657,6 +662,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 	if (IS_ERR(new))
 		return PTR_ERR(new);
 
+	vm_write_begin(vma);
 	if (vma->vm_ops && vma->vm_ops->set_policy) {
 		err = vma->vm_ops->set_policy(vma, new);
 		if (err)
@@ -664,11 +670,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 	}
 
 	old = vma->vm_policy;
-	vma->vm_policy = new; /* protected by mmap_sem */
+	/*
+	 * The speculative page fault handler accesses this field without
+	 * holding the mmap_sem.
+	 */
+	WRITE_ONCE(vma->vm_policy, new);
+	vm_write_end(vma);
 	mpol_put(old);
 
 	return 0;
  err_out:
+	vm_write_end(vma);
 	mpol_put(new);
 	return err;
 }
@@ -1614,23 +1626,28 @@ COMPAT_SYSCALL_DEFINE4(migrate_pages, compat_pid_t, pid,
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 						unsigned long addr)
 {
-	struct mempolicy *pol = NULL;
+	struct mempolicy *pol;
 
-	if (vma) {
-		if (vma->vm_ops && vma->vm_ops->get_policy) {
-			pol = vma->vm_ops->get_policy(vma, addr);
-		} else if (vma->vm_policy) {
-			pol = vma->vm_policy;
+	if (!vma)
+		return NULL;
 
-			/*
-			 * shmem_alloc_page() passes MPOL_F_SHARED policy with
-			 * a pseudo vma whose vma->vm_ops=NULL. Take a reference
-			 * count on these policies which will be dropped by
-			 * mpol_cond_put() later
-			 */
-			if (mpol_needs_cond_ref(pol))
-				mpol_get(pol);
-		}
+	if (vma->vm_ops && vma->vm_ops->get_policy)
+		return vma->vm_ops->get_policy(vma, addr);
+
+	/*
+	 * This could be called without holding the mmap_sem in the
+	 * speculative page fault handler's path.
+	 */
+	pol = READ_ONCE(vma->vm_policy);
+	if (pol) {
+		/*
+		 * shmem_alloc_page() passes MPOL_F_SHARED policy with
+		 * a pseudo vma whose vma->vm_ops=NULL. Take a reference
+		 * count on these policies which will be dropped by
+		 * mpol_cond_put() later
+		 */
+		if (mpol_needs_cond_ref(pol))
+			mpol_get(pol);
 	}
 
 	return pol;
diff --git a/mm/mlock.c b/mm/mlock.c
index 74e5a6547c3d..c40285c94ced 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -445,7 +445,9 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
 void munlock_vma_pages_range(struct vm_area_struct *vma,
 			     unsigned long start, unsigned long end)
 {
-	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+	vm_write_begin(vma);
+	WRITE_ONCE(vma->vm_flags, vma->vm_flags & VM_LOCKED_CLEAR_MASK);
+	vm_write_end(vma);
 
 	while (start < end) {
 		struct page *page;
@@ -568,10 +570,11 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
 	 * It's okay if try_to_unmap_one unmaps a page just after we
 	 * set VM_LOCKED, populate_vma_page_range will bring it back.
 	 */
-
-	if (lock)
-		vma->vm_flags = newflags;
-	else
+	if (lock) {
+		vm_write_begin(vma);
+		WRITE_ONCE(vma->vm_flags, newflags);
+		vm_write_end(vma);
+	} else
 		munlock_vma_pages_range(vma, start, end);
 
 out:
diff --git a/mm/mmap.c b/mm/mmap.c
index 813e49589ea1..921f20cc6df0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -843,17 +843,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	}
 
 	if (start != vma->vm_start) {
-		vma->vm_start = start;
+		WRITE_ONCE(vma->vm_start, start);
 		start_changed = true;
 	}
 	if (end != vma->vm_end) {
-		vma->vm_end = end;
+		WRITE_ONCE(vma->vm_end, end);
 		end_changed = true;
 	}
-	vma->vm_pgoff = pgoff;
+	WRITE_ONCE(vma->vm_pgoff, pgoff);
 	if (adjust_next) {
-		next->vm_start += adjust_next << PAGE_SHIFT;
-		next->vm_pgoff += adjust_next;
+		WRITE_ONCE(next->vm_start,
+			   next->vm_start + (adjust_next << PAGE_SHIFT));
+		WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next);
 	}
 
 	if (root) {
@@ -1784,13 +1785,15 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 out:
 	perf_event_mmap(vma);
 
+	vm_write_begin(vma);
 	vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
 	if (vm_flags & VM_LOCKED) {
 		if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) ||
 					vma == get_gate_vma(current->mm)))
 			mm->locked_vm += (len >> PAGE_SHIFT);
 		else
-			vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+			WRITE_ONCE(vma->vm_flags,
+				   vma->vm_flags & VM_LOCKED_CLEAR_MASK);
 	}
 
 	if (file)
@@ -1803,9 +1806,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 * then new mapped in-place (which must be aimed as
 	 * a completely new data area).
 	 */
-	vma->vm_flags |= VM_SOFTDIRTY;
+	WRITE_ONCE(vma->vm_flags, vma->vm_flags | VM_SOFTDIRTY);
 
 	vma_set_page_prot(vma);
+	vm_write_end(vma);
 
 	return addr;
 
@@ -2434,8 +2438,8 @@ int expand_downwards(struct vm_area_struct *vma,
 					mm->locked_vm += grow;
 				vm_stat_account(mm, vma->vm_flags, grow);
 				anon_vma_interval_tree_pre_update_vma(vma);
-				vma->vm_start = address;
-				vma->vm_pgoff -= grow;
+				WRITE_ONCE(vma->vm_start, address);
+				WRITE_ONCE(vma->vm_pgoff, vma->vm_pgoff - grow);
 				anon_vma_interval_tree_post_update_vma(vma);
 				vma_gap_update(vma);
 				spin_unlock(&mm->page_table_lock);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 625608bc8962..83594cc68062 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -375,12 +375,14 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	 * vm_flags and vm_page_prot are protected by the mmap_sem
 	 * held in write mode.
 	 */
-	vma->vm_flags = newflags;
+	vm_write_begin(vma);
+	WRITE_ONCE(vma->vm_flags, newflags);
 	dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
 	vma_set_page_prot(vma);
 
 	change_protection(vma, start, end, vma->vm_page_prot,
 			  dirty_accountable, 0);
+	vm_write_end(vma);
 
 	/*
 	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
diff --git a/mm/swap_state.c b/mm/swap_state.c
index fe079756bb18..8a8a402ed59f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -575,6 +575,10 @@ static unsigned long swapin_nr_pages(unsigned long offset)
  * the readahead.
  *
  * Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
+ * This is needed to ensure the VMA will not be freed behind our back. In
+ * the case of the speculative page fault handler, this cannot happen, even
+ * if we don't hold the mmap_sem. Callees are assumed to take care of
+ * reading the VMA's fields using READ_ONCE() to get consistent values.
  */
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 				struct vm_fault *vmf)
@@ -668,9 +672,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
 				     unsigned long *start,
 				     unsigned long *end)
 {
-	*start = max3(lpfn, PFN_DOWN(vma->vm_start),
+	*start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
 		      PFN_DOWN(faddr & PMD_MASK));
-	*end = min3(rpfn, PFN_DOWN(vma->vm_end),
+	*end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
 		    PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 10/25] mm: protect mremap() against SPF handler
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (8 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 09/25] mm: protect VMA modifications using " Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 11/25] mm: protect SPF handler against anon_vma changes Laurent Dufour
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

If a thread is remapping an area while another one is faulting on the
destination area, the SPF handler may fetch the vma from the RB tree before
the pte has been moved by the other thread. This means that the moved ptes
will overwrite those created by the page fault handler, leading to leaked
pages.

	CPU 1				CPU2
	enter mremap()
	unmap the dest area
	copy_vma()			Enter speculative page fault handler
	   >> at this time the dest area is present in the RB tree
					fetch the vma matching dest area
					create a pte as the VMA matched
					Exit the SPF handler
					<data written in the new page>
	move_ptes()
	  > it is assumed that the dest area is empty,
 	  > the moved ptes overwrite the page mapped by CPU2.

To prevent that, when the VMA matching the dest area is extended or created
by copy_vma(), it should be marked as not available to the SPF handler.
The usual way to do so is to rely on vm_write_begin()/end().
This is already done in __vma_adjust(), called by copy_vma() (through
vma_merge()). But __vma_adjust() calls vm_write_end() before returning,
which creates a window for another thread.
This patch adds a new parameter to vma_merge() which is passed down to
__vma_adjust().
The assumption is that copy_vma() returns a vma which should be released
by its caller through vm_raw_write_end() once the ptes have been moved.

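To make the intended hand-off explicit, here is a rough user-space sketch
of the convention described above. The demo_* helpers are invented; only
the calling pattern mirrors copy_vma(..., keep_locked) and move_vma():

/*
 * Hypothetical sketch of the "returned still write-locked" convention.
 */
#include <stdlib.h>

struct demo_area {
	unsigned int seq;	/* stands in for the VMA sequence count */
};

static void demo_raw_write_begin(struct demo_area *a) { a->seq++; } /* odd */
static void demo_raw_write_end(struct demo_area *a)   { a->seq++; } /* even */

/* Stand-in for copy_vma() called with keep_locked == true. */
static struct demo_area *demo_copy_area(void)
{
	struct demo_area *area = calloc(1, sizeof(*area));

	if (!area)
		return NULL;
	/*
	 * Return the new area still marked "write in progress" so that a
	 * lock-free reader (the SPF handler) aborts instead of mapping
	 * pages in it before its ptes have been moved.
	 */
	demo_raw_write_begin(area);
	return area;
}

/* Stand-in for move_vma(): the caller ends the write once the move is done. */
static int demo_move_area(void)
{
	struct demo_area *area = demo_copy_area();

	if (!area)
		return -1;
	/* ... move the ptes into the new area here ... */
	demo_raw_write_end(area);	/* only now may readers use it */
	free(area);
	return 0;
}
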
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h | 24 +++++++++++++++++++-----
 mm/mmap.c          | 53 +++++++++++++++++++++++++++++++++++++++++------------
 mm/mremap.c        | 13 +++++++++++++
 3 files changed, 73 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 988daf7030c9..f6edd15563bc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2211,18 +2211,32 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
 
 /* mmap.c */
 extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
+
 extern int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
-	struct vm_area_struct *expand);
+	struct vm_area_struct *expand, bool keep_locked);
+
 static inline int vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert)
 {
-	return __vma_adjust(vma, start, end, pgoff, insert, NULL);
+	return __vma_adjust(vma, start, end, pgoff, insert, NULL, false);
 }
-extern struct vm_area_struct *vma_merge(struct mm_struct *,
+
+extern struct vm_area_struct *__vma_merge(struct mm_struct *mm,
+	struct vm_area_struct *prev, unsigned long addr, unsigned long end,
+	unsigned long vm_flags, struct anon_vma *anon, struct file *file,
+	pgoff_t pgoff, struct mempolicy *mpol,
+	struct vm_userfaultfd_ctx uff, bool keep_locked);
+
+static inline struct vm_area_struct *vma_merge(struct mm_struct *mm,
 	struct vm_area_struct *prev, unsigned long addr, unsigned long end,
-	unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
-	struct mempolicy *, struct vm_userfaultfd_ctx);
+	unsigned long vm_flags, struct anon_vma *anon, struct file *file,
+	pgoff_t off, struct mempolicy *pol, struct vm_userfaultfd_ctx uff)
+{
+	return __vma_merge(mm, prev, addr, end, vm_flags, anon, file, off,
+			   pol, uff, false);
+}
+
 extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
 extern int __split_vma(struct mm_struct *, struct vm_area_struct *,
 	unsigned long addr, int new_below);
diff --git a/mm/mmap.c b/mm/mmap.c
index 921f20cc6df0..5601f1ef8bb9 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -680,7 +680,7 @@ static inline void __vma_unlink_prev(struct mm_struct *mm,
  */
 int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert,
-	struct vm_area_struct *expand)
+	struct vm_area_struct *expand, bool keep_locked)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *next = vma->vm_next, *orig_vma = vma;
@@ -796,8 +796,12 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 
 			importer->anon_vma = exporter->anon_vma;
 			error = anon_vma_clone(importer, exporter);
-			if (error)
+			if (error) {
+				if (next && next != vma)
+					vm_raw_write_end(next);
+				vm_raw_write_end(vma);
 				return error;
+			}
 		}
 	}
 again:
@@ -992,7 +996,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 
 	if (next && next != vma)
 		vm_raw_write_end(next);
-	vm_raw_write_end(vma);
+	if (!keep_locked)
+		vm_raw_write_end(vma);
 
 	validate_mm(mm);
 
@@ -1128,12 +1133,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
  * parameter) may establish ptes with the wrong permissions of NNNN
  * instead of the right permissions of XXXX.
  */
-struct vm_area_struct *vma_merge(struct mm_struct *mm,
+struct vm_area_struct *__vma_merge(struct mm_struct *mm,
 			struct vm_area_struct *prev, unsigned long addr,
 			unsigned long end, unsigned long vm_flags,
 			struct anon_vma *anon_vma, struct file *file,
 			pgoff_t pgoff, struct mempolicy *policy,
-			struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
+			struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+			bool keep_locked)
 {
 	pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
 	struct vm_area_struct *area, *next;
@@ -1181,10 +1187,11 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 							/* cases 1, 6 */
 			err = __vma_adjust(prev, prev->vm_start,
 					 next->vm_end, prev->vm_pgoff, NULL,
-					 prev);
+					 prev, keep_locked);
 		} else					/* cases 2, 5, 7 */
 			err = __vma_adjust(prev, prev->vm_start,
-					 end, prev->vm_pgoff, NULL, prev);
+					   end, prev->vm_pgoff, NULL, prev,
+					   keep_locked);
 		if (err)
 			return NULL;
 		khugepaged_enter_vma_merge(prev, vm_flags);
@@ -1201,10 +1208,12 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 					     vm_userfaultfd_ctx)) {
 		if (prev && addr < prev->vm_end)	/* case 4 */
 			err = __vma_adjust(prev, prev->vm_start,
-					 addr, prev->vm_pgoff, NULL, next);
+					 addr, prev->vm_pgoff, NULL, next,
+					 keep_locked);
 		else {					/* cases 3, 8 */
 			err = __vma_adjust(area, addr, next->vm_end,
-					 next->vm_pgoff - pglen, NULL, next);
+					 next->vm_pgoff - pglen, NULL, next,
+					 keep_locked);
 			/*
 			 * In case 3 area is already equal to next and
 			 * this is a noop, but in case 8 "area" has
@@ -3167,9 +3176,20 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 
 	if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent))
 		return NULL;	/* should never get here */
-	new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
-			    vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			    vma->vm_userfaultfd_ctx);
+
+	/* There are 3 cases to manage here:
+	 *     AAAA            AAAA              AAAA              AAAA
+	 * PPPP....      PPPP......NNNN      PPPP....NNNN      PP........NN
+	 * PPPPPPPP(A)   PPPP..NNNNNNNN(B)   PPPPPPPPPPPP(1)       NULL
+	 *                                   PPPPPPPPNNNN(2)
+	 *                                   PPPPNNNNNNNN(3)
+	 *
+	 * new_vma == prev in case A,1,2
+	 * new_vma == next in case B,3
+	 */
+	new_vma = __vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
+			      vma->anon_vma, vma->vm_file, pgoff,
+			      vma_policy(vma), vma->vm_userfaultfd_ctx, true);
 	if (new_vma) {
 		/*
 		 * Source vma may have been merged into new_vma
@@ -3209,6 +3229,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 			get_file(new_vma->vm_file);
 		if (new_vma->vm_ops && new_vma->vm_ops->open)
 			new_vma->vm_ops->open(new_vma);
+		/*
+		 * As the VMA is linked right now, it may be hit by the
+		 * speculative page fault handler. But we don't want it to
+		 * start mapping pages in this area until the caller has
+		 * potentially moved the ptes from the moved VMA. To prevent
+		 * that we protect it right now, and let the caller unprotect
+		 * it once the move is done.
+		 */
+		vm_raw_write_begin(new_vma);
 		vma_link(mm, new_vma, prev, rb_link, rb_parent);
 		*need_rmap_locks = false;
 	}
diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..8ed1a1d6eaed 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -302,6 +302,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 	if (!new_vma)
 		return -ENOMEM;
 
+	/* new_vma is returned protected by copy_vma, to prevent a speculative
+	 * page fault from being handled in the destination area before we move
+	 * the ptes. Now, we must also protect the source VMA since we don't
+	 * want pages to be mapped behind our back while we are copying the PTEs.
+	 */
+	if (vma != new_vma)
+		vm_raw_write_begin(vma);
+
 	moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len,
 				     need_rmap_locks);
 	if (moved_len < old_len) {
@@ -318,6 +326,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		 */
 		move_page_tables(new_vma, new_addr, vma, old_addr, moved_len,
 				 true);
+		if (vma != new_vma)
+			vm_raw_write_end(vma);
 		vma = new_vma;
 		old_len = new_len;
 		old_addr = new_addr;
@@ -326,7 +336,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		mremap_userfaultfd_prep(new_vma, uf);
 		arch_remap(mm, old_addr, old_addr + old_len,
 			   new_addr, new_addr + new_len);
+		if (vma != new_vma)
+			vm_raw_write_end(vma);
 	}
+	vm_raw_write_end(new_vma);
 
 	/* Conceal VM_ACCOUNT so old reservation is not undone */
 	if (vm_flags & VM_ACCOUNT) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 11/25] mm: protect SPF handler against anon_vma changes
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (9 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 10/25] mm: protect mremap() against SPF handler Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure Laurent Dufour
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

The speculative page fault handler must be protected against anon_vma
changes. This is because page_add_new_anon_rmap() is called during the
speculative path.

In addition, don't try the speculative page fault if the VMA doesn't have
an anon_vma structure allocated, because its allocation should be
protected by the mmap_sem.

In __vma_adjust() when importer->anon_vma is set, there is no need to
protect against speculative page faults since speculative page fault
is aborted if the vma->anon_vma is not set.

When calling page_add_new_anon_rmap() vma->anon_vma is necessarily
valid since we checked for it when locking the pte, and the anon_vma is
only removed once the pte is unlocked. So even if the speculative page
fault handler runs concurrently with do_munmap(), the pte is locked in
unmap_region() - through unmap_vmas() - and the anon_vma is unlinked
later. Since the VMA sequence counter is updated in unmap_page_range()
before the pte is locked, and again in free_pgtables(), the change will
be detected when the speculative handler locks the pte.

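The ordering argument can be pictured with a small user-space sketch: the
speculative path snapshots the sequence count first and re-validates it
once the "pte lock" is held, before committing anything. All demo_* names
are invented and this is only an illustration, not the kernel code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct demo_vma {
	atomic_uint seq;	/* bumped by the unmap path before teardown */
	pthread_mutex_t ptl;	/* stands in for the pte lock */
};

static bool demo_speculative_commit(struct demo_vma *v)
{
	unsigned int snap = atomic_load(&v->seq);
	bool committed = false;

	if (snap & 1)			/* a writer is already in progress */
		return false;

	/* ... speculative work, no lock held, e.g. allocate the page ... */

	pthread_mutex_lock(&v->ptl);
	if (atomic_load(&v->seq) == snap) {
		/* ... safe to commit, e.g. install the pte ... */
		committed = true;
	}
	pthread_mutex_unlock(&v->ptl);

	return committed;	/* false: fall back to the classic fault path */
}
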
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index f7fed053df80..f76f5027d251 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -624,7 +624,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * Hide vma from rmap and truncate_pagecache before freeing
 		 * pgtables
 		 */
+		vm_write_begin(vma);
 		unlink_anon_vmas(vma);
+		vm_write_end(vma);
 		unlink_file_vma(vma);
 
 		if (is_vm_hugetlb_page(vma)) {
@@ -638,7 +640,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
+				vm_write_begin(vma);
 				unlink_anon_vmas(vma);
+				vm_write_end(vma);
 				unlink_file_vma(vma);
 			}
 			free_pgd_range(tlb, addr, vma->vm_end,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (10 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 11/25] mm: protect SPF handler against anon_vma changes Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-23  7:42   ` Minchan Kim
  2018-04-17 14:33 ` [PATCH v10 13/25] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
                   ` (14 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

When handling a speculative page fault, the vma->vm_flags and
vma->vm_page_prot fields are read once the page table lock is released, so
there is no longer any guarantee that these fields will not change behind
our back. They are therefore saved in the vm_fault structure before the
VMA is checked for changes.

This patch also sets these fields in hugetlb_no_page() and
__collapse_huge_page_swapin() even if they are not needed by the callees.

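The pattern boils down to reading the racy fields once into the per-fault
context and consulting only that snapshot afterwards. A minimal,
hypothetical user-space sketch (demo_* names are invented; this is not the
kernel's struct vm_fault):

#include <stdbool.h>

#define DEMO_WRITE	0x1UL
#define DEMO_SHARED	0x2UL

struct demo_vma {
	unsigned long flags;		/* may change behind our back */
};

struct demo_fault {
	struct demo_vma *vma;
	unsigned long vma_flags;	/* snapshot taken at fault entry */
};

static void demo_fault_init(struct demo_fault *df, struct demo_vma *vma)
{
	df->vma = vma;
	/* single atomic read; later code never looks at vma->flags again */
	df->vma_flags = __atomic_load_n(&vma->flags, __ATOMIC_RELAXED);
}

static bool demo_fault_is_private_write(const struct demo_fault *df)
{
	/* decided on the consistent snapshot, not on df->vma->flags */
	return (df->vma_flags & (DEMO_WRITE | DEMO_SHARED)) == DEMO_WRITE;
}
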
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h | 10 ++++++++--
 mm/huge_memory.c   |  6 +++---
 mm/hugetlb.c       |  2 ++
 mm/khugepaged.c    |  2 ++
 mm/memory.c        | 50 ++++++++++++++++++++++++++------------------------
 mm/migrate.c       |  2 +-
 6 files changed, 42 insertions(+), 30 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f6edd15563bc..c65205c8c558 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -367,6 +367,12 @@ struct vm_fault {
 					 * page table to avoid allocation from
 					 * atomic context.
 					 */
+	/*
+	 * These entries are required when handling speculative page fault.
+	 * This way the page handling is done using consistent field values.
+	 */
+	unsigned long vma_flags;
+	pgprot_t vma_page_prot;
 };
 
 /* page entry size for vm->huge_fault() */
@@ -687,9 +693,9 @@ void free_compound_page(struct page *page);
  * pte_mkwrite.  But get_user_pages can cause write faults for mappings
  * that do not have writing enabled, when used by access_process_vm.
  */
-static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
 {
-	if (likely(vma->vm_flags & VM_WRITE))
+	if (likely(vma_flags & VM_WRITE))
 		pte = pte_mkwrite(pte);
 	return pte;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a3a1815f8e11..da2afda67e68 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1194,8 +1194,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
 
 	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
 		pte_t entry;
-		entry = mk_pte(pages[i], vma->vm_page_prot);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = mk_pte(pages[i], vmf->vma_page_prot);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -2168,7 +2168,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 				entry = pte_swp_mksoft_dirty(entry);
 		} else {
 			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
-			entry = maybe_mkwrite(entry, vma);
+			entry = maybe_mkwrite(entry, vma->vm_flags);
 			if (!write)
 				entry = pte_wrprotect(entry);
 			if (!young)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 218679138255..774864153407 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3718,6 +3718,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				.vma = vma,
 				.address = address,
 				.flags = flags,
+				.vma_flags = vma->vm_flags,
+				.vma_page_prot = vma->vm_page_prot,
 				/*
 				 * Hard to debug if it ends up being
 				 * used by a callee that assumes
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0b28af4b950d..2b02a9f9589e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -887,6 +887,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		.flags = FAULT_FLAG_ALLOW_RETRY,
 		.pmd = pmd,
 		.pgoff = linear_page_index(vma, address),
+		.vma_flags = vma->vm_flags,
+		.vma_page_prot = vma->vm_page_prot,
 	};
 
 	/* we only decide to swapin, if there is enough young ptes */
diff --git a/mm/memory.c b/mm/memory.c
index f76f5027d251..2fb9920e06a5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1826,7 +1826,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 out_mkwrite:
 	if (mkwrite) {
 		entry = pte_mkyoung(entry);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
 	}
 
 	set_pte_at(mm, addr, pte, entry);
@@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = pte_mkyoung(vmf->orig_pte);
-	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
 	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
 		update_mmu_cache(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2548,8 +2548,8 @@ static int wp_page_copy(struct vm_fault *vmf)
 			inc_mm_counter_fast(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
-		entry = mk_pte(new_page, vma->vm_page_prot);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = mk_pte(new_page, vmf->vma_page_prot);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
 		/*
 		 * Clear the pte entry and flush it first, before updating the
 		 * pte with the new entry. This will avoid a race condition
@@ -2614,7 +2614,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		 * Don't let another task, with possibly unlocked vma,
 		 * keep the mlocked page.
 		 */
-		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
+		if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
 			lock_page(old_page);	/* LRU manipulation */
 			if (PageMlocked(old_page))
 				munlock_vma_page(old_page);
@@ -2650,7 +2650,7 @@ static int wp_page_copy(struct vm_fault *vmf)
  */
 int finish_mkwrite_fault(struct vm_fault *vmf)
 {
-	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
+	WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
 	if (!pte_map_lock(vmf))
 		return VM_FAULT_RETRY;
 	/*
@@ -2752,7 +2752,7 @@ static int do_wp_page(struct vm_fault *vmf)
 		 * We should not cow pages in a shared writeable mapping.
 		 * Just mark the pages writable and/or call ops->pfn_mkwrite.
 		 */
-		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+		if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
 				     (VM_WRITE|VM_SHARED))
 			return wp_pfn_shared(vmf);
 
@@ -2799,7 +2799,7 @@ static int do_wp_page(struct vm_fault *vmf)
 			return VM_FAULT_WRITE;
 		}
 		unlock_page(vmf->page);
-	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+	} else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
 	}
@@ -3078,9 +3078,9 @@ int do_swap_page(struct vm_fault *vmf)
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
-	pte = mk_pte(page, vma->vm_page_prot);
+	pte = mk_pte(page, vmf->vma_page_prot);
 	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
-		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+		pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
 		vmf->flags &= ~FAULT_FLAG_WRITE;
 		ret |= VM_FAULT_WRITE;
 		exclusive = RMAP_EXCLUSIVE;
@@ -3105,7 +3105,7 @@ int do_swap_page(struct vm_fault *vmf)
 
 	swap_free(entry);
 	if (mem_cgroup_swap_full(page) ||
-	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
+	    (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
 		try_to_free_swap(page);
 	unlock_page(page);
 	if (page != swapcache && swapcache) {
@@ -3163,7 +3163,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	pte_t entry;
 
 	/* File mapping without ->vm_ops ? */
-	if (vma->vm_flags & VM_SHARED)
+	if (vmf->vma_flags & VM_SHARED)
 		return VM_FAULT_SIGBUS;
 
 	/*
@@ -3187,7 +3187,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm)) {
 		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
-						vma->vm_page_prot));
+						vmf->vma_page_prot));
 		if (!pte_map_lock(vmf))
 			return VM_FAULT_RETRY;
 		if (!pte_none(*vmf->pte))
@@ -3220,8 +3220,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	 */
 	__SetPageUptodate(page);
 
-	entry = mk_pte(page, vma->vm_page_prot);
-	if (vma->vm_flags & VM_WRITE)
+	entry = mk_pte(page, vmf->vma_page_prot);
+	if (vmf->vma_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
 	if (!pte_map_lock(vmf)) {
@@ -3418,7 +3418,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
 	for (i = 0; i < HPAGE_PMD_NR; i++)
 		flush_icache_page(vma, page + i);
 
-	entry = mk_huge_pmd(page, vma->vm_page_prot);
+	entry = mk_huge_pmd(page, vmf->vma_page_prot);
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
@@ -3492,11 +3492,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		return VM_FAULT_NOPAGE;
 
 	flush_icache_page(vma, page);
-	entry = mk_pte(page, vma->vm_page_prot);
+	entry = mk_pte(page, vmf->vma_page_prot);
 	if (write)
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
 	/* copy-on-write page */
-	if (write && !(vma->vm_flags & VM_SHARED)) {
+	if (write && !(vmf->vma_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
@@ -3535,7 +3535,7 @@ int finish_fault(struct vm_fault *vmf)
 
 	/* Did we COW the page? */
 	if ((vmf->flags & FAULT_FLAG_WRITE) &&
-	    !(vmf->vma->vm_flags & VM_SHARED))
+	    !(vmf->vma_flags & VM_SHARED))
 		page = vmf->cow_page;
 	else
 		page = vmf->page;
@@ -3789,7 +3789,7 @@ static int do_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_SIGBUS;
 	else if (!(vmf->flags & FAULT_FLAG_WRITE))
 		ret = do_read_fault(vmf);
-	else if (!(vma->vm_flags & VM_SHARED))
+	else if (!(vmf->vma_flags & VM_SHARED))
 		ret = do_cow_fault(vmf);
 	else
 		ret = do_shared_fault(vmf);
@@ -3846,7 +3846,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * accessible ptes, some can allow access by kernel mode.
 	 */
 	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
-	pte = pte_modify(pte, vma->vm_page_prot);
+	pte = pte_modify(pte, vmf->vma_page_prot);
 	pte = pte_mkyoung(pte);
 	if (was_writable)
 		pte = pte_mkwrite(pte);
@@ -3880,7 +3880,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * Flag if the page is shared between multiple address spaces. This
 	 * is later used when determining whether to group tasks together
 	 */
-	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+	if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
 	last_cpupid = page_cpupid_last(page);
@@ -3925,7 +3925,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
 		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
 
 	/* COW handled on pte level: split pmd */
-	VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
+	VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
 	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
 
 	return VM_FAULT_FALLBACK;
@@ -4072,6 +4072,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		.flags = flags,
 		.pgoff = linear_page_index(vma, address),
 		.gfp_mask = __get_fault_gfp_mask(vma),
+		.vma_flags = vma->vm_flags,
+		.vma_page_prot = vma->vm_page_prot,
 	};
 	unsigned int dirty = flags & FAULT_FLAG_WRITE;
 	struct mm_struct *mm = vma->vm_mm;
diff --git a/mm/migrate.c b/mm/migrate.c
index bb6367d70a3e..44d7007cfc1c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		 */
 		entry = pte_to_swp_entry(*pvmw.pte);
 		if (is_write_migration_entry(entry))
-			pte = maybe_mkwrite(pte, vma);
+			pte = maybe_mkwrite(pte, vma->vm_flags);
 
 		if (unlikely(is_zone_device_page(new))) {
 			if (is_device_private_page(new)) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 13/25] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (11 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 14/25] mm: introduce __lru_cache_add_active_or_unevictable Laurent Dufour
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

migrate_misplaced_page() is only called during page fault handling, so
it's better to pass a pointer to the struct vm_fault instead of the vma.

This way, the vma->vm_flags value saved in the vm_fault structure can be
used during the speculative page fault path.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/migrate.h | 4 ++--
 mm/memory.c             | 2 +-
 mm/migrate.c            | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f2b4abbca55e..fd4c3ab7bd9c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -126,14 +126,14 @@ static inline void __ClearPageMovable(struct page *page)
 #ifdef CONFIG_NUMA_BALANCING
 extern bool pmd_trans_migrating(pmd_t pmd);
 extern int migrate_misplaced_page(struct page *page,
-				  struct vm_area_struct *vma, int node);
+				  struct vm_fault *vmf, int node);
 #else
 static inline bool pmd_trans_migrating(pmd_t pmd)
 {
 	return false;
 }
 static inline int migrate_misplaced_page(struct page *page,
-					 struct vm_area_struct *vma, int node)
+					 struct vm_fault *vmf, int node)
 {
 	return -EAGAIN; /* can't migrate now */
 }
diff --git a/mm/memory.c b/mm/memory.c
index 2fb9920e06a5..e28cbbae3f3d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3894,7 +3894,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	}
 
 	/* Migrate to the requested node */
-	migrated = migrate_misplaced_page(page, vma, target_nid);
+	migrated = migrate_misplaced_page(page, vmf, target_nid);
 	if (migrated) {
 		page_nid = target_nid;
 		flags |= TNF_MIGRATED;
diff --git a/mm/migrate.c b/mm/migrate.c
index 44d7007cfc1c..5d5cf9b5ac16 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1944,7 +1944,7 @@ bool pmd_trans_migrating(pmd_t pmd)
  * node. Caller is expected to have an elevated reference count on
  * the page that will be dropped by this function before returning.
  */
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+int migrate_misplaced_page(struct page *page, struct vm_fault *vmf,
 			   int node)
 {
 	pg_data_t *pgdat = NODE_DATA(node);
@@ -1957,7 +1957,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	 * with execute permissions as they are probably shared libraries.
 	 */
 	if (page_mapcount(page) != 1 && page_is_file_cache(page) &&
-	    (vma->vm_flags & VM_EXEC))
+	    (vmf->vma_flags & VM_EXEC))
 		goto out;
 
 	/*
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 14/25] mm: introduce __lru_cache_add_active_or_unevictable
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (12 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 13/25] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 15/25] mm: introduce __vm_normal_page() Laurent Dufour
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

The speculative page fault handler, which runs without holding the
mmap_sem, calls lru_cache_add_active_or_unevictable(), but the vm_flags
value is not guaranteed to remain constant.
Introduce __lru_cache_add_active_or_unevictable(), which takes the vma
flags value as a parameter instead of the vma pointer.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/swap.h | 10 ++++++++--
 mm/memory.c          |  8 ++++----
 mm/swap.c            |  6 +++---
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 1985940af479..a7dc37e0e405 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -338,8 +338,14 @@ extern void deactivate_file_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
 extern void swap_setup(void);
 
-extern void lru_cache_add_active_or_unevictable(struct page *page,
-						struct vm_area_struct *vma);
+extern void __lru_cache_add_active_or_unevictable(struct page *page,
+						unsigned long vma_flags);
+
+static inline void lru_cache_add_active_or_unevictable(struct page *page,
+						struct vm_area_struct *vma)
+{
+	return __lru_cache_add_active_or_unevictable(page, vma->vm_flags);
+}
 
 /* linux/mm/vmscan.c */
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
diff --git a/mm/memory.c b/mm/memory.c
index e28cbbae3f3d..47af9e97f02a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2559,7 +2559,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
 		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
 		/*
 		 * We call the notify macro here because, when using secondary
 		 * mmu page tables (such as kvm shadow page tables), we want the
@@ -3095,7 +3095,7 @@ int do_swap_page(struct vm_fault *vmf)
 	if (unlikely(page != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	} else {
 		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
 		mem_cgroup_commit_charge(page, memcg, true, false);
@@ -3246,7 +3246,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
-	lru_cache_add_active_or_unevictable(page, vma);
+	__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
@@ -3500,7 +3500,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
 		page_add_file_rmap(page, false);
diff --git a/mm/swap.c b/mm/swap.c
index 3dd518832096..f2f9c587246f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -455,12 +455,12 @@ void lru_cache_add(struct page *page)
  * directly back onto it's zone's unevictable list, it does NOT use a
  * per cpu pagevec.
  */
-void lru_cache_add_active_or_unevictable(struct page *page,
-					 struct vm_area_struct *vma)
+void __lru_cache_add_active_or_unevictable(struct page *page,
+					   unsigned long vma_flags)
 {
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
+	if (likely((vma_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
 		SetPageActive(page);
 	else if (!TestSetPageMlocked(page)) {
 		/*
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 15/25] mm: introduce __vm_normal_page()
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (13 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 14/25] mm: introduce __lru_cache_add_active_or_unevictable Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 16/25] mm: introduce __page_add_new_anon_rmap() Laurent Dufour
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

When dealing with the speculative fault path, we should use the VMA's
field values cached in the vm_fault structure.

Currently vm_normal_page() uses the pointer to the VMA to fetch the
vm_flags value. This patch provides a new __vm_normal_page() which
receives the vm_flags value as a parameter.

Note: the speculative path is only turned on for architectures providing
support for the special PTE flag, so only the first block of
__vm_normal_page() is used during the speculative path.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h | 18 +++++++++++++++---
 mm/memory.c        | 25 ++++++++++++++++---------
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c65205c8c558..f967bf84094f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1275,9 +1275,21 @@ static inline void INIT_VMA(struct vm_area_struct *vma)
 #endif
 }
 
-struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
-			     pte_t pte, bool with_public_device);
-#define vm_normal_page(vma, addr, pte) _vm_normal_page(vma, addr, pte, false)
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+			      pte_t pte, bool with_public_device,
+			      unsigned long vma_flags);
+static inline struct page *_vm_normal_page(struct vm_area_struct *vma,
+					    unsigned long addr, pte_t pte,
+					    bool with_public_device)
+{
+	return __vm_normal_page(vma, addr, pte, with_public_device,
+				vma->vm_flags);
+}
+static inline struct page *vm_normal_page(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t pte)
+{
+	return _vm_normal_page(vma, addr, pte, false);
+}
 
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd);
diff --git a/mm/memory.c b/mm/memory.c
index 47af9e97f02a..d9146a0c3d25 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -780,7 +780,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 }
 
 /*
- * vm_normal_page -- This function gets the "struct page" associated with a pte.
+ * __vm_normal_page -- This function gets the "struct page" associated with
+ * a pte.
  *
  * "Special" mappings do not wish to be associated with a "struct page" (either
  * it doesn't exist, or it exists but they don't want to touch it). In this
@@ -826,8 +827,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 #else
 # define HAVE_PTE_SPECIAL 0
 #endif
-struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
-			     pte_t pte, bool with_public_device)
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+			      pte_t pte, bool with_public_device,
+			      unsigned long vma_flags)
 {
 	unsigned long pfn = pte_pfn(pte);
 
@@ -836,7 +838,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			goto check_pfn;
 		if (vma->vm_ops && vma->vm_ops->find_special_page)
 			return vma->vm_ops->find_special_page(vma, addr);
-		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+		if (vma_flags & (VM_PFNMAP | VM_MIXEDMAP))
 			return NULL;
 		if (is_zero_pfn(pfn))
 			return NULL;
@@ -867,9 +869,13 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	}
 
 	/* !HAVE_PTE_SPECIAL case follows: */
+	/*
+	 * This part should never get called when CONFIG_SPECULATIVE_PAGE_FAULT
+	 * is set. This is mainly because we can't rely on vm_start.
+	 */
 
-	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
-		if (vma->vm_flags & VM_MIXEDMAP) {
+	if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+		if (vma_flags & VM_MIXEDMAP) {
 			if (!pfn_valid(pfn))
 				return NULL;
 			goto out;
@@ -878,7 +884,7 @@ struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			off = (addr - vma->vm_start) >> PAGE_SHIFT;
 			if (pfn == vma->vm_pgoff + off)
 				return NULL;
-			if (!is_cow_mapping(vma->vm_flags))
+			if (!is_cow_mapping(vma_flags))
 				return NULL;
 		}
 	}
@@ -2743,7 +2749,8 @@ static int do_wp_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 
-	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
+	vmf->page = __vm_normal_page(vma, vmf->address, vmf->orig_pte, false,
+				     vmf->vma_flags);
 	if (!vmf->page) {
 		/*
 		 * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a
@@ -3853,7 +3860,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 
-	page = vm_normal_page(vma, vmf->address, pte);
+	page = __vm_normal_page(vma, vmf->address, pte, false, vmf->vma_flags);
 	if (!page) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 16/25] mm: introduce __page_add_new_anon_rmap()
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (14 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 15/25] mm: introduce __vm_normal_page() Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock Laurent Dufour
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

When dealing with the speculative page fault handler, we may race with a
VMA being split or merged. In this case the vma->vm_start and vma->vm_end
fields may not match the address where the page fault is occurring.

This can only happen when the VMA is split, but in that case the
anon_vma pointer of the new VMA will be the same as the original one,
because in __split_vma() the new->anon_vma is set to src->anon_vma when
*new = *vma.

So even if the VMA boundaries are not correct, the anon_vma pointer is
still valid.

If the VMA has been merged, then the VMA in which it has been merged
must have the same anon_vma pointer, otherwise the merge couldn't have
been done.

So in all cases we know that the anon_vma is valid: we have checked,
before starting the speculative page fault, that the anon_vma pointer is
valid for this VMA. Since there is an anon_vma, at some point a page has
been backed by it, and before the VMA is cleaned the page table lock has
to be grabbed to clean the PTE; the anon_vma field is checked again once
the PTE is locked.

This patch introduces a new __page_add_new_anon_rmap() service which
doesn't check the VMA boundaries, and a new inline wrapper which does
the check.

When called from a page fault handler which is not speculative, there is
a guarantee that vm_start and vm_end match the faulting address, so this
check is useless. In the context of the speculative page fault handler,
this check may fail, but the anon_vma is still valid as explained above.

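The shape of the change is a common idiom: keep the checking wrapper for
ordinary callers and provide a double-underscore variant without the
assertion for callers that cannot rely on the checked invariant. A hedged
sketch with invented names, where assert() merely stands in for
VM_BUG_ON_VMA():

#include <assert.h>

struct demo_vma {
	unsigned long vm_start;
	unsigned long vm_end;
};

/* no boundary check: usable from the speculative path */
static void __demo_add_new_rmap(struct demo_vma *vma, unsigned long address)
{
	(void)vma;
	(void)address;
	/* ... add the reverse mapping; only anon_vma-like state matters ... */
}

/* regular page fault callers keep the sanity check */
static inline void demo_add_new_rmap(struct demo_vma *vma,
				     unsigned long address)
{
	assert(address >= vma->vm_start && address < vma->vm_end);
	__demo_add_new_rmap(vma, address);
}
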
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/rmap.h | 12 ++++++++++--
 mm/memory.c          |  8 ++++----
 mm/rmap.c            |  5 ++---
 3 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..a5d282573093 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -174,8 +174,16 @@ void page_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long, bool);
 void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
 			   unsigned long, int);
-void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
-		unsigned long, bool);
+void __page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
+			      unsigned long, bool);
+static inline void page_add_new_anon_rmap(struct page *page,
+					  struct vm_area_struct *vma,
+					  unsigned long address, bool compound)
+{
+	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	__page_add_new_anon_rmap(page, vma, address, compound);
+}
+
 void page_add_file_rmap(struct page *, bool);
 void page_remove_rmap(struct page *, bool);
 
diff --git a/mm/memory.c b/mm/memory.c
index d9146a0c3d25..9c220ac0e2c5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2563,7 +2563,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		 * thread doing COW.
 		 */
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
-		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
 		__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
 		/*
@@ -3100,7 +3100,7 @@ int do_swap_page(struct vm_fault *vmf)
 
 	/* ksm created a completely new copy */
 	if (unlikely(page != swapcache && swapcache)) {
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
 		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	} else {
@@ -3251,7 +3251,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	}
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, vma, vmf->address, false);
+	__page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 setpte:
@@ -3505,7 +3505,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 	/* copy-on-write page */
 	if (write && !(vmf->vma_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
 		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	} else {
diff --git a/mm/rmap.c b/mm/rmap.c
index 8d5337fed37b..9307f6140796 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1136,7 +1136,7 @@ void do_page_add_anon_rmap(struct page *page,
 }
 
 /**
- * page_add_new_anon_rmap - add pte mapping to a new anonymous page
+ * __page_add_new_anon_rmap - add pte mapping to a new anonymous page
  * @page:	the page to add the mapping to
  * @vma:	the vm area in which the mapping is added
  * @address:	the user virtual address mapped
@@ -1146,12 +1146,11 @@ void do_page_add_anon_rmap(struct page *page,
  * This means the inc-and-test can be bypassed.
  * Page does not have to be locked.
  */
-void page_add_new_anon_rmap(struct page *page,
+void __page_add_new_anon_rmap(struct page *page,
 	struct vm_area_struct *vma, unsigned long address, bool compound)
 {
 	int nr = compound ? hpage_nr_pages(page) : 1;
 
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
 	__SetPageSwapBacked(page);
 	if (compound) {
 		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (15 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 16/25] mm: introduce __page_add_new_anon_rmap() Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-30 18:47   ` Punit Agrawal
  2018-04-17 14:33 ` [PATCH v10 18/25] mm: provide speculative fault infrastructure Laurent Dufour
                   ` (9 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

This change is inspired by Peter's proposal patch [1] which protected
the VMA using SRCU. Unfortunately, SRCU does not scale well in that
particular case, as it introduces a major performance degradation due to
excessive scheduling operations.

To allow access to the mm_rb tree without grabbing the mmap_sem, this
patch protects its access using a rwlock. As the mm_rb tree search is
O(log n), it is reasonable to protect it with such a lock. The VMA cache
is not protected by the new rwlock and must not be used without holding
the mmap_sem.

To allow the picked VMA structure to be used once the rwlock is released, a
use count is added to the VMA structure. When the VMA is allocated it is
set to 1.  Each time the VMA is picked with the rwlock held its use count
is incremented. Each time the VMA is released it is decremented. When the
use count hits zero, the VMA is no longer used and should be freed.

This patch prepares for two kinds of VMA access:
 - as usual, under the control of the mmap_sem,
 - without holding the mmap_sem for the speculative page fault handler.

Access done under the control of the mmap_sem doesn't require grabbing
the rwlock for read access to the mm_rb tree, but write access must
still be done under the protection of the rwlock. This affects inserting
and removing elements in the RB tree.

The patch introduces 2 new functions:
 - get_vma() to find a VMA based on an address while holding the new
   rwlock,
 - put_vma() to release the VMA when it is no longer used.
These services are designed to be used when accesses are made to the RB
tree without holding the mmap_sem.
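
For illustration only (this is the pattern used by the speculative page
fault handler introduced later in the series), a lockless lookup is
expected to look roughly like this sketch:

	/* Sketch only: lockless VMA lookup using the new services. */
	struct vm_area_struct *vma;

	vma = get_vma(mm, address);	/* reference taken under mm_rb_lock */
	if (!vma)
		return VM_FAULT_RETRY;

	/* ... speculative processing, re-validated via vma->vm_sequence ... */

	put_vma(vma);			/* drop the reference, may free the VMA */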

When a VMA is removed from the RB tree, its vma->vm_rb field is cleared
and we rely on the WMB done when releasing the rwlock to serialize that
write with the RMB done in a later patch when checking the VMA's
validity.

When free_vma() is called, the file associated with the VMA is closed
immediately, but the policy and the file structure remain in use until
the VMA's use count reaches 0, which may happen later when an
in-progress speculative page fault exits.

[1] https://patchwork.kernel.org/patch/5108281/

Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h       |   1 +
 include/linux/mm_types.h |   4 ++
 kernel/fork.c            |   3 ++
 mm/init-mm.c             |   3 ++
 mm/internal.h            |   6 +++
 mm/mmap.c                | 115 +++++++++++++++++++++++++++++++++++------------
 6 files changed, 104 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f967bf84094f..e2c24ea58d94 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1272,6 +1272,7 @@ static inline void INIT_VMA(struct vm_area_struct *vma)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 	seqcount_init(&vma->vm_sequence);
+	atomic_set(&vma->vm_ref_count, 1);
 #endif
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db5e9d630e7a..faf3844dd815 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -337,6 +337,7 @@ struct vm_area_struct {
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 	seqcount_t vm_sequence;
+	atomic_t vm_ref_count;		/* see vma_get(), vma_put() */
 #endif
 } __randomize_layout;
 
@@ -355,6 +356,9 @@ struct kioctx_table;
 struct mm_struct {
 	struct vm_area_struct *mmap;		/* list of VMAs */
 	struct rb_root mm_rb;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	rwlock_t mm_rb_lock;
+#endif
 	u32 vmacache_seqnum;                   /* per-thread vmacache */
 #ifdef CONFIG_MMU
 	unsigned long (*get_unmapped_area) (struct file *filp,
diff --git a/kernel/fork.c b/kernel/fork.c
index d937e5945f77..9f8d235a3df8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -891,6 +891,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	mm->mmap = NULL;
 	mm->mm_rb = RB_ROOT;
 	mm->vmacache_seqnum = 0;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	rwlock_init(&mm->mm_rb_lock);
+#endif
 	atomic_set(&mm->mm_users, 1);
 	atomic_set(&mm->mm_count, 1);
 	init_rwsem(&mm->mmap_sem);
diff --git a/mm/init-mm.c b/mm/init-mm.c
index f94d5d15ebc0..e71ac37a98c4 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -17,6 +17,9 @@
 
 struct mm_struct init_mm = {
 	.mm_rb		= RB_ROOT,
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	.mm_rb_lock	= __RW_LOCK_UNLOCKED(init_mm.mm_rb_lock),
+#endif
 	.pgd		= swapper_pg_dir,
 	.mm_users	= ATOMIC_INIT(2),
 	.mm_count	= ATOMIC_INIT(1),
diff --git a/mm/internal.h b/mm/internal.h
index 62d8c34e63d5..fb2667b20f0a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,6 +40,12 @@ void page_writeback_init(void);
 
 int do_swap_page(struct vm_fault *vmf);
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern struct vm_area_struct *get_vma(struct mm_struct *mm,
+				      unsigned long addr);
+extern void put_vma(struct vm_area_struct *vma);
+#endif
+
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 5601f1ef8bb9..a82950960f2e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -160,6 +160,27 @@ void unlink_file_vma(struct vm_area_struct *vma)
 	}
 }
 
+static void __free_vma(struct vm_area_struct *vma)
+{
+	if (vma->vm_file)
+		fput(vma->vm_file);
+	mpol_put(vma_policy(vma));
+	kmem_cache_free(vm_area_cachep, vma);
+}
+
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+void put_vma(struct vm_area_struct *vma)
+{
+	if (atomic_dec_and_test(&vma->vm_ref_count))
+		__free_vma(vma);
+}
+#else
+static inline void put_vma(struct vm_area_struct *vma)
+{
+	return __free_vma(vma);
+}
+#endif
+
 /*
  * Close a vm structure and free it, returning the next.
  */
@@ -170,10 +191,7 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
 	might_sleep();
 	if (vma->vm_ops && vma->vm_ops->close)
 		vma->vm_ops->close(vma);
-	if (vma->vm_file)
-		fput(vma->vm_file);
-	mpol_put(vma_policy(vma));
-	kmem_cache_free(vm_area_cachep, vma);
+	put_vma(vma);
 	return next;
 }
 
@@ -393,6 +411,14 @@ static void validate_mm(struct mm_struct *mm)
 #define validate_mm(mm) do { } while (0)
 #endif
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+#define mm_rb_write_lock(mm)	write_lock(&(mm)->mm_rb_lock)
+#define mm_rb_write_unlock(mm)	write_unlock(&(mm)->mm_rb_lock)
+#else
+#define mm_rb_write_lock(mm)	do { } while (0)
+#define mm_rb_write_unlock(mm)	do { } while (0)
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
 RB_DECLARE_CALLBACKS(static, vma_gap_callbacks, struct vm_area_struct, vm_rb,
 		     unsigned long, rb_subtree_gap, vma_compute_subtree_gap)
 
@@ -411,26 +437,37 @@ static void vma_gap_update(struct vm_area_struct *vma)
 }
 
 static inline void vma_rb_insert(struct vm_area_struct *vma,
-				 struct rb_root *root)
+				 struct mm_struct *mm)
 {
+	struct rb_root *root = &mm->mm_rb;
+
 	/* All rb_subtree_gap values must be consistent prior to insertion */
 	validate_mm_rb(root, NULL);
 
 	rb_insert_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
 }
 
-static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
+static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm)
 {
+	struct rb_root *root = &mm->mm_rb;
 	/*
 	 * Note rb_erase_augmented is a fairly large inline function,
 	 * so make sure we instantiate it only once with our desired
 	 * augmented rbtree callbacks.
 	 */
+	mm_rb_write_lock(mm);
 	rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
+	mm_rb_write_unlock(mm); /* wmb */
+
+	/*
+	 * Ensure the removal is complete before clearing the node.
+	 * Matched by vma_has_changed()/handle_speculative_fault().
+	 */
+	RB_CLEAR_NODE(&vma->vm_rb);
 }
 
 static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
-						struct rb_root *root,
+						struct mm_struct *mm,
 						struct vm_area_struct *ignore)
 {
 	/*
@@ -438,21 +475,21 @@ static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
 	 * with the possible exception of the "next" vma being erased if
 	 * next->vm_start was reduced.
 	 */
-	validate_mm_rb(root, ignore);
+	validate_mm_rb(&mm->mm_rb, ignore);
 
-	__vma_rb_erase(vma, root);
+	__vma_rb_erase(vma, mm);
 }
 
 static __always_inline void vma_rb_erase(struct vm_area_struct *vma,
-					 struct rb_root *root)
+					 struct mm_struct *mm)
 {
 	/*
 	 * All rb_subtree_gap values must be consistent prior to erase,
 	 * with the possible exception of the vma being erased.
 	 */
-	validate_mm_rb(root, vma);
+	validate_mm_rb(&mm->mm_rb, vma);
 
-	__vma_rb_erase(vma, root);
+	__vma_rb_erase(vma, mm);
 }
 
 /*
@@ -567,10 +604,12 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * immediately update the gap to the correct value. Finally we
 	 * rebalance the rbtree after all augmented values have been set.
 	 */
+	mm_rb_write_lock(mm);
 	rb_link_node(&vma->vm_rb, rb_parent, rb_link);
 	vma->rb_subtree_gap = 0;
 	vma_gap_update(vma);
-	vma_rb_insert(vma, &mm->mm_rb);
+	vma_rb_insert(vma, mm);
+	mm_rb_write_unlock(mm);
 }
 
 static void __vma_link_file(struct vm_area_struct *vma)
@@ -646,7 +685,7 @@ static __always_inline void __vma_unlink_common(struct mm_struct *mm,
 {
 	struct vm_area_struct *next;
 
-	vma_rb_erase_ignore(vma, &mm->mm_rb, ignore);
+	vma_rb_erase_ignore(vma, mm, ignore);
 	next = vma->vm_next;
 	if (has_prev)
 		prev->vm_next = next;
@@ -923,16 +962,13 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	}
 
 	if (remove_next) {
-		if (file) {
+		if (file)
 			uprobe_munmap(next, next->vm_start, next->vm_end);
-			fput(file);
-		}
 		if (next->anon_vma)
 			anon_vma_merge(vma, next);
 		mm->map_count--;
-		mpol_put(vma_policy(next));
 		vm_raw_write_end(next);
-		kmem_cache_free(vm_area_cachep, next);
+		put_vma(next);
 		/*
 		 * In mprotect's case 6 (see comments on vma_merge),
 		 * we must remove another next too. It would clutter
@@ -2190,15 +2226,11 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 EXPORT_SYMBOL(get_unmapped_area);
 
 /* Look up the first VMA which satisfies  addr < vm_end,  NULL if none. */
-struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+static struct vm_area_struct *__find_vma(struct mm_struct *mm,
+					 unsigned long addr)
 {
 	struct rb_node *rb_node;
-	struct vm_area_struct *vma;
-
-	/* Check the cache first. */
-	vma = vmacache_find(mm, addr);
-	if (likely(vma))
-		return vma;
+	struct vm_area_struct *vma = NULL;
 
 	rb_node = mm->mm_rb.rb_node;
 
@@ -2216,13 +2248,40 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
 			rb_node = rb_node->rb_right;
 	}
 
+	return vma;
+}
+
+struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+{
+	struct vm_area_struct *vma;
+
+	/* Check the cache first. */
+	vma = vmacache_find(mm, addr);
+	if (likely(vma))
+		return vma;
+
+	vma = __find_vma(mm, addr);
 	if (vma)
 		vmacache_update(addr, vma);
 	return vma;
 }
-
 EXPORT_SYMBOL(find_vma);
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+struct vm_area_struct *get_vma(struct mm_struct *mm, unsigned long addr)
+{
+	struct vm_area_struct *vma = NULL;
+
+	read_lock(&mm->mm_rb_lock);
+	vma = __find_vma(mm, addr);
+	if (vma)
+		atomic_inc(&vma->vm_ref_count);
+	read_unlock(&mm->mm_rb_lock);
+
+	return vma;
+}
+#endif
+
 /*
  * Same as find_vma, but also return a pointer to the previous VMA in *pprev.
  */
@@ -2590,7 +2649,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
 	insertion_point = (prev ? &prev->vm_next : &mm->mmap);
 	vma->vm_prev = NULL;
 	do {
-		vma_rb_erase(vma, &mm->mm_rb);
+		vma_rb_erase(vma, mm);
 		mm->map_count--;
 		tail_vma = vma;
 		vma = vma->vm_next;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 18/25] mm: provide speculative fault infrastructure
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (16 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-05-15 13:09   ` vinayak menon
  2018-04-17 14:33 ` [PATCH v10 19/25] mm: adding speculative page fault failure trace events Laurent Dufour
                   ` (8 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Provide infrastructure to do a speculative fault (not holding
mmap_sem).

Not holding the mmap_sem means we can race against VMA change/removal
and page-table destruction. We use the SRCU VMA freeing to keep the VMA
around. We use the VMA seqcount to detect changes (including unmapping /
page-table deletion) and we use gup_fast() style page-table walking to
deal with page-table races.

Once we've obtained the page and are ready to update the PTE, we
validate that the state we started the fault with is still valid; if
not, we fail the fault with VM_FAULT_RETRY, otherwise we update the PTE
and we're done.
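
As an illustration only, the per-architecture callers (wired up later in
the series) are expected to try the speculative path first and fall back
to the classic mmap_sem-protected path; the function below is a made-up
sketch, not code from this patch:

	static int do_page_fault_sketch(struct mm_struct *mm,
					unsigned long address,
					unsigned int flags)
	{
		struct vm_area_struct *vma;
		int fault;

		fault = handle_speculative_fault(mm, address, flags);
		if (fault != VM_FAULT_RETRY)
			return fault;		/* handled without mmap_sem */

		/* Classic path, under the mmap_sem. */
		down_read(&mm->mmap_sem);
		vma = find_vma(mm, address);
		if (!vma || vma->vm_start > address)
			fault = VM_FAULT_SIGSEGV;
		else
			fault = handle_mm_fault(vma, address, flags);
		up_read(&mm->mmap_sem);
		return fault;
	}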

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Manage the newly introduced pte_spinlock() for speculative page
 fault to fail if the VMA is touched behind our back]
[Rename vma_is_dead() to vma_has_changed() and declare it here]
[Fetch p4d and pud]
[Set vmd.sequence in __handle_mm_fault()]
[Abort speculative path when handle_userfault() has to be called]
[Add additional VMA's flags checks in handle_speculative_fault()]
[Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
[Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
[Remove warning comment about waiting for !seq&1 since we don't want
 to wait]
[Remove warning about no huge page support, mention it explicitly]
[Don't call do_fault() in the speculative path as __do_fault() calls
 vma->vm_ops->fault() which may want to release mmap_sem]
[Only vm_fault pointer argument for vma_has_changed()]
[Fix check against huge page, calling pmd_trans_huge()]
[Use READ_ONCE() when reading VMA's fields in the speculative path]
[Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support the
 processing done in vm_normal_page()]
[Check that vma->anon_vma is already set when starting the speculative
 path]
[Check for memory policy as we can't support MPOL_INTERLEAVE case due to
 the processing done in mpol_misplaced()]
[Don't support VMA growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
[Don't build SPF services if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Add mem cgroup oom check]
[Use READ_ONCE to access p*d entries]
[Replace deprecated ACCESS_ONCE() by READ_ONCE() in vma_has_changed()]
[Don't fetch pte again in handle_pte_fault() when running the speculative
 path]
[Check PMD against concurrent collapsing operation]
[Try to spin lock the pte during the speculative path to avoid a
 deadlock with other CPUs invalidating the TLB and requiring this CPU to
 catch the inter-processor interrupt]
[Move define of FAULT_FLAG_SPECULATIVE here]
[Introduce __handle_speculative_fault() and add a check against
 mm->mm_users in handle_speculative_fault() defined in mm.h]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/hugetlb_inline.h |   2 +-
 include/linux/mm.h             |  30 ++++
 include/linux/pagemap.h        |   4 +-
 mm/internal.h                  |  16 +-
 mm/memory.c                    | 340 ++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 385 insertions(+), 7 deletions(-)

diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index 0660a03d37d9..9e25283d6fc9 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -8,7 +8,7 @@
 
 static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
 {
-	return !!(vma->vm_flags & VM_HUGETLB);
+	return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
 }
 
 #else
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e2c24ea58d94..08540c98d63b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -309,6 +309,7 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_USER		0x40	/* The fault originated in userspace */
 #define FAULT_FLAG_REMOTE	0x80	/* faulting for non current tsk/mm */
 #define FAULT_FLAG_INSTRUCTION  0x100	/* The fault was during an instruction fetch */
+#define FAULT_FLAG_SPECULATIVE	0x200	/* Speculative fault, not holding mmap_sem */
 
 #define FAULT_FLAG_TRACE \
 	{ FAULT_FLAG_WRITE,		"WRITE" }, \
@@ -337,6 +338,10 @@ struct vm_fault {
 	gfp_t gfp_mask;			/* gfp mask to be used for allocations */
 	pgoff_t pgoff;			/* Logical page offset based on vma */
 	unsigned long address;		/* Faulting virtual address */
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	unsigned int sequence;
+	pmd_t orig_pmd;			/* value of PMD at the time of fault */
+#endif
 	pmd_t *pmd;			/* Pointer to pmd entry matching
 					 * the 'address' */
 	pud_t *pud;			/* Pointer to pud entry matching
@@ -1373,6 +1378,31 @@ int invalidate_inode_page(struct page *page);
 #ifdef CONFIG_MMU
 extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags);
+
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+extern int __handle_speculative_fault(struct mm_struct *mm,
+				      unsigned long address,
+				      unsigned int flags);
+static inline int handle_speculative_fault(struct mm_struct *mm,
+					   unsigned long address,
+					   unsigned int flags)
+{
+	/*
+	 * Try speculative page fault for multithreaded user space task only.
+	 */
+	if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+		return VM_FAULT_RETRY;
+	return __handle_speculative_fault(mm, address, flags);
+}
+#else
+static inline int handle_speculative_fault(struct mm_struct *mm,
+					   unsigned long address,
+					   unsigned int flags)
+{
+	return VM_FAULT_RETRY;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b1bd2186e6d2..6e2aa4e79af7 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -456,8 +456,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
 	pgoff_t pgoff;
 	if (unlikely(is_vm_hugetlb_page(vma)))
 		return linear_hugepage_index(vma, address);
-	pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
-	pgoff += vma->vm_pgoff;
+	pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
+	pgoff += READ_ONCE(vma->vm_pgoff);
 	return pgoff;
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index fb2667b20f0a..10b188c87fa4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -44,7 +44,21 @@ int do_swap_page(struct vm_fault *vmf);
 extern struct vm_area_struct *get_vma(struct mm_struct *mm,
 				      unsigned long addr);
 extern void put_vma(struct vm_area_struct *vma);
-#endif
+
+static inline bool vma_has_changed(struct vm_fault *vmf)
+{
+	int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
+	unsigned int seq = READ_ONCE(vmf->vma->vm_sequence.sequence);
+
+	/*
+	 * Matches both the wmb in write_seqlock_{begin,end}() and
+	 * the wmb in vma_rb_erase().
+	 */
+	smp_rmb();
+
+	return ret || seq != vmf->sequence;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
diff --git a/mm/memory.c b/mm/memory.c
index 9c220ac0e2c5..8addf78deadb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -769,7 +769,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 	if (page)
 		dump_page(page, "bad pte");
 	pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
-		 (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
+		 (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
+		 mapping, index);
 	pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
 		 vma->vm_file,
 		 vma->vm_ops ? vma->vm_ops->fault : NULL,
@@ -2300,6 +2301,113 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+static bool pte_spinlock(struct vm_fault *vmf)
+{
+	bool ret = false;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	pmd_t pmdval;
+#endif
+
+	/* Check if vma is still valid */
+	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+		vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+		spin_lock(vmf->ptl);
+		return true;
+	}
+
+	local_irq_disable();
+	if (vma_has_changed(vmf))
+		goto out;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	/*
+	 * We check if the pmd value is still the same to ensure that there
+	 * is not a huge collapse operation in progress in our back.
+	 */
+	pmdval = READ_ONCE(*vmf->pmd);
+	if (!pmd_same(pmdval, vmf->orig_pmd))
+		goto out;
+#endif
+
+	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+	if (unlikely(!spin_trylock(vmf->ptl)))
+		goto out;
+
+	if (vma_has_changed(vmf)) {
+		spin_unlock(vmf->ptl);
+		goto out;
+	}
+
+	ret = true;
+out:
+	local_irq_enable();
+	return ret;
+}
+
+static bool pte_map_lock(struct vm_fault *vmf)
+{
+	bool ret = false;
+	pte_t *pte;
+	spinlock_t *ptl;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	pmd_t pmdval;
+#endif
+
+	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+		vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+					       vmf->address, &vmf->ptl);
+		return true;
+	}
+
+	/*
+	 * The first vma_has_changed() guarantees the page-tables are still
+	 * valid, having IRQs disabled ensures they stay around, hence the
+	 * second vma_has_changed() to make sure they are still valid once
+	 * we've got the lock. After that a concurrent zap_pte_range() will
+	 * block on the PTL and thus we're safe.
+	 */
+	local_irq_disable();
+	if (vma_has_changed(vmf))
+		goto out;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	/*
+	 * We check if the pmd value is still the same to ensure that there
+	 * is not a huge collapse operation in progress in our back.
+	 */
+	pmdval = READ_ONCE(*vmf->pmd);
+	if (!pmd_same(pmdval, vmf->orig_pmd))
+		goto out;
+#endif
+
+	/*
+	 * Same as pte_offset_map_lock() except that we call
+	 * spin_trylock() in place of spin_lock() to avoid race with
+	 * unmap path which may have the lock and wait for this CPU
+	 * to invalidate TLB but this CPU has irq disabled.
+	 * Since we are in a speculative patch, accept it could fail
+	 */
+	ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+	pte = pte_offset_map(vmf->pmd, vmf->address);
+	if (unlikely(!spin_trylock(ptl))) {
+		pte_unmap(pte);
+		goto out;
+	}
+
+	if (vma_has_changed(vmf)) {
+		pte_unmap_unlock(pte, ptl);
+		goto out;
+	}
+
+	vmf->pte = pte;
+	vmf->ptl = ptl;
+	ret = true;
+out:
+	local_irq_enable();
+	return ret;
+}
+#else
 static inline bool pte_spinlock(struct vm_fault *vmf)
 {
 	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
@@ -2313,6 +2421,7 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
 				       vmf->address, &vmf->ptl);
 	return true;
 }
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
@@ -3202,6 +3311,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
 		ret = check_stable_address_space(vma->vm_mm);
 		if (ret)
 			goto unlock;
+		/*
+		 * Don't call the userfaultfd during the speculative path.
+		 * We already checked for the VMA to not be managed through
+		 * userfaultfd, but it may be set in our back once we have lock
+		 * the pte. In such a case we can ignore it this time.
+		 */
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+			goto setpte;
 		/* Deliver the page fault to userland, check inside PT lock */
 		if (userfaultfd_missing(vma)) {
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -3243,7 +3360,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 		goto unlock_and_release;
 
 	/* Deliver the page fault to userland, check inside PT lock */
-	if (userfaultfd_missing(vma)) {
+	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		mem_cgroup_cancel_charge(page, memcg, false);
 		put_page(page);
@@ -3988,13 +4105,22 @@ static int handle_pte_fault(struct vm_fault *vmf)
 
 	if (unlikely(pmd_none(*vmf->pmd))) {
 		/*
+		 * In the case of the speculative page fault handler we abort
+		 * the speculative path immediately as the pmd is probably
+		 * in the way to be converted in a huge one. We will try
+		 * again holding the mmap_sem (which implies that the collapse
+		 * operation is done).
+		 */
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+			return VM_FAULT_RETRY;
+		/*
 		 * Leave __pte_alloc() until later: because vm_ops->fault may
 		 * want to allocate huge page, and if we expose page table
 		 * for an instant, it will be difficult to retract from
 		 * concurrent faults and from rmap lookups.
 		 */
 		vmf->pte = NULL;
-	} else {
+	} else if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
 		/* See comment in pte_alloc_one_map() */
 		if (pmd_devmap_trans_unstable(vmf->pmd))
 			return 0;
@@ -4003,6 +4129,9 @@ static int handle_pte_fault(struct vm_fault *vmf)
 		 * pmd from under us anymore at this point because we hold the
 		 * mmap_sem read mode and khugepaged takes it in write mode.
 		 * So now it's safe to run pte_offset_map().
+		 * This is not applicable to the speculative page fault handler
+		 * but in that case, the pte is fetched earlier in
+		 * handle_speculative_fault().
 		 */
 		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
 		vmf->orig_pte = *vmf->pte;
@@ -4025,6 +4154,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
 	if (!vmf->pte) {
 		if (vma_is_anonymous(vmf->vma))
 			return do_anonymous_page(vmf);
+		else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+			return VM_FAULT_RETRY;
 		else
 			return do_fault(vmf);
 	}
@@ -4122,6 +4253,9 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	vmf.pmd = pmd_alloc(mm, vmf.pud, address);
 	if (!vmf.pmd)
 		return VM_FAULT_OOM;
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
+#endif
 	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
@@ -4155,6 +4289,206 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	return handle_pte_fault(&vmf);
 }
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+
+#ifndef __HAVE_ARCH_PTE_SPECIAL
+/* This is required by vm_normal_page() */
+#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
+#endif
+
+/*
+ * vm_normal_page() adds some processing which should be done while
+ * hodling the mmap_sem.
+ */
+int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
+			       unsigned int flags)
+{
+	struct vm_fault vmf = {
+		.address = address,
+	};
+	pgd_t *pgd, pgdval;
+	p4d_t *p4d, p4dval;
+	pud_t pudval;
+	int seq, ret = VM_FAULT_RETRY;
+	struct vm_area_struct *vma;
+
+	/* Clear flags that may lead to release the mmap_sem to retry */
+	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
+	flags |= FAULT_FLAG_SPECULATIVE;
+
+	vma = get_vma(mm, address);
+	if (!vma)
+		return ret;
+
+	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+	if (seq & 1)
+		goto out_put;
+
+	/*
+	 * Can't call vm_ops service has we don't know what they would do
+	 * with the VMA.
+	 * This include huge page from hugetlbfs.
+	 */
+	if (vma->vm_ops)
+		goto out_put;
+
+	/*
+	 * __anon_vma_prepare() requires the mmap_sem to be held
+	 * because vm_next and vm_prev must be safe. This can't be guaranteed
+	 * in the speculative path.
+	 */
+	if (unlikely(!vma->anon_vma))
+		goto out_put;
+
+	vmf.vma_flags = READ_ONCE(vma->vm_flags);
+	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+
+	/* Can't call userland page fault handler in the speculative path */
+	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+		goto out_put;
+
+	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+		/*
+		 * This could be detected by the check address against VMA's
+		 * boundaries but we want to trace it as not supported instead
+		 * of changed.
+		 */
+		goto out_put;
+
+	if (address < READ_ONCE(vma->vm_start)
+	    || READ_ONCE(vma->vm_end) <= address)
+		goto out_put;
+
+	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+				       flags & FAULT_FLAG_INSTRUCTION,
+				       flags & FAULT_FLAG_REMOTE)) {
+		ret = VM_FAULT_SIGSEGV;
+		goto out_put;
+	}
+
+	/* This is one is required to check that the VMA has write access set */
+	if (flags & FAULT_FLAG_WRITE) {
+		if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+			ret = VM_FAULT_SIGSEGV;
+			goto out_put;
+		}
+	} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
+		ret = VM_FAULT_SIGSEGV;
+		goto out_put;
+	}
+
+	if (IS_ENABLED(CONFIG_NUMA)) {
+		struct mempolicy *pol;
+
+		/*
+		 * MPOL_INTERLEAVE implies additional checks in
+		 * mpol_misplaced() which are not compatible with the
+		 *speculative page fault processing.
+		 */
+		pol = __get_vma_policy(vma, address);
+		if (!pol)
+			pol = get_task_policy(current);
+		if (pol && pol->mode == MPOL_INTERLEAVE)
+			goto out_put;
+	}
+
+	/*
+	 * Do a speculative lookup of the PTE entry.
+	 */
+	local_irq_disable();
+	pgd = pgd_offset(mm, address);
+	pgdval = READ_ONCE(*pgd);
+	if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
+		goto out_walk;
+
+	p4d = p4d_offset(pgd, address);
+	p4dval = READ_ONCE(*p4d);
+	if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
+		goto out_walk;
+
+	vmf.pud = pud_offset(p4d, address);
+	pudval = READ_ONCE(*vmf.pud);
+	if (pud_none(pudval) || unlikely(pud_bad(pudval)))
+		goto out_walk;
+
+	/* Huge pages at PUD level are not supported. */
+	if (unlikely(pud_trans_huge(pudval)))
+		goto out_walk;
+
+	vmf.pmd = pmd_offset(vmf.pud, address);
+	vmf.orig_pmd = READ_ONCE(*vmf.pmd);
+	/*
+	 * pmd_none could mean that a hugepage collapse is in progress
+	 * in our back as collapse_huge_page() mark it before
+	 * invalidating the pte (which is done once the IPI is catched
+	 * by all CPU and we have interrupt disabled).
+	 * For this reason we cannot handle THP in a speculative way since we
+	 * can't safely indentify an in progress collapse operation done in our
+	 * back on that PMD.
+	 * Regarding the order of the following checks, see comment in
+	 * pmd_devmap_trans_unstable()
+	 */
+	if (unlikely(pmd_devmap(vmf.orig_pmd) ||
+		     pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
+		     is_swap_pmd(vmf.orig_pmd)))
+		goto out_walk;
+
+	/*
+	 * The above does not allocate/instantiate page-tables because doing so
+	 * would lead to the possibility of instantiating page-tables after
+	 * free_pgtables() -- and consequently leaking them.
+	 *
+	 * The result is that we take at least one !speculative fault per PMD
+	 * in order to instantiate it.
+	 */
+
+	vmf.pte = pte_offset_map(vmf.pmd, address);
+	vmf.orig_pte = READ_ONCE(*vmf.pte);
+	barrier(); /* See comment in handle_pte_fault() */
+	if (pte_none(vmf.orig_pte)) {
+		pte_unmap(vmf.pte);
+		vmf.pte = NULL;
+	}
+
+	vmf.vma = vma;
+	vmf.pgoff = linear_page_index(vma, address);
+	vmf.gfp_mask = __get_fault_gfp_mask(vma);
+	vmf.sequence = seq;
+	vmf.flags = flags;
+
+	local_irq_enable();
+
+	/*
+	 * We need to re-validate the VMA after checking the bounds, otherwise
+	 * we might have a false positive on the bounds.
+	 */
+	if (read_seqcount_retry(&vma->vm_sequence, seq))
+		goto out_put;
+
+	mem_cgroup_oom_enable();
+	ret = handle_pte_fault(&vmf);
+	mem_cgroup_oom_disable();
+
+	put_vma(vma);
+
+	/*
+	 * The task may have entered a memcg OOM situation but
+	 * if the allocation error was handled gracefully (no
+	 * VM_FAULT_OOM), there is no need to kill anything.
+	 * Just clean up the OOM state peacefully.
+	 */
+	if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+		mem_cgroup_oom_synchronize(false);
+	return ret;
+
+out_walk:
+	local_irq_enable();
+out_put:
+	put_vma(vma);
+	return ret;
+}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 19/25] mm: adding speculative page fault failure trace events
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (17 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 18/25] mm: provide speculative fault infrastructure Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 20/25] perf: add a speculative page fault sw event Laurent Dufour
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

This patch adds a set of new trace events to collect the speculative
page fault failure events.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/trace/events/pagefault.h | 88 ++++++++++++++++++++++++++++++++++++++++
 mm/memory.c                      | 62 ++++++++++++++++++++++------
 2 files changed, 137 insertions(+), 13 deletions(-)
 create mode 100644 include/trace/events/pagefault.h

diff --git a/include/trace/events/pagefault.h b/include/trace/events/pagefault.h
new file mode 100644
index 000000000000..a9643b3759f2
--- /dev/null
+++ b/include/trace/events/pagefault.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM pagefault
+
+#if !defined(_TRACE_PAGEFAULT_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_PAGEFAULT_H
+
+#include <linux/tracepoint.h>
+#include <linux/mm.h>
+
+DECLARE_EVENT_CLASS(spf,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, caller)
+		__field(unsigned long, vm_start)
+		__field(unsigned long, vm_end)
+		__field(unsigned long, address)
+	),
+
+	TP_fast_assign(
+		__entry->caller		= caller;
+		__entry->vm_start	= vma->vm_start;
+		__entry->vm_end		= vma->vm_end;
+		__entry->address	= address;
+	),
+
+	TP_printk("ip:%lx vma:%lx-%lx address:%lx",
+		  __entry->caller, __entry->vm_start, __entry->vm_end,
+		  __entry->address)
+);
+
+DEFINE_EVENT(spf, spf_pte_lock,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_changed,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_noanon,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_notsup,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_access,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_pmd_changed,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+#endif /* _TRACE_PAGEFAULT_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/memory.c b/mm/memory.c
index 8addf78deadb..76178feff000 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -80,6 +80,9 @@
 
 #include "internal.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/pagefault.h>
+
 #if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
 #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
 #endif
@@ -2317,8 +2320,10 @@ static bool pte_spinlock(struct vm_fault *vmf)
 	}
 
 	local_irq_disable();
-	if (vma_has_changed(vmf))
+	if (vma_has_changed(vmf)) {
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/*
@@ -2326,16 +2331,21 @@ static bool pte_spinlock(struct vm_fault *vmf)
 	 * is not a huge collapse operation in progress in our back.
 	 */
 	pmdval = READ_ONCE(*vmf->pmd);
-	if (!pmd_same(pmdval, vmf->orig_pmd))
+	if (!pmd_same(pmdval, vmf->orig_pmd)) {
+		trace_spf_pmd_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 #endif
 
 	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-	if (unlikely(!spin_trylock(vmf->ptl)))
+	if (unlikely(!spin_trylock(vmf->ptl))) {
+		trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 
 	if (vma_has_changed(vmf)) {
 		spin_unlock(vmf->ptl);
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
 	}
 
@@ -2368,8 +2378,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
 	 * block on the PTL and thus we're safe.
 	 */
 	local_irq_disable();
-	if (vma_has_changed(vmf))
+	if (vma_has_changed(vmf)) {
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/*
@@ -2377,8 +2389,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
 	 * is not a huge collapse operation in progress in our back.
 	 */
 	pmdval = READ_ONCE(*vmf->pmd);
-	if (!pmd_same(pmdval, vmf->orig_pmd))
+	if (!pmd_same(pmdval, vmf->orig_pmd)) {
+		trace_spf_pmd_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 #endif
 
 	/*
@@ -2392,11 +2406,13 @@ static bool pte_map_lock(struct vm_fault *vmf)
 	pte = pte_offset_map(vmf->pmd, vmf->address);
 	if (unlikely(!spin_trylock(ptl))) {
 		pte_unmap(pte);
+		trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
 	}
 
 	if (vma_has_changed(vmf)) {
 		pte_unmap_unlock(pte, ptl);
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
 	}
 
@@ -4321,47 +4337,60 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		return ret;
 
 	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
-	if (seq & 1)
+	if (seq & 1) {
+		trace_spf_vma_changed(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
 	/*
 	 * Can't call vm_ops service has we don't know what they would do
 	 * with the VMA.
 	 * This include huge page from hugetlbfs.
 	 */
-	if (vma->vm_ops)
+	if (vma->vm_ops) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
 	/*
 	 * __anon_vma_prepare() requires the mmap_sem to be held
 	 * because vm_next and vm_prev must be safe. This can't be guaranteed
 	 * in the speculative path.
 	 */
-	if (unlikely(!vma->anon_vma))
+	if (unlikely(!vma->anon_vma)) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
 	vmf.vma_flags = READ_ONCE(vma->vm_flags);
 	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
 
 	/* Can't call userland page fault handler in the speculative path */
-	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
-	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
 		/*
 		 * This could be detected by the check address against VMA's
 		 * boundaries but we want to trace it as not supported instead
 		 * of changed.
 		 */
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
 	if (address < READ_ONCE(vma->vm_start)
-	    || READ_ONCE(vma->vm_end) <= address)
+	    || READ_ONCE(vma->vm_end) <= address) {
+		trace_spf_vma_changed(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
 	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
 				       flags & FAULT_FLAG_INSTRUCTION,
 				       flags & FAULT_FLAG_REMOTE)) {
+		trace_spf_vma_access(_RET_IP_, vma, address);
 		ret = VM_FAULT_SIGSEGV;
 		goto out_put;
 	}
@@ -4369,10 +4398,12 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	/* This is one is required to check that the VMA has write access set */
 	if (flags & FAULT_FLAG_WRITE) {
 		if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+			trace_spf_vma_access(_RET_IP_, vma, address);
 			ret = VM_FAULT_SIGSEGV;
 			goto out_put;
 		}
 	} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
+		trace_spf_vma_access(_RET_IP_, vma, address);
 		ret = VM_FAULT_SIGSEGV;
 		goto out_put;
 	}
@@ -4388,8 +4419,10 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		pol = __get_vma_policy(vma, address);
 		if (!pol)
 			pol = get_task_policy(current);
-		if (pol && pol->mode == MPOL_INTERLEAVE)
+		if (pol && pol->mode == MPOL_INTERLEAVE) {
+			trace_spf_vma_notsup(_RET_IP_, vma, address);
 			goto out_put;
+		}
 	}
 
 	/*
@@ -4462,8 +4495,10 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * We need to re-validate the VMA after checking the bounds, otherwise
 	 * we might have a false positive on the bounds.
 	 */
-	if (read_seqcount_retry(&vma->vm_sequence, seq))
+	if (read_seqcount_retry(&vma->vm_sequence, seq)) {
+		trace_spf_vma_changed(_RET_IP_, vma, address);
 		goto out_put;
+	}
 
 	mem_cgroup_oom_enable();
 	ret = handle_pte_fault(&vmf);
@@ -4482,6 +4517,7 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	return ret;
 
 out_walk:
+	trace_spf_vma_notsup(_RET_IP_, vma, address);
 	local_irq_enable();
 out_put:
 	put_vma(vma);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 20/25] perf: add a speculative page fault sw event
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (18 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 19/25] mm: adding speculative page fault failure trace events Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 21/25] perf tools: add support for the SPF perf event Laurent Dufour
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Add a new software event to count successful speculative page faults.
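
As an illustration, once this is in place the counter can be read from
user space with perf_event_open(); the program below is a sketch (it
assumes updated uapi headers and trims error handling), not part of the
patch:

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	static long sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
					int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
	}

	int main(void)
	{
		struct perf_event_attr attr;
		long long count = 0;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.type = PERF_TYPE_SOFTWARE;
		attr.size = sizeof(attr);
		attr.config = PERF_COUNT_SW_SPF;	/* the new event */

		fd = sys_perf_event_open(&attr, 0, -1, -1, 0);	/* current task, any CPU */
		if (fd < 0)
			return 1;

		/* ... run the workload to be measured ... */

		read(fd, &count, sizeof(count));
		printf("speculative faults: %lld\n", count);
		close(fd);
		return 0;
	}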

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/uapi/linux/perf_event.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 912b85b52344..9aad243607fe 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -112,6 +112,7 @@ enum perf_sw_ids {
 	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
 	PERF_COUNT_SW_DUMMY			= 9,
 	PERF_COUNT_SW_BPF_OUTPUT		= 10,
+	PERF_COUNT_SW_SPF			= 11,
 
 	PERF_COUNT_SW_MAX,			/* non-ABI */
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 21/25] perf tools: add support for the SPF perf event
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (19 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 20/25] perf: add a speculative page fault sw event Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 22/25] mm: speculative page fault handler return VMA Laurent Dufour
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Add support for the new speculative faults event.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 tools/include/uapi/linux/perf_event.h | 1 +
 tools/perf/util/evsel.c               | 1 +
 tools/perf/util/parse-events.c        | 4 ++++
 tools/perf/util/parse-events.l        | 1 +
 tools/perf/util/python.c              | 1 +
 5 files changed, 8 insertions(+)

diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index 912b85b52344..9aad243607fe 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -112,6 +112,7 @@ enum perf_sw_ids {
 	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
 	PERF_COUNT_SW_DUMMY			= 9,
 	PERF_COUNT_SW_BPF_OUTPUT		= 10,
+	PERF_COUNT_SW_SPF			= 11,
 
 	PERF_COUNT_SW_MAX,			/* non-ABI */
 };
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 1ac8d9236efd..e14a754c3675 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -429,6 +429,7 @@ const char *perf_evsel__sw_names[PERF_COUNT_SW_MAX] = {
 	"alignment-faults",
 	"emulation-faults",
 	"dummy",
+	"speculative-faults",
 };
 
 static const char *__perf_evsel__sw_name(u64 config)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 2fb0272146d8..54719f566314 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -140,6 +140,10 @@ struct event_symbol event_symbols_sw[PERF_COUNT_SW_MAX] = {
 		.symbol = "bpf-output",
 		.alias  = "",
 	},
+	[PERF_COUNT_SW_SPF] = {
+		.symbol = "speculative-faults",
+		.alias	= "spf",
+	},
 };
 
 #define __PERF_EVENT_FIELD(config, name) \
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index a1a01b1ac8b8..86584d3a3068 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -308,6 +308,7 @@ emulation-faults				{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EM
 dummy						{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
 duration_time					{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
 bpf-output					{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_BPF_OUTPUT); }
+speculative-faults|spf				{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_SPF); }
 
 	/*
 	 * We have to handle the kernel PMU event cycles-ct/cycles-t/mem-loads/mem-stores separately.
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index 863b61478edd..df4f7ff9bdff 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -1181,6 +1181,7 @@ static struct {
 	PERF_CONST(COUNT_SW_ALIGNMENT_FAULTS),
 	PERF_CONST(COUNT_SW_EMULATION_FAULTS),
 	PERF_CONST(COUNT_SW_DUMMY),
+	PERF_CONST(COUNT_SW_SPF),
 
 	PERF_CONST(SAMPLE_IP),
 	PERF_CONST(SAMPLE_TID),
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 22/25] mm: speculative page fault handler return VMA
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (20 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 21/25] perf tools: add support for the SPF perf event Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 23/25] mm: add speculative page fault vmstats Laurent Dufour
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

When the speculative page fault handler returns VM_FAULT_RETRY, there is
a chance that the VMA fetched without grabbing the mmap_sem can be
reused by the legacy page fault handler.  By reusing it, we avoid
calling find_vma() again. To achieve that, we must ensure that the VMA
structure will not be freed behind our back. This is done by taking a
reference on it (get_vma()) and by assuming that the caller will call
the new service can_reuse_spf_vma() once it has grabbed the mmap_sem.

can_reuse_spf_vma() first checks that the VMA is still in the RB tree,
then that the VMA's boundaries match the passed address, and finally
releases the reference on the VMA so that it can be freed if needed.

In the case the VMA has been freed, can_reuse_spf_vma() will have
returned false as the VMA is no longer in the RB tree.

In the architecture page fault handler, the call to the new service
reuse_spf_or_find_vma() should be made in place of find_vma(); this will
handle the check on the spf_vma and, if needed, call find_vma().
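
For illustration only, such a helper could look roughly like the sketch
below (reuse_spf_or_find_vma() is only named here; the actual
per-architecture implementation comes in later patches of the series):

	static struct vm_area_struct *
	reuse_spf_or_find_vma(struct mm_struct *mm, unsigned long address,
			      struct vm_area_struct *spf_vma)
	{
		if (spf_vma) {
			/*
			 * can_reuse_spf_vma() drops the speculative reference
			 * in both cases; the mmap_sem is held by the caller.
			 */
			if (can_reuse_spf_vma(spf_vma, address))
				return spf_vma;
		}
		return find_vma(mm, address);
	}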

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h |  22 +++++++--
 mm/memory.c        | 140 ++++++++++++++++++++++++++++++++---------------------
 2 files changed, 103 insertions(+), 59 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 08540c98d63b..50b6fd3bf9e2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1382,25 +1382,37 @@ extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 extern int __handle_speculative_fault(struct mm_struct *mm,
 				      unsigned long address,
-				      unsigned int flags);
+				      unsigned int flags,
+				      struct vm_area_struct **vma);
 static inline int handle_speculative_fault(struct mm_struct *mm,
 					   unsigned long address,
-					   unsigned int flags)
+					   unsigned int flags,
+					   struct vm_area_struct **vma)
 {
 	/*
 	 * Try speculative page fault for multithreaded user space task only.
 	 */
-	if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+	if (!(flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1) {
+		*vma = NULL;
 		return VM_FAULT_RETRY;
-	return __handle_speculative_fault(mm, address, flags);
+	}
+	return __handle_speculative_fault(mm, address, flags, vma);
 }
+extern bool can_reuse_spf_vma(struct vm_area_struct *vma,
+			      unsigned long address);
 #else
 static inline int handle_speculative_fault(struct mm_struct *mm,
 					   unsigned long address,
-					   unsigned int flags)
+					   unsigned int flags,
+					   struct vm_area_struct **vma)
 {
 	return VM_FAULT_RETRY;
 }
+static inline bool can_reuse_spf_vma(struct vm_area_struct *vma,
+				     unsigned long address)
+{
+	return false;
+}
 #endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
diff --git a/mm/memory.c b/mm/memory.c
index 76178feff000..425f07e0bf38 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4311,13 +4311,22 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 /* This is required by vm_normal_page() */
 #error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
 #endif
-
 /*
  * vm_normal_page() adds some processing which should be done while
  * hodling the mmap_sem.
  */
+
+/*
+ * Tries to handle the page fault in a speculative way, without grabbing the
+ * mmap_sem.
+ * When VM_FAULT_RETRY is returned, the vma pointer is valid and this vma must
+ * be checked later when the mmap_sem has been grabbed by calling
+ * can_reuse_spf_vma().
+ * This is needed as the returned vma is kept in memory until the call to
+ * can_reuse_spf_vma() is made.
+ */
 int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
-			       unsigned int flags)
+			       unsigned int flags, struct vm_area_struct **vma)
 {
 	struct vm_fault vmf = {
 		.address = address,
@@ -4325,21 +4334,22 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	pgd_t *pgd, pgdval;
 	p4d_t *p4d, p4dval;
 	pud_t pudval;
-	int seq, ret = VM_FAULT_RETRY;
-	struct vm_area_struct *vma;
+	int seq, ret;
 
 	/* Clear flags that may lead to release the mmap_sem to retry */
 	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
 	flags |= FAULT_FLAG_SPECULATIVE;
 
-	vma = get_vma(mm, address);
-	if (!vma)
-		return ret;
+	*vma = get_vma(mm, address);
+	if (!*vma)
+		return VM_FAULT_RETRY;
+	vmf.vma = *vma;
 
-	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+	/* rmb <-> seqlock,vma_rb_erase() */
+	seq = raw_read_seqcount(&vmf.vma->vm_sequence);
 	if (seq & 1) {
-		trace_spf_vma_changed(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
 	/*
@@ -4347,9 +4357,9 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * with the VMA.
 	 * This includes huge pages from hugetlbfs.
 	 */
-	if (vma->vm_ops) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+	if (vmf.vma->vm_ops) {
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
 	/*
@@ -4357,18 +4367,18 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * because vm_next and vm_prev must be safe. This can't be guaranteed
 	 * in the speculative path.
 	 */
-	if (unlikely(!vma->anon_vma)) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+	if (unlikely(!vmf.vma->anon_vma)) {
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
-	vmf.vma_flags = READ_ONCE(vma->vm_flags);
-	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+	vmf.vma_flags = READ_ONCE(vmf.vma->vm_flags);
+	vmf.vma_page_prot = READ_ONCE(vmf.vma->vm_page_prot);
 
 	/* Can't call userland page fault handler in the speculative path */
 	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
 	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
@@ -4377,36 +4387,27 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		 * boundaries but we want to trace it as not supported instead
 		 * of changed.
 		 */
-		trace_spf_vma_notsup(_RET_IP_, vma, address);
-		goto out_put;
+		trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
-	if (address < READ_ONCE(vma->vm_start)
-	    || READ_ONCE(vma->vm_end) <= address) {
-		trace_spf_vma_changed(_RET_IP_, vma, address);
-		goto out_put;
+	if (address < READ_ONCE(vmf.vma->vm_start)
+	    || READ_ONCE(vmf.vma->vm_end) <= address) {
+		trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
-	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+	if (!arch_vma_access_permitted(vmf.vma, flags & FAULT_FLAG_WRITE,
 				       flags & FAULT_FLAG_INSTRUCTION,
-				       flags & FAULT_FLAG_REMOTE)) {
-		trace_spf_vma_access(_RET_IP_, vma, address);
-		ret = VM_FAULT_SIGSEGV;
-		goto out_put;
-	}
+				       flags & FAULT_FLAG_REMOTE))
+		goto out_segv;
 
 	/* This one is required to check that the VMA has write access set */
 	if (flags & FAULT_FLAG_WRITE) {
-		if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
-			trace_spf_vma_access(_RET_IP_, vma, address);
-			ret = VM_FAULT_SIGSEGV;
-			goto out_put;
-		}
-	} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
-		trace_spf_vma_access(_RET_IP_, vma, address);
-		ret = VM_FAULT_SIGSEGV;
-		goto out_put;
-	}
+		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
+			goto out_segv;
+	} else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE))))
+		goto out_segv;
 
 	if (IS_ENABLED(CONFIG_NUMA)) {
 		struct mempolicy *pol;
@@ -4416,12 +4417,12 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		 * mpol_misplaced() which are not compatible with the
 		 * speculative page fault processing.
 		 */
-		pol = __get_vma_policy(vma, address);
+		pol = __get_vma_policy(vmf.vma, address);
 		if (!pol)
 			pol = get_task_policy(current);
 		if (pol && pol->mode == MPOL_INTERLEAVE) {
-			trace_spf_vma_notsup(_RET_IP_, vma, address);
-			goto out_put;
+			trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
+			return VM_FAULT_RETRY;
 		}
 	}
 
@@ -4483,9 +4484,8 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 		vmf.pte = NULL;
 	}
 
-	vmf.vma = vma;
-	vmf.pgoff = linear_page_index(vma, address);
-	vmf.gfp_mask = __get_fault_gfp_mask(vma);
+	vmf.pgoff = linear_page_index(vmf.vma, address);
+	vmf.gfp_mask = __get_fault_gfp_mask(vmf.vma);
 	vmf.sequence = seq;
 	vmf.flags = flags;
 
@@ -4495,16 +4495,22 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * We need to re-validate the VMA after checking the bounds, otherwise
 	 * we might have a false positive on the bounds.
 	 */
-	if (read_seqcount_retry(&vma->vm_sequence, seq)) {
-		trace_spf_vma_changed(_RET_IP_, vma, address);
-		goto out_put;
+	if (read_seqcount_retry(&vmf.vma->vm_sequence, seq)) {
+		trace_spf_vma_changed(_RET_IP_, vmf.vma, address);
+		return VM_FAULT_RETRY;
 	}
 
 	mem_cgroup_oom_enable();
 	ret = handle_pte_fault(&vmf);
 	mem_cgroup_oom_disable();
 
-	put_vma(vma);
+	/*
+	 * If there is no need to retry, don't return the vma to the caller.
+	 */
+	if (ret != VM_FAULT_RETRY) {
+		put_vma(vmf.vma);
+		*vma = NULL;
+	}
 
 	/*
 	 * The task may have entered a memcg OOM situation but
@@ -4517,9 +4523,35 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	return ret;
 
 out_walk:
-	trace_spf_vma_notsup(_RET_IP_, vma, address);
+	trace_spf_vma_notsup(_RET_IP_, vmf.vma, address);
 	local_irq_enable();
-out_put:
+	return VM_FAULT_RETRY;
+
+out_segv:
+	trace_spf_vma_access(_RET_IP_, vmf.vma, address);
+	/*
+	 * We don't return VM_FAULT_RETRY so the caller is not expected to
+	 * retrieve the fetched VMA.
+	 */
+	put_vma(vmf.vma);
+	*vma = NULL;
+	return VM_FAULT_SIGSEGV;
+}
+
+/*
+ * This is used to know if the vma fetched in the speculative page fault handler
+ * is still valid when trying the regular fault path while holding the
+ * mmap_sem.
+ * The call to put_vma(vma) must be made after checking the vma's fields, as
+ * the vma may be freed by put_vma(). In such a case it is expected that false
+ * is returned.
+ */
+bool can_reuse_spf_vma(struct vm_area_struct *vma, unsigned long address)
+{
+	bool ret;
+
+	ret = !RB_EMPTY_NODE(&vma->vm_rb) &&
+		vma->vm_start <= address && address < vma->vm_end;
 	put_vma(vma);
 	return ret;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 23/25] mm: add speculative page fault vmstats
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (21 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 22/25] mm: speculative page fault handler return VMA Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-05-16  2:50   ` Ganesh Mahendran
  2018-04-17 14:33 ` [PATCH v10 24/25] x86/mm: add speculative pagefault handling Laurent Dufour
                   ` (3 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Add a speculative_pgfault vmstat counter to count successfully handled
speculative page faults.

Also fix a minor typo in the CONFIG_VM_EVENT_COUNTERS comment (see the
mm/vmstat.c hunk below).
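
As a side note, a minimal user space sketch to read the new counter,
assuming it shows up in /proc/vmstat under the name added below (which
requires CONFIG_VM_EVENT_COUNTERS and CONFIG_SPECULATIVE_PAGE_FAULT):

/* Illustrative sketch only: print the speculative_pgfault counter. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long long val;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "speculative_pgfault"))
			printf("speculative_pgfault %llu\n", val);
	}
	fclose(f);
	return 0;
}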

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/vm_event_item.h | 3 +++
 mm/memory.c                   | 1 +
 mm/vmstat.c                   | 5 ++++-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 5c7f010676a7..a240acc09684 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -111,6 +111,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		SWAP_RA,
 		SWAP_RA_HIT,
 #endif
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+		SPECULATIVE_PGFAULT,
+#endif
 		NR_VM_EVENT_ITEMS
 };
 
diff --git a/mm/memory.c b/mm/memory.c
index 425f07e0bf38..1cd5bc000643 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4508,6 +4508,7 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * If there is no need to retry, don't return the vma to the caller.
 	 */
 	if (ret != VM_FAULT_RETRY) {
+		count_vm_event(SPECULATIVE_PGFAULT);
 		put_vma(vmf.vma);
 		*vma = NULL;
 	}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 536332e988b8..c6b49bfa8139 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1289,7 +1289,10 @@ const char * const vmstat_text[] = {
 	"swap_ra",
 	"swap_ra_hit",
 #endif
-#endif /* CONFIG_VM_EVENTS_COUNTERS */
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
+	"speculative_pgfault"
+#endif
+#endif /* CONFIG_VM_EVENT_COUNTERS */
 };
 #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 24/25] x86/mm: add speculative pagefault handling
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (22 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 23/25] mm: add speculative page fault vmstats Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-30 18:43   ` Punit Agrawal
  2018-04-17 14:33 ` [PATCH v10 25/25] powerpc/mm: add speculative page fault Laurent Dufour
                   ` (2 subsequent siblings)
  26 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Try a speculative fault before acquiring the mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.
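
In other words, the per-architecture integration boils down to the
following pattern (simplified sketch only; the actual x86 code below also
skips the speculative path for protection key faults and accounts the
speculative fault in perf):

	if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT)) {
		fault = handle_speculative_fault(mm, address, flags, &vma);
		if (fault != VM_FAULT_RETRY)
			goto done;	/* handled without the mmap_sem */
	} else {
		vma = NULL;
	}

	down_read(&mm->mmap_sem);
	/* Reuse the VMA fetched speculatively if it is still valid. */
	if (!vma || !can_reuse_spf_vma(vma, address))
		vma = find_vma(mm, address);
	/* ... regular page fault processing under the mmap_sem ... */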

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
 handle_speculative_fault()]
[Retry with usual fault path in the case VM_ERROR is returned by
 handle_speculative_fault(). This allows signal to be delivered]
[Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Try speculative fault path only for multi threaded processes]
[Try reuse to the VMA fetch during the speculative path in case of retry]
[Call reuse_spf_or_find_vma()]
[Handle memory protection key fault]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/x86/mm/fault.c | 42 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 73bd8c95ac71..59f778386df5 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1220,7 +1220,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	struct mm_struct *mm;
 	int fault, major = 0;
 	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
-	u32 pkey;
+	u32 pkey, *pt_pkey = &pkey;
 
 	tsk = current;
 	mm = tsk->mm;
@@ -1310,6 +1310,30 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		flags |= FAULT_FLAG_INSTRUCTION;
 
 	/*
+	 * Do not try speculative page fault for kernel's pages and if
+	 * the fault was due to protection keys since it can't be resolved.
+	 */
+	if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT) &&
+	    !(error_code & X86_PF_PK)) {
+		fault = handle_speculative_fault(mm, address, flags, &vma);
+		if (fault != VM_FAULT_RETRY) {
+			perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
+			/*
+			 * Do not advertise the pkey value since we don't
+			 * know it.
+			 * This is not a problem as we checked for X86_PF_PK
+			 * earlier, so we should not handle pkey fault here,
+			 * but to be sure that mm_fault_error() callees will
+			 * not try to use it, we invalidate the pointer.
+			 */
+			pt_pkey = NULL;
+			goto done;
+		}
+	} else {
+		vma = NULL;
+	}
+
+	/*
 	 * When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in
 	 * the kernel and should generate an OOPS.  Unfortunately, in the
@@ -1342,7 +1366,8 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		might_sleep();
 	}
 
-	vma = find_vma(mm, address);
+	if (!vma || !can_reuse_spf_vma(vma, address))
+		vma = find_vma(mm, address);
 	if (unlikely(!vma)) {
 		bad_area(regs, error_code, address);
 		return;
@@ -1409,8 +1434,15 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		if (flags & FAULT_FLAG_ALLOW_RETRY) {
 			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
-			if (!fatal_signal_pending(tsk))
+			if (!fatal_signal_pending(tsk)) {
+				/*
+				 * Do not try to reuse this vma and fetch it
+				 * again since we will release the mmap_sem.
+				 */
+				if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT))
+					vma = NULL;
 				goto retry;
+			}
 		}
 
 		/* User mode? Just return to handle the fatal exception */
@@ -1423,8 +1455,10 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	}
 
 	up_read(&mm->mmap_sem);
+
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
-		mm_fault_error(regs, error_code, address, &pkey, fault);
+		mm_fault_error(regs, error_code, address, pt_pkey, fault);
 		return;
 	}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [PATCH v10 25/25] powerpc/mm: add speculative page fault
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (23 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 24/25] x86/mm: add speculative pagefault handling Laurent Dufour
@ 2018-04-17 14:33 ` Laurent Dufour
  2018-04-17 16:51 ` [PATCH v10 00/25] Speculative page faults Christopher Lameter
  2018-05-02 14:17   ` Punit Agrawal
  26 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-17 14:33 UTC (permalink / raw)
  To: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

This patch enables the speculative page fault on the PowerPC
architecture.

This will try a speculative page fault without holding the mmap_sem;
if it returns with VM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.

The speculative path is only tried for multithreaded processes as there is
no risk of contention on the mmap_sem otherwise.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/powerpc/mm/fault.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index c01d627e687a..37191147026e 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -464,6 +464,26 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (is_exec)
 		flags |= FAULT_FLAG_INSTRUCTION;
 
+	if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT)) {
+		fault = handle_speculative_fault(mm, address, flags, &vma);
+		/*
+		 * The page fault is done if VM_FAULT_RETRY is not returned.
+		 * But if the memory protection keys are active, we don't know
+		 * whether the fault is due to a key mismatch or to a
+		 * classic protection check.
+		 * To differentiate that, we would need the VMA, which we no
+		 * longer have, so let's retry with the mmap_sem held.
+		 */
+		if (fault != VM_FAULT_RETRY &&
+		    (IS_ENABLED(CONFIG_PPC_MEM_KEYS) &&
+		     fault != VM_FAULT_SIGSEGV)) {
+			perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
+			goto done;
+		}
+	} else {
+		vma = NULL;
+	}
+
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -494,7 +514,8 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 		might_sleep();
 	}
 
-	vma = find_vma(mm, address);
+	if (!vma || !can_reuse_spf_vma(vma, address))
+		vma = find_vma(mm, address);
 	if (unlikely(!vma))
 		return bad_area(regs, address);
 	if (likely(vma->vm_start <= address))
@@ -551,8 +572,15 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 			 */
 			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
-			if (!fatal_signal_pending(current))
+			if (!fatal_signal_pending(current)) {
+				/*
+				 * Do not try to reuse this vma and fetch it
+				 * again since we will release the mmap_sem.
+				 */
+				if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT))
+					vma = NULL;
 				goto retry;
+			}
 		}
 
 		/*
@@ -564,6 +592,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 
 	up_read(&current->mm->mmap_sem);
 
+done:
 	if (unlikely(fault & VM_FAULT_ERROR))
 		return mm_fault_error(regs, address, fault);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 00/25] Speculative page faults
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
                   ` (24 preceding siblings ...)
  2018-04-17 14:33 ` [PATCH v10 25/25] powerpc/mm: add speculative page fault Laurent Dufour
@ 2018-04-17 16:51 ` Christopher Lameter
  2018-05-02 14:17   ` Punit Agrawal
  26 siblings, 0 replies; 68+ messages in thread
From: Christopher Lameter @ 2018-04-17 16:51 UTC (permalink / raw)
  To: Laurent Dufour; +Cc: akpm, mhocko, Matthew Wilcox, linux-mm, Paul E. McKenney


This is part of one thread going back to 2017 when you posted the first
RFC. Guess lots of people may not see this given the age of the thread?

Could you start a new thread when posting a patchset?

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
  2018-04-17 14:33 ` [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-04-23  5:58   ` Minchan Kim
  2018-04-23 15:10     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-04-23  5:58 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Hi Laurent,

I guess it's good timing to review. I guess LSF/MM is going on, so things
might change a lot after that. :) Anyway, I grabbed some time to review.

On Tue, Apr 17, 2018 at 04:33:07PM +0200, Laurent Dufour wrote:
> This configuration variable will be used to build the code needed to
> handle speculative page fault.
> 
> By default it is turned off, and activated depending on architecture
> support, SMP and MMU.

Can we have a description here of why it depends on the architecture?

> 
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Suggested-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  mm/Kconfig | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/mm/Kconfig b/mm/Kconfig
> index d5004d82a1d6..5484dca11199 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -752,3 +752,25 @@ config GUP_BENCHMARK
>  	  performance of get_user_pages_fast().
>  
>  	  See tools/testing/selftests/vm/gup_benchmark.c
> +
> +config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> +       def_bool n
> +
> +config SPECULATIVE_PAGE_FAULT
> +       bool "Speculative page faults"
> +       default y
> +       depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> +       depends on MMU && SMP
> +       help
> +         Try to handle user space page faults without holding the mmap_sem.
> +
> +	 This should allow better concurrency for massively threaded process
> +	 since the page fault handler will not wait for other threads memory
> +	 layout change to be done, assuming that this change is done in another
> +	 part of the process's memory space. This type of page fault is named
> +	 speculative page fault.
> +
> +	 If the speculative page fault fails because of a concurrency is
> +	 detected or because underlying PMD or PTE tables are not yet
> +	 allocating, it is failing its processing and a classic page fault
> +	 is then tried.
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF
  2018-04-17 14:33 ` [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF Laurent Dufour
@ 2018-04-23  6:31   ` Minchan Kim
  2018-04-30 14:07     ` Laurent Dufour
  2018-05-10 16:15   ` vinayak menon
  1 sibling, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-04-23  6:31 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Tue, Apr 17, 2018 at 04:33:12PM +0200, Laurent Dufour wrote:
> pte_unmap_same() is making the assumption that the page table are still
> around because the mmap_sem is held.
> This is no more the case when running a speculative page fault and
> additional check must be made to ensure that the final page table are still
> there.
> 
> This is now done by calling pte_spinlock() to check for the VMA's
> consistency while locking for the page tables.
> 
> This is requiring passing a vm_fault structure to pte_unmap_same() which is
> containing all the needed parameters.
> 
> As pte_spinlock() may fail in the case of a speculative page fault, if the
> VMA has been touched in our back, pte_unmap_same() should now return 3
> cases :
> 	1. pte are the same (0)
> 	2. pte are different (VM_FAULT_PTNOTSAME)
> 	3. a VMA's changes has been detected (VM_FAULT_RETRY)
> 
> The case 2 is handled by the introduction of a new VM_FAULT flag named
> VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().

I don't see such logic in this patch.
Maybe you introduce it later? If so, please comment on it.
Or just return 0 in case of 2 without introducing VM_FAULT_PTNOTSAME.

> If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
> page fault while holding the mmap_sem.
> 
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  include/linux/mm.h |  1 +
>  mm/memory.c        | 39 ++++++++++++++++++++++++++++-----------
>  2 files changed, 29 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4d1aff80669c..714da99d77a3 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1208,6 +1208,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
>  #define VM_FAULT_NEEDDSYNC  0x2000	/* ->fault did not modify page tables
>  					 * and needs fsync() to complete (for
>  					 * synchronous page faults in DAX) */
> +#define VM_FAULT_PTNOTSAME 0x4000	/* Page table entries have changed */
>  
>  #define VM_FAULT_ERROR	(VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
>  			 VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
> diff --git a/mm/memory.c b/mm/memory.c
> index 0b9a51f80e0e..f86efcb8e268 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2309,21 +2309,29 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>   * parts, do_swap_page must check under lock before unmapping the pte and
>   * proceeding (but do_wp_page is only called after already making such a check;
>   * and do_anonymous_page can safely check later on).
> + *
> + * pte_unmap_same() returns:
> + *	0			if the PTE are the same
> + *	VM_FAULT_PTNOTSAME	if the PTE are different
> + *	VM_FAULT_RETRY		if the VMA has changed in our back during
> + *				a speculative page fault handling.
>   */
> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> -				pte_t *page_table, pte_t orig_pte)
> +static inline int pte_unmap_same(struct vm_fault *vmf)
>  {
> -	int same = 1;
> +	int ret = 0;
> +
>  #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>  	if (sizeof(pte_t) > sizeof(unsigned long)) {
> -		spinlock_t *ptl = pte_lockptr(mm, pmd);
> -		spin_lock(ptl);
> -		same = pte_same(*page_table, orig_pte);
> -		spin_unlock(ptl);
> +		if (pte_spinlock(vmf)) {
> +			if (!pte_same(*vmf->pte, vmf->orig_pte))
> +				ret = VM_FAULT_PTNOTSAME;
> +			spin_unlock(vmf->ptl);
> +		} else
> +			ret = VM_FAULT_RETRY;
>  	}
>  #endif
> -	pte_unmap(page_table);
> -	return same;
> +	pte_unmap(vmf->pte);
> +	return ret;
>  }
>  
>  static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> @@ -2912,10 +2920,19 @@ int do_swap_page(struct vm_fault *vmf)
>  	pte_t pte;
>  	int locked;
>  	int exclusive = 0;
> -	int ret = 0;
> +	int ret;
>  
> -	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
> +	ret = pte_unmap_same(vmf);
> +	if (ret) {
> +		/*
> +		 * If pte != orig_pte, this means another thread did the
> +		 * swap operation in our back.
> +		 * So nothing else to do.
> +		 */
> +		if (ret == VM_FAULT_PTNOTSAME)
> +			ret = 0;
>  		goto out;
> +	}
>  
>  	entry = pte_to_swp_entry(vmf->orig_pte);
>  	if (unlikely(non_swap_entry(entry))) {
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 08/25] mm: VMA sequence count
  2018-04-17 14:33 ` [PATCH v10 08/25] mm: VMA sequence count Laurent Dufour
@ 2018-04-23  6:42   ` Minchan Kim
  2018-04-30 15:14     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-04-23  6:42 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Tue, Apr 17, 2018 at 04:33:14PM +0200, Laurent Dufour wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> 
> Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
> counts such that we can easily test if a VMA is changed.

So, is the seqcount meant to protect modification of all the vma's attributes?

> 
> The unmap_page_range() one allows us to make assumptions about
> page-tables; when we find the seqcount hasn't changed we can assume
> page-tables are still valid.

Hmm, the seqcount covers the page tables, too.
Please describe what the seqcount is meant to protect.

> 
> The flip side is that we cannot distinguish between a vma_adjust() and
> the unmap_page_range() -- where with the former we could have
> re-checked the vma bounds against the address.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> [Port to 4.12 kernel]
> [Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
> [Introduce vm_write_* inline function depending on
>  CONFIG_SPECULATIVE_PAGE_FAULT]
> [Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
>  using vm_raw_write* functions]
> [Fix a lock dependency warning in mmap_region() when entering the error
>  path]
> [move sequence initialisation INIT_VMA()]
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  include/linux/mm.h       | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/mm_types.h |  3 +++
>  mm/memory.c              |  2 ++
>  mm/mmap.c                | 31 +++++++++++++++++++++++++++++++
>  4 files changed, 80 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index efc1248b82bd..988daf7030c9 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1264,6 +1264,9 @@ struct zap_details {
>  static inline void INIT_VMA(struct vm_area_struct *vma)
>  {
>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	seqcount_init(&vma->vm_sequence);
> +#endif
>  }
>  
>  struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> @@ -1386,6 +1389,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
>  	unmap_mapping_range(mapping, holebegin, holelen, 0);
>  }
>  
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +static inline void vm_write_begin(struct vm_area_struct *vma)
> +{
> +	write_seqcount_begin(&vma->vm_sequence);
> +}
> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
> +					 int subclass)
> +{
> +	write_seqcount_begin_nested(&vma->vm_sequence, subclass);
> +}
> +static inline void vm_write_end(struct vm_area_struct *vma)
> +{
> +	write_seqcount_end(&vma->vm_sequence);
> +}
> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
> +{
> +	raw_write_seqcount_begin(&vma->vm_sequence);
> +}
> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
> +{
> +	raw_write_seqcount_end(&vma->vm_sequence);
> +}
> +#else
> +static inline void vm_write_begin(struct vm_area_struct *vma)
> +{
> +}
> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
> +					 int subclass)
> +{
> +}
> +static inline void vm_write_end(struct vm_area_struct *vma)
> +{
> +}
> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
> +{
> +}
> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
> +{
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
>  extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
>  		void *buf, int len, unsigned int gup_flags);
>  extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 21612347d311..db5e9d630e7a 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -335,6 +335,9 @@ struct vm_area_struct {
>  	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
>  #endif
>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +	seqcount_t vm_sequence;
> +#endif
>  } __randomize_layout;
>  
>  struct core_thread {
> diff --git a/mm/memory.c b/mm/memory.c
> index f86efcb8e268..f7fed053df80 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1503,6 +1503,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>  	unsigned long next;
>  
>  	BUG_ON(addr >= end);

A comment saying that this aims at page-table stability would help.

> +	vm_write_begin(vma);
>  	tlb_start_vma(tlb, vma);
>  	pgd = pgd_offset(vma->vm_mm, addr);
>  	do {
> @@ -1512,6 +1513,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>  		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
>  	} while (pgd++, addr = next, addr != end);
>  	tlb_end_vma(tlb, vma);
> +	vm_write_end(vma);
>  }
>  
>  
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 8bd9ae1dfacc..813e49589ea1 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -692,6 +692,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>  	long adjust_next = 0;
>  	int remove_next = 0;
>  
> +	/*
> +	 * Why using vm_raw_write*() functions here to avoid lockdep's warning ?
> +	 *
> +	 * Lockdep is complaining about a theoretical lock dependency, involving
> +	 * 3 locks:
> +	 *   mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
> +	 *
> +	 * Here are the major path leading to this dependency :
> +	 *  1. __vma_adjust() mmap_sem  -> vm_sequence -> i_mmap_rwsem
> +	 *  2. move_vmap() mmap_sem -> vm_sequence -> fs_reclaim
> +	 *  3. __alloc_pages_nodemask() fs_reclaim -> i_mmap_rwsem
> +	 *  4. unmap_mapping_range() i_mmap_rwsem -> vm_sequence
> +	 *
> +	 * So there is no way to solve this easily, especially because in
> +	 * unmap_mapping_range() the i_mmap_rwsem is grabbed while the impacted
> +	 * VMAs are not yet known.
> +	 * However, the way the vm_seq is used is guaranteeing that we will
> +	 * never block on it since we just check for its value and never wait
> +	 * for it to move, see vma_has_changed() and handle_speculative_fault().
> +	 */
> +	vm_raw_write_begin(vma);
> +	if (next)
> +		vm_raw_write_begin(next);
> +
>  	if (next && !insert) {
>  		struct vm_area_struct *exporter = NULL, *importer = NULL;
>  
> @@ -902,6 +926,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>  			anon_vma_merge(vma, next);
>  		mm->map_count--;
>  		mpol_put(vma_policy(next));
> +		vm_raw_write_end(next);
>  		kmem_cache_free(vm_area_cachep, next);
>  		/*
>  		 * In mprotect's case 6 (see comments on vma_merge),
> @@ -916,6 +941,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>  			 * "vma->vm_next" gap must be updated.
>  			 */
>  			next = vma->vm_next;
> +			if (next)
> +				vm_raw_write_begin(next);
>  		} else {
>  			/*
>  			 * For the scope of the comment "next" and
> @@ -962,6 +989,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>  	if (insert && file)
>  		uprobe_mmap(insert);
>  
> +	if (next && next != vma)
> +		vm_raw_write_end(next);
> +	vm_raw_write_end(vma);
> +
>  	validate_mm(mm);
>  
>  	return 0;
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 09/25] mm: protect VMA modifications using VMA sequence count
  2018-04-17 14:33 ` [PATCH v10 09/25] mm: protect VMA modifications using " Laurent Dufour
@ 2018-04-23  7:19   ` Minchan Kim
  2018-05-14 15:25     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-04-23  7:19 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Tue, Apr 17, 2018 at 04:33:15PM +0200, Laurent Dufour wrote:
> The VMA sequence count has been introduced to allow fast detection of
> VMA modification when running a page fault handler without holding
> the mmap_sem.
> 
> This patch provides protection against the VMA modification done in :
> 	- madvise()
> 	- mpol_rebind_policy()
> 	- vma_replace_policy()
> 	- change_prot_numa()
> 	- mlock(), munlock()
> 	- mprotect()
> 	- mmap_region()
> 	- collapse_huge_page()
> 	- userfaultd registering services
> 
> In addition, VMA fields which will be read during the speculative fault
> path needs to be written using WRITE_ONCE to prevent write to be split
> and intermediate values to be pushed to other CPUs.
> 
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  fs/proc/task_mmu.c |  5 ++++-
>  fs/userfaultfd.c   | 17 +++++++++++++----
>  mm/khugepaged.c    |  3 +++
>  mm/madvise.c       |  6 +++++-
>  mm/mempolicy.c     | 51 ++++++++++++++++++++++++++++++++++-----------------
>  mm/mlock.c         | 13 ++++++++-----
>  mm/mmap.c          | 22 +++++++++++++---------
>  mm/mprotect.c      |  4 +++-
>  mm/swap_state.c    |  8 ++++++--
>  9 files changed, 89 insertions(+), 40 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index c486ad4b43f0..aeb417f28839 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  					goto out_mm;
>  				}
>  				for (vma = mm->mmap; vma; vma = vma->vm_next) {
> -					vma->vm_flags &= ~VM_SOFTDIRTY;
> +					vm_write_begin(vma);
> +					WRITE_ONCE(vma->vm_flags,
> +						   vma->vm_flags & ~VM_SOFTDIRTY);
>  					vma_set_page_prot(vma);
> +					vm_write_end(vma);

trivial:

I think it's tricky to maintain the rule that VMA fields read during SPF must
be accessed with READ_ONCE/WRITE_ONCE. I think we need some accessors to
read/write them rather than raw accesses as in vma_set_page_prot. Maybe an
spf prefix would be helpful.

	vma_spf_set_value(vma, vm_flags, val);

We could also add some markers in vm_area_struct's fields to indicate that
people shouldn't access those fields directly.
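
A rough sketch of the kind of accessors being suggested (names and form are
purely illustrative):

/*
 * Illustrative sketch only: wrappers for VMA fields which the speculative
 * page fault path reads without the mmap_sem, so writes must not be torn.
 */
#define vma_spf_set_value(vma, field, val)	WRITE_ONCE((vma)->field, (val))
#define vma_spf_read_value(vma, field)		READ_ONCE((vma)->field)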

Just a thought.


>  				}
>  				downgrade_write(&mm->mmap_sem);


> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index fe079756bb18..8a8a402ed59f 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -575,6 +575,10 @@ static unsigned long swapin_nr_pages(unsigned long offset)
>   * the readahead.
>   *
>   * Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
> + * This is needed to ensure the VMA will not be freed in our back. In the case
> + * of the speculative page fault handler, this cannot happen, even if we don't
> + * hold the mmap_sem. Callees are assumed to take care of reading VMA's fields

I guess a reader would be curious about *why* this is safe with SPF.
A comment about the why would be helpful for reviewers.

> + * using READ_ONCE() to read consistent values.
>   */
>  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  				struct vm_fault *vmf)
> @@ -668,9 +672,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
>  				     unsigned long *start,
>  				     unsigned long *end)
>  {
> -	*start = max3(lpfn, PFN_DOWN(vma->vm_start),
> +	*start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
>  		      PFN_DOWN(faddr & PMD_MASK));
> -	*end = min3(rpfn, PFN_DOWN(vma->vm_end),
> +	*end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
>  		    PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
>  }
>  
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure
  2018-04-17 14:33 ` [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure Laurent Dufour
@ 2018-04-23  7:42   ` Minchan Kim
  2018-05-03 12:25     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-04-23  7:42 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Tue, Apr 17, 2018 at 04:33:18PM +0200, Laurent Dufour wrote:
> When handling speculative page fault, the vma->vm_flags and
> vma->vm_page_prot fields are read once the page table lock is released. So
> there is no more guarantee that these fields would not change in our back.
> They will be saved in the vm_fault structure before the VMA is checked for
> changes.

Sorry, I don't understand this.
If it is changed under us, what happens? If it's critical, why can't we
check with the seqcount?
Clearly, I'm not understanding the logic here. However, it's a global
change even without CONFIG_SPF, so I want to be more careful.
It would be better to describe why we need to snapshot those values
into vm_fault rather than preventing the race.

Thanks.

> 
> This patch also set the fields in hugetlb_no_page() and
> __collapse_huge_page_swapin even if it is not need for the callee.
> 
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  include/linux/mm.h | 10 ++++++++--
>  mm/huge_memory.c   |  6 +++---
>  mm/hugetlb.c       |  2 ++
>  mm/khugepaged.c    |  2 ++
>  mm/memory.c        | 50 ++++++++++++++++++++++++++------------------------
>  mm/migrate.c       |  2 +-
>  6 files changed, 42 insertions(+), 30 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f6edd15563bc..c65205c8c558 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -367,6 +367,12 @@ struct vm_fault {
>  					 * page table to avoid allocation from
>  					 * atomic context.
>  					 */
> +	/*
> +	 * These entries are required when handling speculative page fault.
> +	 * This way the page handling is done using consistent field values.
> +	 */
> +	unsigned long vma_flags;
> +	pgprot_t vma_page_prot;
>  };
>  
>  /* page entry size for vm->huge_fault() */
> @@ -687,9 +693,9 @@ void free_compound_page(struct page *page);
>   * pte_mkwrite.  But get_user_pages can cause write faults for mappings
>   * that do not have writing enabled, when used by access_process_vm.
>   */
> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
>  {
> -	if (likely(vma->vm_flags & VM_WRITE))
> +	if (likely(vma_flags & VM_WRITE))
>  		pte = pte_mkwrite(pte);
>  	return pte;
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a3a1815f8e11..da2afda67e68 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1194,8 +1194,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
>  
>  	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
>  		pte_t entry;
> -		entry = mk_pte(pages[i], vma->vm_page_prot);
> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +		entry = mk_pte(pages[i], vmf->vma_page_prot);
> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>  		memcg = (void *)page_private(pages[i]);
>  		set_page_private(pages[i], 0);
>  		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
> @@ -2168,7 +2168,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  				entry = pte_swp_mksoft_dirty(entry);
>  		} else {
>  			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
> -			entry = maybe_mkwrite(entry, vma);
> +			entry = maybe_mkwrite(entry, vma->vm_flags);
>  			if (!write)
>  				entry = pte_wrprotect(entry);
>  			if (!young)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 218679138255..774864153407 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3718,6 +3718,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  				.vma = vma,
>  				.address = address,
>  				.flags = flags,
> +				.vma_flags = vma->vm_flags,
> +				.vma_page_prot = vma->vm_page_prot,
>  				/*
>  				 * Hard to debug if it ends up being
>  				 * used by a callee that assumes
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 0b28af4b950d..2b02a9f9589e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -887,6 +887,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>  		.flags = FAULT_FLAG_ALLOW_RETRY,
>  		.pmd = pmd,
>  		.pgoff = linear_page_index(vma, address),
> +		.vma_flags = vma->vm_flags,
> +		.vma_page_prot = vma->vm_page_prot,
>  	};
>  
>  	/* we only decide to swapin, if there is enough young ptes */
> diff --git a/mm/memory.c b/mm/memory.c
> index f76f5027d251..2fb9920e06a5 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1826,7 +1826,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>  out_mkwrite:
>  	if (mkwrite) {
>  		entry = pte_mkyoung(entry);
> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +		entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
>  	}
>  
>  	set_pte_at(mm, addr, pte, entry);
> @@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
>  
>  	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>  	entry = pte_mkyoung(vmf->orig_pte);
> -	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +	entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>  	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
>  		update_mmu_cache(vma, vmf->address, vmf->pte);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -2548,8 +2548,8 @@ static int wp_page_copy(struct vm_fault *vmf)
>  			inc_mm_counter_fast(mm, MM_ANONPAGES);
>  		}
>  		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> -		entry = mk_pte(new_page, vma->vm_page_prot);
> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +		entry = mk_pte(new_page, vmf->vma_page_prot);
> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>  		/*
>  		 * Clear the pte entry and flush it first, before updating the
>  		 * pte with the new entry. This will avoid a race condition
> @@ -2614,7 +2614,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>  		 * Don't let another task, with possibly unlocked vma,
>  		 * keep the mlocked page.
>  		 */
> -		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
> +		if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
>  			lock_page(old_page);	/* LRU manipulation */
>  			if (PageMlocked(old_page))
>  				munlock_vma_page(old_page);
> @@ -2650,7 +2650,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>   */
>  int finish_mkwrite_fault(struct vm_fault *vmf)
>  {
> -	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
> +	WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
>  	if (!pte_map_lock(vmf))
>  		return VM_FAULT_RETRY;
>  	/*
> @@ -2752,7 +2752,7 @@ static int do_wp_page(struct vm_fault *vmf)
>  		 * We should not cow pages in a shared writeable mapping.
>  		 * Just mark the pages writable and/or call ops->pfn_mkwrite.
>  		 */
> -		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
> +		if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
>  				     (VM_WRITE|VM_SHARED))
>  			return wp_pfn_shared(vmf);
>  
> @@ -2799,7 +2799,7 @@ static int do_wp_page(struct vm_fault *vmf)
>  			return VM_FAULT_WRITE;
>  		}
>  		unlock_page(vmf->page);
> -	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
> +	} else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
>  					(VM_WRITE|VM_SHARED))) {
>  		return wp_page_shared(vmf);
>  	}
> @@ -3078,9 +3078,9 @@ int do_swap_page(struct vm_fault *vmf)
>  
>  	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>  	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
> -	pte = mk_pte(page, vma->vm_page_prot);
> +	pte = mk_pte(page, vmf->vma_page_prot);
>  	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> -		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> +		pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
>  		vmf->flags &= ~FAULT_FLAG_WRITE;
>  		ret |= VM_FAULT_WRITE;
>  		exclusive = RMAP_EXCLUSIVE;
> @@ -3105,7 +3105,7 @@ int do_swap_page(struct vm_fault *vmf)
>  
>  	swap_free(entry);
>  	if (mem_cgroup_swap_full(page) ||
> -	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> +	    (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
>  		try_to_free_swap(page);
>  	unlock_page(page);
>  	if (page != swapcache && swapcache) {
> @@ -3163,7 +3163,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>  	pte_t entry;
>  
>  	/* File mapping without ->vm_ops ? */
> -	if (vma->vm_flags & VM_SHARED)
> +	if (vmf->vma_flags & VM_SHARED)
>  		return VM_FAULT_SIGBUS;
>  
>  	/*
> @@ -3187,7 +3187,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>  	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
>  			!mm_forbids_zeropage(vma->vm_mm)) {
>  		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
> -						vma->vm_page_prot));
> +						vmf->vma_page_prot));
>  		if (!pte_map_lock(vmf))
>  			return VM_FAULT_RETRY;
>  		if (!pte_none(*vmf->pte))
> @@ -3220,8 +3220,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
>  	 */
>  	__SetPageUptodate(page);
>  
> -	entry = mk_pte(page, vma->vm_page_prot);
> -	if (vma->vm_flags & VM_WRITE)
> +	entry = mk_pte(page, vmf->vma_page_prot);
> +	if (vmf->vma_flags & VM_WRITE)
>  		entry = pte_mkwrite(pte_mkdirty(entry));
>  
>  	if (!pte_map_lock(vmf)) {
> @@ -3418,7 +3418,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
>  	for (i = 0; i < HPAGE_PMD_NR; i++)
>  		flush_icache_page(vma, page + i);
>  
> -	entry = mk_huge_pmd(page, vma->vm_page_prot);
> +	entry = mk_huge_pmd(page, vmf->vma_page_prot);
>  	if (write)
>  		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>  
> @@ -3492,11 +3492,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
>  		return VM_FAULT_NOPAGE;
>  
>  	flush_icache_page(vma, page);
> -	entry = mk_pte(page, vma->vm_page_prot);
> +	entry = mk_pte(page, vmf->vma_page_prot);
>  	if (write)
> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>  	/* copy-on-write page */
> -	if (write && !(vma->vm_flags & VM_SHARED)) {
> +	if (write && !(vmf->vma_flags & VM_SHARED)) {
>  		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>  		page_add_new_anon_rmap(page, vma, vmf->address, false);
>  		mem_cgroup_commit_charge(page, memcg, false, false);
> @@ -3535,7 +3535,7 @@ int finish_fault(struct vm_fault *vmf)
>  
>  	/* Did we COW the page? */
>  	if ((vmf->flags & FAULT_FLAG_WRITE) &&
> -	    !(vmf->vma->vm_flags & VM_SHARED))
> +	    !(vmf->vma_flags & VM_SHARED))
>  		page = vmf->cow_page;
>  	else
>  		page = vmf->page;
> @@ -3789,7 +3789,7 @@ static int do_fault(struct vm_fault *vmf)
>  		ret = VM_FAULT_SIGBUS;
>  	else if (!(vmf->flags & FAULT_FLAG_WRITE))
>  		ret = do_read_fault(vmf);
> -	else if (!(vma->vm_flags & VM_SHARED))
> +	else if (!(vmf->vma_flags & VM_SHARED))
>  		ret = do_cow_fault(vmf);
>  	else
>  		ret = do_shared_fault(vmf);
> @@ -3846,7 +3846,7 @@ static int do_numa_page(struct vm_fault *vmf)
>  	 * accessible ptes, some can allow access by kernel mode.
>  	 */
>  	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
> -	pte = pte_modify(pte, vma->vm_page_prot);
> +	pte = pte_modify(pte, vmf->vma_page_prot);
>  	pte = pte_mkyoung(pte);
>  	if (was_writable)
>  		pte = pte_mkwrite(pte);
> @@ -3880,7 +3880,7 @@ static int do_numa_page(struct vm_fault *vmf)
>  	 * Flag if the page is shared between multiple address spaces. This
>  	 * is later used when determining whether to group tasks together
>  	 */
> -	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
> +	if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
>  		flags |= TNF_SHARED;
>  
>  	last_cpupid = page_cpupid_last(page);
> @@ -3925,7 +3925,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
>  		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
>  
>  	/* COW handled on pte level: split pmd */
> -	VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
> +	VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
>  	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
>  
>  	return VM_FAULT_FALLBACK;
> @@ -4072,6 +4072,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>  		.flags = flags,
>  		.pgoff = linear_page_index(vma, address),
>  		.gfp_mask = __get_fault_gfp_mask(vma),
> +		.vma_flags = vma->vm_flags,
> +		.vma_page_prot = vma->vm_page_prot,
>  	};
>  	unsigned int dirty = flags & FAULT_FLAG_WRITE;
>  	struct mm_struct *mm = vma->vm_mm;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index bb6367d70a3e..44d7007cfc1c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
>  		 */
>  		entry = pte_to_swp_entry(*pvmw.pte);
>  		if (is_write_migration_entry(entry))
> -			pte = maybe_mkwrite(pte, vma);
> +			pte = maybe_mkwrite(pte, vma->vm_flags);
>  
>  		if (unlikely(is_zone_device_page(new))) {
>  			if (is_device_private_page(new)) {
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT
  2018-04-23  5:58   ` Minchan Kim
@ 2018-04-23 15:10     ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-04-23 15:10 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 23/04/2018 07:58, Minchan Kim wrote:
> Hi Laurent,
> 
> I guess it's good timing to review. I guess LSF/MM is going on, so things
> might change a lot after that. :) Anyway, I grabbed some time to review.

Hi,

Thanks a lot for reviewing this series.

> On Tue, Apr 17, 2018 at 04:33:07PM +0200, Laurent Dufour wrote:
>> This configuration variable will be used to build the code needed to
>> handle speculative page fault.
>>
>> By default it is turned off, and activated depending on architecture
>> support, SMP and MMU.
> 
> Can we have a description here of why it depends on the architecture?

The reason is that the per-architecture page fault code must handle the
speculative page fault. This is done in this series for x86 and ppc64.

I'll make it explicit here.

> 
>>
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Suggested-by: David Rientjes <rientjes@google.com>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  mm/Kconfig | 22 ++++++++++++++++++++++
>>  1 file changed, 22 insertions(+)
>>
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index d5004d82a1d6..5484dca11199 100644
>> --- a/mm/Kconfig
>> +++ b/mm/Kconfig
>> @@ -752,3 +752,25 @@ config GUP_BENCHMARK
>>  	  performance of get_user_pages_fast().
>>  
>>  	  See tools/testing/selftests/vm/gup_benchmark.c
>> +
>> +config ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>> +       def_bool n
>> +
>> +config SPECULATIVE_PAGE_FAULT
>> +       bool "Speculative page faults"
>> +       default y
>> +       depends on ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>> +       depends on MMU && SMP
>> +       help
>> +         Try to handle user space page faults without holding the mmap_sem.
>> +
>> +	 This should allow better concurrency for massively threaded process
>> +	 since the page fault handler will not wait for other threads memory
>> +	 layout change to be done, assuming that this change is done in another
>> +	 part of the process's memory space. This type of page fault is named
>> +	 speculative page fault.
>> +
>> +	 If the speculative page fault fails because of a concurrency is
>> +	 detected or because underlying PMD or PTE tables are not yet
>> +	 allocating, it is failing its processing and a classic page fault
>> +	 is then tried.
>> -- 
>> 2.7.4
>>
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF
  2018-04-23  6:31   ` Minchan Kim
@ 2018-04-30 14:07     ` Laurent Dufour
  2018-05-01 13:04       ` Minchan Kim
  0 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-30 14:07 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 23/04/2018 08:31, Minchan Kim wrote:
> On Tue, Apr 17, 2018 at 04:33:12PM +0200, Laurent Dufour wrote:
>> pte_unmap_same() is making the assumption that the page table are still
>> around because the mmap_sem is held.
>> This is no more the case when running a speculative page fault and
>> additional check must be made to ensure that the final page table are still
>> there.
>>
>> This is now done by calling pte_spinlock() to check for the VMA's
>> consistency while locking for the page tables.
>>
>> This is requiring passing a vm_fault structure to pte_unmap_same() which is
>> containing all the needed parameters.
>>
>> As pte_spinlock() may fail in the case of a speculative page fault, if the
>> VMA has been touched in our back, pte_unmap_same() should now return 3
>> cases :
>> 	1. pte are the same (0)
>> 	2. pte are different (VM_FAULT_PTNOTSAME)
>> 	3. a VMA's changes has been detected (VM_FAULT_RETRY)
>>
>> The case 2 is handled by the introduction of a new VM_FAULT flag named
>> VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
> 
> I don't see such logic in this patch.
> Maybe you introduce it later? If so, please comment on it.
> Or just return 0 in case of 2 without introducing VM_FAULT_PTNOTSAME.

Later in the series, pte_spinlock() will check for VMA changes and may fail.
Handling the 3 cases presented above will then be required.

I could move this handling later in the series, but I'm wondering whether that
would really make it easier to read.
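
To give an idea, here is a rough sketch of what the speculative variant of
pte_spinlock() ends up doing later in the series (simplified, not the exact
code; FAULT_FLAG_SPECULATIVE and vma_has_changed() come from other patches of
this series):

	static bool pte_spinlock(struct vm_fault *vmf)
	{
		/* Classic path: the mmap_sem is held, just take the lock. */
		if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
			vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
			spin_lock(vmf->ptl);
			return true;
		}

		/* Speculative path: do not wait, and re-check the VMA. */
		vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
		if (!spin_trylock(vmf->ptl))
			return false;
		if (vma_has_changed(vmf)) {
			/* The VMA's seqcount moved: abort the SPF. */
			spin_unlock(vmf->ptl);
			return false;
		}
		return true;
	}

A failure here is what pte_unmap_same() turns into VM_FAULT_RETRY.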

> 
>> If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
>> page fault while holding the mmap_sem.
>>
>> Acked-by: David Rientjes <rientjes@google.com>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  include/linux/mm.h |  1 +
>>  mm/memory.c        | 39 ++++++++++++++++++++++++++++-----------
>>  2 files changed, 29 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 4d1aff80669c..714da99d77a3 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1208,6 +1208,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
>>  #define VM_FAULT_NEEDDSYNC  0x2000	/* ->fault did not modify page tables
>>  					 * and needs fsync() to complete (for
>>  					 * synchronous page faults in DAX) */
>> +#define VM_FAULT_PTNOTSAME 0x4000	/* Page table entries have changed */
>>  
>>  #define VM_FAULT_ERROR	(VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
>>  			 VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 0b9a51f80e0e..f86efcb8e268 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2309,21 +2309,29 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
>>   * parts, do_swap_page must check under lock before unmapping the pte and
>>   * proceeding (but do_wp_page is only called after already making such a check;
>>   * and do_anonymous_page can safely check later on).
>> + *
>> + * pte_unmap_same() returns:
>> + *	0			if the PTE are the same
>> + *	VM_FAULT_PTNOTSAME	if the PTE are different
>> + *	VM_FAULT_RETRY		if the VMA has changed in our back during
>> + *				a speculative page fault handling.
>>   */
>> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>> -				pte_t *page_table, pte_t orig_pte)
>> +static inline int pte_unmap_same(struct vm_fault *vmf)
>>  {
>> -	int same = 1;
>> +	int ret = 0;
>> +
>>  #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>>  	if (sizeof(pte_t) > sizeof(unsigned long)) {
>> -		spinlock_t *ptl = pte_lockptr(mm, pmd);
>> -		spin_lock(ptl);
>> -		same = pte_same(*page_table, orig_pte);
>> -		spin_unlock(ptl);
>> +		if (pte_spinlock(vmf)) {
>> +			if (!pte_same(*vmf->pte, vmf->orig_pte))
>> +				ret = VM_FAULT_PTNOTSAME;
>> +			spin_unlock(vmf->ptl);
>> +		} else
>> +			ret = VM_FAULT_RETRY;
>>  	}
>>  #endif
>> -	pte_unmap(page_table);
>> -	return same;
>> +	pte_unmap(vmf->pte);
>> +	return ret;
>>  }
>>  
>>  static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
>> @@ -2912,10 +2920,19 @@ int do_swap_page(struct vm_fault *vmf)
>>  	pte_t pte;
>>  	int locked;
>>  	int exclusive = 0;
>> -	int ret = 0;
>> +	int ret;
>>  
>> -	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
>> +	ret = pte_unmap_same(vmf);
>> +	if (ret) {
>> +		/*
>> +		 * If pte != orig_pte, this means another thread did the
>> +		 * swap operation in our back.
>> +		 * So nothing else to do.
>> +		 */
>> +		if (ret == VM_FAULT_PTNOTSAME)
>> +			ret = 0;
>>  		goto out;
>> +	}
>>  
>>  	entry = pte_to_swp_entry(vmf->orig_pte);
>>  	if (unlikely(non_swap_entry(entry))) {
>> -- 
>> 2.7.4
>>
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 08/25] mm: VMA sequence count
  2018-04-23  6:42   ` Minchan Kim
@ 2018-04-30 15:14     ` Laurent Dufour
  2018-05-01 13:16       ` Minchan Kim
  0 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-04-30 15:14 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86



On 23/04/2018 08:42, Minchan Kim wrote:
> On Tue, Apr 17, 2018 at 04:33:14PM +0200, Laurent Dufour wrote:
>> From: Peter Zijlstra <peterz@infradead.org>
>>
>> Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
>> counts such that we can easily test if a VMA is changed.
> 
> So, seqcount is to protect modifying all attributes of vma?

The seqcount is used to protect the fields that are used during the speculative
page fault, such as the VMA's boundaries and protections.

>>
>> The unmap_page_range() one allows us to make assumptions about
>> page-tables; when we find the seqcount hasn't changed we can assume
>> page-tables are still valid.
> 
> Hmm, seqcount covers page-table, too.
> Please describe what the seqcount want to protect.

The calls to vm_write_begin/end() in unmap_page_range() are used to detect when
a VMA is being unmapped, so that new page faults are not satisfied for this
VMA. This protects the VMA unmapping operation, not the page tables
themselves.
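
In other words, the write side wraps the VMA modification (vma_adjust(),
unmap_page_range()) with vm_write_begin()/vm_write_end(), and the speculative
page fault handler only samples and re-checks the counter, along these lines
(simplified sketch, not the exact code of the later patches):

	/* When the VMA is fetched, at the start of the speculative fault. */
	seq = raw_read_seqcount(&vma->vm_sequence);
	if (seq & 1)
		/* A writer is in progress: fall back to the classic fault. */
		return VM_FAULT_RETRY;
	vmf.sequence = seq;

	/* ... speculative processing using the snapshotted VMA fields ... */

	/* Before committing, typically with the PTE lock held. */
	if (read_seqcount_retry(&vma->vm_sequence, vmf.sequence))
		/* The VMA was adjusted or unmapped in between: abort. */
		return VM_FAULT_RETRY;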

>>
>> The flip side is that we cannot distinguish between a vma_adjust() and
>> the unmap_page_range() -- where with the former we could have
>> re-checked the vma bounds against the address.
>>
>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>
>> [Port to 4.12 kernel]
>> [Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
>> [Introduce vm_write_* inline function depending on
>>  CONFIG_SPECULATIVE_PAGE_FAULT]
>> [Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
>>  using vm_raw_write* functions]
>> [Fix a lock dependency warning in mmap_region() when entering the error
>>  path]
>> [move sequence initialisation INIT_VMA()]
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  include/linux/mm.h       | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>  include/linux/mm_types.h |  3 +++
>>  mm/memory.c              |  2 ++
>>  mm/mmap.c                | 31 +++++++++++++++++++++++++++++++
>>  4 files changed, 80 insertions(+)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index efc1248b82bd..988daf7030c9 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1264,6 +1264,9 @@ struct zap_details {
>>  static inline void INIT_VMA(struct vm_area_struct *vma)
>>  {
>>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +	seqcount_init(&vma->vm_sequence);
>> +#endif
>>  }
>>  
>>  struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>> @@ -1386,6 +1389,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
>>  	unmap_mapping_range(mapping, holebegin, holelen, 0);
>>  }
>>  
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +static inline void vm_write_begin(struct vm_area_struct *vma)
>> +{
>> +	write_seqcount_begin(&vma->vm_sequence);
>> +}
>> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
>> +					 int subclass)
>> +{
>> +	write_seqcount_begin_nested(&vma->vm_sequence, subclass);
>> +}
>> +static inline void vm_write_end(struct vm_area_struct *vma)
>> +{
>> +	write_seqcount_end(&vma->vm_sequence);
>> +}
>> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
>> +{
>> +	raw_write_seqcount_begin(&vma->vm_sequence);
>> +}
>> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
>> +{
>> +	raw_write_seqcount_end(&vma->vm_sequence);
>> +}
>> +#else
>> +static inline void vm_write_begin(struct vm_area_struct *vma)
>> +{
>> +}
>> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
>> +					 int subclass)
>> +{
>> +}
>> +static inline void vm_write_end(struct vm_area_struct *vma)
>> +{
>> +}
>> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
>> +{
>> +}
>> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
>> +{
>> +}
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>> +
>>  extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
>>  		void *buf, int len, unsigned int gup_flags);
>>  extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index 21612347d311..db5e9d630e7a 100644
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>> @@ -335,6 +335,9 @@ struct vm_area_struct {
>>  	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
>>  #endif
>>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +	seqcount_t vm_sequence;
>> +#endif
>>  } __randomize_layout;
>>  
>>  struct core_thread {
>> diff --git a/mm/memory.c b/mm/memory.c
>> index f86efcb8e268..f7fed053df80 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1503,6 +1503,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>>  	unsigned long next;
>>  
>>  	BUG_ON(addr >= end);
> 
> The comment about saying it aims for page-table stability will help.

A comment may be added mentioning that we use the seqcount to indicate that the
VMA is being modified, here unmapped. But there is no real page table
protection, and I think it would be confusing to talk about page table
stability here.
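
Something along these lines could go above the vm_write_begin() call, for
instance (wording open to discussion):

	/*
	 * Mark the VMA as being modified (here: unmapped) so that a
	 * concurrent speculative page fault will see the seqcount move
	 * and fall back to the classic path. This does not, by itself,
	 * provide any page table protection.
	 */
	vm_write_begin(vma);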

> 
>> +	vm_write_begin(vma);
>>  	tlb_start_vma(tlb, vma);
>>  	pgd = pgd_offset(vma->vm_mm, addr);
>>  	do {
>> @@ -1512,6 +1513,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>>  		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
>>  	} while (pgd++, addr = next, addr != end);
>>  	tlb_end_vma(tlb, vma);
>> +	vm_write_end(vma);
>>  }
>>  
>>  
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 8bd9ae1dfacc..813e49589ea1 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -692,6 +692,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  	long adjust_next = 0;
>>  	int remove_next = 0;
>>  
>> +	/*
>> +	 * Why using vm_raw_write*() functions here to avoid lockdep's warning ?
>> +	 *
>> +	 * Locked is complaining about a theoretical lock dependency, involving
>> +	 * 3 locks:
>> +	 *   mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
>> +	 *
>> +	 * Here are the major path leading to this dependency :
>> +	 *  1. __vma_adjust() mmap_sem  -> vm_sequence -> i_mmap_rwsem
>> +	 *  2. move_vmap() mmap_sem -> vm_sequence -> fs_reclaim
>> +	 *  3. __alloc_pages_nodemask() fs_reclaim -> i_mmap_rwsem
>> +	 *  4. unmap_mapping_range() i_mmap_rwsem -> vm_sequence
>> +	 *
>> +	 * So there is no way to solve this easily, especially because in
>> +	 * unmap_mapping_range() the i_mmap_rwsem is grab while the impacted
>> +	 * VMAs are not yet known.
>> +	 * However, the way the vm_seq is used is guarantying that we will
>> +	 * never block on it since we just check for its value and never wait
>> +	 * for it to move, see vma_has_changed() and handle_speculative_fault().
>> +	 */
>> +	vm_raw_write_begin(vma);
>> +	if (next)
>> +		vm_raw_write_begin(next);
>> +
>>  	if (next && !insert) {
>>  		struct vm_area_struct *exporter = NULL, *importer = NULL;
>>  
>> @@ -902,6 +926,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  			anon_vma_merge(vma, next);
>>  		mm->map_count--;
>>  		mpol_put(vma_policy(next));
>> +		vm_raw_write_end(next);
>>  		kmem_cache_free(vm_area_cachep, next);
>>  		/*
>>  		 * In mprotect's case 6 (see comments on vma_merge),
>> @@ -916,6 +941,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  			 * "vma->vm_next" gap must be updated.
>>  			 */
>>  			next = vma->vm_next;
>> +			if (next)
>> +				vm_raw_write_begin(next);
>>  		} else {
>>  			/*
>>  			 * For the scope of the comment "next" and
>> @@ -962,6 +989,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  	if (insert && file)
>>  		uprobe_mmap(insert);
>>  
>> +	if (next && next != vma)
>> +		vm_raw_write_end(next);
>> +	vm_raw_write_end(vma);
>> +
>>  	validate_mm(mm);
>>  
>>  	return 0;
>> -- 
>> 2.7.4
>>
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 24/25] x86/mm: add speculative pagefault handling
  2018-04-17 14:33 ` [PATCH v10 24/25] x86/mm: add speculative pagefault handling Laurent Dufour
@ 2018-04-30 18:43   ` Punit Agrawal
  2018-05-03 14:59     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Punit Agrawal @ 2018-04-30 18:43 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Hi Laurent,

I am looking to add support for speculative page fault handling to
arm64 (effectively porting this patch) and had a few questions.
Apologies if I've missed an obvious explanation for my queries. I'm
jumping in a bit late to the discussion.

On Tue, Apr 17, 2018 at 3:33 PM, Laurent Dufour
<ldufour@linux.vnet.ibm.com> wrote:
> From: Peter Zijlstra <peterz@infradead.org>
>
> Try a speculative fault before acquiring mmap_sem, if it returns with
> VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
> traditional fault.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> [Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
>  handle_speculative_fault()]
> [Retry with usual fault path in the case VM_ERROR is returned by
>  handle_speculative_fault(). This allows signal to be delivered]
> [Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
> [Try speculative fault path only for multi threaded processes]
> [Try reuse to the VMA fetch during the speculative path in case of retry]
> [Call reuse_spf_or_find_vma()]
> [Handle memory protection key fault]
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  arch/x86/mm/fault.c | 42 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 38 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 73bd8c95ac71..59f778386df5 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -1220,7 +1220,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>         struct mm_struct *mm;
>         int fault, major = 0;
>         unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
> -       u32 pkey;
> +       u32 pkey, *pt_pkey = &pkey;
>
>         tsk = current;
>         mm = tsk->mm;
> @@ -1310,6 +1310,30 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>                 flags |= FAULT_FLAG_INSTRUCTION;
>
>         /*
> +        * Do not try speculative page fault for kernel's pages and if
> +        * the fault was due to protection keys since it can't be resolved.
> +        */
> +       if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT) &&
> +           !(error_code & X86_PF_PK)) {

You can simplify this condition by dropping the IS_ENABLED() check as
you already provide an alternate implementation of
handle_speculative_fault() when CONFIG_SPECULATIVE_PAGE_FAULT is not
defined.

> +               fault = handle_speculative_fault(mm, address, flags, &vma);
> +               if (fault != VM_FAULT_RETRY) {
> +                       perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
> +                       /*
> +                        * Do not advertise for the pkey value since we don't
> +                        * know it.
> +                        * This is not a matter as we checked for X86_PF_PK
> +                        * earlier, so we should not handle pkey fault here,
> +                        * but to be sure that mm_fault_error() callees will
> +                        * not try to use it, we invalidate the pointer.
> +                        */
> +                       pt_pkey = NULL;
> +                       goto done;
> +               }
> +       } else {
> +               vma = NULL;
> +       }

The else part can be dropped if vma is initialised to NULL when it is
declared at the top of the function.
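
Putting this and the IS_ENABLED() point above together, the call site could
then look roughly like this (untested sketch):

	struct vm_area_struct *vma = NULL;
	...
	/*
	 * Do not try speculative page fault for kernel's pages and if
	 * the fault was due to protection keys since it can't be resolved.
	 */
	if (!(error_code & X86_PF_PK)) {
		fault = handle_speculative_fault(mm, address, flags, &vma);
		if (fault != VM_FAULT_RETRY) {
			perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
			/* pkey is unknown here, see the comment above. */
			pt_pkey = NULL;
			goto done;
		}
	}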

> +
> +       /*
>          * When running in the kernel we expect faults to occur only to
>          * addresses in user space.  All other faults represent errors in
>          * the kernel and should generate an OOPS.  Unfortunately, in the
> @@ -1342,7 +1366,8 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>                 might_sleep();
>         }
>
> -       vma = find_vma(mm, address);
> +       if (!vma || !can_reuse_spf_vma(vma, address))
> +               vma = find_vma(mm, address);

Is there a measurable benefit from reusing the vma?

Dropping the vma reference unconditionally after speculative page
fault handling gets rid of the implicit state when "vma != NULL"
(increased ref-count). I found it a bit confusing to follow.

>         if (unlikely(!vma)) {
>                 bad_area(regs, error_code, address);
>                 return;
> @@ -1409,8 +1434,15 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>                 if (flags & FAULT_FLAG_ALLOW_RETRY) {
>                         flags &= ~FAULT_FLAG_ALLOW_RETRY;
>                         flags |= FAULT_FLAG_TRIED;
> -                       if (!fatal_signal_pending(tsk))
> +                       if (!fatal_signal_pending(tsk)) {
> +                               /*
> +                                * Do not try to reuse this vma and fetch it
> +                                * again since we will release the mmap_sem.
> +                                */
> +                               if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT))
> +                                       vma = NULL;

Regardless of the above comment, can the vma be reset here unconditionally?

Thanks,
Punit

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock
  2018-04-17 14:33 ` [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock Laurent Dufour
@ 2018-04-30 18:47   ` Punit Agrawal
  2018-05-02  6:37     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Punit Agrawal @ 2018-04-30 18:47 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Hi Laurent,

One nitpick below.

On Tue, Apr 17, 2018 at 3:33 PM, Laurent Dufour
<ldufour@linux.vnet.ibm.com> wrote:
> This change is inspired by the Peter's proposal patch [1] which was
> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
> that particular case, and it is introducing major performance degradation
> due to excessive scheduling operations.
>
> To allow access to the mm_rb tree without grabbing the mmap_sem, this patch
> is protecting it access using a rwlock.  As the mm_rb tree is a O(log n)
> search it is safe to protect it using such a lock.  The VMA cache is not
> protected by the new rwlock and it should not be used without holding the
> mmap_sem.
>
> To allow the picked VMA structure to be used once the rwlock is released, a
> use count is added to the VMA structure. When the VMA is allocated it is
> set to 1.  Each time the VMA is picked with the rwlock held its use count
> is incremented. Each time the VMA is released it is decremented. When the
> use count hits zero, this means that the VMA is no more used and should be
> freed.
>
> This patch is preparing for 2 kind of VMA access :
>  - as usual, under the control of the mmap_sem,
>  - without holding the mmap_sem for the speculative page fault handler.
>
> Access done under the control the mmap_sem doesn't require to grab the
> rwlock to protect read access to the mm_rb tree, but access in write must
> be done under the protection of the rwlock too. This affects inserting and
> removing of elements in the RB tree.
>
> The patch is introducing 2 new functions:
>  - vma_get() to find a VMA based on an address by holding the new rwlock.
>  - vma_put() to release the VMA when its no more used.
> These services are designed to be used when access are made to the RB tree
> without holding the mmap_sem.
>
> When a VMA is removed from the RB tree, its vma->vm_rb field is cleared and
> we rely on the WMB done when releasing the rwlock to serialize the write
> with the RMB done in a later patch to check for the VMA's validity.
>
> When free_vma is called, the file associated with the VMA is closed
> immediately, but the policy and the file structure remained in used until
> the VMA's use count reach 0, which may happens later when exiting an
> in progress speculative page fault.
>
> [1] https://patchwork.kernel.org/patch/5108281/
>
> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  include/linux/mm.h       |   1 +
>  include/linux/mm_types.h |   4 ++
>  kernel/fork.c            |   3 ++
>  mm/init-mm.c             |   3 ++
>  mm/internal.h            |   6 +++
>  mm/mmap.c                | 115 +++++++++++++++++++++++++++++++++++------------
>  6 files changed, 104 insertions(+), 28 deletions(-)
>

[...]

> diff --git a/mm/mmap.c b/mm/mmap.c
> index 5601f1ef8bb9..a82950960f2e 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -160,6 +160,27 @@ void unlink_file_vma(struct vm_area_struct *vma)
>         }
>  }
>
> +static void __free_vma(struct vm_area_struct *vma)
> +{
> +       if (vma->vm_file)
> +               fput(vma->vm_file);
> +       mpol_put(vma_policy(vma));
> +       kmem_cache_free(vm_area_cachep, vma);
> +}
> +
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +void put_vma(struct vm_area_struct *vma)
> +{
> +       if (atomic_dec_and_test(&vma->vm_ref_count))
> +               __free_vma(vma);
> +}
> +#else
> +static inline void put_vma(struct vm_area_struct *vma)
> +{
> +       return __free_vma(vma);

Please drop the "return".

Thanks,
Punit

[...]

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF
  2018-04-30 14:07     ` Laurent Dufour
@ 2018-05-01 13:04       ` Minchan Kim
  0 siblings, 0 replies; 68+ messages in thread
From: Minchan Kim @ 2018-05-01 13:04 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Mon, Apr 30, 2018 at 04:07:30PM +0200, Laurent Dufour wrote:
> On 23/04/2018 08:31, Minchan Kim wrote:
> > On Tue, Apr 17, 2018 at 04:33:12PM +0200, Laurent Dufour wrote:
> >> pte_unmap_same() is making the assumption that the page table are still
> >> around because the mmap_sem is held.
> >> This is no more the case when running a speculative page fault and
> >> additional check must be made to ensure that the final page table are still
> >> there.
> >>
> >> This is now done by calling pte_spinlock() to check for the VMA's
> >> consistency while locking for the page tables.
> >>
> >> This is requiring passing a vm_fault structure to pte_unmap_same() which is
> >> containing all the needed parameters.
> >>
> >> As pte_spinlock() may fail in the case of a speculative page fault, if the
> >> VMA has been touched in our back, pte_unmap_same() should now return 3
> >> cases :
> >> 	1. pte are the same (0)
> >> 	2. pte are different (VM_FAULT_PTNOTSAME)
> >> 	3. a VMA's changes has been detected (VM_FAULT_RETRY)
> >>
> >> The case 2 is handled by the introduction of a new VM_FAULT flag named
> >> VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
> > 
> > I don't see such logic in this patch.
> > Maybe you introduces it later? If so, please comment on it.
> > Or just return 0 in case of 2 without introducing VM_FAULT_PTNOTSAME.
> 
> Later in the series, pte_spinlock() will check for VMA changes and may fail.
> Handling the 3 cases presented above will then be required.
> 
> I could move this handling later in the series, but I'm wondering whether that
> would really make it easier to read.

Just a nit:
While reviewing this patch, I was curious because you introduced a new thing
here but I couldn't find any site where it is used. That makes the review
hard. :(
That's why I asked you to please comment on it if you are going to use the
new thing later in this series.
If you think as-is is better for review, it would be better to mention that
explicitly.

Thanks.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 08/25] mm: VMA sequence count
  2018-04-30 15:14     ` Laurent Dufour
@ 2018-05-01 13:16       ` Minchan Kim
  2018-05-03 14:45         ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-05-01 13:16 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Mon, Apr 30, 2018 at 05:14:27PM +0200, Laurent Dufour wrote:
> 
> 
> On 23/04/2018 08:42, Minchan Kim wrote:
> > On Tue, Apr 17, 2018 at 04:33:14PM +0200, Laurent Dufour wrote:
> >> From: Peter Zijlstra <peterz@infradead.org>
> >>
> >> Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
> >> counts such that we can easily test if a VMA is changed.
> > 
> > So, seqcount is to protect modifying all attributes of vma?
> 
> The seqcount is used to protect the fields that are used during the speculative
> page fault, such as the VMA's boundaries and protections.

"a VMA is changed" was rather vague to me at this point.
If you could specify the exact fields, or give an example of what the seqcount
aims to protect, it would help the review.

> 
> >>
> >> The unmap_page_range() one allows us to make assumptions about
> >> page-tables; when we find the seqcount hasn't changed we can assume
> >> page-tables are still valid.
> > 
> > Hmm, seqcount covers page-table, too.
> > Please describe what the seqcount want to protect.
> 
> The calls to vm_write_begin/end() in unmap_page_range() are used to detect when
> a VMA is being unmapped, so that new page faults are not satisfied for this
> VMA. This protects the VMA unmapping operation, not the page tables
> themselves.

Thanks for the detail. Yes, please include this explanation instead of
"page-tables are still valid". The current wording confused me.

> 
> >>
> >> The flip side is that we cannot distinguish between a vma_adjust() and
> >> the unmap_page_range() -- where with the former we could have
> >> re-checked the vma bounds against the address.
> >>
> >> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> >>
> >> [Port to 4.12 kernel]
> >> [Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
> >> [Introduce vm_write_* inline function depending on
> >>  CONFIG_SPECULATIVE_PAGE_FAULT]
> >> [Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
> >>  using vm_raw_write* functions]
> >> [Fix a lock dependency warning in mmap_region() when entering the error
> >>  path]
> >> [move sequence initialisation INIT_VMA()]
> >> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> >> ---
> >>  include/linux/mm.h       | 44 ++++++++++++++++++++++++++++++++++++++++++++
> >>  include/linux/mm_types.h |  3 +++
> >>  mm/memory.c              |  2 ++
> >>  mm/mmap.c                | 31 +++++++++++++++++++++++++++++++
> >>  4 files changed, 80 insertions(+)
> >>
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index efc1248b82bd..988daf7030c9 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -1264,6 +1264,9 @@ struct zap_details {
> >>  static inline void INIT_VMA(struct vm_area_struct *vma)
> >>  {
> >>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
> >> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> >> +	seqcount_init(&vma->vm_sequence);
> >> +#endif
> >>  }
> >>  
> >>  struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> >> @@ -1386,6 +1389,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
> >>  	unmap_mapping_range(mapping, holebegin, holelen, 0);
> >>  }
> >>  
> >> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> >> +static inline void vm_write_begin(struct vm_area_struct *vma)
> >> +{
> >> +	write_seqcount_begin(&vma->vm_sequence);
> >> +}
> >> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
> >> +					 int subclass)
> >> +{
> >> +	write_seqcount_begin_nested(&vma->vm_sequence, subclass);
> >> +}
> >> +static inline void vm_write_end(struct vm_area_struct *vma)
> >> +{
> >> +	write_seqcount_end(&vma->vm_sequence);
> >> +}
> >> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
> >> +{
> >> +	raw_write_seqcount_begin(&vma->vm_sequence);
> >> +}
> >> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
> >> +{
> >> +	raw_write_seqcount_end(&vma->vm_sequence);
> >> +}
> >> +#else
> >> +static inline void vm_write_begin(struct vm_area_struct *vma)
> >> +{
> >> +}
> >> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
> >> +					 int subclass)
> >> +{
> >> +}
> >> +static inline void vm_write_end(struct vm_area_struct *vma)
> >> +{
> >> +}
> >> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
> >> +{
> >> +}
> >> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
> >> +{
> >> +}
> >> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> >> +
> >>  extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
> >>  		void *buf, int len, unsigned int gup_flags);
> >>  extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
> >> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> >> index 21612347d311..db5e9d630e7a 100644
> >> --- a/include/linux/mm_types.h
> >> +++ b/include/linux/mm_types.h
> >> @@ -335,6 +335,9 @@ struct vm_area_struct {
> >>  	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
> >>  #endif
> >>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> >> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> >> +	seqcount_t vm_sequence;
> >> +#endif
> >>  } __randomize_layout;
> >>  
> >>  struct core_thread {
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index f86efcb8e268..f7fed053df80 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -1503,6 +1503,7 @@ void unmap_page_range(struct mmu_gather *tlb,
> >>  	unsigned long next;
> >>  
> >>  	BUG_ON(addr >= end);
> > 
> > The comment about saying it aims for page-table stability will help.
> 
> A comment may be added mentioning that we use the seqcount to indicate that the
> VMA is being modified, here unmapped. But there is no real page table
> protection, and I think it would be confusing to talk about page table
> stability here.

Okay, so here you mean the seqcount is not protecting the VMA's fields but the
VMA unmap operation, as you mentioned above. I was confused by the description
below:

"The unmap_page_range() one allows us to make assumptions about
page-tables; when we find the seqcount hasn't changed we can assume
page-tables are still valid"

Instead of talking about the page tables' validity in the description, it would
be better to use the scenario you mentioned about the VMA unmap operation
racing with a page fault.

Thanks.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock
  2018-04-30 18:47   ` Punit Agrawal
@ 2018-05-02  6:37     ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-05-02  6:37 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Hi Punit,

Thanks for reviewing this series.

On 30/04/2018 20:47, Punit Agrawal wrote:
> Hi Laurent,
> 
> One nitpick below.
> 
> On Tue, Apr 17, 2018 at 3:33 PM, Laurent Dufour
> <ldufour@linux.vnet.ibm.com> wrote:
>> This change is inspired by the Peter's proposal patch [1] which was
>> protecting the VMA using SRCU. Unfortunately, SRCU is not scaling well in
>> that particular case, and it is introducing major performance degradation
>> due to excessive scheduling operations.
>>
>> To allow access to the mm_rb tree without grabbing the mmap_sem, this patch
>> is protecting it access using a rwlock.  As the mm_rb tree is a O(log n)
>> search it is safe to protect it using such a lock.  The VMA cache is not
>> protected by the new rwlock and it should not be used without holding the
>> mmap_sem.
>>
>> To allow the picked VMA structure to be used once the rwlock is released, a
>> use count is added to the VMA structure. When the VMA is allocated it is
>> set to 1.  Each time the VMA is picked with the rwlock held its use count
>> is incremented. Each time the VMA is released it is decremented. When the
>> use count hits zero, this means that the VMA is no more used and should be
>> freed.
>>
>> This patch is preparing for 2 kind of VMA access :
>>  - as usual, under the control of the mmap_sem,
>>  - without holding the mmap_sem for the speculative page fault handler.
>>
>> Access done under the control the mmap_sem doesn't require to grab the
>> rwlock to protect read access to the mm_rb tree, but access in write must
>> be done under the protection of the rwlock too. This affects inserting and
>> removing of elements in the RB tree.
>>
>> The patch is introducing 2 new functions:
>>  - vma_get() to find a VMA based on an address by holding the new rwlock.
>>  - vma_put() to release the VMA when its no more used.
>> These services are designed to be used when access are made to the RB tree
>> without holding the mmap_sem.
>>
>> When a VMA is removed from the RB tree, its vma->vm_rb field is cleared and
>> we rely on the WMB done when releasing the rwlock to serialize the write
>> with the RMB done in a later patch to check for the VMA's validity.
>>
>> When free_vma is called, the file associated with the VMA is closed
>> immediately, but the policy and the file structure remained in used until
>> the VMA's use count reach 0, which may happens later when exiting an
>> in progress speculative page fault.
>>
>> [1] https://patchwork.kernel.org/patch/5108281/
>>
>> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  include/linux/mm.h       |   1 +
>>  include/linux/mm_types.h |   4 ++
>>  kernel/fork.c            |   3 ++
>>  mm/init-mm.c             |   3 ++
>>  mm/internal.h            |   6 +++
>>  mm/mmap.c                | 115 +++++++++++++++++++++++++++++++++++------------
>>  6 files changed, 104 insertions(+), 28 deletions(-)
>>
> 
> [...]
> 
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 5601f1ef8bb9..a82950960f2e 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -160,6 +160,27 @@ void unlink_file_vma(struct vm_area_struct *vma)
>>         }
>>  }
>>
>> +static void __free_vma(struct vm_area_struct *vma)
>> +{
>> +       if (vma->vm_file)
>> +               fput(vma->vm_file);
>> +       mpol_put(vma_policy(vma));
>> +       kmem_cache_free(vm_area_cachep, vma);
>> +}
>> +
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +void put_vma(struct vm_area_struct *vma)
>> +{
>> +       if (atomic_dec_and_test(&vma->vm_ref_count))
>> +               __free_vma(vma);
>> +}
>> +#else
>> +static inline void put_vma(struct vm_area_struct *vma)
>> +{
>> +       return __free_vma(vma);
> 
> Please drop the "return".

Sure !
Thanks.

> 
> Thanks,
> Punit
> 
> [...]
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 00/25] Speculative page faults
  2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
  2018-04-17 14:33 ` [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
@ 2018-05-02 14:17   ` Punit Agrawal
  2018-04-17 14:33 ` [PATCH v10 03/25] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
                     ` (24 subsequent siblings)
  26 siblings, 0 replies; 68+ messages in thread
From: Punit Agrawal @ 2018-05-02 14:17 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Hi Laurent,

One query below -

Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:

[...]

>
> Ebizzy:
> -------
> The test is counting the number of records per second it can manage, the
> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
> result I repeated the test 100 times and measure the average result. The
> number is the record processes per second, the higher is the best.
>
>   		BASE		SPF		delta	
> 16 CPUs x86 VM	12405.52	91104.52	634.39%
> 80 CPUs P8 node 37880.01	76201.05	101.16%

How do you measure the number of records processed? Is there a specific
version of ebizzy that reports this? I couldn't find a way to get this
information with the ebizzy that's included in ltp.

>
> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>  Performance counter stats for './ebizzy -mRTp':
>             860074      faults
>             856866      spf
>                285      pagefault:spf_pte_lock
>               1506      pagefault:spf_vma_changed
>                  0      pagefault:spf_vma_noanon
>                 73      pagefault:spf_vma_notsup
>                  0      pagefault:spf_vma_access
>                  0      pagefault:spf_pmd_changed
>
> And the ones captured during a run on a 80 CPUs Power node:
>  Performance counter stats for './ebizzy -mRTp':
>             722695      faults
>             699402      spf
>              16048      pagefault:spf_pte_lock
>               6838      pagefault:spf_vma_changed
>                  0      pagefault:spf_vma_noanon
>                277      pagefault:spf_vma_notsup
>                  0      pagefault:spf_vma_access
>                  0      pagefault:spf_pmd_changed
>
> In ebizzy's case most of the page fault were handled in a speculative way,
> leading the ebizzy performance boost.

A trial run showed increased fault handling when SPF is enabled on an
8-core ARM64 system running 4.17-rc3. I am using a port of your x86
patch to enable spf on arm64.

SPF
---

Performance counter stats for './ebizzy -vvvmTRp':

         1,322,736      faults                                                      
         1,299,241      software/config=11/                                         

      10.005348034 seconds time elapsed

No SPF
-----

 Performance counter stats for './ebizzy -vvvmTRp':

           708,916      faults
                 0      software/config=11/

      10.005807432 seconds time elapsed

Thanks,
Punit

[...]

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 00/25] Speculative page faults
  2018-05-02 14:17   ` Punit Agrawal
@ 2018-05-02 14:45   ` Laurent Dufour
  2018-05-02 15:50       ` Punit Agrawal
  -1 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-05-02 14:45 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86



On 02/05/2018 16:17, Punit Agrawal wrote:
> Hi Laurent,
> 
> One query below -
> 
> Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
> 
> [...]
> 
>>
>> Ebizzy:
>> -------
>> The test is counting the number of records per second it can manage, the
>> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
>> result I repeated the test 100 times and measure the average result. The
>> number is the record processes per second, the higher is the best.
>>
>>   		BASE		SPF		delta	
>> 16 CPUs x86 VM	12405.52	91104.52	634.39%
>> 80 CPUs P8 node 37880.01	76201.05	101.16%
> 
> How do you measure the number of records processed? Is there a specific
> version of ebizzy that reports this? I couldn't find a way to get this
> information with the ebizzy that's included in ltp.

I'm using the original one : http://ebizzy.sourceforge.net/

> 
>>
>> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>>  Performance counter stats for './ebizzy -mRTp':
>>             860074      faults
>>             856866      spf
>>                285      pagefault:spf_pte_lock
>>               1506      pagefault:spf_vma_changed
>>                  0      pagefault:spf_vma_noanon
>>                 73      pagefault:spf_vma_notsup
>>                  0      pagefault:spf_vma_access
>>                  0      pagefault:spf_pmd_changed
>>
>> And the ones captured during a run on a 80 CPUs Power node:
>>  Performance counter stats for './ebizzy -mRTp':
>>             722695      faults
>>             699402      spf
>>              16048      pagefault:spf_pte_lock
>>               6838      pagefault:spf_vma_changed
>>                  0      pagefault:spf_vma_noanon
>>                277      pagefault:spf_vma_notsup
>>                  0      pagefault:spf_vma_access
>>                  0      pagefault:spf_pmd_changed
>>
>> In ebizzy's case most of the page fault were handled in a speculative way,
>> leading the ebizzy performance boost.
> 
> A trial run showed increased fault handling when SPF is enabled on an
> 8-core ARM64 system running 4.17-rc3. I am using a port of your x86
> patch to enable spf on arm64.
> 
> SPF
> ---
> 
> Performance counter stats for './ebizzy -vvvmTRp':
> 
>          1,322,736      faults                                                      
>          1,299,241      software/config=11/                                         
> 
>       10.005348034 seconds time elapsed
> 
> No SPF
> -----
> 
>  Performance counter stats for './ebizzy -vvvmTRp':
> 
>            708,916      faults
>                  0      software/config=11/
> 
>       10.005807432 seconds time elapsed

Thanks for sharing these good numbers !

> Thanks,
> Punit
> 
> [...]
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 00/25] Speculative page faults
  2018-05-02 14:45   ` Laurent Dufour
@ 2018-05-02 15:50       ` Punit Agrawal
  0 siblings, 0 replies; 68+ messages in thread
From: Punit Agrawal @ 2018-05-02 15:50 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen

Hi Laurent,

Thanks for your reply.

Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:

> On 02/05/2018 16:17, Punit Agrawal wrote:
>> Hi Laurent,
>> 
>> One query below -
>> 
>> Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
>> 
>> [...]
>> 
>>>
>>> Ebizzy:
>>> -------
>>> The test is counting the number of records per second it can manage, the
>>> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
>>> result I repeated the test 100 times and measure the average result. The
>>> number is the record processes per second, the higher is the best.
>>>
>>>   		BASE		SPF		delta	
>>> 16 CPUs x86 VM	12405.52	91104.52	634.39%
>>> 80 CPUs P8 node 37880.01	76201.05	101.16%
>> 
>> How do you measure the number of records processed? Is there a specific
>> version of ebizzy that reports this? I couldn't find a way to get this
>> information with the ebizzy that's included in ltp.
>
> I'm using the original one : http://ebizzy.sourceforge.net/

Turns out I missed the records processed in the verbose output enabled
by "-vvv". Sorry for the noise.

[...]

>> 
>> A trial run showed increased fault handling when SPF is enabled on an
>> 8-core ARM64 system running 4.17-rc3. I am using a port of your x86
>> patch to enable spf on arm64.
>> 
>> SPF
>> ---
>> 
>> Performance counter stats for './ebizzy -vvvmTRp':
>> 
>>          1,322,736      faults                                                      
>>          1,299,241      software/config=11/                                         
>> 
>>       10.005348034 seconds time elapsed
>> 
>> No SPF
>> -----
>> 
>>  Performance counter stats for './ebizzy -vvvmTRp':
>> 
>>            708,916      faults
>>                  0      software/config=11/
>> 
>>       10.005807432 seconds time elapsed
>
> Thanks for sharing these good numbers !


A quick run showed 71041 (no-spf) vs 122306 (spf) records/s (~72%
improvement).

I'd like to do some runs on a slightly larger system (if I can get my
hands on one) to see how the patches behave. I'll also have a closer
look at your series - the previous comments were just some things I
observed as part of trying the functionality on arm64.

Thanks,
Punit

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure
  2018-04-23  7:42   ` Minchan Kim
@ 2018-05-03 12:25     ` Laurent Dufour
  2018-05-03 15:42       ` Minchan Kim
  0 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-05-03 12:25 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 23/04/2018 09:42, Minchan Kim wrote:
> On Tue, Apr 17, 2018 at 04:33:18PM +0200, Laurent Dufour wrote:
>> When handling a speculative page fault, the vma->vm_flags and
>> vma->vm_page_prot fields are read once the page table lock is released. So
>> there is no longer any guarantee that these fields will not change behind our
>> back. They will be saved in the vm_fault structure before the VMA is checked
>> for changes.
> 
> Sorry. I cannot understand.
> If it is changed under us, what happens? If it's critical, why cannot we
> check with seqcounter?
> Clearly, I'm not understanding the logic here. However, it's a global
> change without CONFIG_SPF so I want to be more careful.
> It would be better to describe why we need to snapshot those values
> into vm_fault rather than preventing the race.

The idea is to go forward processing the page fault using the VMA field
values saved in the vm_fault structure. Then, once the PTE is locked, the
VMA's sequence counter is checked again, and if something has changed behind
our back the speculative page fault processing is aborted.
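
To make the ordering concrete, here is a minimal sketch of that flow. It is
only an illustration, not a hunk from the series: the function name
spf_flow_sketch() is made up, the helpers are the ones used in the patches,
and pte_map_lock() is assumed to be where the VMA sequence count is re-checked
while the PTE lock is taken (checks such as pte_none() are omitted).

static int spf_flow_sketch(struct vm_fault *vmf, struct page *page)
{
	pte_t entry;

	/* Build the new PTE from the snapshotted VMA attributes. */
	entry = mk_pte(page, vmf->vma_page_prot);
	entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);

	/* Fails if the VMA changed since the speculative walk started. */
	if (!pte_map_lock(vmf))
		return VM_FAULT_RETRY;	/* fall back to the classic path */

	set_pte_at(vmf->vma->vm_mm, vmf->address, vmf->pte, entry);
	pte_unmap_unlock(vmf->pte, vmf->ptl);
	return 0;
}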

Thanks,
Laurent.


> 
> Thanks.
> 
>>
>> This patch also sets the fields in hugetlb_no_page() and
>> __collapse_huge_page_swapin() even if they are not needed by the callee.
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  include/linux/mm.h | 10 ++++++++--
>>  mm/huge_memory.c   |  6 +++---
>>  mm/hugetlb.c       |  2 ++
>>  mm/khugepaged.c    |  2 ++
>>  mm/memory.c        | 50 ++++++++++++++++++++++++++------------------------
>>  mm/migrate.c       |  2 +-
>>  6 files changed, 42 insertions(+), 30 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index f6edd15563bc..c65205c8c558 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -367,6 +367,12 @@ struct vm_fault {
>>  					 * page table to avoid allocation from
>>  					 * atomic context.
>>  					 */
>> +	/*
>> +	 * These entries are required when handling speculative page fault.
>> +	 * This way the page handling is done using consistent field values.
>> +	 */
>> +	unsigned long vma_flags;
>> +	pgprot_t vma_page_prot;
>>  };
>>  
>>  /* page entry size for vm->huge_fault() */
>> @@ -687,9 +693,9 @@ void free_compound_page(struct page *page);
>>   * pte_mkwrite.  But get_user_pages can cause write faults for mappings
>>   * that do not have writing enabled, when used by access_process_vm.
>>   */
>> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
>> +static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
>>  {
>> -	if (likely(vma->vm_flags & VM_WRITE))
>> +	if (likely(vma_flags & VM_WRITE))
>>  		pte = pte_mkwrite(pte);
>>  	return pte;
>>  }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index a3a1815f8e11..da2afda67e68 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1194,8 +1194,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
>>  
>>  	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
>>  		pte_t entry;
>> -		entry = mk_pte(pages[i], vma->vm_page_prot);
>> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> +		entry = mk_pte(pages[i], vmf->vma_page_prot);
>> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>>  		memcg = (void *)page_private(pages[i]);
>>  		set_page_private(pages[i], 0);
>>  		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
>> @@ -2168,7 +2168,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  				entry = pte_swp_mksoft_dirty(entry);
>>  		} else {
>>  			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
>> -			entry = maybe_mkwrite(entry, vma);
>> +			entry = maybe_mkwrite(entry, vma->vm_flags);
>>  			if (!write)
>>  				entry = pte_wrprotect(entry);
>>  			if (!young)
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 218679138255..774864153407 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3718,6 +3718,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
>>  				.vma = vma,
>>  				.address = address,
>>  				.flags = flags,
>> +				.vma_flags = vma->vm_flags,
>> +				.vma_page_prot = vma->vm_page_prot,
>>  				/*
>>  				 * Hard to debug if it ends up being
>>  				 * used by a callee that assumes
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 0b28af4b950d..2b02a9f9589e 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -887,6 +887,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>>  		.flags = FAULT_FLAG_ALLOW_RETRY,
>>  		.pmd = pmd,
>>  		.pgoff = linear_page_index(vma, address),
>> +		.vma_flags = vma->vm_flags,
>> +		.vma_page_prot = vma->vm_page_prot,
>>  	};
>>  
>>  	/* we only decide to swapin, if there is enough young ptes */
>> diff --git a/mm/memory.c b/mm/memory.c
>> index f76f5027d251..2fb9920e06a5 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1826,7 +1826,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>>  out_mkwrite:
>>  	if (mkwrite) {
>>  		entry = pte_mkyoung(entry);
>> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> +		entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
>>  	}
>>  
>>  	set_pte_at(mm, addr, pte, entry);
>> @@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
>>  
>>  	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>>  	entry = pte_mkyoung(vmf->orig_pte);
>> -	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> +	entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>>  	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
>>  		update_mmu_cache(vma, vmf->address, vmf->pte);
>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
>> @@ -2548,8 +2548,8 @@ static int wp_page_copy(struct vm_fault *vmf)
>>  			inc_mm_counter_fast(mm, MM_ANONPAGES);
>>  		}
>>  		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>> -		entry = mk_pte(new_page, vma->vm_page_prot);
>> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> +		entry = mk_pte(new_page, vmf->vma_page_prot);
>> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>>  		/*
>>  		 * Clear the pte entry and flush it first, before updating the
>>  		 * pte with the new entry. This will avoid a race condition
>> @@ -2614,7 +2614,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>>  		 * Don't let another task, with possibly unlocked vma,
>>  		 * keep the mlocked page.
>>  		 */
>> -		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
>> +		if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
>>  			lock_page(old_page);	/* LRU manipulation */
>>  			if (PageMlocked(old_page))
>>  				munlock_vma_page(old_page);
>> @@ -2650,7 +2650,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>>   */
>>  int finish_mkwrite_fault(struct vm_fault *vmf)
>>  {
>> -	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
>> +	WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
>>  	if (!pte_map_lock(vmf))
>>  		return VM_FAULT_RETRY;
>>  	/*
>> @@ -2752,7 +2752,7 @@ static int do_wp_page(struct vm_fault *vmf)
>>  		 * We should not cow pages in a shared writeable mapping.
>>  		 * Just mark the pages writable and/or call ops->pfn_mkwrite.
>>  		 */
>> -		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
>> +		if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
>>  				     (VM_WRITE|VM_SHARED))
>>  			return wp_pfn_shared(vmf);
>>  
>> @@ -2799,7 +2799,7 @@ static int do_wp_page(struct vm_fault *vmf)
>>  			return VM_FAULT_WRITE;
>>  		}
>>  		unlock_page(vmf->page);
>> -	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
>> +	} else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
>>  					(VM_WRITE|VM_SHARED))) {
>>  		return wp_page_shared(vmf);
>>  	}
>> @@ -3078,9 +3078,9 @@ int do_swap_page(struct vm_fault *vmf)
>>  
>>  	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>>  	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
>> -	pte = mk_pte(page, vma->vm_page_prot);
>> +	pte = mk_pte(page, vmf->vma_page_prot);
>>  	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
>> -		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>> +		pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
>>  		vmf->flags &= ~FAULT_FLAG_WRITE;
>>  		ret |= VM_FAULT_WRITE;
>>  		exclusive = RMAP_EXCLUSIVE;
>> @@ -3105,7 +3105,7 @@ int do_swap_page(struct vm_fault *vmf)
>>  
>>  	swap_free(entry);
>>  	if (mem_cgroup_swap_full(page) ||
>> -	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
>> +	    (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
>>  		try_to_free_swap(page);
>>  	unlock_page(page);
>>  	if (page != swapcache && swapcache) {
>> @@ -3163,7 +3163,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>  	pte_t entry;
>>  
>>  	/* File mapping without ->vm_ops ? */
>> -	if (vma->vm_flags & VM_SHARED)
>> +	if (vmf->vma_flags & VM_SHARED)
>>  		return VM_FAULT_SIGBUS;
>>  
>>  	/*
>> @@ -3187,7 +3187,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>  	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
>>  			!mm_forbids_zeropage(vma->vm_mm)) {
>>  		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
>> -						vma->vm_page_prot));
>> +						vmf->vma_page_prot));
>>  		if (!pte_map_lock(vmf))
>>  			return VM_FAULT_RETRY;
>>  		if (!pte_none(*vmf->pte))
>> @@ -3220,8 +3220,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
>>  	 */
>>  	__SetPageUptodate(page);
>>  
>> -	entry = mk_pte(page, vma->vm_page_prot);
>> -	if (vma->vm_flags & VM_WRITE)
>> +	entry = mk_pte(page, vmf->vma_page_prot);
>> +	if (vmf->vma_flags & VM_WRITE)
>>  		entry = pte_mkwrite(pte_mkdirty(entry));
>>  
>>  	if (!pte_map_lock(vmf)) {
>> @@ -3418,7 +3418,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
>>  	for (i = 0; i < HPAGE_PMD_NR; i++)
>>  		flush_icache_page(vma, page + i);
>>  
>> -	entry = mk_huge_pmd(page, vma->vm_page_prot);
>> +	entry = mk_huge_pmd(page, vmf->vma_page_prot);
>>  	if (write)
>>  		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>>  
>> @@ -3492,11 +3492,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
>>  		return VM_FAULT_NOPAGE;
>>  
>>  	flush_icache_page(vma, page);
>> -	entry = mk_pte(page, vma->vm_page_prot);
>> +	entry = mk_pte(page, vmf->vma_page_prot);
>>  	if (write)
>> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
>>  	/* copy-on-write page */
>> -	if (write && !(vma->vm_flags & VM_SHARED)) {
>> +	if (write && !(vmf->vma_flags & VM_SHARED)) {
>>  		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>>  		page_add_new_anon_rmap(page, vma, vmf->address, false);
>>  		mem_cgroup_commit_charge(page, memcg, false, false);
>> @@ -3535,7 +3535,7 @@ int finish_fault(struct vm_fault *vmf)
>>  
>>  	/* Did we COW the page? */
>>  	if ((vmf->flags & FAULT_FLAG_WRITE) &&
>> -	    !(vmf->vma->vm_flags & VM_SHARED))
>> +	    !(vmf->vma_flags & VM_SHARED))
>>  		page = vmf->cow_page;
>>  	else
>>  		page = vmf->page;
>> @@ -3789,7 +3789,7 @@ static int do_fault(struct vm_fault *vmf)
>>  		ret = VM_FAULT_SIGBUS;
>>  	else if (!(vmf->flags & FAULT_FLAG_WRITE))
>>  		ret = do_read_fault(vmf);
>> -	else if (!(vma->vm_flags & VM_SHARED))
>> +	else if (!(vmf->vma_flags & VM_SHARED))
>>  		ret = do_cow_fault(vmf);
>>  	else
>>  		ret = do_shared_fault(vmf);
>> @@ -3846,7 +3846,7 @@ static int do_numa_page(struct vm_fault *vmf)
>>  	 * accessible ptes, some can allow access by kernel mode.
>>  	 */
>>  	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
>> -	pte = pte_modify(pte, vma->vm_page_prot);
>> +	pte = pte_modify(pte, vmf->vma_page_prot);
>>  	pte = pte_mkyoung(pte);
>>  	if (was_writable)
>>  		pte = pte_mkwrite(pte);
>> @@ -3880,7 +3880,7 @@ static int do_numa_page(struct vm_fault *vmf)
>>  	 * Flag if the page is shared between multiple address spaces. This
>>  	 * is later used when determining whether to group tasks together
>>  	 */
>> -	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
>> +	if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
>>  		flags |= TNF_SHARED;
>>  
>>  	last_cpupid = page_cpupid_last(page);
>> @@ -3925,7 +3925,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
>>  		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
>>  
>>  	/* COW handled on pte level: split pmd */
>> -	VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
>> +	VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
>>  	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
>>  
>>  	return VM_FAULT_FALLBACK;
>> @@ -4072,6 +4072,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>>  		.flags = flags,
>>  		.pgoff = linear_page_index(vma, address),
>>  		.gfp_mask = __get_fault_gfp_mask(vma),
>> +		.vma_flags = vma->vm_flags,
>> +		.vma_page_prot = vma->vm_page_prot,
>>  	};
>>  	unsigned int dirty = flags & FAULT_FLAG_WRITE;
>>  	struct mm_struct *mm = vma->vm_mm;
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index bb6367d70a3e..44d7007cfc1c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
>>  		 */
>>  		entry = pte_to_swp_entry(*pvmw.pte);
>>  		if (is_write_migration_entry(entry))
>> -			pte = maybe_mkwrite(pte, vma);
>> +			pte = maybe_mkwrite(pte, vma->vm_flags);
>>  
>>  		if (unlikely(is_zone_device_page(new))) {
>>  			if (is_device_private_page(new)) {
>> -- 
>> 2.7.4
>>
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 08/25] mm: VMA sequence count
  2018-05-01 13:16       ` Minchan Kim
@ 2018-05-03 14:45         ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-05-03 14:45 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 01/05/2018 15:16, Minchan Kim wrote:
> On Mon, Apr 30, 2018 at 05:14:27PM +0200, Laurent Dufour wrote:
>>
>>
>> On 23/04/2018 08:42, Minchan Kim wrote:
>>> On Tue, Apr 17, 2018 at 04:33:14PM +0200, Laurent Dufour wrote:
>>>> From: Peter Zijlstra <peterz@infradead.org>
>>>>
>>>> Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
>>>> counts such that we can easily test if a VMA is changed.
>>>
>>> So, seqcount is to protect modifying all attributes of vma?
>>
>> The seqcount is used to protect the fields that are used during the speculative
>> page fault, such as the boundaries and the protections.
> 
> "a VMA is changed" was rather vague to me at this point.
> If you could specify the exact fields, or give an example of what the
> seqcount aims to protect, it would help the review.

Got it, I'll try to make that more explicit in the commit message, mentioning
which fields are used in the speculative path and are protected by the VMA
sequence counter.
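
For what it's worth, here is a rough sketch of the write side, using the
vm_write_begin()/vm_write_end() helpers added by the patch quoted below. It is
only an illustration of the pattern: the function example_update_vma() and the
exact fields written are made up for the example.

/*
 * Any change to a field the speculative path relies on (e.g. vm_start,
 * vm_end, vm_flags, vm_page_prot) is wrapped so that a concurrent
 * speculative fault sees the sequence count move and aborts.
 */
static void example_update_vma(struct vm_area_struct *vma,
			       unsigned long new_flags, pgprot_t new_prot)
{
	/* mmap_sem is held for write here, serializing the writers. */
	vm_write_begin(vma);	/* seqcount becomes odd: readers back off */
	vma->vm_flags = new_flags;
	vma->vm_page_prot = new_prot;
	vm_write_end(vma);	/* seqcount even again: changes are published */
}
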
> 
>>
>>>>
>>>> The unmap_page_range() one allows us to make assumptions about
>>>> page-tables; when we find the seqcount hasn't changed we can assume
>>>> page-tables are still valid.
>>>
>>> Hmm, the seqcount covers the page tables, too.
>>> Please describe what the seqcount wants to protect.
>>
>> The calls to vm_write_begin/end() in unmap_page_range() are used to detect when
>> a VMA is being unmapped and thus that new page faults should not be satisfied
>> for this VMA. This protects the VMA unmapping operation, not the page tables
>> themselves.
> 
> Thanks for the detail. Yes, please include this phrase instead of "page-tables
> are still valid". The latter confused me.

Sure, will do.

> 
>>
>>>>
>>>> The flip side is that we cannot distinguish between a vma_adjust() and
>>>> the unmap_page_range() -- where with the former we could have
>>>> re-checked the vma bounds against the address.
>>>>
>>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>>
>>>> [Port to 4.12 kernel]
>>>> [Build depends on CONFIG_SPECULATIVE_PAGE_FAULT]
>>>> [Introduce vm_write_* inline function depending on
>>>>  CONFIG_SPECULATIVE_PAGE_FAULT]
>>>> [Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence by
>>>>  using vm_raw_write* functions]
>>>> [Fix a lock dependency warning in mmap_region() when entering the error
>>>>  path]
>>>> [move sequence initialisation INIT_VMA()]
>>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>>> ---
>>>>  include/linux/mm.h       | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>>>  include/linux/mm_types.h |  3 +++
>>>>  mm/memory.c              |  2 ++
>>>>  mm/mmap.c                | 31 +++++++++++++++++++++++++++++++
>>>>  4 files changed, 80 insertions(+)
>>>>
>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>> index efc1248b82bd..988daf7030c9 100644
>>>> --- a/include/linux/mm.h
>>>> +++ b/include/linux/mm.h
>>>> @@ -1264,6 +1264,9 @@ struct zap_details {
>>>>  static inline void INIT_VMA(struct vm_area_struct *vma)
>>>>  {
>>>>  	INIT_LIST_HEAD(&vma->anon_vma_chain);
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> +	seqcount_init(&vma->vm_sequence);
>>>> +#endif
>>>>  }
>>>>  
>>>>  struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>>>> @@ -1386,6 +1389,47 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
>>>>  	unmap_mapping_range(mapping, holebegin, holelen, 0);
>>>>  }
>>>>  
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> +static inline void vm_write_begin(struct vm_area_struct *vma)
>>>> +{
>>>> +	write_seqcount_begin(&vma->vm_sequence);
>>>> +}
>>>> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
>>>> +					 int subclass)
>>>> +{
>>>> +	write_seqcount_begin_nested(&vma->vm_sequence, subclass);
>>>> +}
>>>> +static inline void vm_write_end(struct vm_area_struct *vma)
>>>> +{
>>>> +	write_seqcount_end(&vma->vm_sequence);
>>>> +}
>>>> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
>>>> +{
>>>> +	raw_write_seqcount_begin(&vma->vm_sequence);
>>>> +}
>>>> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
>>>> +{
>>>> +	raw_write_seqcount_end(&vma->vm_sequence);
>>>> +}
>>>> +#else
>>>> +static inline void vm_write_begin(struct vm_area_struct *vma)
>>>> +{
>>>> +}
>>>> +static inline void vm_write_begin_nested(struct vm_area_struct *vma,
>>>> +					 int subclass)
>>>> +{
>>>> +}
>>>> +static inline void vm_write_end(struct vm_area_struct *vma)
>>>> +{
>>>> +}
>>>> +static inline void vm_raw_write_begin(struct vm_area_struct *vma)
>>>> +{
>>>> +}
>>>> +static inline void vm_raw_write_end(struct vm_area_struct *vma)
>>>> +{
>>>> +}
>>>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>>> +
>>>>  extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
>>>>  		void *buf, int len, unsigned int gup_flags);
>>>>  extern int access_remote_vm(struct mm_struct *mm, unsigned long addr,
>>>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>>>> index 21612347d311..db5e9d630e7a 100644
>>>> --- a/include/linux/mm_types.h
>>>> +++ b/include/linux/mm_types.h
>>>> @@ -335,6 +335,9 @@ struct vm_area_struct {
>>>>  	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
>>>>  #endif
>>>>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
>>>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>>>> +	seqcount_t vm_sequence;
>>>> +#endif
>>>>  } __randomize_layout;
>>>>  
>>>>  struct core_thread {
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index f86efcb8e268..f7fed053df80 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -1503,6 +1503,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>>>>  	unsigned long next;
>>>>  
>>>>  	BUG_ON(addr >= end);
>>>
>>> The comment about saying it aims for page-table stability will help.
>>
>> A comment may be added mentioning that we use the seqcount to indicate that the
>> VMA is being modified, i.e. unmapped. But there is no real page table protection,
>> and I think it may be confusing to talk about page table stability here.
> 
> Okay, so here you mean the seqcount is not protecting the VMA's fields but
> the VMA unmap operation, like you mentioned above. I was confused by the
> description below.
> 
> "The unmap_page_range() one allows us to make assumptions about
> page-tables; when we find the seqcount hasn't changed we can assume
> page-tables are still valid"
> 
> Instead of referring to the page tables' validity in the description, it would
> be better to use the scenario you mentioned about the VMA unmap operation
> racing with a page fault.

Ok will do that.
Thanks.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 24/25] x86/mm: add speculative pagefault handling
  2018-04-30 18:43   ` Punit Agrawal
@ 2018-05-03 14:59     ` Laurent Dufour
  2018-05-04 15:55         ` Punit Agrawal
  0 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-05-03 14:59 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 30/04/2018 20:43, Punit Agrawal wrote:
> Hi Laurent,
> 
> I am looking to add support for speculative page fault handling to
> arm64 (effectively porting this patch) and had a few questions.
> Apologies if I've missed an obvious explanation for my queries. I'm
> jumping in a bit late to the discussion.

Hi Punit,

Thanks for giving this series a review.
I don't have arm64 hardware to play with, but I'll be happy to add arm64
patches to my series and to try to maintain them.

> 
> On Tue, Apr 17, 2018 at 3:33 PM, Laurent Dufour
> <ldufour@linux.vnet.ibm.com> wrote:
>> From: Peter Zijlstra <peterz@infradead.org>
>>
>> Try a speculative fault before acquiring mmap_sem, if it returns with
>> VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
>> traditional fault.
>>
>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>
>> [Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
>>  handle_speculative_fault()]
>> [Retry with usual fault path in the case VM_ERROR is returned by
>>  handle_speculative_fault(). This allows signal to be delivered]
>> [Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
>> [Try speculative fault path only for multi threaded processes]
>> [Try reuse to the VMA fetch during the speculative path in case of retry]
>> [Call reuse_spf_or_find_vma()]
>> [Handle memory protection key fault]
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  arch/x86/mm/fault.c | 42 ++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 38 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
>> index 73bd8c95ac71..59f778386df5 100644
>> --- a/arch/x86/mm/fault.c
>> +++ b/arch/x86/mm/fault.c
>> @@ -1220,7 +1220,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>>         struct mm_struct *mm;
>>         int fault, major = 0;
>>         unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
>> -       u32 pkey;
>> +       u32 pkey, *pt_pkey = &pkey;
>>
>>         tsk = current;
>>         mm = tsk->mm;
>> @@ -1310,6 +1310,30 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>>                 flags |= FAULT_FLAG_INSTRUCTION;
>>
>>         /*
>> +        * Do not try speculative page fault for kernel's pages and if
>> +        * the fault was due to protection keys since it can't be resolved.
>> +        */
>> +       if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT) &&
>> +           !(error_code & X86_PF_PK)) {
> 
> You can simplify this condition by dropping the IS_ENABLED() check as
> you already provide an alternate implementation of
> handle_speculative_fault() when CONFIG_SPECULATIVE_PAGE_FAULT is not
> defined.

Yes, you're right, I completely forgot about that definition of
handle_speculative_fault() when CONFIG_SPECULATIVE_PAGE_FAULT is not set; that
will definitely make that part of the code more readable.
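
For illustration, the hunk would then boil down to something like the sketch
below. It assumes the !CONFIG_SPECULATIVE_PAGE_FAULT stub of
handle_speculative_fault() simply returns VM_FAULT_RETRY, and that vma is
initialised to NULL at its declaration as you suggest, so take it as the
direction rather than the final patch:

	/*
	 * Do not try a speculative page fault for protection-key faults,
	 * they cannot be resolved this way.
	 */
	if (!(error_code & X86_PF_PK)) {
		fault = handle_speculative_fault(mm, address, flags, &vma);
		if (fault != VM_FAULT_RETRY) {
			perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
			/* The pkey value is unknown on this path. */
			pt_pkey = NULL;
			goto done;
		}
	}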

> 
>> +               fault = handle_speculative_fault(mm, address, flags, &vma);
>> +               if (fault != VM_FAULT_RETRY) {
>> +                       perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
>> +                       /*
>> +                        * Do not advertise for the pkey value since we don't
>> +                        * know it.
>> +                        * This is not a matter as we checked for X86_PF_PK
>> +                        * earlier, so we should not handle pkey fault here,
>> +                        * but to be sure that mm_fault_error() callees will
>> +                        * not try to use it, we invalidate the pointer.
>> +                        */
>> +                       pt_pkey = NULL;
>> +                       goto done;
>> +               }
>> +       } else {
>> +               vma = NULL;
>> +       }
> 
> The else part can be dropped if vma is initialised to NULL when it is
> declared at the top of the function.
Sure.

> 
>> +
>> +       /*
>>          * When running in the kernel we expect faults to occur only to
>>          * addresses in user space.  All other faults represent errors in
>>          * the kernel and should generate an OOPS.  Unfortunately, in the
>> @@ -1342,7 +1366,8 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>>                 might_sleep();
>>         }
>>
>> -       vma = find_vma(mm, address);
>> +       if (!vma || !can_reuse_spf_vma(vma, address))
>> +               vma = find_vma(mm, address);
> 
> Is there a measurable benefit from reusing the vma?
> 
> Dropping the vma reference unconditionally after speculative page
> fault handling gets rid of the implicit state when "vma != NULL"
> (increased ref-count). I found it a bit confusing to follow.

I do agree, this is quite confusing. My initial goal was to be able to reuse
the VMA in the case a protection key error was detected, but it's not really
necessary on x86 since we know at the beginning of the fault operation whether
protection keys are in the loop. This is not the case on ppc64, but I couldn't
find a way to easily rely on the speculatively fetched VMA there either, so for
protection keys this didn't help.

Regarding the measurable benefit of reusing the fetched VMA, I did further
tests using the will-it-scale/page_fault2_threads test, and I'm no longer
really convinced that this is worth the added complexity. I think I'll drop
the patch "mm: speculative page fault handler return VMA" from the series,
and thus remove the call to can_reuse_spf_vma().

Thanks,
Laurent.

> 
>>         if (unlikely(!vma)) {
>>                 bad_area(regs, error_code, address);
>>                 return;
>> @@ -1409,8 +1434,15 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>>                 if (flags & FAULT_FLAG_ALLOW_RETRY) {
>>                         flags &= ~FAULT_FLAG_ALLOW_RETRY;
>>                         flags |= FAULT_FLAG_TRIED;
>> -                       if (!fatal_signal_pending(tsk))
>> +                       if (!fatal_signal_pending(tsk)) {
>> +                               /*
>> +                                * Do not try to reuse this vma and fetch it
>> +                                * again since we will release the mmap_sem.
>> +                                */
>> +                               if (IS_ENABLED(CONFIG_SPECULATIVE_PAGE_FAULT))
>> +                                       vma = NULL;
> 
> Regardless of the above comment, can the vma be reset here unconditionally?
> 
> Thanks,
> Punit
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure
  2018-05-03 12:25     ` Laurent Dufour
@ 2018-05-03 15:42       ` Minchan Kim
  2018-05-04  9:10         ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Minchan Kim @ 2018-05-03 15:42 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Thu, May 03, 2018 at 02:25:18PM +0200, Laurent Dufour wrote:
> On 23/04/2018 09:42, Minchan Kim wrote:
> > On Tue, Apr 17, 2018 at 04:33:18PM +0200, Laurent Dufour wrote:
> >> When handling a speculative page fault, the vma->vm_flags and
> >> vma->vm_page_prot fields are read once the page table lock is released. So
> >> there is no longer any guarantee that these fields will not change behind our
> >> back. They will be saved in the vm_fault structure before the VMA is checked
> >> for changes.
> > 
> > Sorry. I cannot understand.
> > If it is changed under us, what happens? If it's critical, why cannot we
> > check with seqcounter?
> > Clearly, I'm not understanding the logic here. However, it's a global
> > change without CONFIG_SPF so I want to be more careful.
> > It would be better to describe why we need to snapshot those values
> > into vm_fault rather than preventing the race.
> 
> The idea is to go forward processing the page fault using the VMA field
> values saved in the vm_fault structure. Then, once the PTE is locked, the
> VMA's sequence counter is checked again, and if something has changed behind
> our back the speculative page fault processing is aborted.

Sorry, I still don't understand why we should capture some fields into vm_fault.
If we find vma->seq_cnt has changed under the PTE lock, can't we just bail out
and fall back to classic fault handling?

Maybe I'm missing something obvious here. It would be really helpful for
understanding if you could give an example.

Thanks.

> 
> Thanks,
> Laurent.
> 
> 
> > 
> > Thanks.
> > 
> >>
> >> This patch also sets the fields in hugetlb_no_page() and
> >> __collapse_huge_page_swapin() even if they are not needed by the callee.
> >>
> >> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> >> ---
> >>  include/linux/mm.h | 10 ++++++++--
> >>  mm/huge_memory.c   |  6 +++---
> >>  mm/hugetlb.c       |  2 ++
> >>  mm/khugepaged.c    |  2 ++
> >>  mm/memory.c        | 50 ++++++++++++++++++++++++++------------------------
> >>  mm/migrate.c       |  2 +-
> >>  6 files changed, 42 insertions(+), 30 deletions(-)
> >>
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index f6edd15563bc..c65205c8c558 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -367,6 +367,12 @@ struct vm_fault {
> >>  					 * page table to avoid allocation from
> >>  					 * atomic context.
> >>  					 */
> >> +	/*
> >> +	 * These entries are required when handling speculative page fault.
> >> +	 * This way the page handling is done using consistent field values.
> >> +	 */
> >> +	unsigned long vma_flags;
> >> +	pgprot_t vma_page_prot;
> >>  };
> >>  
> >>  /* page entry size for vm->huge_fault() */
> >> @@ -687,9 +693,9 @@ void free_compound_page(struct page *page);
> >>   * pte_mkwrite.  But get_user_pages can cause write faults for mappings
> >>   * that do not have writing enabled, when used by access_process_vm.
> >>   */
> >> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> >> +static inline pte_t maybe_mkwrite(pte_t pte, unsigned long vma_flags)
> >>  {
> >> -	if (likely(vma->vm_flags & VM_WRITE))
> >> +	if (likely(vma_flags & VM_WRITE))
> >>  		pte = pte_mkwrite(pte);
> >>  	return pte;
> >>  }
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index a3a1815f8e11..da2afda67e68 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -1194,8 +1194,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
> >>  
> >>  	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
> >>  		pte_t entry;
> >> -		entry = mk_pte(pages[i], vma->vm_page_prot);
> >> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> >> +		entry = mk_pte(pages[i], vmf->vma_page_prot);
> >> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> >>  		memcg = (void *)page_private(pages[i]);
> >>  		set_page_private(pages[i], 0);
> >>  		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
> >> @@ -2168,7 +2168,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> >>  				entry = pte_swp_mksoft_dirty(entry);
> >>  		} else {
> >>  			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
> >> -			entry = maybe_mkwrite(entry, vma);
> >> +			entry = maybe_mkwrite(entry, vma->vm_flags);
> >>  			if (!write)
> >>  				entry = pte_wrprotect(entry);
> >>  			if (!young)
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index 218679138255..774864153407 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
> >> @@ -3718,6 +3718,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >>  				.vma = vma,
> >>  				.address = address,
> >>  				.flags = flags,
> >> +				.vma_flags = vma->vm_flags,
> >> +				.vma_page_prot = vma->vm_page_prot,
> >>  				/*
> >>  				 * Hard to debug if it ends up being
> >>  				 * used by a callee that assumes
> >> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> >> index 0b28af4b950d..2b02a9f9589e 100644
> >> --- a/mm/khugepaged.c
> >> +++ b/mm/khugepaged.c
> >> @@ -887,6 +887,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
> >>  		.flags = FAULT_FLAG_ALLOW_RETRY,
> >>  		.pmd = pmd,
> >>  		.pgoff = linear_page_index(vma, address),
> >> +		.vma_flags = vma->vm_flags,
> >> +		.vma_page_prot = vma->vm_page_prot,
> >>  	};
> >>  
> >>  	/* we only decide to swapin, if there is enough young ptes */
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index f76f5027d251..2fb9920e06a5 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -1826,7 +1826,7 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> >>  out_mkwrite:
> >>  	if (mkwrite) {
> >>  		entry = pte_mkyoung(entry);
> >> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> >> +		entry = maybe_mkwrite(pte_mkdirty(entry), vma->vm_flags);
> >>  	}
> >>  
> >>  	set_pte_at(mm, addr, pte, entry);
> >> @@ -2472,7 +2472,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
> >>  
> >>  	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> >>  	entry = pte_mkyoung(vmf->orig_pte);
> >> -	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> >> +	entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> >>  	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
> >>  		update_mmu_cache(vma, vmf->address, vmf->pte);
> >>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> >> @@ -2548,8 +2548,8 @@ static int wp_page_copy(struct vm_fault *vmf)
> >>  			inc_mm_counter_fast(mm, MM_ANONPAGES);
> >>  		}
> >>  		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
> >> -		entry = mk_pte(new_page, vma->vm_page_prot);
> >> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> >> +		entry = mk_pte(new_page, vmf->vma_page_prot);
> >> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> >>  		/*
> >>  		 * Clear the pte entry and flush it first, before updating the
> >>  		 * pte with the new entry. This will avoid a race condition
> >> @@ -2614,7 +2614,7 @@ static int wp_page_copy(struct vm_fault *vmf)
> >>  		 * Don't let another task, with possibly unlocked vma,
> >>  		 * keep the mlocked page.
> >>  		 */
> >> -		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
> >> +		if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
> >>  			lock_page(old_page);	/* LRU manipulation */
> >>  			if (PageMlocked(old_page))
> >>  				munlock_vma_page(old_page);
> >> @@ -2650,7 +2650,7 @@ static int wp_page_copy(struct vm_fault *vmf)
> >>   */
> >>  int finish_mkwrite_fault(struct vm_fault *vmf)
> >>  {
> >> -	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
> >> +	WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
> >>  	if (!pte_map_lock(vmf))
> >>  		return VM_FAULT_RETRY;
> >>  	/*
> >> @@ -2752,7 +2752,7 @@ static int do_wp_page(struct vm_fault *vmf)
> >>  		 * We should not cow pages in a shared writeable mapping.
> >>  		 * Just mark the pages writable and/or call ops->pfn_mkwrite.
> >>  		 */
> >> -		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
> >> +		if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
> >>  				     (VM_WRITE|VM_SHARED))
> >>  			return wp_pfn_shared(vmf);
> >>  
> >> @@ -2799,7 +2799,7 @@ static int do_wp_page(struct vm_fault *vmf)
> >>  			return VM_FAULT_WRITE;
> >>  		}
> >>  		unlock_page(vmf->page);
> >> -	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
> >> +	} else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
> >>  					(VM_WRITE|VM_SHARED))) {
> >>  		return wp_page_shared(vmf);
> >>  	}
> >> @@ -3078,9 +3078,9 @@ int do_swap_page(struct vm_fault *vmf)
> >>  
> >>  	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> >>  	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
> >> -	pte = mk_pte(page, vma->vm_page_prot);
> >> +	pte = mk_pte(page, vmf->vma_page_prot);
> >>  	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> >> -		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> >> +		pte = maybe_mkwrite(pte_mkdirty(pte), vmf->vma_flags);
> >>  		vmf->flags &= ~FAULT_FLAG_WRITE;
> >>  		ret |= VM_FAULT_WRITE;
> >>  		exclusive = RMAP_EXCLUSIVE;
> >> @@ -3105,7 +3105,7 @@ int do_swap_page(struct vm_fault *vmf)
> >>  
> >>  	swap_free(entry);
> >>  	if (mem_cgroup_swap_full(page) ||
> >> -	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
> >> +	    (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
> >>  		try_to_free_swap(page);
> >>  	unlock_page(page);
> >>  	if (page != swapcache && swapcache) {
> >> @@ -3163,7 +3163,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
> >>  	pte_t entry;
> >>  
> >>  	/* File mapping without ->vm_ops ? */
> >> -	if (vma->vm_flags & VM_SHARED)
> >> +	if (vmf->vma_flags & VM_SHARED)
> >>  		return VM_FAULT_SIGBUS;
> >>  
> >>  	/*
> >> @@ -3187,7 +3187,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
> >>  	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
> >>  			!mm_forbids_zeropage(vma->vm_mm)) {
> >>  		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
> >> -						vma->vm_page_prot));
> >> +						vmf->vma_page_prot));
> >>  		if (!pte_map_lock(vmf))
> >>  			return VM_FAULT_RETRY;
> >>  		if (!pte_none(*vmf->pte))
> >> @@ -3220,8 +3220,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
> >>  	 */
> >>  	__SetPageUptodate(page);
> >>  
> >> -	entry = mk_pte(page, vma->vm_page_prot);
> >> -	if (vma->vm_flags & VM_WRITE)
> >> +	entry = mk_pte(page, vmf->vma_page_prot);
> >> +	if (vmf->vma_flags & VM_WRITE)
> >>  		entry = pte_mkwrite(pte_mkdirty(entry));
> >>  
> >>  	if (!pte_map_lock(vmf)) {
> >> @@ -3418,7 +3418,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
> >>  	for (i = 0; i < HPAGE_PMD_NR; i++)
> >>  		flush_icache_page(vma, page + i);
> >>  
> >> -	entry = mk_huge_pmd(page, vma->vm_page_prot);
> >> +	entry = mk_huge_pmd(page, vmf->vma_page_prot);
> >>  	if (write)
> >>  		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> >>  
> >> @@ -3492,11 +3492,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
> >>  		return VM_FAULT_NOPAGE;
> >>  
> >>  	flush_icache_page(vma, page);
> >> -	entry = mk_pte(page, vma->vm_page_prot);
> >> +	entry = mk_pte(page, vmf->vma_page_prot);
> >>  	if (write)
> >> -		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> >> +		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
> >>  	/* copy-on-write page */
> >> -	if (write && !(vma->vm_flags & VM_SHARED)) {
> >> +	if (write && !(vmf->vma_flags & VM_SHARED)) {
> >>  		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
> >>  		page_add_new_anon_rmap(page, vma, vmf->address, false);
> >>  		mem_cgroup_commit_charge(page, memcg, false, false);
> >> @@ -3535,7 +3535,7 @@ int finish_fault(struct vm_fault *vmf)
> >>  
> >>  	/* Did we COW the page? */
> >>  	if ((vmf->flags & FAULT_FLAG_WRITE) &&
> >> -	    !(vmf->vma->vm_flags & VM_SHARED))
> >> +	    !(vmf->vma_flags & VM_SHARED))
> >>  		page = vmf->cow_page;
> >>  	else
> >>  		page = vmf->page;
> >> @@ -3789,7 +3789,7 @@ static int do_fault(struct vm_fault *vmf)
> >>  		ret = VM_FAULT_SIGBUS;
> >>  	else if (!(vmf->flags & FAULT_FLAG_WRITE))
> >>  		ret = do_read_fault(vmf);
> >> -	else if (!(vma->vm_flags & VM_SHARED))
> >> +	else if (!(vmf->vma_flags & VM_SHARED))
> >>  		ret = do_cow_fault(vmf);
> >>  	else
> >>  		ret = do_shared_fault(vmf);
> >> @@ -3846,7 +3846,7 @@ static int do_numa_page(struct vm_fault *vmf)
> >>  	 * accessible ptes, some can allow access by kernel mode.
> >>  	 */
> >>  	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
> >> -	pte = pte_modify(pte, vma->vm_page_prot);
> >> +	pte = pte_modify(pte, vmf->vma_page_prot);
> >>  	pte = pte_mkyoung(pte);
> >>  	if (was_writable)
> >>  		pte = pte_mkwrite(pte);
> >> @@ -3880,7 +3880,7 @@ static int do_numa_page(struct vm_fault *vmf)
> >>  	 * Flag if the page is shared between multiple address spaces. This
> >>  	 * is later used when determining whether to group tasks together
> >>  	 */
> >> -	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
> >> +	if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
> >>  		flags |= TNF_SHARED;
> >>  
> >>  	last_cpupid = page_cpupid_last(page);
> >> @@ -3925,7 +3925,7 @@ static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
> >>  		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
> >>  
> >>  	/* COW handled on pte level: split pmd */
> >> -	VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
> >> +	VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
> >>  	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
> >>  
> >>  	return VM_FAULT_FALLBACK;
> >> @@ -4072,6 +4072,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> >>  		.flags = flags,
> >>  		.pgoff = linear_page_index(vma, address),
> >>  		.gfp_mask = __get_fault_gfp_mask(vma),
> >> +		.vma_flags = vma->vm_flags,
> >> +		.vma_page_prot = vma->vm_page_prot,
> >>  	};
> >>  	unsigned int dirty = flags & FAULT_FLAG_WRITE;
> >>  	struct mm_struct *mm = vma->vm_mm;
> >> diff --git a/mm/migrate.c b/mm/migrate.c
> >> index bb6367d70a3e..44d7007cfc1c 100644
> >> --- a/mm/migrate.c
> >> +++ b/mm/migrate.c
> >> @@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
> >>  		 */
> >>  		entry = pte_to_swp_entry(*pvmw.pte);
> >>  		if (is_write_migration_entry(entry))
> >> -			pte = maybe_mkwrite(pte, vma);
> >> +			pte = maybe_mkwrite(pte, vma->vm_flags);
> >>  
> >>  		if (unlikely(is_zone_device_page(new))) {
> >>  			if (is_device_private_page(new)) {
> >> -- 
> >> 2.7.4
> >>
> > 
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure
  2018-05-03 15:42       ` Minchan Kim
@ 2018-05-04  9:10         ` Laurent Dufour
  2018-05-08 10:56           ` Minchan Kim
  0 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-05-04  9:10 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 03/05/2018 17:42, Minchan Kim wrote:
> On Thu, May 03, 2018 at 02:25:18PM +0200, Laurent Dufour wrote:
>> On 23/04/2018 09:42, Minchan Kim wrote:
>>> On Tue, Apr 17, 2018 at 04:33:18PM +0200, Laurent Dufour wrote:
>>>> When handling a speculative page fault, the vma->vm_flags and
>>>> vma->vm_page_prot fields are read once the page table lock is released. So
>>>> there is no longer any guarantee that these fields will not change behind our
>>>> back. They will be saved in the vm_fault structure before the VMA is checked
>>>> for changes.
>>>
>>> Sorry. I cannot understand.
>>> If it is changed under us, what happens? If it's critical, why cannot we
>>> check with seqcounter?
>>> Clearly, I'm not understanding the logic here. However, it's a global
>>> change without CONFIG_SPF so I want to be more careful.
>>> It would be better to describe why we need to snapshot those values
>>> into vm_fault rather than preventing the race.
>>
>> The idea is to go forward processing the page fault using the VMA field
>> values saved in the vm_fault structure. Then, once the PTE is locked, the
>> VMA's sequence counter is checked again, and if something has changed behind
>> our back the speculative page fault processing is aborted.
> 
> Sorry, I still don't understand why we should capture some fields into vm_fault.
> If we find vma->seq_cnt has changed under the PTE lock, can't we just bail out
> and fall back to classic fault handling?
> 
> Maybe I'm missing something obvious here. It would be really helpful for
> understanding if you could give an example.

I'd rather say that I was not clear enough ;)

Here is the point: when we deal with a speculative page fault, the mmap_sem is
not taken, so parallel VMA changes can occur. When a VMA change is done
which will impact the page fault processing, we assume that the VMA sequence
counter will be changed.

In the page fault processing, at the time the PTE is locked, we check the VMA
sequence counter to detect changes made behind our back. If no change is
detected we can continue further. But this doesn't prevent the VMA from being
changed behind our back while the PTE is locked. So the VMA fields which are
used while the PTE is locked must be saved, to ensure that we are using
*static* values. This is important since the PTE changes will be made with
regard to these VMA fields, and they need to be consistent. This concerns the
vma->vm_flags and vma->vm_page_prot VMA fields.
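
To illustrate the window, here is a made-up timeline assuming a concurrent
mprotect()-like change on another CPU; it is not taken from the series:

	/*
	 *   CPU A (speculative fault)         CPU B (VMA writer)
	 *   -------------------------         ------------------
	 *   pte_map_lock()
	 *     -> vm_sequence unchanged, go on
	 *                                     vm_write_begin(vma)
	 *                                     vma->vm_flags = ...
	 *                                     vma->vm_page_prot = ...
	 *                                     vm_write_end(vma)
	 *   mk_pte(page, vma->vm_page_prot)   <- may mix old and new values
	 *   maybe_mkwrite(entry, vma->vm_flags)
	 *
	 * With the snapshot, CPU A uses vmf->vma_page_prot and vmf->vma_flags,
	 * i.e. the values that were validated together against vm_sequence,
	 * so the PTE it installs is built from one consistent view of the VMA.
	 */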

I hope I made this clear enough this time.

Thanks,
Laurent.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 24/25] x86/mm: add speculative pagefault handling
  2018-05-03 14:59     ` Laurent Dufour
@ 2018-05-04 15:55         ` Punit Agrawal
  0 siblings, 0 replies; 68+ messages in thread
From: Punit Agrawal @ 2018-05-04 15:55 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Punit Agrawal, akpm, mhocko, peterz, kirill, ak, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen

Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:

> On 30/04/2018 20:43, Punit Agrawal wrote:
>> Hi Laurent,
>> 
>> I am looking to add support for speculative page fault handling to
>> arm64 (effectively porting this patch) and had a few questions.
>> Apologies if I've missed an obvious explanation for my queries. I'm
>> jumping in a bit late to the discussion.
>
> Hi Punit,
>
> Thanks for giving this series a review.
> I don't have arm64 hardware to play with, but I'll be happy to add arm64
> patches to my series and to try to maintain them.

I'll be happy to try them on arm64 platforms I have access to and
provide feedback.

>
>> 
>> On Tue, Apr 17, 2018 at 3:33 PM, Laurent Dufour
>> <ldufour@linux.vnet.ibm.com> wrote:
>>> From: Peter Zijlstra <peterz@infradead.org>
>>>

[...]

>>>
>>> -       vma = find_vma(mm, address);
>>> +       if (!vma || !can_reuse_spf_vma(vma, address))
>>> +               vma = find_vma(mm, address);
>> 
>> Is there a measurable benefit from reusing the vma?
>> 
>> Dropping the vma reference unconditionally after speculative page
>> fault handling gets rid of the implicit state when "vma != NULL"
>> (increased ref-count). I found it a bit confusing to follow.
>
> I do agree, this is quite confusing. My initial goal was to be able to reuse
> the VMA in the case a protection key error was detected, but it's not really
> necessary on x86 since we know at the beginning of the fault operation whether
> protection keys are in the loop. This is not the case on ppc64, but I couldn't
> find a way to easily rely on the speculatively fetched VMA there either, so for
> protection keys this didn't help.
>
> Regarding the measurable benefit of reusing the fetched VMA, I did further
> tests using the will-it-scale/page_fault2_threads test, and I'm no longer
> really convinced that this is worth the added complexity. I think I'll drop
> the patch "mm: speculative page fault handler return VMA" from the series,
> and thus remove the call to can_reuse_spf_vma().

Makes sense. Thanks for giving this a go.

Punit

[...]

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure
  2018-05-04  9:10         ` Laurent Dufour
@ 2018-05-08 10:56           ` Minchan Kim
  0 siblings, 0 replies; 68+ messages in thread
From: Minchan Kim @ 2018-05-08 10:56 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On Fri, May 04, 2018 at 11:10:54AM +0200, Laurent Dufour wrote:
> On 03/05/2018 17:42, Minchan Kim wrote:
> > On Thu, May 03, 2018 at 02:25:18PM +0200, Laurent Dufour wrote:
> >> On 23/04/2018 09:42, Minchan Kim wrote:
> >>> On Tue, Apr 17, 2018 at 04:33:18PM +0200, Laurent Dufour wrote:
> >>>> When handling a speculative page fault, the vma->vm_flags and
> >>>> vma->vm_page_prot fields are read once the page table lock is released. So
> >>>> there is no longer any guarantee that these fields will not change behind our
> >>>> back. They will be saved in the vm_fault structure before the VMA is checked
> >>>> for changes.
> >>>
> >>> Sorry. I cannot understand.
> >>> If it is changed under us, what happens? If it's critical, why cannot we
> >>> check with seqcounter?
> >>> Clearly, I'm not understanding the logic here. However, it's a global
> >>> change without CONFIG_SPF so I want to be more careful.
> >>> It would be better to describe why we need to snapshot those values
> >>> into vm_fault rather than preventing the race.
> >>
> >> The idea is to go forward processing the page fault using the VMA field
> >> values saved in the vm_fault structure. Then, once the PTE is locked, the
> >> VMA's sequence counter is checked again, and if something has changed behind
> >> our back the speculative page fault processing is aborted.
> > 
> > Sorry, I still don't understand why we should capture some fields into vm_fault.
> > If we find vma->seq_cnt has changed under the PTE lock, can't we just bail out
> > and fall back to classic fault handling?
> > 
> > Maybe I'm missing something obvious here. It would be really helpful for
> > understanding if you could give an example.
> 
> I'd rather say that I was not clear enough ;)
> 
> Here is the point: when we deal with a speculative page fault, the mmap_sem is
> not taken, so parallel VMA changes can occur. When a VMA change is done
> which will impact the page fault processing, we assume that the VMA sequence
> counter will be changed.
> 
> In the page fault processing, at the time the PTE is locked, we check the VMA
> sequence counter to detect changes made behind our back. If no change is
> detected we can continue further. But this doesn't prevent the VMA from being
> changed behind our back while the PTE is locked. So the VMA fields which are
> used while the PTE is locked must be saved, to ensure that we are using
> *static* values. This is important since the PTE changes will be made with
> regard to these VMA fields, and they need to be consistent. This concerns the
> vma->vm_flags and vma->vm_page_prot VMA fields.
> 
> I hope I made this clear enough this time.

It's clearer at this point. Please include such a nice explanation in the description.
Now I am wondering how you synchronize those static values and the VMA's seqcount.
It must be in the next patchset. I hope to grab some time to read it, asap.

Thanks.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  2018-04-17 14:33 ` [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
  2018-05-08 11:04     ` Punit Agrawal
@ 2018-05-08 11:04     ` Punit Agrawal
  0 siblings, 0 replies; 68+ messages in thread
From: Punit Agrawal @ 2018-05-08 11:04 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Hi Laurent,

Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:

> Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT which turns on the
> Speculative Page Fault handler when building for 64bit.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  arch/x86/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index d8983df5a2bc..ebdeb48e4a4a 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -30,6 +30,7 @@ config X86_64
>  	select MODULES_USE_ELF_RELA
>  	select X86_DEV_DMA_OPS
>  	select ARCH_HAS_SYSCALL_WRAPPER
> +	select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT

I'd suggest merging this patch with the one making changes to the
architectural fault handler towards the end of the series.

The Kconfig change is closely tied to the architectural support for SPF
and makes sense to be in a single patch.

If there's a good reason to keep them as separate patches, please move
the architecture Kconfig changes after the patch adding fault handler
changes.

It's better to enable the feature once the core infrastructure is merged
rather than at the beginning of the series to avoid potential bad
fallout from incomplete functionality during bisection.

All the comments here definitely hold for the arm64 patches that you
plan to include with the next update.

Thanks,
Punit

>  
>  #
>  # Arch settings

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF
  2018-04-17 14:33 ` [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF Laurent Dufour
  2018-04-23  6:31   ` Minchan Kim
@ 2018-05-10 16:15   ` vinayak menon
  2018-05-14 15:09     ` Laurent Dufour
  1 sibling, 1 reply; 68+ messages in thread
From: vinayak menon @ 2018-05-10 16:15 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
	Andrea Arcangeli, Alexei Starovoitov, kemi.wang,
	sergey.senozhatsky.work, Daniel Jordan, David Rientjes,
	Jerome Glisse, Ganesh Mahendran, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Paul McKenney, Tim Chen,
	linuxppc-dev, x86, Vinayak Menon

On Tue, Apr 17, 2018 at 8:03 PM, Laurent Dufour
<ldufour@linux.vnet.ibm.com> wrote:
> pte_unmap_same() is making the assumption that the page tables are still
> around because the mmap_sem is held.
> This is no longer the case when running a speculative page fault, so an
> additional check must be made to ensure that the final page tables are still
> there.
>
> This is now done by calling pte_spinlock() to check for the VMA's
> consistency while locking the page tables.
>
> This requires passing a vm_fault structure to pte_unmap_same(), which
> contains all the needed parameters.
>
> As pte_spinlock() may fail in the case of a speculative page fault, if the
> VMA has been touched behind our back, pte_unmap_same() now returns one of 3
> results:
>         1. the ptes are the same (0)
>         2. the ptes are different (VM_FAULT_PTNOTSAME)
>         3. a VMA change has been detected (VM_FAULT_RETRY)
>
> Case 2 is handled by the introduction of a new VM_FAULT flag named
> VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
> If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
> page fault while holding the mmap_sem.
>
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  include/linux/mm.h |  1 +
>  mm/memory.c        | 39 ++++++++++++++++++++++++++++-----------
>  2 files changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4d1aff80669c..714da99d77a3 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1208,6 +1208,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
>  #define VM_FAULT_NEEDDSYNC  0x2000     /* ->fault did not modify page tables
>                                          * and needs fsync() to complete (for
>                                          * synchronous page faults in DAX) */
> +#define VM_FAULT_PTNOTSAME 0x4000      /* Page table entries have changed */


Does this have to be added to VM_FAULT_RESULT_TRACE?

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  2018-05-08 11:04     ` Punit Agrawal
  (?)
  (?)
@ 2018-05-14 14:47     ` Laurent Dufour
  2018-05-14 15:05         ` Punit Agrawal
  -1 siblings, 1 reply; 68+ messages in thread
From: Laurent Dufour @ 2018-05-14 14:47 UTC (permalink / raw)
  To: Punit Agrawal, akpm
  Cc: mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox, benh,
	mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa, Will Deacon,
	Sergey Senozhatsky, Andrea Arcangeli, Alexei Starovoitov,
	kemi.wang, sergey.senozhatsky.work, Daniel Jordan,
	David Rientjes, Jerome Glisse, Ganesh Mahendran, linux-kernel,
	linux-mm, haren, khandual, npiggin, bsingharora, paulmck,
	Tim Chen, linuxppc-dev, x86

On 08/05/2018 13:04, Punit Agrawal wrote:
> Hi Laurent,
> 
> Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
> 
>> Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT which turns on the
>> Speculative Page Fault handler when building for 64bit.
>>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  arch/x86/Kconfig | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index d8983df5a2bc..ebdeb48e4a4a 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -30,6 +30,7 @@ config X86_64
>>  	select MODULES_USE_ELF_RELA
>>  	select X86_DEV_DMA_OPS
>>  	select ARCH_HAS_SYSCALL_WRAPPER
>> +	select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
> 
> I'd suggest merging this patch with the one making changes to the
> architectural fault handler towards the end of the series.
> 
> The Kconfig change is closely tied to the architectural support for SPF
> and makes sense to be in a single patch.
> 
> If there's a good reason to keep them as separate patches, please move
> the architecture Kconfig changes after the patch adding fault handler
> changes.
> 
> It's better to enable the feature once the core infrastructure is merged
> rather than at the beginning of the series to avoid potential bad
> fallout from incomplete functionality during bisection.

Indeed bisection was the reason why Andrew asked me to push the configuration
enablement on top of the series (https://lkml.org/lkml/2017/10/10/1229).

I also think it would be better to have the architecture enablement in one
patch, but that would mean the code would not be built when bisecting without
the last patch adding the per-architecture code.

I'm fine with both options.

Andrew, what do you think would be best here?

Thanks,
Laurent.

> 
> All the comments here definitely hold for the arm64 patches that you
> plan to include with the next update.
> 
> Thanks,
> Punit
> 
>>  
>>  #
>>  # Arch settings
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
  2018-05-14 14:47     ` Laurent Dufour
  2018-05-14 15:05         ` Punit Agrawal
@ 2018-05-14 15:05         ` Punit Agrawal
  0 siblings, 0 replies; 68+ messages in thread
From: Punit Agrawal @ 2018-05-14 15:05 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:

> On 08/05/2018 13:04, Punit Agrawal wrote:
>> Hi Laurent,
>> 
>> Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
>> 
>>> Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT which turns on the
>>> Speculative Page Fault handler when building for 64bit.
>>>
>>> Cc: Thomas Gleixner <tglx@linutronix.de>
>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>> ---
>>>  arch/x86/Kconfig | 1 +
>>>  1 file changed, 1 insertion(+)
>>>
>>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>>> index d8983df5a2bc..ebdeb48e4a4a 100644
>>> --- a/arch/x86/Kconfig
>>> +++ b/arch/x86/Kconfig
>>> @@ -30,6 +30,7 @@ config X86_64
>>>  	select MODULES_USE_ELF_RELA
>>>  	select X86_DEV_DMA_OPS
>>>  	select ARCH_HAS_SYSCALL_WRAPPER
>>> +	select ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT
>> 
>> I'd suggest merging this patch with the one making changes to the
>> architectural fault handler towards the end of the series.
>> 
>> The Kconfig change is closely tied to the architectural support for SPF
>> and makes sense to be in a single patch.
>> 
>> If there's a good reason to keep them as separate patches, please move
>> the architecture Kconfig changes after the patch adding fault handler
>> changes.
>> 
>> It's better to enable the feature once the core infrastructure is merged
>> rather than at the beginning of the series to avoid potential bad
>> fallout from incomplete functionality during bisection.
>
> Indeed bisection was the reason why Andrew asked me to push the configuration
> enablement on top of the series (https://lkml.org/lkml/2017/10/10/1229).

The config options have gone through another round of splitting (between
core and architecture) since that comment. I agree that it still makes
sense to define the core config - CONFIG_SPECULATIVE_PAGE_FAULT early
on.

Just to clarify, my suggestion was to only move the architecture configs
further down.

>
> I also think it would be better to have the architecture enablement in one
> patch, but that would mean the code would not be built when bisecting without
> the last patch adding the per-architecture code.

I don't see that as a problem. But if I'm in the minority, I am OK with
leaving things as they are as well.

Thanks,
Punit

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF
  2018-05-10 16:15   ` vinayak menon
@ 2018-05-14 15:09     ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-05-14 15:09 UTC (permalink / raw)
  To: vinayak menon
  Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
	Andrea Arcangeli, Alexei Starovoitov, kemi.wang,
	sergey.senozhatsky.work, Daniel Jordan, David Rientjes,
	Jerome Glisse, Ganesh Mahendran, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Paul McKenney, Tim Chen,
	linuxppc-dev, x86, Vinayak Menon



On 10/05/2018 18:15, vinayak menon wrote:
> On Tue, Apr 17, 2018 at 8:03 PM, Laurent Dufour
> <ldufour@linux.vnet.ibm.com> wrote:
>> pte_unmap_same() is making the assumption that the page tables are still
>> around because the mmap_sem is held.
>> This is no longer the case when running a speculative page fault, so an
>> additional check must be made to ensure that the final page tables are still
>> there.
>>
>> This is now done by calling pte_spinlock() to check for the VMA's
>> consistency while locking the page tables.
>>
>> This requires passing a vm_fault structure to pte_unmap_same(), which
>> contains all the needed parameters.
>>
>> As pte_spinlock() may fail in the case of a speculative page fault, if the
>> VMA has been touched behind our back, pte_unmap_same() now returns one of 3
>> results:
>>         1. the ptes are the same (0)
>>         2. the ptes are different (VM_FAULT_PTNOTSAME)
>>         3. a VMA change has been detected (VM_FAULT_RETRY)
>>
>> Case 2 is handled by the introduction of a new VM_FAULT flag named
>> VM_FAULT_PTNOTSAME which is then trapped in cow_user_page().
>> If VM_FAULT_RETRY is returned, it is passed up to the callers to retry the
>> page fault while holding the mmap_sem.
>>
>> Acked-by: David Rientjes <rientjes@google.com>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  include/linux/mm.h |  1 +
>>  mm/memory.c        | 39 ++++++++++++++++++++++++++++-----------
>>  2 files changed, 29 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 4d1aff80669c..714da99d77a3 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1208,6 +1208,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
>>  #define VM_FAULT_NEEDDSYNC  0x2000     /* ->fault did not modify page tables
>>                                          * and needs fsync() to complete (for
>>                                          * synchronous page faults in DAX) */
>> +#define VM_FAULT_PTNOTSAME 0x4000      /* Page table entries have changed */
> 
> 
> Does this have to be added to VM_FAULT_RESULT_TRACE?

Indeed, there is no chance that the VM_FAULT_RESULT_TRACE macro would have to
translate that code to a string, since VM_FAULT_PTNOTSAME is currently only
returned by pte_unmap_same() and then converted by its only caller,
do_swap_page(), into a return value of 0. So VM_FAULT_PTNOTSAME is not
expected to be seen outside of these functions, which never use
VM_FAULT_RESULT_TRACE().

That being said, it may be a good idea to add it for potential future usage.
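
For reference, a sketch of what that addition could look like, assuming the
VM_FAULT_NEEDDSYNC entry is currently the last one in the
VM_FAULT_RESULT_TRACE list (this hunk is not part of the posted series):

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
(inside the VM_FAULT_RESULT_TRACE definition)
-	{ VM_FAULT_NEEDDSYNC,	"NEEDDSYNC" }
+	{ VM_FAULT_NEEDDSYNC,	"NEEDDSYNC" },	\
+	{ VM_FAULT_PTNOTSAME,	"PTNOTSAME" }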

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 09/25] mm: protect VMA modifications using VMA sequence count
  2018-04-23  7:19   ` Minchan Kim
@ 2018-05-14 15:25     ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-05-14 15:25 UTC (permalink / raw)
  To: Minchan Kim
  Cc: akpm, mhocko, peterz, kirill, ak, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, Sergey Senozhatsky, Andrea Arcangeli,
	Alexei Starovoitov, kemi.wang, sergey.senozhatsky.work,
	Daniel Jordan, David Rientjes, Jerome Glisse, Ganesh Mahendran,
	linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	paulmck, Tim Chen, linuxppc-dev, x86

On 23/04/2018 09:19, Minchan Kim wrote:
> On Tue, Apr 17, 2018 at 04:33:15PM +0200, Laurent Dufour wrote:
>> The VMA sequence count has been introduced to allow fast detection of
>> VMA modification when running a page fault handler without holding
>> the mmap_sem.
>>
>> This patch provides protection against the VMA modification done in :
>> 	- madvise()
>> 	- mpol_rebind_policy()
>> 	- vma_replace_policy()
>> 	- change_prot_numa()
>> 	- mlock(), munlock()
>> 	- mprotect()
>> 	- mmap_region()
>> 	- collapse_huge_page()
>> 	- userfaultd registering services
>>
>> In addition, VMA fields which will be read during the speculative fault
>> path needs to be written using WRITE_ONCE to prevent write to be split
>> and intermediate values to be pushed to other CPUs.
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  fs/proc/task_mmu.c |  5 ++++-
>>  fs/userfaultfd.c   | 17 +++++++++++++----
>>  mm/khugepaged.c    |  3 +++
>>  mm/madvise.c       |  6 +++++-
>>  mm/mempolicy.c     | 51 ++++++++++++++++++++++++++++++++++-----------------
>>  mm/mlock.c         | 13 ++++++++-----
>>  mm/mmap.c          | 22 +++++++++++++---------
>>  mm/mprotect.c      |  4 +++-
>>  mm/swap_state.c    |  8 ++++++--
>>  9 files changed, 89 insertions(+), 40 deletions(-)
>>
>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>> index c486ad4b43f0..aeb417f28839 100644
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -1136,8 +1136,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>>  					goto out_mm;
>>  				}
>>  				for (vma = mm->mmap; vma; vma = vma->vm_next) {
>> -					vma->vm_flags &= ~VM_SOFTDIRTY;
>> +					vm_write_begin(vma);
>> +					WRITE_ONCE(vma->vm_flags,
>> +						   vma->vm_flags & ~VM_SOFTDIRTY);
>>  					vma_set_page_prot(vma);
>> +					vm_write_end(vma);
> 
> trivial:
> 
> I think it's tricky to maintain the rule that VMA fields read during SPF must
> be accessed with READ_ONCE()/WRITE_ONCE(). I think we need some accessors to
> read/write them rather than raw accesses as in vma_set_page_prot(). Maybe an
> spf prefix would be helpful.
> 
> 	vma_spf_set_value(vma, vm_flags, val);
> 
> We could also add some markers to vm_area_struct's fields to indicate that
> people shouldn't access those fields directly.
> 
> Just a thought.

At first I liked that idea, but...

I'm not sure this would change the code a lot; most of the time the
vm_write_begin()/end() pair surrounds a section of code larger than a single
VMA field change. For this particular case and a few others it would be
applicable, but that's not the majority.

Thanks,
Laurent.

> 
> 
>>  				}
>>  				downgrade_write(&mm->mmap_sem);
> 
> 
>> diff --git a/mm/swap_state.c b/mm/swap_state.c
>> index fe079756bb18..8a8a402ed59f 100644
>> --- a/mm/swap_state.c
>> +++ b/mm/swap_state.c
>> @@ -575,6 +575,10 @@ static unsigned long swapin_nr_pages(unsigned long offset)
>>   * the readahead.
>>   *
>>   * Caller must hold down_read on the vma->vm_mm if vmf->vma is not NULL.
>> + * This is needed to ensure the VMA will not be freed in our back. In the case
>> + * of the speculative page fault handler, this cannot happen, even if we don't
>> + * hold the mmap_sem. Callees are assumed to take care of reading VMA's fields
> 
> I guess reader would be curious on *why* is safe with SPF.
> Comment about the why could be helpful for reviewer.
> 
>> + * using READ_ONCE() to read consistent values.
>>   */
>>  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>>  				struct vm_fault *vmf)
>> @@ -668,9 +672,9 @@ static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
>>  				     unsigned long *start,
>>  				     unsigned long *end)
>>  {
>> -	*start = max3(lpfn, PFN_DOWN(vma->vm_start),
>> +	*start = max3(lpfn, PFN_DOWN(READ_ONCE(vma->vm_start)),
>>  		      PFN_DOWN(faddr & PMD_MASK));
>> -	*end = min3(rpfn, PFN_DOWN(vma->vm_end),
>> +	*end = min3(rpfn, PFN_DOWN(READ_ONCE(vma->vm_end)),
>>  		    PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
>>  }
>>  
>> -- 
>> 2.7.4
>>
> 
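
A minimal sketch of the accessor Minchan suggests above, built on the
vm_write_begin()/vm_write_end() helpers introduced by this series (the name
and shape are illustrative assumptions, not part of the posted patches):

	#define vma_spf_set_value(vma, field, value)		\
		do {						\
			vm_write_begin(vma);			\
			WRITE_ONCE((vma)->field, (value));	\
			vm_write_end(vma);			\
		} while (0)

	/*
	 * The clear_refs_write() hunk quoted above could then use
	 * vma_spf_set_value(vma, vm_flags, vma->vm_flags & ~VM_SOFTDIRTY),
	 * except that vma_set_page_prot() must stay inside the same write
	 * section there, which is Laurent's point: a per-field accessor only
	 * helps at call sites that update a single field.
	 */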

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 18/25] mm: provide speculative fault infrastructure
  2018-04-17 14:33 ` [PATCH v10 18/25] mm: provide speculative fault infrastructure Laurent Dufour
@ 2018-05-15 13:09   ` vinayak menon
  2018-05-15 14:07     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: vinayak menon @ 2018-05-15 13:09 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
	Andrea Arcangeli, Alexei Starovoitov, kemi.wang,
	sergey.senozhatsky.work, Daniel Jordan, David Rientjes,
	Jerome Glisse, Ganesh Mahendran, linux-kernel, linux-mm, haren,
	khandual, npiggin, Balbir Singh, Paul McKenney, Tim Chen,
	linuxppc-dev, x86, Vinayak Menon

On Tue, Apr 17, 2018 at 8:03 PM, Laurent Dufour
<ldufour@linux.vnet.ibm.com> wrote:
>
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +
> +#ifndef __HAVE_ARCH_PTE_SPECIAL
> +/* This is required by vm_normal_page() */
> +#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
> +#endif
> +
> +/*
> + * vm_normal_page() adds some processing which should be done while
> + * hodling the mmap_sem.
> + */
> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
> +                              unsigned int flags)
> +{
> +       struct vm_fault vmf = {
> +               .address = address,
> +       };
> +       pgd_t *pgd, pgdval;
> +       p4d_t *p4d, p4dval;
> +       pud_t pudval;
> +       int seq, ret = VM_FAULT_RETRY;
> +       struct vm_area_struct *vma;
> +
> +       /* Clear flags that may lead to release the mmap_sem to retry */
> +       flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
> +       flags |= FAULT_FLAG_SPECULATIVE;
> +
> +       vma = get_vma(mm, address);
> +       if (!vma)
> +               return ret;
> +
> +       seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
> +       if (seq & 1)
> +               goto out_put;
> +
> +       /*
> +        * Can't call vm_ops service has we don't know what they would do
> +        * with the VMA.
> +        * This include huge page from hugetlbfs.
> +        */
> +       if (vma->vm_ops)
> +               goto out_put;
> +
> +       /*
> +        * __anon_vma_prepare() requires the mmap_sem to be held
> +        * because vm_next and vm_prev must be safe. This can't be guaranteed
> +        * in the speculative path.
> +        */
> +       if (unlikely(!vma->anon_vma))
> +               goto out_put;
> +
> +       vmf.vma_flags = READ_ONCE(vma->vm_flags);
> +       vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
> +
> +       /* Can't call userland page fault handler in the speculative path */
> +       if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
> +               goto out_put;
> +
> +       if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
> +               /*
> +                * This could be detected by the check address against VMA's
> +                * boundaries but we want to trace it as not supported instead
> +                * of changed.
> +                */
> +               goto out_put;
> +
> +       if (address < READ_ONCE(vma->vm_start)
> +           || READ_ONCE(vma->vm_end) <= address)
> +               goto out_put;
> +
> +       if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> +                                      flags & FAULT_FLAG_INSTRUCTION,
> +                                      flags & FAULT_FLAG_REMOTE)) {
> +               ret = VM_FAULT_SIGSEGV;
> +               goto out_put;
> +       }
> +
> +       /* This is one is required to check that the VMA has write access set */
> +       if (flags & FAULT_FLAG_WRITE) {
> +               if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
> +                       ret = VM_FAULT_SIGSEGV;
> +                       goto out_put;
> +               }
> +       } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
> +               ret = VM_FAULT_SIGSEGV;
> +               goto out_put;
> +       }
> +
> +       if (IS_ENABLED(CONFIG_NUMA)) {
> +               struct mempolicy *pol;
> +
> +               /*
> +                * MPOL_INTERLEAVE implies additional checks in
> +                * mpol_misplaced() which are not compatible with the
> +                *speculative page fault processing.
> +                */
> +               pol = __get_vma_policy(vma, address);


This gives a compile-time error when CONFIG_NUMA is disabled, as there is no
definition for __get_vma_policy().


> +               if (!pol)
> +                       pol = get_task_policy(current);
> +               if (pol && pol->mode == MPOL_INTERLEAVE)
> +                       goto out_put;
> +       }
> +
> +       /*
> +        * Do a speculative lookup of the PTE entry.
> +        */
> +       local_irq_disable();
> +       pgd = pgd_offset(mm, address);
> +       pgdval = READ_ONCE(*pgd);
> +       if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
> +               goto out_walk;
> +
> +       p4d = p4d_offset(pgd, address);
> +       p4dval = READ_ONCE(*p4d);
> +       if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
> +               goto out_walk;
> +
> +       vmf.pud = pud_offset(p4d, address);
> +       pudval = READ_ONCE(*vmf.pud);
> +       if (pud_none(pudval) || unlikely(pud_bad(pudval)))
> +               goto out_walk;
> +
> +       /* Huge pages at PUD level are not supported. */
> +       if (unlikely(pud_trans_huge(pudval)))
> +               goto out_walk;
> +
> +       vmf.pmd = pmd_offset(vmf.pud, address);
> +       vmf.orig_pmd = READ_ONCE(*vmf.pmd);
> +       /*
> +        * pmd_none could mean that a hugepage collapse is in progress
> +        * in our back as collapse_huge_page() mark it before
> +        * invalidating the pte (which is done once the IPI is catched
> +        * by all CPU and we have interrupt disabled).
> +        * For this reason we cannot handle THP in a speculative way since we
> +        * can't safely indentify an in progress collapse operation done in our
> +        * back on that PMD.
> +        * Regarding the order of the following checks, see comment in
> +        * pmd_devmap_trans_unstable()
> +        */
> +       if (unlikely(pmd_devmap(vmf.orig_pmd) ||
> +                    pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
> +                    is_swap_pmd(vmf.orig_pmd)))
> +               goto out_walk;
> +
> +       /*
> +        * The above does not allocate/instantiate page-tables because doing so
> +        * would lead to the possibility of instantiating page-tables after
> +        * free_pgtables() -- and consequently leaking them.
> +        *
> +        * The result is that we take at least one !speculative fault per PMD
> +        * in order to instantiate it.
> +        */
> +
> +       vmf.pte = pte_offset_map(vmf.pmd, address);
> +       vmf.orig_pte = READ_ONCE(*vmf.pte);
> +       barrier(); /* See comment in handle_pte_fault() */
> +       if (pte_none(vmf.orig_pte)) {
> +               pte_unmap(vmf.pte);
> +               vmf.pte = NULL;
> +       }
> +
> +       vmf.vma = vma;
> +       vmf.pgoff = linear_page_index(vma, address);
> +       vmf.gfp_mask = __get_fault_gfp_mask(vma);
> +       vmf.sequence = seq;
> +       vmf.flags = flags;
> +
> +       local_irq_enable();
> +
> +       /*
> +        * We need to re-validate the VMA after checking the bounds, otherwise
> +        * we might have a false positive on the bounds.
> +        */
> +       if (read_seqcount_retry(&vma->vm_sequence, seq))
> +               goto out_put;
> +
> +       mem_cgroup_oom_enable();
> +       ret = handle_pte_fault(&vmf);
> +       mem_cgroup_oom_disable();
> +
> +       put_vma(vma);
> +
> +       /*
> +        * The task may have entered a memcg OOM situation but
> +        * if the allocation error was handled gracefully (no
> +        * VM_FAULT_OOM), there is no need to kill anything.
> +        * Just clean up the OOM state peacefully.
> +        */
> +       if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
> +               mem_cgroup_oom_synchronize(false);
> +       return ret;
> +
> +out_walk:
> +       local_irq_enable();
> +out_put:
> +       put_vma(vma);
> +       return ret;
> +}
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
> +
>  /*
>   * By the time we get here, we already hold the mm semaphore
>   *
> --
> 2.7.4
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 18/25] mm: provide speculative fault infrastructure
  2018-05-15 13:09   ` vinayak menon
@ 2018-05-15 14:07     ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-05-15 14:07 UTC (permalink / raw)
  To: vinayak menon
  Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
	Andrea Arcangeli, Alexei Starovoitov, kemi.wang,
	sergey.senozhatsky.work, Daniel Jordan, David Rientjes,
	Jerome Glisse, Ganesh Mahendran, linux-kernel, linux-mm, haren,
	khandual, npiggin, Balbir Singh, Paul McKenney, Tim Chen,
	linuxppc-dev, x86, Vinayak Menon

On 15/05/2018 15:09, vinayak menon wrote:
> On Tue, Apr 17, 2018 at 8:03 PM, Laurent Dufour
> <ldufour@linux.vnet.ibm.com> wrote:
>>
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +
>> +#ifndef __HAVE_ARCH_PTE_SPECIAL
>> +/* This is required by vm_normal_page() */
>> +#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
>> +#endif
>> +
>> +/*
>> + * vm_normal_page() adds some processing which should be done while
>> + * hodling the mmap_sem.
>> + */
>> +int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>> +                              unsigned int flags)
>> +{
>> +       struct vm_fault vmf = {
>> +               .address = address,
>> +       };
>> +       pgd_t *pgd, pgdval;
>> +       p4d_t *p4d, p4dval;
>> +       pud_t pudval;
>> +       int seq, ret = VM_FAULT_RETRY;
>> +       struct vm_area_struct *vma;
>> +
>> +       /* Clear flags that may lead to release the mmap_sem to retry */
>> +       flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>> +       flags |= FAULT_FLAG_SPECULATIVE;
>> +
>> +       vma = get_vma(mm, address);
>> +       if (!vma)
>> +               return ret;
>> +
>> +       seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>> +       if (seq & 1)
>> +               goto out_put;
>> +
>> +       /*
>> +        * Can't call vm_ops service has we don't know what they would do
>> +        * with the VMA.
>> +        * This include huge page from hugetlbfs.
>> +        */
>> +       if (vma->vm_ops)
>> +               goto out_put;
>> +
>> +       /*
>> +        * __anon_vma_prepare() requires the mmap_sem to be held
>> +        * because vm_next and vm_prev must be safe. This can't be guaranteed
>> +        * in the speculative path.
>> +        */
>> +       if (unlikely(!vma->anon_vma))
>> +               goto out_put;
>> +
>> +       vmf.vma_flags = READ_ONCE(vma->vm_flags);
>> +       vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>> +
>> +       /* Can't call userland page fault handler in the speculative path */
>> +       if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>> +               goto out_put;
>> +
>> +       if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>> +               /*
>> +                * This could be detected by the check address against VMA's
>> +                * boundaries but we want to trace it as not supported instead
>> +                * of changed.
>> +                */
>> +               goto out_put;
>> +
>> +       if (address < READ_ONCE(vma->vm_start)
>> +           || READ_ONCE(vma->vm_end) <= address)
>> +               goto out_put;
>> +
>> +       if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>> +                                      flags & FAULT_FLAG_INSTRUCTION,
>> +                                      flags & FAULT_FLAG_REMOTE)) {
>> +               ret = VM_FAULT_SIGSEGV;
>> +               goto out_put;
>> +       }
>> +
>> +       /* This is one is required to check that the VMA has write access set */
>> +       if (flags & FAULT_FLAG_WRITE) {
>> +               if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
>> +                       ret = VM_FAULT_SIGSEGV;
>> +                       goto out_put;
>> +               }
>> +       } else if (unlikely(!(vmf.vma_flags & (VM_READ|VM_EXEC|VM_WRITE)))) {
>> +               ret = VM_FAULT_SIGSEGV;
>> +               goto out_put;
>> +       }
>> +
>> +       if (IS_ENABLED(CONFIG_NUMA)) {
>> +               struct mempolicy *pol;
>> +
>> +               /*
>> +                * MPOL_INTERLEAVE implies additional checks in
>> +                * mpol_misplaced() which are not compatible with the
>> +                *speculative page fault processing.
>> +                */
>> +               pol = __get_vma_policy(vma, address);
> 
> 
> This gives a compile-time error when CONFIG_NUMA is disabled, as there is no
> definition for __get_vma_policy().

IS_ENABLED() is not working as I expected, my mistake.
I'll roll back to the legacy #ifdef stuff.

Thanks,
Laurent.


>> +               if (!pol)
>> +                       pol = get_task_policy(current);
>> +               if (pol && pol->mode == MPOL_INTERLEAVE)
>> +                       goto out_put;
>> +       }
>> +
>> +       /*
>> +        * Do a speculative lookup of the PTE entry.
>> +        */
>> +       local_irq_disable();
>> +       pgd = pgd_offset(mm, address);
>> +       pgdval = READ_ONCE(*pgd);
>> +       if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
>> +               goto out_walk;
>> +
>> +       p4d = p4d_offset(pgd, address);
>> +       p4dval = READ_ONCE(*p4d);
>> +       if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
>> +               goto out_walk;
>> +
>> +       vmf.pud = pud_offset(p4d, address);
>> +       pudval = READ_ONCE(*vmf.pud);
>> +       if (pud_none(pudval) || unlikely(pud_bad(pudval)))
>> +               goto out_walk;
>> +
>> +       /* Huge pages at PUD level are not supported. */
>> +       if (unlikely(pud_trans_huge(pudval)))
>> +               goto out_walk;
>> +
>> +       vmf.pmd = pmd_offset(vmf.pud, address);
>> +       vmf.orig_pmd = READ_ONCE(*vmf.pmd);
>> +       /*
>> +        * pmd_none could mean that a hugepage collapse is in progress
>> +        * in our back as collapse_huge_page() mark it before
>> +        * invalidating the pte (which is done once the IPI is catched
>> +        * by all CPU and we have interrupt disabled).
>> +        * For this reason we cannot handle THP in a speculative way since we
>> +        * can't safely indentify an in progress collapse operation done in our
>> +        * back on that PMD.
>> +        * Regarding the order of the following checks, see comment in
>> +        * pmd_devmap_trans_unstable()
>> +        */
>> +       if (unlikely(pmd_devmap(vmf.orig_pmd) ||
>> +                    pmd_none(vmf.orig_pmd) || pmd_trans_huge(vmf.orig_pmd) ||
>> +                    is_swap_pmd(vmf.orig_pmd)))
>> +               goto out_walk;
>> +
>> +       /*
>> +        * The above does not allocate/instantiate page-tables because doing so
>> +        * would lead to the possibility of instantiating page-tables after
>> +        * free_pgtables() -- and consequently leaking them.
>> +        *
>> +        * The result is that we take at least one !speculative fault per PMD
>> +        * in order to instantiate it.
>> +        */
>> +
>> +       vmf.pte = pte_offset_map(vmf.pmd, address);
>> +       vmf.orig_pte = READ_ONCE(*vmf.pte);
>> +       barrier(); /* See comment in handle_pte_fault() */
>> +       if (pte_none(vmf.orig_pte)) {
>> +               pte_unmap(vmf.pte);
>> +               vmf.pte = NULL;
>> +       }
>> +
>> +       vmf.vma = vma;
>> +       vmf.pgoff = linear_page_index(vma, address);
>> +       vmf.gfp_mask = __get_fault_gfp_mask(vma);
>> +       vmf.sequence = seq;
>> +       vmf.flags = flags;
>> +
>> +       local_irq_enable();
>> +
>> +       /*
>> +        * We need to re-validate the VMA after checking the bounds, otherwise
>> +        * we might have a false positive on the bounds.
>> +        */
>> +       if (read_seqcount_retry(&vma->vm_sequence, seq))
>> +               goto out_put;
>> +
>> +       mem_cgroup_oom_enable();
>> +       ret = handle_pte_fault(&vmf);
>> +       mem_cgroup_oom_disable();
>> +
>> +       put_vma(vma);
>> +
>> +       /*
>> +        * The task may have entered a memcg OOM situation but
>> +        * if the allocation error was handled gracefully (no
>> +        * VM_FAULT_OOM), there is no need to kill anything.
>> +        * Just clean up the OOM state peacefully.
>> +        */
>> +       if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
>> +               mem_cgroup_oom_synchronize(false);
>> +       return ret;
>> +
>> +out_walk:
>> +       local_irq_enable();
>> +out_put:
>> +       put_vma(vma);
>> +       return ret;
>> +}
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>> +
>>  /*
>>   * By the time we get here, we already hold the mm semaphore
>>   *
>> --
>> 2.7.4
>>
> 
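
A sketch of the rollback Laurent mentions above, keeping the same checks as
the quoted code but guarded by an explicit #ifdef (an assumption about the
next revision, not the posted code):

#ifdef CONFIG_NUMA
	{
		struct mempolicy *pol;

		/*
		 * MPOL_INTERLEAVE implies additional checks in
		 * mpol_misplaced() which are not compatible with the
		 * speculative page fault processing.
		 */
		pol = __get_vma_policy(vma, address);
		if (!pol)
			pol = get_task_policy(current);
		if (pol && pol->mode == MPOL_INTERLEAVE)
			goto out_put;
	}
#endif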

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 23/25] mm: add speculative page fault vmstats
  2018-04-17 14:33 ` [PATCH v10 23/25] mm: add speculative page fault vmstats Laurent Dufour
@ 2018-05-16  2:50   ` Ganesh Mahendran
  2018-05-16  6:42     ` Laurent Dufour
  0 siblings, 1 reply; 68+ messages in thread
From: Ganesh Mahendran @ 2018-05-16  2:50 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
	Andrea Arcangeli, Alexei Starovoitov, kemi.wang,
	Sergey Senozhatsky, Daniel Jordan, David Rientjes, Jerome Glisse,
	linux-kernel, Linux-MM, haren, khandual, npiggin, Balbir Singh,
	Paul McKenney, Tim Chen, linuxppc-dev, x86

2018-04-17 22:33 GMT+08:00 Laurent Dufour <ldufour@linux.vnet.ibm.com>:
> Add speculative_pgfault vmstat counter to count successful speculative page
> fault handling.
>
> Also fixing a minor typo in include/linux/vm_event_item.h.
>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  include/linux/vm_event_item.h | 3 +++
>  mm/memory.c                   | 1 +
>  mm/vmstat.c                   | 5 ++++-
>  3 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index 5c7f010676a7..a240acc09684 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -111,6 +111,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>                 SWAP_RA,
>                 SWAP_RA_HIT,
>  #endif
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +               SPECULATIVE_PGFAULT,
> +#endif
>                 NR_VM_EVENT_ITEMS
>  };
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 425f07e0bf38..1cd5bc000643 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4508,6 +4508,7 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>          * If there is no need to retry, don't return the vma to the caller.
>          */
>         if (ret != VM_FAULT_RETRY) {
> +               count_vm_event(SPECULATIVE_PGFAULT);
>                 put_vma(vmf.vma);
>                 *vma = NULL;
>         }
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 536332e988b8..c6b49bfa8139 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1289,7 +1289,10 @@ const char * const vmstat_text[] = {
>         "swap_ra",
>         "swap_ra_hit",
>  #endif
> -#endif /* CONFIG_VM_EVENTS_COUNTERS */
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> +       "speculative_pgfault"

"speculative_pgfault",
will be better. :)

> +#endif
> +#endif /* CONFIG_VM_EVENT_COUNTERS */
>  };
>  #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
>
> --
> 2.7.4
>

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [PATCH v10 23/25] mm: add speculative page fault vmstats
  2018-05-16  2:50   ` Ganesh Mahendran
@ 2018-05-16  6:42     ` Laurent Dufour
  0 siblings, 0 replies; 68+ messages in thread
From: Laurent Dufour @ 2018-05-16  6:42 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: Andrew Morton, Michal Hocko, Peter Zijlstra, kirill, ak, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, Sergey Senozhatsky,
	Andrea Arcangeli, Alexei Starovoitov, kemi.wang,
	Sergey Senozhatsky, Daniel Jordan, David Rientjes, Jerome Glisse,
	linux-kernel, Linux-MM, haren, khandual, npiggin, Balbir Singh,
	Paul McKenney, Tim Chen, linuxppc-dev, x86



On 16/05/2018 04:50, Ganesh Mahendran wrote:
> 2018-04-17 22:33 GMT+08:00 Laurent Dufour <ldufour@linux.vnet.ibm.com>:
>> Add speculative_pgfault vmstat counter to count successful speculative page
>> fault handling.
>>
>> Also fixing a minor typo in include/linux/vm_event_item.h.
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  include/linux/vm_event_item.h | 3 +++
>>  mm/memory.c                   | 1 +
>>  mm/vmstat.c                   | 5 ++++-
>>  3 files changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
>> index 5c7f010676a7..a240acc09684 100644
>> --- a/include/linux/vm_event_item.h
>> +++ b/include/linux/vm_event_item.h
>> @@ -111,6 +111,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>>                 SWAP_RA,
>>                 SWAP_RA_HIT,
>>  #endif
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +               SPECULATIVE_PGFAULT,
>> +#endif
>>                 NR_VM_EVENT_ITEMS
>>  };
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 425f07e0bf38..1cd5bc000643 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4508,6 +4508,7 @@ int __handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>          * If there is no need to retry, don't return the vma to the caller.
>>          */
>>         if (ret != VM_FAULT_RETRY) {
>> +               count_vm_event(SPECULATIVE_PGFAULT);
>>                 put_vma(vmf.vma);
>>                 *vma = NULL;
>>         }
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 536332e988b8..c6b49bfa8139 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/vmstat.c
>> @@ -1289,7 +1289,10 @@ const char * const vmstat_text[] = {
>>         "swap_ra",
>>         "swap_ra_hit",
>>  #endif
>> -#endif /* CONFIG_VM_EVENTS_COUNTERS */
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +       "speculative_pgfault"
> 
> "speculative_pgfault",
> will be better. :)

Sure !

Thanks.

> 
>> +#endif
>> +#endif /* CONFIG_VM_EVENT_COUNTERS */
>>  };
>>  #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
>>
>> --
>> 2.7.4
>>
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

end of thread, other threads:[~2018-05-16  6:42 UTC | newest]

Thread overview: 68+ messages
-- links below jump to the message on this page --
2018-04-17 14:33 [PATCH v10 00/25] Speculative page faults Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 01/25] mm: introduce CONFIG_SPECULATIVE_PAGE_FAULT Laurent Dufour
2018-04-23  5:58   ` Minchan Kim
2018-04-23 15:10     ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 02/25] x86/mm: define ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
2018-05-08 11:04   ` Punit Agrawal
2018-05-08 11:04     ` Punit Agrawal
2018-05-08 11:04     ` Punit Agrawal
2018-05-14 14:47     ` Laurent Dufour
2018-05-14 15:05       ` Punit Agrawal
2018-05-14 15:05         ` Punit Agrawal
2018-05-14 15:05         ` Punit Agrawal
2018-04-17 14:33 ` [PATCH v10 03/25] powerpc/mm: set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 04/25] mm: prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 05/25] mm: introduce pte_spinlock " Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 06/25] mm: make pte_unmap_same compatible with SPF Laurent Dufour
2018-04-23  6:31   ` Minchan Kim
2018-04-30 14:07     ` Laurent Dufour
2018-05-01 13:04       ` Minchan Kim
2018-05-10 16:15   ` vinayak menon
2018-05-14 15:09     ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 07/25] mm: introduce INIT_VMA() Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 08/25] mm: VMA sequence count Laurent Dufour
2018-04-23  6:42   ` Minchan Kim
2018-04-30 15:14     ` Laurent Dufour
2018-05-01 13:16       ` Minchan Kim
2018-05-03 14:45         ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 09/25] mm: protect VMA modifications using " Laurent Dufour
2018-04-23  7:19   ` Minchan Kim
2018-05-14 15:25     ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 10/25] mm: protect mremap() against SPF hanlder Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 11/25] mm: protect SPF handler against anon_vma changes Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 12/25] mm: cache some VMA fields in the vm_fault structure Laurent Dufour
2018-04-23  7:42   ` Minchan Kim
2018-05-03 12:25     ` Laurent Dufour
2018-05-03 15:42       ` Minchan Kim
2018-05-04  9:10         ` Laurent Dufour
2018-05-08 10:56           ` Minchan Kim
2018-04-17 14:33 ` [PATCH v10 13/25] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 14/25] mm: introduce __lru_cache_add_active_or_unevictable Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 15/25] mm: introduce __vm_normal_page() Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 16/25] mm: introduce __page_add_new_anon_rmap() Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 17/25] mm: protect mm_rb tree with a rwlock Laurent Dufour
2018-04-30 18:47   ` Punit Agrawal
2018-05-02  6:37     ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 18/25] mm: provide speculative fault infrastructure Laurent Dufour
2018-05-15 13:09   ` vinayak menon
2018-05-15 14:07     ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 19/25] mm: adding speculative page fault failure trace events Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 20/25] perf: add a speculative page fault sw event Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 21/25] perf tools: add support for the SPF perf event Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 22/25] mm: speculative page fault handler return VMA Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 23/25] mm: add speculative page fault vmstats Laurent Dufour
2018-05-16  2:50   ` Ganesh Mahendran
2018-05-16  6:42     ` Laurent Dufour
2018-04-17 14:33 ` [PATCH v10 24/25] x86/mm: add speculative pagefault handling Laurent Dufour
2018-04-30 18:43   ` Punit Agrawal
2018-05-03 14:59     ` Laurent Dufour
2018-05-04 15:55       ` Punit Agrawal
2018-05-04 15:55         ` Punit Agrawal
2018-04-17 14:33 ` [PATCH v10 25/25] powerpc/mm: add speculative page fault Laurent Dufour
2018-04-17 16:51 ` [PATCH v10 00/25] Speculative page faults Christopher Lameter
2018-05-02 14:17 ` Punit Agrawal
2018-05-02 14:17   ` Punit Agrawal
2018-05-02 14:17   ` Punit Agrawal
2018-05-02 14:45   ` Laurent Dufour
2018-05-02 15:50     ` Punit Agrawal
2018-05-02 15:50       ` Punit Agrawal
