linux-mm.kvack.org archive mirror
* [PATCH v2 00/20] Speculative page faults
@ 2017-08-17 22:04 Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 01/20] mm: Dont assume page-table invariance during faults Laurent Dufour
                   ` (21 more replies)
  0 siblings, 22 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:04 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

This is a port to kernel 4.13 of the work done by Peter Zijlstra to
handle page faults without holding the mm semaphore [1].

The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes, since the page fault handler will no longer wait for other
threads' memory layout changes to complete, assuming those changes are done
in another part of the process's address space. This type of page fault is
named a speculative page fault. If the speculative page fault fails, because
a concurrent change is detected or because the underlying PMD or PTE tables
are not yet allocated, its processing is aborted and a classic page fault is
tried instead.

The speculative page fault (SPF) handler has to look up the VMA matching
the fault address without holding the mmap_sem, so the VMA list is now
managed using SRCU, allowing lockless walking. The only impact is the
deferred file dereferencing in the case of a file mapping, since the file
pointer is released once the SRCU cleanup is done.  This relies on the
change recently made by Paul McKenney to SRCU, which now runs callbacks per
CPU instead of per SRCU structure [2].
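
As an illustration only, here is a minimal sketch of the SRCU read side
around the lockless VMA lookup. It uses the vma_srcu domain and the
find_vma_srcu() helper introduced later in this series; the function name
below is made up and error handling is stripped down:

	/* Sketch only: simplified read side of the lockless VMA lookup. */
	int spf_lookup_sketch(struct mm_struct *mm, unsigned long address)
	{
		struct vm_area_struct *vma;
		int idx, ret = VM_FAULT_RETRY;

		idx = srcu_read_lock(&vma_srcu);  /* keeps the VMA and its file alive */
		vma = find_vma_srcu(mm, address); /* seqlock-validated rbtree walk, no mmap_sem */
		if (vma && address >= vma->vm_start && address < vma->vm_end)
			ret = 0;	/* the speculative handling itself would go here */
		srcu_read_unlock(&vma_srcu, idx);
		return ret;
	}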

The VMA attributes checked during the speculative page fault processing
have to be protected against parallel changes. This is done using a per-VMA
sequence lock, which allows the speculative page fault handler to quickly
check for parallel changes in progress and to abort the speculative page
fault in that case.
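
As an illustration, the reader side could snapshot the checked attributes
under the vm_sequence count roughly as follows (the helper name is made up
for the example; the series uses vma_has_changed() and caches the fields in
the vm_fault structure, see below):

	/* Sketch only: snapshot a VMA and detect parallel changes. */
	static int spf_snapshot_vma(struct vm_area_struct *vma,
				    unsigned long address, struct vm_fault *vmf)
	{
		unsigned int seq;

		seq = read_seqcount_begin(&vma->vm_sequence); /* waits out an in-flight writer */
		if (address < vma->vm_start || address >= vma->vm_end)
			return VM_FAULT_RETRY;
		vmf->vma_flags = READ_ONCE(vma->vm_flags);
		vmf->vma_page_prot = vma->vm_page_prot;
		if (read_seqcount_retry(&vma->vm_sequence, seq))
			return VM_FAULT_RETRY;	/* a writer ran in parallel: abort */
		return 0;
	}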

Once the VMA is found, the speculative page fault handler checks the VMA's
attributes to verify whether the page fault can be handled this way. The
VMA is protected through the sequence lock, which allows fast detection of
concurrent VMA changes. If such a change is detected, the speculative page
fault is aborted and a *classic* page fault is tried instead.  VMA sequence
lock write sections are added wherever the VMA attributes checked during
the page fault are modified.
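
The write side, as applied in this series around vma_adjust(),
mprotect_fixup(), mlock_fixup(), madvise_behavior() and the other sites
touching those attributes, boils down to the following pattern (shown here
as a stand-alone helper only for illustration; the series open-codes it at
each site):

	/* Sketch only: bracket an attribute update with the VMA seqcount. */
	static void spf_update_vm_flags(struct vm_area_struct *vma,
					unsigned long newflags)
	{
		write_seqcount_begin(&vma->vm_sequence); /* concurrent readers will abort */
		WRITE_ONCE(vma->vm_flags, newflags);	 /* no torn or partial value is visible */
		write_seqcount_end(&vma->vm_sequence);
	}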

When the PTE is fetched, the VMA is checked again to see whether it has
changed, so once the page table is locked the VMA is known to be valid. Any
other change touching this PTE would first need to take the page table
lock, so no parallel change is possible at this point.

Compared to Peter's initial work, this series introduces a spin_trylock()
when dealing with the speculative page fault. This is required to avoid a
deadlock when handling a page fault while a TLB invalidation is requested
by another CPU holding the PTE lock. Another change was made because of a
lock dependency issue with mapping->i_mmap_rwsem.
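
Putting the two previous points together, the speculative variant of
pte_map_lock() (introduced as an always-succeeding stub in patch 2 and
fleshed out later in the series) could look roughly like the sketch below;
the name and the open-coded seqcount recheck are simplifications for the
example, the series uses a vma_has_changed() helper instead:

	/* Sketch only: lock the PTE without mmap_sem, aborting on any conflict. */
	static bool pte_map_lock_speculative(struct vm_fault *vmf, unsigned int seq)
	{
		vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
		if (!spin_trylock(vmf->ptl))	/* don't deadlock against a CPU invalidating TLBs */
			return false;
		if (read_seqcount_retry(&vmf->vma->vm_sequence, seq)) {
			spin_unlock(vmf->ptl);	/* the VMA changed under us: abort */
			return false;
		}
		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
		return true;
	}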

In addition, some VMA field values which are used once the PTE is unlocked
at the end of the page fault path are saved into the vm_fault structure, so
that the values matching the VMA at the time the PTE was locked are used.

This series builds on top of v4.13-rc5 and is functional on x86 and
PowerPC.

Tests have been made using a large commercial in-memory database on a
PowerPC system with 752 CPUs, using RFC v5. The results are very
encouraging since the loading of the 2TB database was 14% faster with the
speculative page fault.

Using the ebizzy test [3], which spawns a lot of threads, the results are
good when running on either a large or a small system. When using kernbench
[4], the results are quite similar, which is expected as not many
multithreaded processes are involved. But there is no performance
degradation either, which is good.

------------------
Benchmarks results

Note these tests have been made on top of 4.13-rc3 with the following
patch from Paul McKenney applied:
 "srcu: Provide ordering for CPU not involved in grace period" [5]

Ebizzy:
-------
The test counts the number of records per second it can manage; the higher
the better. I ran it like this: 'ebizzy -mTRp'. To get consistent results I
repeated the test 100 times and measured the average, mean deviation, max
and min.

- 16 CPUs x86 VM
Records/s	4.13-rc5	4.13-rc5-spf
Average		11350.29	21760.36
Mean deviation	396.56		881.40
Max		13773		26194
Min		10567		19223

- 80 CPUs Power 8 node:
Records/s	4.13-rc5	4.13-rc5-spf
Average		33904.67	58847.91
Mean deviation	789.40		1753.19
Max		36703		68958
Min		31759		55125

The number of records per second is far better with the speculative page
fault.
The mean deviation is higher with the speculative page fault, maybe because
sometimes the faults are not handled in a speculative way, leading to more
variation.


Kernbench:
----------
This test builds a 4.12 kernel using the platform's default config. The
build has been run 5 times at each load level.

- 16 CPUs x86 VM
Average Half load -j 8 Run (std deviation)
 		 4.13.0-rc5		4.13.0-rc5-spf
Elapsed Time     166.574 (0.340779)	145.754 (0.776325)		
User Time        1080.77 (2.05871)	999.272 (4.12142)		
System Time      204.594 (1.02449)	116.362 (1.22974)		
Percent CPU 	 771.2 (1.30384)	765 (0.707107)
Context Switches 46590.6 (935.591)	66316.4 (744.64)
Sleeps           84421.2 (596.612)	85186 (523.041)		

Average Optimal load -j 16 Run (std deviation)
 		 4.13.0-rc5		4.13.0-rc5-spf
Elapsed Time     85.422 (0.42293)	74.81 (0.419345)
User Time        1031.79 (51.6557)	954.912 (46.8439)
System Time      186.528 (19.0575)	107.514 (9.36902)
Percent CPU 	 1059.2 (303.607)	1056.8 (307.624)
Context Switches 67240.3 (21788.9)	89360.6 (24299.9)
Sleeps           89607.8 (5511.22)	90372.5 (5490.16)

The elapsed time is a bit shorter in the case of the SPF kernel, but the
impact is less important since fewer multithreaded processes are involved
here.

- 80 CPUs Power 8 node:
Average Half load -j 40 Run (std deviation)
 		 4.13.0-rc5		4.13.0-rc5-spf
Elapsed Time     117.176 (0.824093)	116.792 (0.695392)
User Time        4412.34 (24.29)	4396.02 (24.4819)
System Time      131.106 (1.28343)	133.452 (0.708851)
Percent CPU      3876.8 (18.1439)	3877.6 (21.9955)
Context Switches 72470.2 (466.181)	72971 (673.624)
Sleeps           161294 (2284.85)	161946 (2217.9)

Average Optimal load -j 80 Run (std deviation)
 		 4.13.0-rc5		4.13.0-rc5-spf
Elapsed Time     111.176 (1.11123)	111.242 (0.801542)
User Time        5930.03 (1600.07)	5929.89 (1617)
System Time      166.258 (37.0662)	169.337 (37.8419)
Percent CPU      5378.5 (1584.16)	5385.6 (1590.24)
Context Switches 117389 (47350.1)	130132 (60256.3)
Sleeps           163354 (4153.9)	163219 (2251.27)

Here the elapsed time is a bit shorter using the SPF kernel, but we remain
within the error margin. It has to be noted that this system is not
correctly balanced from a NUMA point of view as all the available memory is
attached to one core.

------------------------
Changes since v1:
 - Remove the PERF_COUNT_SW_SPF_FAILED perf event.
 - Add tracing events to detail speculative page fault failures.
 - Cache VMA field values which are used once the PTE is unlocked at the
 end of the page fault handling.
 - Ensure that fields read during the speculative path are written and read
 using WRITE_ONCE and READ_ONCE.
 - Add checks at the beginning of the speculative path to abort it if the
 VMA is known not to be supported.
Changes since RFC v5 [6]:
 - Port to the 4.13 kernel.
 - Merge the patch fixing the lock dependency into the original patch.
 - Replace the 2 parameters of vma_has_changed() with the vmf pointer.
 - In patch 7, don't call __do_fault() in the speculative path as it may
 want to unlock the mmap_sem.
 - In patches 11-12, don't check the VMA boundaries when
 page_add_new_anon_rmap() is called during the SPF path, and protect
 against updates of the anon_vma pointer.
 - In patches 13-16, add performance events to report the number of
 successful and failed speculative faults.

[1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=da915ad5cf25b5f5d358dd3670c3378d8ae8c03e
[3] http://ebizzy.sourceforge.net/
[4] http://ck.kolivas.org/apps/kernbench/kernbench-0.50/
[5] https://lkml.org/lkml/2017/7/24/829
[6] https://lwn.net/Articles/725607/

Laurent Dufour (14):
  mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
  mm: Protect VMA modifications using VMA sequence count
  mm: Cache some VMA fields in the vm_fault structure
  mm: Protect SPF handler against anon_vma changes
  mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
  mm: Introduce __lru_cache_add_active_or_unevictable
  mm: Introduce __maybe_mkwrite()
  mm: Introduce __vm_normal_page()
  mm: Introduce __page_add_new_anon_rmap()
  mm: Try spin lock in speculative path
  mm: Adding speculative page fault failure trace events
  perf: Add a speculative page fault sw event
  perf tools: Add support for the SPF perf event
  powerpc/mm: Add speculative page fault

Peter Zijlstra (6):
  mm: Dont assume page-table invariance during faults
  mm: Prepare for FAULT_FLAG_SPECULATIVE
  mm: VMA sequence count
  mm: RCU free VMAs
  mm: Provide speculative fault infrastructure
  x86/mm: Add speculative pagefault handling

 arch/powerpc/include/asm/book3s/64/pgtable.h |   5 +
 arch/powerpc/mm/fault.c                      |  30 +-
 arch/x86/include/asm/pgtable_types.h         |   7 +
 arch/x86/mm/fault.c                          |  19 ++
 fs/proc/task_mmu.c                           |   5 +-
 fs/userfaultfd.c                             |  17 +-
 include/linux/hugetlb_inline.h               |   2 +-
 include/linux/migrate.h                      |   4 +-
 include/linux/mm.h                           |  21 +-
 include/linux/mm_types.h                     |   3 +
 include/linux/pagemap.h                      |   4 +-
 include/linux/rmap.h                         |  12 +-
 include/linux/swap.h                         |  11 +-
 include/trace/events/pagefault.h             |  87 +++++
 include/uapi/linux/perf_event.h              |   1 +
 kernel/fork.c                                |   1 +
 mm/hugetlb.c                                 |   2 +
 mm/init-mm.c                                 |   1 +
 mm/internal.h                                |  19 ++
 mm/khugepaged.c                              |   5 +
 mm/madvise.c                                 |   6 +-
 mm/memory.c                                  | 474 ++++++++++++++++++++++-----
 mm/mempolicy.c                               |  51 ++-
 mm/migrate.c                                 |   4 +-
 mm/mlock.c                                   |  13 +-
 mm/mmap.c                                    | 138 ++++++--
 mm/mprotect.c                                |   4 +-
 mm/mremap.c                                  |   7 +
 mm/rmap.c                                    |   5 +-
 mm/swap.c                                    |  12 +-
 tools/include/uapi/linux/perf_event.h        |   1 +
 tools/perf/util/evsel.c                      |   1 +
 tools/perf/util/parse-events.c               |   4 +
 tools/perf/util/parse-events.l               |   1 +
 tools/perf/util/python.c                     |   1 +
 35 files changed, 803 insertions(+), 175 deletions(-)
 create mode 100644 include/trace/events/pagefault.h

-- 
2.7.4


* [PATCH v2 01/20] mm: Dont assume page-table invariance during faults
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 02/20] mm: Prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.

Remove the reliance on the pte pointer.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 mm/memory.c | 27 ---------------------------
 1 file changed, 27 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e158f7ac6730..36609c082256 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2131,30 +2131,6 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
-/*
- * handle_pte_fault chooses page fault handler according to an entry which was
- * read non-atomically.  Before making any commitment, on those architectures
- * or configurations (e.g. i386 with PAE) which might give a mix of unmatched
- * parts, do_swap_page must check under lock before unmapping the pte and
- * proceeding (but do_wp_page is only called after already making such a check;
- * and do_anonymous_page can safely check later on).
- */
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-				pte_t *page_table, pte_t orig_pte)
-{
-	int same = 1;
-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
-	if (sizeof(pte_t) > sizeof(unsigned long)) {
-		spinlock_t *ptl = pte_lockptr(mm, pmd);
-		spin_lock(ptl);
-		same = pte_same(*page_table, orig_pte);
-		spin_unlock(ptl);
-	}
-#endif
-	pte_unmap(page_table);
-	return same;
-}
-
 static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
 {
 	debug_dma_assert_idle(src);
@@ -2711,9 +2687,6 @@ int do_swap_page(struct vm_fault *vmf)
 	int exclusive = 0;
 	int ret = 0;
 
-	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
-		goto out;
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
-- 
2.7.4


* [PATCH v2 02/20] mm: Prepare for FAULT_FLAG_SPECULATIVE
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 01/20] mm: Dont assume page-table invariance during faults Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 03/20] mm: Introduce pte_spinlock " Laurent Dufour
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

When speculating faults (without holding mmap_sem) we need to validate
that the vma against which we loaded pages is still valid when we're
ready to install the new PTE.

Therefore, replace the pte_offset_map_lock() calls that (re)take the
PTL with pte_map_lock() which can fail in case we find the VMA changed
since we started the fault.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Port to 4.12 kernel]
[Remove the comment about the fault_env structure which has been
 implemented as the vm_fault structure in the kernel]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 55 ++++++++++++++++++++++++++++++++++++++----------------
 2 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 46b9ac5e8569..8763ec96dc78 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -286,6 +286,7 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_USER		0x40	/* The fault originated in userspace */
 #define FAULT_FLAG_REMOTE	0x80	/* faulting for non current tsk/mm */
 #define FAULT_FLAG_INSTRUCTION  0x100	/* The fault was during an instruction fetch */
+#define FAULT_FLAG_SPECULATIVE	0x200	/* Speculative fault, not holding mmap_sem */
 
 #define FAULT_FLAG_TRACE \
 	{ FAULT_FLAG_WRITE,		"WRITE" }, \
diff --git a/mm/memory.c b/mm/memory.c
index 36609c082256..3ed1b00ca841 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2269,6 +2269,12 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 }
 
+static bool pte_map_lock(struct vm_fault *vmf)
+{
+	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
+	return true;
+}
+
 /*
  * Handle the case of a page which we actually need to copy to a new page.
  *
@@ -2296,6 +2302,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 	const unsigned long mmun_start = vmf->address & PAGE_MASK;
 	const unsigned long mmun_end = mmun_start + PAGE_SIZE;
 	struct mem_cgroup *memcg;
+	int ret = VM_FAULT_OOM;
 
 	if (unlikely(anon_vma_prepare(vma)))
 		goto oom;
@@ -2323,7 +2330,11 @@ static int wp_page_copy(struct vm_fault *vmf)
 	/*
 	 * Re-check the pte - we dropped the lock
 	 */
-	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		mem_cgroup_cancel_charge(new_page, memcg, false);
+		ret = VM_FAULT_RETRY;
+		goto oom_free_new;
+	}
 	if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
 		if (old_page) {
 			if (!PageAnon(old_page)) {
@@ -2411,7 +2422,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 oom:
 	if (old_page)
 		put_page(old_page);
-	return VM_FAULT_OOM;
+	return ret;
 }
 
 /**
@@ -2432,8 +2443,8 @@ static int wp_page_copy(struct vm_fault *vmf)
 int finish_mkwrite_fault(struct vm_fault *vmf)
 {
 	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
-	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
-				       &vmf->ptl);
+	if (!pte_map_lock(vmf))
+		return VM_FAULT_RETRY;
 	/*
 	 * We might have raced with another page fault while we released the
 	 * pte_offset_map_lock.
@@ -2551,8 +2562,11 @@ static int do_wp_page(struct vm_fault *vmf)
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			lock_page(vmf->page);
-			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					vmf->address, &vmf->ptl);
+			if (!pte_map_lock(vmf)) {
+				unlock_page(vmf->page);
+				put_page(vmf->page);
+				return VM_FAULT_RETRY;
+			}
 			if (!pte_same(*vmf->pte, vmf->orig_pte)) {
 				unlock_page(vmf->page);
 				pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2710,8 +2724,10 @@ int do_swap_page(struct vm_fault *vmf)
 			 * Back out if somebody else faulted in this pte
 			 * while we released the pte lock.
 			 */
-			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					vmf->address, &vmf->ptl);
+			if (!pte_map_lock(vmf)) {
+				delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+				return VM_FAULT_RETRY;
+			}
 			if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
 				ret = VM_FAULT_OOM;
 			delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
@@ -2767,8 +2783,11 @@ int do_swap_page(struct vm_fault *vmf)
 	/*
 	 * Back out if somebody else already faulted in this pte.
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		ret = VM_FAULT_RETRY;
+		mem_cgroup_cancel_charge(page, memcg, false);
+		goto out_page;
+	}
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
 		goto out_nomap;
 
@@ -2894,8 +2913,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
 			!mm_forbids_zeropage(vma->vm_mm)) {
 		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
 						vma->vm_page_prot));
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-				vmf->address, &vmf->ptl);
+		if (!pte_map_lock(vmf))
+			return VM_FAULT_RETRY;
 		if (!pte_none(*vmf->pte))
 			goto unlock;
 		/* Deliver the page fault to userland, check inside PT lock */
@@ -2927,8 +2946,11 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		mem_cgroup_cancel_charge(page, memcg, false);
+		put_page(page);
+		return VM_FAULT_RETRY;
+	}
 	if (!pte_none(*vmf->pte))
 		goto release;
 
@@ -3048,8 +3070,9 @@ static int pte_alloc_one_map(struct vm_fault *vmf)
 	 * pte_none() under vmf->ptl protection when we return to
 	 * alloc_set_pte().
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf))
+		return VM_FAULT_RETRY;
+
 	return 0;
 }
 
-- 
2.7.4


* [PATCH v2 03/20] mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 01/20] mm: Dont assume page-table invariance during faults Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 02/20] mm: Prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 04/20] mm: VMA sequence count Laurent Dufour
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

When handling a page fault without holding the mmap_sem, the fetch of the
pte lock pointer and the locking have to be done while ensuring that the
VMA is not modified behind our back.

So move the fetch and locking operations into a dedicated function.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 3ed1b00ca841..fa598889eb0e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2269,6 +2269,13 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 }
 
+static bool pte_spinlock(struct vm_fault *vmf)
+{
+	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+	spin_lock(vmf->ptl);
+	return true;
+}
+
 static bool pte_map_lock(struct vm_fault *vmf)
 {
 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
@@ -3543,8 +3550,8 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * validation through pte_unmap_same(). It's of NUMA type but
 	 * the pfn may be screwed if the read is non atomic.
 	 */
-	vmf->ptl = pte_lockptr(vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (!pte_spinlock(vmf))
+		return VM_FAULT_RETRY;
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
@@ -3736,8 +3743,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
 	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
 		return do_numa_page(vmf);
 
-	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (!pte_spinlock(vmf))
+		return VM_FAULT_RETRY;
 	entry = vmf->orig_pte;
 	if (unlikely(!pte_same(*vmf->pte, entry)))
 		goto unlock;
-- 
2.7.4


* [PATCH v2 04/20] mm: VMA sequence count
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (2 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 03/20] mm: Introduce pte_spinlock " Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 05/20] mm: Protect VMA modifications using " Laurent Dufour
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts such that we can easily test if a VMA is changed.

The unmap_page_range() one allows us to make assumptions about
page-tables; when we find the seqcount hasn't changed we can assume
page-tables are still valid.

The flip side is that we cannot distinguish between a vma_adjust() and
the unmap_page_range() -- where with the former we could have
re-checked the vma bounds against the address.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Port to 4.12 kernel]
[Fix lock dependency between mapping->i_mmap_rwsem and vma->vm_sequence]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm_types.h |  1 +
 mm/memory.c              |  2 ++
 mm/mmap.c                | 21 ++++++++++++++++++---
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3cadee0a3508..642aad26b32f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -342,6 +342,7 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	seqcount_t vm_sequence;
 } __randomize_layout;
 
 struct core_thread {
diff --git a/mm/memory.c b/mm/memory.c
index fa598889eb0e..4a2736fe2ef6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1408,6 +1408,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 	unsigned long next;
 
 	BUG_ON(addr >= end);
+	write_seqcount_begin(&vma->vm_sequence);
 	tlb_start_vma(tlb, vma);
 	pgd = pgd_offset(vma->vm_mm, addr);
 	do {
@@ -1417,6 +1418,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
 	} while (pgd++, addr = next, addr != end);
 	tlb_end_vma(tlb, vma);
+	write_seqcount_end(&vma->vm_sequence);
 }
 
 
diff --git a/mm/mmap.c b/mm/mmap.c
index f19efcf75418..140b22136cb7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -557,6 +557,8 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
 	else
 		mm->highest_vm_end = vm_end_gap(vma);
 
+	seqcount_init(&vma->vm_sequence);
+
 	/*
 	 * vma->vm_prev wasn't known when we followed the rbtree to find the
 	 * correct insertion point for that vma. As a result, we could not
@@ -798,6 +800,11 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		}
 	}
 
+	write_seqcount_begin(&vma->vm_sequence);
+	if (next && next != vma)
+		write_seqcount_begin_nested(&next->vm_sequence,
+					    SINGLE_DEPTH_NESTING);
+
 	anon_vma = vma->anon_vma;
 	if (!anon_vma && adjust_next)
 		anon_vma = next->anon_vma;
@@ -902,6 +909,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		mm->map_count--;
 		mpol_put(vma_policy(next));
 		kmem_cache_free(vm_area_cachep, next);
+		write_seqcount_end(&next->vm_sequence);
 		/*
 		 * In mprotect's case 6 (see comments on vma_merge),
 		 * we must remove another next too. It would clutter
@@ -931,11 +939,14 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		if (remove_next == 2) {
 			remove_next = 1;
 			end = next->vm_end;
+			write_seqcount_end(&vma->vm_sequence);
 			goto again;
-		}
-		else if (next)
+		} else if (next) {
+			if (next != vma)
+				write_seqcount_begin_nested(&next->vm_sequence,
+							    SINGLE_DEPTH_NESTING);
 			vma_gap_update(next);
-		else {
+		} else {
 			/*
 			 * If remove_next == 2 we obviously can't
 			 * reach this path.
@@ -961,6 +972,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	if (insert && file)
 		uprobe_mmap(insert);
 
+	if (next && next != vma)
+		write_seqcount_end(&next->vm_sequence);
+	write_seqcount_end(&vma->vm_sequence);
+
 	validate_mm(mm);
 
 	return 0;
-- 
2.7.4


* [PATCH v2 05/20] mm: Protect VMA modifications using VMA sequence count
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (3 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 04/20] mm: VMA sequence count Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 06/20] mm: RCU free VMAs Laurent Dufour
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

The VMA sequence count has been introduced to allow fast detection of
VMA modification when running a page fault handler without holding
the mmap_sem.

This patch provides protection against the VMA modifications done in:
	- madvise()
	- mremap()
	- mpol_rebind_policy()
	- vma_replace_policy()
	- change_prot_numa()
	- mlock(), munlock()
	- mprotect()
	- mmap_region()
	- collapse_huge_page()
	- userfaultfd registering services

In addition, VMA fields which will be read during the speculative fault
path need to be written using WRITE_ONCE to prevent the writes from being
split and intermediate values from being seen by other CPUs.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 fs/proc/task_mmu.c |  5 ++++-
 fs/userfaultfd.c   | 17 +++++++++++++----
 mm/khugepaged.c    |  3 +++
 mm/madvise.c       |  6 +++++-
 mm/mempolicy.c     | 51 ++++++++++++++++++++++++++++++++++-----------------
 mm/mlock.c         | 13 ++++++++-----
 mm/mmap.c          | 17 ++++++++++-------
 mm/mprotect.c      |  4 +++-
 mm/mremap.c        |  7 +++++++
 9 files changed, 87 insertions(+), 36 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index fe8f3265e877..e682179edaae 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1067,8 +1067,11 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 					goto out_mm;
 				}
 				for (vma = mm->mmap; vma; vma = vma->vm_next) {
-					vma->vm_flags &= ~VM_SOFTDIRTY;
+					write_seqcount_begin(&vma->vm_sequence);
+					WRITE_ONCE(vma->vm_flags,
+						   vma->vm_flags & ~VM_SOFTDIRTY);
 					vma_set_page_prot(vma);
+					write_seqcount_end(&vma->vm_sequence);
 				}
 				downgrade_write(&mm->mmap_sem);
 				break;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index b0d5897bc4e6..77b1e025c88e 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -612,8 +612,11 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 
 	octx = vma->vm_userfaultfd_ctx.ctx;
 	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
+		write_seqcount_begin(&vma->vm_sequence);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
-		vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING);
+		WRITE_ONCE(vma->vm_flags,
+			   vma->vm_flags & ~(VM_UFFD_WP | VM_UFFD_MISSING));
+		write_seqcount_end(&vma->vm_sequence);
 		return 0;
 	}
 
@@ -838,8 +841,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 			vma = prev;
 		else
 			prev = vma;
-		vma->vm_flags = new_flags;
+		write_seqcount_begin(&vma->vm_sequence);
+		WRITE_ONCE(vma->vm_flags, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+		write_seqcount_end(&vma->vm_sequence);
 	}
 	up_write(&mm->mmap_sem);
 	mmput(mm);
@@ -1357,8 +1362,10 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
-		vma->vm_flags = new_flags;
+		write_seqcount_begin(&vma->vm_sequence);
+		WRITE_ONCE(vma->vm_flags, new_flags);
 		vma->vm_userfaultfd_ctx.ctx = ctx;
+		write_seqcount_end(&vma->vm_sequence);
 
 	skip:
 		prev = vma;
@@ -1515,8 +1522,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
-		vma->vm_flags = new_flags;
+		write_seqcount_begin(&vma->vm_sequence);
+		WRITE_ONCE(vma->vm_flags, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+		write_seqcount_end(&vma->vm_sequence);
 
 	skip:
 		prev = vma;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c01f177a1120..56dd994c05d0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1005,6 +1005,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	if (mm_find_pmd(mm, address) != pmd)
 		goto out;
 
+	write_seqcount_begin(&vma->vm_sequence);
 	anon_vma_lock_write(vma->anon_vma);
 
 	pte = pte_offset_map(pmd, address);
@@ -1040,6 +1041,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
 		spin_unlock(pmd_ptl);
 		anon_vma_unlock_write(vma->anon_vma);
+		write_seqcount_end(&vma->vm_sequence);
 		result = SCAN_FAIL;
 		goto out;
 	}
@@ -1074,6 +1076,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);
+	write_seqcount_end(&vma->vm_sequence);
 
 	*hpage = NULL;
 
diff --git a/mm/madvise.c b/mm/madvise.c
index 47d8d8a25eae..8fc4f73c8ac5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -172,7 +172,9 @@ static long madvise_behavior(struct vm_area_struct *vma,
 	/*
 	 * vm_flags is protected by the mmap_sem held in write mode.
 	 */
-	vma->vm_flags = new_flags;
+	write_seqcount_begin(&vma->vm_sequence);
+	WRITE_ONCE(vma->vm_flags, new_flags);
+	write_seqcount_end(&vma->vm_sequence);
 out:
 	return error;
 }
@@ -440,9 +442,11 @@ static void madvise_free_page_range(struct mmu_gather *tlb,
 		.private = tlb,
 	};
 
+	write_seqcount_begin(&vma->vm_sequence);
 	tlb_start_vma(tlb, vma);
 	walk_page_range(addr, end, &free_walk);
 	tlb_end_vma(tlb, vma);
+	write_seqcount_end(&vma->vm_sequence);
 }
 
 static int madvise_free_single_vma(struct vm_area_struct *vma,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d911fa5cb2a7..8e2f67af8e05 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -378,8 +378,11 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 	struct vm_area_struct *vma;
 
 	down_write(&mm->mmap_sem);
-	for (vma = mm->mmap; vma; vma = vma->vm_next)
+	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+		write_seqcount_begin(&vma->vm_sequence);
 		mpol_rebind_policy(vma->vm_policy, new);
+		write_seqcount_end(&vma->vm_sequence);
+	}
 	up_write(&mm->mmap_sem);
 }
 
@@ -537,9 +540,11 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 {
 	int nr_updated;
 
+	write_seqcount_begin(&vma->vm_sequence);
 	nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
 	if (nr_updated)
 		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
+	write_seqcount_end(&vma->vm_sequence);
 
 	return nr_updated;
 }
@@ -640,6 +645,7 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 	if (IS_ERR(new))
 		return PTR_ERR(new);
 
+	write_seqcount_begin(&vma->vm_sequence);
 	if (vma->vm_ops && vma->vm_ops->set_policy) {
 		err = vma->vm_ops->set_policy(vma, new);
 		if (err)
@@ -647,11 +653,17 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 	}
 
 	old = vma->vm_policy;
-	vma->vm_policy = new; /* protected by mmap_sem */
+	/*
+	 * The speculative page fault handler accesses this field without
+	 * holding the mmap_sem.
+	 */
+	WRITE_ONCE(vma->vm_policy,  new);
+	write_seqcount_end(&vma->vm_sequence);
 	mpol_put(old);
 
 	return 0;
  err_out:
+	write_seqcount_end(&vma->vm_sequence);
 	mpol_put(new);
 	return err;
 }
@@ -1505,23 +1517,28 @@ COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len,
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 						unsigned long addr)
 {
-	struct mempolicy *pol = NULL;
+	struct mempolicy *pol;
 
-	if (vma) {
-		if (vma->vm_ops && vma->vm_ops->get_policy) {
-			pol = vma->vm_ops->get_policy(vma, addr);
-		} else if (vma->vm_policy) {
-			pol = vma->vm_policy;
+	if (!vma)
+		return NULL;
 
-			/*
-			 * shmem_alloc_page() passes MPOL_F_SHARED policy with
-			 * a pseudo vma whose vma->vm_ops=NULL. Take a reference
-			 * count on these policies which will be dropped by
-			 * mpol_cond_put() later
-			 */
-			if (mpol_needs_cond_ref(pol))
-				mpol_get(pol);
-		}
+	if (vma->vm_ops && vma->vm_ops->get_policy)
+		return vma->vm_ops->get_policy(vma, addr);
+
+	/*
+	 * This could be called without holding the mmap_sem in the
+	 * speculative page fault handler's path.
+	 */
+	pol = READ_ONCE(vma->vm_policy);
+	if (pol) {
+		/*
+		 * shmem_alloc_page() passes MPOL_F_SHARED policy with
+		 * a pseudo vma whose vma->vm_ops=NULL. Take a reference
+		 * count on these policies which will be dropped by
+		 * mpol_cond_put() later
+		 */
+		if (mpol_needs_cond_ref(pol))
+			mpol_get(pol);
 	}
 
 	return pol;
diff --git a/mm/mlock.c b/mm/mlock.c
index b562b5523a65..23d16dbff7fb 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -438,7 +438,9 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
 void munlock_vma_pages_range(struct vm_area_struct *vma,
 			     unsigned long start, unsigned long end)
 {
-	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
+	write_seqcount_begin(&vma->vm_sequence);
+	WRITE_ONCE(vma->vm_flags, vma->vm_flags & VM_LOCKED_CLEAR_MASK);
+	write_seqcount_end(&vma->vm_sequence);
 
 	while (start < end) {
 		struct page *page;
@@ -563,10 +565,11 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
 	 * It's okay if try_to_unmap_one unmaps a page just after we
 	 * set VM_LOCKED, populate_vma_page_range will bring it back.
 	 */
-
-	if (lock)
-		vma->vm_flags = newflags;
-	else
+	if (lock) {
+		write_seqcount_begin(&vma->vm_sequence);
+		WRITE_ONCE(vma->vm_flags, newflags);
+		write_seqcount_end(&vma->vm_sequence);
+	} else
 		munlock_vma_pages_range(vma, start, end);
 
 out:
diff --git a/mm/mmap.c b/mm/mmap.c
index 140b22136cb7..b480043e38fb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -825,17 +825,18 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	}
 
 	if (start != vma->vm_start) {
-		vma->vm_start = start;
+		WRITE_ONCE(vma->vm_start, start);
 		start_changed = true;
 	}
 	if (end != vma->vm_end) {
-		vma->vm_end = end;
+		WRITE_ONCE(vma->vm_end, end);
 		end_changed = true;
 	}
-	vma->vm_pgoff = pgoff;
+	WRITE_ONCE(vma->vm_pgoff, pgoff);
 	if (adjust_next) {
-		next->vm_start += adjust_next << PAGE_SHIFT;
-		next->vm_pgoff += adjust_next;
+		WRITE_ONCE(next->vm_start,
+			   next->vm_start + (adjust_next << PAGE_SHIFT));
+		WRITE_ONCE(next->vm_pgoff, next->vm_pgoff + adjust_next);
 	}
 
 	if (root) {
@@ -1734,6 +1735,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 out:
 	perf_event_mmap(vma);
 
+	write_seqcount_begin(&vma->vm_sequence);
 	vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
 	if (vm_flags & VM_LOCKED) {
 		if (!((vm_flags & VM_SPECIAL) || is_vm_hugetlb_page(vma) ||
@@ -1756,6 +1758,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	vma->vm_flags |= VM_SOFTDIRTY;
 
 	vma_set_page_prot(vma);
+	write_seqcount_end(&vma->vm_sequence);
 
 	return addr;
 
@@ -2384,8 +2387,8 @@ int expand_downwards(struct vm_area_struct *vma,
 					mm->locked_vm += grow;
 				vm_stat_account(mm, vma->vm_flags, grow);
 				anon_vma_interval_tree_pre_update_vma(vma);
-				vma->vm_start = address;
-				vma->vm_pgoff -= grow;
+				WRITE_ONCE(vma->vm_start, address);
+				WRITE_ONCE(vma->vm_pgoff, vma->vm_pgoff - grow);
 				anon_vma_interval_tree_post_update_vma(vma);
 				vma_gap_update(vma);
 				spin_unlock(&mm->page_table_lock);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index bd0f409922cb..0def85982d6c 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -344,7 +344,8 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	 * vm_flags and vm_page_prot are protected by the mmap_sem
 	 * held in write mode.
 	 */
-	vma->vm_flags = newflags;
+	write_seqcount_begin(&vma->vm_sequence);
+	WRITE_ONCE(vma->vm_flags, newflags);
 	dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
 	vma_set_page_prot(vma);
 
@@ -359,6 +360,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 			(newflags & VM_WRITE)) {
 		populate_vma_page_range(vma, start, end, NULL);
 	}
+	write_seqcount_end(&vma->vm_sequence);
 
 	vm_stat_account(mm, oldflags, -nrpages);
 	vm_stat_account(mm, newflags, nrpages);
diff --git a/mm/mremap.c b/mm/mremap.c
index 3f23715d3c69..1abadea8ab84 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -301,6 +301,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 	if (!new_vma)
 		return -ENOMEM;
 
+	write_seqcount_begin(&vma->vm_sequence);
+	write_seqcount_begin_nested(&new_vma->vm_sequence,
+				    SINGLE_DEPTH_NESTING);
+
 	moved_len = move_page_tables(vma, old_addr, new_vma, new_addr, old_len,
 				     need_rmap_locks);
 	if (moved_len < old_len) {
@@ -317,6 +321,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		 */
 		move_page_tables(new_vma, new_addr, vma, old_addr, moved_len,
 				 true);
+		write_seqcount_end(&vma->vm_sequence);
 		vma = new_vma;
 		old_len = new_len;
 		old_addr = new_addr;
@@ -325,7 +330,9 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		mremap_userfaultfd_prep(new_vma, uf);
 		arch_remap(mm, old_addr, old_addr + old_len,
 			   new_addr, new_addr + new_len);
+		write_seqcount_end(&vma->vm_sequence);
 	}
+	write_seqcount_end(&new_vma->vm_sequence);
 
 	/* Conceal VM_ACCOUNT so old reservation is not undone */
 	if (vm_flags & VM_ACCOUNT) {
-- 
2.7.4


* [PATCH v2 06/20] mm: RCU free VMAs
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (4 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 05/20] mm: Protect VMA modifications using " Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 07/20] mm: Cache some VMA fields in the vm_fault structure Laurent Dufour
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Manage the VMAs with SRCU such that we can do a lockless VMA lookup.

We put the fput(vma->vm_file) in the SRCU callback, this keeps files
valid during speculative faults, this is possible due to the delayed
fput work by Al Viro -- do we need srcu_barrier() in unmount
someplace?

We guard the mm_rb tree with a seqlock (this could be a seqcount but
we'd have to disable preemption around the write side in order to make
the retry loop in __read_seqcount_begin() work) such that we can know
if the rb tree walk was correct. We cannot trust the result of a
lockless tree walk in the face of concurrent tree rotations; although
we can rely on the termination of such walks -- tree rotations
guarantee the end result is a tree again after all.

Furthermore, we rely on the WMB implied by the
write_seqlock/count_begin() to separate the VMA initialization and the
publishing stores, analogous to the RELEASE in rcu_assign_pointer().
We also rely on the RMB from read_seqretry() to separate the vma load
from further loads like the smp_read_barrier_depends() in regular
RCU.

We must not touch the vmacache while doing SRCU lookups as that is not
properly serialized against changes. We update gap information after
publishing the VMA, but A) we don't use that and B) the seqlock
read side would fix that anyhow.

We clear vma->vm_rb for nodes removed from the vma tree such that we
can easily detect such 'dead' nodes, we rely on the WMB from
write_sequnlock() to separate the tree removal and clearing the node.

Provide find_vma_srcu() which wraps the required magic.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Remove the warnings in description about the SRCU global lock which
 has been removed now]
[Rename vma_is_dead() to vma_has_changed() and move its adding to the next
 patch]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm_types.h |   2 +
 kernel/fork.c            |   1 +
 mm/init-mm.c             |   1 +
 mm/internal.h            |   5 +++
 mm/mmap.c                | 100 +++++++++++++++++++++++++++++++++++------------
 5 files changed, 83 insertions(+), 26 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 642aad26b32f..f3851b250fde 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -343,6 +343,7 @@ struct vm_area_struct {
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 	seqcount_t vm_sequence;
+	struct rcu_head vm_rcu_head;
 } __randomize_layout;
 
 struct core_thread {
@@ -360,6 +361,7 @@ struct kioctx_table;
 struct mm_struct {
 	struct vm_area_struct *mmap;		/* list of VMAs */
 	struct rb_root mm_rb;
+	seqlock_t mm_seq;
 	u32 vmacache_seqnum;                   /* per-thread vmacache */
 #ifdef CONFIG_MMU
 	unsigned long (*get_unmapped_area) (struct file *filp,
diff --git a/kernel/fork.c b/kernel/fork.c
index e075b7780421..f28aa54c668c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -791,6 +791,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	mm->mmap = NULL;
 	mm->mm_rb = RB_ROOT;
 	mm->vmacache_seqnum = 0;
+	seqlock_init(&mm->mm_seq);
 	atomic_set(&mm->mm_users, 1);
 	atomic_set(&mm->mm_count, 1);
 	init_rwsem(&mm->mmap_sem);
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 975e49f00f34..2b1fa061684f 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -16,6 +16,7 @@
 
 struct mm_struct init_mm = {
 	.mm_rb		= RB_ROOT,
+	.mm_seq		= __SEQLOCK_UNLOCKED(init_mm.mm_seq),
 	.pgd		= swapper_pg_dir,
 	.mm_users	= ATOMIC_INIT(2),
 	.mm_count	= ATOMIC_INIT(1),
diff --git a/mm/internal.h b/mm/internal.h
index 4ef49fc55e58..736540f15936 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,6 +40,11 @@ void page_writeback_init(void);
 
 int do_swap_page(struct vm_fault *vmf);
 
+extern struct srcu_struct vma_srcu;
+
+extern struct vm_area_struct *find_vma_srcu(struct mm_struct *mm,
+					    unsigned long addr);
+
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index b480043e38fb..34a7f1bdffe4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -159,6 +159,23 @@ void unlink_file_vma(struct vm_area_struct *vma)
 	}
 }
 
+DEFINE_SRCU(vma_srcu);
+
+static void __free_vma(struct rcu_head *head)
+{
+	struct vm_area_struct *vma =
+		container_of(head, struct vm_area_struct, vm_rcu_head);
+
+	if (vma->vm_file)
+		fput(vma->vm_file);
+	kmem_cache_free(vm_area_cachep, vma);
+}
+
+static void free_vma(struct vm_area_struct *vma)
+{
+	call_srcu(&vma_srcu, &vma->vm_rcu_head, __free_vma);
+}
+
 /*
  * Close a vm structure and free it, returning the next.
  */
@@ -169,10 +186,8 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
 	might_sleep();
 	if (vma->vm_ops && vma->vm_ops->close)
 		vma->vm_ops->close(vma);
-	if (vma->vm_file)
-		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	kmem_cache_free(vm_area_cachep, vma);
+	free_vma(vma);
 	return next;
 }
 
@@ -410,26 +425,37 @@ static void vma_gap_update(struct vm_area_struct *vma)
 }
 
 static inline void vma_rb_insert(struct vm_area_struct *vma,
-				 struct rb_root *root)
+				 struct mm_struct *mm)
 {
+	struct rb_root *root = &mm->mm_rb;
+
 	/* All rb_subtree_gap values must be consistent prior to insertion */
 	validate_mm_rb(root, NULL);
 
 	rb_insert_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
 }
 
-static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
+static void __vma_rb_erase(struct vm_area_struct *vma, struct mm_struct *mm)
 {
+	struct rb_root *root = &mm->mm_rb;
 	/*
 	 * Note rb_erase_augmented is a fairly large inline function,
 	 * so make sure we instantiate it only once with our desired
 	 * augmented rbtree callbacks.
 	 */
+	write_seqlock(&mm->mm_seq);
 	rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
+	write_sequnlock(&mm->mm_seq); /* wmb */
+
+	/*
+	 * Ensure the removal is complete before clearing the node.
+	 * Matched by vma_has_changed()/handle_speculative_fault().
+	 */
+	RB_CLEAR_NODE(&vma->vm_rb);
 }
 
 static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
-						struct rb_root *root,
+						struct mm_struct *mm,
 						struct vm_area_struct *ignore)
 {
 	/*
@@ -437,21 +463,21 @@ static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
 	 * with the possible exception of the "next" vma being erased if
 	 * next->vm_start was reduced.
 	 */
-	validate_mm_rb(root, ignore);
+	validate_mm_rb(&mm->mm_rb, ignore);
 
-	__vma_rb_erase(vma, root);
+	__vma_rb_erase(vma, mm);
 }
 
 static __always_inline void vma_rb_erase(struct vm_area_struct *vma,
-					 struct rb_root *root)
+					 struct mm_struct *mm)
 {
 	/*
 	 * All rb_subtree_gap values must be consistent prior to erase,
 	 * with the possible exception of the vma being erased.
 	 */
-	validate_mm_rb(root, vma);
+	validate_mm_rb(&mm->mm_rb, vma);
 
-	__vma_rb_erase(vma, root);
+	__vma_rb_erase(vma, mm);
 }
 
 /*
@@ -568,10 +594,12 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * immediately update the gap to the correct value. Finally we
 	 * rebalance the rbtree after all augmented values have been set.
 	 */
+	write_seqlock(&mm->mm_seq);
 	rb_link_node(&vma->vm_rb, rb_parent, rb_link);
 	vma->rb_subtree_gap = 0;
 	vma_gap_update(vma);
-	vma_rb_insert(vma, &mm->mm_rb);
+	vma_rb_insert(vma, mm);
+	write_sequnlock(&mm->mm_seq);
 }
 
 static void __vma_link_file(struct vm_area_struct *vma)
@@ -647,7 +675,7 @@ static __always_inline void __vma_unlink_common(struct mm_struct *mm,
 {
 	struct vm_area_struct *next;
 
-	vma_rb_erase_ignore(vma, &mm->mm_rb, ignore);
+	vma_rb_erase_ignore(vma, mm, ignore);
 	next = vma->vm_next;
 	if (has_prev)
 		prev->vm_next = next;
@@ -901,15 +929,13 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	}
 
 	if (remove_next) {
-		if (file) {
+		if (file)
 			uprobe_munmap(next, next->vm_start, next->vm_end);
-			fput(file);
-		}
 		if (next->anon_vma)
 			anon_vma_merge(vma, next);
 		mm->map_count--;
 		mpol_put(vma_policy(next));
-		kmem_cache_free(vm_area_cachep, next);
+		free_vma(next);
 		write_seqcount_end(&next->vm_sequence);
 		/*
 		 * In mprotect's case 6 (see comments on vma_merge),
@@ -2130,15 +2156,10 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 EXPORT_SYMBOL(get_unmapped_area);
 
 /* Look up the first VMA which satisfies  addr < vm_end,  NULL if none. */
-struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+static struct vm_area_struct *__find_vma(struct mm_struct *mm, unsigned long addr)
 {
 	struct rb_node *rb_node;
-	struct vm_area_struct *vma;
-
-	/* Check the cache first. */
-	vma = vmacache_find(mm, addr);
-	if (likely(vma))
-		return vma;
+	struct vm_area_struct *vma = NULL;
 
 	rb_node = mm->mm_rb.rb_node;
 
@@ -2156,13 +2177,40 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
 			rb_node = rb_node->rb_right;
 	}
 
+	return vma;
+}
+
+struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+{
+	struct vm_area_struct *vma;
+
+	/* Check the cache first. */
+	vma = vmacache_find(mm, addr);
+	if (likely(vma))
+		return vma;
+
+	vma = __find_vma(mm, addr);
 	if (vma)
 		vmacache_update(addr, vma);
 	return vma;
 }
-
 EXPORT_SYMBOL(find_vma);
 
+struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long addr)
+{
+	struct vm_area_struct *vma;
+	unsigned int seq;
+
+	WARN_ON_ONCE(!srcu_read_lock_held(&vma_srcu));
+
+	do {
+		seq = read_seqbegin(&mm->mm_seq);
+		vma = __find_vma(mm, addr);
+	} while (read_seqretry(&mm->mm_seq, seq));
+
+	return vma;
+}
+
 /*
  * Same as find_vma, but also return a pointer to the previous VMA in *pprev.
  */
@@ -2530,7 +2578,7 @@ detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
 	insertion_point = (prev ? &prev->vm_next : &mm->mmap);
 	vma->vm_prev = NULL;
 	do {
-		vma_rb_erase(vma, &mm->mm_rb);
+		vma_rb_erase(vma, mm);
 		mm->map_count--;
 		tail_vma = vma;
 		vma = vma->vm_next;
-- 
2.7.4


* [PATCH v2 07/20] mm: Cache some VMA fields in the vm_fault structure
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (5 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 06/20] mm: RCU free VMAs Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 08/20] mm: Protect SPF handler against anon_vma changes Laurent Dufour
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

When handling a speculative page fault, the vma->vm_flags and
vma->vm_page_prot fields are read once the page table lock is released. So
there is no longer any guarantee that these fields have not changed behind
our back. They are therefore saved in the vm_fault structure before the VMA
is checked for changes.

This patch also sets the fields in hugetlb_no_page() and
__collapse_huge_page_swapin() even if they are not needed by the callee.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h |  6 ++++++
 mm/hugetlb.c       |  2 ++
 mm/khugepaged.c    |  2 ++
 mm/memory.c        | 38 ++++++++++++++++++++------------------
 4 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8763ec96dc78..43d313ff3a5b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -345,6 +345,12 @@ struct vm_fault {
 					 * page table to avoid allocation from
 					 * atomic context.
 					 */
+	/*
+	 * These entries are required when handling speculative page fault.
+	 * This way the page handling is done using consistent field values.
+	 */
+	unsigned long vma_flags;
+	pgprot_t vma_page_prot;
 };
 
 /* page entry size for vm->huge_fault() */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 31e207cb399b..55201b98133e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3676,6 +3676,8 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				.vma = vma,
 				.address = address,
 				.flags = flags,
+				.vma_flags = vma->vm_flags,
+				.vma_page_prot = vma->vm_page_prot,
 				/*
 				 * Hard to debug if it ends up being
 				 * used by a callee that assumes
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 56dd994c05d0..0525a0e74535 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -881,6 +881,8 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		.flags = FAULT_FLAG_ALLOW_RETRY,
 		.pmd = pmd,
 		.pgoff = linear_page_index(vma, address),
+		.vma_flags = vma->vm_flags,
+		.vma_page_prot = vma->vm_page_prot,
 	};
 
 	/* we only decide to swapin, if there is enough young ptes */
diff --git a/mm/memory.c b/mm/memory.c
index 4a2736fe2ef6..da3bd07bb052 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2417,7 +2417,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		 * Don't let another task, with possibly unlocked vma,
 		 * keep the mlocked page.
 		 */
-		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
+		if (page_copied && (vmf->vma_flags & VM_LOCKED)) {
 			lock_page(old_page);	/* LRU manipulation */
 			if (PageMlocked(old_page))
 				munlock_vma_page(old_page);
@@ -2451,7 +2451,7 @@ static int wp_page_copy(struct vm_fault *vmf)
  */
 int finish_mkwrite_fault(struct vm_fault *vmf)
 {
-	WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
+	WARN_ON_ONCE(!(vmf->vma_flags & VM_SHARED));
 	if (!pte_map_lock(vmf))
 		return VM_FAULT_RETRY;
 	/*
@@ -2553,7 +2553,7 @@ static int do_wp_page(struct vm_fault *vmf)
 		 * We should not cow pages in a shared writeable mapping.
 		 * Just mark the pages writable and/or call ops->pfn_mkwrite.
 		 */
-		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+		if ((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
 				     (VM_WRITE|VM_SHARED))
 			return wp_pfn_shared(vmf);
 
@@ -2600,7 +2600,7 @@ static int do_wp_page(struct vm_fault *vmf)
 			return VM_FAULT_WRITE;
 		}
 		unlock_page(vmf->page);
-	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+	} else if (unlikely((vmf->vma_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
 	}
@@ -2817,7 +2817,7 @@ int do_swap_page(struct vm_fault *vmf)
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
-	pte = mk_pte(page, vma->vm_page_prot);
+	pte = mk_pte(page, vmf->vma_page_prot);
 	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
 		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 		vmf->flags &= ~FAULT_FLAG_WRITE;
@@ -2841,7 +2841,7 @@ int do_swap_page(struct vm_fault *vmf)
 
 	swap_free(entry);
 	if (mem_cgroup_swap_full(page) ||
-	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
+	    (vmf->vma_flags & VM_LOCKED) || PageMlocked(page))
 		try_to_free_swap(page);
 	unlock_page(page);
 	if (page != swapcache) {
@@ -2897,7 +2897,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	pte_t entry;
 
 	/* File mapping without ->vm_ops ? */
-	if (vma->vm_flags & VM_SHARED)
+	if (vmf->vma_flags & VM_SHARED)
 		return VM_FAULT_SIGBUS;
 
 	/*
@@ -2921,7 +2921,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm)) {
 		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
-						vma->vm_page_prot));
+						vmf->vma_page_prot));
 		if (!pte_map_lock(vmf))
 			return VM_FAULT_RETRY;
 		if (!pte_none(*vmf->pte))
@@ -2951,8 +2951,8 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	 */
 	__SetPageUptodate(page);
 
-	entry = mk_pte(page, vma->vm_page_prot);
-	if (vma->vm_flags & VM_WRITE)
+	entry = mk_pte(page, vmf->vma_page_prot);
+	if (vmf->vma_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
 	if (!pte_map_lock(vmf)) {
@@ -3144,7 +3144,7 @@ static int do_set_pmd(struct vm_fault *vmf, struct page *page)
 	for (i = 0; i < HPAGE_PMD_NR; i++)
 		flush_icache_page(vma, page + i);
 
-	entry = mk_huge_pmd(page, vma->vm_page_prot);
+	entry = mk_huge_pmd(page, vmf->vma_page_prot);
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
@@ -3218,11 +3218,11 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		return VM_FAULT_NOPAGE;
 
 	flush_icache_page(vma, page);
-	entry = mk_pte(page, vma->vm_page_prot);
+	entry = mk_pte(page, vmf->vma_page_prot);
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	/* copy-on-write page */
-	if (write && !(vma->vm_flags & VM_SHARED)) {
+	if (write && !(vmf->vma_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
@@ -3261,7 +3261,7 @@ int finish_fault(struct vm_fault *vmf)
 
 	/* Did we COW the page? */
 	if ((vmf->flags & FAULT_FLAG_WRITE) &&
-	    !(vmf->vma->vm_flags & VM_SHARED))
+	    !(vmf->vma_flags & VM_SHARED))
 		page = vmf->cow_page;
 	else
 		page = vmf->page;
@@ -3507,7 +3507,7 @@ static int do_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_SIGBUS;
 	else if (!(vmf->flags & FAULT_FLAG_WRITE))
 		ret = do_read_fault(vmf);
-	else if (!(vma->vm_flags & VM_SHARED))
+	else if (!(vmf->vma_flags & VM_SHARED))
 		ret = do_cow_fault(vmf);
 	else
 		ret = do_shared_fault(vmf);
@@ -3564,7 +3564,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * accessible ptes, some can allow access by kernel mode.
 	 */
 	pte = ptep_modify_prot_start(vma->vm_mm, vmf->address, vmf->pte);
-	pte = pte_modify(pte, vma->vm_page_prot);
+	pte = pte_modify(pte, vmf->vma_page_prot);
 	pte = pte_mkyoung(pte);
 	if (was_writable)
 		pte = pte_mkwrite(pte);
@@ -3598,7 +3598,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	 * Flag if the page is shared between multiple address spaces. This
 	 * is later used when determining whether to group tasks together
 	 */
-	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+	if (page_mapcount(page) > 1 && (vmf->vma_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
 	last_cpupid = page_cpupid_last(page);
@@ -3642,7 +3642,7 @@ static int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
 		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
 
 	/* COW handled on pte level: split pmd */
-	VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);
+	VM_BUG_ON_VMA(vmf->vma_flags & VM_SHARED, vmf->vma);
 	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);
 
 	return VM_FAULT_FALLBACK;
@@ -3789,6 +3789,8 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		.flags = flags,
 		.pgoff = linear_page_index(vma, address),
 		.gfp_mask = __get_fault_gfp_mask(vma),
+		.vma_flags = vma->vm_flags,
+		.vma_page_prot = vma->vm_page_prot,
 	};
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 08/20] mm: Protect SPF handler against anon_vma changes
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (6 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 07/20] mm: Cache some VMA fields in the vm_fault structure Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 09/20] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

The speculative page fault handler must be protected against anon_vma
changes. This is because page_add_new_anon_rmap() is called during the
speculative path.

In addition, don't try a speculative page fault if the VMA doesn't have
an anon_vma structure allocated, because its allocation must be protected
by the mmap_sem.

In __vma_adjust(), when importer->anon_vma is set, there is no need to
protect against speculative page faults since the speculative page fault
is aborted if vma->anon_vma is not set.

When page_add_new_anon_rmap() is called, vma->anon_vma is necessarily
valid since it was checked when locking the pte, and the anon_vma is only
removed once the pte is unlocked. So even if the speculative page fault
handler runs concurrently with do_munmap(), the pte is locked in
unmap_region() - through unmap_vmas() - before the anon_vma is unlinked.
Since the vma sequence counter is updated in unmap_page_range() before the
pte is locked, and again in free_pgtables(), any such change is detected
when the pte is locked.
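
To illustrate the pairing, here is a minimal, hypothetical read-side
sketch (not part of this patch); it assumes the per-VMA vm_sequence
seqcount introduced earlier in this series and only uses the stock
raw_read_seqcount()/read_seqcount_retry() helpers:

	#include <linux/mm_types.h>
	#include <linux/seqlock.h>

	/* Illustration only: take a snapshot of the VMA state and later
	 * detect whether unlink_anon_vmas()/free_pgtables() ran meanwhile. */
	static bool spf_vma_snapshot(struct vm_area_struct *vma, unsigned int *seq)
	{
		*seq = raw_read_seqcount(&vma->vm_sequence);
		return !(*seq & 1);	/* odd means a writer is in progress */
	}

	static bool spf_vma_unchanged(struct vm_area_struct *vma, unsigned int seq)
	{
		return !read_seqcount_retry(&vma->vm_sequence, seq);
	}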

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index da3bd07bb052..68e4fdcce692 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -615,7 +615,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * Hide vma from rmap and truncate_pagecache before freeing
 		 * pgtables
 		 */
+		write_seqcount_begin(&vma->vm_sequence);
 		unlink_anon_vmas(vma);
+		write_seqcount_end(&vma->vm_sequence);
 		unlink_file_vma(vma);
 
 		if (is_vm_hugetlb_page(vma)) {
@@ -629,7 +631,9 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
+				write_seqcount_begin(&vma->vm_sequence);
 				unlink_anon_vmas(vma);
+				write_seqcount_end(&vma->vm_sequence);
 				unlink_file_vma(vma);
 			}
 			free_pgd_range(tlb, addr, vma->vm_end,
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 09/20] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (7 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 08/20] mm: Protect SPF handler against anon_vma changes Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 10/20] mm: Introduce __lru_cache_add_active_or_unevictable Laurent Dufour
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

migrate_misplaced_page() is only called during page fault handling, so
it's better to pass a pointer to the struct vm_fault instead of the vma.

This way, the vma->vm_flags value saved in the vm_fault structure can be
used on the speculative page fault path.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/migrate.h | 4 ++--
 mm/memory.c             | 2 +-
 mm/migrate.c            | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3e0d405dc842..65357105cbab 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -108,14 +108,14 @@ static inline void __ClearPageMovable(struct page *page)
 #ifdef CONFIG_NUMA_BALANCING
 extern bool pmd_trans_migrating(pmd_t pmd);
 extern int migrate_misplaced_page(struct page *page,
-				  struct vm_area_struct *vma, int node);
+				  struct vm_fault *vmf, int node);
 #else
 static inline bool pmd_trans_migrating(pmd_t pmd)
 {
 	return false;
 }
 static inline int migrate_misplaced_page(struct page *page,
-					 struct vm_area_struct *vma, int node)
+					 struct vm_fault *vmf, int node)
 {
 	return -EAGAIN; /* can't migrate now */
 }
diff --git a/mm/memory.c b/mm/memory.c
index 68e4fdcce692..53528eeee2b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3616,7 +3616,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	}
 
 	/* Migrate to the requested node */
-	migrated = migrate_misplaced_page(page, vma, target_nid);
+	migrated = migrate_misplaced_page(page, vmf, target_nid);
 	if (migrated) {
 		page_nid = target_nid;
 		flags |= TNF_MIGRATED;
diff --git a/mm/migrate.c b/mm/migrate.c
index d68a41da6abb..354f74f7dad3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1847,7 +1847,7 @@ bool pmd_trans_migrating(pmd_t pmd)
  * node. Caller is expected to have an elevated reference count on
  * the page that will be dropped by this function before returning.
  */
-int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+int migrate_misplaced_page(struct page *page, struct vm_fault *vmf,
 			   int node)
 {
 	pg_data_t *pgdat = NODE_DATA(node);
@@ -1860,7 +1860,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	 * with execute permissions as they are probably shared libraries.
 	 */
 	if (page_mapcount(page) != 1 && page_is_file_cache(page) &&
-	    (vma->vm_flags & VM_EXEC))
+	    (vmf->vma_flags & VM_EXEC))
 		goto out;
 
 	/*
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 10/20] mm: Introduce __lru_cache_add_active_or_unevictable
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (8 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 09/20] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 11/20] mm: Introduce __maybe_mkwrite() Laurent Dufour
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

The speculative page fault handler, which runs without holding the
mmap_sem, calls lru_cache_add_active_or_unevictable(), but the vma's
vm_flags value is not guaranteed to remain constant.
Introduce __lru_cache_add_active_or_unevictable(), which takes the
vm_flags value as a parameter instead of the vma pointer.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/swap.h | 11 +++++++++--
 mm/memory.c          |  8 ++++----
 mm/swap.c            | 12 ++++++------
 3 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index d83d28e53e62..fdea932fe10f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -285,8 +285,15 @@ extern void swap_setup(void);
 
 extern void add_page_to_unevictable_list(struct page *page);
 
-extern void lru_cache_add_active_or_unevictable(struct page *page,
-						struct vm_area_struct *vma);
+extern void __lru_cache_add_active_or_unevictable(struct page *page,
+						unsigned long vma_flags);
+
+static inline void lru_cache_add_active_or_unevictable(struct page *page,
+						struct vm_area_struct *vma)
+{
+	return __lru_cache_add_active_or_unevictable(page, vma->vm_flags);
+}
+
 
 /* linux/mm/vmscan.c */
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
diff --git a/mm/memory.c b/mm/memory.c
index 53528eeee2b3..c6b18cc87e90 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2370,7 +2370,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
 		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
 		/*
 		 * We call the notify macro here because, when using secondary
 		 * mmu page tables (such as kvm shadow page tables), we want the
@@ -2840,7 +2840,7 @@ int do_swap_page(struct vm_fault *vmf)
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	}
 
 	swap_free(entry);
@@ -2978,7 +2978,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
-	lru_cache_add_active_or_unevictable(page, vma);
+	__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
@@ -3230,7 +3230,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
 		page_add_file_rmap(page, false);
diff --git a/mm/swap.c b/mm/swap.c
index 60b1d2a75852..ece0826a205b 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -470,21 +470,21 @@ void add_page_to_unevictable_list(struct page *page)
 }
 
 /**
- * lru_cache_add_active_or_unevictable
- * @page:  the page to be added to LRU
- * @vma:   vma in which page is mapped for determining reclaimability
+ * __lru_cache_add_active_or_unevictable
+ * @page:	the page to be added to LRU
+ * @vma_flags:  flags of the vma the page is mapped in, to determine reclaimability
  *
  * Place @page on the active or unevictable LRU list, depending on its
  * evictability.  Note that if the page is not evictable, it goes
  * directly back onto it's zone's unevictable list, it does NOT use a
  * per cpu pagevec.
  */
-void lru_cache_add_active_or_unevictable(struct page *page,
-					 struct vm_area_struct *vma)
+void __lru_cache_add_active_or_unevictable(struct page *page,
+					   unsigned long vma_flags)
 {
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) {
+	if (likely((vma_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) {
 		SetPageActive(page);
 		lru_cache_add(page);
 		return;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 11/20] mm: Introduce __maybe_mkwrite()
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (9 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 10/20] mm: Introduce __lru_cache_add_active_or_unevictable Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 12/20] mm: Introduce __vm_normal_page() Laurent Dufour
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

The current maybe_mkwrite() is passed a pointer to the vma structure in
order to fetch the vm_flags field.

When dealing with the speculative page fault handler, it is better to rely
on the cached vm_flags value stored in the vm_fault structure.

This patch introduces a __maybe_mkwrite() helper which can be called with
the vm_flags value directly.

No functional change is expected for the other callers of maybe_mkwrite().

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm.h | 9 +++++++--
 mm/memory.c        | 6 +++---
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 43d313ff3a5b..0f4ddd72b172 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -668,13 +668,18 @@ void free_compound_page(struct page *page);
  * pte_mkwrite.  But get_user_pages can cause write faults for mappings
  * that do not have writing enabled, when used by access_process_vm.
  */
-static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t __maybe_mkwrite(pte_t pte, unsigned long vma_flags)
 {
-	if (likely(vma->vm_flags & VM_WRITE))
+	if (likely(vma_flags & VM_WRITE))
 		pte = pte_mkwrite(pte);
 	return pte;
 }
 
+static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+{
+	return __maybe_mkwrite(pte, vma->vm_flags);
+}
+
 int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		struct page *page);
 int finish_fault(struct vm_fault *vmf);
diff --git a/mm/memory.c b/mm/memory.c
index c6b18cc87e90..ad7b6372d302 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2269,7 +2269,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = pte_mkyoung(vmf->orig_pte);
-	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
 	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
 		update_mmu_cache(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2359,8 +2359,8 @@ static int wp_page_copy(struct vm_fault *vmf)
 			inc_mm_counter_fast(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
-		entry = mk_pte(new_page, vma->vm_page_prot);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = mk_pte(new_page, vmf->vma_page_prot);
+		entry = __maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
 		/*
 		 * Clear the pte entry and flush it first, before updating the
 		 * pte with the new entry. This will avoid a race condition
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 12/20] mm: Introduce __vm_normal_page()
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (10 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 11/20] mm: Introduce __maybe_mkwrite() Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 13/20] mm: Introduce __page_add_new_anon_rmap() Laurent Dufour
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

When dealing with the speculative fault path, we should use the VMA field
values cached in the vm_fault structure.

Currently vm_normal_page() uses the pointer to the VMA to fetch the
vm_flags value. This patch provides a new __vm_normal_page() which
receives the vm_flags value as a parameter.

Note: the speculative path is only enabled on architectures providing
support for the special PTE flag, so only the first block of
vm_normal_page() is used on the speculative path.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ad7b6372d302..9f9e5bb7a556 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -820,8 +820,9 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 #else
 # define HAVE_PTE_SPECIAL 0
 #endif
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
-				pte_t pte)
+static struct page *__vm_normal_page(struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, unsigned long vma_flags)
 {
 	unsigned long pfn = pte_pfn(pte);
 
@@ -830,7 +831,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			goto check_pfn;
 		if (vma->vm_ops && vma->vm_ops->find_special_page)
 			return vma->vm_ops->find_special_page(vma, addr);
-		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+		if (vma_flags & (VM_PFNMAP | VM_MIXEDMAP))
 			return NULL;
 		if (!is_zero_pfn(pfn))
 			print_bad_pte(vma, addr, pte, NULL);
@@ -839,8 +840,8 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 
 	/* !HAVE_PTE_SPECIAL case follows: */
 
-	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
-		if (vma->vm_flags & VM_MIXEDMAP) {
+	if (unlikely(vma_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
+		if (vma_flags & VM_MIXEDMAP) {
 			if (!pfn_valid(pfn))
 				return NULL;
 			goto out;
@@ -849,7 +850,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			off = (addr - vma->vm_start) >> PAGE_SHIFT;
 			if (pfn == vma->vm_pgoff + off)
 				return NULL;
-			if (!is_cow_mapping(vma->vm_flags))
+			if (!is_cow_mapping(vma_flags))
 				return NULL;
 		}
 	}
@@ -870,6 +871,13 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	return pfn_to_page(pfn);
 }
 
+struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+			    pte_t pte)
+{
+	return __vm_normal_page(vma, addr, pte, vma->vm_flags);
+}
+
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd)
@@ -2548,7 +2556,8 @@ static int do_wp_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 
-	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
+	vmf->page = __vm_normal_page(vma, vmf->address, vmf->orig_pte,
+				     vmf->vma_flags);
 	if (!vmf->page) {
 		/*
 		 * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a
@@ -3575,7 +3584,7 @@ static int do_numa_page(struct vm_fault *vmf)
 	ptep_modify_prot_commit(vma->vm_mm, vmf->address, vmf->pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 
-	page = vm_normal_page(vma, vmf->address, pte);
+	page = __vm_normal_page(vma, vmf->address, pte, vmf->vma_flags);
 	if (!page) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return 0;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 13/20] mm: Introduce __page_add_new_anon_rmap()
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (11 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 12/20] mm: Introduce __vm_normal_page() Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 14/20] mm: Provide speculative fault infrastructure Laurent Dufour
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

When dealing with the speculative page fault handler, we may race with a
VMA being split or merged. In this case the vma->vm_start and vma->vm_end
fields may not match the address at which the page fault is occurring.

This can only happen when the VMA is split, but in that case the
anon_vma pointer of the new VMA will be the same as the original one,
because in __split_vma() the new->anon_vma is set to src->anon_vma when
*new = *vma.

So even if the VMA boundaries are not correct, the anon_vma pointer is
still valid.

If the VMA has been merged, then the VMA into which it has been merged
must have the same anon_vma pointer, otherwise the merge cannot be done.

So in all cases we know that the anon_vma is valid: we have checked,
before starting the speculative page fault, that the anon_vma pointer is
valid for this VMA, and since there is an anon_vma, a page has at some
point been backed by it. Before the VMA is cleaned up, the page table
lock has to be grabbed to clean the PTE, and the anon_vma field is
checked once the PTE is locked.

This patch introduces a new __page_add_new_anon_rmap() service which
doesn't check the VMA boundaries, and creates a new inline one which does
the check.

When called from a page fault handler which is not a speculative one,
there is a guarantee that vm_start and vm_end match the faulting address,
so this check is useless. In the context of the speculative page fault
handler, this check may be wrong, but the anon_vma is still valid as
explained above.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/rmap.h | 12 ++++++++++--
 mm/memory.c          |  8 ++++----
 mm/rmap.c            |  5 ++---
 3 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 43ef2c30cb0f..f5cd4dbc78b0 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -170,8 +170,16 @@ void page_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long, bool);
 void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
 			   unsigned long, int);
-void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
-		unsigned long, bool);
+void __page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
+			      unsigned long, bool);
+static inline void page_add_new_anon_rmap(struct page *page,
+					  struct vm_area_struct *vma,
+					  unsigned long address, bool compound)
+{
+	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	__page_add_new_anon_rmap(page, vma, address, compound);
+}
+
 void page_add_file_rmap(struct page *, bool);
 void page_remove_rmap(struct page *, bool);
 
diff --git a/mm/memory.c b/mm/memory.c
index 9f9e5bb7a556..51bc8315281e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2376,7 +2376,7 @@ static int wp_page_copy(struct vm_fault *vmf)
 		 * thread doing COW.
 		 */
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
-		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
 		__lru_cache_add_active_or_unevictable(new_page, vmf->vma_flags);
 		/*
@@ -2847,7 +2847,7 @@ int do_swap_page(struct vm_fault *vmf)
 		mem_cgroup_commit_charge(page, memcg, true, false);
 		activate_page(page);
 	} else { /* ksm created a completely new copy */
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
 		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	}
@@ -2985,7 +2985,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	}
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, vma, vmf->address, false);
+	__page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 setpte:
@@ -3237,7 +3237,7 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 	/* copy-on-write page */
 	if (write && !(vmf->vma_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
+		__page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
 		__lru_cache_add_active_or_unevictable(page, vmf->vma_flags);
 	} else {
diff --git a/mm/rmap.c b/mm/rmap.c
index c1286d47aa1f..0c9f8ded669a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1122,7 +1122,7 @@ void do_page_add_anon_rmap(struct page *page,
 }
 
 /**
- * page_add_new_anon_rmap - add pte mapping to a new anonymous page
+ * __page_add_new_anon_rmap - add pte mapping to a new anonymous page
  * @page:	the page to add the mapping to
  * @vma:	the vm area in which the mapping is added
  * @address:	the user virtual address mapped
@@ -1132,12 +1132,11 @@ void do_page_add_anon_rmap(struct page *page,
  * This means the inc-and-test can be bypassed.
  * Page does not have to be locked.
  */
-void page_add_new_anon_rmap(struct page *page,
+void __page_add_new_anon_rmap(struct page *page,
 	struct vm_area_struct *vma, unsigned long address, bool compound)
 {
 	int nr = compound ? hpage_nr_pages(page) : 1;
 
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
 	__SetPageSwapBacked(page);
 	if (compound) {
 		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (12 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 13/20] mm: Introduce __page_add_new_anon_rmap() Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-20 12:11   ` Sergey Senozhatsky
  2017-08-27  0:18   ` Kirill A. Shutemov
  2017-08-17 22:05 ` [PATCH v2 15/20] mm: Try spin lock in speculative path Laurent Dufour
                   ` (7 subsequent siblings)
  21 siblings, 2 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Provide infrastructure to do a speculative fault (not holding
mmap_sem).

Not holding the mmap_sem means we can race against VMA change/removal
and page-table destruction. We use SRCU-based VMA freeing to keep the VMA
around. We use the VMA seqcount to detect changes (including unmapping /
page-table deletion) and we use gup_fast() style page-table walking to
deal with page-table races.

Once we've obtained the page and are ready to update the PTE, we validate
that the state we started the fault with is still valid; if not, we fail
the fault with VM_FAULT_RETRY, otherwise we update the PTE and we're done.
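
For context, the following is a rough, hypothetical sketch (the actual x86
and powerpc wiring is done in later patches of this series, and real
handlers also deal with retries, stack growth and signals) of how an
architecture fault handler could try the speculative path first and fall
back to the classic, mmap_sem-protected path:

	#include <linux/mm.h>

	/* Illustration only: simplified arch-side caller of the SPF path. */
	static int example_do_page_fault(struct mm_struct *mm,
					 unsigned long address,
					 unsigned int flags)
	{
		struct vm_area_struct *vma;
		int fault;

	#ifdef __HAVE_ARCH_CALL_SPF
		/* Try without mmap_sem; VM_FAULT_RETRY means "fall back". */
		fault = handle_speculative_fault(mm, address, flags);
		if (!(fault & VM_FAULT_RETRY))
			return fault;
	#endif

		down_read(&mm->mmap_sem);
		vma = find_vma(mm, address);
		if (!vma || address < vma->vm_start) {
			up_read(&mm->mmap_sem);
			return VM_FAULT_SIGSEGV;
		}
		fault = handle_mm_fault(vma, address, flags);
		up_read(&mm->mmap_sem);
		return fault;
	}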

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Manage the newly introduced pte_spinlock() for speculative page
 fault to fail if the VMA is touched in our back]
[Rename vma_is_dead() to vma_has_changed() and declare it here]
[Call p4d_alloc() as it is safe since pgd is valid]
[Call pud_alloc() as it is safe since p4d is valid]
[Set fe.sequence in __handle_mm_fault()]
[Abort speculative path when handle_userfault() has to be called]
[Add additional VMA's flags checks in handle_speculative_fault()]
[Clear FAULT_FLAG_ALLOW_RETRY in handle_speculative_fault()]
[Don't set vmf->pte and vmf->ptl if pte_map_lock() failed]
[Remove warning comment about waiting for !seq&1 since we don't want
 to wait]
[Remove warning about no huge page support, mention it explicitly]
[Don't call do_fault() in the speculative path as __do_fault() calls
 vma->vm_ops->fault() which may want to release mmap_sem]
[Only vm_fault pointer argument for vma_has_changed()]
[Fix check against huge page, calling pmd_trans_huge()]
[Introduce __HAVE_ARCH_CALL_SPF to declare the SPF handler only when
 architecture is supporting it]
[Use READ_ONCE() when reading VMA's fields in the speculative path]
[Explicitly check for __HAVE_ARCH_PTE_SPECIAL as we can't support for
 processing done in vm_normal_page()]
[Check that vma->anon_vma is already set when starting the speculative
 path]
[Check for memory policy as we can't support MPOL_INTERLEAVE case due to
 the processing done in mpol_misplaced()]
[Don't support VMA growing up or down]
[Move check on vm_sequence just before calling handle_pte_fault()]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/hugetlb_inline.h |   2 +-
 include/linux/mm.h             |   5 +
 include/linux/pagemap.h        |   4 +-
 mm/internal.h                  |  14 +++
 mm/memory.c                    | 237 ++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 254 insertions(+), 8 deletions(-)

diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index a4e7ca0f3585..6cfdfca4cc2a 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -7,7 +7,7 @@
 
 static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
 {
-	return !!(vma->vm_flags & VM_HUGETLB);
+	return !!(READ_ONCE(vma->vm_flags) & VM_HUGETLB);
 }
 
 #else
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0f4ddd72b172..0fe0811d304f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -315,6 +315,7 @@ struct vm_fault {
 	gfp_t gfp_mask;			/* gfp mask to be used for allocations */
 	pgoff_t pgoff;			/* Logical page offset based on vma */
 	unsigned long address;		/* Faulting virtual address */
+	unsigned int sequence;		/* Snapshot of the vma->vm_sequence count */
 	pmd_t *pmd;			/* Pointer to pmd entry matching
 					 * the 'address' */
 	pud_t *pud;			/* Pointer to pud entry matching
@@ -1297,6 +1298,10 @@ int invalidate_inode_page(struct page *page);
 #ifdef CONFIG_MMU
 extern int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags);
+#ifdef __HAVE_ARCH_CALL_SPF
+extern int handle_speculative_fault(struct mm_struct *mm,
+				    unsigned long address, unsigned int flags);
+#endif /* __HAVE_ARCH_CALL_SPF */
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 79b36f57c3ba..3a9735dfa6b6 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -443,8 +443,8 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
 	pgoff_t pgoff;
 	if (unlikely(is_vm_hugetlb_page(vma)))
 		return linear_hugepage_index(vma, address);
-	pgoff = (address - vma->vm_start) >> PAGE_SHIFT;
-	pgoff += vma->vm_pgoff;
+	pgoff = (address - READ_ONCE(vma->vm_start)) >> PAGE_SHIFT;
+	pgoff += READ_ONCE(vma->vm_pgoff);
 	return pgoff;
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index 736540f15936..9d6347e35747 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -45,6 +45,20 @@ extern struct srcu_struct vma_srcu;
 extern struct vm_area_struct *find_vma_srcu(struct mm_struct *mm,
 					    unsigned long addr);
 
+static inline bool vma_has_changed(struct vm_fault *vmf)
+{
+	int ret = RB_EMPTY_NODE(&vmf->vma->vm_rb);
+	unsigned seq = ACCESS_ONCE(vmf->vma->vm_sequence.sequence);
+
+	/*
+	 * Matches both the wmb in write_seqlock_{begin,end}() and
+	 * the wmb in vma_rb_erase().
+	 */
+	smp_rmb();
+
+	return ret || seq != vmf->sequence;
+}
+
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/memory.c b/mm/memory.c
index 51bc8315281e..0ba14a5797b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -760,7 +760,8 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 	if (page)
 		dump_page(page, "bad pte");
 	pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
-		 (void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
+		 (void *)addr, READ_ONCE(vma->vm_flags), vma->anon_vma,
+		 mapping, index);
 	/*
 	 * Choose text because data symbols depend on CONFIG_KALLSYMS_ALL=y
 	 */
@@ -2285,15 +2286,69 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 
 static bool pte_spinlock(struct vm_fault *vmf)
 {
+	bool ret = false;
+
+	/* Check if vma is still valid */
+	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+		vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+		spin_lock(vmf->ptl);
+		return true;
+	}
+
+	local_irq_disable();
+	if (vma_has_changed(vmf))
+		goto out;
+
 	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
 	spin_lock(vmf->ptl);
-	return true;
+
+	if (vma_has_changed(vmf)) {
+		spin_unlock(vmf->ptl);
+		goto out;
+	}
+
+	ret = true;
+out:
+	local_irq_enable();
+	return ret;
 }
 
 static bool pte_map_lock(struct vm_fault *vmf)
 {
-	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
-	return true;
+	bool ret = false;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
+		vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+					       vmf->address, &vmf->ptl);
+		return true;
+	}
+
+	/*
+	 * The first vma_has_changed() guarantees the page-tables are still
+	 * valid, having IRQs disabled ensures they stay around, hence the
+	 * second vma_has_changed() to make sure they are still valid once
+	 * we've got the lock. After that a concurrent zap_pte_range() will
+	 * block on the PTL and thus we're safe.
+	 */
+	local_irq_disable();
+	if (vma_has_changed(vmf))
+		goto out;
+
+	pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+				  vmf->address, &ptl);
+	if (vma_has_changed(vmf)) {
+		pte_unmap_unlock(pte, ptl);
+		goto out;
+	}
+
+	vmf->pte = pte;
+	vmf->ptl = ptl;
+	ret = true;
+out:
+	local_irq_enable();
+	return ret;
 }
 
 /*
@@ -2939,6 +2994,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
 			return VM_FAULT_RETRY;
 		if (!pte_none(*vmf->pte))
 			goto unlock;
+		/*
+		 * Don't call the userfaultfd during the speculative path.
+		 * We already checked that the VMA is not managed through
+		 * userfaultfd, but it may be set behind our back once we have
+		 * locked the pte. In such a case we can ignore it this time.
+		 */
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+			goto setpte;
 		/* Deliver the page fault to userland, check inside PT lock */
 		if (userfaultfd_missing(vma)) {
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2977,7 +3040,7 @@ static int do_anonymous_page(struct vm_fault *vmf)
 		goto release;
 
 	/* Deliver the page fault to userland, check inside PT lock */
-	if (userfaultfd_missing(vma)) {
+	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) && userfaultfd_missing(vma)) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		mem_cgroup_cancel_charge(page, memcg, false);
 		put_page(page);
@@ -3748,6 +3811,8 @@ static int handle_pte_fault(struct vm_fault *vmf)
 	if (!vmf->pte) {
 		if (vma_is_anonymous(vmf->vma))
 			return do_anonymous_page(vmf);
+		else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+			return VM_FAULT_RETRY;
 		else
 			return do_fault(vmf);
 	}
@@ -3845,6 +3910,7 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	vmf.pmd = pmd_alloc(mm, vmf.pud, address);
 	if (!vmf.pmd)
 		return VM_FAULT_OOM;
+	vmf.sequence = raw_read_seqcount(&vma->vm_sequence);
 	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
@@ -3872,6 +3938,167 @@ static int __handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 	return handle_pte_fault(&vmf);
 }
 
+#ifdef __HAVE_ARCH_CALL_SPF
+
+#ifndef __HAVE_ARCH_PTE_SPECIAL
+/* This is required by vm_normal_page() */
+#error "Speculative page fault handler requires __HAVE_ARCH_PTE_SPECIAL"
+#endif
+
+/*
+ * vm_normal_page() adds some processing which should be done while
+ * holding the mmap_sem.
+ */
+int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
+			     unsigned int flags)
+{
+	struct vm_fault vmf = {
+		.address = address,
+	};
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	int dead, seq, idx, ret = VM_FAULT_RETRY;
+	struct vm_area_struct *vma;
+	struct mempolicy *pol;
+
+	/* Clear flags that may lead to releasing the mmap_sem to retry */
+	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
+	flags |= FAULT_FLAG_SPECULATIVE;
+
+	idx = srcu_read_lock(&vma_srcu);
+	vma = find_vma_srcu(mm, address);
+	if (!vma)
+		goto unlock;
+
+	/*
+	 * Validate the VMA found by the lockless lookup.
+	 */
+	dead = RB_EMPTY_NODE(&vma->vm_rb);
+	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
+	if ((seq & 1) || dead)
+		goto unlock;
+
+	/*
+	 * Can't call the vm_ops services as we don't know what they would
+	 * do with the VMA.
+	 * This includes huge pages from hugetlbfs.
+	 */
+	if (vma->vm_ops)
+		goto unlock;
+
+	if (unlikely(!vma->anon_vma))
+		goto unlock;
+
+	vmf.vma_flags = READ_ONCE(vma->vm_flags);
+	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
+
+	/* Can't call userland page fault handler in the speculative path */
+	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+		goto unlock;
+
+	/*
+	 * MPOL_INTERLEAVE implies additional checks in mpol_misplaced() which
+	 * are not compatible with the speculative page fault processing.
+	 */
+	pol = __get_vma_policy(vma, address);
+	if (!pol)
+		pol = get_task_policy(current);
+	if (pol && pol->mode == MPOL_INTERLEAVE)
+		goto unlock;
+
+	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+		/*
+		 * This could be detected by checking the address against the
+		 * VMA's boundaries, but we want to trace it as not supported
+		 * instead of changed.
+		 */
+		goto unlock;
+
+	if (address < READ_ONCE(vma->vm_start)
+	    || READ_ONCE(vma->vm_end) <= address)
+		goto unlock;
+
+	/*
+	 * The three following checks are copied from access_error from
+	 * arch/x86/mm/fault.c
+	 */
+	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+				       flags & FAULT_FLAG_INSTRUCTION,
+				       flags & FAULT_FLAG_REMOTE))
+		goto unlock;
+
+	/* This one is required to check that the VMA has write access set */
+	if (flags & FAULT_FLAG_WRITE) {
+		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
+			goto unlock;
+	} else {
+		if (unlikely(!(vmf.vma_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+			goto unlock;
+	}
+
+	/*
+	 * Do a speculative lookup of the PTE entry.
+	 */
+	local_irq_disable();
+	pgd = pgd_offset(mm, address);
+	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		goto out_walk;
+
+	p4d = p4d_alloc(mm, pgd, address);
+	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
+		goto out_walk;
+
+	pud = pud_alloc(mm, p4d, address);
+	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
+		goto out_walk;
+
+	pmd = pmd_offset(pud, address);
+	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
+		goto out_walk;
+
+	/*
+	 * The above does not allocate/instantiate page-tables because doing so
+	 * would lead to the possibility of instantiating page-tables after
+	 * free_pgtables() -- and consequently leaking them.
+	 *
+	 * The result is that we take at least one !speculative fault per PMD
+	 * in order to instantiate it.
+	 */
+
+	/* Transparent huge pages are not supported. */
+	if (unlikely(pmd_trans_huge(*pmd)))
+		goto out_walk;
+
+	vmf.vma = vma;
+	vmf.pmd = pmd;
+	vmf.pgoff = linear_page_index(vma, address);
+	vmf.gfp_mask = __get_fault_gfp_mask(vma);
+	vmf.sequence = seq;
+	vmf.flags = flags;
+
+	local_irq_enable();
+
+	/*
+	 * We need to re-validate the VMA after checking the bounds, otherwise
+	 * we might have a false positive on the bounds.
+	 */
+	if (read_seqcount_retry(&vma->vm_sequence, seq))
+		goto unlock;
+
+	ret = handle_pte_fault(&vmf);
+
+unlock:
+	srcu_read_unlock(&vma_srcu, idx);
+	return ret;
+
+out_walk:
+	local_irq_enable();
+	goto unlock;
+}
+#endif /* __HAVE_ARCH_CALL_SPF */
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 15/20] mm: Try spin lock in speculative path
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (13 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 14/20] mm: Provide speculative fault infrastructure Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 16/20] mm: Adding speculative page fault failure trace events Laurent Dufour
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

There is a deadlock when one CPU is doing a speculative page fault while
another one is calling do_munmap().

The deadlock occurs because the speculative path tries to take the pte
spinlock while interrupts are disabled, while the other CPU, on the unmap
path, holds the pte lock and is waiting for all CPUs to invalidate the
TLB. As the CPU doing the speculative fault has interrupts disabled, it
can neither invalidate the TLB nor get the lock.

Since we are on a speculative path, we can race with other mm actions.
So let's assume that the lock may not be acquired and fail the
speculative page fault in that case.

Here are the stacks captured during the deadlock:

	CPU 0
	native_flush_tlb_others+0x7c/0x260
	flush_tlb_mm_range+0x6a/0x220
	tlb_flush_mmu_tlbonly+0x63/0xc0
	unmap_page_range+0x897/0x9d0
	? unmap_single_vma+0x7d/0xe0
	? release_pages+0x2b3/0x360
	unmap_single_vma+0x7d/0xe0
	unmap_vmas+0x51/0xa0
	unmap_region+0xbd/0x130
	do_munmap+0x279/0x460
	SyS_munmap+0x53/0x70

	CPU 1
	do_raw_spin_lock+0x14e/0x160
	_raw_spin_lock+0x5d/0x80
	? pte_map_lock+0x169/0x1b0
	pte_map_lock+0x169/0x1b0
	handle_pte_fault+0xbf2/0xd80
	? trace_hardirqs_on+0xd/0x10
	handle_speculative_fault+0x272/0x280
	handle_speculative_fault+0x5/0x280
	__do_page_fault+0x187/0x580
	trace_do_page_fault+0x52/0x260
	do_async_page_fault+0x19/0x70
	async_page_fault+0x28/0x30
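
The fix below boils down to the following pattern (a simplified,
hypothetical sketch, not the exact code of this patch): with IRQs
disabled, never wait for the PTL, only try to take it:

	/* Illustration only: called with IRQs disabled on the speculative
	 * path; the PTL owner may be waiting for our TLB flush IPI, so we
	 * must not spin on the lock. */
	static bool example_try_pte_lock(struct mm_struct *mm, pmd_t *pmd,
					 spinlock_t **ptlp)
	{
		spinlock_t *ptl = pte_lockptr(mm, pmd);

		if (!spin_trylock(ptl))
			return false;	/* fail the SPF, retry the classic path */

		*ptlp = ptl;
		return true;
	}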

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memory.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 0ba14a5797b2..8c701e4f59d3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2300,7 +2300,8 @@ static bool pte_spinlock(struct vm_fault *vmf)
 		goto out;
 
 	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-	spin_lock(vmf->ptl);
+	if (unlikely(!spin_trylock(vmf->ptl)))
+		goto out;
 
 	if (vma_has_changed(vmf)) {
 		spin_unlock(vmf->ptl);
@@ -2336,8 +2337,20 @@ static bool pte_map_lock(struct vm_fault *vmf)
 	if (vma_has_changed(vmf))
 		goto out;
 
-	pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
-				  vmf->address, &ptl);
+	/*
+	 * Same as pte_offset_map_lock() except that we call
+	 * spin_trylock() instead of spin_lock() to avoid a deadlock with
+	 * the unmap path, which may hold the lock while waiting for this
+	 * CPU to invalidate the TLB while this CPU has IRQs disabled.
+	 * Since we are on a speculative path, accept that it could fail.
+	 */
+	ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
+	pte = pte_offset_map(vmf->pmd, vmf->address);
+	if (unlikely(!spin_trylock(ptl))) {
+		pte_unmap(pte);
+		goto out;
+	}
+
 	if (vma_has_changed(vmf)) {
 		pte_unmap_unlock(pte, ptl);
 		goto out;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 16/20] mm: Adding speculative page fault failure trace events
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (14 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 15/20] mm: Try spin lock in speculative path Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-17 22:05 ` [PATCH v2 17/20] perf: Add a speculative page fault sw event Laurent Dufour
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

This patch adds a set of new trace events to collect speculative page
fault failures.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/trace/events/pagefault.h | 87 ++++++++++++++++++++++++++++++++++++++++
 mm/memory.c                      | 68 ++++++++++++++++++++++++-------
 2 files changed, 141 insertions(+), 14 deletions(-)
 create mode 100644 include/trace/events/pagefault.h

diff --git a/include/trace/events/pagefault.h b/include/trace/events/pagefault.h
new file mode 100644
index 000000000000..d7d56f8102d1
--- /dev/null
+++ b/include/trace/events/pagefault.h
@@ -0,0 +1,87 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM pagefault
+
+#if !defined(_TRACE_PAGEFAULT_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_PAGEFAULT_H
+
+#include <linux/tracepoint.h>
+#include <linux/mm.h>
+
+DECLARE_EVENT_CLASS(spf,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, caller)
+		__field(unsigned long, vm_start)
+		__field(unsigned long, vm_end)
+		__field(unsigned long, address)
+	),
+
+	TP_fast_assign(
+		__entry->caller		= caller;
+		__entry->vm_start	= vma->vm_start;
+		__entry->vm_end		= vma->vm_end;
+		__entry->address	= address;
+	),
+
+	TP_printk("ip:%lx vma:%lx-%lx address:%lx",
+		  __entry->caller, __entry->vm_start, __entry->vm_end,
+		  __entry->address)
+);
+
+DEFINE_EVENT(spf, spf_pte_lock,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_changed,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_dead,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_noanon,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_notsup,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+DEFINE_EVENT(spf, spf_vma_access,
+
+	TP_PROTO(unsigned long caller,
+		 struct vm_area_struct *vma, unsigned long address),
+
+	TP_ARGS(caller, vma, address)
+);
+
+#endif /* _TRACE_PAGEFAULT_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/memory.c b/mm/memory.c
index 8c701e4f59d3..549d23583f53 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -79,6 +79,9 @@
 
 #include "internal.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/pagefault.h>
+
 #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
 #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
 #endif
@@ -2296,15 +2299,20 @@ static bool pte_spinlock(struct vm_fault *vmf)
 	}
 
 	local_irq_disable();
-	if (vma_has_changed(vmf))
+	if (vma_has_changed(vmf)) {
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 
 	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
-	if (unlikely(!spin_trylock(vmf->ptl)))
+	if (unlikely(!spin_trylock(vmf->ptl))) {
+		trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 
 	if (vma_has_changed(vmf)) {
 		spin_unlock(vmf->ptl);
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
 	}
 
@@ -2334,8 +2342,10 @@ static bool pte_map_lock(struct vm_fault *vmf)
 	 * block on the PTL and thus we're safe.
 	 */
 	local_irq_disable();
-	if (vma_has_changed(vmf))
+	if (vma_has_changed(vmf)) {
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
+	}
 
 	/*
 	 * Same as pte_offset_map_lock() except that we call
@@ -2348,11 +2358,13 @@ static bool pte_map_lock(struct vm_fault *vmf)
 	pte = pte_offset_map(vmf->pmd, vmf->address);
 	if (unlikely(!spin_trylock(ptl))) {
 		pte_unmap(pte);
+		trace_spf_pte_lock(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
 	}
 
 	if (vma_has_changed(vmf)) {
 		pte_unmap_unlock(pte, ptl);
+		trace_spf_vma_changed(_RET_IP_, vmf->vma, vmf->address);
 		goto out;
 	}
 
@@ -3989,27 +4001,40 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * Validate the VMA found by the lockless lookup.
 	 */
 	dead = RB_EMPTY_NODE(&vma->vm_rb);
+	if (dead) {
+		trace_spf_vma_dead(_RET_IP_, vma, address);
+		goto unlock;
+	}
+
 	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
-	if ((seq & 1) || dead)
+	if (seq & 1) {
+		trace_spf_vma_changed(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	/*
 	 * Can't call vm_ops services as we don't know what they would do
 	 * with the VMA.
 	 * This includes huge pages from hugetlbfs.
 	 */
-	if (vma->vm_ops)
+	if (vma->vm_ops) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
-	if (unlikely(!vma->anon_vma))
+	if (unlikely(!vma->anon_vma)) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	vmf.vma_flags = READ_ONCE(vma->vm_flags);
 	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
 
 	/* Can't call userland page fault handler in the speculative path */
-	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
+	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING)) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	/*
 	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
@@ -4018,20 +4043,26 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	pol = __get_vma_policy(vma, address);
 	if (!pol)
 		pol = get_task_policy(current);
-	if (pol && pol->mode == MPOL_INTERLEAVE)
+	if (pol && pol->mode == MPOL_INTERLEAVE) {
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
-	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
+	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP) {
 		/*
 		 * This could be detected by checking the address against the
 		 * VMA's boundaries, but we want to trace it as not supported
 		 * instead of changed.
 		 */
+		trace_spf_vma_notsup(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	if (address < READ_ONCE(vma->vm_start)
-	    || READ_ONCE(vma->vm_end) <= address)
+	    || READ_ONCE(vma->vm_end) <= address) {
+		trace_spf_vma_changed(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	/*
 	 * The three following checks are copied from access_error from
@@ -4039,16 +4070,22 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 */
 	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
 				       flags & FAULT_FLAG_INSTRUCTION,
-				       flags & FAULT_FLAG_REMOTE))
+				       flags & FAULT_FLAG_REMOTE)) {
+		trace_spf_vma_access(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	/* This one is required to check that the VMA has write access set */
 	if (flags & FAULT_FLAG_WRITE) {
-		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
+		if (unlikely(!(vmf.vma_flags & VM_WRITE))) {
+			trace_spf_vma_access(_RET_IP_, vma, address);
 			goto unlock;
+		}
 	} else {
-		if (unlikely(!(vmf.vma_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+		if (unlikely(!(vmf.vma_flags & (VM_READ | VM_EXEC | VM_WRITE)))) {
+			trace_spf_vma_access(_RET_IP_, vma, address);
 			goto unlock;
+		}
 	}
 
 	/*
@@ -4097,8 +4134,10 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	 * We need to re-validate the VMA after checking the bounds, otherwise
 	 * we might have a false positive on the bounds.
 	 */
-	if (read_seqcount_retry(&vma->vm_sequence, seq))
+	if (read_seqcount_retry(&vma->vm_sequence, seq)) {
+		trace_spf_vma_changed(_RET_IP_, vma, address);
 		goto unlock;
+	}
 
 	ret = handle_pte_fault(&vmf);
 
@@ -4107,6 +4146,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
 	return ret;
 
 out_walk:
+	trace_spf_vma_notsup(_RET_IP_, vma, address);
 	local_irq_enable();
 	goto unlock;
 }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 17/20] perf: Add a speculative page fault sw event
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (15 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 16/20] mm: Adding speculative page fault failure trace events Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-21  8:55   ` Anshuman Khandual
  2017-08-17 22:05 ` [PATCH v2 18/20] perf tools: Add support for the SPF perf event Laurent Dufour
                   ` (4 subsequent siblings)
  21 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

Add a new software event to count successful speculative page faults.
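
As a rough illustration only, the new counter could be read from user space
with perf_event_open(2); this sketch assumes a kernel built with this series
(the raw config value 11 matches the PERF_COUNT_SW_SPF_DONE value added
below):

  #include <linux/perf_event.h>
  #include <sys/syscall.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <string.h>
  #include <stdio.h>

  int main(void)
  {
          struct perf_event_attr attr;
          long long count = 0;
          int fd;

          memset(&attr, 0, sizeof(attr));
          attr.size = sizeof(attr);
          attr.type = PERF_TYPE_SOFTWARE;
          attr.config = 11;       /* PERF_COUNT_SW_SPF_DONE in this series */
          attr.disabled = 1;

          /* count for the calling thread, on any CPU */
          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
          if (fd < 0) {
                  perror("perf_event_open");
                  return 1;
          }

          ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
          /* ... run some page fault heavy, multi-threaded work here ... */
          ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

          if (read(fd, &count, sizeof(count)) == sizeof(count))
                  printf("speculative page faults completed: %lld\n", count);
          close(fd);
          return 0;
  }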

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/uapi/linux/perf_event.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index b1c0b187acfe..3043ec0988e9 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -111,6 +111,7 @@ enum perf_sw_ids {
 	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
 	PERF_COUNT_SW_DUMMY			= 9,
 	PERF_COUNT_SW_BPF_OUTPUT		= 10,
+	PERF_COUNT_SW_SPF_DONE			= 11,
 
 	PERF_COUNT_SW_MAX,			/* non-ABI */
 };
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 18/20] perf tools: Add support for the SPF perf event
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (16 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 17/20] perf: Add a speculative page fault sw event Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-21  8:48   ` Anshuman Khandual
  2017-08-17 22:05 ` [PATCH v2 19/20] x86/mm: Add speculative pagefault handling Laurent Dufour
                   ` (3 subsequent siblings)
  21 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

Add support for the new 'speculative-faults' software event.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 tools/include/uapi/linux/perf_event.h | 1 +
 tools/perf/util/evsel.c               | 1 +
 tools/perf/util/parse-events.c        | 4 ++++
 tools/perf/util/parse-events.l        | 1 +
 tools/perf/util/python.c              | 1 +
 5 files changed, 8 insertions(+)

diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index b1c0b187acfe..3043ec0988e9 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -111,6 +111,7 @@ enum perf_sw_ids {
 	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
 	PERF_COUNT_SW_DUMMY			= 9,
 	PERF_COUNT_SW_BPF_OUTPUT		= 10,
+	PERF_COUNT_SW_SPF_DONE			= 11,
 
 	PERF_COUNT_SW_MAX,			/* non-ABI */
 };
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 413f74df08de..660a7038198b 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -426,6 +426,7 @@ const char *perf_evsel__sw_names[PERF_COUNT_SW_MAX] = {
 	"alignment-faults",
 	"emulation-faults",
 	"dummy",
+	"speculative-faults",
 };
 
 static const char *__perf_evsel__sw_name(u64 config)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 01e779b91c8e..ef8ef30d39c3 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -135,6 +135,10 @@ struct event_symbol event_symbols_sw[PERF_COUNT_SW_MAX] = {
 		.symbol = "bpf-output",
 		.alias  = "",
 	},
+	[PERF_COUNT_SW_SPF_DONE] = {
+		.symbol = "speculative-faults",
+		.alias	= "spf",
+	},
 };
 
 #define __PERF_EVENT_FIELD(config, name) \
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 660fca05bc93..5cb78f004737 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -274,6 +274,7 @@ alignment-faults				{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_AL
 emulation-faults				{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EMULATION_FAULTS); }
 dummy						{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_DUMMY); }
 bpf-output					{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_BPF_OUTPUT); }
+speculative-faults|spf				{ return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_SPF_DONE); }
 
 	/*
 	 * We have to handle the kernel PMU event cycles-ct/cycles-t/mem-loads/mem-stores separately.
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index c129e99114ae..1ee06e47d9dc 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -1141,6 +1141,7 @@ static struct {
 	PERF_CONST(COUNT_SW_ALIGNMENT_FAULTS),
 	PERF_CONST(COUNT_SW_EMULATION_FAULTS),
 	PERF_CONST(COUNT_SW_DUMMY),
+	PERF_CONST(COUNT_SW_SPF_DONE),
 
 	PERF_CONST(SAMPLE_IP),
 	PERF_CONST(SAMPLE_TID),
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 19/20] x86/mm: Add speculative pagefault handling
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (17 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 18/20] perf tools: Add support for the SPF perf event Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-21  7:29   ` Anshuman Khandual
  2017-08-17 22:05 ` [PATCH v2 20/20] powerpc/mm: Add speculative page fault Laurent Dufour
                   ` (2 subsequent siblings)
  21 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

From: Peter Zijlstra <peterz@infradead.org>

Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
 handle_speculative_fault()]
[Retry with the usual fault path in the case VM_FAULT_ERROR is returned by
 handle_speculative_fault(). This allows signals to be delivered]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/x86/include/asm/pgtable_types.h |  7 +++++++
 arch/x86/mm/fault.c                  | 19 +++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index bf9638e1ee42..4fd2693a037e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -234,6 +234,13 @@ enum page_cache_mode {
 #define PGD_IDENT_ATTR	 0x001		/* PRESENT (no other attributes) */
 #endif
 
+/*
+ * Advertise that we call the Speculative Page Fault handler.
+ */
+#ifdef CONFIG_X86_64
+#define __HAVE_ARCH_CALL_SPF
+#endif
+
 #ifdef CONFIG_X86_32
 # include <asm/pgtable_32_types.h>
 #else
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 2a1fa10c6a98..4c070b9a4362 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1365,6 +1365,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	if (error_code & PF_INSTR)
 		flags |= FAULT_FLAG_INSTRUCTION;
 
+#ifdef __HAVE_ARCH_CALL_SPF
+	if (error_code & PF_USER) {
+		fault = handle_speculative_fault(mm, address, flags);
+
+		/*
+		 * We also check against VM_FAULT_ERROR because we have to
+		 * raise a signal by calling mm_fault_error() later, which
+		 * requires the vma pointer to be set. So in that case,
+		 * we fall through to the normal path.
+		 */
+		if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR)) {
+			perf_sw_event(PERF_COUNT_SW_SPF_DONE, 1,
+				      regs, address);
+			goto done;
+		}
+	}
+#endif /* __HAVE_ARCH_CALL_SPF */
+
 	/*
 	 * When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in
@@ -1474,6 +1492,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		return;
 	}
 
+done:
 	/*
 	 * Major/minor page fault accounting. If any of the events
 	 * returned VM_FAULT_MAJOR, we account it as a major fault.
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 20/20] powerpc/mm: Add speculative page fault
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (18 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 19/20] x86/mm: Add speculative pagefault handling Laurent Dufour
@ 2017-08-17 22:05 ` Laurent Dufour
  2017-08-21  6:58   ` Anshuman Khandual
  2017-08-21  2:26 ` [PATCH v2 00/20] Speculative page faults Sergey Senozhatsky
  2017-08-21  6:28 ` Anshuman Khandual
  21 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-17 22:05 UTC (permalink / raw)
  To: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

This patch enables the speculative page fault on the PowerPC
architecture.

This will try a speculative page fault without holding the mmap_sem;
if it returns with WM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.

Support is only provided for BOOK3S_64 currently because:
- it requires CONFIG_PPC_STD_MMU because of the checks done in
  set_access_flags_filter()
- it requires BOOK3S because we can't support book3e_hugetlb_preload()
  being called by update_mmu_cache()

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
 arch/powerpc/mm/fault.c                      | 30 +++++++++++++++++++++++++++-
 2 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 818a58fc3f4f..897f8b9f67e6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -313,6 +313,11 @@ extern unsigned long pci_io_base;
 /* Advertise support for _PAGE_SPECIAL */
 #define __HAVE_ARCH_PTE_SPECIAL
 
+/* Advertise that we call the Speculative Page Fault handler */
+#if defined(CONFIG_PPC_BOOK3S_64)
+#define __HAVE_ARCH_CALL_SPF
+#endif
+
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 4c422632047b..7b3cc4c30eab 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -291,9 +291,36 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (is_write && is_user)
 		store_update_sp = store_updates_sp(regs);
 
-	if (is_user)
+	if (is_user) {
 		flags |= FAULT_FLAG_USER;
 
+#if defined(__HAVE_ARCH_CALL_SPF)
+		/* let's try a speculative page fault without grabbing the
+		 * mmap_sem.
+		 */
+
+		/*
+		 * flags is set later based on the VMA's flags, for the common
+		 * speculative service, we need some flags to be set.
+		 */
+		if (is_write)
+			flags |= FAULT_FLAG_WRITE;
+
+		fault = handle_speculative_fault(mm, address, flags);
+		if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR)) {
+			perf_sw_event(PERF_COUNT_SW_SPF_DONE, 1,
+				      regs, address);
+			goto done;
+		}
+
+		/*
+		 * Resetting flags since the following code assumes
+		 * FAULT_FLAG_WRITE is not set.
+		 */
+		flags &= ~FAULT_FLAG_WRITE;
+#endif /* defined(__HAVE_ARCH_CALL_SPF) */
+	}
+
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -479,6 +506,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 			rc = 0;
 	}
 
+done:
 	/*
 	 * Major/minor page fault accounting.
 	 */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-17 22:05 ` [PATCH v2 14/20] mm: Provide speculative fault infrastructure Laurent Dufour
@ 2017-08-20 12:11   ` Sergey Senozhatsky
  2017-08-25  8:52     ` Laurent Dufour
  2017-08-27  0:18   ` Kirill A. Shutemov
  1 sibling, 1 reply; 61+ messages in thread
From: Sergey Senozhatsky @ 2017-08-20 12:11 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On (08/18/17 00:05), Laurent Dufour wrote:
[..]
> +	/*
> +	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
> +	 * are not compatible with the speculative page fault processing.
> +	 */
> +	pol = __get_vma_policy(vma, address);
> +	if (!pol)
> +		pol = get_task_policy(current);
> +	if (pol && pol->mode == MPOL_INTERLEAVE)
> +		goto unlock;

include/linux/mempolicy.h defines

struct mempolicy *get_task_policy(struct task_struct *p);
struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
		unsigned long addr);

only for CONFIG_NUMA configs.
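
A rough sketch of one possible way to handle that, simply guarding the
policy check (assuming the speculative path only needs it when NUMA is
enabled):

  #ifdef CONFIG_NUMA
          pol = __get_vma_policy(vma, address);
          if (!pol)
                  pol = get_task_policy(current);
          if (pol && pol->mode == MPOL_INTERLEAVE)
                  goto unlock;
  #endif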

	-ss


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (19 preceding siblings ...)
  2017-08-17 22:05 ` [PATCH v2 20/20] powerpc/mm: Add speculative page fault Laurent Dufour
@ 2017-08-21  2:26 ` Sergey Senozhatsky
  2017-09-08  9:24   ` Laurent Dufour
  2017-08-21  6:28 ` Anshuman Khandual
  21 siblings, 1 reply; 61+ messages in thread
From: Sergey Senozhatsky @ 2017-08-21  2:26 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

Hello,

On (08/18/17 00:04), Laurent Dufour wrote:
> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
> handle page fault without holding the mm semaphore [1].
> 
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> process since the page fault handler will not wait for other threads memory
> layout change to be done, assuming that this change is done in another part
> of the process's memory space. This type page fault is named speculative
> page fault. If the speculative page fault fails because of a concurrency is
> detected or because underlying PMD or PTE tables are not yet allocating, it
> is failing its processing and a classic page fault is then tried.
> 
> The speculative page fault (SPF) has to look for the VMA matching the fault
> address without holding the mmap_sem, so the VMA list is now managed using
> SRCU allowing lockless walking. The only impact would be the deferred file
> derefencing in the case of a file mapping, since the file pointer is
> released once the SRCU cleaning is done.  This patch relies on the change
> done recently by Paul McKenney in SRCU which now runs a callback per CPU
> instead of per SRCU structure [1].
> 
> The VMA's attributes checked during the speculative page fault processing
> have to be protected against parallel changes. This is done by using a per
> VMA sequence lock. This sequence lock allows the speculative page fault
> handler to fast check for parallel changes in progress and to abort the
> speculative page fault in that case.
> 
> Once the VMA is found, the speculative page fault handler would check for
> the VMA's attributes to verify that the page fault has to be handled
> correctly or not. Thus the VMA is protected through a sequence lock which
> allows fast detection of concurrent VMA changes. If such a change is
> detected, the speculative page fault is aborted and a *classic* page fault
> is tried.  VMA sequence locks are added when VMA attributes which are
> checked during the page fault are modified.
> 
> When the PTE is fetched, the VMA is checked to see if it has been changed,
> so once the page table is locked, the VMA is valid, so any other changes
> leading to touching this PTE will need to lock the page table, so no
> parallel change is possible at this time.

[ 2311.315400] ======================================================
[ 2311.315401] WARNING: possible circular locking dependency detected
[ 2311.315403] 4.13.0-rc5-next-20170817-dbg-00039-gaf11d7500492-dirty #1743 Not tainted
[ 2311.315404] ------------------------------------------------------
[ 2311.315406] khugepaged/43 is trying to acquire lock:
[ 2311.315407]  (&mapping->i_mmap_rwsem){++++}, at: [<ffffffff8111b339>] rmap_walk_file+0x5a/0x147
[ 2311.315415] 
               but task is already holding lock:
[ 2311.315416]  (fs_reclaim){+.+.}, at: [<ffffffff810ebd80>] fs_reclaim_acquire+0x12/0x35
[ 2311.315420] 
               which lock already depends on the new lock.

[ 2311.315422] 
               the existing dependency chain (in reverse order) is:
[ 2311.315423] 
               -> #3 (fs_reclaim){+.+.}:
[ 2311.315427]        fs_reclaim_acquire+0x32/0x35
[ 2311.315429]        __alloc_pages_nodemask+0x8d/0x217
[ 2311.315432]        pte_alloc_one+0x13/0x5e
[ 2311.315434]        __pte_alloc+0x1f/0x83
[ 2311.315436]        move_page_tables+0x2c9/0x5ac
[ 2311.315438]        move_vma.isra.25+0xff/0x2a2
[ 2311.315439]        SyS_mremap+0x41b/0x49e
[ 2311.315442]        entry_SYSCALL_64_fastpath+0x18/0xad
[ 2311.315443] 
               -> #2 (&vma->vm_sequence/1){+.+.}:
[ 2311.315449]        write_seqcount_begin_nested+0x1b/0x1d
[ 2311.315451]        __vma_adjust+0x1b7/0x5d6
[ 2311.315453]        __split_vma+0x142/0x1a3
[ 2311.315454]        do_munmap+0x128/0x2af
[ 2311.315455]        vm_munmap+0x5a/0x73
[ 2311.315458]        elf_map+0xb1/0xce
[ 2311.315459]        load_elf_binary+0x8e0/0x1348
[ 2311.315462]        search_binary_handler+0x70/0x1f3
[ 2311.315464]        load_script+0x1a6/0x1b5
[ 2311.315466]        search_binary_handler+0x70/0x1f3
[ 2311.315468]        do_execveat_common+0x461/0x691
[ 2311.315471]        kernel_init+0x5a/0xf0
[ 2311.315472]        ret_from_fork+0x27/0x40
[ 2311.315473] 
               -> #1 (&vma->vm_sequence){+.+.}:
[ 2311.315478]        write_seqcount_begin_nested+0x1b/0x1d
[ 2311.315480]        __vma_adjust+0x19c/0x5d6
[ 2311.315481]        __split_vma+0x142/0x1a3
[ 2311.315482]        do_munmap+0x128/0x2af
[ 2311.315484]        vm_munmap+0x5a/0x73
[ 2311.315485]        elf_map+0xb1/0xce
[ 2311.315487]        load_elf_binary+0x8e0/0x1348
[ 2311.315489]        search_binary_handler+0x70/0x1f3
[ 2311.315490]        load_script+0x1a6/0x1b5
[ 2311.315492]        search_binary_handler+0x70/0x1f3
[ 2311.315494]        do_execveat_common+0x461/0x691
[ 2311.315496]        kernel_init+0x5a/0xf0
[ 2311.315497]        ret_from_fork+0x27/0x40
[ 2311.315498] 
               -> #0 (&mapping->i_mmap_rwsem){++++}:
[ 2311.315503]        lock_acquire+0x176/0x19e
[ 2311.315505]        down_read+0x3b/0x55
[ 2311.315507]        rmap_walk_file+0x5a/0x147
[ 2311.315508]        page_referenced+0x11c/0x134
[ 2311.315511]        shrink_page_list+0x36b/0xb80
[ 2311.315512]        shrink_inactive_list+0x1d9/0x437
[ 2311.315514]        shrink_node_memcg.constprop.71+0x3e7/0x571
[ 2311.315515]        shrink_node+0x3f/0x149
[ 2311.315517]        try_to_free_pages+0x270/0x45f
[ 2311.315518]        __alloc_pages_slowpath+0x34a/0xaa2
[ 2311.315520]        __alloc_pages_nodemask+0x111/0x217
[ 2311.315523]        khugepaged_alloc_page+0x17/0x45
[ 2311.315524]        khugepaged+0xa29/0x16b5
[ 2311.315527]        kthread+0xfb/0x103
[ 2311.315529]        ret_from_fork+0x27/0x40
[ 2311.315530] 
               other info that might help us debug this:

[ 2311.315531] Chain exists of:
                 &mapping->i_mmap_rwsem --> &vma->vm_sequence/1 --> fs_reclaim

[ 2311.315537]  Possible unsafe locking scenario:

[ 2311.315538]        CPU0                    CPU1
[ 2311.315539]        ----                    ----
[ 2311.315540]   lock(fs_reclaim);
[ 2311.315542]                                lock(&vma->vm_sequence/1);
[ 2311.315545]                                lock(fs_reclaim);
[ 2311.315547]   lock(&mapping->i_mmap_rwsem);
[ 2311.315549] 
                *** DEADLOCK ***

[ 2311.315551] 1 lock held by khugepaged/43:
[ 2311.315552]  #0:  (fs_reclaim){+.+.}, at: [<ffffffff810ebd80>] fs_reclaim_acquire+0x12/0x35
[ 2311.315556] 
               stack backtrace:
[ 2311.315559] CPU: 0 PID: 43 Comm: khugepaged Not tainted 4.13.0-rc5-next-20170817-dbg-00039-gaf11d7500492-dirty #1743
[ 2311.315560] Call Trace:
[ 2311.315564]  dump_stack+0x67/0x8e
[ 2311.315568]  print_circular_bug.isra.39+0x1c7/0x1d4
[ 2311.315570]  __lock_acquire+0xb1a/0xe06
[ 2311.315572]  ? graph_unlock+0x69/0x69
[ 2311.315575]  lock_acquire+0x176/0x19e
[ 2311.315577]  ? rmap_walk_file+0x5a/0x147
[ 2311.315579]  down_read+0x3b/0x55
[ 2311.315581]  ? rmap_walk_file+0x5a/0x147
[ 2311.315583]  rmap_walk_file+0x5a/0x147
[ 2311.315585]  page_referenced+0x11c/0x134
[ 2311.315587]  ? page_vma_mapped_walk_done.isra.15+0xb/0xb
[ 2311.315589]  ? page_get_anon_vma+0x6d/0x6d
[ 2311.315591]  shrink_page_list+0x36b/0xb80
[ 2311.315593]  ? _raw_spin_unlock_irq+0x29/0x46
[ 2311.315595]  shrink_inactive_list+0x1d9/0x437
[ 2311.315597]  shrink_node_memcg.constprop.71+0x3e7/0x571
[ 2311.315600]  shrink_node+0x3f/0x149
[ 2311.315602]  try_to_free_pages+0x270/0x45f
[ 2311.315604]  __alloc_pages_slowpath+0x34a/0xaa2
[ 2311.315608]  ? ___might_sleep+0xd5/0x234
[ 2311.315609]  __alloc_pages_nodemask+0x111/0x217
[ 2311.315612]  khugepaged_alloc_page+0x17/0x45
[ 2311.315613]  khugepaged+0xa29/0x16b5
[ 2311.315616]  ? remove_wait_queue+0x47/0x47
[ 2311.315618]  ? collapse_shmem.isra.43+0x882/0x882
[ 2311.315620]  kthread+0xfb/0x103
[ 2311.315622]  ? __list_del_entry+0x1d/0x1d
[ 2311.315624]  ret_from_fork+0x27/0x40

	-ss


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
                   ` (20 preceding siblings ...)
  2017-08-21  2:26 ` [PATCH v2 00/20] Speculative page faults Sergey Senozhatsky
@ 2017-08-21  6:28 ` Anshuman Khandual
  2017-08-22  0:41   ` Paul E. McKenney
  2017-08-25  9:41   ` Laurent Dufour
  21 siblings, 2 replies; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-21  6:28 UTC (permalink / raw)
  To: Laurent Dufour, paulmck, peterz, akpm, kirill, ak, mhocko, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

On 08/18/2017 03:34 AM, Laurent Dufour wrote:
> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
> handle page fault without holding the mm semaphore [1].
> 
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> process since the page fault handler will not wait for other threads memory
> layout change to be done, assuming that this change is done in another part
> of the process's memory space. This type page fault is named speculative
> page fault. If the speculative page fault fails because of a concurrency is
> detected or because underlying PMD or PTE tables are not yet allocating, it
> is failing its processing and a classic page fault is then tried.
> 
> The speculative page fault (SPF) has to look for the VMA matching the fault
> address without holding the mmap_sem, so the VMA list is now managed using
> SRCU allowing lockless walking. The only impact would be the deferred file
> derefencing in the case of a file mapping, since the file pointer is
> released once the SRCU cleaning is done.  This patch relies on the change
> done recently by Paul McKenney in SRCU which now runs a callback per CPU
> instead of per SRCU structure [1].
> 
> The VMA's attributes checked during the speculative page fault processing
> have to be protected against parallel changes. This is done by using a per
> VMA sequence lock. This sequence lock allows the speculative page fault
> handler to fast check for parallel changes in progress and to abort the
> speculative page fault in that case.
> 
> Once the VMA is found, the speculative page fault handler would check for
> the VMA's attributes to verify that the page fault has to be handled
> correctly or not. Thus the VMA is protected through a sequence lock which
> allows fast detection of concurrent VMA changes. If such a change is
> detected, the speculative page fault is aborted and a *classic* page fault
> is tried.  VMA sequence locks are added when VMA attributes which are
> checked during the page fault are modified.
> 
> When the PTE is fetched, the VMA is checked to see if it has been changed,
> so once the page table is locked, the VMA is valid, so any other changes
> leading to touching this PTE will need to lock the page table, so no
> parallel change is possible at this time.
> 
> Compared to the Peter's initial work, this series introduces a spin_trylock
> when dealing with speculative page fault. This is required to avoid dead
> lock when handling a page fault while a TLB invalidate is requested by an
> other CPU holding the PTE. Another change due to a lock dependency issue
> with mapping->i_mmap_rwsem.
> 
> In addition some VMA field values which are used once the PTE is unlocked
> at the end the page fault path are saved into the vm_fault structure to
> used the values matching the VMA at the time the PTE was locked.
> 
> This series builds on top of v4.13-rc5 and is functional on x86 and
> PowerPC.
> 
> Tests have been made using a large commercial in-memory database on a
> PowerPC system with 752 CPU using RFC v5. The results are very encouraging
> since the loading of the 2TB database was faster by 14% with the
> speculative page fault.
> 

You specifically mention loading because most of the page faults will
happen at that time, and then the working set will settle down with
very few page faults thereafter? That means unless there is
another wave of page faults we won't notice the performance improvement
during the runtime.

> Using ebizzy test [3], which spreads a lot of threads, the result are good
> when running on both a large or a small system. When using kernbench, the

The performance improvements are greater as there is a lot of creation
and destruction of anon mappings, which generates a constant flow of page
faults to be handled.

> result are quite similar which expected as not so much multi threaded
> processes are involved. But there is no performance degradation neither
> which is good.

If we compile with 'make -j N' there would be a lot of threads, but I
guess the problem is that SPF does not support handling file mappings IIUC,
which limits the performance improvement for some workloads.

> 
> ------------------
> Benchmarks results
> 
> Note these test have been made on top of 4.13-rc3 with the following patch
> from Paul McKenney applied: 
>  "srcu: Provide ordering for CPU not involved in grace period" [5]

Is this patch an improvement for SRCU, which we are using for walking the VMAs?

> 
> Ebizzy:
> -------
> The test is counting the number of records per second it can manage, the
> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
> result I repeated the test 100 times and measure the average result, mean
> deviation, max and min.
> 
> - 16 CPUs x86 VM
> Records/s	4.13-rc5	4.13-rc5-spf
> Average		11350.29	21760.36
> Mean deviation	396.56		881.40
> Max		13773		26194
> Min		10567		19223
> 
> - 80 CPUs Power 8 node:
> Records/s	4.13-rc5	4.13-rc5-spf
> Average		33904.67	58847.91
> Mean deviation	789.40		1753.19
> Max		36703		68958
> Min		31759		55125
> 

Can you also mention the % improvement or degradation in a new column?

> The number of record per second is far better with the speculative page
> fault.
> The mean deviation is higher with the speculative page fault, may be
> because sometime the fault are not handled in a speculative way leading to
> more variation.

We need to analyze that. Why did speculative page faults fail on those
occasions for the exact same workload?

> 
> 
> Kernbench:
> ----------
> This test is building a 4.12 kernel using platform default config. The
> build has been run 5 times each time.
> 
> - 16 CPUs x86 VM
> Average Half load -j 8 Run (std deviation)
>  		 4.13.0-rc5		4.13.0-rc5-spf
> Elapsed Time     166.574 (0.340779)	145.754 (0.776325)		
> User Time        1080.77 (2.05871)	999.272 (4.12142)		
> System Time      204.594 (1.02449)	116.362 (1.22974)		
> Percent CPU 	 771.2 (1.30384)	765 (0.707107)
> Context Switches 46590.6 (935.591)	66316.4 (744.64)
> Sleeps           84421.2 (596.612)	85186 (523.041)		


> 
> Average Optimal load -j 16 Run (std deviation)
>  		 4.13.0-rc5		4.13.0-rc5-spf
> Elapsed Time     85.422 (0.42293)	74.81 (0.419345)
> User Time        1031.79 (51.6557)	954.912 (46.8439)
> System Time      186.528 (19.0575)	107.514 (9.36902)
> Percent CPU 	 1059.2 (303.607)	1056.8 (307.624)
> Context Switches 67240.3 (21788.9)	89360.6 (24299.9)
> Sleeps           89607.8 (5511.22)	90372.5 (5490.16)
> 
> The elapsed time is a bit shorter in the case of the SPF release, but the
> impact less important since there are less multithreaded processes involved
> here. 
> 
> - 80 CPUs Power 8 node:
> Average Half load -j 40 Run (std deviation)
>  		 4.13.0-rc5		4.13.0-rc5-spf
> Elapsed Time     117.176 (0.824093)	116.792 (0.695392)
> User Time        4412.34 (24.29)	4396.02 (24.4819)
> System Time      131.106 (1.28343)	133.452 (0.708851)
> Percent CPU      3876.8 (18.1439)	3877.6 (21.9955)
> Context Switches 72470.2 (466.181)	72971 (673.624)
> Sleeps           161294 (2284.85)	161946 (2217.9)
> 
> Average Optimal load -j 80 Run (std deviation)
>  		 4.13.0-rc5		4.13.0-rc5-spf
> Elapsed Time     111.176 (1.11123)	111.242 (0.801542)
> User Time        5930.03 (1600.07)	5929.89 (1617)
> System Time      166.258 (37.0662)	169.337 (37.8419)
> Percent CPU      5378.5 (1584.16)	5385.6 (1590.24)
> Context Switches 117389 (47350.1)	130132 (60256.3)
> Sleeps           163354 (4153.9)	163219 (2251.27)
> 

Can you also mention the % improvement or degradation in a new column?

> Here the elapsed time is a bit shorter using the spf release, but we
> remain in the error margin. It has to be noted that this system is not
> correctly balanced on the NUMA point of view as all the available memory is
> attached to one core.

Why would a different NUMA configuration have changed the outcome?

> 
> ------------------------
> Changes since v1:
>  - Remove PERF_COUNT_SW_SPF_FAILED perf event.
>  - Add tracing events to details speculative page fault failures.
>  - Cache VMA fields values which are used once the PTE is unlocked at the
>  end of the page fault events.

Why is this required?


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 20/20] powerpc/mm: Add speculative page fault
  2017-08-17 22:05 ` [PATCH v2 20/20] powerpc/mm: Add speculative page fault Laurent Dufour
@ 2017-08-21  6:58   ` Anshuman Khandual
  2017-08-29 15:13     ` Laurent Dufour
  0 siblings, 1 reply; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-21  6:58 UTC (permalink / raw)
  To: Laurent Dufour, paulmck, peterz, akpm, kirill, ak, mhocko, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

On 08/18/2017 03:35 AM, Laurent Dufour wrote:
> This patch enable the speculative page fault on the PowerPC
> architecture.
> 
> This will try a speculative page fault without holding the mmap_sem,
> if it returns with WM_FAULT_RETRY, the mmap_sem is acquired and the

s/WM_FAULT_RETRY/VM_FAULT_RETRY/

> traditional page fault processing is done.
> 
> Support is only provide for BOOK3S_64 currently because:
> - require CONFIG_PPC_STD_MMU because checks done in
>   set_access_flags_filter()

What checks are done in set_access_flags_filter()? We are just
adding the code block in do_page_fault().


> - require BOOK3S because we can't support for book3e_hugetlb_preload()
>   called by update_mmu_cache()
> 
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>  arch/powerpc/mm/fault.c                      | 30 +++++++++++++++++++++++++++-
>  2 files changed, 34 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 818a58fc3f4f..897f8b9f67e6 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -313,6 +313,11 @@ extern unsigned long pci_io_base;
>  /* Advertise support for _PAGE_SPECIAL */
>  #define __HAVE_ARCH_PTE_SPECIAL
>  
> +/* Advertise that we call the Speculative Page Fault handler */
> +#if defined(CONFIG_PPC_BOOK3S_64)
> +#define __HAVE_ARCH_CALL_SPF
> +#endif
> +
>  #ifndef __ASSEMBLY__
>  
>  /*
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 4c422632047b..7b3cc4c30eab 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -291,9 +291,36 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>  	if (is_write && is_user)
>  		store_update_sp = store_updates_sp(regs);
>  
> -	if (is_user)
> +	if (is_user) {
>  		flags |= FAULT_FLAG_USER;
>  
> +#if defined(__HAVE_ARCH_CALL_SPF)
> +		/* let's try a speculative page fault without grabbing the
> +		 * mmap_sem.
> +		 */
> +
> +		/*
> +		 * flags is set later based on the VMA's flags, for the common
> +		 * speculative service, we need some flags to be set.
> +		 */
> +		if (is_write)
> +			flags |= FAULT_FLAG_WRITE;
> +
> +		fault = handle_speculative_fault(mm, address, flags);
> +		if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR)) {
> +			perf_sw_event(PERF_COUNT_SW_SPF_DONE, 1,
> +				      regs, address);
> +			goto done;

Why should we retry with the classical page fault on VM_FAULT_ERROR?
We should always return VM_FAULT_RETRY in case there is a clear
collision somewhere which requires a retry with the classical method,
and return VM_FAULT_ERROR in cases where we know that it cannot
be retried and should fail for good. Shouldn't handle_speculative_fault()
be changed to accommodate this?

> +		}
> +
> +		/*
> +		 * Resetting flags since the following code assumes
> +		 * FAULT_FLAG_WRITE is not set.
> +		 */
> +		flags &= ~FAULT_FLAG_WRITE;
> +#endif /* defined(__HAVE_ARCH_CALL_SPF) */

Setting and resetting FAULT_FLAG_WRITE seems confusing. Why do you
say that some flags need to be set for the handle_speculative_fault()
function? Could you elaborate on this?


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 19/20] x86/mm: Add speculative pagefault handling
  2017-08-17 22:05 ` [PATCH v2 19/20] x86/mm: Add speculative pagefault handling Laurent Dufour
@ 2017-08-21  7:29   ` Anshuman Khandual
  2017-08-29 14:50     ` Laurent Dufour
  0 siblings, 1 reply; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-21  7:29 UTC (permalink / raw)
  To: Laurent Dufour, paulmck, peterz, akpm, kirill, ak, mhocko, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

On 08/18/2017 03:35 AM, Laurent Dufour wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> 
> Try a speculative fault before acquiring mmap_sem, if it returns with
> VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
> traditional fault.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> [Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
>  handle_speculative_fault()]
> [Retry with usual fault path in the case VM_ERROR is returned by
>  handle_speculative_fault(). This allows signal to be delivered]
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  arch/x86/include/asm/pgtable_types.h |  7 +++++++
>  arch/x86/mm/fault.c                  | 19 +++++++++++++++++++
>  2 files changed, 26 insertions(+)
> 
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index bf9638e1ee42..4fd2693a037e 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -234,6 +234,13 @@ enum page_cache_mode {
>  #define PGD_IDENT_ATTR	 0x001		/* PRESENT (no other attributes) */
>  #endif
>  
> +/*
> + * Advertise that we call the Speculative Page Fault handler.
> + */
> +#ifdef CONFIG_X86_64
> +#define __HAVE_ARCH_CALL_SPF
> +#endif
> +
>  #ifdef CONFIG_X86_32
>  # include <asm/pgtable_32_types.h>
>  #else
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 2a1fa10c6a98..4c070b9a4362 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -1365,6 +1365,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>  	if (error_code & PF_INSTR)
>  		flags |= FAULT_FLAG_INSTRUCTION;
>  
> +#ifdef __HAVE_ARCH_CALL_SPF
> +	if (error_code & PF_USER) {
> +		fault = handle_speculative_fault(mm, address, flags);
> +
> +		/*
> +		 * We also check against VM_FAULT_ERROR because we have to
> +		 * raise a signal by calling later mm_fault_error() which
> +		 * requires the vma pointer to be set. So in that case,
> +		 * we fall through the normal path.

Can't mm_fault_error() be called inside handle_speculative_fault()?
Falling through the normal page fault path again just to raise a
signal seems like overkill. Looking into mm_fault_error(), it seems the
signatures are different for x86 and powerpc.

X86:

mm_fault_error(struct pt_regs *regs, unsigned long error_code,
               unsigned long address, struct vm_area_struct *vma,
               unsigned int fault)

powerpc:

mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)

Even in the case of x86, I guess we would have a reference to the faulting
VMA (after the SRCU search) which can be used to call this function
directly.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 18/20] perf tools: Add support for the SPF perf event
  2017-08-17 22:05 ` [PATCH v2 18/20] perf tools: Add support for the SPF perf event Laurent Dufour
@ 2017-08-21  8:48   ` Anshuman Khandual
  2017-08-25  8:53     ` Laurent Dufour
  0 siblings, 1 reply; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-21  8:48 UTC (permalink / raw)
  To: Laurent Dufour, paulmck, peterz, akpm, kirill, ak, mhocko, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

On 08/18/2017 03:35 AM, Laurent Dufour wrote:
> Add support for the new speculative faults event.
> 
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
>  tools/include/uapi/linux/perf_event.h | 1 +
>  tools/perf/util/evsel.c               | 1 +
>  tools/perf/util/parse-events.c        | 4 ++++
>  tools/perf/util/parse-events.l        | 1 +
>  tools/perf/util/python.c              | 1 +
>  5 files changed, 8 insertions(+)
> 
> diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
> index b1c0b187acfe..3043ec0988e9 100644
> --- a/tools/include/uapi/linux/perf_event.h
> +++ b/tools/include/uapi/linux/perf_event.h
> @@ -111,6 +111,7 @@ enum perf_sw_ids {
>  	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
>  	PERF_COUNT_SW_DUMMY			= 9,
>  	PERF_COUNT_SW_BPF_OUTPUT		= 10,
> +	PERF_COUNT_SW_SPF_DONE			= 11,

Right, just one event for the success case. 'DONE' is redundant; only
'SPF' should be fine IMHO.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 17/20] perf: Add a speculative page fault sw event
  2017-08-17 22:05 ` [PATCH v2 17/20] perf: Add a speculative page fault sw event Laurent Dufour
@ 2017-08-21  8:55   ` Anshuman Khandual
  2017-08-22  1:46     ` Michael Ellerman
  0 siblings, 1 reply; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-21  8:55 UTC (permalink / raw)
  To: Laurent Dufour, paulmck, peterz, akpm, kirill, ak, mhocko, dave,
	jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, khandual, npiggin, bsingharora,
	Tim Chen, linuxppc-dev, x86

On 08/18/2017 03:35 AM, Laurent Dufour wrote:
> Add a new software event to count succeeded speculative page faults.
> 
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>

Should be merged with the next patch.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-08-21  6:28 ` Anshuman Khandual
@ 2017-08-22  0:41   ` Paul E. McKenney
  2017-08-25  9:41   ` Laurent Dufour
  1 sibling, 0 replies; 61+ messages in thread
From: Paul E. McKenney @ 2017-08-22  0:41 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Laurent Dufour, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On Mon, Aug 21, 2017 at 11:58:03AM +0530, Anshuman Khandual wrote:
> On 08/18/2017 03:34 AM, Laurent Dufour wrote:
> > This is a port on kernel 4.13 of the work done by Peter Zijlstra to
> > handle page fault without holding the mm semaphore [1].
> > 
> > The idea is to try to handle user space page faults without holding the
> > mmap_sem. This should allow better concurrency for massively threaded
> > process since the page fault handler will not wait for other threads memory
> > layout change to be done, assuming that this change is done in another part
> > of the process's memory space. This type page fault is named speculative
> > page fault. If the speculative page fault fails because of a concurrency is
> > detected or because underlying PMD or PTE tables are not yet allocating, it
> > is failing its processing and a classic page fault is then tried.
> > 
> > The speculative page fault (SPF) has to look for the VMA matching the fault
> > address without holding the mmap_sem, so the VMA list is now managed using
> > SRCU allowing lockless walking. The only impact would be the deferred file
> > derefencing in the case of a file mapping, since the file pointer is
> > released once the SRCU cleaning is done.  This patch relies on the change
> > done recently by Paul McKenney in SRCU which now runs a callback per CPU
> > instead of per SRCU structure [1].
> > 
> > The VMA's attributes checked during the speculative page fault processing
> > have to be protected against parallel changes. This is done by using a per
> > VMA sequence lock. This sequence lock allows the speculative page fault
> > handler to fast check for parallel changes in progress and to abort the
> > speculative page fault in that case.
> > 
> > Once the VMA is found, the speculative page fault handler would check for
> > the VMA's attributes to verify that the page fault has to be handled
> > correctly or not. Thus the VMA is protected through a sequence lock which
> > allows fast detection of concurrent VMA changes. If such a change is
> > detected, the speculative page fault is aborted and a *classic* page fault
> > is tried.  VMA sequence locks are added when VMA attributes which are
> > checked during the page fault are modified.
> > 
> > When the PTE is fetched, the VMA is checked to see if it has been changed,
> > so once the page table is locked, the VMA is valid, so any other changes
> > leading to touching this PTE will need to lock the page table, so no
> > parallel change is possible at this time.
> > 
> > Compared to the Peter's initial work, this series introduces a spin_trylock
> > when dealing with speculative page fault. This is required to avoid dead
> > lock when handling a page fault while a TLB invalidate is requested by an
> > other CPU holding the PTE. Another change due to a lock dependency issue
> > with mapping->i_mmap_rwsem.
> > 
> > In addition some VMA field values which are used once the PTE is unlocked
> > at the end the page fault path are saved into the vm_fault structure to
> > used the values matching the VMA at the time the PTE was locked.
> > 
> > This series builds on top of v4.13-rc5 and is functional on x86 and
> > PowerPC.
> > 
> > Tests have been made using a large commercial in-memory database on a
> > PowerPC system with 752 CPU using RFC v5. The results are very encouraging
> > since the loading of the 2TB database was faster by 14% with the
> > speculative page fault.
> > 
> 
> You specifically mention loading as most of the page faults will
> happen at that time and then the working set will settle down with
> very less page faults there after ? That means unless there is
> another wave of page faults we wont notice performance improvement
> during the runtime.
> 
> > Using ebizzy test [3], which spreads a lot of threads, the result are good
> > when running on both a large or a small system. When using kernbench, the
> 
> The performance improvements are greater as there is a lot of creation
> and destruction of anon mappings which generates constant flow of page
> faults to be handled.
> 
> > result are quite similar which expected as not so much multi threaded
> > processes are involved. But there is no performance degradation neither
> > which is good.
> 
> If we compile with 'make -j N' there would be a lot of threads but I
> guess the problem is SPF does not support handling file mapping IIUC
> which limits the performance improvement for some workloads.
> 
> > 
> > ------------------
> > Benchmarks results
> > 
> > Note these test have been made on top of 4.13-rc3 with the following patch
> > from Paul McKenney applied: 
> >  "srcu: Provide ordering for CPU not involved in grace period" [5]
> 
> Is this patch an improvement for SRCU which we are using for walking VMAs.

It is a tweak to an earlier patch that parallelizes SRCU callback
handling.

							Thanx, Paul

> > Ebizzy:
> > -------
> > The test is counting the number of records per second it can manage, the
> > higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
> > result I repeated the test 100 times and measure the average result, mean
> > deviation, max and min.
> > 
> > - 16 CPUs x86 VM
> > Records/s	4.13-rc5	4.13-rc5-spf
> > Average		11350.29	21760.36
> > Mean deviation	396.56		881.40
> > Max		13773		26194
> > Min		10567		19223
> > 
> > - 80 CPUs Power 8 node:
> > Records/s	4.13-rc5	4.13-rc5-spf
> > Average		33904.67	58847.91
> > Mean deviation	789.40		1753.19
> > Max		36703		68958
> > Min		31759		55125
> > 
> 
> Can you also mention % improvement or degradation in a new column.
> 
> > The number of record per second is far better with the speculative page
> > fault.
> > The mean deviation is higher with the speculative page fault, may be
> > because sometime the fault are not handled in a speculative way leading to
> > more variation.
> 
> we need to analyze that. Why speculative page faults failed on those
> occasions for exact same workload.
> 
> > 
> > 
> > Kernbench:
> > ----------
> > This test is building a 4.12 kernel using platform default config. The
> > build has been run 5 times each time.
> > 
> > - 16 CPUs x86 VM
> > Average Half load -j 8 Run (std deviation)
> >  		 4.13.0-rc5		4.13.0-rc5-spf
> > Elapsed Time     166.574 (0.340779)	145.754 (0.776325)		
> > User Time        1080.77 (2.05871)	999.272 (4.12142)		
> > System Time      204.594 (1.02449)	116.362 (1.22974)		
> > Percent CPU 	 771.2 (1.30384)	765 (0.707107)
> > Context Switches 46590.6 (935.591)	66316.4 (744.64)
> > Sleeps           84421.2 (596.612)	85186 (523.041)		
> 
> 
> > 
> > Average Optimal load -j 16 Run (std deviation)
> >  		 4.13.0-rc5		4.13.0-rc5-spf
> > Elapsed Time     85.422 (0.42293)	74.81 (0.419345)
> > User Time        1031.79 (51.6557)	954.912 (46.8439)
> > System Time      186.528 (19.0575)	107.514 (9.36902)
> > Percent CPU 	 1059.2 (303.607)	1056.8 (307.624)
> > Context Switches 67240.3 (21788.9)	89360.6 (24299.9)
> > Sleeps           89607.8 (5511.22)	90372.5 (5490.16)
> > 
> > The elapsed time is a bit shorter in the case of the SPF release, but the
> > impact less important since there are less multithreaded processes involved
> > here. 
> > 
> > - 80 CPUs Power 8 node:
> > Average Half load -j 40 Run (std deviation)
> >  		 4.13.0-rc5		4.13.0-rc5-spf
> > Elapsed Time     117.176 (0.824093)	116.792 (0.695392)
> > User Time        4412.34 (24.29)	4396.02 (24.4819)
> > System Time      131.106 (1.28343)	133.452 (0.708851)
> > Percent CPU      3876.8 (18.1439)	3877.6 (21.9955)
> > Context Switches 72470.2 (466.181)	72971 (673.624)
> > Sleeps           161294 (2284.85)	161946 (2217.9)
> > 
> > Average Optimal load -j 80 Run (std deviation)
> >  		 4.13.0-rc5		4.13.0-rc5-spf
> > Elapsed Time     111.176 (1.11123)	111.242 (0.801542)
> > User Time        5930.03 (1600.07)	5929.89 (1617)
> > System Time      166.258 (37.0662)	169.337 (37.8419)
> > Percent CPU      5378.5 (1584.16)	5385.6 (1590.24)
> > Context Switches 117389 (47350.1)	130132 (60256.3)
> > Sleeps           163354 (4153.9)	163219 (2251.27)
> > 
> 
> Can you also mention % improvement or degradation in a new column.
> 
> > Here the elapsed time is a bit shorter using the spf release, but we
> > remain in the error margin. It has to be noted that this system is not
> > correctly balanced on the NUMA point of view as all the available memory is
> > attached to one core.
> 
> Why different NUMA configuration would have changed the outcome ?
> 
> > 
> > ------------------------
> > Changes since v1:
> >  - Remove PERF_COUNT_SW_SPF_FAILED perf event.
> >  - Add tracing events to details speculative page fault failures.
> >  - Cache VMA fields values which are used once the PTE is unlocked at the
> >  end of the page fault events.
> 
> Why is this required ?


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 17/20] perf: Add a speculative page fault sw event
  2017-08-21  8:55   ` Anshuman Khandual
@ 2017-08-22  1:46     ` Michael Ellerman
  0 siblings, 0 replies; 61+ messages in thread
From: Michael Ellerman @ 2017-08-22  1:46 UTC (permalink / raw)
  To: Anshuman Khandual, Laurent Dufour, paulmck, peterz, akpm, kirill,
	ak, mhocko, dave, jack, Matthew Wilcox, benh, paulus,
	Thomas Gleixner, Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

Anshuman Khandual <khandual@linux.vnet.ibm.com> writes:

> On 08/18/2017 03:35 AM, Laurent Dufour wrote:
>> Add a new software event to count succeeded speculative page faults.
>> 
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>
> Should be merged with the next patch.

No it shouldn't.

cheers


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-20 12:11   ` Sergey Senozhatsky
@ 2017-08-25  8:52     ` Laurent Dufour
  0 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-25  8:52 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On 20/08/2017 14:11, Sergey Senozhatsky wrote:
> On (08/18/17 00:05), Laurent Dufour wrote:
> [..]
>> +	/*
>> +	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
>> +	 * are not compatible with the speculative page fault processing.
>> +	 */
>> +	pol = __get_vma_policy(vma, address);
>> +	if (!pol)
>> +		pol = get_task_policy(current);
>> +	if (pol && pol->mode == MPOL_INTERLEAVE)
>> +		goto unlock;
> 
> include/linux/mempolicy.h defines
> 
> struct mempolicy *get_task_policy(struct task_struct *p);
> struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> 		unsigned long addr);
> 
> only for CONFIG_NUMA configs.

Thanks Sergey, I'll add #ifdef around this block.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 18/20] perf tools: Add support for the SPF perf event
  2017-08-21  8:48   ` Anshuman Khandual
@ 2017-08-25  8:53     ` Laurent Dufour
  0 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-25  8:53 UTC (permalink / raw)
  To: Anshuman Khandual, paulmck, peterz, akpm, kirill, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

On 21/08/2017 10:48, Anshuman Khandual wrote:
> On 08/18/2017 03:35 AM, Laurent Dufour wrote:
>> Add support for the new speculative faults event.
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  tools/include/uapi/linux/perf_event.h | 1 +
>>  tools/perf/util/evsel.c               | 1 +
>>  tools/perf/util/parse-events.c        | 4 ++++
>>  tools/perf/util/parse-events.l        | 1 +
>>  tools/perf/util/python.c              | 1 +
>>  5 files changed, 8 insertions(+)
>>
>> diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
>> index b1c0b187acfe..3043ec0988e9 100644
>> --- a/tools/include/uapi/linux/perf_event.h
>> +++ b/tools/include/uapi/linux/perf_event.h
>> @@ -111,6 +111,7 @@ enum perf_sw_ids {
>>  	PERF_COUNT_SW_EMULATION_FAULTS		= 8,
>>  	PERF_COUNT_SW_DUMMY			= 9,
>>  	PERF_COUNT_SW_BPF_OUTPUT		= 10,
>> +	PERF_COUNT_SW_SPF_DONE			= 11,
> 
> Right, just one event for the success case. 'DONE' is redundant, only
> 'SPF' should be fine IMHO.
> 

Fair enough, I'll rename it PERF_COUNT_SW_SPF.

Thanks,
Laurent.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-08-21  6:28 ` Anshuman Khandual
  2017-08-22  0:41   ` Paul E. McKenney
@ 2017-08-25  9:41   ` Laurent Dufour
  1 sibling, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-25  9:41 UTC (permalink / raw)
  To: Anshuman Khandual, paulmck, peterz, akpm, kirill, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

On 21/08/2017 08:28, Anshuman Khandual wrote:
> On 08/18/2017 03:34 AM, Laurent Dufour wrote:
>> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
>> handle page fault without holding the mm semaphore [1].
>>
>> The idea is to try to handle user space page faults without holding the
>> mmap_sem. This should allow better concurrency for massively threaded
>> process since the page fault handler will not wait for other threads memory
>> layout change to be done, assuming that this change is done in another part
>> of the process's memory space. This type page fault is named speculative
>> page fault. If the speculative page fault fails because of a concurrency is
>> detected or because underlying PMD or PTE tables are not yet allocating, it
>> is failing its processing and a classic page fault is then tried.
>>
>> The speculative page fault (SPF) has to look for the VMA matching the fault
>> address without holding the mmap_sem, so the VMA list is now managed using
>> SRCU allowing lockless walking. The only impact would be the deferred file
>> derefencing in the case of a file mapping, since the file pointer is
>> released once the SRCU cleaning is done.  This patch relies on the change
>> done recently by Paul McKenney in SRCU which now runs a callback per CPU
>> instead of per SRCU structure [1].
>>
>> The VMA's attributes checked during the speculative page fault processing
>> have to be protected against parallel changes. This is done by using a per
>> VMA sequence lock. This sequence lock allows the speculative page fault
>> handler to fast check for parallel changes in progress and to abort the
>> speculative page fault in that case.
>>
>> Once the VMA is found, the speculative page fault handler would check for
>> the VMA's attributes to verify that the page fault has to be handled
>> correctly or not. Thus the VMA is protected through a sequence lock which
>> allows fast detection of concurrent VMA changes. If such a change is
>> detected, the speculative page fault is aborted and a *classic* page fault
>> is tried.  VMA sequence locks are added when VMA attributes which are
>> checked during the page fault are modified.
>>
>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>> so once the page table is locked, the VMA is valid, so any other changes
>> leading to touching this PTE will need to lock the page table, so no
>> parallel change is possible at this time.
>>
>> Compared to the Peter's initial work, this series introduces a spin_trylock
>> when dealing with speculative page fault. This is required to avoid dead
>> lock when handling a page fault while a TLB invalidate is requested by an
>> other CPU holding the PTE. Another change due to a lock dependency issue
>> with mapping->i_mmap_rwsem.
>>
>> In addition some VMA field values which are used once the PTE is unlocked
>> at the end the page fault path are saved into the vm_fault structure to
>> used the values matching the VMA at the time the PTE was locked.
>>
>> This series builds on top of v4.13-rc5 and is functional on x86 and
>> PowerPC.
>>
>> Tests have been made using a large commercial in-memory database on a
>> PowerPC system with 752 CPU using RFC v5. The results are very encouraging
>> since the loading of the 2TB database was faster by 14% with the
>> speculative page fault.
>>
> 
> You specifically mention loading as most of the page faults will
> happen at that time and then the working set will settle down with
> very less page faults there after ? That means unless there is
> another wave of page faults we wont notice performance improvement
> during the runtime.

I just captured performance statistics during the database loading; since
the database was not stimulated afterwards, no page faults were generated.
Further tests will be made while the database is running, but I don't have
the framework to do so right now.

> 
>> Using ebizzy test [3], which spreads a lot of threads, the result are good
>> when running on both a large or a small system. When using kernbench, the
> 
> The performance improvements are greater as there is a lot of creation
> and destruction of anon mappings which generates constant flow of page
> faults to be handled.
> 
>> result are quite similar which expected as not so much multi threaded
>> processes are involved. But there is no performance degradation neither
>> which is good.
> 
> If we compile with 'make -j N' there would be a lot of threads but I
> guess the problem is SPF does not support handling file mapping IIUC
> which limits the performance improvement for some workloads.

Yes, but that test shows that there is no performance degradation, which
is good.

>>
>> ------------------
>> Benchmarks results
>>
>> Note these test have been made on top of 4.13-rc3 with the following patch
>> from Paul McKenney applied: 
>>  "srcu: Provide ordering for CPU not involved in grace period" [5]
> 
> Is this patch an improvement for SRCU which we are using for walking VMAs.
> 
>>
>> Ebizzy:
>> -------
>> The test is counting the number of records per second it can manage, the
>> higher is the best. I run it like this 'ebizzy -mTRp'. To get consistent
>> result I repeated the test 100 times and measure the average result, mean
>> deviation, max and min.
>>
>> - 16 CPUs x86 VM
>> Records/s	4.13-rc5	4.13-rc5-spf
>> Average		11350.29	21760.36
>> Mean deviation	396.56		881.40
>> Max		13773		26194
>> Min		10567		19223
>>
>> - 80 CPUs Power 8 node:
>> Records/s	4.13-rc5	4.13-rc5-spf
>> Average		33904.67	58847.91
>> Mean deviation	789.40		1753.19
>> Max		36703		68958
>> Min		31759		55125
>>
> 
> Can you also mention % improvement or degradation in a new column.

Fair enough:

- 16 CPUs x86 VM
Records/s	4.13-rc5	4.13-rc5-spf
Average		11350.29	21760.36	+92%
Mean deviation	396.56		881.40		+122%
Max		13773		26194		+90%
Min		10567		19223		+82%

- 80 CPUs Power 8 node:
Records/s	4.13-rc5	4.13-rc5-spf
Average		33904.67	58847.91	+74%
Mean deviation	789.40		1753.19		+122%
Max		36703		68958		+88%
Min		31759		55125		+74%


> 
>> The number of record per second is far better with the speculative page
>> fault.
>> The mean deviation is higher with the speculative page fault, may be
>> because sometime the fault are not handled in a speculative way leading to
>> more variation.
> 
> we need to analyze that. Why speculative page faults failed on those
> occasions for exact same workload.

It's not even clear that the increase in mean deviation is due to
speculative page fault failures. This will need to be studied, but even if
the mean deviation is higher, the results are far better anyway.

>>
>>
>> Kernbench:
>> ----------
>> This test is building a 4.12 kernel using platform default config. The
>> build has been run 5 times each time.
>>
>> - 16 CPUs x86 VM
>> Average Half load -j 8 Run (std deviation)
>>  		 4.13.0-rc5		4.13.0-rc5-spf
>> Elapsed Time     166.574 (0.340779)	145.754 (0.776325)		
>> User Time        1080.77 (2.05871)	999.272 (4.12142)		
>> System Time      204.594 (1.02449)	116.362 (1.22974)		
>> Percent CPU 	 771.2 (1.30384)	765 (0.707107)
>> Context Switches 46590.6 (935.591)	66316.4 (744.64)
>> Sleeps           84421.2 (596.612)	85186 (523.041)		
> 
> 
>>
>> Average Optimal load -j 16 Run (std deviation)
>>  		 4.13.0-rc5		4.13.0-rc5-spf
>> Elapsed Time     85.422 (0.42293)	74.81 (0.419345)
>> User Time        1031.79 (51.6557)	954.912 (46.8439)
>> System Time      186.528 (19.0575)	107.514 (9.36902)
>> Percent CPU 	 1059.2 (303.607)	1056.8 (307.624)
>> Context Switches 67240.3 (21788.9)	89360.6 (24299.9)
>> Sleeps           89607.8 (5511.22)	90372.5 (5490.16)
>>
>> The elapsed time is a bit shorter in the case of the SPF release, but the
>> impact less important since there are less multithreaded processes involved
>> here. 
>>
>> - 80 CPUs Power 8 node:
>> Average Half load -j 40 Run (std deviation)
>>  		 4.13.0-rc5		4.13.0-rc5-spf
>> Elapsed Time     117.176 (0.824093)	116.792 (0.695392)
>> User Time        4412.34 (24.29)	4396.02 (24.4819)
>> System Time      131.106 (1.28343)	133.452 (0.708851)
>> Percent CPU      3876.8 (18.1439)	3877.6 (21.9955)
>> Context Switches 72470.2 (466.181)	72971 (673.624)
>> Sleeps           161294 (2284.85)	161946 (2217.9)
>>
>> Average Optimal load -j 80 Run (std deviation)
>>  		 4.13.0-rc5		4.13.0-rc5-spf
>> Elapsed Time     111.176 (1.11123)	111.242 (0.801542)
>> User Time        5930.03 (1600.07)	5929.89 (1617)
>> System Time      166.258 (37.0662)	169.337 (37.8419)
>> Percent CPU      5378.5 (1584.16)	5385.6 (1590.24)
>> Context Switches 117389 (47350.1)	130132 (60256.3)
>> Sleeps           163354 (4153.9)	163219 (2251.27)
>>
> 
> Can you also mention % improvement or degradation in a new column.

Fair enough:
- 16 CPUs x86 VM
Average Half load -j 8 Run (std deviation)
                 4.13.0-rc5             4.13.0-rc5-spf
Elapsed Time     166.574 (0.340779)     145.754 (0.776325)	-12.5%
User Time        1080.77 (2.05871)      999.272 (4.12142)	-7.54%
System Time      204.594 (1.02449)      116.362 (1.22974)	-43.13%
Percent CPU      771.2 (1.30384)        765 (0.707107)		-0.8%
Context Switches 46590.6 (935.591)      66316.4 (744.64)	+42.34%
Sleeps           84421.2 (596.612)      85186 (523.041)		+0.9%

Average Optimal load -j 16 Run (std deviation)
                 4.13.0-rc5             4.13.0-rc5-spf
Elapsed Time     85.422 (0.42293)       74.81 (0.419345)	-12.42%
User Time        1031.79 (51.6557)      954.912 (46.8439)	-7.45%
System Time      186.528 (19.0575)      107.514 (9.36902)	-42.36%
Percent CPU      1059.2 (303.607)       1056.8 (307.624)	-0.23%
Context Switches 67240.3 (21788.9)      89360.6 (24299.9)	+32.9%
Sleeps           89607.8 (5511.22)      90372.5 (5490.16)	+0.85%

- 80 CPUs Power 8 node:
Average Half load -j 40 Run (std deviation)
                 4.13.0-rc5             4.13.0-rc5-spf
Elapsed Time     117.176 (0.824093)     116.792 (0.695392)	-0.33%
User Time        4412.34 (24.29)        4396.02 (24.4819)	-0.37%
System Time      131.106 (1.28343)      133.452 (0.708851)	+1.79%
Percent CPU      3876.8 (18.1439)       3877.6 (21.9955)	+0.02%
Context Switches 72470.2 (466.181)      72971 (673.624)		+0.69%
Sleeps           161294 (2284.85)       161946 (2217.9)		+0.40%

Average Optimal load -j 80 Run (std deviation)
                 4.13.0-rc5             4.13.0-rc5-spf
Elapsed Time     111.176 (1.11123)      111.242 (0.801542)	+0.06%
User Time        5930.03 (1600.07)      5929.89 (1617)		+0%
System Time      166.258 (37.0662)      169.337 (37.8419)	+1.85%
Percent CPU      5378.5 (1584.16)       5385.6 (1590.24)	+0.13%
Context Switches 117389 (47350.1)       130132 (60256.3)	+10.86%
Sleeps           163354 (4153.9)        163219 (2251.27)	-0.08%


>> Here the elapsed time is a bit shorter using the spf release, but we
>> remain in the error margin. It has to be noted that this system is not
>> correctly balanced on the NUMA point of view as all the available memory is
>> attached to one core.
> 
> Why different NUMA configuration would have changed the outcome ?

I guess processes would have been scheduled nearer to the memory, or spread
in a different way across the cores the memory is attached to.

>>
>> ------------------------
>> Changes since v1:
>>  - Remove PERF_COUNT_SW_SPF_FAILED perf event.
>>  - Add tracing events to details speculative page fault failures.
>>  - Cache VMA fields values which are used once the PTE is unlocked at the
>>  end of the page fault events.
> 
> Why is this required ?

Please see patch 07/20 for details.

Cheers,
Laurent.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-17 22:05 ` [PATCH v2 14/20] mm: Provide speculative fault infrastructure Laurent Dufour
  2017-08-20 12:11   ` Sergey Senozhatsky
@ 2017-08-27  0:18   ` Kirill A. Shutemov
  2017-08-28  9:37     ` Peter Zijlstra
                       ` (3 more replies)
  1 sibling, 4 replies; 61+ messages in thread
From: Kirill A. Shutemov @ 2017-08-27  0:18 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: paulmck, peterz, akpm, ak, mhocko, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, linux-kernel, linux-mm, haren, khandual, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On Fri, Aug 18, 2017 at 12:05:13AM +0200, Laurent Dufour wrote:
> +/*
> + * vm_normal_page() adds some processing which should be done while
> + * hodling the mmap_sem.
> + */
> +int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
> +			     unsigned int flags)
> +{
> +	struct vm_fault vmf = {
> +		.address = address,
> +	};
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +	int dead, seq, idx, ret = VM_FAULT_RETRY;
> +	struct vm_area_struct *vma;
> +	struct mempolicy *pol;
> +
> +	/* Clear flags that may lead to release the mmap_sem to retry */
> +	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
> +	flags |= FAULT_FLAG_SPECULATIVE;
> +
> +	idx = srcu_read_lock(&vma_srcu);
> +	vma = find_vma_srcu(mm, address);
> +	if (!vma)
> +		goto unlock;
> +
> +	/*
> +	 * Validate the VMA found by the lockless lookup.
> +	 */
> +	dead = RB_EMPTY_NODE(&vma->vm_rb);
> +	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
> +	if ((seq & 1) || dead)
> +		goto unlock;
> +
> +	/*
> +	 * Can't call vm_ops service has we don't know what they would do
> +	 * with the VMA.
> +	 * This include huge page from hugetlbfs.
> +	 */
> +	if (vma->vm_ops)
> +		goto unlock;

I think we need to have a way to white-list safe ->vm_ops.

> +
> +	if (unlikely(!vma->anon_vma))
> +		goto unlock;

It deserves a comment.

> +
> +	vmf.vma_flags = READ_ONCE(vma->vm_flags);
> +	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
> +
> +	/* Can't call userland page fault handler in the speculative path */
> +	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
> +		goto unlock;
> +
> +	/*
> +	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
> +	 * are not compatible with the speculative page fault processing.
> +	 */
> +	pol = __get_vma_policy(vma, address);
> +	if (!pol)
> +		pol = get_task_policy(current);
> +	if (pol && pol->mode == MPOL_INTERLEAVE)
> +		goto unlock;
> +
> +	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
> +		/*
> +		 * This could be detected by the check address against VMA's
> +		 * boundaries but we want to trace it as not supported instead
> +		 * of changed.
> +		 */
> +		goto unlock;
> +
> +	if (address < READ_ONCE(vma->vm_start)
> +	    || READ_ONCE(vma->vm_end) <= address)
> +		goto unlock;
> +
> +	/*
> +	 * The three following checks are copied from access_error from
> +	 * arch/x86/mm/fault.c
> +	 */
> +	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> +				       flags & FAULT_FLAG_INSTRUCTION,
> +				       flags & FAULT_FLAG_REMOTE))
> +		goto unlock;
> +
> +	/* This is one is required to check that the VMA has write access set */
> +	if (flags & FAULT_FLAG_WRITE) {
> +		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
> +			goto unlock;
> +	} else {
> +		if (unlikely(!(vmf.vma_flags & (VM_READ | VM_EXEC | VM_WRITE))))
> +			goto unlock;
> +	}
> +
> +	/*
> +	 * Do a speculative lookup of the PTE entry.
> +	 */
> +	local_irq_disable();
> +	pgd = pgd_offset(mm, address);
> +	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
> +		goto out_walk;
> +
> +	p4d = p4d_alloc(mm, pgd, address);
> +	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
> +		goto out_walk;
> +
> +	pud = pud_alloc(mm, p4d, address);
> +	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
> +		goto out_walk;
> +
> +	pmd = pmd_offset(pud, address);
> +	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
> +		goto out_walk;
> +
> +	/*
> +	 * The above does not allocate/instantiate page-tables because doing so
> +	 * would lead to the possibility of instantiating page-tables after
> +	 * free_pgtables() -- and consequently leaking them.
> +	 *
> +	 * The result is that we take at least one !speculative fault per PMD
> +	 * in order to instantiate it.
> +	 */


Doing all this job and just give up because we cannot allocate page tables
looks very wasteful to me.

Have you considered to look how we can hand over from speculative to
non-speculative path without starting from scratch (when possible)?

> +	/* Transparent huge pages are not supported. */
> +	if (unlikely(pmd_trans_huge(*pmd)))
> +		goto out_walk;

That's looks like a blocker to me.

Is there any problem with making it supported (besides plain coding)?

> +
> +	vmf.vma = vma;
> +	vmf.pmd = pmd;
> +	vmf.pgoff = linear_page_index(vma, address);
> +	vmf.gfp_mask = __get_fault_gfp_mask(vma);
> +	vmf.sequence = seq;
> +	vmf.flags = flags;
> +
> +	local_irq_enable();
> +
> +	/*
> +	 * We need to re-validate the VMA after checking the bounds, otherwise
> +	 * we might have a false positive on the bounds.
> +	 */
> +	if (read_seqcount_retry(&vma->vm_sequence, seq))
> +		goto unlock;
> +
> +	ret = handle_pte_fault(&vmf);
> +
> +unlock:
> +	srcu_read_unlock(&vma_srcu, idx);
> +	return ret;
> +
> +out_walk:
> +	local_irq_enable();
> +	goto unlock;
> +}
> +#endif /* __HAVE_ARCH_CALL_SPF */
> +
>  /*
>   * By the time we get here, we already hold the mm semaphore
>   *
> -- 
> 2.7.4
> 

-- 
 Kirill A. Shutemov


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-27  0:18   ` Kirill A. Shutemov
@ 2017-08-28  9:37     ` Peter Zijlstra
  2017-08-28 21:14       ` Benjamin Herrenschmidt
  2017-08-29  7:59     ` Laurent Dufour
                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-28  9:37 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Laurent Dufour, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Sun, Aug 27, 2017 at 03:18:23AM +0300, Kirill A. Shutemov wrote:
> On Fri, Aug 18, 2017 at 12:05:13AM +0200, Laurent Dufour wrote:
> > +	/*
> > +	 * Can't call vm_ops service has we don't know what they would do
> > +	 * with the VMA.
> > +	 * This include huge page from hugetlbfs.
> > +	 */
> > +	if (vma->vm_ops)
> > +		goto unlock;
> 
> I think we need to have a way to white-list safe ->vm_ops.

Either that, or simply teach all ->fault() callbacks about speculative
faults. Shouldn't be too hard, just 'work'.
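
A minimal sketch of what that could look like, assuming only the
FAULT_FLAG_SPECULATIVE flag this series introduces (the handler below is
purely illustrative, not from the series):

	static int example_fault(struct vm_fault *vmf)
	{
		/*
		 * Not audited for lockless operation yet: ask for the
		 * classic, mmap_sem-protected path instead.
		 */
		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
			return VM_FAULT_RETRY;

		/* ... the usual ->fault() work goes here ... */
		return 0;
	}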

> > +
> > +	if (unlikely(!vma->anon_vma))
> > +		goto unlock;
> 
> It deserves a comment.

Yes, that was very much not intended. It wrecks most of the fun. This
really _should_ work for file maps too.

> > +	/*
> > +	 * Do a speculative lookup of the PTE entry.
> > +	 */
> > +	local_irq_disable();
> > +	pgd = pgd_offset(mm, address);
> > +	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
> > +		goto out_walk;
> > +
> > +	p4d = p4d_alloc(mm, pgd, address);
> > +	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
> > +		goto out_walk;
> > +
> > +	pud = pud_alloc(mm, p4d, address);
> > +	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
> > +		goto out_walk;
> > +
> > +	pmd = pmd_offset(pud, address);
> > +	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
> > +		goto out_walk;
> > +
> > +	/*
> > +	 * The above does not allocate/instantiate page-tables because doing so
> > +	 * would lead to the possibility of instantiating page-tables after
> > +	 * free_pgtables() -- and consequently leaking them.
> > +	 *
> > +	 * The result is that we take at least one !speculative fault per PMD
> > +	 * in order to instantiate it.
> > +	 */
> 
> 
> Doing all this job and just give up because we cannot allocate page tables
> looks very wasteful to me.
> 
> Have you considered to look how we can hand over from speculative to
> non-speculative path without starting from scratch (when possible)?

So we _can_ in fact allocate and install page-tables, but we have to be
very careful about it. The interesting case is where we race with
free_pgtables() and install a page that was just taken out.

But since we already have the VMA I think we can do something like:

	if (p*g_none()) {
		p*d_t *new = p*d_alloc_one(mm, address);

		spin_lock(&mm->page_table_lock);
		if (!vma_changed_or_dead(vma,seq)) {
			if (p*d_none())
				p*d_populate(mm, p*d, new);
			else
				p*d_free(new);

			new = NULL;
		}
		spin_unlock(&mm->page_table_lock);

		if (new) {
			p*d_free(new);
			goto out_walk;
		}
	}

I just never bothered with that, figured we ought to get the basics
working before trying to be clever.
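
Spelled out for one level (a missing pud page under a p4d entry), and
reusing the hypothetical vma_changed_or_dead() check from the sketch
above, that would be something like the following (glossing over the
fact that the allocation itself could not happen in the irq-disabled
section of the current walk):

	if (p4d_none(*p4d)) {
		pud_t *new = pud_alloc_one(mm, address);

		if (!new)
			goto out_walk;

		spin_lock(&mm->page_table_lock);
		if (!vma_changed_or_dead(vma, seq)) {
			if (p4d_none(*p4d))
				p4d_populate(mm, p4d, new);
			else
				pud_free(mm, new);
			new = NULL;
		}
		spin_unlock(&mm->page_table_lock);

		/* VMA changed or died under us: drop the table and bail out */
		if (new) {
			pud_free(mm, new);
			goto out_walk;
		}
	}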

> > +	/* Transparent huge pages are not supported. */
> > +	if (unlikely(pmd_trans_huge(*pmd)))
> > +		goto out_walk;
> 
> That's looks like a blocker to me.
> 
> Is there any problem with making it supported (besides plain coding)?

Not that I can remember, but I never really looked at THP, I don't think
we even had that when I did the first versions.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-28  9:37     ` Peter Zijlstra
@ 2017-08-28 21:14       ` Benjamin Herrenschmidt
  2017-08-28 22:35         ` Andi Kleen
  2017-08-29  8:33         ` Peter Zijlstra
  0 siblings, 2 replies; 61+ messages in thread
From: Benjamin Herrenschmidt @ 2017-08-28 21:14 UTC (permalink / raw)
  To: Peter Zijlstra, Kirill A. Shutemov
  Cc: Laurent Dufour, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, linux-kernel, linux-mm, haren, khandual, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On Mon, 2017-08-28 at 11:37 +0200, Peter Zijlstra wrote:
> > Doing all this job and just give up because we cannot allocate page tables
> > looks very wasteful to me.
> > 
> > Have you considered to look how we can hand over from speculative to
> > non-speculative path without starting from scratch (when possible)?
> 
> So we _can_ in fact allocate and install page-tables, but we have to be
> very careful about it. The interesting case is where we race with
> free_pgtables() and install a page that was just taken out.
> 
> But since we already have the VMA I think we can do something like:

That makes me extremely nervous... there could be all sort of
assumptions esp. in arch code about the fact that we never populate the
tree without the mm sem.

We'd have to audit archs closely. Things like the page walk cache
flushing on power etc...

I don't mind the "retry" .. .we've brought stuff in the L1 cache
already which I would expect to be the bulk of the overhead, and the
allocation case isn't that common. Do we have numbers to show how
detrimental this is today?

Cheers,
Ben.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-28 21:14       ` Benjamin Herrenschmidt
@ 2017-08-28 22:35         ` Andi Kleen
  2017-08-29  8:15           ` Peter Zijlstra
  2017-08-29  8:33         ` Peter Zijlstra
  1 sibling, 1 reply; 61+ messages in thread
From: Andi Kleen @ 2017-08-28 22:35 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Peter Zijlstra, Kirill A. Shutemov, Laurent Dufour, paulmck,
	akpm, mhocko, dave, jack, Matthew Wilcox, mpe, paulus,
	Thomas Gleixner, Ingo Molnar, hpa, Will Deacon, linux-kernel,
	linux-mm, haren, khandual, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

> That makes me extremely nervous... there could be all sort of
> assumptions esp. in arch code about the fact that we never populate the
> tree without the mm sem.
> 
> We'd have to audit archs closely. Things like the page walk cache
> flushing on power etc...

Yes the whole thing is quite risky. Probably will need some
kind of per architecture opt-in scheme?

-Andi


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-27  0:18   ` Kirill A. Shutemov
  2017-08-28  9:37     ` Peter Zijlstra
@ 2017-08-29  7:59     ` Laurent Dufour
  2017-08-29 12:04       ` Peter Zijlstra
  2017-08-30  5:25     ` Anshuman Khandual
  2017-08-30  8:56     ` Laurent Dufour
  3 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-29  7:59 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: paulmck, peterz, akpm, ak, mhocko, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, linux-kernel, linux-mm, haren, khandual, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On 27/08/2017 02:18, Kirill A. Shutemov wrote:
> On Fri, Aug 18, 2017 at 12:05:13AM +0200, Laurent Dufour wrote:
>> +/*
>> + * vm_normal_page() adds some processing which should be done while
>> + * hodling the mmap_sem.
>> + */
>> +int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>> +			     unsigned int flags)
>> +{
>> +	struct vm_fault vmf = {
>> +		.address = address,
>> +	};
>> +	pgd_t *pgd;
>> +	p4d_t *p4d;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +	int dead, seq, idx, ret = VM_FAULT_RETRY;
>> +	struct vm_area_struct *vma;
>> +	struct mempolicy *pol;
>> +
>> +	/* Clear flags that may lead to release the mmap_sem to retry */
>> +	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>> +	flags |= FAULT_FLAG_SPECULATIVE;
>> +
>> +	idx = srcu_read_lock(&vma_srcu);
>> +	vma = find_vma_srcu(mm, address);
>> +	if (!vma)
>> +		goto unlock;
>> +
>> +	/*
>> +	 * Validate the VMA found by the lockless lookup.
>> +	 */
>> +	dead = RB_EMPTY_NODE(&vma->vm_rb);
>> +	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>> +	if ((seq & 1) || dead)
>> +		goto unlock;
>> +
>> +	/*
>> +	 * Can't call vm_ops service has we don't know what they would do
>> +	 * with the VMA.
>> +	 * This include huge page from hugetlbfs.
>> +	 */
>> +	if (vma->vm_ops)
>> +		goto unlock;
> 
> I think we need to have a way to white-list safe ->vm_ops.

Hi Kirill,
Yes, this would be a good optimization to do in a next step.

>> +
>> +	if (unlikely(!vma->anon_vma))
>> +		goto unlock;
> 
> It deserves a comment.

You're right I'll add it in the next version.
For the record, the root cause is that __anon_vma_prepare() requires the
mmap_sem to be held because vm_next and vm_prev must be safe.
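
Roughly, the anon_vma allocation path may try to reuse a neighbour's
anon_vma, and walking the neighbours is only safe under the mmap_sem; a
simplified sketch (not a verbatim copy of mm/rmap.c):

	struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
	{
		/* vm_next and vm_prev are only stable under mmap_sem */
		struct vm_area_struct *near = vma->vm_next;

		if (near && anon_vma_compatible(vma, near) && near->anon_vma)
			return near->anon_vma;

		near = vma->vm_prev;
		if (near && anon_vma_compatible(near, vma) && near->anon_vma)
			return near->anon_vma;

		return NULL;
	}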


>> +
>> +	vmf.vma_flags = READ_ONCE(vma->vm_flags);
>> +	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>> +
>> +	/* Can't call userland page fault handler in the speculative path */
>> +	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>> +		goto unlock;
>> +
>> +	/*
>> +	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
>> +	 * are not compatible with the speculative page fault processing.
>> +	 */
>> +	pol = __get_vma_policy(vma, address);
>> +	if (!pol)
>> +		pol = get_task_policy(current);
>> +	if (pol && pol->mode == MPOL_INTERLEAVE)
>> +		goto unlock;
>> +
>> +	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>> +		/*
>> +		 * This could be detected by the check address against VMA's
>> +		 * boundaries but we want to trace it as not supported instead
>> +		 * of changed.
>> +		 */
>> +		goto unlock;
>> +
>> +	if (address < READ_ONCE(vma->vm_start)
>> +	    || READ_ONCE(vma->vm_end) <= address)
>> +		goto unlock;
>> +
>> +	/*
>> +	 * The three following checks are copied from access_error from
>> +	 * arch/x86/mm/fault.c
>> +	 */
>> +	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>> +				       flags & FAULT_FLAG_INSTRUCTION,
>> +				       flags & FAULT_FLAG_REMOTE))
>> +		goto unlock;
>> +
>> +	/* This is one is required to check that the VMA has write access set */
>> +	if (flags & FAULT_FLAG_WRITE) {
>> +		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
>> +			goto unlock;
>> +	} else {
>> +		if (unlikely(!(vmf.vma_flags & (VM_READ | VM_EXEC | VM_WRITE))))
>> +			goto unlock;
>> +	}
>> +
>> +	/*
>> +	 * Do a speculative lookup of the PTE entry.
>> +	 */
>> +	local_irq_disable();
>> +	pgd = pgd_offset(mm, address);
>> +	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
>> +		goto out_walk;
>> +
>> +	p4d = p4d_alloc(mm, pgd, address);
>> +	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
>> +		goto out_walk;
>> +
>> +	pud = pud_alloc(mm, p4d, address);
>> +	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
>> +		goto out_walk;
>> +
>> +	pmd = pmd_offset(pud, address);
>> +	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
>> +		goto out_walk;
>> +
>> +	/*
>> +	 * The above does not allocate/instantiate page-tables because doing so
>> +	 * would lead to the possibility of instantiating page-tables after
>> +	 * free_pgtables() -- and consequently leaking them.
>> +	 *
>> +	 * The result is that we take at least one !speculative fault per PMD
>> +	 * in order to instantiate it.
>> +	 */
> 
> 
> Doing all this job and just give up because we cannot allocate page tables
> looks very wasteful to me.
> 
> Have you considered to look how we can hand over from speculative to
> non-speculative path without starting from scratch (when possible)?

Not really, but as mentioned by Benjamin and Andi, this will require care
from the architecture code.
This may be a future optimization, but it will require guarantees from the
architectures as well.

>> +	/* Transparent huge pages are not supported. */
>> +	if (unlikely(pmd_trans_huge(*pmd)))
>> +		goto out_walk;
> 
> That's looks like a blocker to me.
> 
> Is there any problem with making it supported (besides plain coding)?

To be honest, I can't remember why I added such a check, maybe for safety
reasons, but I need to double-check that again. I'll do so and come back
later with a statement.

Thanks,
Laurent.

>> +
>> +	vmf.vma = vma;
>> +	vmf.pmd = pmd;
>> +	vmf.pgoff = linear_page_index(vma, address);
>> +	vmf.gfp_mask = __get_fault_gfp_mask(vma);
>> +	vmf.sequence = seq;
>> +	vmf.flags = flags;
>> +
>> +	local_irq_enable();
>> +
>> +	/*
>> +	 * We need to re-validate the VMA after checking the bounds, otherwise
>> +	 * we might have a false positive on the bounds.
>> +	 */
>> +	if (read_seqcount_retry(&vma->vm_sequence, seq))
>> +		goto unlock;
>> +
>> +	ret = handle_pte_fault(&vmf);
>> +
>> +unlock:
>> +	srcu_read_unlock(&vma_srcu, idx);
>> +	return ret;
>> +
>> +out_walk:
>> +	local_irq_enable();
>> +	goto unlock;
>> +}
>> +#endif /* __HAVE_ARCH_CALL_SPF */
>> +
>>  /*
>>   * By the time we get here, we already hold the mm semaphore
>>   *
>> -- 
>> 2.7.4
>>
> 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-28 22:35         ` Andi Kleen
@ 2017-08-29  8:15           ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29  8:15 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Benjamin Herrenschmidt, Kirill A. Shutemov, Laurent Dufour,
	paulmck, akpm, mhocko, dave, jack, Matthew Wilcox, mpe, paulus,
	Thomas Gleixner, Ingo Molnar, hpa, Will Deacon, linux-kernel,
	linux-mm, haren, khandual, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

On Mon, Aug 28, 2017 at 03:35:11PM -0700, Andi Kleen wrote:
> Yes the whole thing is quite risky. Probably will need some
> kind of per architecture opt-in scheme?

See patch 19/20, that not enough for you?
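
(That is the per-architecture opt-in: an architecture advertises the
speculative handler with a define, e.g. for x86 in patch 19/20, quoted
later in this thread:

	#ifdef CONFIG_X86_64
	#define __HAVE_ARCH_CALL_SPF
	#endif

and the arch fault path only calls handle_speculative_fault() when
__HAVE_ARCH_CALL_SPF is defined.)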


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-28 21:14       ` Benjamin Herrenschmidt
  2017-08-28 22:35         ` Andi Kleen
@ 2017-08-29  8:33         ` Peter Zijlstra
  2017-08-29 11:27           ` Peter Zijlstra
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29  8:33 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Kirill A. Shutemov, Laurent Dufour, paulmck, akpm, ak, mhocko,
	dave, jack, Matthew Wilcox, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Tue, Aug 29, 2017 at 07:14:37AM +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2017-08-28 at 11:37 +0200, Peter Zijlstra wrote:
> > > Doing all this job and just give up because we cannot allocate page tables
> > > looks very wasteful to me.
> > > 
> > > Have you considered to look how we can hand over from speculative to
> > > non-speculative path without starting from scratch (when possible)?
> > 
> > So we _can_ in fact allocate and install page-tables, but we have to be
> > very careful about it. The interesting case is where we race with
> > free_pgtables() and install a page that was just taken out.
> > 
> > But since we already have the VMA I think we can do something like:
> 
> That makes me extremely nervous... there could be all sort of
> assumptions esp. in arch code about the fact that we never populate the
> tree without the mm sem.

That _would_ be somewhat dodgy, because that means it needs to rely on
taking mmap_sem for _writing_ to undo things and arch/powerpc/ doesn't
have many down_write.*mmap_sem:

$ git grep "down_write.*mmap_sem" arch/powerpc/
arch/powerpc/kernel/vdso.c:     if (down_write_killable(&mm->mmap_sem))
arch/powerpc/kvm/book3s_64_vio.c:       down_write(&current->mm->mmap_sem);
arch/powerpc/mm/mmu_context_iommu.c:    down_write(&mm->mmap_sem);
arch/powerpc/mm/subpage-prot.c: down_write(&mm->mmap_sem);
arch/powerpc/mm/subpage-prot.c: down_write(&mm->mmap_sem);
arch/powerpc/mm/subpage-prot.c:         down_write(&mm->mmap_sem);

Then again, I suppose it could be relying on the implicit down_write
from things like munmap() and the like..

And things _ought_ to be ordered by the various PTLs
(mm->page_table_lock and pmd->lock) which of course doesn't mean
something accidentally snuck through.

> We'd have to audit archs closely. Things like the page walk cache
> flushing on power etc...

If you point me where to look, I'll have a poke around. I'm not
quite sure what you mean with pagewalk cache flushing. Your hash thing
flushes everything inside the PTL IIRC and the radix code appears fairly
'normal'.

> I don't mind the "retry" .. .we've brought stuff in the L1 cache
> already which I would expect to be the bulk of the overhead, and the
> allocation case isn't that common. Do we have numbers to show how
> detrimental this is today?

No numbers, afaik. And like I said, I didn't consider this an actual
problem when I did these patches. But since Kirill asked ;-)


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29  8:33         ` Peter Zijlstra
@ 2017-08-29 11:27           ` Peter Zijlstra
  2017-08-29 21:19             ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29 11:27 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Kirill A. Shutemov, Laurent Dufour, paulmck, akpm, ak, mhocko,
	dave, jack, Matthew Wilcox, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Tue, Aug 29, 2017 at 10:33:52AM +0200, Peter Zijlstra wrote:
> On Tue, Aug 29, 2017 at 07:14:37AM +1000, Benjamin Herrenschmidt wrote:
> > We'd have to audit archs closely. Things like the page walk cache
> > flushing on power etc...
> 
> If you point me where to look, I'll have a poke around. I'm not
> quite sure what you mean with pagewalk cache flushing. Your hash thing
> flushes everything inside the PTL IIRC and the radix code appears fairly
> 'normal'.

mpe helped me out and explained that it is the PWC hint to TLBIE.

So, you set need_flush_all when you unhook pud/pmd/pte which you then
use to set PWC. So free_pgtables() will do the PWC when it unhooks
higher level pages.

But you're right that there are some issues: free_pgtables() itself
doesn't seem to use mm->page_table_lock,pmd->lock _AT_ALL_ to unhook the
pages.

If it were to do that, things should work fine since those locks would
then serialize against the speculative faults, we would never install a
page if the VMA would be under tear-down and it would thus not be
visible to your caches either.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29  7:59     ` Laurent Dufour
@ 2017-08-29 12:04       ` Peter Zijlstra
  2017-08-29 13:18         ` Laurent Dufour
  2017-08-30  3:48         ` Anshuman Khandual
  0 siblings, 2 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29 12:04 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Tue, Aug 29, 2017 at 09:59:30AM +0200, Laurent Dufour wrote:
> On 27/08/2017 02:18, Kirill A. Shutemov wrote:
> >> +
> >> +	if (unlikely(!vma->anon_vma))
> >> +		goto unlock;
> > 
> > It deserves a comment.
> 
> You're right I'll add it in the next version.
> For the record, the root cause is that __anon_vma_prepare() requires the
> mmap_sem to be held because vm_next and vm_prev must be safe.

But should that test not be:

	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
		goto unlock;

Because !anon vmas will never have ->anon_vma set and you don't want to
exclude those.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29 12:04       ` Peter Zijlstra
@ 2017-08-29 13:18         ` Laurent Dufour
  2017-08-29 13:45           ` Peter Zijlstra
  2017-08-30  3:48         ` Anshuman Khandual
  1 sibling, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-29 13:18 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On 29/08/2017 14:04, Peter Zijlstra wrote:
> On Tue, Aug 29, 2017 at 09:59:30AM +0200, Laurent Dufour wrote:
>> On 27/08/2017 02:18, Kirill A. Shutemov wrote:
>>>> +
>>>> +	if (unlikely(!vma->anon_vma))
>>>> +		goto unlock;
>>>
>>> It deserves a comment.
>>
>> You're right I'll add it in the next version.
>> For the record, the root cause is that __anon_vma_prepare() requires the
>> mmap_sem to be held because vm_next and vm_prev must be safe.
> 
> But should that test not be:
> 
> 	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> 		goto unlock;
> 
> Because !anon vmas will never have ->anon_vma set and you don't want to
> exclude those.

Yes in the case we later allow non anonymous vmas to be handled.
Currently only anonymous vmas are supported so the check is good enough,
isn't it ?


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29 13:18         ` Laurent Dufour
@ 2017-08-29 13:45           ` Peter Zijlstra
  2017-08-30  5:03             ` Anshuman Khandual
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-29 13:45 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Tue, Aug 29, 2017 at 03:18:25PM +0200, Laurent Dufour wrote:
> On 29/08/2017 14:04, Peter Zijlstra wrote:
> > On Tue, Aug 29, 2017 at 09:59:30AM +0200, Laurent Dufour wrote:
> >> On 27/08/2017 02:18, Kirill A. Shutemov wrote:
> >>>> +
> >>>> +	if (unlikely(!vma->anon_vma))
> >>>> +		goto unlock;
> >>>
> >>> It deserves a comment.
> >>
> >> You're right I'll add it in the next version.
> >> For the record, the root cause is that __anon_vma_prepare() requires the
> >> mmap_sem to be held because vm_next and vm_prev must be safe.
> > 
> > But should that test not be:
> > 
> > 	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > 		goto unlock;
> > 
> > Because !anon vmas will never have ->anon_vma set and you don't want to
> > exclude those.
> 
> Yes in the case we later allow non anonymous vmas to be handled.
> Currently only anonymous vmas are supported so the check is good enough,
> isn't it ?

That wasn't at all clear from reading the code. This makes it clear
->anon_vma is only ever looked at for anonymous.

And like Kirill says, we _really_ should start allowing some (if not
all) vm_ops. Large file based mappings aren't particularly rare.

I'm not sure we want to introduce a white-list or just bite the bullet
and audit all ->fault() implementations. But either works and isn't
terribly difficult, auditing all is more work though.
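
The white-list could be as small as a capability flag on the vm_ops,
checked instead of rejecting every vma that has vm_ops; the spf_safe
field below is hypothetical and only illustrates the shape:

	/* hypothetical: set once a ->fault() handler has been audited */
	if (vma->vm_ops && !vma->vm_ops->spf_safe)
		goto unlock;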


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 19/20] x86/mm: Add speculative pagefault handling
  2017-08-21  7:29   ` Anshuman Khandual
@ 2017-08-29 14:50     ` Laurent Dufour
  2017-08-29 14:58       ` Laurent Dufour
  0 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-29 14:50 UTC (permalink / raw)
  To: Anshuman Khandual, paulmck, peterz, akpm, kirill, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

On 21/08/2017 09:29, Anshuman Khandual wrote:
> On 08/18/2017 03:35 AM, Laurent Dufour wrote:
>> From: Peter Zijlstra <peterz@infradead.org>
>>
>> Try a speculative fault before acquiring mmap_sem, if it returns with
>> VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
>> traditional fault.
>>
>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>
>> [Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
>>  handle_speculative_fault()]
>> [Retry with usual fault path in the case VM_ERROR is returned by
>>  handle_speculative_fault(). This allows signal to be delivered]
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  arch/x86/include/asm/pgtable_types.h |  7 +++++++
>>  arch/x86/mm/fault.c                  | 19 +++++++++++++++++++
>>  2 files changed, 26 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
>> index bf9638e1ee42..4fd2693a037e 100644
>> --- a/arch/x86/include/asm/pgtable_types.h
>> +++ b/arch/x86/include/asm/pgtable_types.h
>> @@ -234,6 +234,13 @@ enum page_cache_mode {
>>  #define PGD_IDENT_ATTR	 0x001		/* PRESENT (no other attributes) */
>>  #endif
>>  
>> +/*
>> + * Advertise that we call the Speculative Page Fault handler.
>> + */
>> +#ifdef CONFIG_X86_64
>> +#define __HAVE_ARCH_CALL_SPF
>> +#endif
>> +
>>  #ifdef CONFIG_X86_32
>>  # include <asm/pgtable_32_types.h>
>>  #else
>> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
>> index 2a1fa10c6a98..4c070b9a4362 100644
>> --- a/arch/x86/mm/fault.c
>> +++ b/arch/x86/mm/fault.c
>> @@ -1365,6 +1365,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>>  	if (error_code & PF_INSTR)
>>  		flags |= FAULT_FLAG_INSTRUCTION;
>>  
>> +#ifdef __HAVE_ARCH_CALL_SPF
>> +	if (error_code & PF_USER) {
>> +		fault = handle_speculative_fault(mm, address, flags);
>> +
>> +		/*
>> +		 * We also check against VM_FAULT_ERROR because we have to
>> +		 * raise a signal by calling later mm_fault_error() which
>> +		 * requires the vma pointer to be set. So in that case,
>> +		 * we fall through the normal path.
> 
> Cant mm_fault_error() be called inside handle_speculative_fault() ?
> Falling through the normal page fault path again just to raise a
> signal seems overkill. Looking into mm_fault_error(), it seems they
> are different for x86 and powerpc.
> 
> X86:
> 
> mm_fault_error(struct pt_regs *regs, unsigned long error_code,
>                unsigned long address, struct vm_area_struct *vma,
>                unsigned int fault)
> 
> powerpc:
> 
> mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
> 
> Even in case of X86, I guess we would have reference to the faulting
> VMA (after the SRCU search) which can be used to call this function
> directly.

Yes I think this is doable in the case of x86.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 19/20] x86/mm: Add speculative pagefault handling
  2017-08-29 14:50     ` Laurent Dufour
@ 2017-08-29 14:58       ` Laurent Dufour
  0 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-29 14:58 UTC (permalink / raw)
  To: Anshuman Khandual, paulmck, peterz, akpm, kirill, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

On 29/08/2017 16:50, Laurent Dufour wrote:
> On 21/08/2017 09:29, Anshuman Khandual wrote:
>> On 08/18/2017 03:35 AM, Laurent Dufour wrote:
>>> From: Peter Zijlstra <peterz@infradead.org>
>>>
>>> Try a speculative fault before acquiring mmap_sem, if it returns with
>>> VM_FAULT_RETRY continue with the mmap_sem acquisition and do the
>>> traditional fault.
>>>
>>> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>>>
>>> [Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
>>>  handle_speculative_fault()]
>>> [Retry with usual fault path in the case VM_ERROR is returned by
>>>  handle_speculative_fault(). This allows signal to be delivered]
>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>> ---
>>>  arch/x86/include/asm/pgtable_types.h |  7 +++++++
>>>  arch/x86/mm/fault.c                  | 19 +++++++++++++++++++
>>>  2 files changed, 26 insertions(+)
>>>
>>> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
>>> index bf9638e1ee42..4fd2693a037e 100644
>>> --- a/arch/x86/include/asm/pgtable_types.h
>>> +++ b/arch/x86/include/asm/pgtable_types.h
>>> @@ -234,6 +234,13 @@ enum page_cache_mode {
>>>  #define PGD_IDENT_ATTR	 0x001		/* PRESENT (no other attributes) */
>>>  #endif
>>>  
>>> +/*
>>> + * Advertise that we call the Speculative Page Fault handler.
>>> + */
>>> +#ifdef CONFIG_X86_64
>>> +#define __HAVE_ARCH_CALL_SPF
>>> +#endif
>>> +
>>>  #ifdef CONFIG_X86_32
>>>  # include <asm/pgtable_32_types.h>
>>>  #else
>>> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
>>> index 2a1fa10c6a98..4c070b9a4362 100644
>>> --- a/arch/x86/mm/fault.c
>>> +++ b/arch/x86/mm/fault.c
>>> @@ -1365,6 +1365,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
>>>  	if (error_code & PF_INSTR)
>>>  		flags |= FAULT_FLAG_INSTRUCTION;
>>>  
>>> +#ifdef __HAVE_ARCH_CALL_SPF
>>> +	if (error_code & PF_USER) {
>>> +		fault = handle_speculative_fault(mm, address, flags);
>>> +
>>> +		/*
>>> +		 * We also check against VM_FAULT_ERROR because we have to
>>> +		 * raise a signal by calling later mm_fault_error() which
>>> +		 * requires the vma pointer to be set. So in that case,
>>> +		 * we fall through the normal path.
>>
>> Cant mm_fault_error() be called inside handle_speculative_fault() ?
>> Falling through the normal page fault path again just to raise a
>> signal seems overkill. Looking into mm_fault_error(), it seems they
>> are different for x86 and powerpc.
>>
>> X86:
>>
>> mm_fault_error(struct pt_regs *regs, unsigned long error_code,
>>                unsigned long address, struct vm_area_struct *vma,
>>                unsigned int fault)
>>
>> powerpc:
>>
>> mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
>>
>> Even in case of X86, I guess we would have reference to the faulting
>> VMA (after the SRCU search) which can be used to call this function
>> directly.
> 
> Yes I think this is doable in the case of x86.

Indeed this is not doable as the vma pointer is not returned by
handle_speculative_fault(), and it is not possible to return it because
once srcu_read_unlock() is called, the pointer is no longer safe.
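
To illustrate the constraint, here is a minimal sketch (not the actual
patch) of the shape of handle_speculative_fault(): everything that
dereferences the VMA has to happen inside the SRCU read-side critical
section, so the pointer cannot escape to the arch fault handler:

	/*
	 * Sketch only, using the vma_srcu/find_vma_srcu helpers of this
	 * series.
	 */
	idx = srcu_read_lock(&vma_srcu);
	vma = find_vma_srcu(mm, address);
	if (vma) {
		/*
		 * Validate the VMA and handle the fault while still
		 * inside the read-side critical section.
		 */
		ret = handle_pte_fault(&vmf);
	}
	srcu_read_unlock(&vma_srcu, idx);
	/* from here on 'vma' may be freed by an SRCU callback */
	return ret;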


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 20/20] powerpc/mm: Add speculative page fault
  2017-08-21  6:58   ` Anshuman Khandual
@ 2017-08-29 15:13     ` Laurent Dufour
  0 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-29 15:13 UTC (permalink / raw)
  To: Anshuman Khandual, paulmck, peterz, akpm, kirill, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon
  Cc: linux-kernel, linux-mm, haren, npiggin, bsingharora, Tim Chen,
	linuxppc-dev, x86

On 21/08/2017 08:58, Anshuman Khandual wrote:
> On 08/18/2017 03:35 AM, Laurent Dufour wrote:
>> This patch enable the speculative page fault on the PowerPC
>> architecture.
>>
>> This will try a speculative page fault without holding the mmap_sem,
>> if it returns with WM_FAULT_RETRY, the mmap_sem is acquired and the
> 
> s/WM_FAULT_RETRY/VM_FAULT_RETRY/

Good catch ;)

>> traditional page fault processing is done.
>>
>> Support is only provide for BOOK3S_64 currently because:
>> - require CONFIG_PPC_STD_MMU because checks done in
>>   set_access_flags_filter()
> 
> What checks are done in set_access_flags_filter() ? We are just
> adding the code block in do_page_fault().

set_access_flags_filter() is checking for vm_flags & VM_EXEC, which may be
changed behind our back, leading to a spurious WARN being displayed.
This being said, I focused on BOOK3S as this is meaningful for large
systems, and I didn't get time to check the embedded systems.

> 
>> - require BOOK3S because we can't support for book3e_hugetlb_preload()
>>   called by update_mmu_cache()
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>>  arch/powerpc/mm/fault.c                      | 30 +++++++++++++++++++++++++++-
>>  2 files changed, 34 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 818a58fc3f4f..897f8b9f67e6 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -313,6 +313,11 @@ extern unsigned long pci_io_base;
>>  /* Advertise support for _PAGE_SPECIAL */
>>  #define __HAVE_ARCH_PTE_SPECIAL
>>  
>> +/* Advertise that we call the Speculative Page Fault handler */
>> +#if defined(CONFIG_PPC_BOOK3S_64)
>> +#define __HAVE_ARCH_CALL_SPF
>> +#endif
>> +
>>  #ifndef __ASSEMBLY__
>>  
>>  /*
>> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
>> index 4c422632047b..7b3cc4c30eab 100644
>> --- a/arch/powerpc/mm/fault.c
>> +++ b/arch/powerpc/mm/fault.c
>> @@ -291,9 +291,36 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>>  	if (is_write && is_user)
>>  		store_update_sp = store_updates_sp(regs);
>>  
>> -	if (is_user)
>> +	if (is_user) {
>>  		flags |= FAULT_FLAG_USER;
>>  
>> +#if defined(__HAVE_ARCH_CALL_SPF)
>> +		/* let's try a speculative page fault without grabbing the
>> +		 * mmap_sem.
>> +		 */
>> +
>> +		/*
>> +		 * flags is set later based on the VMA's flags, for the common
>> +		 * speculative service, we need some flags to be set.
>> +		 */
>> +		if (is_write)
>> +			flags |= FAULT_FLAG_WRITE;
>> +
>> +		fault = handle_speculative_fault(mm, address, flags);
>> +		if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR)) {
>> +			perf_sw_event(PERF_COUNT_SW_SPF_DONE, 1,
>> +				      regs, address);
>> +			goto done;
> 
> Why we should retry with classical page fault on VM_FAULT_ERROR ?
> We should always return VM_FAULT_RETRY in case there is a clear
> collision some where which requires retry with classical method
> and return VM_FAULT_ERROR in cases where we know that it cannot
> be retried and fail for good. Should not handle_speculative_fault()
> be changed to accommodate this ?

There is no need to change handle_speculative_fault(): it should return
VM_FAULT_RETRY when a retry is required. If VM_FAULT_ERROR is returned, we
should be able to jump to the block dealing with VM_FAULT_ERROR and call
mm_fault_error().
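
Roughly along these lines in the do_page_fault() hunk above (a sketch
only; the exact error-handling call site would have to match the
powerpc code, so treat the names as illustrative):

	fault = handle_speculative_fault(mm, address, flags);
	if (!(fault & VM_FAULT_RETRY)) {
		if (unlikely(fault & VM_FAULT_ERROR)) {
			/*
			 * Deliver the signal here instead of retrying
			 * the whole fault through the classic path.
			 */
			return mm_fault_error(regs, address, fault);
		}
		perf_sw_event(PERF_COUNT_SW_SPF_DONE, 1, regs, address);
		goto done;
	}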


> 
>> +		}
>> +
>> +		/*
>> +		 * Resetting flags since the following code assumes
>> +		 * FAULT_FLAG_WRITE is not set.
>> +		 */
>> +		flags &= ~FAULT_FLAG_WRITE;
>> +#endif /* defined(__HAVE_ARCH_CALL_SPF) */
> 
> Setting and resetting of FAULT_FLAG_WRITE seems confusing. Why you
> say that some flags need to be set for handle_speculative_fault()
> function. Could you elaborate on this ?

FAULT_FLAG_WRITE is required to handle write accesses. In case we retry
with the classical path, the flag is reset and will be set again later if
!is_exec and is_write.




^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29 11:27           ` Peter Zijlstra
@ 2017-08-29 21:19             ` Benjamin Herrenschmidt
  2017-08-30  6:13               ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Benjamin Herrenschmidt @ 2017-08-29 21:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kirill A. Shutemov, Laurent Dufour, paulmck, akpm, ak, mhocko,
	dave, jack, Matthew Wilcox, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Tue, 2017-08-29 at 13:27 +0200, Peter Zijlstra wrote:
> mpe helped me out and explained that it is the PWC hint to TLBIE.
> 
> So, you set need_flush_all when you unhook pud/pmd/pte which you then
> use to set PWC. So free_pgtables() will do the PWC when it unhooks
> higher level pages.
> 
> But you're right that there's some issues, free_pgtables() itself
> doesn't seem to use mm->page_table_lock,pmd->lock _AT_ALL_ to unhook the
> pages.
> 
> If it were to do that, things should work fine since those locks would
> then serialize against the speculative faults, we would never install a
> page if the VMA would be under tear-down and it would thus not be
> visible to your caches either.

That's one case. I don't remember of *all* the cases to be honest, but
I do remember several times over the past few years thinking "ah we are
fine because the mm sem taken for writing protects us from any
concurrent tree structure change" :-)

Cheers,
Ben.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29 12:04       ` Peter Zijlstra
  2017-08-29 13:18         ` Laurent Dufour
@ 2017-08-30  3:48         ` Anshuman Khandual
  1 sibling, 0 replies; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-30  3:48 UTC (permalink / raw)
  To: Peter Zijlstra, Laurent Dufour
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On 08/29/2017 05:34 PM, Peter Zijlstra wrote:
> On Tue, Aug 29, 2017 at 09:59:30AM +0200, Laurent Dufour wrote:
>> On 27/08/2017 02:18, Kirill A. Shutemov wrote:
>>>> +
>>>> +	if (unlikely(!vma->anon_vma))
>>>> +		goto unlock;
>>> It deserves a comment.
>> You're right I'll add it in the next version.
>> For the record, the root cause is that __anon_vma_prepare() requires the
>> mmap_sem to be held because vm_next and vm_prev must be safe.
> But should that test not be:
> 
> 	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> 		goto unlock;

This makes more sense. We are backing off from the speculative path
because the struct anon_vma has not been created for this anonymous
vma and we cannot create it without holding mmap_sem. This should
have nothing to do with vma->vm_ops availability.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29 13:45           ` Peter Zijlstra
@ 2017-08-30  5:03             ` Anshuman Khandual
  2017-08-30  5:58               ` Peter Zijlstra
  2017-08-30  9:53               ` Laurent Dufour
  0 siblings, 2 replies; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-30  5:03 UTC (permalink / raw)
  To: Peter Zijlstra, Laurent Dufour
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On 08/29/2017 07:15 PM, Peter Zijlstra wrote:
> On Tue, Aug 29, 2017 at 03:18:25PM +0200, Laurent Dufour wrote:
>> On 29/08/2017 14:04, Peter Zijlstra wrote:
>>> On Tue, Aug 29, 2017 at 09:59:30AM +0200, Laurent Dufour wrote:
>>>> On 27/08/2017 02:18, Kirill A. Shutemov wrote:
>>>>>> +
>>>>>> +	if (unlikely(!vma->anon_vma))
>>>>>> +		goto unlock;
>>>>>
>>>>> It deserves a comment.
>>>>
>>>> You're right I'll add it in the next version.
>>>> For the record, the root cause is that __anon_vma_prepare() requires the
>>>> mmap_sem to be held because vm_next and vm_prev must be safe.
>>>
>>> But should that test not be:
>>>
>>> 	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
>>> 		goto unlock;
>>>
>>> Because !anon vmas will never have ->anon_vma set and you don't want to
>>> exclude those.
>>
>> Yes in the case we later allow non anonymous vmas to be handled.
>> Currently only anonymous vmas are supported so the check is good enough,
>> isn't it ?
> 
> That wasn't at all clear from reading the code. This makes it clear
> ->anon_vma is only ever looked at for anonymous.
> 
> And like Kirill says, we _really_ should start allowing some (if not
> all) vm_ops. Large file based mappings aren't particularly rare.
> 
> I'm not sure we want to introduce a white-list or just bite the bullet
> and audit all ->fault() implementations. But either works and isn't
> terribly difficult, auditing all is more work though.

filemap_fault() is used as vma->vm_ops->fault() for most of the file
systems. Changing it can enable speculative fault support for all of
them. It will still exclude other driver-based vma->vm_ops->fault()
implementations. AFAICS, the __lock_page_or_retry() function can drop
mm->mmap_sem if the page could not be locked right away. As suggested
by Peterz, making it understand FAULT_FLAG_SPECULATIVE should be good
enough. The patch is lightly tested for file mappings on top of this
series.

diff --git a/mm/filemap.c b/mm/filemap.c
index a497024..08f3042 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1181,6 +1181,18 @@ int __lock_page_killable(struct page *__page)
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
                         unsigned int flags)
 {
+       if (flags & FAULT_FLAG_SPECULATIVE) {
+               if (flags & FAULT_FLAG_KILLABLE) {
+                       int ret;
+
+                       ret = __lock_page_killable(page);
+                       if (ret)
+                               return 0;
+               } else
+                       __lock_page(page);
+               return 1;
+       }
+
        if (flags & FAULT_FLAG_ALLOW_RETRY) {
                /*
                 * CAUTION! In this case, mmap_sem is not released
diff --git a/mm/memory.c b/mm/memory.c
index 549d235..02347f3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3836,8 +3836,6 @@ static int handle_pte_fault(struct vm_fault *vmf)
        if (!vmf->pte) {
                if (vma_is_anonymous(vmf->vma))
                        return do_anonymous_page(vmf);
-               else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
-                       return VM_FAULT_RETRY;
                else
                        return do_fault(vmf);
        }
@@ -4012,17 +4010,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
                goto unlock;
        }

-       /*
-        * Can't call vm_ops service has we don't know what they would do
-        * with the VMA.
-        * This include huge page from hugetlbfs.
-        */
-       if (vma->vm_ops) {
-               trace_spf_vma_notsup(_RET_IP_, vma, address);
-               goto unlock;
-       }
-
-       if (unlikely(!vma->anon_vma)) {
+       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma)) {
                trace_spf_vma_notsup(_RET_IP_, vma, address);
                goto unlock;
        }


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-27  0:18   ` Kirill A. Shutemov
  2017-08-28  9:37     ` Peter Zijlstra
  2017-08-29  7:59     ` Laurent Dufour
@ 2017-08-30  5:25     ` Anshuman Khandual
  2017-08-30  8:56     ` Laurent Dufour
  3 siblings, 0 replies; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-30  5:25 UTC (permalink / raw)
  To: Kirill A. Shutemov, Laurent Dufour
  Cc: paulmck, peterz, akpm, ak, mhocko, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, linux-kernel, linux-mm, haren, khandual, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On 08/27/2017 05:48 AM, Kirill A. Shutemov wrote:
>> +	/* Transparent huge pages are not supported. */
>> +	if (unlikely(pmd_trans_huge(*pmd)))
>> +		goto out_walk;
> That's looks like a blocker to me.
> 
> Is there any problem with making it supported (besides plain coding)?

IIUC we would have to reattempt once for each PMD-level fault because
of the lack of a page table entry there. Besides, do we want to support
huge pages in general as part of the speculative page fault path? The
number of faults will be much lower (256 times lower on POWER and 512
times lower on X86). So is it worth it? BTW, calling hugetlb_fault()
after figuring out the VMA works correctly inside
handle_speculative_fault() last time I checked.
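
I.e. something along these lines inside handle_speculative_fault(), once
the VMA has been looked up and validated (sketch only, not a tested
patch):

	/*
	 * Sketch: hugetlbfs faults could be dispatched directly here
	 * instead of bailing out on vma->vm_ops.
	 */
	if (unlikely(is_vm_hugetlb_page(vma))) {
		ret = hugetlb_fault(mm, vma, address, flags);
		goto unlock;
	}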


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-30  5:03             ` Anshuman Khandual
@ 2017-08-30  5:58               ` Peter Zijlstra
  2017-08-30  9:32                 ` Laurent Dufour
  2017-08-30  9:53               ` Laurent Dufour
  1 sibling, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-30  5:58 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Laurent Dufour, Kirill A. Shutemov, paulmck, akpm, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Wed, Aug 30, 2017 at 10:33:50AM +0530, Anshuman Khandual wrote:
> diff --git a/mm/filemap.c b/mm/filemap.c
> index a497024..08f3042 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1181,6 +1181,18 @@ int __lock_page_killable(struct page *__page)
>  int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
>                          unsigned int flags)
>  {
> +       if (flags & FAULT_FLAG_SPECULATIVE) {
> +               if (flags & FAULT_FLAG_KILLABLE) {
> +                       int ret;
> +
> +                       ret = __lock_page_killable(page);
> +                       if (ret)
> +                               return 0;
> +               } else
> +                       __lock_page(page);
> +               return 1;
> +       }
> +
>         if (flags & FAULT_FLAG_ALLOW_RETRY) {
>                 /*
>                  * CAUTION! In this case, mmap_sem is not released

Yeah, that looks right.

> @@ -4012,17 +4010,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>                 goto unlock;
>         }
> 
> +       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma)) {
>                 trace_spf_vma_notsup(_RET_IP_, vma, address);
>                 goto unlock;
>         }

As riel pointed out on IRC slightly later, private file maps also need
->anon_vma and those actually have ->vm_ops IIRC so the condition needs
to be slightly more complicated.
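
Something like this, maybe (sketch only): both anonymous VMAs and private
(!VM_SHARED) file mappings can install COW anonymous pages, so both need
->anon_vma before the speculative path may proceed:

	/*
	 * Anonymous VMAs and private file mappings both need ->anon_vma;
	 * only shared file mappings never do.
	 */
	if (unlikely((vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) &&
		     !vma->anon_vma)) {
		trace_spf_vma_notsup(_RET_IP_, vma, address);
		goto unlock;
	}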


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-29 21:19             ` Benjamin Herrenschmidt
@ 2017-08-30  6:13               ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-30  6:13 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Kirill A. Shutemov, Laurent Dufour, paulmck, akpm, ak, mhocko,
	dave, jack, Matthew Wilcox, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Wed, Aug 30, 2017 at 07:19:30AM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2017-08-29 at 13:27 +0200, Peter Zijlstra wrote:
> > mpe helped me out and explained that it is the PWC hint to TLBIE.
> > 
> > So, you set need_flush_all when you unhook pud/pmd/pte which you then
> > use to set PWC. So free_pgtables() will do the PWC when it unhooks
> > higher level pages.
> > 
> > But you're right that there's some issues, free_pgtables() itself
> > doesn't seem to use mm->page_table_lock,pmd->lock _AT_ALL_ to unhook the
> > pages.
> > 
> > If it were to do that, things should work fine since those locks would
> > then serialize against the speculative faults, we would never install a
> > page if the VMA would be under tear-down and it would thus not be
> > visible to your caches either.
> 
> That's one case. I don't remember of *all* the cases to be honest, but
> I do remember several times over the past few years thinking "ah we are
> fine because the mm sem taken for writing protects us from any
> concurrent tree structure change" :-)

Well, installing always seems to use the locks (it needs to, because it's
always done with down_read()), so that only leaves removal, and the only
place I know of that removes stuff is free_pgtables().

But I think I found another fun place, copy_page_range(). While it
(pointlessly) takes all the PTLs on the dst mm, it walks the src page
tables without any PTLs.

This means that if we have a multi-threaded process doing fork() a
thread of the src mm could instantiate page-tables that will not be
copied over.

Of course, this is highly dubious behaviour to begin with, and I don't
think there's anything fundamentally wrong with missing those pages but
we should document this stuff.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-27  0:18   ` Kirill A. Shutemov
                       ` (2 preceding siblings ...)
  2017-08-30  5:25     ` Anshuman Khandual
@ 2017-08-30  8:56     ` Laurent Dufour
  3 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-30  8:56 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: paulmck, peterz, akpm, ak, mhocko, dave, jack, Matthew Wilcox,
	benh, mpe, paulus, Thomas Gleixner, Ingo Molnar, hpa,
	Will Deacon, linux-kernel, linux-mm, haren, khandual, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On 27/08/2017 02:18, Kirill A. Shutemov wrote:
> On Fri, Aug 18, 2017 at 12:05:13AM +0200, Laurent Dufour wrote:
>> +/*
>> + * vm_normal_page() adds some processing which should be done while
>> + * hodling the mmap_sem.
>> + */
>> +int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>> +			     unsigned int flags)
>> +{
>> +	struct vm_fault vmf = {
>> +		.address = address,
>> +	};
>> +	pgd_t *pgd;
>> +	p4d_t *p4d;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +	int dead, seq, idx, ret = VM_FAULT_RETRY;
>> +	struct vm_area_struct *vma;
>> +	struct mempolicy *pol;
>> +
>> +	/* Clear flags that may lead to release the mmap_sem to retry */
>> +	flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
>> +	flags |= FAULT_FLAG_SPECULATIVE;
>> +
>> +	idx = srcu_read_lock(&vma_srcu);
>> +	vma = find_vma_srcu(mm, address);
>> +	if (!vma)
>> +		goto unlock;
>> +
>> +	/*
>> +	 * Validate the VMA found by the lockless lookup.
>> +	 */
>> +	dead = RB_EMPTY_NODE(&vma->vm_rb);
>> +	seq = raw_read_seqcount(&vma->vm_sequence); /* rmb <-> seqlock,vma_rb_erase() */
>> +	if ((seq & 1) || dead)
>> +		goto unlock;
>> +
>> +	/*
>> +	 * Can't call vm_ops service has we don't know what they would do
>> +	 * with the VMA.
>> +	 * This include huge page from hugetlbfs.
>> +	 */
>> +	if (vma->vm_ops)
>> +		goto unlock;
> 
> I think we need to have a way to white-list safe ->vm_ops.
> 
>> +
>> +	if (unlikely(!vma->anon_vma))
>> +		goto unlock;
> 
> It deserves a comment.
> 
>> +
>> +	vmf.vma_flags = READ_ONCE(vma->vm_flags);
>> +	vmf.vma_page_prot = READ_ONCE(vma->vm_page_prot);
>> +
>> +	/* Can't call userland page fault handler in the speculative path */
>> +	if (unlikely(vmf.vma_flags & VM_UFFD_MISSING))
>> +		goto unlock;
>> +
>> +	/*
>> +	 * MPOL_INTERLEAVE implies additional check in mpol_misplaced() which
>> +	 * are not compatible with the speculative page fault processing.
>> +	 */
>> +	pol = __get_vma_policy(vma, address);
>> +	if (!pol)
>> +		pol = get_task_policy(current);
>> +	if (pol && pol->mode == MPOL_INTERLEAVE)
>> +		goto unlock;
>> +
>> +	if (vmf.vma_flags & VM_GROWSDOWN || vmf.vma_flags & VM_GROWSUP)
>> +		/*
>> +		 * This could be detected by the check address against VMA's
>> +		 * boundaries but we want to trace it as not supported instead
>> +		 * of changed.
>> +		 */
>> +		goto unlock;
>> +
>> +	if (address < READ_ONCE(vma->vm_start)
>> +	    || READ_ONCE(vma->vm_end) <= address)
>> +		goto unlock;
>> +
>> +	/*
>> +	 * The three following checks are copied from access_error from
>> +	 * arch/x86/mm/fault.c
>> +	 */
>> +	if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
>> +				       flags & FAULT_FLAG_INSTRUCTION,
>> +				       flags & FAULT_FLAG_REMOTE))
>> +		goto unlock;
>> +
>> +	/* This is one is required to check that the VMA has write access set */
>> +	if (flags & FAULT_FLAG_WRITE) {
>> +		if (unlikely(!(vmf.vma_flags & VM_WRITE)))
>> +			goto unlock;
>> +	} else {
>> +		if (unlikely(!(vmf.vma_flags & (VM_READ | VM_EXEC | VM_WRITE))))
>> +			goto unlock;
>> +	}
>> +
>> +	/*
>> +	 * Do a speculative lookup of the PTE entry.
>> +	 */
>> +	local_irq_disable();
>> +	pgd = pgd_offset(mm, address);
>> +	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
>> +		goto out_walk;
>> +
>> +	p4d = p4d_alloc(mm, pgd, address);
>> +	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
>> +		goto out_walk;
>> +
>> +	pud = pud_alloc(mm, p4d, address);
>> +	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
>> +		goto out_walk;
>> +
>> +	pmd = pmd_offset(pud, address);
>> +	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
>> +		goto out_walk;
>> +
>> +	/*
>> +	 * The above does not allocate/instantiate page-tables because doing so
>> +	 * would lead to the possibility of instantiating page-tables after
>> +	 * free_pgtables() -- and consequently leaking them.
>> +	 *
>> +	 * The result is that we take at least one !speculative fault per PMD
>> +	 * in order to instantiate it.
>> +	 */
> 
> 
> Doing all this job and just give up because we cannot allocate page tables
> looks very wasteful to me.
> 
> Have you considered to look how we can hand over from speculative to
> non-speculative path without starting from scratch (when possible)?
> 
>> +	/* Transparent huge pages are not supported. */
>> +	if (unlikely(pmd_trans_huge(*pmd)))
>> +		goto out_walk;
> 
> That's looks like a blocker to me.
> 
> Is there any problem with making it supported (besides plain coding)?

This is not straightforward, as THP is mainly handled in
__handle_mm_fault(), which is not called during the speculative path.
Having THP handled in the speculative path sounds doable but I'd have to
double check all the callees more deeply, and this will require either
redesigning __handle_mm_fault() or doing the job in a dedicated way in
handle_speculative_fault().
Furthermore, we should handle both PUD- and PMD-level huge pages.

This being said, I can't see any blocking issue at this time except plain
coding, but I'd prefer to get it done in a next step, as an optimization,
since huge page faults are far less frequent by design.

Having _standard_ page faults handled in a speculative way is already
providing a good performance improvement, so we should consider having it
upstreamed and then adding support for THP as well as other compatible
vm_ops like hugetlb, shouldn't we?

Cheers,
Laurent.

>> +
>> +	vmf.vma = vma;
>> +	vmf.pmd = pmd;
>> +	vmf.pgoff = linear_page_index(vma, address);
>> +	vmf.gfp_mask = __get_fault_gfp_mask(vma);
>> +	vmf.sequence = seq;
>> +	vmf.flags = flags;
>> +
>> +	local_irq_enable();
>> +
>> +	/*
>> +	 * We need to re-validate the VMA after checking the bounds, otherwise
>> +	 * we might have a false positive on the bounds.
>> +	 */
>> +	if (read_seqcount_retry(&vma->vm_sequence, seq))
>> +		goto unlock;
>> +
>> +	ret = handle_pte_fault(&vmf);
>> +
>> +unlock:
>> +	srcu_read_unlock(&vma_srcu, idx);
>> +	return ret;
>> +
>> +out_walk:
>> +	local_irq_enable();
>> +	goto unlock;
>> +}
>> +#endif /* __HAVE_ARCH_CALL_SPF */
>> +
>>  /*
>>   * By the time we get here, we already hold the mm semaphore
>>   *
>> -- 
>> 2.7.4
>>
> 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-30  5:58               ` Peter Zijlstra
@ 2017-08-30  9:32                 ` Laurent Dufour
  2017-08-31  6:55                   ` Anshuman Khandual
  0 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-08-30  9:32 UTC (permalink / raw)
  To: Peter Zijlstra, Anshuman Khandual
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On 30/08/2017 07:58, Peter Zijlstra wrote:
> On Wed, Aug 30, 2017 at 10:33:50AM +0530, Anshuman Khandual wrote:
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index a497024..08f3042 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -1181,6 +1181,18 @@ int __lock_page_killable(struct page *__page)
>>  int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
>>                          unsigned int flags)
>>  {
>> +       if (flags & FAULT_FLAG_SPECULATIVE) {
>> +               if (flags & FAULT_FLAG_KILLABLE) {
>> +                       int ret;
>> +
>> +                       ret = __lock_page_killable(page);
>> +                       if (ret)
>> +                               return 0;
>> +               } else
>> +                       __lock_page(page);
>> +               return 1;
>> +       }
>> +
>>         if (flags & FAULT_FLAG_ALLOW_RETRY) {
>>                 /*
>>                  * CAUTION! In this case, mmap_sem is not released
> 
> Yeah, that looks right.

Hum, I'm wondering if FAULT_FLAG_RETRY_NOWAIT should be forced in the
speculative path in that case to match the semantics of
__lock_page_or_retry().
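
I.e. the FAULT_FLAG_SPECULATIVE branch proposed above could honour
FAULT_FLAG_RETRY_NOWAIT instead of always sleeping on the page lock
(a sketch of the idea only):

	if (flags & FAULT_FLAG_SPECULATIVE) {
		/* honour NOWAIT: report failure rather than sleeping */
		if (flags & FAULT_FLAG_RETRY_NOWAIT)
			return 0;
		if (flags & FAULT_FLAG_KILLABLE)
			return __lock_page_killable(page) ? 0 : 1;
		__lock_page(page);
		return 1;
	}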

> 
>> @@ -4012,17 +4010,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>>                 goto unlock;
>>         }
>>
>> +       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma)) {
>>                 trace_spf_vma_notsup(_RET_IP_, vma, address);
>>                 goto unlock;
>>         }
> 
> As riel pointed out on IRC slightly later, private file maps also need
> ->anon_vma and those actually have ->vm_ops IIRC so the condition needs
> to be slightly more complicated.

Yes, I read the code again and came to the same conclusion.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-30  5:03             ` Anshuman Khandual
  2017-08-30  5:58               ` Peter Zijlstra
@ 2017-08-30  9:53               ` Laurent Dufour
  1 sibling, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-08-30  9:53 UTC (permalink / raw)
  To: Anshuman Khandual, Peter Zijlstra
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On 30/08/2017 07:03, Anshuman Khandual wrote:
> On 08/29/2017 07:15 PM, Peter Zijlstra wrote:
>> On Tue, Aug 29, 2017 at 03:18:25PM +0200, Laurent Dufour wrote:
>>> On 29/08/2017 14:04, Peter Zijlstra wrote:
>>>> On Tue, Aug 29, 2017 at 09:59:30AM +0200, Laurent Dufour wrote:
>>>>> On 27/08/2017 02:18, Kirill A. Shutemov wrote:
>>>>>>> +
>>>>>>> +	if (unlikely(!vma->anon_vma))
>>>>>>> +		goto unlock;
>>>>>>
>>>>>> It deserves a comment.
>>>>>
>>>>> You're right I'll add it in the next version.
>>>>> For the record, the root cause is that __anon_vma_prepare() requires the
>>>>> mmap_sem to be held because vm_next and vm_prev must be safe.
>>>>
>>>> But should that test not be:
>>>>
>>>> 	if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
>>>> 		goto unlock;
>>>>
>>>> Because !anon vmas will never have ->anon_vma set and you don't want to
>>>> exclude those.
>>>
>>> Yes in the case we later allow non anonymous vmas to be handled.
>>> Currently only anonymous vmas are supported so the check is good enough,
>>> isn't it ?
>>
>> That wasn't at all clear from reading the code. This makes it clear
>> ->anon_vma is only ever looked at for anonymous.
>>
>> And like Kirill says, we _really_ should start allowing some (if not
>> all) vm_ops. Large file based mappings aren't particularly rare.
>>
>> I'm not sure we want to introduce a white-list or just bite the bullet
>> and audit all ->fault() implementations. But either works and isn't
>> terribly difficult, auditing all is more work though.
> 
> filemap_fault() is used as vma->vm_ops->fault() for most of the file
> systems. Changing it can enable speculative fault support for all of
> them. It will still exclude other driver-based vma->vm_ops->fault()
> implementations. AFAICS, the __lock_page_or_retry() function can drop
> mm->mmap_sem if the page could not be locked right away. As suggested
> by Peterz, making it understand FAULT_FLAG_SPECULATIVE should be good
> enough. The patch is lightly tested for file mappings on top of this
> series.

Hi Anshuman,

This sounds pretty good, except for the FAULT_FLAG_RETRY_NOWAIT case I
mentioned in another mail.

The next step would be to find a way to discriminate between the
vm_ops->fault() implementations. Any idea?
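
One possible shape would be an explicit opt-in, for instance (sketch
only, the field name is made up):

	/* hypothetical opt-in field in struct vm_operations_struct */
	bool spf_safe;	/* ->fault() is safe without mmap_sem held */

	/* and in handle_speculative_fault() */
	if (vma->vm_ops && !vma->vm_ops->spf_safe) {
		trace_spf_vma_notsup(_RET_IP_, vma, address);
		goto unlock;
	}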

Thanks,
Laurent.

> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index a497024..08f3042 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1181,6 +1181,18 @@ int __lock_page_killable(struct page *__page)
>  int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
>                          unsigned int flags)
>  {
> +       if (flags & FAULT_FLAG_SPECULATIVE) {
> +               if (flags & FAULT_FLAG_KILLABLE) {
> +                       int ret;
> +
> +                       ret = __lock_page_killable(page);
> +                       if (ret)
> +                               return 0;
> +               } else
> +                       __lock_page(page);
> +               return 1;
> +       }
> +
>         if (flags & FAULT_FLAG_ALLOW_RETRY) {
>                 /*
>                  * CAUTION! In this case, mmap_sem is not released
> diff --git a/mm/memory.c b/mm/memory.c
> index 549d235..02347f3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3836,8 +3836,6 @@ static int handle_pte_fault(struct vm_fault *vmf)
>         if (!vmf->pte) {
>                 if (vma_is_anonymous(vmf->vma))
>                         return do_anonymous_page(vmf);
> -               else if (vmf->flags & FAULT_FLAG_SPECULATIVE)
> -                       return VM_FAULT_RETRY;
>                 else
>                         return do_fault(vmf);
>         }
> @@ -4012,17 +4010,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
>                 goto unlock;
>         }
> 
> -       /*
> -        * Can't call vm_ops service has we don't know what they would do
> -        * with the VMA.
> -        * This include huge page from hugetlbfs.
> -        */
> -       if (vma->vm_ops) {
> -               trace_spf_vma_notsup(_RET_IP_, vma, address);
> -               goto unlock;
> -       }
> -
> -       if (unlikely(!vma->anon_vma)) {
> +       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma)) {
>                 trace_spf_vma_notsup(_RET_IP_, vma, address);
>                 goto unlock;
>         }
> 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-30  9:32                 ` Laurent Dufour
@ 2017-08-31  6:55                   ` Anshuman Khandual
  2017-08-31  7:31                     ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Anshuman Khandual @ 2017-08-31  6:55 UTC (permalink / raw)
  To: Laurent Dufour, Peter Zijlstra, Anshuman Khandual
  Cc: Kirill A. Shutemov, paulmck, akpm, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, npiggin,
	bsingharora, Tim Chen, linuxppc-dev, x86

On 08/30/2017 03:02 PM, Laurent Dufour wrote:
> On 30/08/2017 07:58, Peter Zijlstra wrote:
>> On Wed, Aug 30, 2017 at 10:33:50AM +0530, Anshuman Khandual wrote:
>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>> index a497024..08f3042 100644
>>> --- a/mm/filemap.c
>>> +++ b/mm/filemap.c
>>> @@ -1181,6 +1181,18 @@ int __lock_page_killable(struct page *__page)
>>>  int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
>>>                          unsigned int flags)
>>>  {
>>> +       if (flags & FAULT_FLAG_SPECULATIVE) {
>>> +               if (flags & FAULT_FLAG_KILLABLE) {
>>> +                       int ret;
>>> +
>>> +                       ret = __lock_page_killable(page);
>>> +                       if (ret)
>>> +                               return 0;
>>> +               } else
>>> +                       __lock_page(page);
>>> +               return 1;
>>> +       }
>>> +
>>>         if (flags & FAULT_FLAG_ALLOW_RETRY) {
>>>                 /*
>>>                  * CAUTION! In this case, mmap_sem is not released
>>
>> Yeah, that looks right.
> 
> Hum, I'm wondering if FAULT_FLAG_RETRY_NOWAIT should be forced in the
> speculative path in that case to match the semantics of
> __lock_page_or_retry().

Doing that would force us to have another retry through the classic fault
path, wasting all the work done till now through SPF. Hence it may be
better to just wait, get the lock here and complete the fault. Peterz,
would you agree? Or should we do as suggested by Laurent? Moreover,
forcing FAULT_FLAG_RETRY_NOWAIT on FAULT_FLAG_SPECULATIVE at this point
would look like a hack.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 14/20] mm: Provide speculative fault infrastructure
  2017-08-31  6:55                   ` Anshuman Khandual
@ 2017-08-31  7:31                     ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2017-08-31  7:31 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Laurent Dufour, Kirill A. Shutemov, paulmck, akpm, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On Thu, Aug 31, 2017 at 12:25:16PM +0530, Anshuman Khandual wrote:
> On 08/30/2017 03:02 PM, Laurent Dufour wrote:
> > On 30/08/2017 07:58, Peter Zijlstra wrote:
> >> On Wed, Aug 30, 2017 at 10:33:50AM +0530, Anshuman Khandual wrote:
> >>> diff --git a/mm/filemap.c b/mm/filemap.c
> >>> index a497024..08f3042 100644
> >>> --- a/mm/filemap.c
> >>> +++ b/mm/filemap.c
> >>> @@ -1181,6 +1181,18 @@ int __lock_page_killable(struct page *__page)
> >>>  int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
> >>>                          unsigned int flags)
> >>>  {
> >>> +       if (flags & FAULT_FLAG_SPECULATIVE) {
> >>> +               if (flags & FAULT_FLAG_KILLABLE) {
> >>> +                       int ret;
> >>> +
> >>> +                       ret = __lock_page_killable(page);
> >>> +                       if (ret)
> >>> +                               return 0;
> >>> +               } else
> >>> +                       __lock_page(page);
> >>> +               return 1;
> >>> +       }
> >>> +
> >>>         if (flags & FAULT_FLAG_ALLOW_RETRY) {
> >>>                 /*
> >>>                  * CAUTION! In this case, mmap_sem is not released
> >>
> >> Yeah, that looks right.
> > 
> > Hum, I'm wondering if FAULT_FLAG_RETRY_NOWAIT should be forced in the
> > speculative path in that case to match the semantics of
> > __lock_page_or_retry().
> 
> Doing that would force us to have another retry through the classic fault
> path, wasting all the work done till now through SPF. Hence it may be
> better to just wait, get the lock here and complete the fault. Peterz,
> would you agree? Or should we do as suggested by Laurent? Moreover,
> forcing FAULT_FLAG_RETRY_NOWAIT on FAULT_FLAG_SPECULATIVE at this point
> would look like a hack.

Is there ever a situation where SPECULATIVE and NOWAIT are used
together? That seems like something to avoid.

A git-grep seems to suggest gup() can set it, but gup() will not be
doing speculative faults. s390 also sets it, but then again, they don't
have speculative fault support yet and when they do they can avoid
setting them together.

So maybe put in a WARN_ON_ONCE() on having both of them; it is not
something that makes sense to me, but maybe someone sees a rationale
for it?
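
I.e. something like (sketch):

	/* SPECULATIVE and RETRY_NOWAIT together make no sense for now */
	WARN_ON_ONCE((flags & FAULT_FLAG_SPECULATIVE) &&
		     (flags & FAULT_FLAG_RETRY_NOWAIT));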


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-08-21  2:26 ` [PATCH v2 00/20] Speculative page faults Sergey Senozhatsky
@ 2017-09-08  9:24   ` Laurent Dufour
  2017-09-11  0:45     ` Sergey Senozhatsky
  0 siblings, 1 reply; 61+ messages in thread
From: Laurent Dufour @ 2017-09-08  9:24 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On 21/08/2017 04:26, Sergey Senozhatsky wrote:
> Hello,
> 
> On (08/18/17 00:04), Laurent Dufour wrote:
>> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
>> handle page fault without holding the mm semaphore [1].
>>
>> The idea is to try to handle user space page faults without holding the
>> mmap_sem. This should allow better concurrency for massively threaded
>> process since the page fault handler will not wait for other threads memory
>> layout change to be done, assuming that this change is done in another part
>> of the process's memory space. This type page fault is named speculative
>> page fault. If the speculative page fault fails because of a concurrency is
>> detected or because underlying PMD or PTE tables are not yet allocating, it
>> is failing its processing and a classic page fault is then tried.
>>
>> The speculative page fault (SPF) has to look for the VMA matching the fault
>> address without holding the mmap_sem, so the VMA list is now managed using
>> SRCU allowing lockless walking. The only impact would be the deferred file
>> derefencing in the case of a file mapping, since the file pointer is
>> released once the SRCU cleaning is done.  This patch relies on the change
>> done recently by Paul McKenney in SRCU which now runs a callback per CPU
>> instead of per SRCU structure [1].
>>
>> The VMA's attributes checked during the speculative page fault processing
>> have to be protected against parallel changes. This is done by using a per
>> VMA sequence lock. This sequence lock allows the speculative page fault
>> handler to fast check for parallel changes in progress and to abort the
>> speculative page fault in that case.
>>
>> Once the VMA is found, the speculative page fault handler would check for
>> the VMA's attributes to verify that the page fault has to be handled
>> correctly or not. Thus the VMA is protected through a sequence lock which
>> allows fast detection of concurrent VMA changes. If such a change is
>> detected, the speculative page fault is aborted and a *classic* page fault
>> is tried.  VMA sequence locks are added when VMA attributes which are
>> checked during the page fault are modified.
>>
>> When the PTE is fetched, the VMA is checked to see if it has been changed,
>> so once the page table is locked, the VMA is valid, so any other changes
>> leading to touching this PTE will need to lock the page table, so no
>> parallel change is possible at this time.
> 
> [ 2311.315400] ======================================================
> [ 2311.315401] WARNING: possible circular locking dependency detected
> [ 2311.315403] 4.13.0-rc5-next-20170817-dbg-00039-gaf11d7500492-dirty #1743 Not tainted
> [ 2311.315404] ------------------------------------------------------
> [ 2311.315406] khugepaged/43 is trying to acquire lock:
> [ 2311.315407]  (&mapping->i_mmap_rwsem){++++}, at: [<ffffffff8111b339>] rmap_walk_file+0x5a/0x147
> [ 2311.315415] 
>                but task is already holding lock:
> [ 2311.315416]  (fs_reclaim){+.+.}, at: [<ffffffff810ebd80>] fs_reclaim_acquire+0x12/0x35
> [ 2311.315420] 
>                which lock already depends on the new lock.
> 
> [ 2311.315422] 
>                the existing dependency chain (in reverse order) is:
> [ 2311.315423] 
>                -> #3 (fs_reclaim){+.+.}:
> [ 2311.315427]        fs_reclaim_acquire+0x32/0x35
> [ 2311.315429]        __alloc_pages_nodemask+0x8d/0x217
> [ 2311.315432]        pte_alloc_one+0x13/0x5e
> [ 2311.315434]        __pte_alloc+0x1f/0x83
> [ 2311.315436]        move_page_tables+0x2c9/0x5ac
> [ 2311.315438]        move_vma.isra.25+0xff/0x2a2
> [ 2311.315439]        SyS_mremap+0x41b/0x49e
> [ 2311.315442]        entry_SYSCALL_64_fastpath+0x18/0xad
> [ 2311.315443] 
>                -> #2 (&vma->vm_sequence/1){+.+.}:
> [ 2311.315449]        write_seqcount_begin_nested+0x1b/0x1d
> [ 2311.315451]        __vma_adjust+0x1b7/0x5d6
> [ 2311.315453]        __split_vma+0x142/0x1a3
> [ 2311.315454]        do_munmap+0x128/0x2af
> [ 2311.315455]        vm_munmap+0x5a/0x73
> [ 2311.315458]        elf_map+0xb1/0xce
> [ 2311.315459]        load_elf_binary+0x8e0/0x1348
> [ 2311.315462]        search_binary_handler+0x70/0x1f3
> [ 2311.315464]        load_script+0x1a6/0x1b5
> [ 2311.315466]        search_binary_handler+0x70/0x1f3
> [ 2311.315468]        do_execveat_common+0x461/0x691
> [ 2311.315471]        kernel_init+0x5a/0xf0
> [ 2311.315472]        ret_from_fork+0x27/0x40
> [ 2311.315473] 
>                -> #1 (&vma->vm_sequence){+.+.}:
> [ 2311.315478]        write_seqcount_begin_nested+0x1b/0x1d
> [ 2311.315480]        __vma_adjust+0x19c/0x5d6
> [ 2311.315481]        __split_vma+0x142/0x1a3
> [ 2311.315482]        do_munmap+0x128/0x2af
> [ 2311.315484]        vm_munmap+0x5a/0x73
> [ 2311.315485]        elf_map+0xb1/0xce
> [ 2311.315487]        load_elf_binary+0x8e0/0x1348
> [ 2311.315489]        search_binary_handler+0x70/0x1f3
> [ 2311.315490]        load_script+0x1a6/0x1b5
> [ 2311.315492]        search_binary_handler+0x70/0x1f3
> [ 2311.315494]        do_execveat_common+0x461/0x691
> [ 2311.315496]        kernel_init+0x5a/0xf0
> [ 2311.315497]        ret_from_fork+0x27/0x40
> [ 2311.315498] 
>                -> #0 (&mapping->i_mmap_rwsem){++++}:
> [ 2311.315503]        lock_acquire+0x176/0x19e
> [ 2311.315505]        down_read+0x3b/0x55
> [ 2311.315507]        rmap_walk_file+0x5a/0x147
> [ 2311.315508]        page_referenced+0x11c/0x134
> [ 2311.315511]        shrink_page_list+0x36b/0xb80
> [ 2311.315512]        shrink_inactive_list+0x1d9/0x437
> [ 2311.315514]        shrink_node_memcg.constprop.71+0x3e7/0x571
> [ 2311.315515]        shrink_node+0x3f/0x149
> [ 2311.315517]        try_to_free_pages+0x270/0x45f
> [ 2311.315518]        __alloc_pages_slowpath+0x34a/0xaa2
> [ 2311.315520]        __alloc_pages_nodemask+0x111/0x217
> [ 2311.315523]        khugepaged_alloc_page+0x17/0x45
> [ 2311.315524]        khugepaged+0xa29/0x16b5
> [ 2311.315527]        kthread+0xfb/0x103
> [ 2311.315529]        ret_from_fork+0x27/0x40
> [ 2311.315530] 
>                other info that might help us debug this:
> 
> [ 2311.315531] Chain exists of:
>                  &mapping->i_mmap_rwsem --> &vma->vm_sequence/1 --> fs_reclaim

Hi Sergey,

I can't see where such a chain could happen.

I tried to recreate it on top of the latest mm tree to get the latest stack
output, but I can't reproduce it.
How did you trigger this one?

Thanks,
Laurent.


> 
> [ 2311.315537]  Possible unsafe locking scenario:
> 
> [ 2311.315538]        CPU0                    CPU1
> [ 2311.315539]        ----                    ----
> [ 2311.315540]   lock(fs_reclaim);
> [ 2311.315542]                                lock(&vma->vm_sequence/1);
> [ 2311.315545]                                lock(fs_reclaim);
> [ 2311.315547]   lock(&mapping->i_mmap_rwsem);
> [ 2311.315549] 
>                 *** DEADLOCK ***
> 
> [ 2311.315551] 1 lock held by khugepaged/43:
> [ 2311.315552]  #0:  (fs_reclaim){+.+.}, at: [<ffffffff810ebd80>] fs_reclaim_acquire+0x12/0x35
> [ 2311.315556] 
>                stack backtrace:
> [ 2311.315559] CPU: 0 PID: 43 Comm: khugepaged Not tainted 4.13.0-rc5-next-20170817-dbg-00039-gaf11d7500492-dirty #1743
> [ 2311.315560] Call Trace:
> [ 2311.315564]  dump_stack+0x67/0x8e
> [ 2311.315568]  print_circular_bug.isra.39+0x1c7/0x1d4
> [ 2311.315570]  __lock_acquire+0xb1a/0xe06
> [ 2311.315572]  ? graph_unlock+0x69/0x69
> [ 2311.315575]  lock_acquire+0x176/0x19e
> [ 2311.315577]  ? rmap_walk_file+0x5a/0x147
> [ 2311.315579]  down_read+0x3b/0x55
> [ 2311.315581]  ? rmap_walk_file+0x5a/0x147
> [ 2311.315583]  rmap_walk_file+0x5a/0x147
> [ 2311.315585]  page_referenced+0x11c/0x134
> [ 2311.315587]  ? page_vma_mapped_walk_done.isra.15+0xb/0xb
> [ 2311.315589]  ? page_get_anon_vma+0x6d/0x6d
> [ 2311.315591]  shrink_page_list+0x36b/0xb80
> [ 2311.315593]  ? _raw_spin_unlock_irq+0x29/0x46
> [ 2311.315595]  shrink_inactive_list+0x1d9/0x437
> [ 2311.315597]  shrink_node_memcg.constprop.71+0x3e7/0x571
> [ 2311.315600]  shrink_node+0x3f/0x149
> [ 2311.315602]  try_to_free_pages+0x270/0x45f
> [ 2311.315604]  __alloc_pages_slowpath+0x34a/0xaa2
> [ 2311.315608]  ? ___might_sleep+0xd5/0x234
> [ 2311.315609]  __alloc_pages_nodemask+0x111/0x217
> [ 2311.315612]  khugepaged_alloc_page+0x17/0x45
> [ 2311.315613]  khugepaged+0xa29/0x16b5
> [ 2311.315616]  ? remove_wait_queue+0x47/0x47
> [ 2311.315618]  ? collapse_shmem.isra.43+0x882/0x882
> [ 2311.315620]  kthread+0xfb/0x103
> [ 2311.315622]  ? __list_del_entry+0x1d/0x1d
> [ 2311.315624]  ret_from_fork+0x27/0x40
> 
> 	-ss
> 


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-09-08  9:24   ` Laurent Dufour
@ 2017-09-11  0:45     ` Sergey Senozhatsky
  2017-09-11  6:28       ` Laurent Dufour
  0 siblings, 1 reply; 61+ messages in thread
From: Sergey Senozhatsky @ 2017-09-11  0:45 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Sergey Senozhatsky, paulmck, peterz, akpm, kirill, ak, mhocko,
	dave, jack, Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner,
	Ingo Molnar, hpa, Will Deacon, linux-kernel, linux-mm, haren,
	khandual, npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On (09/08/17 11:24), Laurent Dufour wrote:
> Hi Sergey,
> 
> I can't see where such a chain could happen.
> 
> I tried to recreate it on top of the latest mm tree to get the latest stack
> output, but I can't reproduce it.
> How did you trigger this one?

Hi Laurent,

didn't do anything special, the box even wasn't under severe memory
pressure. can re-test your new patch set.

	-ss


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/20] Speculative page faults
  2017-09-11  0:45     ` Sergey Senozhatsky
@ 2017-09-11  6:28       ` Laurent Dufour
  0 siblings, 0 replies; 61+ messages in thread
From: Laurent Dufour @ 2017-09-11  6:28 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: paulmck, peterz, akpm, kirill, ak, mhocko, dave, jack,
	Matthew Wilcox, benh, mpe, paulus, Thomas Gleixner, Ingo Molnar,
	hpa, Will Deacon, linux-kernel, linux-mm, haren, khandual,
	npiggin, bsingharora, Tim Chen, linuxppc-dev, x86

On 11/09/2017 02:45, Sergey Senozhatsky wrote:
> On (09/08/17 11:24), Laurent Dufour wrote:
>> Hi Sergey,
>>
>> I can't see where such a chain could happen.
>>
>> I tried to recreate it on top of the latest mm tree to get the latest stack
>> output, but I can't reproduce it.
>> How did you trigger this one?
> 
> Hi Laurent,
> 
> didn't do anything special, the box even wasn't under severe memory
> pressure. can re-test your new patch set.

Hi Sergey,

I sent a v3 series, would you please give it a try?

Thanks,
Laurent.


^ permalink raw reply	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2017-09-11  6:28 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-17 22:04 [PATCH v2 00/20] Speculative page faults Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 01/20] mm: Dont assume page-table invariance during faults Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 02/20] mm: Prepare for FAULT_FLAG_SPECULATIVE Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 03/20] mm: Introduce pte_spinlock " Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 04/20] mm: VMA sequence count Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 05/20] mm: Protect VMA modifications using " Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 06/20] mm: RCU free VMAs Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 07/20] mm: Cache some VMA fields in the vm_fault structure Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 08/20] mm: Protect SPF handler against anon_vma changes Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 09/20] mm/migrate: Pass vm_fault pointer to migrate_misplaced_page() Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 10/20] mm: Introduce __lru_cache_add_active_or_unevictable Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 11/20] mm: Introduce __maybe_mkwrite() Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 12/20] mm: Introduce __vm_normal_page() Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 13/20] mm: Introduce __page_add_new_anon_rmap() Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 14/20] mm: Provide speculative fault infrastructure Laurent Dufour
2017-08-20 12:11   ` Sergey Senozhatsky
2017-08-25  8:52     ` Laurent Dufour
2017-08-27  0:18   ` Kirill A. Shutemov
2017-08-28  9:37     ` Peter Zijlstra
2017-08-28 21:14       ` Benjamin Herrenschmidt
2017-08-28 22:35         ` Andi Kleen
2017-08-29  8:15           ` Peter Zijlstra
2017-08-29  8:33         ` Peter Zijlstra
2017-08-29 11:27           ` Peter Zijlstra
2017-08-29 21:19             ` Benjamin Herrenschmidt
2017-08-30  6:13               ` Peter Zijlstra
2017-08-29  7:59     ` Laurent Dufour
2017-08-29 12:04       ` Peter Zijlstra
2017-08-29 13:18         ` Laurent Dufour
2017-08-29 13:45           ` Peter Zijlstra
2017-08-30  5:03             ` Anshuman Khandual
2017-08-30  5:58               ` Peter Zijlstra
2017-08-30  9:32                 ` Laurent Dufour
2017-08-31  6:55                   ` Anshuman Khandual
2017-08-31  7:31                     ` Peter Zijlstra
2017-08-30  9:53               ` Laurent Dufour
2017-08-30  3:48         ` Anshuman Khandual
2017-08-30  5:25     ` Anshuman Khandual
2017-08-30  8:56     ` Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 15/20] mm: Try spin lock in speculative path Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 16/20] mm: Adding speculative page fault failure trace events Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 17/20] perf: Add a speculative page fault sw event Laurent Dufour
2017-08-21  8:55   ` Anshuman Khandual
2017-08-22  1:46     ` Michael Ellerman
2017-08-17 22:05 ` [PATCH v2 18/20] perf tools: Add support for the SPF perf event Laurent Dufour
2017-08-21  8:48   ` Anshuman Khandual
2017-08-25  8:53     ` Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 19/20] x86/mm: Add speculative pagefault handling Laurent Dufour
2017-08-21  7:29   ` Anshuman Khandual
2017-08-29 14:50     ` Laurent Dufour
2017-08-29 14:58       ` Laurent Dufour
2017-08-17 22:05 ` [PATCH v2 20/20] powerpc/mm: Add speculative page fault Laurent Dufour
2017-08-21  6:58   ` Anshuman Khandual
2017-08-29 15:13     ` Laurent Dufour
2017-08-21  2:26 ` [PATCH v2 00/20] Speculative page faults Sergey Senozhatsky
2017-09-08  9:24   ` Laurent Dufour
2017-09-11  0:45     ` Sergey Senozhatsky
2017-09-11  6:28       ` Laurent Dufour
2017-08-21  6:28 ` Anshuman Khandual
2017-08-22  0:41   ` Paul E. McKenney
2017-08-25  9:41   ` Laurent Dufour
