linux-mm.kvack.org archive mirror
* [RESEND,PATCH v4 0/3] Try to release mmap_lock temporarily in smaps_rollup
From: Chinwen Chang @ 2020-10-05  2:40 UTC
  To: Andrew Morton; +Cc: linux-mm, linux-kernel

Recently, we have observed janky behavior caused by unpleasantly long
contention on mmap_lock, which smaps_rollup holds while probing large
processes. To address the problem, let smaps_rollup detect whether anyone
is waiting to acquire mmap_lock for write. If so, release the lock
temporarily to ease the contention.

smaps_rollup is a procfs interface which allows users to summarize a
process's memory usage without the overhead of seq_* calls. Android uses
it to sample the memory usage of various processes to balance its memory
pool sizes. If no one is waiting to take the lock for write, smaps_rollup
with this patch behaves like the original one.
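
For context, here is a minimal userspace sketch (illustrative only, not
part of this series) of how a sampler might read the interface:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        ssize_t n;
        /* summarized counters for the whole process, e.g. "Rss: ... kB" */
        int fd = open("/proc/self/smaps_rollup", O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        while ((n = read(fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
}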

Although there are ongoing mmap_lock optimizations like range-based
locks, the lock applied to smaps_rollup would still be the coarse-grained
one, which cannot avoid the aforementioned issues. So detecting write
attempts on mmap_lock and releasing the lock temporarily in smaps_rollup
is still necessary.

Change since v1:
- If the current VMA is freed after dropping the lock, an incomplete
  result will be returned. To fix this issue, refine the code flow as
  suggested by Steve. [1]

Change since v2:
- When getting back the mmap lock, the address where we stopped last
  time could now be in the middle of a VMA. Add one more check to
  handle this case, as suggested by Michel. [2]

Change since v3:
- last_stopped is easily confused with last_vma_end. Replace it with
  a direct call to smap_gather_stats(vma, &mss, last_vma_end), as
  suggested by Steve. [3]

[1] https://lore.kernel.org/lkml/bf40676e-b14b-44cd-75ce-419c70194783@arm.com/
[2] https://lore.kernel.org/lkml/CANN689FtCsC71cjAjs0GPspOhgo_HRj+diWsoU1wr98YPktgWg@mail.gmail.com/
[3] https://lore.kernel.org/lkml/db0d40e2-72f3-09d5-c162-9c49218f128f@arm.com/


Chinwen Chang (3):
  mmap locking API: add mmap_lock_is_contended()
  mm: smaps*: extend smap_gather_stats to support specified beginning
  mm: proc: smaps_rollup: do not stall write attempts on mmap_lock

 fs/proc/task_mmu.c        | 96 +++++++++++++++++++++++++++++++++++----
 include/linux/mmap_lock.h |  5 ++
 2 files changed, 92 insertions(+), 9 deletions(-)


* [RESEND, PATCH v4 1/3] mmap locking API: add mmap_lock_is_contended()
From: Chinwen Chang @ 2020-10-05  2:40 UTC
  To: Andrew Morton; +Cc: linux-mm, linux-kernel, Chinwen Chang

Add a new API to query whether someone wants to acquire mmap_lock for
write.

Using this instead of rwsem_is_contended() makes the code more tolerant
of future changes to the lock type.
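
As a rough usage sketch (modeled on how patch 3/3 of this series uses
the helper; the surrounding loop is illustrative, not part of this
patch):

mmap_read_lock(mm);
for (vma = mm->mmap; vma; vma = vma->vm_next) {
        /* ... gather per-VMA data under the read lock ... */
        if (mmap_lock_is_contended(mm)) {
                /* Back off so queued writers can make progress. */
                mmap_read_unlock(mm);
                if (mmap_read_lock_killable(mm))
                        return -EINTR;  /* killed while waiting */
                /* Revalidate the iteration state before continuing. */
        }
}
mmap_read_unlock(mm);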

Change-Id: Idb21478bb0580ba72b9926aba3bbc4b1f75deec2
Signed-off-by: Chinwen Chang <chinwen.chang@mediatek.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Michel Lespinasse <walken@google.com>
---
 include/linux/mmap_lock.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 0707671..18e7eae 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -87,4 +87,9 @@ static inline void mmap_assert_write_locked(struct mm_struct *mm)
 	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
 }
 
+static inline int mmap_lock_is_contended(struct mm_struct *mm)
+{
+	return rwsem_is_contended(&mm->mmap_lock);
+}
+
 #endif /* _LINUX_MMAP_LOCK_H */
-- 
1.9.1


* [RESEND, PATCH v4 2/3] mm: smaps*: extend smap_gather_stats to support specified beginning
From: Chinwen Chang @ 2020-10-05  2:40 UTC
  To: Andrew Morton; +Cc: linux-mm, linux-kernel, Chinwen Chang, Michel Lespinasse

Extend smap_gather_stats to support a specified beginning address at
which it should start gathering. To achieve this, add a new parameter
@start assigned by the caller, and refactor the function for simplicity.

If @start is 0, the range of @vma is used for gathering.
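
The two call forms, as they appear in this series (sketch):

/* gather over the whole of @vma, as before */
smap_gather_stats(vma, &mss, 0);

/* resume gathering at last_vma_end, mid-VMA (see patch 3/3) */
smap_gather_stats(vma, &mss, last_vma_end);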

Change since v2:
- This is a new change to make the retry behavior of smaps_rollup
  more complete, as suggested by Michel. [1]

[1] https://lore.kernel.org/lkml/CANN689FtCsC71cjAjs0GPspOhgo_HRj+diWsoU1wr98YPktgWg@mail.gmail.com/

Change-Id: I8652e0ee6c5e93fb56376a68d71ed6cdd8ac10e8
Signed-off-by: Chinwen Chang <chinwen.chang@mediatek.com>
CC: Michel Lespinasse <walken@google.com>
Reviewed-by: Steven Price <steven.price@arm.com>
---
 fs/proc/task_mmu.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dbda449..76e623a 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -723,9 +723,21 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 	.pte_hole		= smaps_pte_hole,
 };
 
+/*
+ * Gather mem stats from @vma with the indicated beginning
+ * address @start, and keep them in @mss.
+ *
+ * Use vm_start of @vma as the beginning address if @start is 0.
+ */
 static void smap_gather_stats(struct vm_area_struct *vma,
-			     struct mem_size_stats *mss)
+		struct mem_size_stats *mss, unsigned long start)
 {
+	const struct mm_walk_ops *ops = &smaps_walk_ops;
+
+	/* Invalid start */
+	if (start >= vma->vm_end)
+		return;
+
 #ifdef CONFIG_SHMEM
 	/* In case of smaps_rollup, reset the value from previous vma */
 	mss->check_shmem_swap = false;
@@ -742,18 +754,20 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		 */
 		unsigned long shmem_swapped = shmem_swap_usage(vma);
 
-		if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
-					!(vma->vm_flags & VM_WRITE)) {
+		if (!start && (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
+					!(vma->vm_flags & VM_WRITE))) {
 			mss->swap += shmem_swapped;
 		} else {
 			mss->check_shmem_swap = true;
-			walk_page_vma(vma, &smaps_shmem_walk_ops, mss);
-			return;
+			ops = &smaps_shmem_walk_ops;
 		}
 	}
 #endif
 	/* mmap_lock is held in m_start */
-	walk_page_vma(vma, &smaps_walk_ops, mss);
+	if (!start)
+		walk_page_vma(vma, ops, mss);
+	else
+		walk_page_range(vma->vm_mm, start, vma->vm_end, ops, mss);
 }
 
 #define SEQ_PUT_DEC(str, val) \
@@ -805,7 +819,7 @@ static int show_smap(struct seq_file *m, void *v)
 
 	memset(&mss, 0, sizeof(mss));
 
-	smap_gather_stats(vma, &mss);
+	smap_gather_stats(vma, &mss, 0);
 
 	show_map_vma(m, vma);
 
@@ -854,7 +868,7 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
 	hold_task_mempolicy(priv);
 
 	for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
-		smap_gather_stats(vma, &mss);
+		smap_gather_stats(vma, &mss, 0);
 		last_vma_end = vma->vm_end;
 	}
 
-- 
1.9.1


* [RESEND, PATCH v4 3/3] mm: proc: smaps_rollup: do not stall write attempts on mmap_lock
From: Chinwen Chang @ 2020-10-05  2:40 UTC
  To: Andrew Morton; +Cc: linux-mm, linux-kernel, Chinwen Chang, Michel Lespinasse

smaps_rollup will try to grab mmap_lock and go through the whole VMA
list until it finishes iterating. For large processes, mmap_lock will be
held for a long time, which may block other write requests, like mmap
and munmap, from progressing smoothly.

There are upcoming mmap_lock optimizations like range-based locks, but
the lock applied to smaps_rollup would still be the coarse type, which
doesn't avoid the occurrence of unpleasant contention.

To solve the aforementioned issue, add a check which detects whether
anyone wants to grab mmap_lock for write. If so, release the lock
temporarily, reacquire it, and resume iterating from the VMA covering
the address where we stopped.
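
The heart of the resume logic, extracted from the diff below for quick
reading (case numbers refer to the comment in the diff):

vma = find_vma(mm, last_vma_end - 1);
if (!vma)
        break;          /* no VMAs left (case 3) */
if (vma->vm_start >= last_vma_end)
        continue;       /* old VMA was freed; start from the next one (case 1) */
if (vma->vm_end > last_vma_end)
        smap_gather_stats(vma, &mss, last_vma_end);     /* resume mid-VMA (case 4) */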

Change since v1:
- If the current VMA is freed after dropping the lock, an incomplete
  result will be returned. To fix this issue, refine the code flow as
  suggested by Steve. [1]

Change since v2:
- When getting back the mmap lock, the address where we stopped last
  time could now be in the middle of a VMA. Add one more check to
  handle this case, as suggested by Michel. [2]

Change since v3:
- last_stopped is easily confused with last_vma_end. Replace it with
  a direct call to smap_gather_stats(vma, &mss, last_vma_end), as
  suggested by Steve. [3]

[1] https://lore.kernel.org/lkml/bf40676e-b14b-44cd-75ce-419c70194783@arm.com/
[2] https://lore.kernel.org/lkml/CANN689FtCsC71cjAjs0GPspOhgo_HRj+diWsoU1wr98YPktgWg@mail.gmail.com/
[3] https://lore.kernel.org/lkml/db0d40e2-72f3-09d5-c162-9c49218f128f@arm.com/

Change-Id: Idcdb6478ccd06a9e5edd4eda9285378e961a6b94
Signed-off-by: Chinwen Chang <chinwen.chang@mediatek.com>
Reviewed-by: Steven Price <steven.price@arm.com>
CC: Michel Lespinasse <walken@google.com>
---
 fs/proc/task_mmu.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 65 insertions(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 76e623a..1a80624 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -867,9 +867,73 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
 
 	hold_task_mempolicy(priv);
 
-	for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
+	for (vma = priv->mm->mmap; vma;) {
 		smap_gather_stats(vma, &mss, 0);
 		last_vma_end = vma->vm_end;
+
+		/*
+		 * Release mmap_lock temporarily if someone wants to
+		 * access it for write request.
+		 */
+		if (mmap_lock_is_contended(mm)) {
+			mmap_read_unlock(mm);
+			ret = mmap_read_lock_killable(mm);
+			if (ret) {
+				release_task_mempolicy(priv);
+				goto out_put_mm;
+			}
+
+			/*
+			 * After dropping the lock, there are four cases to
+			 * consider. See the following example for explanation.
+			 *
+			 *   +------+------+-----------+
+			 *   | VMA1 | VMA2 | VMA3      |
+			 *   +------+------+-----------+
+			 *   |      |      |           |
+			 *  4k     8k     16k         400k
+			 *
+			 * Suppose we drop the lock after reading VMA2 due to
+			 * contention, then we get:
+			 *
+			 *	last_vma_end = 16k
+			 *
+			 * 1) VMA2 is freed, but VMA3 exists:
+			 *
+			 *    find_vma(mm, 16k - 1) will return VMA3.
+			 *    In this case, just continue from VMA3.
+			 *
+			 * 2) VMA2 still exists:
+			 *
+			 *    find_vma(mm, 16k - 1) will return VMA2.
+			 *    Iterate the loop like the original one.
+			 *
+			 * 3) No more VMAs can be found:
+			 *
+			 *    find_vma(mm, 16k - 1) will return NULL.
+			 *    No more things to do, just break.
+			 *
+			 * 4) (last_vma_end - 1) is the middle of a vma (VMA'):
+			 *
+			 *    find_vma(mm, 16k - 1) will return VMA' whose range
+			 *    contains last_vma_end.
+			 *    Iterate VMA' from last_vma_end.
+			 */
+			vma = find_vma(mm, last_vma_end - 1);
+			/* Case 3 above */
+			if (!vma)
+				break;
+
+			/* Case 1 above */
+			if (vma->vm_start >= last_vma_end)
+				continue;
+
+			/* Case 4 above */
+			if (vma->vm_end > last_vma_end)
+				smap_gather_stats(vma, &mss, last_vma_end);
+		}
+		/* Case 2 above */
+		vma = vma->vm_next;
 	}
 
 	show_vma_header_prefix(m, priv->mm->mmap->vm_start,
-- 
1.9.1


