linux-arm-msm.vger.kernel.org archive mirror
* cma_alloc(), add sleep-and-retry for temporary page pinning
@ 2020-08-06  2:56 Chris Goldsworthy
  2020-08-06  2:56 ` [PATCH] mm: cma: retry allocations in cma_alloc Chris Goldsworthy
  2020-08-07  1:31 ` cma_alloc(), add sleep-and-retry for temporary page pinning Andrew Morton
  0 siblings, 2 replies; 3+ messages in thread
From: Chris Goldsworthy @ 2020-08-06  2:56 UTC (permalink / raw)
  To: akpm
  Cc: linux-mm, linux-arm-msm, linux-kernel, pratikp, pdaly, sudraja,
	iamjoonsoo.kim

On mobile devices, failure to allocate from a CMA area constitutes a
functional failure.  Sometimes during CMA allocations, we have observed
that pages in a CMA area, allocated through alloc_pages(), that we are
trying to migrate away to make room for a CMA allocation, are temporarily
pinned.  This temporary pinning can occur when the process that owns the
pinned page is being forked (the example is explained further in the
commit text).  This patch addresses the issue by adding a sleep-and-retry
loop in cma_alloc().  There is another example we know of, similar to the
above, that occurs during exit_mmap() (in zap_pte_range() specifically),
but I need to determine whether it is still relevant today.



* [PATCH] mm: cma: retry allocations in cma_alloc
  2020-08-06  2:56 cma_alloc(), add sleep-and-retry for temporary page pinning Chris Goldsworthy
@ 2020-08-06  2:56 ` Chris Goldsworthy
  2020-08-07  1:31 ` cma_alloc(), add sleep-and-retry for temporary page pinning Andrew Morton
  1 sibling, 0 replies; 3+ messages in thread
From: Chris Goldsworthy @ 2020-08-06  2:56 UTC (permalink / raw)
  To: akpm
  Cc: linux-mm, linux-arm-msm, linux-kernel, pratikp, pdaly, sudraja,
	iamjoonsoo.kim, Chris Goldsworthy, Susheel Khiani, Vinayak Menon

CMA allocations will fail if 'pinned' pages are in a CMA area, since we
cannot migrate pinned pages.  The _refcount of a struct page being greater
than its _mapcount can cause pinning for anonymous pages.  This is because
try_to_unmap(), which (1) is called in the CMA allocation path, and (2)
decrements both _refcount and _mapcount for a page, stops unmapping a page
from VMAs once the _mapcount for that page reaches 0.  This implies that
after try_to_unmap() has finished successfully for a page where
_refcount > _mapcount, _refcount will still be greater than 0.  Later in
the CMA allocation path, in migrate_page_move_mapping(), the page will
then hold one more reference than intended for an anonymous page, so the
migration - and hence the allocation - will fail for that page.

One example of where _refcount can be greater than _mapcount, for a page
we would not expect to be pinned, is inside of copy_one_pte(), which is
called during a fork.  For ptes where pte_present(pte) == true,
copy_one_pte() increments the _refcount field and then the _mapcount
field of a page.  If the process doing copy_one_pte() is context-switched
out after incrementing _refcount but before incrementing _mapcount, the
page will be temporarily pinned.
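For reference, the relevant ordering in copy_one_pte() looks roughly like
the excerpt below (heavily simplified and not compilable on its own; see
mm/memory.c for the real function):

```c
	if (page) {
		get_page(page);			/* _refcount++ */
		/*
		 * If this task is context-switched out right here,
		 * _refcount is one greater than the mappings account
		 * for, and the page looks pinned to the migration
		 * path until the task runs again.
		 */
		page_dup_rmap(page, false);	/* _mapcount++ */
	}
```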

So, inside of cma_alloc(), instead of giving up when alloc_contig_range()
returns -EBUSY after having scanned a whole CMA-region bitmap, perform
retries with sleeps to give the system an opportunity to unpin any pinned
pages.

Signed-off-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Co-developed-by: Susheel Khiani <skhiani@codeaurora.org>
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
Co-developed-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
---
 mm/cma.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7..7b85fe6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -32,6 +32,7 @@
 #include <linux/highmem.h>
 #include <linux/io.h>
 #include <linux/kmemleak.h>
+#include <linux/delay.h>
 #include <trace/events/cma.h>
 
 #include "cma.h"
@@ -418,6 +419,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	size_t i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
+	int num_attempts = 0;
+	int max_retries = 5;
 
 	if (!cma || !cma->count || !cma->bitmap)
 		return NULL;
@@ -442,8 +445,25 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 				bitmap_maxno, start, bitmap_count, mask,
 				offset);
 		if (bitmap_no >= bitmap_maxno) {
-			mutex_unlock(&cma->lock);
-			break;
+			if ((num_attempts < max_retries) && (ret == -EBUSY)) {
+				mutex_unlock(&cma->lock);
+
+				/*
+				 * Page may be momentarily pinned by some other
+				 * process which has been scheduled out, e.g.
+				 * in exit path, during unmap call, or process
+				 * fork and so cannot be freed there. Sleep
+				 * for 100ms and retry the allocation.
+				 */
+				start = 0;
+				ret = -ENOMEM;
+				msleep(100);
+				num_attempts++;
+				continue;
+			} else {
+				mutex_unlock(&cma->lock);
+				break;
+			}
 		}
 		bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
 		/*
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



* Re: cma_alloc(), add sleep-and-retry for temporary page pinning
  2020-08-06  2:56 cma_alloc(), add sleep-and-retry for temporary page pinning Chris Goldsworthy
  2020-08-06  2:56 ` [PATCH] mm: cma: retry allocations in cma_alloc Chris Goldsworthy
@ 2020-08-07  1:31 ` Andrew Morton
  1 sibling, 0 replies; 3+ messages in thread
From: Andrew Morton @ 2020-08-07  1:31 UTC (permalink / raw)
  To: Chris Goldsworthy
  Cc: linux-mm, linux-arm-msm, linux-kernel, pratikp, pdaly, sudraja,
	iamjoonsoo.kim

On Wed,  5 Aug 2020 19:56:21 -0700 Chris Goldsworthy <cgoldswo@codeaurora.org> wrote:

> On mobile devices, failure to allocate from a CMA area constitutes a
> functional failure.  Sometimes during CMA allocations, we have observed
> that pages in a CMA area, allocated through alloc_pages(), that we are
> trying to migrate away to make room for a CMA allocation, are temporarily
> pinned.  This temporary pinning can occur when the process that owns the
> pinned page is being forked (the example is explained further in the
> commit text).  This patch addresses the issue by adding a sleep-and-retry
> loop in cma_alloc().  There is another example we know of, similar to the
> above, that occurs during exit_mmap() (in zap_pte_range() specifically),
> but I need to determine whether it is still relevant today.

Sounds fairly serious but boy, we're late for 5.9.

I can queue it for 5.10 with a cc:stable so that it gets backported
into earlier kernels a couple of months from now, if we think the
seriousness justifies backporting(?). 

Or I can do something else - thoughts?

And...  it really is a sad little patch, isn't it?  Instead of fixing
the problem, it reduces the problem's probability by 5x.  Can't we do
better than this?



