[PATCH 1/1] mm: avoid re-using pfmemalloc page in page_frag_alloc()

From: Dongli Zhang @ 2020-11-03 19:32 UTC
  To: linux-mm, netdev
  Cc: linux-kernel, akpm, davem, kuba, dongli.zhang, aruna.ramakrishna,
	bert.barbe, rama.nichanamatlu, venkat.x.venkatsubra,
	manjunath.b.patil, joe.jin, srinivas.eeda

The ethernet driver may allocate the skb (and skb->data) via
napi_alloc_skb(). This ends up calling page_frag_alloc(), which carves
skb->data out of page_frag_cache->va.
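
For reference, the fragment cache looks roughly like this (paraphrased
from include/linux/mm_types.h around v5.9; the exact layout varies by
version and PAGE_SIZE):

  struct page_frag_cache {
          void *va;               /* mapped address of the backing page */
  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
          __u16 offset;           /* current carve-out position in va */
          __u16 size;
  #else
          __u32 offset;
  #endif
          /* bias on page->_refcount so that allocating a fragment does
           * not have to dirty the refcount cache line every time
           */
          unsigned int pagecnt_bias;
          bool pfmemalloc;        /* backing page came from the reserves */
  };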

Under memory pressure, page_frag_cache->va may be allocated as a
pfmemalloc page. As a result, skb->pfmemalloc is true for every skb
whose skb->data comes from that page_frag_cache->va. Such an skb is
dropped if the receiving sock does not have SOCK_MEMALLOC set. This is
expected behaviour under memory pressure.
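
The drop happens in the socket filter path; the check is roughly the
following (condensed from sk_filter_trim_cap() in net/core/filter.c):

          /*
           * A pfmemalloc skb was allocated from the emergency reserves;
           * only sockets that are themselves entitled to the reserves
           * (SOCK_MEMALLOC) may consume it.
           */
          if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
                  return -ENOMEM;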

However, once the kernel is no longer under memory pressure (suppose a
large amount of memory has just been reclaimed), page_frag_alloc() may
still re-use the prior pfmemalloc page_frag_cache->va to allocate
skb->data. As a result, skb->pfmemalloc stays true until
page_frag_cache->va is re-allocated, even though the kernel is no
longer under memory pressure.
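
This is because every skb built over the cached page inherits the flag;
roughly, in __napi_alloc_skb() (net/core/skbuff.c):

          data = page_frag_alloc(&nc->page, len, gfp_mask);
          ...
          /* the stale cache flag taints every new skb */
          if (nc->page.pfmemalloc)
                  skb->pfmemalloc = 1;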

Here is how the kernel runs into this issue:

1. The kernel is under memory pressure and the PAGE_FRAG_CACHE_MAX_ORDER
allocation in __page_frag_cache_refill() fails. Instead, a pfmemalloc
page is allocated for page_frag_cache->va (a condensed sketch of the
refill path follows this list).

2. Every skb->data carved out of page_frag_cache->va (pfmemalloc) has
skb->pfmemalloc=true. The skb is always dropped by a sock without
SOCK_MEMALLOC. This is expected behaviour.

3. Suppose a large amount of pages are then reclaimed and the kernel is
no longer under memory pressure. We expect the skb->pfmemalloc drops to
stop.

4. Unfortunately, page_frag_alloc() does not proactively re-allocate
page_frag_cache->va and always re-uses the prior pfmemalloc page. So
skb->pfmemalloc remains true even though the kernel is no longer under
memory pressure.
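
For illustration, the refill path mentioned in step 1 looks roughly
like this (condensed from mm/page_alloc.c around v5.9; details vary by
version):

  static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
                                               gfp_t gfp_mask)
  {
          struct page *page = NULL;
          gfp_t gfp = gfp_mask;

  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
          /* The high-order attempt stays away from the emergency
           * reserves (__GFP_NOMEMALLOC) and gives up easily ...
           */
          gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY |
                      __GFP_NOMEMALLOC;
          page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
                                  PAGE_FRAG_CACHE_MAX_ORDER);
          nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
  #endif
          /* ... so under pressure the order-0 fallback may hand back a
           * pfmemalloc page from the reserves.
           */
          if (unlikely(!page))
                  page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);

          nc->va = page ? page_address(page) : NULL;

          return page;
  }

page_frag_alloc() then records nc->pfmemalloc = page_is_pfmemalloc(page)
and keeps carving fragments out of nc->va until the page is exhausted,
which is why the flag can outlive the pressure that caused it.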

Therefore, this patch always checks for, and avoids re-using, a
pfmemalloc page for page_frag_cache->va.
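
The fix leans on __page_frag_cache_drain() (see the diff below), which
is roughly the following (mm/page_alloc.c):

  void __page_frag_cache_drain(struct page *page, unsigned int count)
  {
          VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);

          /* drop the 'count' references held as pagecnt_bias and free
           * the page once the last reference is gone
           */
          if (page_ref_sub_and_test(page, count))
                  free_the_page(page, compound_order(page));
  }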

Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Bert Barbe <bert.barbe@oracle.com>
Cc: Rama Nichanamatlu <rama.nichanamatlu@oracle.com>
Cc: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Cc: Manjunath Patil <manjunath.b.patil@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Cc: SRINIVAS <srinivas.eeda@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 mm/page_alloc.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..291df2f9f8f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5075,6 +5075,16 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 	struct page *page;
 	int offset;
 
+	/*
+	 * Try to avoid re-using a pfmemalloc page: the kernel may no longer
+	 * be under memory pressure by the time this cache is used again.
+	 */
+	if (unlikely(nc->va && nc->pfmemalloc)) {
+		page = virt_to_page(nc->va);
+		__page_frag_cache_drain(page, nc->pagecnt_bias);
+		nc->va = NULL;
+	}
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);
-- 
2.17.1

