* vmscan.c: Reclaim unevictable pages.
@ 2019-04-06  5:59 Pankaj Suryawanshi
  2019-04-17 11:39 ` Vlastimil Babka
  0 siblings, 1 reply; 3+ messages in thread
From: Pankaj Suryawanshi @ 2019-04-06  5:59 UTC (permalink / raw)
  To: linux-kernel, linux-mm, kernelnewbies, vbabka, mhocko, minchan

Hello,

shrink_page_list() returns the number of pages reclaimed. When a page is
unevictable, it instead trips
VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
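
For reference, the path in question in a 4.14-era shrink_page_list() looks
roughly like the following (paraphrased from the source, not a verbatim
quote). An unevictable page goes activate_locked -> keep_locked -> keep, and
the assertion at the keep label is what fires:

activate_locked:
        /* Not a candidate for swapping, so reclaim swap space. */
        if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
                                    PageMlocked(page)))
                try_to_free_swap(page);
        VM_BUG_ON_PAGE(PageActive(page), page);
        if (!PageMlocked(page)) {
                SetPageActive(page);
                pgactivate++;
        }
keep_locked:
        unlock_page(page);
keep:
        /* Non-reclaimed pages are handed back to the caller ... */
        list_add(&page->lru, &ret_pages);
        /* ... but an unevictable page trips this assertion instead. */
        VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);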

We could add the unevictable pages back to the page list in
shrink_page_list(), return the total count including the unevictable
pages, and let the caller handle the unevictable pages.

I think the problem is that shrink_page_list() is awkward here. If a page
is unevictable it goes through the activate_locked -> keep_locked -> keep
labels; the keep label list_add()s the unevictable page back to ret_pages
and then trips the VM_BUG_ON instead of passing it to the caller, even
though it already relies on the caller to put back non-reclaimed,
non-unevictable pages.
I think we can make this consistent so that shrink_page_list() returns
non-reclaimed pages via page_list and the caller handles them. As a
further step, it could try to migrate mlocked pages without a retry.
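
As an illustrative sketch only (putback_remaining_pages() is a made-up
helper name, not existing kernel code), the caller-side handling could
look like this, reusing the existing putback_lru_page() helper, which
already knows how to place a page on the unevictable list:

/*
 * Sketch: put back whatever shrink_page_list() left on page_list
 * (non-reclaimed pages, including unevictable ones under this proposal).
 */
static void putback_remaining_pages(struct list_head *page_list)
{
        struct page *page, *next;

        list_for_each_entry_safe(page, next, page_list, lru) {
                list_del(&page->lru);
                putback_lru_page(page);
        }
}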


Below is the issue I observed with cma_alloc() of a large buffer
(kernel version 4.14.65 with Android Pie):

[   24.718792] page dumped because: VM_BUG_ON_PAGE(PageLRU(page) ||
PageUnevictable(page))
[   24.726949] page->mem_cgroup:bd008c00
[   24.730693] ------------[ cut here ]------------
[   24.735304] kernel BUG at mm/vmscan.c:1350!
[   24.739478] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
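
As far as I understand the 4.14 code, the path that reaches
shrink_page_list() in this case is:

cma_alloc()
  -> alloc_contig_range()
       -> __alloc_contig_migrate_range()
            -> reclaim_clean_pages_from_list()
                 -> shrink_page_list()    /* VM_BUG_ON fires here */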


Below is the patch that solved this issue:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index be56e2e..12ac353 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -998,7 +998,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                sc->nr_scanned++;

                if (unlikely(!page_evictable(page)))
-                       goto activate_locked;
+                       goto cull_mlocked;

                if (!sc->may_unmap && page_mapped(page))
                        goto keep_locked;
@@ -1331,7 +1331,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                } else
                        list_add(&page->lru, &free_pages);
                continue;
-
+cull_mlocked:
+                if (PageSwapCache(page))
+                        try_to_free_swap(page);
+                unlock_page(page);
+                list_add(&page->lru, &ret_pages);
+                continue;
 activate_locked:
                /* Not a candidate for swapping, so reclaim swap space. */
                if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
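
With this change, an unevictable page gets try_to_free_swap() if it is in
the swap cache, is unlocked, and is added back to ret_pages at the new
cull_mlocked label, so it is returned to the caller for putback instead of
reaching the VM_BUG_ON at the keep label.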




It fixes the issue below:

1. Large buffer allocation using cma_alloc() succeeds even when the
range contains unevictable pages.

cma_alloc() in the current kernel fails because of unevictable pages.
Please let me know if there is anything I am missing.

Regards,
Pankaj

