From: Pankaj Suryawanshi <pankajssuryawanshi@gmail.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernelnewbies@kernelnewbies.org, vbabka@suse.cz, mhocko@kernel.org, minchan@kernel.org
Subject: vmscan.c: Reclaim unevictable pages.
Date: Sat, 6 Apr 2019 11:29:27 +0530	[thread overview]
Message-ID: <CACDBo57pEVRjOBf0yLMQ+KuGPeOuFcMufGVzjPJVnwfLFjzFSA@mail.gmail.com> (raw)

Hello,

shrink_page_list() returns the number of pages reclaimed. When a page is
unevictable, it instead trips:

    VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);

We could add the unevictable pages to the return list in
shrink_page_list(), return them along with the other non-reclaimed
pages, and let the caller handle the unevictable pages.

I think the problem is that shrink_page_list() is awkward. If a page is
unevictable, it goes through activate_locked -> keep_locked -> keep; the
keep label list_add()s the unevictable page and then triggers the
VM_BUG_ON instead of passing the page to the caller, even though it
already relies on the caller for putback of non-reclaimed,
non-unevictable pages. I think we can make this consistent, so that
shrink_page_list() returns non-reclaimed pages via page_list and the
caller handles them. As a further step, it could try to migrate mlocked
pages without retrial.

Below is the issue I observed with cma_alloc() of a large buffer
(kernel version 4.14.65, with Android Pie):

[   24.718792] page dumped because: VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page))
[   24.726949] page->mem_cgroup:bd008c00
[   24.730693] ------------[ cut here ]------------
[   24.735304] kernel BUG at mm/vmscan.c:1350!
[   24.739478] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM

Below is the patch which solved this issue:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index be56e2e..12ac353 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -998,7 +998,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		sc->nr_scanned++;

 		if (unlikely(!page_evictable(page)))
-			goto activate_locked;
+			goto cull_mlocked;

 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
@@ -1331,7 +1331,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		} else
 			list_add(&page->lru, &free_pages);
 		continue;
-
+cull_mlocked:
+		if (PageSwapCache(page))
+			try_to_free_swap(page);
+		unlock_page(page);
+		list_add(&page->lru, &ret_pages);
+		continue;
 activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
 		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||

It fixes the following issue:

1. A large buffer allocation using cma_alloc() now succeeds even when
   unevictable pages are present; cma_alloc() in the current kernel
   fails due to the unevictable pages.

Please let me know if I am missing anything.

Regards,
Pankaj
Thread overview: 8+ messages
2019-04-06  5:59 Pankaj Suryawanshi [this message]
2019-04-17 11:39 ` Vlastimil Babka
2019-04-28 12:38 ` Pankaj Suryawanshi