From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Michael Kerrisk <mtk.manpages@gmail.com>, linux-api@vger.kernel.org,
	Hugh Dickins <hughd@google.com>, Johannes Weiner <hannes@cmpxchg.org>,
	zhangyanfei@cn.fujitsu.com, Rik van Riel <riel@redhat.com>,
	Mel Gorman <mgorman@suse.de>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Jason Evans <je@fb.com>, Daniel Micay <danielmicay@gmail.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Michal Hocko <mhocko@suse.cz>, yalin.wang2010@gmail.com,
	Shaohua Li <shli@kernel.org>, Minchan Kim <minchan@kernel.org>
Subject: [PATCH 4/8] mm: free swp_entry in madvise_free
Date: Fri, 30 Oct 2015 16:01:40 +0900
Message-ID: <1446188504-28023-5-git-send-email-minchan@kernel.org>
In-Reply-To: <1446188504-28023-1-git-send-email-minchan@kernel.org>

When I test the below piece of code with 12 processes (i.e., 512M * 12 =
6G consumed) on my machine (3G RAM + 12 CPUs + 8G swap), madvise_free is
significantly slower (about 2x) than madvise_dontneed.

	loop = 5;
	mmap(512M);
	while (loop--) {
		memset(512M);
		madvise(MADV_FREE or MADV_DONTNEED);
	}

The reason is lots of swapin:

1) dontneed: 1,612 swapin
2) madvfree: 879,585 swapin

If we find that the hinted pages were already swapped out when the
syscall is called, it's pointless to keep the swapped-out pages in the
pte. Instead, let's free the cold pages, because swapin is more
expensive than (page allocation + zeroing).
With this patch, swapin is reduced from 879,585 to 1,878, so elapsed
time:

1) dontneed:              6.10user 233.50system 0:50.44elapsed
2) madvfree:              6.03user 401.17system 1:30.67elapsed
3) madvfree + this patch: 6.70user 339.14system 1:04.45elapsed

Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/madvise.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 640311704e31..663bd9fa0ae0 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -270,6 +270,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	spinlock_t *ptl;
 	pte_t *pte, ptent;
 	struct page *page;
+	swp_entry_t entry;
+	int nr_swap = 0;
 
 	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_trans_unstable(pmd))
@@ -280,8 +282,22 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		ptent = *pte;
 
-		if (!pte_present(ptent))
+		if (pte_none(ptent))
 			continue;
 
+		/*
+		 * If the pte has swp_entry, just clear page table to
+		 * prevent swap-in which is more expensive rather than
+		 * (page allocation + zeroing).
+		 */
+		if (!pte_present(ptent)) {
+			entry = pte_to_swp_entry(ptent);
+			if (non_swap_entry(entry))
+				continue;
+			nr_swap--;
+			free_swap_and_cache(entry);
+			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+			continue;
+		}
 		page = vm_normal_page(vma, addr, ptent);
 		if (!page)
@@ -313,6 +329,14 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		set_pte_at(mm, addr, pte, ptent);
 		tlb_remove_tlb_entry(tlb, pte, addr);
 	}
+
+	if (nr_swap) {
+		if (current->mm == mm)
+			sync_mm_rss(mm);
+
+		add_mm_counter(mm, MM_SWAPENTS, nr_swap);
+	}
+
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
-- 
1.9.1
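The swapin counts quoted in the numbers above come from the kernel's cumulative vmstat counters. A small helper like the following (the name `read_pswpin` is hypothetical; the `pswpin` field of `/proc/vmstat` is real) could be sampled before and after a run to reproduce that measurement:

```c
/*
 * Read the cumulative swapped-in page count: the "pswpin" line of
 * /proc/vmstat. The difference between two samples taken around a
 * benchmark run gives a per-run swapin figure like those quoted
 * above. Returns -1 if the counter cannot be read.
 */
#include <stdio.h>

static long read_pswpin(void)
{
	char line[128];
	long val = -1;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	/* sscanf only converts on lines that literally start "pswpin" */
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "pswpin %ld", &val) == 1)
			break;
	fclose(f);
	return val;
}
```

Usage would be `before = read_pswpin();` ... run the workload ... `swapins = read_pswpin() - before;`.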