From 79c23c212f9e21edb2dbb440dd499d0a49e79bea Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Sun, 30 Oct 2022 13:50:39 -0700
Subject: [PATCH 2/4] mm: inline simpler case of page_remove_file_rmap()

Now that we have a simplified special case of 'page_remove_rmap()' that
doesn't deal with the 'compound' case and always gets a file-mapped (ie
not anonymous) page, it ended up doing just

	lock_page_memcg(page);
	page_remove_file_rmap(page, false);
	unlock_page_memcg(page);

but 'page_remove_file_rmap()' is actually trivial when 'compound' is
false.

So just inline that non-compound case in the caller, and - like we did
in the previous commit for the anon pages - only do the memcg locking
for the parts that actually matter: the page statistics.

Also, as the previous commit did for anonymous pages, knowing we only
get called for the last-level page table entries allows for a further
simplification: we can get rid of the 'PageHuge(page)' case too.  You
can't map a huge-page in a pte without splitting it (and the full code
in the generic page_remove_file_rmap() function has a comment to that
effect: "hugetlb pages are always mapped with pmds").

That means that the page_zap_file_rmap() case of that whole function is
really small and trivial.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/rmap.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 71a5365f23f3..69de6c833d5c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1426,8 +1426,11 @@ static void page_remove_anon_compound_rmap(struct page *page)
  */
 void page_zap_file_rmap(struct page *page)
 {
+	if (!atomic_add_negative(-1, &page->_mapcount))
+		return;
+
 	lock_page_memcg(page);
-	page_remove_file_rmap(page, false);
+	__dec_lruvec_page_state(page, NR_FILE_MAPPED);
 	unlock_page_memcg(page);
 }
 
-- 
2.37.1.289.g45aa1e5c72.dirty
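
For reference, here is how page_zap_file_rmap() reads with this patch
applied - a sketch assembled from the hunk above, assuming the usual
kernel context (struct page, atomic_add_negative(), and the memcg
locking helpers), not a verbatim copy of the resulting source file:

	void page_zap_file_rmap(struct page *page)
	{
		/*
		 * page->_mapcount is biased by -1, so only the final
		 * unmap takes it negative; all other unmappers return
		 * early without touching the memcg lock at all.
		 */
		if (!atomic_add_negative(-1, &page->_mapcount))
			return;

		/*
		 * Only the page statistics update needs the memcg
		 * locking, so take it just around that.
		 */
		lock_page_memcg(page);
		__dec_lruvec_page_state(page, NR_FILE_MAPPED);
		unlock_page_memcg(page);
	}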