From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 18 Aug 2020 20:50:43 -0700
From: Andrew Morton
To: aarcange@redhat.com, alex.shi@linux.alibaba.com, guro@fb.com,
 hannes@cmpxchg.org, hughd@google.com, kirill.shutemov@linux.intel.com,
 mhocko@kernel.org, mhocko@suse.com, mm-commits@vger.kernel.org,
 richard.weiyang@gmail.com, vdavydov.dev@gmail.com, willy@infradead.org
Subject: + mm-thp-move-lru_add_page_tail-func-to-huge_memoryc.patch added to -mm tree
Message-ID: <20200819035043.6hDoc5rc4%akpm@linux-foundation.org>
In-Reply-To: <20200814172939.55d6d80b6e21e4241f1ee1f3@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: mm-commits-owner@vger.kernel.org
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org


The patch titled
     Subject: mm/thp: move lru_add_page_tail func to huge_memory.c
has been added to the -mm tree.
Its filename is
     mm-thp-move-lru_add_page_tail-func-to-huge_memoryc.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-thp-move-lru_add_page_tail-func-to-huge_memoryc.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-thp-move-lru_add_page_tail-func-to-huge_memoryc.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Alex Shi
Subject: mm/thp: move lru_add_page_tail func to huge_memory.c

The function is only used in huge_memory.c; defining it in another file
under a CONFIG_TRANSPARENT_HUGEPAGE guard just looks odd.  Let's move it
into huge_memory.c and make it static, as Hugh Dickins suggested.

Link: http://lkml.kernel.org/r/1597144232-11370-3-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Alex Shi
Reviewed-by: Kirill A. Shutemov
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Michal Hocko
Cc: Roman Gushchin
Cc: Vladimir Davydov
Cc: Wei Yang
Signed-off-by: Andrew Morton
---

 include/linux/swap.h |    2 --
 mm/huge_memory.c     |   30 ++++++++++++++++++++++++++++++
 mm/swap.c            |   33 ---------------------------------
 3 files changed, 30 insertions(+), 35 deletions(-)

--- a/include/linux/swap.h~mm-thp-move-lru_add_page_tail-func-to-huge_memoryc
+++ a/include/linux/swap.h
@@ -338,8 +338,6 @@ extern void lru_note_cost(struct lruvec
 		unsigned int nr_pages);
 extern void lru_note_cost_page(struct page *);
 extern void lru_cache_add(struct page *);
-extern void lru_add_page_tail(struct page *page, struct page *page_tail,
-			struct lruvec *lruvec, struct list_head *head);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
--- a/mm/huge_memory.c~mm-thp-move-lru_add_page_tail-func-to-huge_memoryc
+++ a/mm/huge_memory.c
@@ -2313,6 +2313,36 @@ static void remap_page(struct page *page
 	}
 }
 
+static void lru_add_page_tail(struct page *page, struct page *page_tail,
+				struct lruvec *lruvec, struct list_head *list)
+{
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
+	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
+	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
+
+	if (!list)
+		SetPageLRU(page_tail);
+
+	if (likely(PageLRU(page)))
+		list_add_tail(&page_tail->lru, &page->lru);
+	else if (list) {
+		/* page reclaim is reclaiming a huge page */
+		get_page(page_tail);
+		list_add_tail(&page_tail->lru, list);
+	} else {
+		/*
+		 * Head page has not yet been counted, as an hpage,
+		 * so we must account for each subpage individually.
+		 *
+		 * Put page_tail on the list at the correct position
+		 * so they all end up in order.
+		 */
+		add_page_to_lru_list_tail(page_tail, lruvec,
+					page_lru(page_tail));
+	}
+}
+
 static void __split_huge_page_tail(struct page *head, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
--- a/mm/swap.c~mm-thp-move-lru_add_page_tail-func-to-huge_memoryc
+++ a/mm/swap.c
@@ -930,39 +930,6 @@ void __pagevec_release(struct pagevec *p
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/* used by __split_huge_page_refcount() */
-void lru_add_page_tail(struct page *page, struct page *page_tail,
-			struct lruvec *lruvec, struct list_head *list)
-{
-	VM_BUG_ON_PAGE(!PageHead(page), page);
-	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
-	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
-	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
-
-	if (!list)
-		SetPageLRU(page_tail);
-
-	if (likely(PageLRU(page)))
-		list_add_tail(&page_tail->lru, &page->lru);
-	else if (list) {
-		/* page reclaim is reclaiming a huge page */
-		get_page(page_tail);
-		list_add_tail(&page_tail->lru, list);
-	} else {
-		/*
-		 * Head page has not yet been counted, as an hpage,
-		 * so we must account for each subpage individually.
-		 *
-		 * Put page_tail on the list at the correct position
-		 * so they all end up in order.
-		 */
-		add_page_to_lru_list_tail(page_tail, lruvec,
-					page_lru(page_tail));
-	}
-}
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-
 static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 				 void *arg)
 {
_

Patches currently in -mm which might be from alex.shi@linux.alibaba.com are

mm-memcg-warning-on-memcg-after-readahead-page-charged.patch
mm-memcg-remove-useless-check-on-page-mem_cgroup.patch
mm-thp-move-lru_add_page_tail-func-to-huge_memoryc.patch
mm-thp-clean-up-lru_add_page_tail.patch
mm-thp-remove-code-path-which-never-got-into.patch
mm-thp-narrow-lru-locking.patch
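
P.S. for readers who are not familiar with the list primitive the moved function
leans on: below is a small, self-contained userspace model (illustrative toy code
only, not part of the patch or of the kernel tree) of the <linux/list.h> helpers.
It shows that list_add_tail(new, entry) links the new node immediately before the
entry it is given, which is the property lru_add_page_tail() relies on when it
queues each tail page next to its THP head page via
list_add_tail(&page_tail->lru, &page->lru).  The struct toy_page type and the
three-tail loop are invented for illustration.

/*
 * Userspace model of the kernel list helpers used by lru_add_page_tail().
 * Toy code for illustration only; "toy_page" is a stand-in, not struct page.
 */
#include <stdio.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

/* Wire @new in between @prev and @next, as the kernel's __list_add() does. */
static void __list_add(struct list_head *new, struct list_head *prev,
		       struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

/* list_add_tail(): link @new immediately before @head. */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_add(new, head->prev, head);
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct toy_page {
	int id;
	struct list_head lru;
};

int main(void)
{
	struct list_head lru_list;               /* stands in for the lruvec LRU list */
	struct toy_page head_page = { .id = 0 }; /* stands in for the THP head page   */
	struct toy_page tails[3] = { { .id = 1 }, { .id = 2 }, { .id = 3 } };
	struct list_head *pos;
	int i;

	INIT_LIST_HEAD(&lru_list);
	list_add_tail(&head_page.lru, &lru_list); /* head page is already on the LRU */

	/*
	 * Mimic the likely(PageLRU(page)) branch of lru_add_page_tail():
	 * each tail is linked immediately before the head page's entry.
	 * (In the kernel this patch targets, __split_huge_page() hands the
	 * tails over last-to-first.)
	 */
	for (i = 2; i >= 0; i--)
		list_add_tail(&tails[i].lru, &head_page.lru);

	for (pos = lru_list.next; pos != &lru_list; pos = pos->next)
		printf("page %d\n", container_of(pos, struct toy_page, lru)->id);

	return 0;
}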