From: Yang Shi
Date: Mon, 8 Mar 2021 16:10:57 -0800
Subject: Re: [PATCH 05/10] mm/migrate: demote pages during reclaim
To: Dave Hansen
Cc: Linux Kernel Mailing List, Linux MM, Yang Shi, David Rientjes,
    Huang Ying, Dan Williams, Oscar Salvador
In-Reply-To: <20210304235958.ECFA81E5@viggo.jf.intel.com>

On Thu, Mar 4, 2021 at 4:01 PM Dave Hansen wrote:
>
> From: Dave Hansen
>
> This is mostly derived from a patch from Yang Shi:
>
>         https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/
>
> Add code to the reclaim path (shrink_page_list()) to "demote" data
> to another NUMA node instead of discarding the data.  This always
> avoids the cost of I/O needed to read the page back in and sometimes
> avoids the writeout cost when the page is dirty.
>
> A second pass through shrink_page_list() will be made if any demotions
> fail.  This essentially falls back to normal reclaim behavior in the
> case that demotions fail.  Previous versions of this patch may have
> simply failed to reclaim pages which were eligible for demotion but
> were unable to be demoted in practice.
>
> Note: This just adds the start of the infrastructure for migration.
> It is actually disabled next to the FIXME in migrate_demote_page_ok().
>
> Signed-off-by: Dave Hansen
> Cc: Yang Shi
> Cc: David Rientjes
> Cc: Huang Ying
> Cc: Dan Williams
> Cc: osalvador
>
> --
>
> changes from 20210122:
>  * move from GFP_HIGHUSER -> GFP_HIGHUSER_MOVABLE (Ying)
>
> changes from 202010:
>  * add MR_NUMA_MISPLACED to trace MIGRATE_REASON define
>  * make migrate_demote_page_ok() static, remove 'sc' arg until
>    later patch
>  * remove unnecessary alloc_demote_page() hugetlb warning
>  * Simplify alloc_demote_page() gfp mask.  Depend on
>    __GFP_NORETRY to make it lightweight instead of fancier
>    stuff like leaving out __GFP_IO/FS.
>  * Allocate migration page with alloc_migration_target()
>    instead of allocating directly.
>
> changes from 20200730:
>  * Add another pass through shrink_page_list() when demotion
>    fails.
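
A note for anyone reading this patch in isolation: next_demotion_node(),
used throughout the below, is introduced earlier in this series, not here.
My understanding is that it is little more than a lookup into a per-node
demotion table that gets filled in when the node migration order is built.
A simplified sketch of the idea (not the exact code from that patch):

        #include <linux/nodemask.h>

        /*
         * At most one demotion target per source node.  NUMA_NO_NODE
         * means "this node has nowhere to demote to", which is also the
         * initial state before the migration order has been established.
         */
        static int node_demotion[MAX_NUMNODES] __read_mostly = {
                [0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE
        };

        /* Return the node that pages reclaimed from @node should move to. */
        int next_demotion_node(int node)
        {
                return node_demotion[node];
        }
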
> ---
>
>  b/include/linux/migrate.h        |   13 +++++-
>  b/include/trace/events/migrate.h |    3 -
>  b/mm/vmscan.c                    |   81 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 94 insertions(+), 3 deletions(-)
>
> diff -puN include/linux/migrate.h~demote-with-migrate_pages include/linux/migrate.h
> --- a/include/linux/migrate.h~demote-with-migrate_pages	2021-03-04 15:35:56.471806429 -0800
> +++ b/include/linux/migrate.h	2021-03-04 15:35:56.479806429 -0800
> @@ -27,6 +27,7 @@ enum migrate_reason {
>  	MR_MEMPOLICY_MBIND,
>  	MR_NUMA_MISPLACED,
>  	MR_CONTIG_RANGE,
> +	MR_DEMOTION,
>  	MR_TYPES
>  };
>
> @@ -58,8 +59,8 @@ extern int migrate_page_move_mapping(str
>
>  static inline void putback_movable_pages(struct list_head *l) {}
>  static inline int migrate_pages(struct list_head *l, new_page_t new,
> -		unsigned long private, enum migrate_mode mode, int reason,
> -		unsigned int *nr_succeeded)
> +		free_page_t free, unsigned long private, enum migrate_mode mode,
> +		int reason, unsigned int *nr_succeeded)
>  	{ return -ENOSYS; }
>  static inline struct page *alloc_migration_target(struct page *page,
>  		unsigned long private)
>
> @@ -196,6 +197,14 @@ struct migrate_vma {
>  int migrate_vma_setup(struct migrate_vma *args);
>  void migrate_vma_pages(struct migrate_vma *migrate);
>  void migrate_vma_finalize(struct migrate_vma *migrate);
> +int next_demotion_node(int node);
> +
> +#else /* CONFIG_MIGRATION disabled: */
> +
> +static inline int next_demotion_node(int node)
> +{
> +	return NUMA_NO_NODE;
> +}
>
>  #endif /* CONFIG_MIGRATION */
>
> diff -puN include/trace/events/migrate.h~demote-with-migrate_pages include/trace/events/migrate.h
> --- a/include/trace/events/migrate.h~demote-with-migrate_pages	2021-03-04 15:35:56.473806429 -0800
> +++ b/include/trace/events/migrate.h	2021-03-04 15:35:56.479806429 -0800
> @@ -20,7 +20,8 @@
>  	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
>  	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
>  	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
> -	EMe(MR_CONTIG_RANGE,	"contig_range")
> +	EM( MR_CONTIG_RANGE,	"contig_range")			\
> +	EMe(MR_DEMOTION,	"demotion")
>
>  /*
>   * First define the enums in the above macros to be exported to userspace
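
In case the EM()/EMe() churn above looks odd: those are the usual
trace-header list macros, where EMe() marks the last entry, so appending
MR_DEMOTION forces MR_CONTIG_RANGE to switch from EMe() to EM() and to
grow a line-continuation backslash.  The kernel expands the list once
into TRACE_DEFINE_ENUM() calls and once into the table that
__print_symbolic() consumes; here is a toy userspace illustration of the
same pattern (not the kernel's actual macros):

        #include <stdio.h>

        /* The list is written once; EMe() marks the final entry. */
        #define MIGRATE_REASON                                  \
                EM( MR_CONTIG_RANGE,    "contig_range")         \
                EMe(MR_DEMOTION,        "demotion")

        /* First expansion: build the enum. */
        #define EM(a, b)        a,
        #define EMe(a, b)       a
        enum migrate_reason { MIGRATE_REASON };
        #undef EM
        #undef EMe

        /* Second expansion: build the matching name table. */
        #define EM(a, b)        b,
        #define EMe(a, b)       b
        static const char * const migrate_reason_names[] = { MIGRATE_REASON };
        #undef EM
        #undef EMe

        int main(void)
        {
                /* Prints "demotion". */
                printf("%s\n", migrate_reason_names[MR_DEMOTION]);
                return 0;
        }
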
> diff -puN mm/vmscan.c~demote-with-migrate_pages mm/vmscan.c
> --- a/mm/vmscan.c~demote-with-migrate_pages	2021-03-04 15:35:56.475806429 -0800
> +++ b/mm/vmscan.c	2021-03-04 15:35:56.482806429 -0800
> @@ -41,6 +41,7 @@
>  #include <linux/kthread.h>
>  #include <linux/freezer.h>
>  #include <linux/memcontrol.h>
> +#include <linux/migrate.h>
>  #include <linux/delayacct.h>
>  #include <linux/sysctl.h>
>  #include <linux/oom.h>
> @@ -1034,6 +1035,23 @@ static enum page_references page_check_r
>  	return PAGEREF_RECLAIM;
>  }
>
> +static bool migrate_demote_page_ok(struct page *page)
> +{
> +	int next_nid = next_demotion_node(page_to_nid(page));
> +
> +	VM_BUG_ON_PAGE(!PageLocked(page), page);
> +	VM_BUG_ON_PAGE(PageHuge(page), page);
> +	VM_BUG_ON_PAGE(PageLRU(page), page);
> +
> +	if (next_nid == NUMA_NO_NODE)
> +		return false;
> +	if (PageTransHuge(page) && !thp_migration_supported())
> +		return false;
> +
> +	// FIXME: actually enable this later in the series
> +	return false;
> +}
> +
>  /* Check if a page is dirty or under writeback */
>  static void page_check_dirty_writeback(struct page *page,
>  			bool *dirty, bool *writeback)
> @@ -1064,6 +1082,45 @@ static void page_check_dirty_writeback(s
>  	mapping->a_ops->is_dirty_writeback(page, dirty, writeback);
>  }
>
> +static struct page *alloc_demote_page(struct page *page, unsigned long node)
> +{
> +	struct migration_target_control mtc = {
> +		/*
> +		 * Fail the allocation quickly and quietly.  When this
> +		 * happens, 'page' will likely just be discarded instead
> +		 * of migrated.
> +		 */
> +		.gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_NORETRY | __GFP_NOWARN,
> +		.nid = node

I recall I pointed out that __GFP_THISNODE is needed to guarantee the
allocation doesn't fall back to another node.  It seems that is still
not addressed here -- or is it guaranteed some other way?

> +	};
> +
> +	return alloc_migration_target(page, (unsigned long)&mtc);
> +}
> +
> +/*
> + * Take pages on @demote_pages and attempt to demote them to
> + * another node.  Pages which are not demoted are left on
> + * @demote_pages.
> + */
> +static unsigned int demote_page_list(struct list_head *demote_pages,
> +				     struct pglist_data *pgdat,
> +				     struct scan_control *sc)
> +{
> +	int target_nid = next_demotion_node(pgdat->node_id);
> +	unsigned int nr_succeeded = 0;
> +	int err;
> +
> +	if (list_empty(demote_pages))
> +		return 0;
> +
> +	/* Demotion ignores all cpuset and mempolicy settings */
> +	err = migrate_pages(demote_pages, alloc_demote_page, NULL,
> +			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
> +			    &nr_succeeded);
> +
> +	return nr_succeeded;
> +}
> +
>  /*
>   * shrink_page_list() returns the number of reclaimed pages
>   */
> @@ -1075,12 +1132,15 @@ static unsigned int shrink_page_list(str
>  {
>  	LIST_HEAD(ret_pages);
>  	LIST_HEAD(free_pages);
> +	LIST_HEAD(demote_pages);
>  	unsigned int nr_reclaimed = 0;
>  	unsigned int pgactivate = 0;
> +	bool do_demote_pass = true;
>
>  	memset(stat, 0, sizeof(*stat));
>  	cond_resched();
>
> +retry:
>  	while (!list_empty(page_list)) {
>  		struct address_space *mapping;
>  		struct page *page;
> @@ -1230,6 +1290,16 @@ static unsigned int shrink_page_list(str
>  		}
>
>  		/*
> +		 * Before reclaiming the page, try to relocate
> +		 * its contents to another node.
> +		 */
> +		if (do_demote_pass && migrate_demote_page_ok(page)) {
> +			list_add(&page->lru, &demote_pages);
> +			unlock_page(page);
> +			continue;
> +		}
> +
> +		/*
>  		 * Anonymous process memory has backing store?
>  		 * Try to allocate it some swap space here.
>  		 * Lazyfree page could be freed directly
> @@ -1479,6 +1549,17 @@ keep:
>  		list_add(&page->lru, &ret_pages);
>  		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
>  	}
> +	/* 'page_list' is always empty here */
> +
> +	/* Migrate pages selected for demotion */
> +	nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
> +	/* Pages that could not be demoted are still in @demote_pages */
> +	if (!list_empty(&demote_pages)) {
> +		/* Pages which failed to demote go back on @page_list for retry: */
> +		list_splice_init(&demote_pages, page_list);
> +		do_demote_pass = false;
> +		goto retry;
> +	}
>
>  	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
>
> _
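
To make the __GFP_THISNODE question above concrete, the variant I have
in mind is just the same function with the flag added to the mask (an
untested sketch):

        static struct page *alloc_demote_page(struct page *page, unsigned long node)
        {
                struct migration_target_control mtc = {
                        /*
                         * Fail quickly, quietly, and only on the intended
                         * target node: __GFP_THISNODE forbids falling back
                         * to other nodes, so a failed demotion becomes a
                         * normal reclaim instead of a migration to some
                         * arbitrary node.
                         */
                        .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE |
                                    __GFP_NORETRY | __GFP_NOWARN,
                        .nid = node
                };

                return alloc_migration_target(page, (unsigned long)&mtc);
        }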