From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: Ying Huang <ying.huang@intel.com>,
linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Greg Thelen <gthelen@google.com>, Yang Shi <shy828301@gmail.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Tim C Chen <tim.c.chen@intel.com>,
Brice Goglin <brice.goglin@gmail.com>,
Michal Hocko <mhocko@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Hesham Almatary <hesham.almatary@huawei.com>,
Dave Hansen <dave.hansen@intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Alistair Popple <apopple@nvidia.com>,
Dan Williams <dan.j.williams@intel.com>,
Feng Tang <feng.tang@intel.com>,
Jagdish Gediya <jvgediya@linux.ibm.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
David Rientjes <rientjes@google.com>
Subject: Re: [RFC PATCH v4 7/7] mm/demotion: Demote pages according to allocation fallback order
Date: Mon, 06 Jun 2022 11:51:27 +0530
Message-ID: <87leua8g7s.fsf@linux.ibm.com>
In-Reply-To: <65919df6b3302741780ff6fa69e497af06a9825e.camel@intel.com>
Ying Huang <ying.huang@intel.com> writes:
.....
>> > > >
>> > > > https://lore.kernel.org/lkml/69f2d063a15f8c4afb4688af7b7890f32af55391.camel@intel.com/
>> > > >
>> > > > That is, something like below,
>> > > >
>> > > > static struct page *alloc_demote_page(struct page *page, unsigned long node)
>> > > > {
>> > > > 	struct page *target_page;
>> > > > 	nodemask_t allowed_mask;
>> > > > 	struct migration_target_control mtc = {
>> > > > 		/*
>> > > > 		 * Allocate from 'node', or fail quickly and quietly.
>> > > > 		 * When this happens, 'page' will likely just be discarded
>> > > > 		 * instead of migrated.
>> > > > 		 */
>> > > > 		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
>> > > > 			    __GFP_THISNODE | __GFP_NOWARN |
>> > > > 			    __GFP_NOMEMALLOC | GFP_NOWAIT,
>> > > > 		.nid = node
>> > > > 	};
>> > > >
>> > > > 	target_page = alloc_migration_target(page, (unsigned long)&mtc);
>> > > > 	if (target_page)
>> > > > 		return target_page;
>> > > >
>> > > > 	mtc.gfp_mask &= ~__GFP_THISNODE;
>> > > > 	mtc.nmask = &allowed_mask;
>> > > >
>> > > > 	return alloc_migration_target(page, (unsigned long)&mtc);
>> > > > }
>> > >
>> > > I skipped doing this in v5 because I was not sure this was really
>> > > what we wanted.
>> >
>> > I think so. And this is the original behavior. We should keep the
>> > original behavior as much as possible, then make changes if necessary.
>> >
>>
>> That is the reason I split the new page allocation out into a separate
>> patch. The previous discussion on this topic didn't reach a conclusion
>> on whether we really need to do the above or not:
>> https://lore.kernel.org/lkml/CAAPL-u9endrWf_aOnPENDPdvT-2-YhCAeJ7ONGckGnXErTLOfQ@mail.gmail.com/
>
> Please check the later emails in the thread you referenced. Both Wei and
> I agree that the use case needs to be supported; we just didn't reach
> consensus about how to implement it. If you think Wei's solution is
> better (referenced below), you can try to do that too, although I
> think my proposed implementation is much simpler.
How about the details below?
diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 79bd8d26feb2..cd6e71f702ad 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -21,6 +21,7 @@ void node_remove_from_memory_tier(int node);
 int node_get_memory_tier_id(int node);
 int node_set_memory_tier(int node, int tier);
 int node_reset_memory_tier(int node, int tier);
+void node_get_allowed_targets(int node, nodemask_t *targets);
 #else
 #define numa_demotion_enabled	false
 static inline int next_demotion_node(int node)
@@ -28,6 +29,10 @@ static inline int next_demotion_node(int node)
 	return NUMA_NO_NODE;
 }
 
+static inline void node_get_allowed_targets(int node, nodemask_t *targets)
+{
+	*targets = NODE_MASK_NONE;
+}
 #endif	/* CONFIG_TIERED_MEMORY */
 
 #endif
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index b4e72b672d4d..592d939ec28d 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -18,6 +18,7 @@ struct memory_tier {
 
 struct demotion_nodes {
 	nodemask_t preferred;
+	nodemask_t allowed;
 };
 
 #define to_memory_tier(device) container_of(device, struct memory_tier, dev)
@@ -378,6 +379,25 @@ int node_set_memory_tier(int node, int tier)
 }
 EXPORT_SYMBOL_GPL(node_set_memory_tier);
 
+void node_get_allowed_targets(int node, nodemask_t *targets)
+{
+	/*
+	 * node_demotion[] is updated without excluding this
+	 * function from running.
+	 *
+	 * If a node is moving to a lower tier, the modifications
+	 * in node_demotion[] are still valid for this node. If a
+	 * node is moving to a higher tier, the moving node may be
+	 * used once more as a demotion target, which should be
+	 * acceptable, so RCU protection is enough here.
+	 */
+	rcu_read_lock();
+
+	*targets = node_demotion[node].allowed;
+
+	rcu_read_unlock();
+}
+
 /**
  * next_demotion_node() - Get the next node in the demotion path
  * @node: The starting node to lookup the next node
@@ -437,8 +457,10 @@ static void __disable_all_migrate_targets(void)
 {
 	int node;
 
-	for_each_node_mask(node, node_states[N_MEMORY])
+	for_each_node_mask(node, node_states[N_MEMORY]) {
 		node_demotion[node].preferred = NODE_MASK_NONE;
+		node_demotion[node].allowed = NODE_MASK_NONE;
+	}
 }
 
 static void disable_all_migrate_targets(void)
@@ -465,7 +487,7 @@ static void establish_migration_targets(void)
 	struct demotion_nodes *nd;
 	int target = NUMA_NO_NODE, node;
 	int distance, best_distance;
-	nodemask_t used;
+	nodemask_t used, allowed = NODE_MASK_NONE;
 
 	if (!node_demotion)
 		return;
@@ -511,6 +533,29 @@ static void establish_migration_targets(void)
 			}
 		} while (1);
 	}
+	/*
+	 * Now build the allowed mask for each node, collecting the node
+	 * masks of all memory tiers below it. This allows us to fall back
+	 * demotion page allocation to a set of nodes that is closer to the
+	 * preferred node selected above.
+	 */
+	list_for_each_entry(memtier, &memory_tiers, list)
+		nodes_or(allowed, allowed, memtier->nodelist);
+	/*
+	 * Remove nodes not yet in N_MEMORY.
+	 */
+	nodes_and(allowed, node_states[N_MEMORY], allowed);
+
+	list_for_each_entry(memtier, &memory_tiers, list) {
+		/*
+		 * Keep removing the current tier from the allowed nodes.
+		 * This removes all nodes in the current and higher memory
+		 * tiers from the allowed mask.
+		 */
+		nodes_andnot(allowed, allowed, memtier->nodelist);
+		for_each_node_mask(node, memtier->nodelist)
+			node_demotion[node].allowed = allowed;
+	}
 }
 
 /*
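
To make the allowed-mask construction above easier to follow, here is a
minimal userspace sketch of the same two passes. This is only an
illustration under made-up assumptions: plain uint64_t bitmasks stand in
for nodemask_t, and the tier layout (DRAM nodes {0,1}, a CXL node {2}, a
PMEM node {3}) is invented:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical tiers, top tier first: DRAM {0,1}, CXL {2}, PMEM {3} */
	uint64_t tiers[] = { 0x3, 0x4, 0x8 };
	uint64_t n_memory = 0xf;	/* nodes online with memory */
	uint64_t allowed = 0;
	int i, ntiers = sizeof(tiers) / sizeof(tiers[0]);

	/* Pass 1: union of all tiers, restricted to N_MEMORY. */
	for (i = 0; i < ntiers; i++)
		allowed |= tiers[i];
	allowed &= n_memory;

	/*
	 * Pass 2: peel each tier off from the top, so that a tier's nodes
	 * are only allowed to demote to strictly lower tiers.
	 */
	for (i = 0; i < ntiers; i++) {
		allowed &= ~tiers[i];
		printf("tier %d (nodes 0x%llx): allowed mask 0x%llx\n",
		       i, (unsigned long long)tiers[i],
		       (unsigned long long)allowed);
	}
	return 0;
}

With this layout the DRAM nodes end up with allowed mask 0xc (nodes 2
and 3), the CXL node with 0x8 (node 3), and the PMEM node with the empty
mask, which is the intended fallback ordering.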
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3a8f78277f99..b0792d838efb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1460,19 +1460,32 @@ static void folio_check_dirty_writeback(struct folio *folio,
 		mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
 }
 
-static struct page *alloc_demote_page(struct page *page, unsigned long node)
+static struct page *alloc_demote_page(struct page *page, unsigned long private)
 {
-	struct migration_target_control mtc = {
-		/*
-		 * Allocate from 'node', or fail quickly and quietly.
-		 * When this happens, 'page' will likely just be discarded
-		 * instead of migrated.
-		 */
-		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
-			    __GFP_THISNODE | __GFP_NOWARN |
-			    __GFP_NOMEMALLOC | GFP_NOWAIT,
-		.nid = node
-	};
+	struct page *target_page;
+	nodemask_t *allowed_mask;
+	struct migration_target_control *mtc;
+
+	mtc = (struct migration_target_control *)private;
+
+	allowed_mask = mtc->nmask;
+	/*
+	 * Make sure we allocate from the target node first, also trying to
+	 * reclaim pages from the target node via kswapd if we are low on free
+	 * memory on the target node. If we don't do this and we have low free
+	 * memory on the target memtier, we would start allocating pages from
+	 * higher memory tiers without even forcing a demotion of cold pages
+	 * from the target memtier. This can result in the kernel placing hot
+	 * pages in higher memory tiers.
+	 */
+	mtc->nmask = NULL;
+	mtc->gfp_mask |= __GFP_THISNODE;
+	target_page = alloc_migration_target(page, (unsigned long)mtc);
+	if (target_page)
+		return target_page;
+
+	mtc->gfp_mask &= ~__GFP_THISNODE;
+	mtc->nmask = allowed_mask;
 
-	return alloc_migration_target(page, (unsigned long)&mtc);
+	return alloc_migration_target(page, (unsigned long)mtc);
 }
 
@@ -1487,6 +1500,19 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
 {
 	int target_nid = next_demotion_node(pgdat->node_id);
 	unsigned int nr_succeeded;
+	nodemask_t allowed_mask;
+
+	struct migration_target_control mtc = {
+		/*
+		 * Allocate from 'target_nid', or fail quickly and quietly.
+		 * When this happens, 'page' will likely just be discarded
+		 * instead of migrated.
+		 */
+		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_NOWARN |
+			__GFP_NOMEMALLOC | GFP_NOWAIT,
+		.nid = target_nid,
+		.nmask = &allowed_mask
+	};
 
 	if (list_empty(demote_pages))
 		return 0;
@@ -1494,10 +1520,12 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
 	if (target_nid == NUMA_NO_NODE)
 		return 0;
 
+	node_get_allowed_targets(pgdat->node_id, &allowed_mask);
+
 	/* Demotion ignores all cpuset and mempolicy settings */
 	migrate_pages(demote_pages, alloc_demote_page, NULL,
-		      target_nid, MIGRATE_ASYNC, MR_DEMOTION,
-		      &nr_succeeded);
+		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
+		      &nr_succeeded);
 
 	if (current_is_kswapd())
 		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
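
As a side note, the two-step policy in alloc_demote_page() above can be
summarized with a small userspace sketch. try_alloc_on() is a made-up
stand-in for alloc_migration_target(), and the node numbers and
free-memory state are invented for illustration:

#include <stdio.h>
#include <stdint.h>

/* Pretend only node 3 currently has free memory. */
static int try_alloc_on(uint64_t nodes)
{
	return (nodes & (1ULL << 3)) ? 3 : -1;
}

int main(void)
{
	int target_nid = 2;	/* preferred demotion node */
	uint64_t allowed_mask = (1ULL << 2) | (1ULL << 3);
	int nid;

	/* Step 1: the __GFP_THISNODE attempt, preferred node only. */
	nid = try_alloc_on(1ULL << target_nid);
	if (nid < 0)
		/* Step 2: drop THISNODE and widen to the allowed mask. */
		nid = try_alloc_on(allowed_mask);

	printf("allocated on node %d\n", nid);
	return 0;
}

Here the first attempt on node 2 fails and the fallback succeeds on
node 3, instead of the demotion being dropped as it would be with the
old single-node behavior.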