[PATCH v11 0/2] Improved Memory Tier Creation for CPUless NUMA Nodes

From: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
Date: 2024-04-05 0:07 UTC
To: Jonathan Cameron, Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj,
    john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
    linux-kernel, linux-mm
Cc: Ho-Ren (Jack) Chuang, qemu-devel

When a memory device, such as CXL 1.1 type 3 memory, is emulated as normal
memory (E820_TYPE_RAM), the memory device is indistinguishable from normal
DRAM in terms of memory tiering with the current implementation. The current
memory tiering assigns all detected normal memory nodes to the same DRAM
tier. As a result, normal memory devices with different attributes cannot be
assigned to the correct memory tier, making it impossible to migrate pages
between different types of memory.
https://lore.kernel.org/linux-mm/PH0PR08MB7955E9F08CCB64F23963B5C3A860A@PH0PR08MB7955.namprd08.prod.outlook.com/T/

This patchset resolves the issue automatically. It delays the initialization
of memory tiers for CPUless NUMA nodes until they obtain HMAT information
and until all devices are initialized at boot time, eliminating the need for
user intervention. If no HMAT is specified, it falls back to using
`default_dram_type`.

Example use case: we have CXL memory on the host and create VMs with a new
system memory device backed by host CXL memory. We inject CXL memory
performance attributes through QEMU, and the guest then sees memory nodes
with performance attributes in HMAT. With this change, the guest kernel can
construct the correct memory tiering for those memory nodes.

Changelog:

- v11: Thanks to comments from Jonathan,
  * Replace `mutex_lock()` with `guard(mutex)()`
  * Reorder some modifications within the patchset
  * Rewrite the code for improved readability and fix alignment issues
  * Pass all strict rules in checkpatch.pl
- v10: Thanks to Andrew's and SeongJae's comments,
  * Address kunit compilation errors
  * Resolve the bug of not returning the correct error code in
    `mt_perf_to_adistance`
  * https://lore.kernel.org/lkml/20240402001739.2521623-1-horenchuang@bytedance.com/T/#u
- v9: Thanks to Ying's comments,
  * Address corner cases in `memory_tier_late_init`
  * https://lore.kernel.org/lkml/20240329053353.309557-1-horenchuang@bytedance.com/T/#u
- v8:
  * Fix email format
  * https://lore.kernel.org/lkml/20240329004815.195476-1-horenchuang@bytedance.com/T/#u
- v7:
  * Add Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
- v6: Thanks to Ying's comments,
  * Move `default_dram_perf_lock` to the function's beginning for clarity
  * Fix double unlocking in v5
  * https://lore.kernel.org/lkml/20240327072729.3381685-1-horenchuang@bytedance.com/T/#u
- v5: Thanks to Ying's comments,
  * Add comments about what is protected by `default_dram_perf_lock`
  * Fix an uninitialized pointer mtype
  * Slightly shorten the time holding `default_dram_perf_lock`
  * Fix a deadlock bug in `mt_perf_to_adistance`
  * https://lore.kernel.org/lkml/20240327041646.3258110-1-horenchuang@bytedance.com/T/#u
- v4: Thanks to Ying's comments,
  * Remove redundant code
  * Reorganize patches accordingly
  * https://lore.kernel.org/lkml/20240322070356.315922-1-horenchuang@bytedance.com/T/#u
- v3: Thanks to Ying's comments,
  * Make the newly added code independent of HMAT
  * Upgrade set_node_memory_tier to support more cases
  * Put all non-driver-initialized memory types into default_memory_types
    instead of using hmat_memory_types
  * find_alloc_memory_type -> mt_find_alloc_memory_type
  * https://lore.kernel.org/lkml/20240320061041.3246828-1-horenchuang@bytedance.com/T/#u
- v2: Thanks to Ying's comments,
  * Rewrite cover letter & patch description
  * Rename functions, don't use _hmat
  * Abstract common functions into find_alloc_memory_type()
  * Use the expected way to use set_node_memory_tier instead of modifying it
  * https://lore.kernel.org/lkml/20240312061729.1997111-1-horenchuang@bytedance.com/T/#u
- v1:
  * https://lore.kernel.org/lkml/20240301082248.3456086-1-horenchuang@bytedance.com/T/#u

Ho-Ren (Jack) Chuang (2):
  memory tier: dax/kmem: introduce an abstract layer for finding,
    allocating, and putting memory types
  memory tier: create CPUless memory tiers after obtaining HMAT info

 drivers/dax/kmem.c           |  30 ++-------
 include/linux/memory-tiers.h |  13 ++++
 mm/memory-tiers.c            | 123 ++++++++++++++++++++++++++++-------
 3 files changed, 116 insertions(+), 50 deletions(-)

--
Ho-Ren (Jack) Chuang
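The guest-VM use case in the cover letter can be exercised with QEMU's HMAT support. The following is an illustrative sketch only, not a command line from this thread: the `-numa hmat-lb` options are QEMU's interface for injecting HMAT latency/bandwidth attributes, and all sizes, node IDs, and latency/bandwidth values here are made-up placeholders.

```shell
# Illustrative sketch: one node with CPUs (node 0) and one CPUless
# memory node (node 1) standing in for host-CXL-backed memory, with
# HMAT attributes injected so the guest kernel can tier the nodes.
qemu-system-x86_64 -machine hmat=on \
  -m 4G -smp 2 \
  -object memory-backend-ram,size=2G,id=m0 \
  -object memory-backend-ram,size=2G,id=m1 \
  -numa node,nodeid=0,memdev=m0,cpus=0-1 \
  -numa node,nodeid=1,memdev=m1 \
  -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=10ns \
  -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=10G \
  -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=20ns \
  -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=5G
```

With this patchset, node 1 should land in a slower tier than node 0 once the guest parses HMAT, instead of both being lumped into the DRAM tier.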
[PATCH v11 1/2] memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types

From: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
Date: 2024-04-05 0:07 UTC
To: Jonathan Cameron, Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj,
    john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
    linux-kernel, linux-mm
Cc: Ho-Ren (Jack) Chuang, qemu-devel

Since different memory devices require finding, allocating, and putting
memory types, these common steps are abstracted in this patch, improving
the scalability and conciseness of the code.
Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
---
 drivers/dax/kmem.c           | 30 ++++--------------------------
 include/linux/memory-tiers.h | 13 +++++++++++++
 mm/memory-tiers.c            | 29 +++++++++++++++++++++++++++++
 3 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index 42ee360cf4e3..4fe9d040e375 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -55,36 +55,14 @@ static LIST_HEAD(kmem_memory_types);

 static struct memory_dev_type *kmem_find_alloc_memory_type(int adist)
 {
-	bool found = false;
-	struct memory_dev_type *mtype;
-
-	mutex_lock(&kmem_memory_type_lock);
-	list_for_each_entry(mtype, &kmem_memory_types, list) {
-		if (mtype->adistance == adist) {
-			found = true;
-			break;
-		}
-	}
-	if (!found) {
-		mtype = alloc_memory_type(adist);
-		if (!IS_ERR(mtype))
-			list_add(&mtype->list, &kmem_memory_types);
-	}
-	mutex_unlock(&kmem_memory_type_lock);
-
-	return mtype;
+	guard(mutex)(&kmem_memory_type_lock);
+	return mt_find_alloc_memory_type(adist, &kmem_memory_types);
 }

 static void kmem_put_memory_types(void)
 {
-	struct memory_dev_type *mtype, *mtn;
-
-	mutex_lock(&kmem_memory_type_lock);
-	list_for_each_entry_safe(mtype, mtn, &kmem_memory_types, list) {
-		list_del(&mtype->list);
-		put_memory_type(mtype);
-	}
-	mutex_unlock(&kmem_memory_type_lock);
+	guard(mutex)(&kmem_memory_type_lock);
+	mt_put_memory_types(&kmem_memory_types);
 }

 static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 69e781900082..0d70788558f4 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -48,6 +48,9 @@ int mt_calc_adistance(int node, int *adist);
 int mt_set_default_dram_perf(int nid, struct access_coordinate *perf,
 			     const char *source);
 int mt_perf_to_adistance(struct access_coordinate *perf, int *adist);
+struct memory_dev_type *mt_find_alloc_memory_type(int adist,
+						  struct list_head *memory_types);
+void mt_put_memory_types(struct list_head *memory_types);
 #ifdef CONFIG_MIGRATION
 int next_demotion_node(int node);
 void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
@@ -136,5 +139,15 @@ static inline int mt_perf_to_adistance(struct access_coordinate *perf, int *adis
 {
 	return -EIO;
 }
+
+static inline struct memory_dev_type *mt_find_alloc_memory_type(int adist,
+								struct list_head *memory_types)
+{
+	return NULL;
+}
+
+static inline void mt_put_memory_types(struct list_head *memory_types)
+{
+}
 #endif	/* CONFIG_NUMA */
 #endif  /* _LINUX_MEMORY_TIERS_H */
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 0537664620e5..516b144fd45a 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -623,6 +623,35 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
 }
 EXPORT_SYMBOL_GPL(clear_node_memory_type);

+struct memory_dev_type *mt_find_alloc_memory_type(int adist, struct list_head *memory_types)
+{
+	struct memory_dev_type *mtype;
+
+	list_for_each_entry(mtype, memory_types, list)
+		if (mtype->adistance == adist)
+			return mtype;
+
+	mtype = alloc_memory_type(adist);
+	if (IS_ERR(mtype))
+		return mtype;
+
+	list_add(&mtype->list, memory_types);
+
+	return mtype;
+}
+EXPORT_SYMBOL_GPL(mt_find_alloc_memory_type);
+
+void mt_put_memory_types(struct list_head *memory_types)
+{
+	struct memory_dev_type *mtype, *mtn;
+
+	list_for_each_entry_safe(mtype, mtn, memory_types, list) {
+		list_del(&mtype->list);
+		put_memory_type(mtype);
+	}
+}
+EXPORT_SYMBOL_GPL(mt_put_memory_types);
+
 static void dump_hmem_attrs(struct access_coordinate *coord, const char *prefix)
 {
 	pr_info(
--
Ho-Ren (Jack) Chuang
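The find-or-alloc pattern that `mt_find_alloc_memory_type()` factors out of the kmem driver can be sketched in plain userspace C. This is a hypothetical illustration, not the kernel code: the list handling below uses a hand-rolled singly linked list instead of `<linux/list.h>`, `malloc()` stands in for `alloc_memory_type()`, and `free()` stands in for dropping the `put_memory_type()` refcount.

```c
#include <stdlib.h>

/* Userspace sketch of the abstraction added by this patch: walk a
 * list of memory types, return an existing entry with a matching
 * abstract distance, or allocate and link a new one. */
struct memory_dev_type {
	int adistance;
	struct memory_dev_type *next;
};

static struct memory_dev_type *find_alloc_memory_type(int adist,
		struct memory_dev_type **memory_types)
{
	struct memory_dev_type *mtype;

	for (mtype = *memory_types; mtype; mtype = mtype->next)
		if (mtype->adistance == adist)
			return mtype;	/* reuse the existing type */

	mtype = malloc(sizeof(*mtype));
	if (!mtype)
		return NULL;	/* the kernel code returns ERR_PTR() */

	mtype->adistance = adist;
	mtype->next = *memory_types;
	*memory_types = mtype;
	return mtype;
}

static void put_memory_types(struct memory_dev_type **memory_types)
{
	/* Unlink and release every type; the kernel variant calls
	 * put_memory_type() so shared types survive other holders. */
	while (*memory_types) {
		struct memory_dev_type *mtype = *memory_types;

		*memory_types = mtype->next;
		free(mtype);
	}
}
```

Callers supply their own list head (as kmem supplies `kmem_memory_types`), which is what lets patch 2 reuse the same helpers with `default_memory_types`.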
Re: [PATCH v11 1/2] memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types

From: Jonathan Cameron
Date: 2024-04-05 13:56 UTC
To: Ho-Ren (Jack) Chuang
Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john,
    Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
    linux-kernel, linux-mm, Ho-Ren (Jack) Chuang, qemu-devel

On Fri, 5 Apr 2024 00:07:05 +0000
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:

> Since different memory devices require finding, allocating, and putting
> memory types, these common steps are abstracted in this patch,
> enhancing the scalability and conciseness of the code.
>
> Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawie.com>
Re: [External] Re: [PATCH v11 1/2] memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types

From: Ho-Ren (Jack) Chuang
Date: 2024-04-09 19:00 UTC
To: Jonathan Cameron
Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john,
    Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
    linux-kernel, linux-mm, Ho-Ren (Jack) Chuang, qemu-devel

Hi Jonathan,

On Fri, Apr 5, 2024 at 6:56 AM Jonathan Cameron
<Jonathan.Cameron@huawei.com> wrote:
>
> On Fri, 5 Apr 2024 00:07:05 +0000
> "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:
>
> > Since different memory devices require finding, allocating, and putting
> > memory types, these common steps are abstracted in this patch,
> > enhancing the scalability and conciseness of the code.
> >
> > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
> > Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawie.com>
>

Thank you for reviewing and for adding your "Reviewed-by"! I was wondering
whether I need to send a v12 and manually add this to the commit
description, or whether this is sufficient.

--
Best regards,
Ho-Ren (Jack) Chuang
莊賀任
Re: [External] Re: [PATCH v11 1/2] memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types

From: Andrew Morton
Date: 2024-04-09 21:50 UTC
To: Ho-Ren (Jack) Chuang
Cc: Jonathan Cameron, Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj,
    john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, nvdimm, linux-cxl, linux-kernel, linux-mm,
    Ho-Ren (Jack) Chuang, qemu-devel

On Tue, 9 Apr 2024 12:00:06 -0700 "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:

> Hi Jonathan,
>
> On Fri, Apr 5, 2024 at 6:56 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Fri, 5 Apr 2024 00:07:05 +0000
> > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:
> >
> > > Since different memory devices require finding, allocating, and putting
> > > memory types, these common steps are abstracted in this patch,
> > > enhancing the scalability and conciseness of the code.
> > >
> > > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
> > > Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
> > Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawie.com>
> >
> Thank you for reviewing and for adding your "Reviewed-by"!
> I was wondering if I need to send a v12 and manually add
> this to the commit description, or if this is sufficient.

I had added Jonathan's r-b to the mm.git copy of this patch.
Re: [External] Re: [PATCH v11 1/2] memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types

From: Ho-Ren (Jack) Chuang
Date: 2024-04-09 23:09 UTC
To: Andrew Morton
Cc: Jonathan Cameron, Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj,
    john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, nvdimm, linux-cxl, linux-kernel, linux-mm,
    Ho-Ren (Jack) Chuang, qemu-devel

On Tue, Apr 9, 2024 at 2:50 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 9 Apr 2024 12:00:06 -0700 "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:
>
> > Thank you for reviewing and for adding your "Reviewed-by"!
> > I was wondering if I need to send a v12 and manually add
> > this to the commit description, or if this is sufficient.
>
> I had added Jonathan's r-b to the mm.git copy of this patch.

Got it~ Thank you Andrew!

--
Best regards,
Ho-Ren (Jack) Chuang
莊賀任
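The v11 changelog notes that explicit `mutex_lock()`/`mutex_unlock()` pairs were replaced with `guard(mutex)()`, the kernel's scope-based lock guard. Roughly, it builds on the compiler's `cleanup` attribute so the unlock runs automatically on every exit path. The sketch below is a made-up userspace analogue using pthreads, not the kernel's `<linux/cleanup.h>`; the `guarded()` macro and `lookup()` function are illustrative names only.

```c
#include <pthread.h>

/* Userspace sketch of the guard(mutex)() idea: a variable with a
 * cleanup handler that unlocks when it goes out of scope, so early
 * returns cannot leak the lock. Requires GCC/Clang. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_unlocks;	/* instrumentation for the demo only */

static void unlock_on_exit(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
	nr_unlocks++;
}

#define guarded(m) \
	pthread_mutex_t *__guard __attribute__((cleanup(unlock_on_exit))) = (m); \
	pthread_mutex_lock(__guard)

static int lookup(int key)
{
	guarded(&lock);

	if (key < 0)
		return -1;	/* early return still unlocks */
	return key * 2;
}
```

This is why the converted functions in patch 1 and patch 2 can simply `return` in the middle of a locked section instead of jumping to an `out:` label.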
[PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info

From: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
Date: 2024-04-05 0:07 UTC
To: Jonathan Cameron, Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj,
    john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
    linux-kernel, linux-mm
Cc: Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang

The current implementation treats emulated memory devices, such as
CXL 1.1 type 3 memory, as normal DRAM when they are emulated as normal
memory (E820_TYPE_RAM). However, these emulated devices have different
characteristics from traditional DRAM, making it important to distinguish
them. Thus, we modify the tiered memory initialization process to introduce
a delay specifically for CPUless NUMA nodes. This delay ensures that the
memory tier initialization for these nodes is deferred until HMAT
information is obtained during the boot process. Finally, demotion tables
are recalculated at the end.

* late_initcall(memory_tier_late_init);
Some device drivers may have initialized memory tiers between
`memory_tier_init()` and `memory_tier_late_init()`, potentially bringing
online memory nodes and configuring memory tiers. They should be excluded
in the late init.

* Handle cases where there is no HMAT when creating memory tiers
There is a scenario where a CPUless node does not provide HMAT information.
If no HMAT is specified, it falls back to using the default DRAM tier.

* Introduce another new lock `default_dram_perf_lock` for adist calculation
In the current implementation, iterating through CPUless nodes requires
holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up
trying to acquire the same lock, leading to a potential deadlock.
Therefore, we propose introducing a standalone `default_dram_perf_lock` to
protect `default_dram_perf_*`. This approach not only avoids the deadlock
but also avoids holding one large lock for everything.

* Upgrade `set_node_memory_tier` to support additional cases, including
default DRAM, late CPUless, and hot-plugged initializations.
To cover hot-plugged memory nodes, `mt_calc_adistance()` and
`mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to
handle cases where memtype is not initialized and where HMAT information is
available.

* Introduce `default_memory_types` for those memory types that are not
initialized by device drivers.
Because late-initialized memory and default DRAM memory need to be managed,
a default memory type list is created to store all memory types that are
not initialized by device drivers, and to serve as a fallback.

Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
---
 mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 70 insertions(+), 24 deletions(-)

diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 516b144fd45a..6632102bd5c9 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -36,6 +36,11 @@ struct node_memory_type_map {
 static DEFINE_MUTEX(memory_tier_lock);
 static LIST_HEAD(memory_tiers);
+/*
+ * The list is used to store all memory types that are not created
+ * by a device driver.
+ */
+static LIST_HEAD(default_memory_types);
 static struct node_memory_type_map node_memory_types[MAX_NUMNODES];
 struct memory_dev_type *default_dram_type;
@@ -108,6 +113,8 @@ static struct demotion_nodes *node_demotion __read_mostly;
 static BLOCKING_NOTIFIER_HEAD(mt_adistance_algorithms);
+/* The lock is used to protect `default_dram_perf*` info and nid. */
+static DEFINE_MUTEX(default_dram_perf_lock);
 static bool default_dram_perf_error;
 static struct access_coordinate default_dram_perf;
 static int default_dram_perf_ref_nid = NUMA_NO_NODE;
@@ -505,7 +512,8 @@ static inline void __init_node_memory_type(int node, struct memory_dev_type *mem
 static struct memory_tier *set_node_memory_tier(int node)
 {
 	struct memory_tier *memtier;
-	struct memory_dev_type *memtype;
+	struct memory_dev_type *memtype = default_dram_type;
+	int adist = MEMTIER_ADISTANCE_DRAM;
 	pg_data_t *pgdat = NODE_DATA(node);
@@ -514,7 +522,16 @@ static struct memory_tier *set_node_memory_tier(int node)
 	if (!node_state(node, N_MEMORY))
 		return ERR_PTR(-EINVAL);

-	__init_node_memory_type(node, default_dram_type);
+	mt_calc_adistance(node, &adist);
+	if (!node_memory_types[node].memtype) {
+		memtype = mt_find_alloc_memory_type(adist, &default_memory_types);
+		if (IS_ERR(memtype)) {
+			memtype = default_dram_type;
+			pr_info("Failed to allocate a memory type. Fall back.\n");
+		}
+	}
+
+	__init_node_memory_type(node, memtype);
 	memtype = node_memory_types[node].memtype;
 	node_set(node, memtype->nodes);
@@ -652,6 +669,35 @@ void mt_put_memory_types(struct list_head *memory_types)
 }
 EXPORT_SYMBOL_GPL(mt_put_memory_types);

+/*
+ * This is invoked via `late_initcall()` to initialize memory tiers for
+ * CPU-less memory nodes after driver initialization, which is
+ * expected to provide `adistance` algorithms.
+ */
+static int __init memory_tier_late_init(void)
+{
+	int nid;
+
+	guard(mutex)(&memory_tier_lock);
+	for_each_node_state(nid, N_MEMORY) {
+		/*
+		 * Some device drivers may have initialized memory tiers
+		 * between `memory_tier_init()` and `memory_tier_late_init()`,
+		 * potentially bringing online memory nodes and
+		 * configuring memory tiers. Exclude them here.
+		 */
+		if (node_memory_types[nid].memtype)
+			continue;
+
+		set_node_memory_tier(nid);
+	}
+
+	establish_demotion_targets();
+
+	return 0;
+}
+late_initcall(memory_tier_late_init);
+
 static void dump_hmem_attrs(struct access_coordinate *coord, const char *prefix)
 {
 	pr_info(
@@ -663,25 +709,19 @@ static void dump_hmem_attrs(struct access_coordinate *coord, const char *prefix)
 int mt_set_default_dram_perf(int nid, struct access_coordinate *perf,
 			     const char *source)
 {
-	int rc = 0;
-
-	mutex_lock(&memory_tier_lock);
-	if (default_dram_perf_error) {
-		rc = -EIO;
-		goto out;
-	}
+	guard(mutex)(&default_dram_perf_lock);
+	if (default_dram_perf_error)
+		return -EIO;

 	if (perf->read_latency + perf->write_latency == 0 ||
-	    perf->read_bandwidth + perf->write_bandwidth == 0) {
-		rc = -EINVAL;
-		goto out;
-	}
+	    perf->read_bandwidth + perf->write_bandwidth == 0)
+		return -EINVAL;

 	if (default_dram_perf_ref_nid == NUMA_NO_NODE) {
 		default_dram_perf = *perf;
 		default_dram_perf_ref_nid = nid;
 		default_dram_perf_ref_source = kstrdup(source, GFP_KERNEL);
-		goto out;
+		return 0;
 	}

 	/*
@@ -709,27 +749,25 @@ int mt_set_default_dram_perf(int nid, struct access_coordinate *perf,
 		pr_info(
 "  disable default DRAM node performance based abstract distance algorithm.\n");
 		default_dram_perf_error = true;
-		rc = -EINVAL;
+		return -EINVAL;
 	}

-out:
-	mutex_unlock(&memory_tier_lock);
-	return rc;
+	return 0;
 }

 int mt_perf_to_adistance(struct access_coordinate *perf, int *adist)
 {
+	guard(mutex)(&default_dram_perf_lock);
 	if (default_dram_perf_error)
 		return -EIO;

-	if (default_dram_perf_ref_nid == NUMA_NO_NODE)
-		return -ENOENT;
-
 	if (perf->read_latency + perf->write_latency == 0 ||
 	    perf->read_bandwidth + perf->write_bandwidth == 0)
 		return -EINVAL;

-	mutex_lock(&memory_tier_lock);
+	if (default_dram_perf_ref_nid == NUMA_NO_NODE)
+		return -ENOENT;
+
 	/*
 	 * The abstract distance of a memory node is in direct proportion to
 	 * its memory latency (read + write) and inversely proportional to its
@@ -742,7 +780,6 @@ int mt_perf_to_adistance(struct access_coordinate *perf, int *adist)
 		(default_dram_perf.read_latency + default_dram_perf.write_latency) *
 		(default_dram_perf.read_bandwidth + default_dram_perf.write_bandwidth) /
 		(perf->read_bandwidth + perf->write_bandwidth);
-	mutex_unlock(&memory_tier_lock);

 	return 0;
 }
@@ -855,7 +892,8 @@ static int __init memory_tier_init(void)
 	 * For now we can have 4 faster memory tiers with smaller adistance
 	 * than default DRAM tier.
 	 */
-	default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM);
+	default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM,
+						      &default_memory_types);
 	if (IS_ERR(default_dram_type))
 		panic("%s() failed to allocate default DRAM tier\n", __func__);

@@ -865,6 +903,14 @@ static int __init memory_tier_init(void)
 	 * types assigned.
 	 */
 	for_each_node_state(node, N_MEMORY) {
+		if (!node_state(node, N_CPU))
+			/*
+			 * Defer memory tier initialization on
+			 * CPUless numa nodes. These will be initialized
+			 * after firmware and devices are initialized.
+			 */
+			continue;
+
 		memtier = set_node_memory_tier(node);
 		if (IS_ERR(memtier))
 			/*
--
Ho-Ren (Jack) Chuang
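The proportionality that `mt_perf_to_adistance()` uses in the patch above (abstract distance scales with read+write latency and inversely with read+write bandwidth, relative to the default DRAM reference) can be checked with a small userspace sketch. This is an illustration only: the constant 576 merely stands in for `MEMTIER_ADISTANCE_DRAM`, and `struct perf` is a made-up stand-in for `struct access_coordinate`.

```c
/* Sketch of the adistance formula from mt_perf_to_adistance():
 *   adist = ADIST_DRAM * (lat_r + lat_w) / (dram_lat_r + dram_lat_w)
 *                      * (dram_bw_r + dram_bw_w) / (bw_r + bw_w)
 * Slower and narrower than DRAM => larger adistance => slower tier. */
struct perf {
	unsigned int read_latency, write_latency;     /* e.g. ns */
	unsigned int read_bandwidth, write_bandwidth; /* e.g. MB/s */
};

#define ADISTANCE_DRAM 576	/* stand-in for MEMTIER_ADISTANCE_DRAM */

static int perf_to_adistance(const struct perf *dram, const struct perf *p)
{
	/* Integer arithmetic, in the same operation order as the patch. */
	return ADISTANCE_DRAM *
	       (p->read_latency + p->write_latency) /
	       (dram->read_latency + dram->write_latency) *
	       (dram->read_bandwidth + dram->write_bandwidth) /
	       (p->read_bandwidth + p->write_bandwidth);
}
```

For example, a node with twice the latency and half the bandwidth of the DRAM reference lands at four times the DRAM abstract distance, which is why such a node ends up in a lower tier and becomes a demotion target.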
Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info

From: Jonathan Cameron
Date: 2024-04-05 14:02 UTC
To: Ho-Ren (Jack) Chuang
Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john,
    Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
    Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
    Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
    linux-kernel, linux-mm, Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang

On Fri, 5 Apr 2024 00:07:06 +0000
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:

> The current implementation treats emulated memory devices, such as
> CXL1.1 type3 memory, as normal DRAM when they are emulated as normal memory
> (E820_TYPE_RAM). However, these emulated devices have different
> characteristics than traditional DRAM, making it important to
> distinguish them. Thus, we modify the tiered memory initialization process
> to introduce a delay specifically for CPUless NUMA nodes. This delay
> ensures that the memory tier initialization for these nodes is deferred
> until HMAT information is obtained during the boot process. Finally,
> demotion tables are recalculated at the end.
>
> * late_initcall(memory_tier_late_init);
> Some device drivers may have initialized memory tiers between
> `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing
> online memory nodes and configuring memory tiers. They should be excluded
> in the late init.
>
> * Handle cases where there is no HMAT when creating memory tiers
> There is a scenario where a CPUless node does not provide HMAT information.
> If no HMAT is specified, it falls back to using the default DRAM tier.
>
> * Introduce another new lock `default_dram_perf_lock` for adist calculation
> In the current implementation, iterating through CPUlist nodes requires
> holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up
> trying to acquire the same lock, leading to a potential deadlock.
> Therefore, we propose introducing a standalone `default_dram_perf_lock` to
> protect `default_dram_perf_*`. This approach not only avoids deadlock
> but also prevents holding a large lock simultaneously.
>
> * Upgrade `set_node_memory_tier` to support additional cases, including
> default DRAM, late CPUless, and hot-plugged initializations.
> To cover hot-plugged memory nodes, `mt_calc_adistance()` and
> `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to
> handle cases where memtype is not initialized and where HMAT information is
> available.
>
> * Introduce `default_memory_types` for those memory types that are not
> initialized by device drivers.
> Because late initialized memory and default DRAM memory need to be managed,
> a default memory type is created for storing all memory types that are
> not initialized by device drivers and as a fallback.
>
> Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
> Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

Hi - one remaining question. Why can't we delay init for all nodes to
either drivers or your fallback late_initcall code? It would be nice to
reduce possible code paths.

Jonathan

> ---
>  mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------
>  1 file changed, 70 insertions(+), 24 deletions(-)
>
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index 516b144fd45a..6632102bd5c9 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c

> @@ -855,7 +892,8 @@ static int __init memory_tier_init(void)
>  	 * For now we can have 4 faster memory tiers with smaller adistance
>  	 * than default DRAM tier.
>  	 */
> -	default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM);
> +	default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM,
> +						      &default_memory_types);
>  	if (IS_ERR(default_dram_type))
>  		panic("%s() failed to allocate default DRAM tier\n", __func__);
>
> @@ -865,6 +903,14 @@ static int __init memory_tier_init(void)
>  	 * types assigned.
>  	 */
>  	for_each_node_state(node, N_MEMORY) {
> +		if (!node_state(node, N_CPU))
> +			/*
> +			 * Defer memory tier initialization on
> +			 * CPUless numa nodes. These will be initialized
> +			 * after firmware and devices are initialized.

Could the comment also say why we can't defer them all?

(In an odd coincidence we have a similar issue for some CPU hotplug
related bring up where review feedback was move all cases later).

> +			 */
> +			continue;
> +
>  		memtier = set_node_memory_tier(node);
>  		if (IS_ERR(memtier))
>  			/*
* Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info 2024-04-05 14:02 ` Jonathan Cameron @ 2024-04-05 22:43 ` Ho-Ren (Jack) Chuang 2024-04-09 16:12 ` Jonathan Cameron 2024-04-10 2:30 ` Huang, Ying 0 siblings, 2 replies; 16+ messages in thread From: Ho-Ren (Jack) Chuang @ 2024-04-05 22:43 UTC (permalink / raw) To: Jonathan Cameron Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc, Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams, Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl, linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang, Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron <Jonathan.Cameron@huawei.com> wrote: > > On Fri, 5 Apr 2024 00:07:06 +0000 > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > The current implementation treats emulated memory devices, such as > > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal memory > > (E820_TYPE_RAM). However, these emulated devices have different > > characteristics than traditional DRAM, making it important to > > distinguish them. Thus, we modify the tiered memory initialization process > > to introduce a delay specifically for CPUless NUMA nodes. This delay > > ensures that the memory tier initialization for these nodes is deferred > > until HMAT information is obtained during the boot process. Finally, > > demotion tables are recalculated at the end. > > > > * late_initcall(memory_tier_late_init); > > Some device drivers may have initialized memory tiers between > > `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing > > online memory nodes and configuring memory tiers. They should be excluded > > in the late init. > > > > * Handle cases where there is no HMAT when creating memory tiers > > There is a scenario where a CPUless node does not provide HMAT information. 
> > If no HMAT is specified, it falls back to using the default DRAM tier. > > > > * Introduce another new lock `default_dram_perf_lock` for adist calculation > > In the current implementation, iterating through CPUlist nodes requires > > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up > > trying to acquire the same lock, leading to a potential deadlock. > > Therefore, we propose introducing a standalone `default_dram_perf_lock` to > > protect `default_dram_perf_*`. This approach not only avoids deadlock > > but also prevents holding a large lock simultaneously. > > > > * Upgrade `set_node_memory_tier` to support additional cases, including > > default DRAM, late CPUless, and hot-plugged initializations. > > To cover hot-plugged memory nodes, `mt_calc_adistance()` and > > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to > > handle cases where memtype is not initialized and where HMAT information is > > available. > > > > * Introduce `default_memory_types` for those memory types that are not > > initialized by device drivers. > > Because late initialized memory and default DRAM memory need to be managed, > > a default memory type is created for storing all memory types that are > > not initialized by device drivers and as a fallback. > > > > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> > > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com> > > Reviewed-by: "Huang, Ying" <ying.huang@intel.com> > > Hi - one remaining question. Why can't we delay init for all nodes > to either drivers or your fallback late_initcall code. > It would be nice to reduce possible code paths. I try not to change too much of the existing code structure in this patchset. To me, postponing/moving all memory tier registrations to late_initcall() is another possible action item for the next patchset. After memory_tier_init(), hmat_init() is called, which requires registering `default_dram_type` info.
This is when `default_dram_type` is needed. However, it is indeed possible to postpone the latter part, set_node_memory_tier(), to `late_initcall()`. So, memory_tier_init() can indeed be split into two parts, and the latter part can be moved to late_initcall() to be processed together. Doing this, all memory-type drivers would have to call late_initcall() to register their memory tiers. I'm not sure how many of them there are. What do you guys think? > > Jonathan > > > > --- > > mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------ > > 1 file changed, 70 insertions(+), 24 deletions(-) > > > > diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c > > index 516b144fd45a..6632102bd5c9 100644 > > --- a/mm/memory-tiers.c > > +++ b/mm/memory-tiers.c > > > > > @@ -855,7 +892,8 @@ static int __init memory_tier_init(void) > > * For now we can have 4 faster memory tiers with smaller adistance > > * than default DRAM tier. > > */ > > - default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM); > > + default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM, > > + &default_memory_types); > > if (IS_ERR(default_dram_type)) > > panic("%s() failed to allocate default DRAM tier\n", __func__); > > > > @@ -865,6 +903,14 @@ static int __init memory_tier_init(void) > > * types assigned. > > */ > > for_each_node_state(node, N_MEMORY) { > > + if (!node_state(node, N_CPU)) > > + /* > > + * Defer memory tier initialization on > > + * CPUless numa nodes. These will be initialized > > + * after firmware and devices are initialized. > > Could the comment also say why we can't defer them all? > > (In an odd coincidence we have a similar issue for some CPU hotplug > related bring up where review feedback was move all cases later). > > > + */ > > + continue; > > + > > memtier = set_node_memory_tier(node); > > if (IS_ERR(memtier)) > > /* > -- Best regards, Ho-Ren (Jack) Chuang 莊賀任 ^ permalink raw reply [flat|nested] 16+ messages in thread
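[Editorial note] The ordering argument in the reply above hinges on initcall levels: memory_tier_init() must run before hmat_init(), which in turn runs before the late pass. A small sketch encoding that ordering; the level numbers follow the kernel's initcall convention (subsys = 4, device = 6, late = 7), but treating these three functions as sitting at exactly those levels is an assumption drawn from the thread, not verified against the source tree:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical table modelling the boot ordering under discussion.
 * Lower level runs earlier, mirroring the kernel's initcall tiers. */
struct initcall {
    const char *name;
    int level;
};

static const struct initcall boot_order[] = {
    { "memory_tier_init",      4 }, /* allocates default_dram_type */
    { "hmat_init",             6 }, /* consumes default_dram_type  */
    { "memory_tier_late_init", 7 }, /* deferred CPUless-node pass  */
};

/* Returns 1 if initcall `a` is known to run before initcall `b`. */
static int runs_before(const char *a, const char *b)
{
    int la = -1, lb = -1;

    for (size_t i = 0; i < sizeof(boot_order) / sizeof(boot_order[0]); i++) {
        if (strcmp(boot_order[i].name, a) == 0)
            la = boot_order[i].level;
        if (strcmp(boot_order[i].name, b) == 0)
            lb = boot_order[i].level;
    }
    return la >= 0 && lb >= 0 && la < lb;
}
```

Under this model, memory_tier_init() cannot be deferred wholesale: the `default_dram_type` it allocates is already needed by the time device-level initcalls such as hmat_init() run.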
* Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info 2024-04-05 22:43 ` Ho-Ren (Jack) Chuang @ 2024-04-09 16:12 ` Jonathan Cameron 2024-04-09 19:02 ` [External] " Ho-Ren (Jack) Chuang 2024-04-10 2:30 ` Huang, Ying 1 sibling, 1 reply; 16+ messages in thread From: Jonathan Cameron @ 2024-04-09 16:12 UTC (permalink / raw) To: Ho-Ren (Jack) Chuang Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc, Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams, Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl, linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang, Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang On Fri, 5 Apr 2024 15:43:47 -0700 "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron > <Jonathan.Cameron@huawei.com> wrote: > > > > On Fri, 5 Apr 2024 00:07:06 +0000 > > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > > > The current implementation treats emulated memory devices, such as > > > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal memory > > > (E820_TYPE_RAM). However, these emulated devices have different > > > characteristics than traditional DRAM, making it important to > > > distinguish them. Thus, we modify the tiered memory initialization process > > > to introduce a delay specifically for CPUless NUMA nodes. This delay > > > ensures that the memory tier initialization for these nodes is deferred > > > until HMAT information is obtained during the boot process. Finally, > > > demotion tables are recalculated at the end. > > > > > > * late_initcall(memory_tier_late_init); > > > Some device drivers may have initialized memory tiers between > > > `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing > > > online memory nodes and configuring memory tiers. They should be excluded > > > in the late init. 
> > > > > > * Handle cases where there is no HMAT when creating memory tiers > > > There is a scenario where a CPUless node does not provide HMAT information. > > > If no HMAT is specified, it falls back to using the default DRAM tier. > > > > > > * Introduce another new lock `default_dram_perf_lock` for adist calculation > > > In the current implementation, iterating through CPUlist nodes requires > > > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up > > > trying to acquire the same lock, leading to a potential deadlock. > > > Therefore, we propose introducing a standalone `default_dram_perf_lock` to > > > protect `default_dram_perf_*`. This approach not only avoids deadlock > > > but also prevents holding a large lock simultaneously. > > > > > > * Upgrade `set_node_memory_tier` to support additional cases, including > > > default DRAM, late CPUless, and hot-plugged initializations. > > > To cover hot-plugged memory nodes, `mt_calc_adistance()` and > > > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to > > > handle cases where memtype is not initialized and where HMAT information is > > > available. > > > > > > * Introduce `default_memory_types` for those memory types that are not > > > initialized by device drivers. > > > Because late initialized memory and default DRAM memory need to be managed, > > > a default memory type is created for storing all memory types that are > > > not initialized by device drivers and as a fallback. > > > > > > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> > > > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com> > > > Reviewed-by: "Huang, Ying" <ying.huang@intel.com> > > > > Hi - one remaining question. Why can't we delay init for all nodes > > to either drivers or your fallback late_initcall code. > > It would be nice to reduce possible code paths. > > I try not to change too much of the existing code structure in > this patchset. 
> > To me, postponing/moving all memory tier registrations to > late_initcall() is another possible action item for the next patchset. > > After tier_mem(), hmat_init() is called, which requires registering > `default_dram_type` info. This is when `default_dram_type` is needed. > However, it is indeed possible to postpone the latter part, > set_node_memory_tier(), to `late_init(). So, memory_tier_init() can > indeed be split into two parts, and the latter part can be moved to > late_initcall() to be processed together. > > Doing this all memory-type drivers have to call late_initcall() to > register a memory tier. I’m not sure how many they are? > > What do you guys think? Gut feeling - if you are going to move it for some cases, move it for all of them. Then we only have to test once ;) J > > > > > Jonathan > > > > > > > --- > > > mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------ > > > 1 file changed, 70 insertions(+), 24 deletions(-) > > > > > > diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c > > > index 516b144fd45a..6632102bd5c9 100644 > > > --- a/mm/memory-tiers.c > > > +++ b/mm/memory-tiers.c > > > > > > > > > @@ -855,7 +892,8 @@ static int __init memory_tier_init(void) > > > * For now we can have 4 faster memory tiers with smaller adistance > > > * than default DRAM tier. > > > */ > > > - default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM); > > > + default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM, > > > + &default_memory_types); > > > if (IS_ERR(default_dram_type)) > > > panic("%s() failed to allocate default DRAM tier\n", __func__); > > > > > > @@ -865,6 +903,14 @@ static int __init memory_tier_init(void) > > > * types assigned. > > > */ > > > for_each_node_state(node, N_MEMORY) { > > > + if (!node_state(node, N_CPU)) > > > + /* > > > + * Defer memory tier initialization on > > > + * CPUless numa nodes. These will be initialized > > > + * after firmware and devices are initialized. 
> > > > Could the comment also say why we can't defer them all? > > > > (In an odd coincidence we have a similar issue for some CPU hotplug > > related bring up where review feedback was move all cases later). > > > > > + */ > > > + continue; > > > + > > > memtier = set_node_memory_tier(node); > > > if (IS_ERR(memtier)) > > > /* > > > > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [External] Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info 2024-04-09 16:12 ` Jonathan Cameron @ 2024-04-09 19:02 ` Ho-Ren (Jack) Chuang 2024-04-10 16:51 ` Jonathan Cameron 0 siblings, 1 reply; 16+ messages in thread From: Ho-Ren (Jack) Chuang @ 2024-04-09 19:02 UTC (permalink / raw) To: Jonathan Cameron Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc, Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams, Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl, linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang, Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang Hi Jonathan, On Tue, Apr 9, 2024 at 9:12 AM Jonathan Cameron <Jonathan.Cameron@huawei.com> wrote: > > On Fri, 5 Apr 2024 15:43:47 -0700 > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron > > <Jonathan.Cameron@huawei.com> wrote: > > > > > > On Fri, 5 Apr 2024 00:07:06 +0000 > > > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > > > > > The current implementation treats emulated memory devices, such as > > > > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal memory > > > > (E820_TYPE_RAM). However, these emulated devices have different > > > > characteristics than traditional DRAM, making it important to > > > > distinguish them. Thus, we modify the tiered memory initialization process > > > > to introduce a delay specifically for CPUless NUMA nodes. This delay > > > > ensures that the memory tier initialization for these nodes is deferred > > > > until HMAT information is obtained during the boot process. Finally, > > > > demotion tables are recalculated at the end. 
> > > > > > > > * late_initcall(memory_tier_late_init); > > > > Some device drivers may have initialized memory tiers between > > > > `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing > > > > online memory nodes and configuring memory tiers. They should be excluded > > > > in the late init. > > > > > > > > * Handle cases where there is no HMAT when creating memory tiers > > > > There is a scenario where a CPUless node does not provide HMAT information. > > > > If no HMAT is specified, it falls back to using the default DRAM tier. > > > > > > > > * Introduce another new lock `default_dram_perf_lock` for adist calculation > > > > In the current implementation, iterating through CPUlist nodes requires > > > > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up > > > > trying to acquire the same lock, leading to a potential deadlock. > > > > Therefore, we propose introducing a standalone `default_dram_perf_lock` to > > > > protect `default_dram_perf_*`. This approach not only avoids deadlock > > > > but also prevents holding a large lock simultaneously. > > > > > > > > * Upgrade `set_node_memory_tier` to support additional cases, including > > > > default DRAM, late CPUless, and hot-plugged initializations. > > > > To cover hot-plugged memory nodes, `mt_calc_adistance()` and > > > > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to > > > > handle cases where memtype is not initialized and where HMAT information is > > > > available. > > > > > > > > * Introduce `default_memory_types` for those memory types that are not > > > > initialized by device drivers. > > > > Because late initialized memory and default DRAM memory need to be managed, > > > > a default memory type is created for storing all memory types that are > > > > not initialized by device drivers and as a fallback. 
> > > > > > > > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> > > > > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com> > > > > Reviewed-by: "Huang, Ying" <ying.huang@intel.com> > > > > > > Hi - one remaining question. Why can't we delay init for all nodes > > > to either drivers or your fallback late_initcall code. > > > It would be nice to reduce possible code paths. > > > > I try not to change too much of the existing code structure in > > this patchset. > > > > To me, postponing/moving all memory tier registrations to > > late_initcall() is another possible action item for the next patchset. > > > > After tier_mem(), hmat_init() is called, which requires registering > > `default_dram_type` info. This is when `default_dram_type` is needed. > > However, it is indeed possible to postpone the latter part, > > set_node_memory_tier(), to `late_init(). So, memory_tier_init() can > > indeed be split into two parts, and the latter part can be moved to > > late_initcall() to be processed together. > > > > Doing this all memory-type drivers have to call late_initcall() to > > register a memory tier. I’m not sure how many they are? > > > > What do you guys think? > > Gut feeling - if you are going to move it for some cases, move it for > all of them. Then we only have to test once ;) > > J Thank you for your reminder! I agree~ That's why I'm considering changing them in the next patchset because of the amount of changes. And also, this patchset already contains too many things. 
> > > > > > > > Jonathan > > > > > > > > > > --- > > > > mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------ > > > > 1 file changed, 70 insertions(+), 24 deletions(-) > > > > > > > > diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c > > > > index 516b144fd45a..6632102bd5c9 100644 > > > > --- a/mm/memory-tiers.c > > > > +++ b/mm/memory-tiers.c > > > > > > > > > > > > > @@ -855,7 +892,8 @@ static int __init memory_tier_init(void) > > > > * For now we can have 4 faster memory tiers with smaller adistance > > > > * than default DRAM tier. > > > > */ > > > > - default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM); > > > > + default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM, > > > > + &default_memory_types); > > > > if (IS_ERR(default_dram_type)) > > > > panic("%s() failed to allocate default DRAM tier\n", __func__); > > > > > > > > @@ -865,6 +903,14 @@ static int __init memory_tier_init(void) > > > > * types assigned. > > > > */ > > > > for_each_node_state(node, N_MEMORY) { > > > > + if (!node_state(node, N_CPU)) > > > > + /* > > > > + * Defer memory tier initialization on > > > > + * CPUless numa nodes. These will be initialized > > > > + * after firmware and devices are initialized. > > > > > > Could the comment also say why we can't defer them all? > > > > > > (In an odd coincidence we have a similar issue for some CPU hotplug > > > related bring up where review feedback was move all cases later). > > > > > > > + */ > > > > + continue; > > > > + > > > > memtier = set_node_memory_tier(node); > > > > if (IS_ERR(memtier)) > > > > /* > > > > > > > > -- Best regards, Ho-Ren (Jack) Chuang 莊賀任 ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [External] Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info 2024-04-09 19:02 ` [External] " Ho-Ren (Jack) Chuang @ 2024-04-10 16:51 ` Jonathan Cameron 2024-04-17 8:53 ` Ho-Ren (Jack) Chuang 0 siblings, 1 reply; 16+ messages in thread From: Jonathan Cameron @ 2024-04-10 16:51 UTC (permalink / raw) To: Ho-Ren (Jack) Chuang Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc, Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams, Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl, linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang, Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang On Tue, 9 Apr 2024 12:02:31 -0700 "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > Hi Jonathan, > > On Tue, Apr 9, 2024 at 9:12 AM Jonathan Cameron > <Jonathan.Cameron@huawei.com> wrote: > > > > On Fri, 5 Apr 2024 15:43:47 -0700 > > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > > > On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron > > > <Jonathan.Cameron@huawei.com> wrote: > > > > > > > > On Fri, 5 Apr 2024 00:07:06 +0000 > > > > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > > > > > > > The current implementation treats emulated memory devices, such as > > > > > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal memory > > > > > (E820_TYPE_RAM). However, these emulated devices have different > > > > > characteristics than traditional DRAM, making it important to > > > > > distinguish them. Thus, we modify the tiered memory initialization process > > > > > to introduce a delay specifically for CPUless NUMA nodes. This delay > > > > > ensures that the memory tier initialization for these nodes is deferred > > > > > until HMAT information is obtained during the boot process. Finally, > > > > > demotion tables are recalculated at the end. 
> > > > > > > > > > * late_initcall(memory_tier_late_init); > > > > > Some device drivers may have initialized memory tiers between > > > > > `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing > > > > > online memory nodes and configuring memory tiers. They should be excluded > > > > > in the late init. > > > > > > > > > > * Handle cases where there is no HMAT when creating memory tiers > > > > > There is a scenario where a CPUless node does not provide HMAT information. > > > > > If no HMAT is specified, it falls back to using the default DRAM tier. > > > > > > > > > > * Introduce another new lock `default_dram_perf_lock` for adist calculation > > > > > In the current implementation, iterating through CPUlist nodes requires > > > > > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up > > > > > trying to acquire the same lock, leading to a potential deadlock. > > > > > Therefore, we propose introducing a standalone `default_dram_perf_lock` to > > > > > protect `default_dram_perf_*`. This approach not only avoids deadlock > > > > > but also prevents holding a large lock simultaneously. > > > > > > > > > > * Upgrade `set_node_memory_tier` to support additional cases, including > > > > > default DRAM, late CPUless, and hot-plugged initializations. > > > > > To cover hot-plugged memory nodes, `mt_calc_adistance()` and > > > > > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to > > > > > handle cases where memtype is not initialized and where HMAT information is > > > > > available. > > > > > > > > > > * Introduce `default_memory_types` for those memory types that are not > > > > > initialized by device drivers. > > > > > Because late initialized memory and default DRAM memory need to be managed, > > > > > a default memory type is created for storing all memory types that are > > > > > not initialized by device drivers and as a fallback. 
> > > > > > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> > > > > > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com> > > > > > Reviewed-by: "Huang, Ying" <ying.huang@intel.com> > > > > Hi - one remaining question. Why can't we delay init for all nodes > > > > to either drivers or your fallback late_initcall code. > > > > It would be nice to reduce possible code paths. > > > I try not to change too much of the existing code structure in > > > this patchset. > > > > > > To me, postponing/moving all memory tier registrations to > > > late_initcall() is another possible action item for the next patchset. > > > > > > After tier_mem(), hmat_init() is called, which requires registering > > > `default_dram_type` info. This is when `default_dram_type` is needed. > > > However, it is indeed possible to postpone the latter part, > > > set_node_memory_tier(), to `late_init(). So, memory_tier_init() can > > > indeed be split into two parts, and the latter part can be moved to > > > late_initcall() to be processed together. > > > > > > Doing this all memory-type drivers have to call late_initcall() to > > > register a memory tier. I'm not sure how many they are? > > > > > > What do you guys think? > > > > Gut feeling - if you are going to move it for some cases, move it for > > all of them. Then we only have to test once ;) > > > > J > > Thank you for your reminder! I agree~ That's why I'm considering > changing them in the next patchset because of the amount of changes. > And also, this patchset already contains too many things. Makes sense. (Interestingly we are reaching the same conclusion for the thread that motivated suggesting bringing them all together in the first place!) Get things working in a clean fashion, then consider moving everything to happen at the same time to simplify testing etc. Jonathan ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [External] Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info 2024-04-10 16:51 ` Jonathan Cameron @ 2024-04-17 8:53 ` Ho-Ren (Jack) Chuang 0 siblings, 0 replies; 16+ messages in thread From: Ho-Ren (Jack) Chuang @ 2024-04-17 8:53 UTC (permalink / raw) To: Jonathan Cameron Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john, Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc, Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams, Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl, linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang, Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang On Wed, Apr 10, 2024 at 9:51 AM Jonathan Cameron <Jonathan.Cameron@huawei.com> wrote: > > On Tue, 9 Apr 2024 12:02:31 -0700 > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > Hi Jonathan, > > > > On Tue, Apr 9, 2024 at 9:12 AM Jonathan Cameron > > <Jonathan.Cameron@huawei.com> wrote: > > > > > > On Fri, 5 Apr 2024 15:43:47 -0700 > > > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > > > > > On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron > > > > <Jonathan.Cameron@huawei.com> wrote: > > > > > > > > > > On Fri, 5 Apr 2024 00:07:06 +0000 > > > > > "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote: > > > > > > > > > > > The current implementation treats emulated memory devices, such as > > > > > > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal memory > > > > > > (E820_TYPE_RAM). However, these emulated devices have different > > > > > > characteristics than traditional DRAM, making it important to > > > > > > distinguish them. Thus, we modify the tiered memory initialization process > > > > > > to introduce a delay specifically for CPUless NUMA nodes. This delay > > > > > > ensures that the memory tier initialization for these nodes is deferred > > > > > > until HMAT information is obtained during the boot process. 
Finally, > > > > > > demotion tables are recalculated at the end. > > > > > > > > > > > > * late_initcall(memory_tier_late_init); > > > > > > Some device drivers may have initialized memory tiers between > > > > > > `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing > > > > > > online memory nodes and configuring memory tiers. They should be excluded > > > > > > in the late init. > > > > > > > > > > > > * Handle cases where there is no HMAT when creating memory tiers > > > > > > There is a scenario where a CPUless node does not provide HMAT information. > > > > > > If no HMAT is specified, it falls back to using the default DRAM tier. > > > > > > > > > > > > * Introduce another new lock `default_dram_perf_lock` for adist calculation > > > > > > In the current implementation, iterating through CPUlist nodes requires > > > > > > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end up > > > > > > trying to acquire the same lock, leading to a potential deadlock. > > > > > > Therefore, we propose introducing a standalone `default_dram_perf_lock` to > > > > > > protect `default_dram_perf_*`. This approach not only avoids deadlock > > > > > > but also prevents holding a large lock simultaneously. > > > > > > > > > > > > * Upgrade `set_node_memory_tier` to support additional cases, including > > > > > > default DRAM, late CPUless, and hot-plugged initializations. > > > > > > To cover hot-plugged memory nodes, `mt_calc_adistance()` and > > > > > > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to > > > > > > handle cases where memtype is not initialized and where HMAT information is > > > > > > available. > > > > > > > > > > > > * Introduce `default_memory_types` for those memory types that are not > > > > > > initialized by device drivers. 
> > > > > > Because late initialized memory and default DRAM memory need to be managed, > > > > > > a default memory type is created for storing all memory types that are > > > > > > not initialized by device drivers and as a fallback. > > > > > > > > > > > > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> > > > > > > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com> > > > > > > Reviewed-by: "Huang, Ying" <ying.huang@intel.com> > > > > > > > > > > Hi - one remaining question. Why can't we delay init for all nodes > > > > > to either drivers or your fallback late_initcall code. > > > > > It would be nice to reduce possible code paths. > > > > > > > > I try not to change too much of the existing code structure in > > > > this patchset. > > > > > > > > To me, postponing/moving all memory tier registrations to > > > > late_initcall() is another possible action item for the next patchset. > > > > > > > > After tier_mem(), hmat_init() is called, which requires registering > > > > `default_dram_type` info. This is when `default_dram_type` is needed. > > > > However, it is indeed possible to postpone the latter part, > > > > set_node_memory_tier(), to `late_init(). So, memory_tier_init() can > > > > indeed be split into two parts, and the latter part can be moved to > > > > late_initcall() to be processed together. > > > > > > > > Doing this all memory-type drivers have to call late_initcall() to > > > > register a memory tier. I’m not sure how many they are? > > > > > > > > What do you guys think? > > > > > > Gut feeling - if you are going to move it for some cases, move it for > > > all of them. Then we only have to test once ;) > > > > > > J > > > > Thank you for your reminder! I agree~ That's why I'm considering > > changing them in the next patchset because of the amount of changes. > > And also, this patchset already contains too many things. > > Makes sense. 
(Interestingly we are reaching the same conclusion > for the thread that motivated suggesting bringing them all together > in the first place!) > > Get things work in a clean fashion, then consider moving everything to > happen at the same time to simplify testing etc. Hi Jonathan, Thank you and I will do! Could you please take another look and see if there are any further changes needed for this patchset? If everything looks good to you, could you please also provide a 'Reviewed-by' for this patch? Per discussion, I'm going to prepare another patchset "memory tier initialization path optimization" and will send it out once ready. > > Jonathan -- Best regards, Ho-Ren (Jack) Chuang 莊賀任 ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info
  From: Huang, Ying @ 2024-04-10  2:30 UTC
  To: Ho-Ren (Jack) Chuang
  Cc: Jonathan Cameron, Gregory Price, aneesh.kumar, mhocko, tj, john,
      Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
      Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
      Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
      linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang,
      Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang

"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> writes:

> On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
>>
>> On Fri, 5 Apr 2024 00:07:06 +0000
>> "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:
>>
>> > The current implementation treats emulated memory devices, such as
>> > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal
>> > memory (E820_TYPE_RAM). However, these emulated devices have different
>> > characteristics than traditional DRAM, making it important to
>> > distinguish them. Thus, we modify the tiered memory initialization
>> > process to introduce a delay specifically for CPUless NUMA nodes. This
>> > delay ensures that the memory tier initialization for these nodes is
>> > deferred until HMAT information is obtained during the boot process.
>> > Finally, demotion tables are recalculated at the end.
>> >
>> > * late_initcall(memory_tier_late_init);
>> > Some device drivers may have initialized memory tiers between
>> > `memory_tier_init()` and `memory_tier_late_init()`, potentially
>> > bringing online memory nodes and configuring memory tiers. They should
>> > be excluded in the late init.
>> >
>> > * Handle cases where there is no HMAT when creating memory tiers
>> > There is a scenario where a CPUless node does not provide HMAT
>> > information. If no HMAT is specified, it falls back to using the
>> > default DRAM tier.
>> >
>> > * Introduce another new lock `default_dram_perf_lock` for adist
>> > calculation
>> > In the current implementation, iterating through CPUlist nodes requires
>> > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end
>> > up trying to acquire the same lock, leading to a potential deadlock.
>> > Therefore, we propose introducing a standalone `default_dram_perf_lock`
>> > to protect `default_dram_perf_*`. This approach not only avoids
>> > deadlock but also prevents holding a large lock simultaneously.
>> >
>> > * Upgrade `set_node_memory_tier` to support additional cases, including
>> > default DRAM, late CPUless, and hot-plugged initializations.
>> > To cover hot-plugged memory nodes, `mt_calc_adistance()` and
>> > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()`
>> > to handle cases where memtype is not initialized and where HMAT
>> > information is available.
>> >
>> > * Introduce `default_memory_types` for those memory types that are not
>> > initialized by device drivers.
>> > Because late initialized memory and default DRAM memory need to be
>> > managed, a default memory type is created for storing all memory types
>> > that are not initialized by device drivers and as a fallback.
>> >
>> > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
>> > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
>> > Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
>>
>> Hi - one remaining question. Why can't we delay init for all nodes
>> to either drivers or your fallback late_initcall code.
>> It would be nice to reduce possible code paths.
>
> I try not to change too much of the existing code structure in
> this patchset.
>
> To me, postponing/moving all memory tier registrations to
> late_initcall() is another possible action item for the next patchset.
>
> After tier_mem(), hmat_init() is called, which requires registering
> `default_dram_type` info. This is when `default_dram_type` is needed.
> However, it is indeed possible to postpone the latter part,
> set_node_memory_tier(), to `late_initcall()`. So, memory_tier_init()
> can indeed be split into two parts, and the latter part can be moved
> to late_initcall() to be processed together.

I don't think that it's good to move all memory_tier initialization in
drivers to late_initcall(). It's natural to keep them in
device_initcall() level.

If so, we can allocate default_dram_type in memory_tier_init(), and call
set_node_memory_tier() only in memory_tier_lateinit(). We can call
memory_tier_lateinit() in device_initcall() level too.

--
Best Regards,
Huang, Ying

> Doing this, all memory-type drivers would have to call late_initcall()
> to register a memory tier. I'm not sure how many there are?
>
> What do you guys think?
>
>>
>> Jonathan
>>
>> > ---
>> >  mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------
>> >  1 file changed, 70 insertions(+), 24 deletions(-)
>> >
>> > diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
>> > index 516b144fd45a..6632102bd5c9 100644
>> > --- a/mm/memory-tiers.c
>> > +++ b/mm/memory-tiers.c
>>
>> > @@ -855,7 +892,8 @@ static int __init memory_tier_init(void)
>> >  	 * For now we can have 4 faster memory tiers with smaller adistance
>> >  	 * than default DRAM tier.
>> >  	 */
>> > -	default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM);
>> > +	default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM,
>> > +						      &default_memory_types);
>> >  	if (IS_ERR(default_dram_type))
>> >  		panic("%s() failed to allocate default DRAM tier\n", __func__);
>> >
>> > @@ -865,6 +903,14 @@ static int __init memory_tier_init(void)
>> >  	 * types assigned.
>> >  	 */
>> >  	for_each_node_state(node, N_MEMORY) {
>> > +		if (!node_state(node, N_CPU))
>> > +			/*
>> > +			 * Defer memory tier initialization on
>> > +			 * CPUless numa nodes. These will be initialized
>> > +			 * after firmware and devices are initialized.
>>
>> Could the comment also say why we can't defer them all?
>>
>> (In an odd coincidence we have a similar issue for some CPU hotplug
>> related bring up where review feedback was move all cases later).
>>
>> > +			 */
>> > +			continue;
>> > +
>> >  		memtier = set_node_memory_tier(node);
>> >  		if (IS_ERR(memtier))
>> >  			/*
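The two-phase scheme under review — boot-time tier assignment for nodes with CPUs, deferred assignment for CPUless nodes — can be sketched as a small userspace model. Everything below (`struct fake_node`, the tier constants, `early_tier_init()`/`late_tier_init()`) is illustrative, not the kernel API; it only mirrors the control flow of `memory_tier_init()` and `memory_tier_late_init()`.

```c
#include <assert.h>
#include <stdbool.h>

enum { TIER_UNSET = -1, TIER_DRAM = 0, TIER_FALLBACK = 1 };

struct fake_node {
	bool has_memory;
	bool has_cpu;
	int  tier;	/* TIER_UNSET until assigned */
};

/* Boot-time pass (modeled on memory_tier_init()): only nodes with
 * CPUs get the default DRAM tier; CPUless nodes are deferred. */
static void early_tier_init(struct fake_node *nodes, int n)
{
	for (int i = 0; i < n; i++) {
		if (!nodes[i].has_memory || !nodes[i].has_cpu)
			continue;	/* defer CPUless NUMA nodes */
		nodes[i].tier = TIER_DRAM;
	}
}

/* Late pass (modeled on memory_tier_late_init()): pick up remaining
 * unassigned nodes, skipping any node whose tier a device driver
 * already configured between the two passes. */
static void late_tier_init(struct fake_node *nodes, int n)
{
	for (int i = 0; i < n; i++) {
		if (!nodes[i].has_memory || nodes[i].tier != TIER_UNSET)
			continue;	/* already initialized: exclude */
		nodes[i].tier = TIER_FALLBACK;
	}
}
```

The `tier != TIER_UNSET` check is the userspace stand-in for excluding driver-initialized nodes in the late init, as the commit message describes.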
* Re: [External] Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info
  From: Ho-Ren (Jack) Chuang @ 2024-04-10  5:55 UTC
  To: Huang, Ying
  Cc: Jonathan Cameron, Gregory Price, aneesh.kumar, mhocko, tj, john,
      Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
      Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
      Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
      linux-kernel, Linux Memory Management List, Ho-Ren (Jack) Chuang,
      Ho-Ren (Jack) Chuang, qemu-devel, Hao Xiang

On Tue, Apr 9, 2024 at 7:33 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> writes:
>
> > On Fri, Apr 5, 2024 at 7:03 AM Jonathan Cameron
> > <Jonathan.Cameron@huawei.com> wrote:
> >>
> >> On Fri, 5 Apr 2024 00:07:06 +0000
> >> "Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:
> >>
> >> > The current implementation treats emulated memory devices, such as
> >> > CXL1.1 type3 memory, as normal DRAM when they are emulated as normal
> >> > memory (E820_TYPE_RAM). However, these emulated devices have different
> >> > characteristics than traditional DRAM, making it important to
> >> > distinguish them. Thus, we modify the tiered memory initialization
> >> > process to introduce a delay specifically for CPUless NUMA nodes. This
> >> > delay ensures that the memory tier initialization for these nodes is
> >> > deferred until HMAT information is obtained during the boot process.
> >> > Finally, demotion tables are recalculated at the end.
> >> >
> >> > * late_initcall(memory_tier_late_init);
> >> > Some device drivers may have initialized memory tiers between
> >> > `memory_tier_init()` and `memory_tier_late_init()`, potentially
> >> > bringing online memory nodes and configuring memory tiers. They should
> >> > be excluded in the late init.
> >> >
> >> > * Handle cases where there is no HMAT when creating memory tiers
> >> > There is a scenario where a CPUless node does not provide HMAT
> >> > information. If no HMAT is specified, it falls back to using the
> >> > default DRAM tier.
> >> >
> >> > * Introduce another new lock `default_dram_perf_lock` for adist
> >> > calculation
> >> > In the current implementation, iterating through CPUlist nodes requires
> >> > holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end
> >> > up trying to acquire the same lock, leading to a potential deadlock.
> >> > Therefore, we propose introducing a standalone `default_dram_perf_lock`
> >> > to protect `default_dram_perf_*`. This approach not only avoids
> >> > deadlock but also prevents holding a large lock simultaneously.
> >> >
> >> > * Upgrade `set_node_memory_tier` to support additional cases, including
> >> > default DRAM, late CPUless, and hot-plugged initializations.
> >> > To cover hot-plugged memory nodes, `mt_calc_adistance()` and
> >> > `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()`
> >> > to handle cases where memtype is not initialized and where HMAT
> >> > information is available.
> >> >
> >> > * Introduce `default_memory_types` for those memory types that are not
> >> > initialized by device drivers.
> >> > Because late initialized memory and default DRAM memory need to be
> >> > managed, a default memory type is created for storing all memory types
> >> > that are not initialized by device drivers and as a fallback.
> >> >
> >> > Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
> >> > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> >> > Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
> >>
> >> Hi - one remaining question. Why can't we delay init for all nodes
> >> to either drivers or your fallback late_initcall code.
> >> It would be nice to reduce possible code paths.
> >
> > I try not to change too much of the existing code structure in
> > this patchset.
> >
> > To me, postponing/moving all memory tier registrations to
> > late_initcall() is another possible action item for the next patchset.
> >
> > After tier_mem(), hmat_init() is called, which requires registering
> > `default_dram_type` info. This is when `default_dram_type` is needed.
> > However, it is indeed possible to postpone the latter part,
> > set_node_memory_tier(), to `late_initcall()`. So, memory_tier_init()
> > can indeed be split into two parts, and the latter part can be moved
> > to late_initcall() to be processed together.
>
> I don't think that it's good to move all memory_tier initialization in
> drivers to late_initcall(). It's natural to keep them in
> device_initcall() level.
>
> If so, we can allocate default_dram_type in memory_tier_init(), and call
> set_node_memory_tier() only in memory_tier_lateinit(). We can call
> memory_tier_lateinit() in device_initcall() level too.
>

It makes sense to me to leave only `default_dram_type` and hotplug_init()
in memory_tier_init(), postponing all set_node_memory_tier() calls to
memory_tier_late_init().

Would it be possible that there is no device_initcall() calling
memory_tier_late_init()? If yes, I think putting memory_tier_late_init()
in late_initcall() is still necessary.

> --
> Best Regards,
> Huang, Ying
>
> > Doing this, all memory-type drivers would have to call late_initcall()
> > to register a memory tier. I'm not sure how many there are?
> >
> > What do you guys think?
> >
> >>
> >> Jonathan
> >>
> >> > ---
> >> >  mm/memory-tiers.c | 94 +++++++++++++++++++++++++++++++++++------------
> >> >  1 file changed, 70 insertions(+), 24 deletions(-)
> >> >
> >> > diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> >> > index 516b144fd45a..6632102bd5c9 100644
> >> > --- a/mm/memory-tiers.c
> >> > +++ b/mm/memory-tiers.c
> >>
> >> > @@ -855,7 +892,8 @@ static int __init memory_tier_init(void)
> >> >  	 * For now we can have 4 faster memory tiers with smaller adistance
> >> >  	 * than default DRAM tier.
> >> >  	 */
> >> > -	default_dram_type = alloc_memory_type(MEMTIER_ADISTANCE_DRAM);
> >> > +	default_dram_type = mt_find_alloc_memory_type(MEMTIER_ADISTANCE_DRAM,
> >> > +						      &default_memory_types);
> >> >  	if (IS_ERR(default_dram_type))
> >> >  		panic("%s() failed to allocate default DRAM tier\n", __func__);
> >> >
> >> > @@ -865,6 +903,14 @@ static int __init memory_tier_init(void)
> >> >  	 * types assigned.
> >> >  	 */
> >> >  	for_each_node_state(node, N_MEMORY) {
> >> > +		if (!node_state(node, N_CPU))
> >> > +			/*
> >> > +			 * Defer memory tier initialization on
> >> > +			 * CPUless numa nodes. These will be initialized
> >> > +			 * after firmware and devices are initialized.
> >>
> >> Could the comment also say why we can't defer them all?
> >>
> >> (In an odd coincidence we have a similar issue for some CPU hotplug
> >> related bring up where review feedback was move all cases later).
> >>
> >> > +			 */
> >> > +			continue;
> >> > +
> >> >  		memtier = set_node_memory_tier(node);
> >> >  		if (IS_ERR(memtier))
> >> >  			/*

-- 
Best regards,
Ho-Ren (Jack) Chuang
莊賀任
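Ho-Ren's open question — what happens if no driver at device_initcall() level ever invokes memory_tier_late_init() — is commonly answered by making the late pass idempotent, so it can be hooked at device_initcall() level and again as a late_initcall() fallback. A minimal sketch of such a guard (the flag and run counter are hypothetical, not from the patch):

```c
#include <assert.h>
#include <stdbool.h>

static bool memory_tier_late_done;
static int  late_init_runs;	/* counts how many times the body executed */

/* Idempotent late pass: whichever hook fires first does the work;
 * any later invocation is a harmless no-op. */
static int memory_tier_late_init(void)
{
	if (memory_tier_late_done)
		return 0;
	memory_tier_late_done = true;
	late_init_runs++;
	/* ...would call set_node_memory_tier() for remaining CPUless nodes... */
	return 0;
}
```

With this shape, registering the function at both initcall levels costs nothing: the second registration simply returns early.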
* Re: [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info
  From: Jonathan Cameron @ 2024-04-19 14:01 UTC
  To: Ho-Ren (Jack) Chuang
  Cc: Huang, Ying, Gregory Price, aneesh.kumar, mhocko, tj, john,
      Eishan Mirakhur, Vinicius Tavares Petrucci, Ravis OpenSrc,
      Alistair Popple, Srinivasulu Thanneeru, SeongJae Park, Dan Williams,
      Vishal Verma, Dave Jiang, Andrew Morton, nvdimm, linux-cxl,
      linux-kernel, linux-mm, Ho-Ren (Jack) Chuang, Ho-Ren (Jack) Chuang,
      qemu-devel, Hao Xiang

On Fri, 5 Apr 2024 00:07:06 +0000
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com> wrote:

> The current implementation treats emulated memory devices, such as
> CXL1.1 type3 memory, as normal DRAM when they are emulated as normal
> memory (E820_TYPE_RAM). However, these emulated devices have different
> characteristics than traditional DRAM, making it important to
> distinguish them. Thus, we modify the tiered memory initialization
> process to introduce a delay specifically for CPUless NUMA nodes. This
> delay ensures that the memory tier initialization for these nodes is
> deferred until HMAT information is obtained during the boot process.
> Finally, demotion tables are recalculated at the end.
>
> * late_initcall(memory_tier_late_init);
> Some device drivers may have initialized memory tiers between
> `memory_tier_init()` and `memory_tier_late_init()`, potentially
> bringing online memory nodes and configuring memory tiers. They should
> be excluded in the late init.
>
> * Handle cases where there is no HMAT when creating memory tiers
> There is a scenario where a CPUless node does not provide HMAT
> information. If no HMAT is specified, it falls back to using the
> default DRAM tier.
>
> * Introduce another new lock `default_dram_perf_lock` for adist
> calculation
> In the current implementation, iterating through CPUlist nodes requires
> holding the `memory_tier_lock`. However, `mt_calc_adistance()` will end
> up trying to acquire the same lock, leading to a potential deadlock.
> Therefore, we propose introducing a standalone `default_dram_perf_lock`
> to protect `default_dram_perf_*`. This approach not only avoids
> deadlock but also prevents holding a large lock simultaneously.
>
> * Upgrade `set_node_memory_tier` to support additional cases, including
> default DRAM, late CPUless, and hot-plugged initializations.
> To cover hot-plugged memory nodes, `mt_calc_adistance()` and
> `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()`
> to handle cases where memtype is not initialized and where HMAT
> information is available.
>
> * Introduce `default_memory_types` for those memory types that are not
> initialized by device drivers.
> Because late initialized memory and default DRAM memory need to be
> managed, a default memory type is created for storing all memory types
> that are not initialized by device drivers and as a fallback.
>
> Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
> Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
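The `default_dram_perf_lock` rationale quoted above has a straightforward userspace analogue: give the default-DRAM performance fields their own narrow mutex, so the adistance calculation never tries to re-acquire the big tier lock its callers may already hold. The field name, the baseline of 512, and the scaling formula below are illustrative assumptions, not the kernel's actual definitions:

```c
#include <assert.h>
#include <pthread.h>

/* Narrow lock protecting only the default-DRAM reference performance,
 * standing in for default_dram_perf_lock. Callers holding a separate,
 * coarser lock can call calc_adistance() without self-deadlock. */
static pthread_mutex_t default_dram_perf_lock = PTHREAD_MUTEX_INITIALIZER;
static int default_dram_read_latency;	/* 0 = not recorded yet */

static void record_default_dram_perf(int read_latency)
{
	pthread_mutex_lock(&default_dram_perf_lock);
	default_dram_read_latency = read_latency;
	pthread_mutex_unlock(&default_dram_perf_lock);
}

/* Abstract distance: scale a node's latency against default DRAM's.
 * Falls back to the DRAM baseline (512 here) when no reference perf
 * exists — mirroring the no-HMAT fallback described in the patch. */
static int calc_adistance(int node_read_latency)
{
	int adist;

	pthread_mutex_lock(&default_dram_perf_lock);
	if (default_dram_read_latency)
		adist = 512 * node_read_latency / default_dram_read_latency;
	else
		adist = 512;
	pthread_mutex_unlock(&default_dram_perf_lock);
	return adist;
}
```

Because the critical section covers only the `default_dram_read_latency` reads and writes, the lock is held briefly — the "prevents holding a large lock simultaneously" point from the commit message.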
Thread overview: 16+ messages (newest: 2024-04-19 14:01 UTC)
2024-04-05  0:07 [PATCH v11 0/2] Improved Memory Tier Creation for CPUless NUMA Nodes Ho-Ren (Jack) Chuang
2024-04-05  0:07 ` [PATCH v11 1/2] memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types Ho-Ren (Jack) Chuang
2024-04-05 13:56   ` Jonathan Cameron
2024-04-09 19:00     ` [External] " Ho-Ren (Jack) Chuang
2024-04-09 21:50       ` Andrew Morton
2024-04-09 23:09         ` Ho-Ren (Jack) Chuang
2024-04-05  0:07 ` [PATCH v11 2/2] memory tier: create CPUless memory tiers after obtaining HMAT info Ho-Ren (Jack) Chuang
2024-04-05 14:02   ` Jonathan Cameron
2024-04-05 22:43     ` Ho-Ren (Jack) Chuang
2024-04-09 16:12       ` Jonathan Cameron
2024-04-09 19:02         ` [External] " Ho-Ren (Jack) Chuang
2024-04-10 16:51           ` Jonathan Cameron
2024-04-17  8:53             ` Ho-Ren (Jack) Chuang
2024-04-10  2:30   ` Huang, Ying
2024-04-10  5:55     ` [External] " Ho-Ren (Jack) Chuang
2024-04-19 14:01   ` Jonathan Cameron