From: Xin Hao <xhao@linux.alibaba.com>
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 3/5] mm/damon: Add 'damon_region' NUMA access statistics core implementation
Date: Wed, 16 Feb 2022 16:30:39 +0800
X-Mailer: git-send-email 2.31.0

After DAMON sets a PTE or PMD to none, NUMA accesses to a "damon_region"
are counted in the page fault path whenever the faulting pid matches a
pid that DAMON is monitoring.

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 include/linux/damon.h | 18 ++++++++++
 mm/damon/core.c       | 81 +++++++++++++++++++++++++++++++++++++++++---
 mm/damon/dbgfs.c      | 18 +++++++---
 mm/damon/vaddr.c      | 11 ++----
 mm/huge_memory.c      |  5 +++
 mm/memory.c           |  5 +++
 6 files changed, 122 insertions(+), 16 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 77d0937dcab5..5bf1eb92584b 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -12,12 +12,16 @@
 #include <linux/mutex.h>
 #include <linux/time64.h>
 #include <linux/types.h>
+#include <linux/pid.h>
 
 /* Minimal region size. Every damon_region is aligned by this. */
 #define DAMON_MIN_REGION	PAGE_SIZE
 /* Max priority score for DAMON-based operation schemes */
 #define DAMOS_MAX_SCORE	(99)
 
+extern struct damon_ctx **dbgfs_ctxs;
+extern int dbgfs_nr_ctxs;
+
 /* Get a random number in [l, r) */
 static inline unsigned long damon_rand(unsigned long l, unsigned long r)
 {
@@ -68,6 +72,7 @@ struct damon_region {
  * @nr_regions:		Number of monitoring target regions of this target.
  * @regions_list:	Head of the monitoring target regions of this target.
  * @list:		List head for siblings.
+ * @target_lock:	Protects regions_list of this target against races.
  *
  * Each monitoring context could have multiple targets. For example, a context
  * for virtual memory address spaces could have multiple target processes. The
@@ -80,6 +85,7 @@ struct damon_target {
 	unsigned int nr_regions;
 	struct list_head regions_list;
 	struct list_head list;
+	spinlock_t target_lock;
 };
 
 /**
@@ -503,8 +509,20 @@ int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
 #endif	/* CONFIG_DAMON */
 
 #ifdef CONFIG_DAMON_VADDR
+
+/*
+ * 't->id' should be the pointer to the relevant 'struct pid' having reference
+ * count. Caller must put the returned task, unless it is NULL.
+ */
+static inline struct task_struct *damon_get_task_struct(struct damon_target *t)
+{
+	return get_pid_task((struct pid *)t->id, PIDTYPE_PID);
+}
 bool damon_va_target_valid(void *t);
 void damon_va_set_primitives(struct damon_ctx *ctx);
+void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf);
+#else
+static inline void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf) { }
 #endif	/* CONFIG_DAMON_VADDR */
 
 #ifdef CONFIG_DAMON_PADDR

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 933ef51afa71..970fc02abeba 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -157,6 +157,7 @@ struct damon_target *damon_new_target(unsigned long id)
 	t->id = id;
 	t->nr_regions = 0;
 	INIT_LIST_HEAD(&t->regions_list);
+	spin_lock_init(&t->target_lock);
 
 	return t;
 }
@@ -792,8 +793,11 @@ static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold,
 {
 	struct damon_target *t;
 
-	damon_for_each_target(t, c)
+	damon_for_each_target(t, c) {
+		spin_lock(&t->target_lock);
 		damon_merge_regions_of(t, threshold, sz_limit);
+		spin_unlock(&t->target_lock);
+	}
 }
 
 /*
@@ -879,8 +883,11 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
 			nr_regions < ctx->max_nr_regions / 3)
 		nr_subregions = 3;
 
-	damon_for_each_target(t, ctx)
+	damon_for_each_target(t, ctx) {
+		spin_lock(&t->target_lock);
 		damon_split_regions_of(ctx, t, nr_subregions);
+		spin_unlock(&t->target_lock);
+	}
 
 	last_nr_regions = nr_regions;
 }
@@ -1000,6 +1007,74 @@ static int kdamond_wait_activation(struct damon_ctx *ctx)
 	return -EBUSY;
 }
 
+static struct damon_target *get_damon_target(struct task_struct *task)
+{
+	int i;
+	unsigned long id1, id2;
+	struct damon_target *t;
+
+	rcu_read_lock();
+	for (i = 0; i < READ_ONCE(dbgfs_nr_ctxs); i++) {
+		struct damon_ctx *ctx = rcu_dereference(dbgfs_ctxs[i]);
+
+		if (!ctx || !ctx->kdamond)
+			continue;
+		damon_for_each_target(t, ctx) {
+			struct task_struct *ts = damon_get_task_struct(t);
+
+			if (ts) {
+				id1 = (unsigned long)pid_vnr((struct pid *)t->id);
+				id2 = (unsigned long)task_pid_vnr(task);
+				put_task_struct(ts);
+				if (id1 == id2)
+					goto out;
+			}
+		}
+	}
+	t = NULL;
+out:
+	rcu_read_unlock();
+	return t;
+}
+
+static struct damon_region *get_damon_region(struct damon_target *t, unsigned long addr)
+{
+	struct damon_region *r, *next;
+
+	if (!t || !addr)
+		return NULL;
+
+	spin_lock(&t->target_lock);
+	damon_for_each_region_safe(r, next, t) {
+		if (r->ar.start <= addr && addr < r->ar.end) {
+			spin_unlock(&t->target_lock);
+			return r;
+		}
+	}
+	spin_unlock(&t->target_lock);
+
+	return NULL;
+}
+
+void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+
+	if (nr_online_nodes > 1) {
+		t = get_damon_target(current);
+		if (!t)
+			return;
+		r = get_damon_region(t, vmf->address);
+		if (r) {
+			if (page_nid == node_id)
+				r->local++;
+			else
+				r->remote++;
+		}
+	}
+}
+
 /*
  * The monitoring daemon that runs as a kernel thread
  */
@@ -1057,8 +1132,10 @@ static int kdamond_fn(void *data)
 		}
 	}
 	damon_for_each_target(t, ctx) {
+		spin_lock(&t->target_lock);
 		damon_for_each_region_safe(r, next, t)
 			damon_destroy_region(r, t);
+		spin_unlock(&t->target_lock);
 	}
 
 	if (ctx->callback.before_terminate)
diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
index 5b899601e56c..c7f4e95abc14 100644
--- a/mm/damon/dbgfs.c
+++ b/mm/damon/dbgfs.c
@@ -15,11 +15,12 @@
 #include <linux/page_idle.h>
 #include <linux/slab.h>
 
-static struct damon_ctx **dbgfs_ctxs;
-static int dbgfs_nr_ctxs;
+struct damon_ctx **dbgfs_ctxs;
+int dbgfs_nr_ctxs;
 static struct dentry **dbgfs_dirs;
 static DEFINE_MUTEX(damon_dbgfs_lock);
 
+
 /*
  * Returns non-empty string on success, negative error code otherwise.
  */
@@ -808,10 +809,18 @@ static int dbgfs_rm_context(char *name)
 		return -ENOMEM;
 	}
 
-	for (i = 0, j = 0; i < dbgfs_nr_ctxs; i++) {
+	dbgfs_nr_ctxs--;
+	/* Keep the NUMA fault path from seeing a stale context count */
+	smp_mb();
+
+	for (i = 0, j = 0; i < dbgfs_nr_ctxs + 1; i++) {
 		if (dbgfs_dirs[i] == dir) {
+			struct damon_ctx *tmp_ctx = dbgfs_ctxs[i];
+
+			rcu_assign_pointer(dbgfs_ctxs[i], NULL);
+			synchronize_rcu();
 			debugfs_remove(dbgfs_dirs[i]);
-			dbgfs_destroy_ctx(dbgfs_ctxs[i]);
+			dbgfs_destroy_ctx(tmp_ctx);
 			continue;
 		}
 		new_dirs[j] = dbgfs_dirs[i];
@@ -823,7 +832,6 @@ static int dbgfs_rm_context(char *name)
 
 	dbgfs_dirs = new_dirs;
 	dbgfs_ctxs = new_ctxs;
-	dbgfs_nr_ctxs--;
 
 	return 0;
 }
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 732b41ed134c..78b90972d171 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -22,15 +22,6 @@
 #define DAMON_MIN_REGION 1
 #endif
 
-/*
- * 't->id' should be the pointer to the relevant 'struct pid' having reference
- * count. Caller must put the returned task, unless it is NULL.
- */
-static inline struct task_struct *damon_get_task_struct(struct damon_target *t)
-{
-	return get_pid_task((struct pid *)t->id, PIDTYPE_PID);
-}
-
 /*
  * Get the mm_struct of the given target
  *
@@ -363,7 +354,9 @@ static void damon_va_update(struct damon_ctx *ctx)
 	damon_for_each_target(t, ctx) {
 		if (damon_va_three_regions(t, three_regions))
 			continue;
+		spin_lock(&t->target_lock);
 		damon_va_apply_three_regions(t, three_regions);
+		spin_unlock(&t->target_lock);
 	}
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 406a3c28c026..9cb413a8cd4a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include <linux/oom.h>
 #include <linux/numa.h>
 #include <linux/page_owner.h>
+#include <linux/damon.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -1450,6 +1451,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	page_nid = page_to_nid(page);
+
+	/* Account the NUMA accesses of processes monitored by DAMON */
+	damon_numa_fault(page_nid, numa_node_id(), vmf);
+
 	last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
 				       &flags);
diff --git a/mm/memory.c b/mm/memory.c
index c125c4969913..fb55264f36af 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -74,6 +74,7 @@
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/vmalloc.h>
+#include <linux/damon.h>
 
 #include <trace/events/kmem.h>
 
@@ -4392,6 +4393,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+
+	/* Account the NUMA accesses of processes monitored by DAMON */
+	damon_numa_fault(page_nid, numa_node_id(), vmf);
+
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	if (target_nid == NUMA_NO_NODE) {
-- 
2.27.0
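
For review purposes, the local/remote classification that damon_numa_fault()
performs can be reasoned about in isolation from the kernel plumbing. Below is
a minimal userspace sketch of the same accounting; struct region,
region_lookup(), and the addresses in main() are hypothetical stand-ins
invented for illustration, whereas the real patch walks a target's
regions_list under t->target_lock and is driven by NUMA hinting faults rather
than direct calls.

#include <stdio.h>

/* Simplified stand-in for struct damon_region: one address range plus the
 * two counters this patch adds. */
struct region {
	unsigned long start, end;	/* [start, end) */
	unsigned long local, remote;
};

/* Stand-in for get_damon_region(): find the region covering addr. */
static struct region *region_lookup(struct region *rs, int nr, unsigned long addr)
{
	int i;

	for (i = 0; i < nr; i++)
		if (rs[i].start <= addr && addr < rs[i].end)
			return &rs[i];
	return NULL;
}

/* Stand-in for damon_numa_fault(): classify one hinting fault by comparing
 * the node of the faulted page with the node of the faulting CPU. */
static void numa_fault(struct region *rs, int nr, unsigned long addr,
		       int page_nid, int cpu_nid)
{
	struct region *r = region_lookup(rs, nr, addr);

	if (!r)
		return;		/* fault outside any monitored region */
	if (page_nid == cpu_nid)
		r->local++;	/* page lives on the accessing node */
	else
		r->remote++;	/* access crossed a NUMA node boundary */
}

int main(void)
{
	struct region rs[] = { { 0x1000, 0x3000, 0, 0 } };

	numa_fault(rs, 1, 0x1800, 0, 0);	/* same node: local */
	numa_fault(rs, 1, 0x2800, 1, 0);	/* different node: remote */
	printf("local=%lu remote=%lu\n", rs[0].local, rs[0].remote);
	return 0;
}

With the counters split this way, local + remote approximates the number of
sampled NUMA accesses to a region, and the remote share shows how often the
region is touched from a node other than the one backing the page.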