linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
@ 2022-03-15 16:37 Xin Hao
  2022-03-15 16:37 ` [RFC PATCH V1 1/3] mm/damon: rename damon_evenly_split_region() Xin Hao
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Xin Hao @ 2022-03-15 16:37 UTC (permalink / raw)
  To: sj; +Cc: xhao, rongwei.wang, akpm, linux-mm, linux-kernel

The purpose of these patches is to add a CMA memory monitoring function.
In some memory-tight scenarios, monitoring the CMA memory is a good way
to find and release more memory.

These patches only provide the preliminary monitoring support.  The
reclaim part needs some fixes based on "reclaim.c" and more testing, so
I will implement it in a follow-up patch series.
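
The intended usage is via the DAMON sysfs interface, roughly as below
(an illustrative, untested sketch; the exact sysfs paths are assumptions
based on the current DAMON sysfs layout, and 'cma' is the keyword added
by patch 3):

    # cd /sys/kernel/mm/damon/admin/
    # echo 1 > kdamonds/nr_kdamonds
    # echo 1 > kdamonds/0/contexts/nr_contexts
    # echo cma > kdamonds/0/contexts/0/operations
    # echo 1 > kdamonds/0/contexts/0/targets/nr_targets
    # echo on > kdamonds/0/state

No monitoring target regions need to be written by hand; the CMA areas
are used as the initial regions (see patch 3).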


Xin Hao (3):
  mm/damon: rename damon_evenly_split_region()
  mm/damon/paddr: Move "paddr" relative func to ops-common.c file
  mm/damon/sysfs: Add CMA memory monitoring

 include/linux/damon.h |   1 +
 mm/damon/Makefile     |   2 +-
 mm/damon/ops-common.c | 286 ++++++++++++++++++++++++++++++++++++++++++
 mm/damon/ops-common.h |  18 +++
 mm/damon/paddr-cma.c  | 104 +++++++++++++++
 mm/damon/paddr.c      | 246 ------------------------------------
 mm/damon/sysfs.c      |   1 +
 mm/damon/vaddr-test.h |   6 +-
 mm/damon/vaddr.c      |  41 +-----
 9 files changed, 415 insertions(+), 290 deletions(-)
 create mode 100644 mm/damon/paddr-cma.c

--
2.27.0


* [RFC PATCH V1 1/3] mm/damon: rename damon_evenly_split_region()
  2022-03-15 16:37 [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support Xin Hao
@ 2022-03-15 16:37 ` Xin Hao
  2022-03-15 16:37 ` [RFC PATCH V1 2/3] mm/damon/paddr: Move "paddr" relative func to ops-common.c file Xin Hao
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Xin Hao @ 2022-03-15 16:37 UTC (permalink / raw)
  To: sj; +Cc: xhao, rongwei.wang, akpm, linux-mm, linux-kernel

Rename damon_va_evenly_split_region() to damon_evenly_split_region() so
that it can also be used for the physical address space, and move it to
the "ops-common.c" file.

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
---
 mm/damon/ops-common.c | 39 +++++++++++++++++++++++++++++++++++++++
 mm/damon/ops-common.h |  3 +++
 mm/damon/vaddr-test.h |  6 +++---
 mm/damon/vaddr.c      | 41 +----------------------------------------
 4 files changed, 46 insertions(+), 43 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index e346cc10d143..fd5e98005358 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -131,3 +131,42 @@ int damon_pageout_score(struct damon_ctx *c, struct damon_region *r,
 	/* Return coldness of the region */
 	return DAMOS_MAX_SCORE - hotness;
 }
+
+/*
+ * Size-evenly split a region into 'nr_pieces' small regions
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+int damon_evenly_split_region(struct damon_target *t,
+		struct damon_region *r, unsigned int nr_pieces)
+{
+	unsigned long sz_orig, sz_piece, orig_end;
+	struct damon_region *n = NULL, *next;
+	unsigned long start;
+
+	if (!r || !nr_pieces)
+		return -EINVAL;
+
+	orig_end = r->ar.end;
+	sz_orig = r->ar.end - r->ar.start;
+	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION);
+
+	if (!sz_piece)
+		return -EINVAL;
+
+	r->ar.end = r->ar.start + sz_piece;
+	next = damon_next_region(r);
+	for (start = r->ar.end; start + sz_piece <= orig_end;
+			start += sz_piece) {
+		n = damon_new_region(start, start + sz_piece);
+		if (!n)
+			return -ENOMEM;
+		damon_insert_region(n, r, next, t);
+		r = n;
+	}
+	/* complement last region for possible rounding error */
+	if (n)
+		n->ar.end = orig_end;
+
+	return 0;
+}
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index e790cb5f8fe0..fd441016a2ae 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -14,3 +14,6 @@ void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
 
 int damon_pageout_score(struct damon_ctx *c, struct damon_region *r,
 			struct damos *s);
+
+int damon_evenly_split_region(struct damon_target *t,
+		struct damon_region *r, unsigned int nr_pieces);
diff --git a/mm/damon/vaddr-test.h b/mm/damon/vaddr-test.h
index 1a55bb6c36c3..161906ab66a7 100644
--- a/mm/damon/vaddr-test.h
+++ b/mm/damon/vaddr-test.h
@@ -256,7 +256,7 @@ static void damon_test_split_evenly_fail(struct kunit *test,
 
 	damon_add_region(r, t);
 	KUNIT_EXPECT_EQ(test,
-			damon_va_evenly_split_region(t, r, nr_pieces), -EINVAL);
+			damon_evenly_split_region(t, r, nr_pieces), -EINVAL);
 	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1u);
 
 	damon_for_each_region(r, t) {
@@ -277,7 +277,7 @@ static void damon_test_split_evenly_succ(struct kunit *test,
 
 	damon_add_region(r, t);
 	KUNIT_EXPECT_EQ(test,
-			damon_va_evenly_split_region(t, r, nr_pieces), 0);
+			damon_evenly_split_region(t, r, nr_pieces), 0);
 	KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_pieces);
 
 	damon_for_each_region(r, t) {
@@ -294,7 +294,7 @@ static void damon_test_split_evenly_succ(struct kunit *test,
 
 static void damon_test_split_evenly(struct kunit *test)
 {
-	KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5),
+	KUNIT_EXPECT_EQ(test, damon_evenly_split_region(NULL, NULL, 5),
 			-EINVAL);
 
 	damon_test_split_evenly_fail(test, 0, 100, 0);
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index b2ec0aa1ff45..0870e178b1b8 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -56,45 +56,6 @@ static struct mm_struct *damon_get_mm(struct damon_target *t)
  * Functions for the initial monitoring target regions construction
  */
 
-/*
- * Size-evenly split a region into 'nr_pieces' small regions
- *
- * Returns 0 on success, or negative error code otherwise.
- */
-static int damon_va_evenly_split_region(struct damon_target *t,
-		struct damon_region *r, unsigned int nr_pieces)
-{
-	unsigned long sz_orig, sz_piece, orig_end;
-	struct damon_region *n = NULL, *next;
-	unsigned long start;
-
-	if (!r || !nr_pieces)
-		return -EINVAL;
-
-	orig_end = r->ar.end;
-	sz_orig = r->ar.end - r->ar.start;
-	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION);
-
-	if (!sz_piece)
-		return -EINVAL;
-
-	r->ar.end = r->ar.start + sz_piece;
-	next = damon_next_region(r);
-	for (start = r->ar.end; start + sz_piece <= orig_end;
-			start += sz_piece) {
-		n = damon_new_region(start, start + sz_piece);
-		if (!n)
-			return -ENOMEM;
-		damon_insert_region(n, r, next, t);
-		r = n;
-	}
-	/* complement last region for possible rounding error */
-	if (n)
-		n->ar.end = orig_end;
-
-	return 0;
-}
-
 static unsigned long sz_range(struct damon_addr_range *r)
 {
 	return r->end - r->start;
@@ -265,7 +226,7 @@ static void __damon_va_init_regions(struct damon_ctx *ctx,
 		damon_add_region(r, t);
 
 		nr_pieces = (regions[i].end - regions[i].start) / sz;
-		damon_va_evenly_split_region(t, r, nr_pieces);
+		damon_evenly_split_region(t, r, nr_pieces);
 	}
 }
 
-- 
2.27.0



* [RFC PATCH V1 2/3] mm/damon/paddr: Move "paddr" relative func to ops-common.c file
  2022-03-15 16:37 [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support Xin Hao
  2022-03-15 16:37 ` [RFC PATCH V1 1/3] mm/damon: rename damon_evenly_split_region() Xin Hao
@ 2022-03-15 16:37 ` Xin Hao
  2022-03-15 16:37 ` [RFC PATCH V1 3/3] mm/damon/sysfs: Add CMA memory monitoring Xin Hao
  2022-03-16 15:09 ` [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support David Hildenbrand
  3 siblings, 0 replies; 10+ messages in thread
From: Xin Hao @ 2022-03-15 16:37 UTC (permalink / raw)
  To: sj; +Cc: xhao, rongwei.wang, akpm, linux-mm, linux-kernel

The next patch introduces CMA monitoring support.  Since CMA is also
based on physical addresses, many functions can be shared with "paddr",
so move them to the "ops-common.c" file.

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
---
 mm/damon/ops-common.c | 247 ++++++++++++++++++++++++++++++++++++++++++
 mm/damon/ops-common.h |  15 +++
 mm/damon/paddr.c      | 246 -----------------------------------------
 3 files changed, 262 insertions(+), 246 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index fd5e98005358..0e895c0034b1 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -9,8 +9,11 @@
 #include <linux/page_idle.h>
 #include <linux/pagemap.h>
 #include <linux/rmap.h>
+#include <linux/mmu_notifier.h>
+#include <linux/swap.h>
 
 #include "ops-common.h"
+#include "../internal.h"
 
 /*
  * Get an online page for a pfn if it's in the LRU list.  Otherwise, returns
@@ -170,3 +173,247 @@ int damon_evenly_split_region(struct damon_target *t,
 
 	return 0;
 }
+
+#ifdef CONFIG_DAMON_PADDR
+
+static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.page = page,
+		.vma = vma,
+		.address = addr,
+	};
+
+	while (page_vma_mapped_walk(&pvmw)) {
+		addr = pvmw.address;
+		if (pvmw.pte)
+			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
+		else
+			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
+	}
+	return true;
+}
+
+void damon_pa_mkold(unsigned long paddr)
+{
+	struct page *page = damon_get_page(PHYS_PFN(paddr));
+	struct rmap_walk_control rwc = {
+		.rmap_one = __damon_pa_mkold,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return;
+
+	if (!page_mapped(page) || !page_rmapping(page)) {
+		set_page_idle(page);
+		goto out;
+	}
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page))
+		goto out;
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+
+out:
+	put_page(page);
+}
+
+static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
+					    struct damon_region *r)
+{
+	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
+
+	damon_pa_mkold(r->sampling_addr);
+}
+
+void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+
+	damon_for_each_target(t, ctx) {
+		damon_for_each_region(r, t)
+			__damon_pa_prepare_access_check(ctx, r);
+	}
+}
+
+struct damon_pa_access_chk_result {
+	unsigned long page_sz;
+	bool accessed;
+};
+
+static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	struct damon_pa_access_chk_result *result = arg;
+	struct page_vma_mapped_walk pvmw = {
+		.page = page,
+		.vma = vma,
+		.address = addr,
+	};
+
+	result->accessed = false;
+	result->page_sz = PAGE_SIZE;
+	while (page_vma_mapped_walk(&pvmw)) {
+		addr = pvmw.address;
+		if (pvmw.pte) {
+			result->accessed = pte_young(*pvmw.pte) ||
+				!page_is_idle(page) ||
+				mmu_notifier_test_young(vma->vm_mm, addr);
+		} else {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			result->accessed = pmd_young(*pvmw.pmd) ||
+				!page_is_idle(page) ||
+				mmu_notifier_test_young(vma->vm_mm, addr);
+			result->page_sz = ((1UL) << HPAGE_PMD_SHIFT);
+#else
+			WARN_ON_ONCE(1);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+		}
+		if (result->accessed) {
+			page_vma_mapped_walk_done(&pvmw);
+			break;
+		}
+	}
+
+	/* If accessed, stop walking */
+	return !result->accessed;
+}
+
+bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
+{
+	struct page *page = damon_get_page(PHYS_PFN(paddr));
+	struct damon_pa_access_chk_result result = {
+		.page_sz = PAGE_SIZE,
+		.accessed = false,
+	};
+	struct rmap_walk_control rwc = {
+		.arg = &result,
+		.rmap_one = __damon_pa_young,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return false;
+
+	if (!page_mapped(page) || !page_rmapping(page)) {
+		if (page_is_idle(page))
+			result.accessed = false;
+		else
+			result.accessed = true;
+		put_page(page);
+		goto out;
+	}
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page)) {
+		put_page(page);
+		return NULL;
+	}
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+
+out:
+	*page_sz = result.page_sz;
+	return result.accessed;
+}
+
+static void __damon_pa_check_access(struct damon_ctx *ctx,
+				    struct damon_region *r)
+{
+	static unsigned long last_addr;
+	static unsigned long last_page_sz = PAGE_SIZE;
+	static bool last_accessed;
+
+	/* If the region is in the last checked page, reuse the result */
+	if (ALIGN_DOWN(last_addr, last_page_sz) ==
+				ALIGN_DOWN(r->sampling_addr, last_page_sz)) {
+		if (last_accessed)
+			r->nr_accesses++;
+		return;
+	}
+
+	last_accessed = damon_pa_young(r->sampling_addr, &last_page_sz);
+	if (last_accessed)
+		r->nr_accesses++;
+
+	last_addr = r->sampling_addr;
+}
+
+unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+	unsigned int max_nr_accesses = 0;
+
+	damon_for_each_target(t, ctx) {
+		damon_for_each_region(r, t) {
+			__damon_pa_check_access(ctx, r);
+			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
+		}
+	}
+
+	return max_nr_accesses;
+}
+
+unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
+		struct damon_target *t, struct damon_region *r,
+		struct damos *scheme)
+{
+	unsigned long addr, applied;
+	LIST_HEAD(page_list);
+
+	if (scheme->action != DAMOS_PAGEOUT)
+		return 0;
+
+	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+		struct page *page = damon_get_page(PHYS_PFN(addr));
+
+		if (!page)
+			continue;
+
+		ClearPageReferenced(page);
+		test_and_clear_page_young(page);
+		if (isolate_lru_page(page)) {
+			put_page(page);
+			continue;
+		}
+		if (PageUnevictable(page)) {
+			putback_lru_page(page);
+		} else {
+			list_add(&page->lru, &page_list);
+			put_page(page);
+		}
+	}
+	applied = reclaim_pages(&page_list);
+	cond_resched();
+	return applied * PAGE_SIZE;
+}
+
+int damon_pa_scheme_score(struct damon_ctx *context,
+		struct damon_target *t, struct damon_region *r,
+		struct damos *scheme)
+{
+	switch (scheme->action) {
+	case DAMOS_PAGEOUT:
+		return damon_pageout_score(context, r, scheme);
+	default:
+		break;
+	}
+
+	return DAMOS_MAX_SCORE;
+}
+
+#endif /* CONFIG_DAMON_PADDR */
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index fd441016a2ae..bb62fd300ea9 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -17,3 +17,18 @@ int damon_pageout_score(struct damon_ctx *c, struct damon_region *r,
 
 int damon_evenly_split_region(struct damon_target *t,
 		struct damon_region *r, unsigned int nr_pieces);
+
+#ifdef CONFIG_DAMON_PADDR
+
+void damon_pa_mkold(unsigned long paddr);
+void damon_pa_prepare_access_checks(struct damon_ctx *ctx);
+bool damon_pa_young(unsigned long paddr, unsigned long *page_sz);
+unsigned int damon_pa_check_accesses(struct damon_ctx *ctx);
+unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
+		struct damon_target *t, struct damon_region *r,
+		struct damos *scheme);
+int damon_pa_scheme_score(struct damon_ctx *context,
+		struct damon_target *t, struct damon_region *r,
+		struct damos *scheme);
+
+#endif
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 7c263797a9a9..c0a87c0bde9b 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -7,255 +7,9 @@
 
 #define pr_fmt(fmt) "damon-pa: " fmt
 
-#include <linux/mmu_notifier.h>
-#include <linux/page_idle.h>
-#include <linux/pagemap.h>
-#include <linux/rmap.h>
-#include <linux/swap.h>
-
 #include "../internal.h"
 #include "ops-common.h"
 
-static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
-		unsigned long addr, void *arg)
-{
-	struct page_vma_mapped_walk pvmw = {
-		.page = page,
-		.vma = vma,
-		.address = addr,
-	};
-
-	while (page_vma_mapped_walk(&pvmw)) {
-		addr = pvmw.address;
-		if (pvmw.pte)
-			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
-		else
-			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
-	}
-	return true;
-}
-
-static void damon_pa_mkold(unsigned long paddr)
-{
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
-	struct rmap_walk_control rwc = {
-		.rmap_one = __damon_pa_mkold,
-		.anon_lock = page_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!page)
-		return;
-
-	if (!page_mapped(page) || !page_rmapping(page)) {
-		set_page_idle(page);
-		goto out;
-	}
-
-	need_lock = !PageAnon(page) || PageKsm(page);
-	if (need_lock && !trylock_page(page))
-		goto out;
-
-	rmap_walk(page, &rwc);
-
-	if (need_lock)
-		unlock_page(page);
-
-out:
-	put_page(page);
-}
-
-static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
-					    struct damon_region *r)
-{
-	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
-
-	damon_pa_mkold(r->sampling_addr);
-}
-
-static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
-{
-	struct damon_target *t;
-	struct damon_region *r;
-
-	damon_for_each_target(t, ctx) {
-		damon_for_each_region(r, t)
-			__damon_pa_prepare_access_check(ctx, r);
-	}
-}
-
-struct damon_pa_access_chk_result {
-	unsigned long page_sz;
-	bool accessed;
-};
-
-static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma,
-		unsigned long addr, void *arg)
-{
-	struct damon_pa_access_chk_result *result = arg;
-	struct page_vma_mapped_walk pvmw = {
-		.page = page,
-		.vma = vma,
-		.address = addr,
-	};
-
-	result->accessed = false;
-	result->page_sz = PAGE_SIZE;
-	while (page_vma_mapped_walk(&pvmw)) {
-		addr = pvmw.address;
-		if (pvmw.pte) {
-			result->accessed = pte_young(*pvmw.pte) ||
-				!page_is_idle(page) ||
-				mmu_notifier_test_young(vma->vm_mm, addr);
-		} else {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			result->accessed = pmd_young(*pvmw.pmd) ||
-				!page_is_idle(page) ||
-				mmu_notifier_test_young(vma->vm_mm, addr);
-			result->page_sz = ((1UL) << HPAGE_PMD_SHIFT);
-#else
-			WARN_ON_ONCE(1);
-#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
-		}
-		if (result->accessed) {
-			page_vma_mapped_walk_done(&pvmw);
-			break;
-		}
-	}
-
-	/* If accessed, stop walking */
-	return !result->accessed;
-}
-
-static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
-{
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
-	struct damon_pa_access_chk_result result = {
-		.page_sz = PAGE_SIZE,
-		.accessed = false,
-	};
-	struct rmap_walk_control rwc = {
-		.arg = &result,
-		.rmap_one = __damon_pa_young,
-		.anon_lock = page_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!page)
-		return false;
-
-	if (!page_mapped(page) || !page_rmapping(page)) {
-		if (page_is_idle(page))
-			result.accessed = false;
-		else
-			result.accessed = true;
-		put_page(page);
-		goto out;
-	}
-
-	need_lock = !PageAnon(page) || PageKsm(page);
-	if (need_lock && !trylock_page(page)) {
-		put_page(page);
-		return NULL;
-	}
-
-	rmap_walk(page, &rwc);
-
-	if (need_lock)
-		unlock_page(page);
-	put_page(page);
-
-out:
-	*page_sz = result.page_sz;
-	return result.accessed;
-}
-
-static void __damon_pa_check_access(struct damon_ctx *ctx,
-				    struct damon_region *r)
-{
-	static unsigned long last_addr;
-	static unsigned long last_page_sz = PAGE_SIZE;
-	static bool last_accessed;
-
-	/* If the region is in the last checked page, reuse the result */
-	if (ALIGN_DOWN(last_addr, last_page_sz) ==
-				ALIGN_DOWN(r->sampling_addr, last_page_sz)) {
-		if (last_accessed)
-			r->nr_accesses++;
-		return;
-	}
-
-	last_accessed = damon_pa_young(r->sampling_addr, &last_page_sz);
-	if (last_accessed)
-		r->nr_accesses++;
-
-	last_addr = r->sampling_addr;
-}
-
-static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
-{
-	struct damon_target *t;
-	struct damon_region *r;
-	unsigned int max_nr_accesses = 0;
-
-	damon_for_each_target(t, ctx) {
-		damon_for_each_region(r, t) {
-			__damon_pa_check_access(ctx, r);
-			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
-		}
-	}
-
-	return max_nr_accesses;
-}
-
-static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
-		struct damon_target *t, struct damon_region *r,
-		struct damos *scheme)
-{
-	unsigned long addr, applied;
-	LIST_HEAD(page_list);
-
-	if (scheme->action != DAMOS_PAGEOUT)
-		return 0;
-
-	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
-		struct page *page = damon_get_page(PHYS_PFN(addr));
-
-		if (!page)
-			continue;
-
-		ClearPageReferenced(page);
-		test_and_clear_page_young(page);
-		if (isolate_lru_page(page)) {
-			put_page(page);
-			continue;
-		}
-		if (PageUnevictable(page)) {
-			putback_lru_page(page);
-		} else {
-			list_add(&page->lru, &page_list);
-			put_page(page);
-		}
-	}
-	applied = reclaim_pages(&page_list);
-	cond_resched();
-	return applied * PAGE_SIZE;
-}
-
-static int damon_pa_scheme_score(struct damon_ctx *context,
-		struct damon_target *t, struct damon_region *r,
-		struct damos *scheme)
-{
-	switch (scheme->action) {
-	case DAMOS_PAGEOUT:
-		return damon_pageout_score(context, r, scheme);
-	default:
-		break;
-	}
-
-	return DAMOS_MAX_SCORE;
-}
-
 static int __init damon_pa_initcall(void)
 {
 	struct damon_operations ops = {
-- 
2.27.0



* [RFC PATCH V1 3/3] mm/damon/sysfs: Add CMA memory monitoring
  2022-03-15 16:37 [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support Xin Hao
  2022-03-15 16:37 ` [RFC PATCH V1 1/3] mm/damon: rename damon_evenly_split_region() Xin Hao
  2022-03-15 16:37 ` [RFC PATCH V1 2/3] mm/damon/paddr: Move "paddr" relative func to ops-common.c file Xin Hao
@ 2022-03-15 16:37 ` Xin Hao
  2022-03-16 15:09 ` [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support David Hildenbrand
  3 siblings, 0 replies; 10+ messages in thread
From: Xin Hao @ 2022-03-15 16:37 UTC (permalink / raw)
  To: sj; +Cc: xhao, rongwei.wang, akpm, linux-mm, linux-kernel

Users can enable CMA memory monitoring by writing the special keyword
'cma' to the 'operations' sysfs file.  DAMON then recognizes the keyword
and configures the monitoring context to run on the reserved CMA
physical address space.

Unlike other physical memory monitoring, the monitoring target regions
are set automatically.
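
For example (illustrative only; the path below assumes the existing
DAMON sysfs layout):

    # echo cma > /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/operations

Unlike with 'paddr', no monitoring target regions need to be written;
the registered CMA areas are used to build the initial regions
automatically.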

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
---
 include/linux/damon.h |   1 +
 mm/damon/Makefile     |   2 +-
 mm/damon/paddr-cma.c  | 104 ++++++++++++++++++++++++++++++++++++++++++
 mm/damon/sysfs.c      |   1 +
 4 files changed, 107 insertions(+), 1 deletion(-)
 create mode 100644 mm/damon/paddr-cma.c

diff --git a/include/linux/damon.h b/include/linux/damon.h
index f23cbfa4248d..27eaa6d6c43a 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -266,6 +266,7 @@ struct damos {
 enum damon_ops_id {
 	DAMON_OPS_VADDR,
 	DAMON_OPS_PADDR,
+	DAMON_OPS_CMA,
 	NR_DAMON_OPS,
 };
 
diff --git a/mm/damon/Makefile b/mm/damon/Makefile
index dbf7190b4144..d32048f70f6d 100644
--- a/mm/damon/Makefile
+++ b/mm/damon/Makefile
@@ -2,7 +2,7 @@
 
 obj-y				:= core.o
 obj-$(CONFIG_DAMON_VADDR)	+= ops-common.o vaddr.o
-obj-$(CONFIG_DAMON_PADDR)	+= ops-common.o paddr.o
+obj-$(CONFIG_DAMON_PADDR)	+= ops-common.o paddr.o paddr-cma.o
 obj-$(CONFIG_DAMON_SYSFS)	+= sysfs.o
 obj-$(CONFIG_DAMON_DBGFS)	+= dbgfs.o
 obj-$(CONFIG_DAMON_RECLAIM)	+= reclaim.o
diff --git a/mm/damon/paddr-cma.c b/mm/damon/paddr-cma.c
new file mode 100644
index 000000000000..ad422854c8c6
--- /dev/null
+++ b/mm/damon/paddr-cma.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DAMON Primitives for The CMA Physical Address Space
+ *
+ * Author: Xin Hao <xhao@linux.alibaba.com>
+ */
+#ifdef CONFIG_CMA
+
+#define pr_fmt(fmt) "damon-cma: " fmt
+
+#include <linux/cma.h>
+
+#include "ops-common.h"
+#include "../cma.h"
+
+static int damon_cma_area_regions(struct damon_addr_range *regions, int nr_cma_area)
+{
+	int i;
+
+	if (!nr_cma_area || !regions)
+		return -EINVAL;
+
+	for (i = 0; i < nr_cma_area; i++) {
+		phys_addr_t base = cma_get_base(&cma_areas[i]);
+
+		regions[i].start = base;
+		regions[i].end = base + cma_get_size(&cma_areas[i]);
+	}
+
+	return 0;
+}
+
+static void __damon_cma_init_regions(struct damon_ctx *ctx,
+				     struct damon_target *t)
+{
+	struct damon_target *ti;
+	struct damon_region *r;
+	struct damon_addr_range regions[MAX_CMA_AREAS];
+	unsigned long sz = 0, nr_pieces;
+	int i, tidx = 0;
+
+	if (damon_cma_area_regions(regions, cma_area_count)) {
+		damon_for_each_target(ti, ctx) {
+			if (ti == t)
+				break;
+			tidx++;
+		}
+		pr_err("Failed to get CMA regions of %dth target\n", tidx);
+		return;
+	}
+
+	for (i = 0; i < cma_area_count; i++)
+		sz += regions[i].end - regions[i].start;
+	if (ctx->min_nr_regions)
+		sz /= ctx->min_nr_regions;
+	if (sz < DAMON_MIN_REGION)
+		sz = DAMON_MIN_REGION;
+
+	/* Set the initial regions of the target, one per CMA area */
+	for (i = 0; i < cma_area_count; i++) {
+		r = damon_new_region(regions[i].start, regions[i].end);
+		if (!r) {
+			pr_err("%d'th init region creation failed\n", i);
+			return;
+		}
+		damon_add_region(r, t);
+
+		nr_pieces = (regions[i].end - regions[i].start) / sz;
+		damon_evenly_split_region(t, r, nr_pieces);
+	}
+}
+
+static void damon_cma_init(struct damon_ctx *ctx)
+{
+	struct damon_target *t;
+
+	damon_for_each_target(t, ctx) {
+		/* the user may set the target regions as they want */
+		if (!damon_nr_regions(t))
+			__damon_cma_init_regions(ctx, t);
+	}
+}
+
+static int __init damon_cma_initcall(void)
+{
+	struct damon_operations ops = {
+		.id = DAMON_OPS_CMA,
+		.init = damon_cma_init,
+		.update = NULL,
+		.prepare_access_checks = damon_pa_prepare_access_checks,
+		.check_accesses = damon_pa_check_accesses,
+		.reset_aggregated = NULL,
+		.target_valid = NULL,
+		.cleanup = NULL,
+		.apply_scheme = damon_pa_apply_scheme,
+		.get_scheme_score = damon_pa_scheme_score,
+	};
+
+	return damon_register_ops(&ops);
+}
+
+subsys_initcall(damon_cma_initcall);
+
+#endif /* CONFIG_CMA */
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index d39f74969469..8a34880cc2c4 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1761,6 +1761,7 @@ static struct kobj_type damon_sysfs_attrs_ktype = {
 static const char * const damon_sysfs_ops_strs[] = {
 	"vaddr",
 	"paddr",
+	"cma",
 };
 
 struct damon_sysfs_context {
-- 
2.27.0



* Re: [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
  2022-03-15 16:37 [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support Xin Hao
                   ` (2 preceding siblings ...)
  2022-03-15 16:37 ` [RFC PATCH V1 3/3] mm/damon/sysfs: Add CMA memory monitoring Xin Hao
@ 2022-03-16 15:09 ` David Hildenbrand
  2022-03-17  7:03   ` Xin Hao
  3 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2022-03-16 15:09 UTC (permalink / raw)
  To: Xin Hao, sj; +Cc: rongwei.wang, akpm, linux-mm, linux-kernel

On 15.03.22 17:37, Xin Hao wrote:

s/minotor/monitor/

> The purpose of these patches is to add CMA memory monitoring function.
> In some memory tight scenarios, it will be a good choice to release more
> memory by monitoring the CMA memory.

I'm sorry, but it's hard to figure out what the target use case should
be. Who will release CMA memory and how? Who will monitor that? What are
the "some memory tight scenarios"? What's the overall design goal?

-- 
Thanks,

David / dhildenb



* Re: [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
  2022-03-16 15:09 ` [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support David Hildenbrand
@ 2022-03-17  7:03   ` Xin Hao
  2022-03-17 16:42     ` David Hildenbrand
  0 siblings, 1 reply; 10+ messages in thread
From: Xin Hao @ 2022-03-17  7:03 UTC (permalink / raw)
  To: David Hildenbrand, sj
  Cc: rongwei.wang, akpm, linux-mm, linux-kernel, Baolin Wang

Hi David,

On 3/16/22 11:09 PM, David Hildenbrand wrote:
> On 15.03.22 17:37, Xin Hao wrote:
>
> s/minotor/monitor/
Thanks,  i will fix it.
>
>> The purpose of these patches is to add CMA memory monitoring function.
>> In some memory tight scenarios, it will be a good choice to release more
>> memory by monitoring the CMA memory.
> I'm sorry, but it's hard to figure out what the target use case should
> be. Who will release CMA memory and how? Who will monitor that? What are
> the "some memory tight scenarios"? What's the overall design goal?
I may not have described exactly what I meant.  My intention is to find
out how much of the reserved CMA space is actually used and which parts
are unused.  For the parts that are not used, I understand that they can
be released by cma_release().  Of course, this is just a little personal
thought that I think is helpful for saving memory.
>
-- 
Best Regards!
Xin Hao



* Re: [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
  2022-03-17  7:03   ` Xin Hao
@ 2022-03-17 16:42     ` David Hildenbrand
  2022-03-18  5:13       ` xhao
  0 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2022-03-17 16:42 UTC (permalink / raw)
  To: xhao, sj; +Cc: rongwei.wang, akpm, linux-mm, linux-kernel, Baolin Wang

On 17.03.22 08:03, Xin Hao wrote:
> Hi David,
> 
> On 3/16/22 11:09 PM, David Hildenbrand wrote:
>> On 15.03.22 17:37, Xin Hao wrote:
>>
>> s/minotor/monitor/
> Thanks,  i will fix it.
>>
>>> The purpose of these patches is to add CMA memory monitoring function.
>>> In some memory tight scenarios, it will be a good choice to release more
>>> memory by monitoring the CMA memory.
>> I'm sorry, but it's hard to figure out what the target use case should
>> be. Who will release CMA memory and how? Who will monitor that? What are
>> the "some memory tight scenarios"? What's the overall design goal?
> I may not be describing exactly what  i mean,My intention is to find out 
> how much of the reserved CMA space is actually used and which is unused,
> For those that are not used, I understand that they can be released by 
> cma_release(). Of course, This is just a little personal thought that I 
> think is helpful for saving memory.

Hm, not quite. We can place movable allocations on cma areas, to be
migrated away once required for allocations via CMA. So just looking at
the pages allocated within a CMA area doesn't really tell you what's
actually going on.

-- 
Thanks,

David / dhildenb



* Re: [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
  2022-03-17 16:42     ` David Hildenbrand
@ 2022-03-18  5:13       ` xhao
  2022-03-18  8:29         ` David Hildenbrand
  0 siblings, 1 reply; 10+ messages in thread
From: xhao @ 2022-03-18  5:13 UTC (permalink / raw)
  To: David Hildenbrand, sj
  Cc: rongwei.wang, akpm, linux-mm, linux-kernel, Baolin Wang, SeongJae Park


On 3/18/22 12:42 AM, David Hildenbrand wrote:
> On 17.03.22 08:03, Xin Hao wrote:
>> Hi David,
>>
>> On 3/16/22 11:09 PM, David Hildenbrand wrote:
>>> On 15.03.22 17:37, Xin Hao wrote:
>>>
>>> s/minotor/monitor/
>> Thanks,  i will fix it.
>>>> The purpose of these patches is to add CMA memory monitoring function.
>>>> In some memory tight scenarios, it will be a good choice to release more
>>>> memory by monitoring the CMA memory.
>>> I'm sorry, but it's hard to figure out what the target use case should
>>> be. Who will release CMA memory and how? Who will monitor that? What are
>>> the "some memory tight scenarios"? What's the overall design goal?
>> I may not be describing exactly what  i mean,My intention is to find out
>> how much of the reserved CMA space is actually used and which is unused,
>> For those that are not used, I understand that they can be released by
>> cma_release(). Of course, This is just a little personal thought that I
>> think is helpful for saving memory.
> Hm, not quite. We can place movable allocations on cma areas, to be
> migrated away once required for allocations via CMA. So just looking at
> the pages allocated within a CMA area doesn't really tell you what's
> actually going on.

I don't think so.  DAMON is not looking at the page allocations; it is
constantly monitoring who is using the CMA area pages by tracking the
page access bits in the kernel via the kdamond.x threads.  So through
DAMON, it can tell us about the hot and cold distribution of the CMA
memory.

--cc SeongJae Park <sj@kernel.org>

For more about DAMON, you can refer to this link:
https://sjp38.github.io/post/damon/

>
-- 
Best Regards!
Xin Hao



* Re: [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
  2022-03-18  5:13       ` xhao
@ 2022-03-18  8:29         ` David Hildenbrand
  2022-03-18  8:40           ` sj
  0 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2022-03-18  8:29 UTC (permalink / raw)
  To: xhao, sj; +Cc: rongwei.wang, akpm, linux-mm, linux-kernel, Baolin Wang

On 18.03.22 06:13, xhao@linux.alibaba.com wrote:
> 
> On 3/18/22 12:42 AM, David Hildenbrand wrote:
>> On 17.03.22 08:03, Xin Hao wrote:
>>> Hi David,
>>>
>>> On 3/16/22 11:09 PM, David Hildenbrand wrote:
>>>> On 15.03.22 17:37, Xin Hao wrote:
>>>>
>>>> s/minotor/monitor/
>>> Thanks,  i will fix it.
>>>>> The purpose of these patches is to add CMA memory monitoring function.
>>>>> In some memory tight scenarios, it will be a good choice to release more
>>>>> memory by monitoring the CMA memory.
>>>> I'm sorry, but it's hard to figure out what the target use case should
>>>> be. Who will release CMA memory and how? Who will monitor that? What are
>>>> the "some memory tight scenarios"? What's the overall design goal?
>>> I may not be describing exactly what  i mean,My intention is to find out
>>> how much of the reserved CMA space is actually used and which is unused,
>>> For those that are not used, I understand that they can be released by
>>> cma_release(). Of course, This is just a little personal thought that I
>>> think is helpful for saving memory.
>> Hm, not quite. We can place movable allocations on cma areas, to be
>> migrated away once required for allocations via CMA. So just looking at
>> the pages allocated within a CMA area doesn't really tell you what's
>> actually going on.
> 
> I don't think so,  the damon not looking at the pages allocate, It is 
> constantly monitoring who is using CMA area pages through tracking page 
> access bit
> 
> in the kernel via the kdamond.x thread, So through damon, it can tell us 
> about  the hot and cold distribution of CMA memory.

I'm not sure I follow. With random movable pages being placed on the CMA
area, the mentioned use case of "cma_release()" to release pages doesn't
make sense to me.

I assume I'm missing the big picture -- and that should be properly
documented in the patch description. We don't add stuff just because it
could be used somehow, there should be a clear motivation how it can
actually be used.

-- 
Thanks,

David / dhildenb



* Re: [RFC PATCH V1 0/3] mm/damon: Add CMA minotor support
  2022-03-18  8:29         ` David Hildenbrand
@ 2022-03-18  8:40           ` sj
  0 siblings, 0 replies; 10+ messages in thread
From: sj @ 2022-03-18  8:40 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: xhao, sj, rongwei.wang, akpm, linux-mm, linux-kernel, Baolin Wang

On Fri, 18 Mar 2022 09:29:20 +0100 David Hildenbrand <david@redhat.com> wrote:

> On 18.03.22 06:13, xhao@linux.alibaba.com wrote:
> > 
> > On 3/18/22 12:42 AM, David Hildenbrand wrote:
> >> On 17.03.22 08:03, Xin Hao wrote:
> >>> Hi David,
> >>>
> >>> On 3/16/22 11:09 PM, David Hildenbrand wrote:
> >>>> On 15.03.22 17:37, Xin Hao wrote:
> >>>>
> >>>> s/minotor/monitor/
> >>> Thanks,  i will fix it.
> >>>>> The purpose of these patches is to add CMA memory monitoring function.
> >>>>> In some memory tight scenarios, it will be a good choice to release more
> >>>>> memory by monitoring the CMA memory.
> >>>> I'm sorry, but it's hard to figure out what the target use case should
> >>>> be. Who will release CMA memory and how? Who will monitor that? What are
> >>>> the "some memory tight scenarios"? What's the overall design goal?
> >>> I may not be describing exactly what  i mean,My intention is to find out
> >>> how much of the reserved CMA space is actually used and which is unused,
> >>> For those that are not used, I understand that they can be released by
> >>> cma_release(). Of course, This is just a little personal thought that I
> >>> think is helpful for saving memory.
> >> Hm, not quite. We can place movable allocations on cma areas, to be
> >> migrated away once required for allocations via CMA. So just looking at
> >> the pages allocated within a CMA area doesn't really tell you what's
> >> actually going on.
> > 
> > I don't think so,  the damon not looking at the pages allocate, It is 
> > constantly monitoring who is using CMA area pages through tracking page 
> > access bit
> > 
> > in the kernel via the kdamond.x thread, So through damon, it can tell us 
> > about  the hot and cold distribution of CMA memory.
> 
> I'm not sure I follow. With random movable pages being placed on the CMA
> area, the mentioned use case of "cma_release()" to release pages doesn't
> make sense to me.
> 
> I assume I'm missing the big picture -- and that should be properly
> documented in the patch description. We don't add stuff just because it
> could be used somehow, there should be a clear motivation how it can
> actually be used.

Same opinion from my side.  The purpose and usage of this patch are
unclear to me.  Could you please clarify further, Xin?


Thanks,
SJ

> 
> -- 
> Thanks,
> 
> David / dhildenb


