linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/3] DAMON: Implement The Data Access Pattern Awared Memory Management Rules
@ 2020-02-10 15:09 sjpark
  2020-02-10 15:09 ` [RFC PATCH 1/3] mm/madvise: Export madvise_common() to mm internal code sjpark
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: sjpark @ 2020-02-10 15:09 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, cai, colin.king, corbet, dwmw, jolsa, kirill,
	mark.rutland, mgorman, minchan, mingo, namhyung, peterz, rdunlap,
	rostedt, sj38.park, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

From: SeongJae Park <sjpark@amazon.de>

DAMON can make data access pattern aware memory management optimizations much
easier.  That said, users who want such optimizations should run DAMON, read
the monitoring results, analyze them, plan a new memory management scheme, and
apply the new scheme by themselves.  This is not too hard, but it still
requires some effort.  Such effort will indeed be necessary for some
complicated cases.

In many other cases, however, the optimizations follow a simple and common
pattern: the user just wants the system to apply an action to a memory region
of a specific size that has kept a specific access frequency for a specific
amount of time.  For example, "page out a memory region larger than 100 MiB
that has had a low access frequency for more than 10 minutes", or "use THP for
a memory region larger than 2 MiB that has had a high access frequency for
more than 2 seconds".

This RFC patchset makes DAMON able to receive and apply such simple
optimization requests.  All users need to do for such simple cases is to
specify their requests to DAMON in the form of rules.

For the actions, the current implementation supports only a few ``madvise()``
hints: ``MADV_WILLNEED``, ``MADV_COLD``, ``MADV_PAGEOUT``, ``MADV_HUGEPAGE``,
and ``MADV_NOHUGEPAGE``.
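
Each rule takes the form ``<min/max size> <min/max access frequency>
<min/max age> <action>``, as described in detail in the second patch.  Just
for illustration (the concrete numbers and units below are assumptions: sizes
in bytes, access frequencies as per-aggregation-interval access counts, and
ages in aggregation intervals of one second), the first request above could
roughly be written as::

    104857600 4294967295 0 1 600 4294967295 DAMON_MADV_PAGEOUT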


Sequence Of Patches
===================

The first patch allows DAMON to reuse the ``madvise()`` code.  The second
patch implements the data access pattern aware memory management rules and
their kernel space programming interface.  Finally, the third patch implements
a debugfs interface for administrators and privileged user space programs.

The patches are based on v5.5 plus the v4 DAMON patchset[1] and Minchan's
``madvise()`` factoring-out patch[2].  Minchan's patch is necessary for the
reuse of the ``madvise()`` code.  You can also clone the complete git tree:

    $ git clone git://github.com/sjp38/linux -b damon/rules/rfc/v1

A web view is also available:
https://github.com/sjp38/linux/releases/tag/damon/rules/rfc/v1

[1] https://lore.kernel.org/linux-mm/20200210144812.26845-1-sjpark@amazon.com/
[2] https://lore.kernel.org/linux-mm/20200128001641.5086-2-minchan@kernel.org/

SeongJae Park (3):
  mm/madvise: Export madvise_common() to mm internal code
  mm/damon/rules: Implement access pattern based management rules
  mm/damon/rules: Implement a debugfs interface

 include/linux/damon.h |  28 ++++
 mm/damon.c            | 317 +++++++++++++++++++++++++++++++++++++++++-
 mm/internal.h         |   4 +
 mm/madvise.c          |   2 +-
 4 files changed, 346 insertions(+), 5 deletions(-)

-- 
2.17.1



* [RFC PATCH 1/3] mm/madvise: Export madvise_common() to mm internal code
  2020-02-10 15:09 [RFC PATCH 0/3] DAMON: Implement The Data Access Pattern Awared Memory Management Rules sjpark
@ 2020-02-10 15:09 ` sjpark
  2020-02-10 15:09 ` [RFC PATCH 2/3] mm/damon/rules: Implement access pattern based management rules sjpark
  2020-02-10 15:09 ` [RFC PATCH 3/3] mm/damon/rules: Implement a debugfs interface sjpark
  2 siblings, 0 replies; 4+ messages in thread
From: sjpark @ 2020-02-10 15:09 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, cai, colin.king, corbet, dwmw, jolsa, kirill,
	mark.rutland, mgorman, minchan, mingo, namhyung, peterz, rdunlap,
	rostedt, sj38.park, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit exports ``madvise_common()`` to ``mm/`` code for future
reuse.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/internal.h | 4 ++++
 mm/madvise.c  | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..dcdfe00e02ff 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -576,4 +576,8 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 
 void setup_zone_pageset(struct zone *zone);
 extern struct page *alloc_new_node_page(struct page *page, unsigned long node);
+
+
+int madvise_common(struct task_struct *task, struct mm_struct *mm,
+			unsigned long start, size_t len_in, int behavior);
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/madvise.c b/mm/madvise.c
index 0c901de531e4..4bb75be7a186 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1005,7 +1005,7 @@ madvise_behavior_valid(int behavior)
  * @task could be a zombie leader if it calls sys_exit so accessing mm_struct
  * via task->mm is prohibited. Please use @mm instead of task->mm.
  */
-static int madvise_common(struct task_struct *task, struct mm_struct *mm,
+int madvise_common(struct task_struct *task, struct mm_struct *mm,
 			unsigned long start, size_t len_in, int behavior)
 {
 	unsigned long end, tmp;
-- 
2.17.1



* [RFC PATCH 2/3] mm/damon/rules: Implement access pattern based management rules
  2020-02-10 15:09 [RFC PATCH 0/3] DAMON: Implement The Data Access Pattern Awared Memory Management Rules sjpark
  2020-02-10 15:09 ` [RFC PATCH 1/3] mm/madvise: Export madvise_common() to mm internal code sjpark
@ 2020-02-10 15:09 ` sjpark
  2020-02-10 15:09 ` [RFC PATCH 3/3] mm/damon/rules: Implement a debugfs interface sjpark
  2 siblings, 0 replies; 4+ messages in thread
From: sjpark @ 2020-02-10 15:09 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, cai, colin.king, corbet, dwmw, jolsa, kirill,
	mark.rutland, mgorman, minchan, mingo, namhyung, peterz, rdunlap,
	rostedt, sj38.park, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

From: SeongJae Park <sjpark@amazon.de>

DAMON can make data access pattern aware memory management
optimizations much easier.  That said, users who want such optimizations
should run DAMON, read the monitoring results, analyze them, plan a new
memory management scheme, and apply the new scheme by themselves.  This
is not too hard, but it still requires some effort.  Such effort will
indeed be necessary for some complicated cases.

In many other cases, however, the optimizations follow a simple and
common pattern: the user just wants the system to apply an action to a
memory region of a specific size that has kept a specific access
frequency for a specific amount of time.  For example, "page out a
memory region larger than 100 MiB that has had a low access frequency
for more than 10 minutes", or "use THP for a memory region larger than
2 MiB that has had a high access frequency for more than 2 seconds".

This commit makes DAMON able to receive and handle such simple
optimization requests.  All users need to do for such simple cases is to
specify their requests to DAMON in the form of rules.

Each rule is composed of conditions for filtering the target memory
regions and the desired memory management action for the targets.
Specifically, the format is::

    <min/max size> <min/max access frequency> <min/max age> <action>

The filtering conditions are the size of the memory region, the number
of accesses to the region as monitored by DAMON, and the age of the
region.  The age of a region is incremented periodically but reset
whenever its addresses or access frequency have significantly changed.
The specifiable memory management actions are simple for now: the
current implementation supports only a few ``madvise()`` hints, namely
``MADV_WILLNEED``, ``MADV_COLD``, ``MADV_PAGEOUT``, ``MADV_HUGEPAGE``,
and ``MADV_NOHUGEPAGE``.
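
The kernel space programming interface is ``damon_set_rules()``.  The
following is a minimal sketch, not part of this patchset, of how a
kernel-side user might install the "page out large but rarely accessed
regions" rule; the threshold values and their units (sizes in bytes,
ages in aggregation intervals) are assumptions for illustration only::

    struct damon_rule *rule;
    struct damon_rule *rules[1];

    /* DAMON takes ownership; kfree()d on the next damon_set_rules() */
    rule = kmalloc(sizeof(*rule), GFP_KERNEL);
    if (!rule)
            return -ENOMEM;
    rule->min_sz_region = 100 << 20;    /* at least 100 MiB */
    rule->max_sz_region = UINT_MAX;     /* no upper size bound */
    rule->min_nr_accesses = 0;          /* rarely accessed ... */
    rule->max_nr_accesses = 1;
    rule->min_age_region = 600;         /* ... for a long time */
    rule->max_age_region = UINT_MAX;
    rule->action = DAMON_MADV_PAGEOUT;
    INIT_LIST_HEAD(&rule->list);

    rules[0] = rule;
    /* must be called while kdamond is not running */
    damon_set_rules(ctx, rules, 1);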

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 include/linux/damon.h |  28 ++++++++
 mm/damon.c            | 160 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 186 insertions(+), 2 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 78785cb88d42..bc91e945f646 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -22,6 +22,11 @@ struct damon_region {
 	unsigned long sampling_addr;
 	unsigned int nr_accesses;
 	struct list_head list;
+
+	unsigned int age;
+	unsigned long last_vm_start;
+	unsigned long last_vm_end;
+	unsigned int last_nr_accesses;
 };
 
 /* Represents a monitoring target task */
@@ -31,6 +36,26 @@ struct damon_task {
 	struct list_head list;
 };
 
+enum damon_action {
+	DAMON_MADV_WILLNEED,
+	DAMON_MADV_COLD,
+	DAMON_MADV_PAGEOUT,
+	DAMON_MADV_HUGEPAGE,
+	DAMON_MADV_NOHUGEPAGE,
+	DAMON_ACTION_LEN,
+};
+
+struct damon_rule {
+	unsigned int min_sz_region;
+	unsigned int max_sz_region;
+	unsigned int min_nr_accesses;
+	unsigned int max_nr_accesses;
+	unsigned int min_age_region;
+	unsigned int max_age_region;
+	enum damon_action action;
+	struct list_head list;
+};
+
 struct damon_ctx {
 	unsigned long sample_interval;
 	unsigned long aggr_interval;
@@ -53,6 +78,7 @@ struct damon_ctx {
 	struct rnd_state rndseed;
 
 	struct list_head tasks_list;	/* 'damon_task' objects */
+	struct list_head rules_list;	/* 'damon_rule' objects */
 
 	/* callbacks */
 	void (*sample_cb)(struct damon_ctx *context);
@@ -61,6 +87,8 @@ struct damon_ctx {
 
 int damon_set_pids(struct damon_ctx *ctx,
 			unsigned long *pids, ssize_t nr_pids);
+int damon_set_rules(struct damon_ctx *ctx,
+			struct damon_rule **rules, ssize_t nr_rules);
 int damon_set_recording(struct damon_ctx *ctx,
 			unsigned int rbuf_len, char *rfile_path);
 int damon_set_attrs(struct damon_ctx *ctx, unsigned long s, unsigned long a,
diff --git a/mm/damon.c b/mm/damon.c
index bb8eb88edaf3..5d33b5d6504b 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -11,6 +11,7 @@
 
 #define CREATE_TRACE_POINTS
 
+#include <asm-generic/mman-common.h>
 #include <linux/damon.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
@@ -24,6 +25,8 @@
 #include <linux/slab.h>
 #include <trace/events/damon.h>
 
+#include "internal.h"
+
 #define damon_get_task_struct(t) \
 	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
 
@@ -45,6 +48,12 @@
 #define damon_for_each_task_safe(ctx, t, next) \
 	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
 
+#define damon_for_each_rule(ctx, r) \
+	list_for_each_entry(r, &(ctx)->rules_list, list)
+
+#define damon_for_each_rule_safe(ctx, r, next) \
+	list_for_each_entry_safe(r, next, &(ctx)->rules_list, list)
+
 /*
  * For each 'sample_interval', DAMON checks whether each region is accessed or
  * not.  It aggregates and keeps the access information (number of accesses to
@@ -186,6 +195,27 @@ static void damon_destroy_task(struct damon_task *t)
 	damon_free_task(t);
 }
 
+static void damon_add_rule(struct damon_ctx *ctx, struct damon_rule *r)
+{
+	list_add_tail(&r->list, &ctx->rules_list);
+}
+
+static void damon_del_rule(struct damon_rule *r)
+{
+	list_del(&r->list);
+}
+
+static void damon_free_rule(struct damon_rule *r)
+{
+	kfree(r);
+}
+
+static void damon_destroy_rule(struct damon_rule *r)
+{
+	damon_del_rule(r);
+	damon_free_rule(r);
+}
+
 /*
  * Returns number of monitoring target tasks
  */
@@ -600,11 +630,120 @@ static void kdamond_flush_aggregated(struct damon_ctx *c)
 			damon_write_rbuf(c, &r->vm_end, sizeof(r->vm_end));
 			damon_write_rbuf(c, &r->nr_accesses,
 					sizeof(r->nr_accesses));
+			r->last_vm_start = r->vm_start;
+			r->last_vm_end = r->vm_end;
+			r->last_nr_accesses = r->nr_accesses;
 			r->nr_accesses = 0;
 		}
 	}
 }
 
+#define diff_of(a, b) (a > b ? a - b : b - a)
+
+/*
+ * Adjust the age of the given region
+ *
+ * Increase '->age' if '->vm_start' and '->vm_end' has not changed and
+ * '->nr_accesses' has not changed more than the merge threshold.  Else, reset
+ * it.
+ */
+static void damon_do_count_age(struct damon_region *r, unsigned int threshold)
+{
+	if (r->vm_start != r->last_vm_start || r->vm_end != r->last_vm_end)
+		r->age = 0;
+	else if (diff_of(r->nr_accesses, r->last_nr_accesses) > threshold)
+		r->age = 0;
+	else
+		r->age++;
+}
+
+static void kdamond_count_age(struct damon_ctx *c, unsigned int threshold)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	damon_for_each_task(c, t) {
+		damon_for_each_region(r, t)
+			damon_do_count_age(r, threshold);
+	}
+}
+
+static int damon_do_action(struct damon_task *task, struct damon_region *r,
+			enum damon_action action)
+{
+	struct task_struct *t;
+	struct mm_struct *mm;
+	int madv_action;
+	int ret;
+
+	switch (action) {
+	case DAMON_MADV_WILLNEED:
+		madv_action = MADV_WILLNEED;
+		break;
+	case DAMON_MADV_COLD:
+		madv_action = MADV_COLD;
+		break;
+	case DAMON_MADV_PAGEOUT:
+		madv_action = MADV_PAGEOUT;
+		break;
+	case DAMON_MADV_HUGEPAGE:
+		madv_action = MADV_HUGEPAGE;
+		break;
+	case DAMON_MADV_NOHUGEPAGE:
+		madv_action = MADV_NOHUGEPAGE;
+		break;
+	default:
+		pr_warn("Wrong action %d\n", action);
+		return -EINVAL;
+	}
+
+	t = damon_get_task_struct(task);
+	if (!t)
+		return -EINVAL;
+	mm = damon_get_mm(task);
+	if (!mm) {
+		put_task_struct(t);
+		return -EINVAL;
+	}
+
+	ret = madvise_common(t, mm, PAGE_ALIGN(r->vm_start),
+			PAGE_ALIGN(r->vm_end - r->vm_start), madv_action);
+	put_task_struct(t);
+	mmput(mm);
+	return ret;
+}
+
+static void damon_do_apply_rules(struct damon_ctx *c, struct damon_task *t,
+				struct damon_region *r)
+{
+	struct damon_rule *rule;
+	unsigned long sz;
+
+	damon_for_each_rule(c, rule) {
+		sz = r->vm_end - r->vm_start;
+		if (sz < rule->min_sz_region ||  rule->max_sz_region < sz)
+			continue;
+		if (r->nr_accesses < rule->min_nr_accesses ||
+				rule->max_nr_accesses < r->nr_accesses)
+			continue;
+		if (r->age < rule->min_age_region ||
+				rule->max_age_region < r->age)
+			continue;
+		damon_do_action(t, r, rule->action);
+	}
+}
+
+static void kdamond_apply_rules(struct damon_ctx *c)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	damon_for_each_task(c, t) {
+		damon_for_each_region(r, t)
+			damon_do_apply_rules(c, t, r);
+	}
+}
+
 #define sz_damon_region(r) (r->vm_end - r->vm_start)
 
 /*
@@ -620,8 +759,6 @@ static void damon_merge_two_regions(struct damon_region *l,
 	damon_destroy_region(r);
 }
 
-#define diff_of(a, b) (a > b ? a - b : b - a)
-
 /*
  * Merge adjacent regions having similar access frequencies
  *
@@ -865,6 +1002,8 @@ static int kdamond_fn(void *data)
 
 		if (kdamond_aggregate_interval_passed(ctx)) {
 			kdamond_merge_regions(ctx, max_nr_accesses / 10);
+			kdamond_count_age(ctx, max_nr_accesses / 10);
+			kdamond_apply_rules(ctx);
 			kdamond_flush_aggregated(ctx);
 			kdamond_split_regions(ctx);
 			if (ctx->aggregate_cb)
@@ -952,6 +1091,22 @@ static inline bool damon_is_target_pid(struct damon_ctx *c, unsigned long pid)
 	return false;
 }
 
+/*
+ * This function should not be called while the kdamond is running.
+ */
+int damon_set_rules(struct damon_ctx *ctx, struct damon_rule **rules,
+			ssize_t nr_rules)
+{
+	struct damon_rule *r, *next;
+	ssize_t i;
+
+	damon_for_each_rule_safe(ctx, r, next)
+		damon_destroy_rule(r);
+	for (i = 0; i < nr_rules; i++)
+		damon_add_rule(ctx, rules[i]);
+	return 0;
+}
+
 /*
  * This function should not be called while the kdamond is running.
  */
@@ -1372,6 +1527,7 @@ static int __init damon_init_user_ctx(void)
 
 	prandom_seed_state(&ctx->rndseed, 42);
 	INIT_LIST_HEAD(&ctx->tasks_list);
+	INIT_LIST_HEAD(&ctx->rules_list);
 
 	ctx->sample_cb = NULL;
 	ctx->aggregate_cb = NULL;
-- 
2.17.1



* [RFC PATCH 3/3] mm/damon/rules: Implement a debugfs interface
  2020-02-10 15:09 [RFC PATCH 0/3] DAMON: Implement The Data Access Pattern Awared Memory Management Rules sjpark
  2020-02-10 15:09 ` [RFC PATCH 1/3] mm/madvise: Export madvise_common() to mm internal code sjpark
  2020-02-10 15:09 ` [RFC PATCH 2/3] mm/damon/rules: Implement access pattern based management rules sjpark
@ 2020-02-10 15:09 ` sjpark
  2 siblings, 0 replies; 4+ messages in thread
From: sjpark @ 2020-02-10 15:09 UTC (permalink / raw)
  To: akpm
  Cc: SeongJae Park, acme, alexander.shishkin, amit, brendan.d.gregg,
	brendanhiggins, cai, colin.king, corbet, dwmw, jolsa, kirill,
	mark.rutland, mgorman, minchan, mingo, namhyung, peterz, rdunlap,
	rostedt, sj38.park, vdavydov.dev, linux-mm, linux-doc,
	linux-kernel

From: SeongJae Park <sjpark@amazon.de>

This commit implements a debugfs interface for DAMON's access pattern
based memory management rules.  It is supposed to be used by
administrators and privileged user space programs.  Users can read and
update the rules via the ``<debugfs>/damon/rules`` file.  The format of
each rule is::

    <min/max size> <min/max access frequency> <min/max age> <action>
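
For example, assuming debugfs is mounted on ``/sys/kernel/debug``, sizes
are given in bytes, ages in aggregation intervals, and the action as its
integer value (2 corresponds to ``DAMON_MADV_PAGEOUT`` in the current
enum order), a "page out large but rarely accessed regions" rule could
be installed and read back as below.  Note that the file can only be
written while kdamond is turned off::

    # echo "104857600 4294967295 0 1 600 4294967295 2" > /sys/kernel/debug/damon/rules
    # cat /sys/kernel/debug/damon/rules
    104857600 4294967295 0 1 600 4294967295 2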

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 157 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 155 insertions(+), 2 deletions(-)

diff --git a/mm/damon.c b/mm/damon.c
index 5d33b5d6504b..efb85bdf9400 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -195,6 +195,29 @@ static void damon_destroy_task(struct damon_task *t)
 	damon_free_task(t);
 }
 
+static struct damon_rule *damon_new_rule(
+		unsigned int min_sz_region, unsigned int max_sz_region,
+		unsigned int min_nr_accesses, unsigned int max_nr_accesses,
+		unsigned int min_age_region, unsigned int max_age_region,
+		enum damon_action action)
+{
+	struct damon_rule *ret;
+
+	ret = kmalloc(sizeof(struct damon_rule), GFP_KERNEL);
+	if (!ret)
+		return NULL;
+	ret->min_sz_region = min_sz_region;
+	ret->max_sz_region = max_sz_region;
+	ret->min_nr_accesses = min_nr_accesses;
+	ret->max_nr_accesses = max_nr_accesses;
+	ret->min_age_region = min_age_region;
+	ret->max_age_region = max_age_region;
+	ret->action = action;
+	INIT_LIST_HEAD(&ret->list);
+
+	return ret;
+}
+
 static void damon_add_rule(struct damon_ctx *ctx, struct damon_rule *r)
 {
 	list_add_tail(&r->list, &ctx->rules_list);
@@ -1266,6 +1289,130 @@ static ssize_t debugfs_monitor_on_write(struct file *file,
 	return ret;
 }
 
+static ssize_t damon_sprint_rules(struct damon_ctx *c, char *buf, ssize_t len)
+{
+	char *cursor = buf;
+	struct damon_rule *r;
+	int ret;
+
+	damon_for_each_rule(c, r) {
+		ret = snprintf(cursor, len, "%u %u %u %u %u %u %d\n",
+				r->min_sz_region, r->max_sz_region,
+				r->min_nr_accesses, r->max_nr_accesses,
+				r->min_age_region, r->max_age_region,
+				r->action);
+		cursor += ret;
+	}
+	return cursor - buf;
+}
+
+static ssize_t debugfs_rules_read(struct file *file, char __user *buf,
+		size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	ssize_t len;
+	char *rules_buf;
+
+	rules_buf = kmalloc(sizeof(char) * 1024, GFP_KERNEL);
+
+	len = damon_sprint_rules(ctx, rules_buf, 1024);
+	len = simple_read_from_buffer(buf, count, ppos, rules_buf, len);
+
+	kfree(rules_buf);
+	return len;
+}
+
+static void damon_free_rules(struct damon_rule **rules, ssize_t nr_rules)
+{
+	ssize_t i;
+
+	for (i = 0; i < nr_rules; i++)
+		kfree(rules[i]);
+	kfree(rules);
+}
+
+/*
+ * Converts a string into an array of struct damon_rule pointers
+ *
+ * Returns an array of struct damon_rule pointers that converted, or NULL
+ * otherwise.
+ */
+static struct damon_rule **str_to_rules(const char *str, ssize_t len,
+				ssize_t *nr_rules)
+{
+	struct damon_rule *rule, **rules;
+	int pos = 0, parsed, ret;
+	unsigned int min_sz, max_sz, min_nr_a, max_nr_a, min_age, max_age;
+	int action;
+
+	rules = kmalloc_array(256, sizeof(struct damon_rule *), GFP_KERNEL);
+	if (!rules)
+		return NULL;
+
+	*nr_rules = 0;
+	while (pos < len && *nr_rules < 256) {
+		ret = sscanf(&str[pos], "%u %u %u %u %u %u %d%n",
+				&min_sz, &max_sz, &min_nr_a, &max_nr_a,
+				&min_age, &max_age, &action, &parsed);
+		pos += parsed;
+		if (ret != 7)
+			break;
+		if (action >= DAMON_ACTION_LEN) {
+			pr_err("wrong action %d\n", action);
+			goto error;
+		}
+
+		rule = damon_new_rule(min_sz, max_sz, min_nr_a, max_nr_a,
+				min_age, max_age, action);
+		if (!rule)
+			goto error;
+
+		rules[*nr_rules] = rule;
+		*nr_rules += 1;
+	}
+	return rules;
+error:
+	damon_free_rules(rules, *nr_rules);
+	return NULL;
+}
+
+static ssize_t debugfs_rules_write(struct file *file, const char __user *buf,
+		size_t count, loff_t *ppos)
+{
+	struct damon_ctx *ctx = &damon_user_ctx;
+	char *rules_buf;
+	struct damon_rule **rules;
+	ssize_t nr_rules, ret;
+
+	rules_buf = kmalloc(sizeof(char) * 1024, GFP_KERNEL);
+	ret = simple_write_to_buffer(rules_buf, 1024, ppos, buf, count);
+	if (ret < 0) {
+		kfree(rules_buf);
+		return ret;
+	}
+
+	rules = str_to_rules(rules_buf, ret, &nr_rules);
+	if (!rules)
+		return -EINVAL;
+
+	spin_lock(&ctx->kdamond_lock);
+	if (ctx->kdamond)
+		goto monitor_running;
+
+	damon_set_rules(ctx, rules, nr_rules);
+	spin_unlock(&ctx->kdamond_lock);
+	kfree(rules_buf);
+	return ret;
+
+monitor_running:
+	spin_unlock(&ctx->kdamond_lock);
+	pr_err("%s: kdamond is running. Turn it off first.\n", __func__);
+	ret = -EINVAL;
+	damon_free_rules(rules, nr_rules);
+	kfree(rules_buf);
+	return ret;
+}
+
 static ssize_t damon_sprint_pids(struct damon_ctx *ctx, char *buf, ssize_t len)
 {
 	char *cursor = buf;
@@ -1468,6 +1615,12 @@ static const struct file_operations pids_fops = {
 	.write = debugfs_pids_write,
 };
 
+static const struct file_operations rules_fops = {
+	.owner = THIS_MODULE,
+	.read = debugfs_rules_read,
+	.write = debugfs_rules_write,
+};
+
 static const struct file_operations record_fops = {
 	.owner = THIS_MODULE,
 	.read = debugfs_record_read,
@@ -1484,10 +1637,10 @@ static struct dentry *debugfs_root;
 
 static int __init debugfs_init(void)
 {
-	const char * const file_names[] = {"attrs", "record",
+	const char * const file_names[] = {"attrs", "record", "rules",
 		"pids", "monitor_on"};
 	const struct file_operations *fops[] = {&attrs_fops, &record_fops,
-		&pids_fops, &monitor_on_fops};
+		&rules_fops, &pids_fops, &monitor_on_fops};
 	int i;
 
 	debugfs_root = debugfs_create_dir("damon", NULL);
-- 
2.17.1


