From: SeongJae Park
Subject: [RFC v8 3/8] mm/damon: Implement data access monitoring-based operation schemes
Date: Tue, 12 May 2020 13:53:38 +0200
Message-ID: <20200512115343.27699-4-sjpark@amazon.com>
In-Reply-To: <20200512115343.27699-1-sjpark@amazon.com>
References: <20200512115343.27699-1-sjpark@amazon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: SeongJae Park

In many cases, users might want to use DAMON for simple data access aware
memory management optimizations, such as applying an operation scheme to a
memory region of a specific size that has kept a specific access frequency
for a specific amount of time.  For example, "page out a memory region
larger than 100 MiB that has shown a low access frequency for more than 10
minutes", or "use THP for a memory region larger than 2 MiB that has shown
a high access frequency for more than 2 seconds".

To save users from spending their time implementing such simple data access
monitoring-based operation schemes themselves, this commit makes DAMON
handle such schemes directly.  Users can simply specify their desired
schemes to DAMON.  Each scheme is composed of conditions for filtering the
target memory regions and a memory management action to apply to the
regions that pass the filter.  Specifically, the filtering conditions are
the minimum and maximum size of the region, the minimum and maximum number
of accesses to the region as monitored by DAMON, and the minimum and
maximum age of the region.  The age of a region is incremented
periodically, and it is reset when the addresses or access frequency of the
region change significantly, or when the action of a scheme has been
applied to it.  For the action, the current implementation supports only a
few madvise() hints: ``MADV_WILLNEED``, ``MADV_COLD``, ``MADV_PAGEOUT``,
``MADV_HUGEPAGE``, and ``MADV_NOHUGEPAGE``.
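
For an in-kernel caller, using this boils down to filling a 'struct damos'
and passing it to damon_set_schemes().  The sketch below only illustrates
the interface added by this patch; the helper name and the threshold values
are made up for the example, and a value of 0 in a min/max field means "no
limit", as implemented by the checks in damon_do_apply_schemes()::

    #include <linux/damon.h>
    #include <linux/slab.h>

    /* Hypothetical example: "page out regions of >= 100 MiB that stayed cold". */
    static int example_add_pageout_scheme(struct damon_ctx *ctx)
    {
            struct damos *scheme;
            struct damos *schemes[1];

            scheme = kzalloc(sizeof(*scheme), GFP_KERNEL);
            if (!scheme)
                    return -ENOMEM;

            scheme->min_sz_region = 100 * 1024 * 1024;  /* at least 100 MiB */
            scheme->max_sz_region = 0;                  /* no upper size limit */
            scheme->min_nr_accesses = 0;                /* no lower frequency limit */
            scheme->max_nr_accesses = 1;                /* monitored as mostly idle */
            scheme->min_age_region = 10;                /* kept that state for a while */
            scheme->max_age_region = 0;                 /* no upper age limit */
            scheme->action = DAMOS_PAGEOUT;

            schemes[0] = scheme;
            /* The scheme objects are owned by the DAMON context from here on. */
            return damon_set_schemes(ctx, schemes, 1);
    }
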
Signed-off-by: SeongJae Park
---
 include/linux/damon.h |  24 +++++++
 mm/damon.c            | 149 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 173 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 7276b2a31c38..0f26d8aad33c 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -36,6 +36,27 @@ struct damon_task {
 	struct list_head list;
 };
 
+/* Data Access Monitoring-based Operation Scheme */
+enum damos_action {
+	DAMOS_WILLNEED,
+	DAMOS_COLD,
+	DAMOS_PAGEOUT,
+	DAMOS_HUGEPAGE,
+	DAMOS_NOHUGEPAGE,
+	DAMOS_ACTION_LEN,
+};
+
+struct damos {
+	unsigned int min_sz_region;
+	unsigned int max_sz_region;
+	unsigned int min_nr_accesses;
+	unsigned int max_nr_accesses;
+	unsigned int min_age_region;
+	unsigned int max_age_region;
+	enum damos_action action;
+	struct list_head list;
+};
+
 /*
  * For each 'sample_interval', DAMON checks whether each region is accessed or
  * not.  It aggregates and keeps the access information (number of accesses to
@@ -65,6 +86,7 @@ struct damon_ctx {
 	struct mutex kdamond_lock;
 
 	struct list_head tasks_list;	/* 'damon_task' objects */
+	struct list_head schemes_list;	/* 'damos' objects */
 
 	/* callbacks */
 	void (*sample_cb)(struct damon_ctx *context);
@@ -75,6 +97,8 @@ int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
 int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
 		unsigned long aggr_int, unsigned long regions_update_int,
 		unsigned long min_nr_reg, unsigned long max_nr_reg);
+int damon_set_schemes(struct damon_ctx *ctx,
+			struct damos **schemes, ssize_t nr_schemes);
 int damon_set_recording(struct damon_ctx *ctx,
 				unsigned int rbuf_len, char *rfile_path);
 int damon_start(struct damon_ctx *ctx);
diff --git a/mm/damon.c b/mm/damon.c
index da266e8b2b30..13275c31a6c5 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -11,6 +11,7 @@
 
 #define CREATE_TRACE_POINTS
 
+#include
 #include
 #include
 #include
@@ -52,6 +53,12 @@
 #define damon_for_each_task_safe(ctx, t, next) \
 	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
 
+#define damon_for_each_scheme(ctx, r) \
+	list_for_each_entry(r, &(ctx)->schemes_list, list)
+
+#define damon_for_each_scheme_safe(ctx, s, next) \
+	list_for_each_entry_safe(s, next, &(ctx)->schemes_list, list)
+
 #define MAX_RECORD_BUFFER_LEN	(4 * 1024 * 1024)
 #define MAX_RFILE_PATH_LEN	256
 
@@ -167,6 +174,27 @@ static void damon_destroy_task(struct damon_task *t)
 	damon_free_task(t);
 }
 
+static void damon_add_scheme(struct damon_ctx *ctx, struct damos *s)
+{
+	list_add_tail(&s->list, &ctx->schemes_list);
+}
+
+static void damon_del_scheme(struct damos *s)
+{
+	list_del(&s->list);
+}
+
+static void damon_free_scheme(struct damos *s)
+{
+	kfree(s);
+}
+
+static void damon_destroy_scheme(struct damos *s)
+{
+	damon_del_scheme(s);
+	damon_free_scheme(s);
+}
+
 static unsigned int nr_damon_tasks(struct damon_ctx *ctx)
 {
 	struct damon_task *t;
@@ -702,6 +730,101 @@ static void kdamond_count_age(struct damon_ctx *c, unsigned int threshold)
 	}
 }
 
+#ifndef CONFIG_ADVISE_SYSCALLS
+static int damos_madvise(struct damon_task *task, struct damon_region *r,
+			int behavior)
+{
+	return -EINVAL;
+}
+#else
+static int damos_madvise(struct damon_task *task, struct damon_region *r,
+			int behavior)
+{
+	struct task_struct *t;
+	struct mm_struct *mm;
+	int ret = -ENOMEM;
+
+	t = damon_get_task_struct(task);
+	if (!t)
+		goto out;
+	mm = damon_get_mm(task);
+	if (!mm)
+		goto put_task_out;
+
+	ret = do_madvise(t, mm, PAGE_ALIGN(r->vm_start),
+			PAGE_ALIGN(r->vm_end - r->vm_start), behavior);
+	mmput(mm);
+put_task_out:
+	put_task_struct(t);
+out:
+	return ret;
+}
+#endif	/* CONFIG_ADVISE_SYSCALLS */
+
+static int damos_do_action(struct damon_task *task, struct damon_region *r,
+			enum damos_action action)
+{
+	int madv_action;
+
+	switch (action) {
+	case DAMOS_WILLNEED:
+		madv_action = MADV_WILLNEED;
+		break;
+	case DAMOS_COLD:
+		madv_action = MADV_COLD;
+		break;
+	case DAMOS_PAGEOUT:
+		madv_action = MADV_PAGEOUT;
+		break;
+	case DAMOS_HUGEPAGE:
+		madv_action = MADV_HUGEPAGE;
+		break;
+	case DAMOS_NOHUGEPAGE:
+		madv_action = MADV_NOHUGEPAGE;
+		break;
+	default:
+		pr_warn("Wrong action %d\n", action);
+		return -EINVAL;
+	}
+
+	return damos_madvise(task, r, madv_action);
+}
+
+static void damon_do_apply_schemes(struct damon_ctx *c, struct damon_task *t,
+				struct damon_region *r)
+{
+	struct damos *s;
+	unsigned long sz;
+
+	damon_for_each_scheme(c, s) {
+		sz = r->vm_end - r->vm_start;
+		if ((s->min_sz_region && sz < s->min_sz_region) ||
+				(s->max_sz_region && s->max_sz_region < sz))
+			continue;
+		if ((s->min_nr_accesses && r->nr_accesses < s->min_nr_accesses)
+				|| (s->max_nr_accesses &&
+					s->max_nr_accesses < r->nr_accesses))
+			continue;
+		if ((s->min_age_region && r->age < s->min_age_region) ||
+				(s->max_age_region &&
+					s->max_age_region < r->age))
+			continue;
+		damos_do_action(t, r, s->action);
+		r->age = 0;
+	}
+}
+
+static void kdamond_apply_schemes(struct damon_ctx *c)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	damon_for_each_task(c, t) {
+		damon_for_each_region(r, t)
+			damon_do_apply_schemes(c, t, r);
+	}
+}
+
 #define sz_damon_region(r) (r->vm_end - r->vm_start)
 
 /*
@@ -1057,6 +1180,7 @@ static int kdamond_fn(void *data)
 			kdamond_count_age(ctx, max_nr_accesses / 10);
 			if (ctx->aggregate_cb)
 				ctx->aggregate_cb(ctx);
+			kdamond_apply_schemes(ctx);
 			kdamond_reset_aggregated(ctx);
 			kdamond_split_regions(ctx);
 		}
@@ -1137,6 +1261,30 @@ int damon_stop(struct damon_ctx *ctx)
 	return -EPERM;
 }
 
+/**
+ * damon_set_schemes() - Set data access monitoring based operation schemes.
+ * @ctx:	monitoring context
+ * @schemes:	array of the schemes
+ * @nr_schemes:	number of entries in @schemes
+ *
+ * This function should not be called while the kdamond of the context is
+ * running.
+ *
+ * Return: 0 if success, or negative error code otherwise.
+ */
+int damon_set_schemes(struct damon_ctx *ctx, struct damos **schemes,
+			ssize_t nr_schemes)
+{
+	struct damos *s, *next;
+	ssize_t i;
+
+	damon_for_each_scheme_safe(ctx, s, next)
+		damon_destroy_scheme(s);
+	for (i = 0; i < nr_schemes; i++)
+		damon_add_scheme(ctx, schemes[i]);
+	return 0;
+}
+
 /**
  * damon_set_pids() - Set monitoring target processes.
  * @ctx:	monitoring context
@@ -1571,6 +1719,7 @@ static int __init damon_init_user_ctx(void)
 	mutex_init(&ctx->kdamond_lock);
 
 	INIT_LIST_HEAD(&ctx->tasks_list);
+	INIT_LIST_HEAD(&ctx->schemes_list);
 
 	return 0;
 }
-- 
2.17.1