Date: Thu, 16 Apr 2020 09:12:35 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id: <20200416161245.148813-3-samitolvanen@google.com>
Mime-Version: 1.0
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200416161245.148813-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog
Subject: [PATCH v11 02/12] scs: add accounting
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse,
 Steven Rostedt, Ard Biesheuvel, Mark Rutland, Masahiro Yamada,
 Michal Marek, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Masami Hiramatsu,
 Nick Desaulniers, Jann Horn, Miguel Ojeda, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen
Content-Type: text/plain; charset="UTF-8"

This change adds accounting for the memory allocated for shadow call
stacks. The totals are reported in /proc/meminfo, the per-node meminfo
files, show_free_areas(), and /proc/vmstat.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 drivers/base/node.c    |  6 ++++++
 fs/proc/meminfo.c      |  4 ++++
 include/linux/mmzone.h |  3 +++
 kernel/scs.c           | 20 ++++++++++++++++++++
 mm/page_alloc.c        |  6 ++++++
 mm/vmstat.c            |  3 +++
 6 files changed, 42 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 10d7e818e118..502ab5447c8d 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -415,6 +415,9 @@ static ssize_t node_read_meminfo(struct device *dev,
 		       "Node %d AnonPages:      %8lu kB\n"
 		       "Node %d Shmem:          %8lu kB\n"
 		       "Node %d KernelStack:    %8lu kB\n"
+#ifdef CONFIG_SHADOW_CALL_STACK
+		       "Node %d ShadowCallStack:%8lu kB\n"
+#endif
 		       "Node %d PageTables:     %8lu kB\n"
 		       "Node %d NFS_Unstable:   %8lu kB\n"
 		       "Node %d Bounce:         %8lu kB\n"
@@ -438,6 +441,9 @@ static ssize_t node_read_meminfo(struct device *dev,
 		       nid, K(node_page_state(pgdat, NR_ANON_MAPPED)),
 		       nid, K(i.sharedram),
 		       nid, sum_zone_node_page_state(nid, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+		       nid, sum_zone_node_page_state(nid, NR_KERNEL_SCS_BYTES) / 1024,
+#endif
 		       nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)),
 		       nid, K(node_page_state(pgdat, NR_UNSTABLE_NFS)),
 		       nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)),
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 8c1f1bb1a5ce..49768005a79e 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -103,6 +103,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "SUnreclaim:     ", sunreclaim);
 	seq_printf(m, "KernelStack:    %8lu kB\n",
 		   global_zone_page_state(NR_KERNEL_STACK_KB));
+#ifdef CONFIG_SHADOW_CALL_STACK
+	seq_printf(m, "ShadowCallStack:%8lu kB\n",
+		   global_zone_page_state(NR_KERNEL_SCS_BYTES) / 1024);
+#endif
 	show_val_kb(m, "PageTables:     ",
 		    global_zone_page_state(NR_PAGETABLE));
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1b9de7d220fb..89aa96797743 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -156,6 +156,9 @@ enum zone_stat_item {
 	NR_MLOCK,		/* mlock()ed pages found and moved off LRU */
 	NR_PAGETABLE,		/* used for pagetables */
 	NR_KERNEL_STACK_KB,	/* measured in KiB */
+#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK)
+	NR_KERNEL_SCS_BYTES,	/* measured in bytes */
+#endif
 	/* Second 128 byte cacheline */
 	NR_BOUNCE,
 #if IS_ENABLED(CONFIG_ZSMALLOC)
diff --git a/kernel/scs.c b/kernel/scs.c
index 28abed21950c..5245e992c692 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <linux/vmstat.h>
 #include
 
 static inline void *__scs_base(struct task_struct *tsk)
@@ -89,6 +90,11 @@ static void scs_free(void *s)
 	vfree_atomic(s);
 }
 
+static struct page *__scs_page(struct task_struct *tsk)
+{
+	return vmalloc_to_page(__scs_base(tsk));
+}
+
 static int scs_cleanup(unsigned int cpu)
 {
 	int i;
@@ -135,6 +141,11 @@ static inline void scs_free(void *s)
 	kmem_cache_free(scs_cache, s);
 }
 
+static struct page *__scs_page(struct task_struct *tsk)
+{
+	return virt_to_page(__scs_base(tsk));
+}
+
 void __init scs_init(void)
 {
 	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
@@ -153,6 +164,12 @@ void scs_task_reset(struct task_struct *tsk)
 	task_set_scs(tsk, __scs_base(tsk));
 }
 
+static void scs_account(struct task_struct *tsk, int account)
+{
+	mod_zone_page_state(page_zone(__scs_page(tsk)), NR_KERNEL_SCS_BYTES,
+		account * SCS_SIZE);
+}
+
 int scs_prepare(struct task_struct *tsk, int node)
 {
 	void *s;
@@ -162,6 +179,8 @@ int scs_prepare(struct task_struct *tsk, int node)
 		return -ENOMEM;
 
 	task_set_scs(tsk, s);
+	scs_account(tsk, 1);
+
 	return 0;
 }
 
@@ -182,6 +201,7 @@ void scs_release(struct task_struct *tsk)
 
 	WARN_ON(scs_corrupted(tsk));
 
+	scs_account(tsk, -1);
 	task_set_scs(tsk, NULL);
 	scs_free(s);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 69827d4fa052..721879d56bbd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5411,6 +5411,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			" managed:%lukB"
 			" mlocked:%lukB"
 			" kernel_stack:%lukB"
+#ifdef CONFIG_SHADOW_CALL_STACK
+			" shadow_call_stack:%lukB"
+#endif
 			" pagetables:%lukB"
 			" bounce:%lukB"
 			" free_pcp:%lukB"
@@ -5433,6 +5436,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			K(zone_managed_pages(zone)),
 			K(zone_page_state(zone, NR_MLOCK)),
 			zone_page_state(zone, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+			zone_page_state(zone, NR_KERNEL_SCS_BYTES) / 1024,
+#endif
 			K(zone_page_state(zone, NR_PAGETABLE)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 96d21a792b57..089602efa477 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1119,6 +1119,9 @@ const char * const vmstat_text[] = {
 	"nr_mlock",
 	"nr_page_table_pages",
 	"nr_kernel_stack",
+#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK)
+	"nr_shadow_call_stack_bytes",
+#endif
 	"nr_bounce",
 #if IS_ENABLED(CONFIG_ZSMALLOC)
 	"nr_zspages",
-- 
2.26.1.301.g55bc3eb7cb9-goog
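
For readers who want to see the effect of the patch at run time, below is a
small illustrative user-space sketch (not part of the patch) that reads the
counters the hunks above expose. It assumes a kernel built with
CONFIG_SHADOW_CALL_STACK=y; the field names ("ShadowCallStack:" in
/proc/meminfo and "nr_shadow_call_stack_bytes" in /proc/vmstat) are taken
directly from the diff, everything else is just a reader program.

/*
 * Illustrative sketch only, not part of this patch. Prints the shadow
 * call stack counters added above, assuming CONFIG_SHADOW_CALL_STACK=y.
 */
#include <stdio.h>
#include <string.h>

/* Print every line of @path that starts with @key. */
static void print_counter(const char *path, const char *key)
{
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}

	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, key, strlen(key)))
			fputs(line, stdout);
	}

	fclose(f);
}

int main(void)
{
	/* Total shadow call stack memory, reported in kB. */
	print_counter("/proc/meminfo", "ShadowCallStack:");

	/* The same total as a raw byte counter. */
	print_counter("/proc/vmstat", "nr_shadow_call_stack_bytes");

	return 0;
}

Note that the underlying zone counter is kept in bytes (NR_KERNEL_SCS_BYTES),
so /proc/vmstat shows a raw byte count, while the meminfo paths divide by
1024 to report kB, mirroring how NR_KERNEL_STACK_KB is presented.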