From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
 Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
 Dave Hansen, Elena Reshetova, "H. Peter Anvin", Idan Yaniv,
 Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
 Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
 Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
 Will Deacon, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-nvdimm@lists.01.org,
 linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v3 6/6] mm: secretmem: add ability to reserve memory at boot
Date: Tue, 4 Aug 2020 12:50:35 +0300
Message-Id: <20200804095035.18778-7-rppt@kernel.org>
In-Reply-To: <20200804095035.18778-1-rppt@kernel.org>
References: <20200804095035.18778-1-rppt@kernel.org>
X-Mailer: git-send-email 2.26.2
Shutemov" , Matthew Wilcox , Mark Rutland , Mike Rapoport , Mike Rapoport , Michael Kerrisk , Palmer Dabbelt , Paul Walmsley , Peter Zijlstra , Thomas Gleixner , Tycho Andersen , Will Deacon , linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org, x86@kernel.org Subject: [PATCH v3 6/6] mm: secretmem: add ability to reserve memory at boot Date: Tue, 4 Aug 2020 12:50:35 +0300 Message-Id: <20200804095035.18778-7-rppt@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200804095035.18778-1-rppt@kernel.org> References: <20200804095035.18778-1-rppt@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 895C1180442C3 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam04 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Mike Rapoport Taking pages out from the direct map and bringing them back may create undesired fragmentation and usage of the smaller pages in the direct mapping of the physical memory. This can be avoided if a significantly large area of the physical memory would be reserved for secretmem purposes at boot time. Add ability to reserve physical memory for secretmem at boot time using "secretmem" kernel parameter and then use that reserved memory as a globa= l pool for secret memory needs. Signed-off-by: Mike Rapoport --- mm/secretmem.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 126 insertions(+), 8 deletions(-) diff --git a/mm/secretmem.c b/mm/secretmem.c index e42616785a88..0f3e7b30a0a7 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -35,6 +36,39 @@ struct secretmem_ctx { unsigned int mode; }; =20 +struct secretmem_pool { + struct gen_pool *pool; + unsigned long reserved_size; + void *reserved; +}; + +static struct secretmem_pool secretmem_pool; + +static struct page *secretmem_alloc_huge_page(gfp_t gfp) +{ + struct gen_pool *pool =3D secretmem_pool.pool; + unsigned long addr =3D 0; + struct page *page =3D NULL; + + if (pool) { + if (gen_pool_avail(pool) < PMD_SIZE) + return NULL; + + addr =3D gen_pool_alloc(pool, PMD_SIZE); + if (!addr) + return NULL; + + page =3D virt_to_page(addr); + } else { + page =3D alloc_pages(gfp, PMD_PAGE_ORDER); + + if (page) + split_page(page, PMD_PAGE_ORDER); + } + + return page; +} + static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp) { unsigned long nr_pages =3D (1 << PMD_PAGE_ORDER); @@ -43,12 +77,11 @@ static int secretmem_pool_increase(struct secretmem_c= tx *ctx, gfp_t gfp) struct page *page; int err; =20 - page =3D alloc_pages(gfp, PMD_PAGE_ORDER); + page =3D secretmem_alloc_huge_page(gfp); if (!page) return -ENOMEM; =20 addr =3D (unsigned long)page_address(page); - split_page(page, PMD_PAGE_ORDER); =20 err =3D gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE); if (err) { @@ -274,11 +307,13 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags) return err; } =20 -static void secretmem_cleanup_chunk(struct gen_pool *pool, - struct gen_pool_chunk *chunk, void *data) +static void secretmem_recycle_range(unsigned long start, unsigned long e= nd) +{ + gen_pool_free(secretmem_pool.pool, start, PMD_SIZE); +} + +static void secretmem_release_range(unsigned long start, unsigned long e= nd) { - 
diff --git a/mm/secretmem.c b/mm/secretmem.c
index e42616785a88..0f3e7b30a0a7 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <linux/memblock.h>
 #include
 #include
 #include
@@ -35,6 +36,39 @@ struct secretmem_ctx {
 	unsigned int mode;
 };
 
+struct secretmem_pool {
+	struct gen_pool *pool;
+	unsigned long reserved_size;
+	void *reserved;
+};
+
+static struct secretmem_pool secretmem_pool;
+
+static struct page *secretmem_alloc_huge_page(gfp_t gfp)
+{
+	struct gen_pool *pool = secretmem_pool.pool;
+	unsigned long addr = 0;
+	struct page *page = NULL;
+
+	if (pool) {
+		if (gen_pool_avail(pool) < PMD_SIZE)
+			return NULL;
+
+		addr = gen_pool_alloc(pool, PMD_SIZE);
+		if (!addr)
+			return NULL;
+
+		page = virt_to_page(addr);
+	} else {
+		page = alloc_pages(gfp, PMD_PAGE_ORDER);
+
+		if (page)
+			split_page(page, PMD_PAGE_ORDER);
+	}
+
+	return page;
+}
+
 static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
 	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -43,12 +77,11 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = alloc_pages(gfp, PMD_PAGE_ORDER);
+	page = secretmem_alloc_huge_page(gfp);
 	if (!page)
 		return -ENOMEM;
 
 	addr = (unsigned long)page_address(page);
-	split_page(page, PMD_PAGE_ORDER);
 
 	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
 	if (err) {
@@ -274,11 +307,13 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	return err;
 }
 
-static void secretmem_cleanup_chunk(struct gen_pool *pool,
-				    struct gen_pool_chunk *chunk, void *data)
+static void secretmem_recycle_range(unsigned long start, unsigned long end)
+{
+	gen_pool_free(secretmem_pool.pool, start, PMD_SIZE);
+}
+
+static void secretmem_release_range(unsigned long start, unsigned long end)
 {
-	unsigned long start = chunk->start_addr;
-	unsigned long end = chunk->end_addr;
 	unsigned long nr_pages, addr;
 
 	nr_pages = (end - start + 1) / PAGE_SIZE;
@@ -288,6 +323,18 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
 		put_page(virt_to_page(addr));
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+
+	if (secretmem_pool.pool)
+		secretmem_recycle_range(start, end);
+	else
+		secretmem_release_range(start, end);
+}
+
 static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
 {
 	struct gen_pool *pool = ctx->pool;
@@ -327,14 +374,85 @@ static struct file_system_type secretmem_fs = {
 	.kill_sb	= kill_anon_super,
 };
 
+static int secretmem_reserved_mem_init(void)
+{
+	struct gen_pool *pool;
+	struct page *page;
+	void *addr;
+	int err;
+
+	if (!secretmem_pool.reserved)
+		return 0;
+
+	pool = gen_pool_create(PMD_SHIFT, NUMA_NO_NODE);
+	if (!pool)
+		return -ENOMEM;
+
+	err = gen_pool_add(pool, (unsigned long)secretmem_pool.reserved,
+			   secretmem_pool.reserved_size, NUMA_NO_NODE);
+	if (err)
+		goto err_destroy_pool;
+
+	for (addr = secretmem_pool.reserved;
+	     addr < secretmem_pool.reserved + secretmem_pool.reserved_size;
+	     addr += PAGE_SIZE) {
+		page = virt_to_page(addr);
+		__ClearPageReserved(page);
+		set_page_count(page, 1);
+	}
+
+	secretmem_pool.pool = pool;
+	page = virt_to_page(secretmem_pool.reserved);
+	__kernel_map_pages(page, secretmem_pool.reserved_size / PAGE_SIZE, 0);
+	return 0;
+
+err_destroy_pool:
+	gen_pool_destroy(pool);
+	return err;
+}
+
 static int secretmem_init(void)
 {
-	int ret = 0;
+	int ret;
+
+	ret = secretmem_reserved_mem_init();
+	if (ret)
+		return ret;
 
 	secretmem_mnt = kern_mount(&secretmem_fs);
-	if (IS_ERR(secretmem_mnt))
+	if (IS_ERR(secretmem_mnt)) {
+		gen_pool_destroy(secretmem_pool.pool);
 		ret = PTR_ERR(secretmem_mnt);
+	}
 
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	void *reserved;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	reserved = memblock_alloc(reserved_size, align);
+	if (!reserved) {
+		pr_err("failed to reserve %lu bytes\n", reserved_size);
+		return 0;
+	}
+
+	secretmem_pool.reserved_size = reserved_size;
+	secretmem_pool.reserved = reserved;
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 1;
+}
+__setup("secretmem=", secretmem_setup);
-- 
2.26.2
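
Usage sketch (not part of the patch): boot the patched kernel with,
e.g., "secretmem=1G" on the command line -- on x86-64 a 1G reservation
gets PUD (1G) alignment since 2 * 1G > PUD_SIZE, while anything up to
512M keeps the default PMD (2M) alignment -- then create and map a
secret area from userspace roughly as below. __NR_memfd_secret comes
from the uapi headers added earlier in this series (it is not in
released kernels), and flags == 0 is an assumption:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Syscall number is provided by the patched kernel headers;
	 * it does not exist in released uapi headers. */
	int fd = syscall(__NR_memfd_secret, 0UL);
	if (fd < 0) {
		perror("memfd_secret");
		return 1;
	}

	/* Size the secret area to one 2M huge page. */
	if (ftruncate(fd, 2UL << 20) < 0) {
		perror("ftruncate");
		return 1;
	}

	char *p = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Data written here is backed by pages removed from the
	 * kernel direct map. */
	strcpy(p, "secret");
	return 0;
}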