From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, "H. Peter Anvin", Ingo Molnar,
	James Bottomley, "Kirill A. Shutemov", Matthew Wilcox, Mark Rutland,
	Mike Rapoport, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
	Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v13 08/10] PM: hibernate: disable when there are active secretmem users
Date: Tue, 1 Dec 2020 09:45:57 +0200
Message-Id: <20201201074559.27742-9-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201201074559.27742-1-rppt@kernel.org>
References: <20201201074559.27742-1-rppt@kernel.org>

From: Mike Rapoport

It is unsafe to allow saving of secretmem areas to the hibernation
snapshot as they would be visible after resume, which essentially
defeats the purpose of secret memory mappings.

Prevent hibernation whenever there are active secret memory users.
Signed-off-by: Mike Rapoport
---
A small userspace sketch illustrating the intended behaviour follows the
patch.

 include/linux/secretmem.h |  6 ++++++
 kernel/power/hibernate.c  |  5 ++++-
 mm/secretmem.c            | 15 +++++++++++++++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
 
 bool vma_is_secretmem(struct vm_area_struct *vma);
 bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
 
 #else
 
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
 	return false;
 }
 
+static inline bool secretmem_active(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_SECRETMEM */
 
 #endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
 #include <linux/genhd.h>
 #include <linux/ktime.h>
 #include <linux/security.h>
+#include <linux/secretmem.h>
 #include <trace/events/power.h>
 
 #include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
 
 bool hibernation_available(void)
 {
-	return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+	return nohibernate == 0 &&
+		!security_locked_down(LOCKDOWN_HIBERNATION) &&
+		!secretmem_active();
 }
 
 /**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 5e3e5102ad4c..2cf5cc79418b 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -45,6 +45,13 @@ struct secretmem_ctx {
 
 static struct cma *secretmem_cma;
 
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+	return !!atomic_read(&secretmem_users);
+}
+
 static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
 {
 	int err;
@@ -179,6 +186,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
 	.fault = secretmem_fault,
 };
 
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+	atomic_dec(&secretmem_users);
+	return 0;
+}
+
 static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	unsigned long len = vma->vm_end - vma->vm_start;
@@ -201,6 +214,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
 }
 
 static const struct file_operations secretmem_fops = {
+	.release = secretmem_release,
 	.mmap = secretmem_mmap,
 };
 
@@ -318,6 +332,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	file->f_flags |= O_LARGEFILE;
 
 	fd_install(fd, file);
+	atomic_inc(&secretmem_users);
 	return fd;
 
 err_put_fd:
-- 
2.28.0
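
Not part of the patch: a minimal userspace sketch of the behaviour this
change aims for, under a few assumptions: the syscall number (447 below is
the number memfd_secret was eventually given in mainline; the number used
with this series may differ), a flags value of 0, root privileges for
/sys/power/state, and a kernel with CONFIG_SECRETMEM and hibernation
support. While the secret-memory descriptor is open, secretmem_users is
non-zero, hibernation_available() returns false, and starting hibernation
is expected to fail with EPERM.

/* Illustrative only; the syscall number and flags are assumptions, see above. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* mainline number; adjust for this series */
#endif

int main(void)
{
	/* Creating a secret memory descriptor increments secretmem_users. */
	int sfd = syscall(__NR_memfd_secret, 0UL);
	if (sfd < 0) {
		perror("memfd_secret");
		return 1;
	}

	/*
	 * While sfd is open, hibernation_available() is false, so this
	 * write is expected to fail with EPERM (root is needed to try).
	 */
	int pfd = open("/sys/power/state", O_WRONLY);
	if (pfd >= 0) {
		if (write(pfd, "disk", 4) < 0)
			printf("hibernation refused: %s\n", strerror(errno));
		close(pfd);
	}

	/*
	 * Closing the last secret memory descriptor decrements
	 * secretmem_users and re-enables hibernation.
	 */
	close(sfd);
	return 0;
}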