Date: Fri, 31 Jul 2020 15:29:05 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Catalin Marinas
Cc: Mike Rapoport, linux-kernel@vger.kernel.org, Alexander Viro,
	Andrew Morton, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Christopher Lameter, Dave Hansen, Elena Reshetova, "H. Peter Anvin",
	Idan Yaniv, Ingo Molnar, James Bottomley, "Kirill A. Shutemov",
	Matthew Wilcox, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@lists.01.org,
	linux-riscv@lists.infradead.org, x86@kernel.org
Subject: Re: [PATCH v2 3/7] mm: introduce memfd_secret system call to create "secret" memory areas
Message-ID: <20200731142905.GA67415@C02TD0UTHF1T.local>
References: <20200727162935.31714-1-rppt@kernel.org>
 <20200727162935.31714-4-rppt@kernel.org>
 <20200730162209.GB3128@gaia>
In-Reply-To: <20200730162209.GB3128@gaia>
List-Id: "Linux-nvdimm developer list."

On Thu, Jul 30, 2020 at 05:22:10PM +0100, Catalin Marinas wrote:
> On Mon, Jul 27, 2020 at 07:29:31PM +0300, Mike Rapoport wrote:
> > +static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > +	struct secretmem_ctx *ctx = file->private_data;
> > +	unsigned long mode = ctx->mode;
> > +	unsigned long len = vma->vm_end - vma->vm_start;
> > +
> > +	if (!mode)
> > +		return -EINVAL;
> > +
> > +	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
> > +		return -EINVAL;
> > +
> > +	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
> > +		return -EAGAIN;
> > +
> > +	switch (mode) {
> > +	case SECRETMEM_UNCACHED:
> > +		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> > +		fallthrough;
> > +	case SECRETMEM_EXCLUSIVE:
> > +		vma->vm_ops = &secretmem_vm_ops;
> > +		break;
> > +	default:
> > +		return -EINVAL;
> > +	}
> > +
> > +	vma->vm_flags |= VM_LOCKED;
> > +
> > +	return 0;
> > +}
>
> I think the uncached mapping is not the right thing for arm/arm64. First
> of all, pgprot_noncached() gives us Strongly Ordered (Device memory)
> semantics together with not allowing unaligned accesses. I suspect the
> semantics are different on x86.
>
> The second, more serious problem is that I can't find any place where
> the caches are flushed for the page mapped on fault. When a page is
> allocated, assuming GFP_ZERO, only the caches are guaranteed to be
> zeroed. Exposing this subsequently to user space as uncached would allow
> the user to read stale data prior to zeroing. The arm64
> set_direct_map_default_noflush() doesn't do any cache maintenance.

It's also worth noting that in a virtual machine this is liable to be
either broken (with a potential loss of coherency if the host has a
cacheable alias, as existing KVM hosts do), or pointless (if the host
uses S2FWB to upgrade the Stage-1 attributes to cacheable, as existing
KVM hosts also do).

I think that trying to avoid the data caches creates many more problems
than it solves, and I don't think there's a strong justification for
trying to support that on arm64 to begin with, so I'd rather we opted
out of supporting SECRETMEM_UNCACHED entirely.

Thanks,
Mark.
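
P.S. For concreteness, here is a rough sketch (not from the posted
series) of the kind of cache maintenance Catalin is pointing at: a
hypothetical helper that would have to run on arm64 before a freshly
allocated page is handed to userspace through a non-cacheable mapping.
secretmem_prep_uncached_page() is an invented name, and
__flush_dcache_area() is the arm64-internal clean+invalidate-to-PoC
routine as of v5.8; there is no generic cross-architecture equivalent,
which is part of the problem being raised.

#include <linux/mm.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical sketch: the allocator zeroes a GFP_ZERO page through the
 * cacheable linear map, so the zeroes may still sit only in the D-cache.
 * A non-cacheable user mapping bypasses the cache and could observe the
 * stale memory contents unless the lines are cleaned and invalidated to
 * the Point of Coherency before the page is mapped.
 */
static void secretmem_prep_uncached_page(struct page *page)
{
	/* arm64-only as of v5.8: clean+invalidate [addr, addr + size) to PoC */
	__flush_dcache_area(page_address(page), PAGE_SIZE);
}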