Date: Thu, 4 Feb 2021 13:34:32 +0200
From: Mike Rapoport
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
	Dave Hansen, David Hildenbrand, Elena Reshetova, "H. Peter Anvin",
	Ingo Molnar, James Bottomley,
Shutemov" , Matthew Wilcox , Mark Rutland , Mike Rapoport , Michael Kerrisk , Palmer Dabbelt , Paul Walmsley , Peter Zijlstra , Rick Edgecombe , Roman Gushchin , Shakeel Butt , Shuah Khan , Thomas Gleixner , Tycho Andersen , Will Deacon , linux-api@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org, x86@kernel.org, Hagen Paul Pfeifer , Palmer Dabbelt Subject: Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas Message-ID: <20210204113432.GS242749@kernel.org> References: <20210121122723.3446-1-rppt@kernel.org> <20210121122723.3446-7-rppt@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org On Wed, Feb 03, 2021 at 01:15:58PM +0100, Michal Hocko wrote: > On Thu 21-01-21 14:27:18, Mike Rapoport wrote: > > +static struct file *secretmem_file_create(unsigned long flags) > > +{ > > + struct file *file = ERR_PTR(-ENOMEM); > > + struct secretmem_ctx *ctx; > > + struct inode *inode; > > + > > + inode = alloc_anon_inode(secretmem_mnt->mnt_sb); > > + if (IS_ERR(inode)) > > + return ERR_CAST(inode); > > + > > + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); > > + if (!ctx) > > + goto err_free_inode; > > + > > + file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem", > > + O_RDWR, &secretmem_fops); > > + if (IS_ERR(file)) > > + goto err_free_ctx; > > + > > + mapping_set_unevictable(inode->i_mapping); > > Btw. you need also mapping_set_gfp_mask(mapping, GFP_HIGHUSER) because > the default is GFP_HIGHUSER_MOVABLE and you do not support migration so > no pages from movable zones should be allowed. Ok. -- Sincerely yours, Mike.