Date: Wed, 19 May 2021 10:13:09 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Michal Hocko
Subject: Re: [PATCH v19 5/8] mm: introduce memfd_secret system call to create
 "secret" memory areas
References: <20210513184734.29317-1-rppt@kernel.org>
 <20210513184734.29317-6-rppt@kernel.org>
 <8e114f09-60e4-2343-1c42-1beaf540c150@redhat.com>
 <00644dd8-edac-d3fd-a080-0a175fa9bf13@redhat.com>
Cc: David Hildenbrand, Andrew Morton, Alexander Viro, Andy Lutomirski,
 Arnd Bergmann, Borislav Petkov, Catalin Marinas, Christopher Lameter,
 Dave Hansen, Elena Reshetova, "H. Peter Anvin", Hagen Paul Pfeifer,
 Ingo Molnar, James Bottomley, Kees Cook, "Kirill A. Shutemov",
 Matthew Wilcox, Matthew Garrett, Mark Rutland, Mike Rapoport,
 Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
 "Rafael J. Wysocki", Rick Edgecombe, Roman Gushchin, Shakeel Butt,
 Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon, Yury Norov,
 linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
 linux-riscv@lists.infradead.org, x86@kernel.org
List-Id: "Linux-nvdimm developer list."
On Tue, May 18, 2021 at 01:08:27PM +0200, Michal Hocko wrote:
> On Tue 18-05-21 12:35:36, David Hildenbrand wrote:
> > On 18.05.21 12:31, Michal Hocko wrote:
> > >
> > > Although I have to say openly that I am not a great fan of VM_FAULT_OOM
> > > in general. It is usually a wrong way to tell the handler about the
> > > failure, because it happens outside of the allocation context, so you
> > > lose all the details (e.g. allocation constraints, NUMA policy, etc.).
> > > Also, whenever there is ENOMEM, the allocation itself has already made
> > > sure that all the reclaim attempts have been depleted. Just consider an
> > > allocation with GFP_NOWAIT/NO_RETRY or similar failing and propagating
> > > ENOMEM up the call stack. Turning that into the OOM killer sounds like a
> > > bad idea to me. But that is a more general topic. I have tried to bring
> > > this up in the past but there was not much interest in fixing it, as it
> > > was not a pressing problem...
> >
> > I'm certainly interested; it would mean that we actually want to try
> > recovering from VM_FAULT_OOM in various cases, and as you state, we might
> > have to supply more information to make that work reliably.
>
> Or maybe we want to get rid of VM_FAULT_OOM altogether... But this is
> really tangential to this discussion. The only relation is that this would
> be another place to check when somebody wants to go in that direction.

If we are to get rid of VM_FAULT_OOM, vmf_error() would be updated and
this place would get the update automagically.

> > Having said that, I guess what we have here is just the same as when our
> > process fails to allocate a generic page table in __handle_mm_fault(),
> > when we fail p4d_alloc() and friends ...
>
> From a quick look it is really similar in the sense that it effectively
> never happens, and if it does, then it certainly does the wrong thing.
> The point I was trying to make is that there is likely no need to go
> that way.

As David pointed out, a failure to manipulate the direct map in
secretmem_fault() is just like any other allocation failure in page
fault handling, and most of those result in VM_FAULT_OOM, so I think
that having vmf_error() in secretmem_fault() is more consistent with the
rest of the code than using VM_FAULT_SIGBUS.

Besides, if direct map manipulation were to fail with errors other than
-ENOMEM, having vmf_error() may prove useful.

--
Sincerely yours,
Mike.