From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 28 Oct 2020 11:20:12 +0000
From: Will Deacon <will@kernel.org>
To: Mike Rapoport
Cc: david@redhat.com, peterz@infradead.org, catalin.marinas@arm.com,
 dave.hansen@linux.intel.com, linux-mm@kvack.org, paulus@samba.org,
 pavel@ucw.cz, hpa@zytor.com, sparclinux@vger.kernel.org, cl@linux.com,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 x86@kernel.org, rppt@linux.ibm.com, borntraeger@de.ibm.com,
 mingo@redhat.com, rientjes@google.com, "Brown, Len",
 aou@eecs.berkeley.edu, gor@linux.ibm.com, linux-pm@vger.kernel.org,
 hca@linux.ibm.com, bp@alien8.de, luto@kernel.org,
 paul.walmsley@sifive.com, kirill@shutemov.name, tglx@linutronix.de,
 iamjoonsoo.kim@lge.com, linux-arm-kernel@lists.infradead.org,
 rjw@rjwysocki.net, linux-kernel@vger.kernel.org, penberg@kernel.org,
 palmer@dabbelt.com, akpm@linux-foundation.org, "Edgecombe, Rick P",
 linuxppc-dev@lists.ozlabs.org, davem@davemloft.net
Subject: Re: [PATCH 0/4] arch, mm: improve robustness of direct map manipulation
Message-ID: <20201028112011.GB27927@willie-the-truck>
References: <20201025101555.3057-1-rppt@kernel.org>
 <20201026090526.GA1154158@kernel.org>
 <20201027083816.GG1154158@kernel.org>
In-Reply-To: <20201027083816.GG1154158@kernel.org>
User-Agent: Mutt/1.10.1 (2018-07-13)
List-Id: Linux on PowerPC Developers Mail List

On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> On Mon, Oct 26, 2020 at 06:05:30PM +0000, Edgecombe, Rick P wrote:
> > On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 01:13:52AM +0000, Edgecombe, Rick P wrote:
> > > > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > > > > Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP
> > > > > it is possible that __kernel_map_pages() would fail, but since this
> > > > > function is void, the failure will go unnoticed.
> > > >
> > > > Could you elaborate on how this could happen? Do you mean during
> > > > runtime today or if something new was introduced?
> > >
> > > A failure in __kernel_map_pages() may happen today. For instance, on
> > > x86, if the kernel is built with DEBUG_PAGEALLOC,
> > >
> > >     __kernel_map_pages(page, 1, 0);
> > >
> > > will need to split, say, a 2M page, and during the split an allocation
> > > of a page table could fail.
> >
> > On x86 at least, DEBUG_PAGEALLOC expects to never have to break a page
> > on the direct map and even disables locking in cpa because it assumes
> > this. If this is happening somehow anyway then we should probably fix
> > that. Even if it's a debug feature, it will not be as useful if it is
> > causing its own crashes.
> >
> > I'm still wondering if there is something I'm missing here. It seems
> > like you are saying there is a bug in some arches, so let's add a WARN
> > in cross-arch code to log it as it crashes. A WARN and making things
> > clearer seem like good ideas, but if there is a bug we should fix it.
> > The code around the callers still functionally assumes re-mapping can't
> > fail.
>
> Oh, I meant an x86 kernel *without* DEBUG_PAGEALLOC; there the call
> that unmaps pages back in safe_copy_page() will just reset a 4K page to
> NP, because whatever made it NP in the first place already did the split.
>
> Still, on arm64 with DEBUG_PAGEALLOC=n there is a possibility of a race
> between the map/unmap dance in __vunmap() and safe_copy_page() that may
> cause access to unmapped memory:
>
> __vunmap()
>     vm_remove_mappings()
>         set_direct_map_invalid()
>                                     safe_copy_page()
>                                         __kernel_map_pages()
>                                         return
>                                         do_copy_page() -> fault
>
> This is a theoretical bug, but it is still not nice :)

Just to clarify: this patch series fixes this problem, right?

Will