From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 Apr 2022 19:10:27 +0200
From: Vlastimil Babka
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
    Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
    Mike Rapoport, Yang Shi, "Kirill A. Shutemov", Matthew Wilcox,
    Jann Horn, Michal Hocko, Nadav Amit, Rik van Riel, Roman Gushchin,
    Andrea Arcangeli, Peter Xu, Donald Dutile, Christoph Hellwig,
    Oleg Nesterov, Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay,
    Catalin Marinas, Will Deacon, Michael Ellerman,
    Benjamin Herrenschmidt, Paul Mackerras, Heiko Carstens,
    Vasily Gorbik, Alexander Gordeev, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Dave Hansen, Gerald Schaefer, linux-mm@kvack.org,
    x86@kernel.org, linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org
Subject: Re: [PATCH v2 1/8] mm/swap: remember PG_anon_exclusive via a swp pte bit
In-Reply-To: <20220329164329.208407-2-david@redhat.com>
References: <20220329164329.208407-1-david@redhat.com>
 <20220329164329.208407-2-david@redhat.com>
Content-Type: text/plain; charset=UTF-8

On 3/29/22 18:43, David Hildenbrand wrote:
> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
> it. We do this to keep fork() logic on swap entries easy and efficient:
> for example, if we wouldn't clear it when unmapping, we'd have to look up
> the page in the swapcache for each and every swap entry during fork() and
> clear PG_anon_exclusive if set.
>
> Instead, we want to store that information directly in the swap pte,
> protected by the page table lock, similarly to how we handle
> SWP_MIGRATION_READ_EXCLUSIVE for migration entries. However, for actual
> swap entries, we don't want to mess with the swap type (e.g., still one
> bit) because it overcomplicates swap code.
>
> In try_to_unmap(), we already refuse to unmap in case the page might be
> pinned, because we must never lose PG_anon_exclusive on pinned pages.
> Reliably checking for other unexpected references *before* completely
> unmapping a page is unfortunately not really possible: THPs heavily
> overcomplicate the situation. Once fully unmapped it's easier -- we,
> for example, make sure that there are no unexpected references *after*
> unmapping a page before starting writeback on that page.
>
> So, we currently might end up unmapping a page and clearing
> PG_anon_exclusive if that page has additional references, for example,
> due to a FOLL_GET.
>
> do_swap_page() has to re-determine if a page is exclusive, which will
> easily fail if there are other references on a page, most prominently
> GUP references via FOLL_GET. This can currently result in memory
> corruptions when taking a FOLL_GET | FOLL_WRITE reference on a page even
> when fork() is never involved: try_to_unmap() will succeed, and when
> refaulting the page, it cannot be marked exclusive and will get replaced
> by a copy in the page tables on the next write access, resulting in
> writes via the GUP reference to the page being lost.
>
> In an ideal world, everybody that uses GUP and wants to modify page
> content, such as O_DIRECT, would properly use FOLL_PIN. However, that
> conversion will take a while. It's easier to fix what used to work in
> the past (FOLL_GET | FOLL_WRITE) by remembering PG_anon_exclusive. In
> addition, by remembering PG_anon_exclusive we can further reduce
> unnecessary COW in some cases, so it's the natural thing to do.
>
> So let's transfer the PG_anon_exclusive information to the swap pte and
> store it via an architecture-dependent pte bit; use that information
> when restoring the swap pte in do_swap_page() and unuse_pte(). During
> fork(), we simply have to clear the pte bit and are done.
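For readers following along, the mechanism boils down to roughly the
following. This is a simplified sketch, not the literal hunks of this
patch: the pte_swp_*exclusive() helpers are the ones this series
introduces, while the surrounding context (anon_exclusive, pteval,
pvmw, src_pte and friends) is elided.

/*
 * Unmap side (cf. try_to_unmap_one()): transfer the page's
 * PG_anon_exclusive bit into the swap pte, under the page table lock.
 */
swp_pte = swp_entry_to_pte(entry);
if (anon_exclusive)                     /* PageAnonExclusive() sampled earlier */
        swp_pte = pte_swp_mkexclusive(swp_pte);
if (pte_soft_dirty(pteval))
        swp_pte = pte_swp_mksoft_dirty(swp_pte);
set_pte_at(mm, address, pvmw.pte, swp_pte);

/*
 * fork() side (cf. copy_nonpresent_pte()): after fork() the swap entry
 * is shared by parent and child, so the exclusive marker is dropped.
 */
if (pte_swp_exclusive(*src_pte)) {
        pte = pte_swp_clear_exclusive(*src_pte);
        set_pte_at(src_mm, addr, src_pte, pte);
}

Keeping the fork() side to a single bit clear under the page table lock
is what keeps the fork() path cheap, as described above.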
>
> Of course, there is one corner case to handle: swap backends that don't
> support concurrent page modifications while the page is under writeback.
> Special case these, and drop the exclusive marker. Add a comment why that
> is just fine (also, reuse_swap_page() would have done the same in the
> past).
>
> In the future, we'll hopefully have all architectures support
> __HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty
> stubs and the define completely. Then, we can also convert
> SWP_MIGRATION_READ_EXCLUSIVE. For architectures it's fairly easy to
> support: either simply use a yet unused pte bit that can be used for swap
> entries, steal one from the arch type bits if they exceed 5, or steal one
> from the offset bits.
>
> Note: R/O FOLL_GET references were never really reliable, especially
> when taking one on a shared page and then writing to the page (e.g., GUP
> after fork()). FOLL_GET, including R/W references, were never really
> reliable once fork() was involved (e.g., GUP before fork(),
> GUP during fork()). KSM steps back in case it stumbles over unexpected
> references and is, therefore, fine.
>
> Signed-off-by: David Hildenbrand
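For reference, the per-architecture part is small. A minimal sketch of
what an architecture with a spare software bit in its swap pte encoding
might add; the _PAGE_SWP_EXCLUSIVE name is a placeholder here and the
x86-style pte_flags()/pte_set_flags()/pte_clear_flags() helpers are
used for illustration only, since which bit is actually free differs
per architecture:

#define __HAVE_ARCH_PTE_SWP_EXCLUSIVE

/* _PAGE_SWP_EXCLUSIVE: whatever pte bit is unused for swap entries. */
static inline pte_t pte_swp_mkexclusive(pte_t pte)
{
        return pte_set_flags(pte, _PAGE_SWP_EXCLUSIVE);
}

static inline int pte_swp_exclusive(pte_t pte)
{
        return pte_flags(pte) & _PAGE_SWP_EXCLUSIVE;
}

static inline pte_t pte_swp_clear_exclusive(pte_t pte)
{
        return pte_clear_flags(pte, _PAGE_SWP_EXCLUSIVE);
}

Architectures that provide these simply bypass the empty generic stubs
the commit message mentions.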

With the fixup as reported by Miaohe Lin

Acked-by: Vlastimil Babka

(sent a separate mm-commits mail to inquire about the fix going missing
from mmotm)
https://lore.kernel.org/mm-commits/c3195d8a-2931-0749-973a-1d04e4baec94@suse.cz/T/#m4e98ccae6f747e11f45e4d0726427ba2fef740eb