From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 Apr 2022 19:13:40 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH v2 1/8] mm/swap: remember PG_anon_exclusive via a swp pte bit
To: Vlastimil Babka, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes, Shakeel Butt,
 John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Yang Shi,
 "Kirill A. Shutemov", Matthew Wilcox, Jann Horn, Michal Hocko, Nadav Amit,
 Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile,
 Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang, Pedro Gomes,
 Oded Gabbay, Catalin Marinas, Will Deacon, Michael Ellerman,
 Benjamin Herrenschmidt, Paul Mackerras, Heiko Carstens, Vasily Gorbik,
 Alexander Gordeev, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, Gerald Schaefer, linux-mm@kvack.org, x86@kernel.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org
References: <20220329164329.208407-1-david@redhat.com> <20220329164329.208407-2-david@redhat.com>

On 20.04.22 19:10, Vlastimil Babka wrote:
> On 3/29/22 18:43, David Hildenbrand wrote:
>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>> it. We do this to keep fork() logic on swap entries easy and efficient:
>> for example, if we wouldn't clear it when unmapping, we'd have to look up
>> the page in the swapcache for each and every swap entry during fork() and
>> clear PG_anon_exclusive if set.
>>
>> Instead, we want to store that information directly in the swap pte,
>> protected by the page table lock, similarly to how we handle
>> SWP_MIGRATION_READ_EXCLUSIVE for migration entries. However, for actual
>> swap entries, we don't want to mess with the swap type (e.g., still one
>> bit) because it overcomplicates swap code.
>>
>> In try_to_unmap(), we already refuse to unmap if the page might be
>> pinned, because we must not lose PG_anon_exclusive on pinned pages ever.
>> Reliably checking for other unexpected references *before* completely
>> unmapping a page is unfortunately not really possible: THPs heavily
>> overcomplicate the situation. Once fully unmapped it's easier --
>> we, for example, make sure that there are no unexpected references
>> *after* unmapping a page before starting writeback on that page.
>>
>> So, we currently might end up unmapping a page and clearing
>> PG_anon_exclusive if that page has additional references, for example,
>> due to a FOLL_GET.
>>
>> do_swap_page() has to re-determine if a page is exclusive, which will
>> easily fail if there are other references on a page, most prominently
>> GUP references via FOLL_GET.
>> This can currently result in memory
>> corruptions when taking a FOLL_GET | FOLL_WRITE reference on a page even
>> when fork() is never involved: try_to_unmap() will succeed, and when
>> refaulting the page, it cannot be marked exclusive and will get replaced
>> by a copy in the page tables on the next write access, resulting in writes
>> via the GUP reference to the page being lost.
>>
>> In an ideal world, everybody who uses GUP and wants to modify page
>> content, such as O_DIRECT, would properly use FOLL_PIN. However, that
>> conversion will take a while. It's easier to fix what used to work in the
>> past (FOLL_GET | FOLL_WRITE) by remembering PG_anon_exclusive. In addition,
>> by remembering PG_anon_exclusive we can further reduce unnecessary COW
>> in some cases, so it's the natural thing to do.
>>
>> So let's transfer the PG_anon_exclusive information to the swap pte and
>> store it via an architecture-dependent pte bit; use that information when
>> restoring the swap pte in do_swap_page() and unuse_pte(). During fork(), we
>> simply have to clear the pte bit and are done.
>>
>> Of course, there is one corner case to handle: swap backends that don't
>> support concurrent page modifications while the page is under writeback.
>> Special-case these, and drop the exclusive marker. Add a comment explaining
>> why that is just fine (also, reuse_swap_page() would have done the same in
>> the past).
>>
>> In the future, we'll hopefully have all architectures support
>> __HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty
>> stubs and the define completely. Then, we can also convert
>> SWP_MIGRATION_READ_EXCLUSIVE. For architectures it's fairly easy to
>> support: either simply use a yet-unused pte bit that can be used for swap
>> entries, steal one from the arch type bits if they exceed 5, or steal one
>> from the offset bits.
>>
>> Note: R/O FOLL_GET references were never really reliable, especially
>> when taking one on a shared page and then writing to the page (e.g., GUP
>> after fork()). FOLL_GET references, including R/W ones, were never really
>> reliable once fork() was involved (e.g., GUP before fork(),
>> GUP during fork()). KSM steps back in case it stumbles over unexpected
>> references and is, therefore, fine.
>>
>> Signed-off-by: David Hildenbrand
>
> With the fixup as reported by Miaohe Lin
>
> Acked-by: Vlastimil Babka
>
> (sent a separate mm-commits mail to inquire about the fix going missing from
> mmotm)
>
> https://lore.kernel.org/mm-commits/c3195d8a-2931-0749-973a-1d04e4baec94@suse.cz/T/#m4e98ccae6f747e11f45e4d0726427ba2fef740eb

Yes, I saw that, thanks for catching that!
--
Thanks,

David / dhildenb
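
The scheme discussed in the quoted commit message boils down to stealing one
spare software bit in the non-present (swap) pte to remember PG_anon_exclusive,
clearing that bit during fork(), and consulting it when the pte is restored on
refault. Below is a minimal user-space sketch of that idea, not kernel code:
the bit layout (SWP_TYPE_BITS, SWP_EXCLUSIVE_BIT) and the helper names are
illustrative assumptions chosen for the example; in the kernel, per-architecture
helpers behind __HAVE_ARCH_PTE_SWP_EXCLUSIVE play this role.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Toy model of a swap pte: the low bits hold the swap type, the high bits
 * hold the swap offset, and one spare software bit remembers whether the
 * anonymous page was exclusive (PG_anon_exclusive) when it was unmapped.
 * All names and widths here are illustrative only.
 */
#define SWP_TYPE_BITS      5
#define SWP_EXCLUSIVE_BIT  (1ULL << SWP_TYPE_BITS)   /* the stolen spare bit */
#define SWP_OFFSET_SHIFT   (SWP_TYPE_BITS + 1)

typedef uint64_t swp_pte_t;

static swp_pte_t make_swp_pte(unsigned int type, uint64_t offset)
{
	return (swp_pte_t)type | (offset << SWP_OFFSET_SHIFT);
}

static swp_pte_t swp_pte_mkexclusive(swp_pte_t pte)
{
	return pte | SWP_EXCLUSIVE_BIT;
}

static swp_pte_t swp_pte_clear_exclusive(swp_pte_t pte)
{
	return pte & ~SWP_EXCLUSIVE_BIT;
}

static bool swp_pte_exclusive(swp_pte_t pte)
{
	return pte & SWP_EXCLUSIVE_BIT;
}

int main(void)
{
	/* Unmap time: the page was exclusive, remember that in the swap pte. */
	swp_pte_t parent_pte = swp_pte_mkexclusive(make_swp_pte(2, 0x1234));

	/* fork() time: the child must never treat the page as exclusive. */
	swp_pte_t child_pte = swp_pte_clear_exclusive(parent_pte);

	/* Refault time: only the parent may restore the page as exclusive. */
	assert(swp_pte_exclusive(parent_pte));
	assert(!swp_pte_exclusive(child_pte));

	printf("parent exclusive=%d child exclusive=%d\n",
	       (int)swp_pte_exclusive(parent_pte),
	       (int)swp_pte_exclusive(child_pte));
	return 0;
}

Compiled and run, this prints "parent exclusive=1 child exclusive=0",
mirroring the rule that only the process which held the page exclusively at
unmap time may have it restored as exclusive on refault, while the copy made
for the child at fork() time never can.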