Message-ID: <2ae0a409-3d6d-9f6a-09e8-2f6867a4069a@redhat.com>
Date: Wed, 13 Apr 2022 18:39:45 +0200
To: Vlastimil Babka, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
 Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport,
 Yang Shi, Kirill A. Shutemov, Matthew Wilcox, Jann Horn, Michal Hocko,
 Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
 Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang,
 Pedro Gomes, Oded Gabbay, linux-mm@kvack.org
References: <20220329160440.193848-1-david@redhat.com>
 <20220329160440.193848-13-david@redhat.com>
 <012e3889-563b-e7fc-c2e3-e7a6373a55ac@suse.cz>
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH v3 12/16] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
In-Reply-To: <012e3889-563b-e7fc-c2e3-e7a6373a55ac@suse.cz>

On 13.04.22 18:28, Vlastimil Babka wrote:
> On 3/29/22 18:04, David Hildenbrand wrote:
>> Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as
>> exclusive, and use that information to make GUP pins reliable and stay
>> consistent with the page mapped into the page table even if the
>> page table entry gets write-protected.
>>
>> With that information at hand, we can extend our COW logic to always
>> reuse anonymous pages that are exclusive. For anonymous pages that
>> might be shared, the existing logic applies.
>>
>> As already documented, PG_anon_exclusive is usually only expressive in
>> combination with a page table entry. Especially PTE vs. PMD-mapped
>> anonymous pages require more thought, some examples: due to mremap() we
>> can easily have a single compound page PTE-mapped into multiple page
>> tables exclusively in a single process -- multiple page table locks apply.
>> Further, due to MADV_WIPEONFORK we might not necessarily write-protect
>> all PTEs, and only some subpages might be pinned. Long story short: once
>> PTE-mapped, we have to track information about exclusivity per sub-page,
>> but until then, we can just track it for the compound page in the head
>> page and don't have to update a whole bunch of subpages all of the time
>> for a simple PMD mapping of a THP.
>>
>> For simplicity, this commit mostly talks about "anonymous pages", while
>> for THP it's actually "the part of an anonymous folio referenced via
>> a page table entry".
>>
>> To not spill PG_anon_exclusive code all over the mm code-base, we let
>> the anon rmap code handle all PG_anon_exclusive logic it can easily
>> handle.
>>
>> If a writable, present page table entry points at an anonymous (sub)page,
>> that (sub)page must be PG_anon_exclusive. If GUP wants to take a reliable
>> pin (FOLL_PIN) on an anonymous page referenced via a present
>> page table entry, it must only pin if PG_anon_exclusive is set for the
>> mapped (sub)page.
>>
>> This commit doesn't adjust GUP, so this is only implicitly handled for
>> FOLL_WRITE; follow-up commits will teach GUP to also respect it for
>> FOLL_PIN without !FOLL_WRITE, to make all GUP pins of anonymous pages
>
> without FOLL_WRITE ?

Indeed, thanks.

>
>> fully reliable.
>
>
>> @@ -202,11 +203,26 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
>>  	return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE);
>>  }
>>
>> +static inline int is_readable_migration_entry(swp_entry_t entry)
>> +{
>> +	return unlikely(swp_type(entry) == SWP_MIGRATION_READ);
>> +}
>> +
>> +static inline int is_readable_exclusive_migration_entry(swp_entry_t entry)
>> +{
>> +	return unlikely(swp_type(entry) == SWP_MIGRATION_READ_EXCLUSIVE);
>> +}
>
> This one seems to be missing a !CONFIG_MIGRATION counterpart. Although the
> only caller __split_huge_pmd_locked() probably indirectly only exists with
> CONFIG_MIGRATION so it's not an immediate issue. (THP selects COMPACTION
> selects MIGRATION)

So far no builds have bailed out. And yes, I think it's for the reason
stated: THP without compaction would be a lost bet.
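If a caller outside of THP/CONFIG_MIGRATION ever shows up, I'd assume a
stub following the pattern of the existing !CONFIG_MIGRATION helpers in
include/linux/swapops.h would be all that's needed -- untested sketch only:

/* Hypothetical !CONFIG_MIGRATION counterpart, mirroring the existing stubs. */
static inline int is_readable_exclusive_migration_entry(swp_entry_t entry)
{
	return 0;
}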
>
>
>> @@ -3035,10 +3083,19 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>>
>>  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>>  	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>> +
>> +	anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
>> +	if (anon_exclusive && page_try_share_anon_rmap(page)) {
>> +		set_pmd_at(mm, address, pvmw->pmd, pmdval);
>> +		return;
>
> I am admittedly not too familiar with this code, but it looks like this
> means we fail to migrate the THP, right? But we don't seem to be telling
> the caller, which is try_to_migrate_one(), so it will continue and not
> terminate the walk and return false?

Right, we're not returning "false". Returning "false" would be an
optimization to make rmap_walk_anon() fail faster.

But, after all, the THP is exclusive (-> single mapping), so
anon_vma_interval_tree_foreach() would most probably not have a lot of
work to do either way, I'd assume?

In any case, once we return from try_to_migrate(), the page will still
be mapped.

-- 
Thanks,

David / dhildenb