From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 20 Jan 2022 14:39:01 +0000
From: Matthew Wilcox
To: David Hildenbrand
Cc: "zhangliang (AG)", Andrew Morton, Linux-MM, Linux Kernel Mailing List, wangzhigang17@huawei.com, Linus Torvalds
Subject: Re: [PATCH] mm: reuse the unshared swapcache page in do_wp_page
Message-ID:
References: <20220113140318.11117-1-zhangliang5@huawei.com>
 <172ccfbb-7e24-db21-7d84-8c8d8c3805fd@redhat.com>
 <9cd7eee2-91fd-ddb8-e47d-e8585e5baa05@redhat.com>
 <747ff31c-6c9e-df6c-f14d-c43aa1c77b4a@redhat.com>
In-Reply-To: <747ff31c-6c9e-df6c-f14d-c43aa1c77b4a@redhat.com>

On Thu, Jan 20, 2022 at 03:15:37PM +0100, David Hildenbrand wrote:
> On 17.01.22 14:31, zhangliang (AG) wrote:
> > Sure, I will do that :)
>
> I'm polishing up / testing the patches and might send something out for
> discussion shortly. Just a note that my branch had a version with a wrong
> condition, which should be fixed now.
>
> I am still thinking about PTE-mapped THP. For these, we'll always have
> page_count() > 1, essentially corresponding to the number of still-mapped
> sub-pages.
>
> So if we end up with a R/O mapped part of a THP, we'll always have to COW
> and can never reuse, although it's really just a single process mapping
> the THP via PTEs.
>
> One approach would be to scan the currently locked page table for entries
> mapping this same page. If page_count() corresponds to that value, we know
> that only we are mapping the THP and there are no additional references.
> That would be a special case if we find an anon THP in do_wp_page().

Hm.  You're starting to optimise for some pretty weird cases at that point.
Anon THP is always going to start out aligned (and can be moved by mremap()).
Arguably it should be broken up if it's moved, so it can be reformed into aligned THPs by khugepaged. This is completely different from file-backed THPs, where misalignment might be considered normal (if unfortunate).
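
For reference, a rough, purely illustrative sketch of the scan David describes
above might look like the following.  The helper name thp_only_mapped_by_us()
is made up and not from any posted patch; it assumes the anon THP is naturally
aligned, that the PTE lock covering this page table is already held by the
caller (as it is in do_wp_page()), and it ignores swap/migration entries and
races with GUP or the swap cache -- a sketch of the idea, not a proposed
implementation:

	#include <linux/mm.h>
	#include <linux/huge_mm.h>

	/*
	 * Illustrative only: count how many PTEs in the page table covering
	 * this (assumed aligned) THP map sub-pages of the same compound page,
	 * then compare against page_count() to guess whether we are the only
	 * mapper.
	 */
	static bool thp_only_mapped_by_us(struct vm_fault *vmf, struct page *page)
	{
		struct page *head = compound_head(page);
		unsigned long addr = vmf->address & HPAGE_PMD_MASK;
		pte_t *start_pte, *pte;
		int i, our_mappings = 0;

		start_pte = pte_offset_map(vmf->pmd, addr);
		for (i = 0, pte = start_pte; i < HPAGE_PMD_NR; i++, pte++) {
			if (!pte_present(*pte))
				continue;
			if (compound_head(pfn_to_page(pte_pfn(*pte))) == head)
				our_mappings++;
		}
		pte_unmap(start_pte);

		/*
		 * If every reference to the compound page is accounted for by
		 * our own PTE mappings, nobody else can see it and reuse
		 * (rather than COW) would be safe -- modulo the caveats above.
		 */
		return page_count(head) == our_mappings;
	}

The return value would then gate reuse vs. COW for the PTE-mapped anon THP
special case, with all of the caveats noted above still to be dealt with.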