Date: Thu, 03 Feb 2022 20:49:20 -0800
To: ziy@nvidia.com, will@kernel.org, weixugc@google.com, songmuchun@bytedance.com,
 rppt@kernel.org, rientjes@google.com, pjt@google.com, mingo@redhat.com,
 jirislaby@kernel.org, hughd@google.com, hpa@zytor.com, gthelen@google.com,
 dave.hansen@linux.intel.com, anshuman.khandual@arm.com, aneesh.kumar@linux.ibm.com,
 pasha.tatashin@soleen.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org, akpm@linux-foundation.org
From: Andrew Morton
In-Reply-To: <20220203204836.88dcebe504f440686cc63a60@linux-foundation.org>
Subject: [patch 04/10] mm/khugepaged: unify collapse pmd clear, flush and free
Message-Id: <20220204044920.8B40AC340F0@smtp.kernel.org>

From: Pasha Tatashin
Subject: mm/khugepaged: unify collapse pmd clear, flush and free

Unify the code that flushes, clears the pmd entry, and
frees the PTE table level into a new function, collapse_and_free_pmd().  This
cleanup is useful, as in the next patch we will add another call to this
function, to iterate through the PTEs prior to freeing the level for the page
table check.

Link: https://lkml.kernel.org/r/20220131203249.2832273-4-pasha.tatashin@soleen.com
Signed-off-by: Pasha Tatashin
Acked-by: David Rientjes
Cc: Aneesh Kumar K.V
Cc: Anshuman Khandual
Cc: Dave Hansen
Cc: Greg Thelen
Cc: H. Peter Anvin
Cc: Hugh Dickins
Cc: Ingo Molnar
Cc: Jiri Slaby
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Paul Turner
Cc: Wei Xu
Cc: Will Deacon
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

 mm/khugepaged.c |   34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

--- a/mm/khugepaged.c~mm-khugepaged-unify-collapse-pmd-clear-flush-and-free
+++ a/mm/khugepaged.c
@@ -1416,6 +1416,19 @@ static int khugepaged_add_pte_mapped_thp
 	return 0;
 }
 
+static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
+				  unsigned long addr, pmd_t *pmdp)
+{
+	spinlock_t *ptl;
+	pmd_t pmd;
+
+	ptl = pmd_lock(vma->vm_mm, pmdp);
+	pmd = pmdp_collapse_flush(vma, addr, pmdp);
+	spin_unlock(ptl);
+	mm_dec_nr_ptes(mm);
+	pte_free(mm, pmd_pgtable(pmd));
+}
+
 /**
  * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at
  * address haddr.
@@ -1433,7 +1446,7 @@ void collapse_pte_mapped_thp(struct mm_s
 	struct vm_area_struct *vma = find_vma(mm, haddr);
 	struct page *hpage;
 	pte_t *start_pte, *pte;
-	pmd_t *pmd, _pmd;
+	pmd_t *pmd;
 	spinlock_t *ptl;
 	int count = 0;
 	int i;
@@ -1509,12 +1522,7 @@ void collapse_pte_mapped_thp(struct mm_s
 	}
 
 	/* step 4: collapse pmd */
-	ptl = pmd_lock(vma->vm_mm, pmd);
-	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
-	spin_unlock(ptl);
-	mm_dec_nr_ptes(mm);
-	pte_free(mm, pmd_pgtable(_pmd));
-
+	collapse_and_free_pmd(mm, vma, haddr, pmd);
 drop_hpage:
 	unlock_page(hpage);
 	put_page(hpage);
@@ -1552,7 +1560,7 @@ static void retract_page_tables(struct a
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long addr;
-	pmd_t *pmd, _pmd;
+	pmd_t *pmd;
 
 	i_mmap_lock_write(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
@@ -1591,14 +1599,8 @@ static void retract_page_tables(struct a
 		 * reverse order.  Trylock is a way to avoid deadlock.
 		 */
 		if (mmap_write_trylock(mm)) {
-			if (!khugepaged_test_exit(mm)) {
-				spinlock_t *ptl = pmd_lock(mm, pmd);
-				/* assume page table is clear */
-				_pmd = pmdp_collapse_flush(vma, addr, pmd);
-				spin_unlock(ptl);
-				mm_dec_nr_ptes(mm);
-				pte_free(mm, pmd_pgtable(_pmd));
-			}
+			if (!khugepaged_test_exit(mm))
+				collapse_and_free_pmd(mm, vma, addr, pmd);
 			mmap_write_unlock(mm);
 		} else {
 			/* Try again later */
_
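
[Editorial aside, not part of the patch] The ordering the new helper captures (clear the entry under the pmd lock, drop the lock, then do the accounting and the actual free) is the point of the consolidation.  For readers outside the kernel tree, a minimal userspace sketch of that shape follows; every name in it (mm_like, detach_and_free_table, and so on) is a hypothetical stand-in for mm_struct, pmd_lock(), mm_dec_nr_ptes() and pte_free(), and it illustrates only the lock/detach/unlock/free ordering, not pmd or TLB semantics.

/*
 * Minimal userspace sketch of the ordering collapse_and_free_pmd()
 * captures: detach the table under a lock, then account and free it
 * after the lock is dropped.  All names are hypothetical stand-ins;
 * this is not kernel code.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct table {
	int nr_entries;
};

struct mm_like {
	pthread_mutex_t lock;	/* stand-in for pmd_lock() */
	struct table *table;	/* stand-in for the PTE table the pmd points at */
	long nr_ptes;		/* stand-in for the mm_dec_nr_ptes() counter */
};

/* One helper replaces the clear, unlock, account, free sequence that
 * would otherwise be open-coded at every call site. */
static void detach_and_free_table(struct mm_like *mm)
{
	struct table *old;

	pthread_mutex_lock(&mm->lock);
	old = mm->table;	/* clear the entry while the lock is held */
	mm->table = NULL;
	pthread_mutex_unlock(&mm->lock);

	mm->nr_ptes--;		/* accounting, like mm_dec_nr_ptes() */
	free(old);		/* free the detached table, like pte_free() */
}

int main(void)
{
	struct mm_like mm = { .lock = PTHREAD_MUTEX_INITIALIZER, .nr_ptes = 1 };

	mm.table = calloc(1, sizeof(*mm.table));
	if (!mm.table)
		return 1;

	detach_and_free_table(&mm);
	printf("tables accounted after collapse: %ld\n", mm.nr_ptes);
	return 0;
}

Because the entry is cleared while the lock is held, a concurrent walker sees either the old table or no table, never one that has already been freed; the accounting and the free can then run after the unlock.  This mirrors the patch itself, where pmdp_collapse_flush() runs under pmd_lock() and pte_free() only after spin_unlock().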