From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Axel Rasmussen, Andrew Morton, peterx@redhat.com, Mike Kravetz,
    Mike Rapoport, "Kirill A . Shutemov", Matthew Wilcox, Andrea Arcangeli
Subject: [PATCH v3 1/4] hugetlb: Pass vma into huge_pte_alloc() and huge_pmd_share()
Date: Thu, 18 Feb 2021 16:54:31 -0500
Message-Id: <20210218215434.10203-2-peterx@redhat.com>
In-Reply-To: <20210218215434.10203-1-peterx@redhat.com>
References: <20210218215434.10203-1-peterx@redhat.com>

This is preparatory work to let the per-architecture huge_pte_alloc()
behave differently depending on the attributes of the VMA.  Pass the
vma further down into huge_pmd_share() as well, so that its find_vma()
call can be dropped.
Suggested-by: Mike Kravetz
Reviewed-by: Mike Kravetz
Reviewed-by: Axel Rasmussen
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/mm/hugetlbpage.c   |  4 ++--
 arch/ia64/mm/hugetlbpage.c    |  3 ++-
 arch/mips/mm/hugetlbpage.c    |  4 ++--
 arch/parisc/mm/hugetlbpage.c  |  2 +-
 arch/powerpc/mm/hugetlbpage.c |  3 ++-
 arch/s390/mm/hugetlbpage.c    |  2 +-
 arch/sh/mm/hugetlbpage.c      |  2 +-
 arch/sparc/mm/hugetlbpage.c   |  1 +
 include/linux/hugetlb.h       |  5 +++--
 mm/hugetlb.c                  | 15 ++++++++-------
 mm/userfaultfd.c              |  2 +-
 11 files changed, 24 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 55ecf6de9ff7..6e3bcffe2837 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -252,7 +252,7 @@ void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
 	set_pte(ptep, pte);
 }
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgdp;
@@ -286,7 +286,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	} else if (sz == PMD_SIZE) {
 		if (IS_ENABLED(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) &&
 		    pud_none(READ_ONCE(*pudp)))
-			ptep = huge_pmd_share(mm, addr, pudp);
+			ptep = huge_pmd_share(mm, vma, addr, pudp);
 		else
 			ptep = (pte_t *)pmd_alloc(mm, pudp, addr);
 	} else if (sz == (CONT_PMD_SIZE)) {
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index b331f94d20ac..f993cb36c062 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -25,7 +25,8 @@ unsigned int hpage_shift = HPAGE_SHIFT_DEFAULT;
 EXPORT_SYMBOL(hpage_shift);
 
 pte_t *
-huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz)
+huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+	       unsigned long addr, unsigned long sz)
 {
 	unsigned long taddr = htlbpage_to_page(addr);
 	pgd_t *pgd;
diff --git a/arch/mips/mm/hugetlbpage.c b/arch/mips/mm/hugetlbpage.c
index b9f76f433617..7eaff5b07873 100644
--- a/arch/mips/mm/hugetlbpage.c
+++ b/arch/mips/mm/hugetlbpage.c
@@ -21,8 +21,8 @@
 #include
 #include
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr,
-		      unsigned long sz)
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
index d7ba014a7fbb..e141441bfa64 100644
--- a/arch/parisc/mm/hugetlbpage.c
+++ b/arch/parisc/mm/hugetlbpage.c
@@ -44,7 +44,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 }
 
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 8b3cc4d688e8..d57276b8791c 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -106,7 +106,8 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
  * At this point we do the placement change only for BOOK3S 64. This would
  * possibly work on other subarchs.
  */
-pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz)
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pg;
 	p4d_t *p4;
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index 3b5a4d25ca9b..da36d13ffc16 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -189,7 +189,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgdp;
diff --git a/arch/sh/mm/hugetlbpage.c b/arch/sh/mm/hugetlbpage.c
index 220d7bc43d2b..999ab5916e69 100644
--- a/arch/sh/mm/hugetlbpage.c
+++ b/arch/sh/mm/hugetlbpage.c
@@ -21,7 +21,7 @@
 #include
 #include
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index ad4b42f04988..97e0824fdbe7 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -280,6 +280,7 @@ unsigned long pmd_leaf_size(pmd_t pmd) { return 1UL << tte_to_shift(*(pte_t *)&p
 unsigned long pte_leaf_size(pte_t pte) { return 1UL << tte_to_shift(pte); }
 
 pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b5807f23caf8..a6113fa6d21d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -152,7 +152,8 @@ void hugetlb_fix_reserve_counts(struct inode *inode);
 extern struct mutex *hugetlb_fault_mutex_table;
 u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 
-pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
+pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pud_t *pud);
 
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
@@ -161,7 +162,7 @@ extern struct list_head huge_boot_pages;
 
 /* arch callbacks */
 
-pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz);
 pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
 		       unsigned long sz);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4bdb58ab14cb..07bb9bdc3282 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3807,7 +3807,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		src_pte = huge_pte_offset(src, addr, sz);
 		if (!src_pte)
 			continue;
-		dst_pte = huge_pte_alloc(dst, addr, sz);
+		dst_pte = huge_pte_alloc(dst, vma, addr, sz);
 		if (!dst_pte) {
 			ret = -ENOMEM;
 			break;
@@ -4544,7 +4544,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 */
 	mapping = vma->vm_file->f_mapping;
 	i_mmap_lock_read(mapping);
-	ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));
+	ptep = huge_pte_alloc(mm, vma, haddr, huge_page_size(h));
 	if (!ptep) {
 		i_mmap_unlock_read(mapping);
 		return VM_FAULT_OOM;
@@ -5334,9 +5334,9 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
  * if !vma_shareable check at the beginning of the routine. i_mmap_rwsem is
  * only required for subsequent processing.
  */
-pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pud_t *pud)
 {
-	struct vm_area_struct *vma = find_vma(mm, addr);
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
 			vma->vm_pgoff;
@@ -5414,7 +5414,8 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 }
 #define want_pmd_share()	(1)
 #else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
-pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pud_t *pud)
 {
 	return NULL;
 }
@@ -5433,7 +5434,7 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
 #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
-pte_t *huge_pte_alloc(struct mm_struct *mm,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
@@ -5452,7 +5453,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	} else {
 		BUG_ON(sz != PMD_SIZE);
 		if (want_pmd_share() && pud_none(*pud))
-			pte = huge_pmd_share(mm, addr, pud);
+			pte = huge_pmd_share(mm, vma, addr, pud);
 		else
 			pte = (pte_t *)pmd_alloc(mm, pud, addr);
 	}
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9a3d451402d7..063cbb17e8d8 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -290,7 +290,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		err = -ENOMEM;
-		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
+		dst_pte = huge_pte_alloc(dst_mm, dst_vma, dst_addr, vma_hpagesize);
 		if (!dst_pte) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			i_mmap_unlock_read(mapping);
-- 
2.26.2