From: Zhenyu Ye
Subject: [RFC PATCH v5 6/8] mm: tlb: Pass struct mmu_gather to flush_hugetlb_tlb_range
Date: Tue, 31 Mar 2020 22:29:25 +0800
Message-ID: <20200331142927.1237-7-yezhenyu2@huawei.com>
In-Reply-To: <20200331142927.1237-1-yezhenyu2@huawei.com>
References: <20200331142927.1237-1-yezhenyu2@huawei.com>

Preparation for passing a struct mmu_gather down to flush_tlb_range(): add a
struct mmu_gather argument to flush_hugetlb_tlb_range() and set one up around
its callers. The gathered state will be consumed by later patches in this
series.
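With this change, a caller wraps the flush in a gather/finish pair, as
hugetlb_change_protection() does in the hunk below:

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, start, end);
	flush_hugetlb_tlb_range(&tlb, vma, start, end);
	tlb_finish_mmu(&tlb, start, end);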
Signed-off-by: Zhenyu Ye
---
 arch/powerpc/include/asm/book3s/64/tlbflush.h |  3 ++-
 mm/hugetlb.c                                  | 17 ++++++++++++-----
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
index 6445d179ac15..968f10ef3d51 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -57,7 +57,8 @@ static inline void flush_pmd_tlb_range(struct mmu_gather *tlb,
 }
 
 #define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+static inline void flush_hugetlb_tlb_range(struct mmu_gather *tlb,
+					   struct vm_area_struct *vma,
 					   unsigned long start,
 					   unsigned long end)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd8737a94bec..f913ce0b4831 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4441,7 +4441,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
  * ARCHes with special requirements for evicting HUGETLB backing TLB entries can
  * implement this.
  */
-#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
+#define flush_hugetlb_tlb_range(tlb, vma, addr, end)	\
+	flush_tlb_range(vma, addr, end)
 #endif
 
 unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
@@ -4455,6 +4456,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	unsigned long pages = 0;
 	bool shared_pmd = false;
 	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
 
 	/*
 	 * In the case of shared PMDs, the area to flush could be beyond
@@ -4520,10 +4522,15 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	 * and that page table be reused and filled with junk. If we actually
 	 * did unshare a page of pmds, flush the range corresponding to the pud.
 	 */
-	if (shared_pmd)
-		flush_hugetlb_tlb_range(vma, range.start, range.end);
-	else
-		flush_hugetlb_tlb_range(vma, start, end);
+	if (shared_pmd) {
+		tlb_gather_mmu(&tlb, mm, range.start, range.end);
+		flush_hugetlb_tlb_range(&tlb, vma, range.start, range.end);
+		tlb_finish_mmu(&tlb, range.start, range.end);
+	} else {
+		tlb_gather_mmu(&tlb, mm, start, end);
+		flush_hugetlb_tlb_range(&tlb, vma, start, end);
+		tlb_finish_mmu(&tlb, start, end);
+	}
 	/*
 	 * No need to call mmu_notifier_invalidate_range() we are downgrading
 	 * page table protection not changing it to point to a new page.
-- 
2.19.1
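One way the extra argument could later be consumed (illustrative sketch only,
not part of this patch; the helper name tlb_flush_stride() is invented for
this example, while the cleared_* bits are those already tracked by the
generic struct mmu_gather in <asm-generic/tlb.h>): an architecture back end
could derive the flush stride from the page-table levels recorded in the
gather instead of assuming base pages.

/*
 * Illustrative sketch, not part of this series: derive a flush stride
 * from the levels recorded in the mmu_gather.  tlb_flush_stride() is a
 * made-up name for this example.
 */
static inline unsigned long tlb_flush_stride(struct mmu_gather *tlb)
{
	if (tlb->cleared_puds)
		return PUD_SIZE;	/* e.g. 1G hugetlb pages */
	if (tlb->cleared_pmds)
		return PMD_SIZE;	/* e.g. 2M hugetlb pages */
	return PAGE_SIZE;		/* base pages */
}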