From: Nadav Amit
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Nadav Amit, Andrea Arcangeli,
	Andrew Cooper, Andrew Morton, Andy Lutomirski, Dave Hansen,
	Peter Xu, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao,
	Nick Piggin, x86@kernel.org
Subject: [PATCH v2 4/5] mm/mprotect: use mmu_gather
Date: Thu, 21 Oct 2021 05:21:11 -0700
Message-Id: <20211021122112.592634-5-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211021122112.592634-1-namit@vmware.com>
References: <20211021122112.592634-1-namit@vmware.com>
MIME-Version: 1.0

From: Nadav Amit

change_pXX_range() currently does not use mmu_gather, but instead
implements its own deferred TLB flushing scheme. This both complicates
the code, as developers need to be aware of different invalidation
schemes, and prevents opportunities to avoid TLB flushes or to perform
them at finer granularity.

The use of mmu_gather for modified PTEs has benefits in various
scenarios even if pages are not released. For instance, if only a
single page out of a range of many pages needs to be flushed, only that
page is flushed. If a THP page is flushed, on x86 a single TLB invlpg
instruction can be used instead of 512 instructions (or a full TLB
flush, which Linux would actually use by default). mprotect() over
multiple VMAs then requires only a single flush.

Use mmu_gather in change_pXX_range(). As the pages are not released,
only record the flushed range using tlb_flush_pXX_range(). Handle THP
similarly, and get rid of flush_cache_range(), which becomes redundant
since tlb_start_vma() calls it when needed.
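
For readers less familiar with the mmu_gather API, the shape of the
conversion is roughly the following. This is an illustrative sketch
only, not part of the patch: the function name sketch_change_prot() and
the simplified per-page loop are made up for exposition, and the real
logic lives in change_pte_range() and its callers.

#include <linux/mm.h>
#include <asm/tlb.h>

/* Sketch of the deferred-flush pattern this patch adopts (simplified). */
static void sketch_change_prot(struct vm_area_struct *vma,
			       unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	unsigned long addr;

	tlb_gather_mmu(&tlb, vma->vm_mm);	/* start batching invalidations */
	tlb_change_page_size(&tlb, PAGE_SIZE);	/* granularity being tracked */
	tlb_start_vma(&tlb, vma);		/* calls flush_cache_range() when needed */

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		/* ... modify the protection bits of the PTE at @addr ... */

		/* only record the range; no TLB flush is issued here */
		tlb_flush_pte_range(&tlb, addr, PAGE_SIZE);
	}

	tlb_end_vma(&tlb, vma);
	tlb_finish_mmu(&tlb);	/* one flush covering everything recorded above */
}

The key point is that tlb_flush_pte_range() only records the address
range in the mmu_gather; the actual invalidation is deferred until the
batch is finished, which can then flush a single page, a range, or the
whole TLB depending on what was accumulated.
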
Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Xu
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 fs/exec.c               |  6 ++-
 include/linux/huge_mm.h |  5 ++-
 include/linux/mm.h      |  5 ++-
 mm/huge_memory.c        | 10 ++++-
 mm/mprotect.c           | 93 ++++++++++++++++++++++-------------------
 mm/userfaultfd.c        |  6 ++-
 6 files changed, 75 insertions(+), 50 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 5a7a07dfdc81..7f8609bbc6b3 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -752,6 +752,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	unsigned long stack_size;
 	unsigned long stack_expand;
 	unsigned long rlim_stack;
+	struct mmu_gather tlb;
 
 #ifdef CONFIG_STACK_GROWSUP
 	/* Limit stack size */
@@ -806,8 +807,11 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	vm_flags |= mm->def_flags;
 	vm_flags |= VM_STACK_INCOMPLETE_SETUP;
 
-	ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
+	tlb_gather_mmu(&tlb, mm);
+	ret = mprotect_fixup(&tlb, vma, &prev, vma->vm_start, vma->vm_end,
 			vm_flags);
+	tlb_finish_mmu(&tlb);
+
 	if (ret)
 		goto out_unlock;
 	BUG_ON(prev != vma);
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f280f33ff223..a9b6e03e9c4c 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -36,8 +36,9 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pud_t *pud, unsigned long addr);
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		   unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
-int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
-		    pgprot_t newprot, unsigned long cp_flags);
+int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags);
 vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
 				   pgprot_t pgprot, bool write);
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 00bb2d938df4..f46bab158560 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2001,10 +2001,11 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 #define  MM_CP_UFFD_WP_ALL (MM_CP_UFFD_WP | \
 			    MM_CP_UFFD_WP_RESOLVE)
 
-extern unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+extern unsigned long change_protection(struct mmu_gather *tlb,
+			      struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
 			      unsigned long cp_flags);
-extern int mprotect_fixup(struct vm_area_struct *vma,
+extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			  struct vm_area_struct **pprev, unsigned long start,
 			  unsigned long end, unsigned long newflags);
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 435da011b1a2..f5d0357a25ce 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1720,8 +1720,9 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
  *      or if prot_numa but THP migration is not supported
  *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
  */
-int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, pgprot_t newprot, unsigned long cp_flags)
+int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *ptl;
@@ -1732,6 +1733,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
+
 	if (prot_numa && !thp_migration_supported())
 		return 1;
 
@@ -1817,6 +1820,9 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 	ret = HPAGE_PMD_NR;
 	set_pmd_at(mm, addr, pmd, entry);
+
+	tlb_flush_pmd_range(tlb, addr, HPAGE_PMD_SIZE);
+
 	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(entry));
 unlock:
 	spin_unlock(ptl);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 883e2cc85cad..0f5c87af5c60 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,12 +32,13 @@
 #include
 #include
 #include
+#include <asm/tlb.h>
 
 #include "internal.h"
 
-static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_pte_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
@@ -48,6 +49,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 
+	tlb_change_page_size(tlb, PAGE_SIZE);
+
 	/*
 	 * Can be called with only the mmap_lock for reading by
 	 * prot_numa so we must check the pmd isn't constantly
@@ -138,6 +141,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
@@ -219,9 +223,9 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
-static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
-		pud_t *pud, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -261,8 +265,12 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			if (next - addr != HPAGE_PMD_SIZE) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			} else {
-				int nr_ptes = change_huge_pmd(vma, pmd, addr,
-							      newprot, cp_flags);
+				/*
+				 * change_huge_pmd() does not defer TLB flushes,
+				 * so no need to propagate the tlb argument.
+				 */
+				int nr_ptes = change_huge_pmd(tlb, vma, pmd,
+						addr, newprot, cp_flags);
 
 				if (nr_ptes) {
 					if (nr_ptes == HPAGE_PMD_NR) {
@@ -276,8 +284,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			}
 			/* fall through, the trans huge pmd just split */
 		}
-		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
-					      cp_flags);
+		this_pages = change_pte_range(tlb, vma, pmd, addr, next,
+					      newprot, cp_flags);
 		pages += this_pages;
 next:
 		cond_resched();
@@ -291,9 +299,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct vm_area_struct *vma,
-		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pud_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -304,16 +312,16 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		pages += change_pmd_range(vma, pud, addr, next, newprot,
+		pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
 					  cp_flags);
 	} while (pud++, addr = next, addr != end);
 
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
-		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -324,44 +332,40 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		next = p4d_addr_end(addr, end);
 		if (p4d_none_or_clear_bad(p4d))
 			continue;
-		pages += change_pud_range(vma, p4d, addr, next, newprot,
+		pages += change_pud_range(tlb, vma, p4d, addr, next, newprot,
 					  cp_flags);
 	} while (p4d++, addr = next, addr != end);
 
 	return pages;
 }
 
-static unsigned long change_protection_range(struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_protection_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	unsigned long next;
-	unsigned long start = addr;
 	unsigned long pages = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
-	flush_cache_range(vma, addr, end);
-	inc_tlb_flush_pending(mm);
+	tlb_start_vma(tlb, vma);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		pages += change_p4d_range(vma, pgd, addr, next, newprot,
+		pages += change_p4d_range(tlb, vma, pgd, addr, next, newprot,
 					  cp_flags);
 	} while (pgd++, addr = next, addr != end);
 
-	/* Only flush the TLB if we actually modified any entries: */
-	if (pages)
-		flush_tlb_range(vma, start, end);
-	dec_tlb_flush_pending(mm);
+	tlb_end_vma(tlb, vma);
 
 	return pages;
 }
 
-unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+unsigned long change_protection(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, pgprot_t newprot,
 		unsigned long cp_flags)
 {
@@ -372,7 +376,7 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(vma, start, end, newprot);
 	else
-		pages = change_protection_range(vma, start, end, newprot,
+		pages = change_protection_range(tlb, vma, start, end, newprot,
 						cp_flags);
 
 	return pages;
@@ -406,8 +410,9 @@ static const struct mm_walk_ops prot_none_walk_ops = {
 };
 
 int
-mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
-	unsigned long start, unsigned long end, unsigned long newflags)
+mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
+	       struct vm_area_struct **pprev, unsigned long start,
+	       unsigned long end, unsigned long newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long oldflags = vma->vm_flags;
@@ -494,7 +499,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
 	vma_set_page_prot(vma);
 
-	change_protection(vma, start, end, vma->vm_page_prot,
+	change_protection(tlb, vma, start, end, vma->vm_page_prot,
 			  dirty_accountable ? MM_CP_DIRTY_ACCT : 0);
 
 	/*
@@ -528,6 +533,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
 	const bool rier = (current->personality & READ_IMPLIES_EXEC) &&
 				(prot & PROT_READ);
+	struct mmu_gather tlb;
 
 	start = untagged_addr(start);
 
@@ -584,6 +590,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	if (start > vma->vm_start)
 		prev = vma;
 
+	tlb_gather_mmu(&tlb, current->mm);
 	for (nstart = start ; ; ) {
 		unsigned long mask_off_old_flags;
 		unsigned long newflags;
@@ -610,18 +617,18 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
 			error = -EACCES;
-			goto out;
+			goto out_tlb;
 		}
 
 		/* Allow architectures to sanity-check the new flags */
 		if (!arch_validate_flags(newflags)) {
 			error = -EINVAL;
-			goto out;
+			goto out_tlb;
 		}
 
 		error = security_file_mprotect(vma, reqprot, prot);
 		if (error)
-			goto out;
+			goto out_tlb;
 
 		tmp = vma->vm_end;
 		if (tmp > end)
@@ -630,27 +637,29 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		if (vma->vm_ops && vma->vm_ops->mprotect) {
 			error = vma->vm_ops->mprotect(vma, nstart, tmp, newflags);
 			if (error)
-				goto out;
+				goto out_tlb;
 		}
 
-		error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
+		error = mprotect_fixup(&tlb, vma, &prev, nstart, tmp, newflags);
 		if (error)
-			goto out;
+			goto out_tlb;
 
 		nstart = tmp;
 
 		if (nstart < prev->vm_end)
 			nstart = prev->vm_end;
 		if (nstart >= end)
-			goto out;
+			goto out_tlb;
 
 		vma = prev->vm_next;
 		if (!vma || vma->vm_start != nstart) {
 			error = -ENOMEM;
-			goto out;
+			goto out_tlb;
 		}
 		prot = reqprot;
 	}
+out_tlb:
+	tlb_finish_mmu(&tlb);
 out:
 	mmap_write_unlock(current->mm);
 	return error;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index ac6f036298cd..15a20bb35868 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <asm/tlb.h>
 #include "internal.h"
 
 static __always_inline
@@ -674,6 +675,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 			atomic_t *mmap_changing)
 {
 	struct vm_area_struct *dst_vma;
+	struct mmu_gather tlb;
 	pgprot_t newprot;
 	int err;
 
@@ -715,8 +717,10 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	else
 		newprot = vm_get_page_prot(dst_vma->vm_flags);
 
-	change_protection(dst_vma, start, start + len, newprot,
+	tlb_gather_mmu(&tlb, dst_mm);
+	change_protection(&tlb, dst_vma, start, start + len, newprot,
 			  enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
+	tlb_finish_mmu(&tlb);
 
 	err = 0;
 out_unlock:
-- 
2.25.1