Date: Mon, 27 Feb 2023 14:06:28 -0800
To: mm-commits@vger.kernel.org, surenb@google.com, akpm@linux-foundation.org
From: Andrew Morton <akpm@linux-foundation.org>
Subject: + mm-conditionally-write-lock-vma-in-free_pgtables.patch added to mm-unstable branch
Message-Id: <20230227220629.70410C433EF@smtp.kernel.org>

The patch titled
     Subject: mm: conditionally write-lock VMA in free_pgtables
has been added to the -mm mm-unstable branch.  Its filename is
     mm-conditionally-write-lock-vma-in-free_pgtables.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-conditionally-write-lock-vma-in-free_pgtables.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Suren Baghdasaryan <surenb@google.com>
Subject: mm: conditionally write-lock VMA in free_pgtables
Date: Mon, 27 Feb 2023 09:36:18 -0800

Normally free_pgtables needs to lock the affected VMAs, except in the
case when the VMAs were already isolated under a VMA write-lock.
munmap() does just that: it isolates the VMAs while holding the
appropriate locks, then downgrades mmap_lock and drops the per-VMA locks
before freeing the page tables.  Add a parameter to free_pgtables for
such a scenario.
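As a rough sketch of the resulting calling convention (illustrative
only, based on the call sites changed in this patch; "floor" and
"ceiling" stand in for the real boundary arguments):

	/* exit_mmap(): mmap_lock is still held for writing, so ask
	 * free_pgtables() to write-lock each VMA before unlinking it. */
	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS,
		      USER_PGTABLES_CEILING, /* mm_wr_locked = */ true);

	/* munmap() path: the VMAs were already isolated and write-locked,
	 * and mmap_lock has been downgraded, so no further VMA locking is
	 * needed while the page tables are freed. */
	free_pgtables(&tlb, mt, vma, floor, ceiling,
		      /* mm_wr_locked = */ false);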
Link: https://lkml.kernel.org/r/20230227173632.3292573-20-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

--- a/mm/internal.h~mm-conditionally-write-lock-vma-in-free_pgtables
+++ a/mm/internal.h
@@ -105,7 +105,7 @@ void folio_activate(struct folio *folio)
 
 void free_pgtables(struct mmu_gather *tlb, struct maple_tree *mt,
 		   struct vm_area_struct *start_vma, unsigned long floor,
-		   unsigned long ceiling);
+		   unsigned long ceiling, bool mm_wr_locked);
 void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
 
 struct zap_details;
--- a/mm/memory.c~mm-conditionally-write-lock-vma-in-free_pgtables
+++ a/mm/memory.c
@@ -348,7 +348,7 @@ void free_pgd_range(struct mmu_gather *t
 
 void free_pgtables(struct mmu_gather *tlb, struct maple_tree *mt,
 		   struct vm_area_struct *vma, unsigned long floor,
-		   unsigned long ceiling)
+		   unsigned long ceiling, bool mm_wr_locked)
 {
 	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
 
@@ -366,6 +366,8 @@ void free_pgtables(struct mmu_gather *tl
 		 * Hide vma from rmap and truncate_pagecache before freeing
 		 * pgtables
 		 */
+		if (mm_wr_locked)
+			vma_start_write(vma);
 		unlink_anon_vmas(vma);
 		unlink_file_vma(vma);
 
@@ -380,6 +382,8 @@ void free_pgtables(struct mmu_gather *tl
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = mas_find(&mas, ceiling - 1);
+				if (mm_wr_locked)
+					vma_start_write(vma);
 				unlink_anon_vmas(vma);
 				unlink_file_vma(vma);
 			}
--- a/mm/mmap.c~mm-conditionally-write-lock-vma-in-free_pgtables
+++ a/mm/mmap.c
@@ -2152,7 +2152,8 @@ static void unmap_region(struct mm_struc
 	update_hiwater_rss(mm);
 	unmap_vmas(&tlb, mt, vma, start, end, mm_wr_locked);
 	free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
-		      next ? next->vm_start : USER_PGTABLES_CEILING);
+		      next ? next->vm_start : USER_PGTABLES_CEILING,
+		      mm_wr_locked);
 	tlb_finish_mmu(&tlb);
 }
 
@@ -3056,7 +3057,7 @@ void exit_mmap(struct mm_struct *mm)
 	mmap_write_lock(mm);
 	mt_clear_in_rcu(&mm->mm_mt);
 	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS,
-		      USER_PGTABLES_CEILING);
+		      USER_PGTABLES_CEILING, true);
 	tlb_finish_mmu(&tlb);
 
 	/*
_

Patches currently in -mm which might be from surenb@google.com are

mm-introduce-config_per_vma_lock.patch
mm-move-mmap_lock-assert-function-definitions.patch
mm-add-per-vma-lock-and-helper-functions-to-control-it.patch
mm-mark-vma-as-being-written-when-changing-vm_flags.patch
mm-mmap-move-vma_prepare-before-vma_adjust_trans_huge.patch
mm-khugepaged-write-lock-vma-while-collapsing-a-huge-page.patch
mm-mmap-write-lock-vmas-in-vma_prepare-before-modifying-them.patch
mm-mremap-write-lock-vma-while-remapping-it-to-a-new-address-range.patch
mm-write-lock-vmas-before-removing-them-from-vma-tree.patch
mm-conditionally-write-lock-vma-in-free_pgtables.patch
kernel-fork-assert-no-vma-readers-during-its-destruction.patch
mm-mmap-prevent-pagefault-handler-from-racing-with-mmu_notifier-registration.patch
mm-introduce-vma-detached-flag.patch
mm-introduce-lock_vma_under_rcu-to-be-used-from-arch-specific-code.patch
mm-fall-back-to-mmap_lock-if-vma-anon_vma-is-not-yet-set.patch
mm-add-fault_flag_vma_lock-flag.patch
mm-prevent-do_swap_page-from-handling-page-faults-under-vma-lock.patch
mm-prevent-userfaults-to-be-handled-under-per-vma-lock.patch
mm-introduce-per-vma-lock-statistics.patch
x86-mm-try-vma-lock-based-page-fault-handling-first.patch
arm64-mm-try-vma-lock-based-page-fault-handling-first.patch
mm-mmap-free-vm_area_struct-without-call_rcu-in-exit_mmap.patch
mm-separate-vma-lock-from-vm_area_struct.patch
per-vma-locks.patch