From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 29 Aug 2022 21:25:16 +0000
In-Reply-To: <20220829212531.3184856-1-surenb@google.com>
References: <20220829212531.3184856-1-surenb@google.com>
Message-ID: <20220829212531.3184856-14-surenb@google.com>
Subject: [RFC PATCH 13/28] mm: conditionally mark VMA as locked in
 free_pgtables and unmap_page_range
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
 willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org,
 ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, paulmck@kernel.org,
 riel@surriel.com, luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
 david@redhat.com, dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
 kent.overstreet@linux.dev, rientjes@google.com, axelrasmussen@google.com,
 joelaf@google.com, minchan@google.com, surenb@google.com,
 kernel-team@android.com, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 x86@kernel.org, linux-kernel@vger.kernel.org

free_pgtables and unmap_page_range can be called with mmap_lock held for
write (e.g. in mmap_region), held for read (e.g. in madvise_pageout), or not
held at all (e.g. madvise_remove may drop mmap_lock before calling
vfs_fallocate, which ends up calling unmap_page_range).

Provide free_pgtables and unmap_page_range with an additional argument
indicating whether the VMAs they operate on should be marked as locked. The
parameter is set based on whether mmap_lock is held in write mode during the
call, which ensures no change in behavior between mmap_lock and per-VMA
locks. A condensed sketch of the resulting call-site convention follows the
patch.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h |  2 +-
 mm/internal.h      |  4 ++--
 mm/memory.c        | 32 +++++++++++++++++++++-----------
 mm/mmap.c          | 17 +++++++++--------
 mm/oom_kill.c      |  3 ++-
 5 files changed, 35 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 476bf936c5f0..dc72be923e5b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1874,7 +1874,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 void zap_page_range(struct vm_area_struct *vma, unsigned long address,
 		    unsigned long size);
 void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
-		unsigned long start, unsigned long end);
+		unsigned long start, unsigned long end, bool lock_vma);
 
 struct mmu_notifier_range;
 
diff --git a/mm/internal.h b/mm/internal.h
index 785409805ed7..e6c0f999e0cb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -85,14 +85,14 @@ bool __folio_end_writeback(struct folio *folio);
 void deactivate_file_folio(struct folio *folio);
 
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
-		unsigned long floor, unsigned long ceiling);
+		unsigned long floor, unsigned long ceiling, bool lock_vma);
 void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
 
 struct zap_details;
 void unmap_page_range(struct mmu_gather *tlb,
 			     struct vm_area_struct *vma,
 			     unsigned long addr, unsigned long end,
-			     struct zap_details *details);
+			     struct zap_details *details, bool lock_vma);
 
 void page_cache_ra_order(struct readahead_control *, struct file_ra_state *,
 		unsigned int order);
diff --git a/mm/memory.c b/mm/memory.c
index 4ba73f5aa8bb..9ac9944e8c62 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -403,7 +403,7 @@ void free_pgd_range(struct mmu_gather *tlb,
 }
 
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
-		unsigned long floor, unsigned long ceiling)
+		unsigned long floor, unsigned long ceiling, bool lock_vma)
 {
 	while (vma) {
 		struct vm_area_struct *next = vma->vm_next;
@@ -413,6 +413,8 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * Hide vma from rmap and truncate_pagecache before freeing
 		 * pgtables
 		 */
+		if (lock_vma)
+			vma_mark_locked(vma);
 		unlink_anon_vmas(vma);
 		unlink_file_vma(vma);
 
@@ -427,6 +429,8 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
+				if (lock_vma)
+					vma_mark_locked(vma);
 				unlink_anon_vmas(vma);
 				unlink_file_vma(vma);
 			}
@@ -1631,12 +1635,16 @@ static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
 void unmap_page_range(struct mmu_gather *tlb,
 			     struct vm_area_struct *vma,
 			     unsigned long addr, unsigned long end,
-			     struct zap_details *details)
+			     struct zap_details *details,
+			     bool lock_vma)
 {
 	pgd_t *pgd;
 	unsigned long next;
 
 	BUG_ON(addr >= end);
+	if (lock_vma)
+		vma_mark_locked(vma);
+
 	tlb_start_vma(tlb, vma);
 	pgd = pgd_offset(vma->vm_mm, addr);
 	do {
@@ -1652,7 +1660,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 static void unmap_single_vma(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long start_addr,
 		unsigned long end_addr,
-		struct zap_details *details)
+		struct zap_details *details, bool lock_vma)
 {
 	unsigned long start = max(vma->vm_start, start_addr);
 	unsigned long end;
@@ -1691,7 +1699,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 				i_mmap_unlock_write(vma->vm_file->f_mapping);
 			}
 		} else
-			unmap_page_range(tlb, vma, start, end, details);
+			unmap_page_range(tlb, vma, start, end, details, lock_vma);
 	}
 }
 
@@ -1715,7 +1723,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
  */
 void unmap_vmas(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, unsigned long start_addr,
-		unsigned long end_addr)
+		unsigned long end_addr, bool lock_vma)
 {
 	struct mmu_notifier_range range;
 	struct zap_details details = {
@@ -1728,7 +1736,8 @@ void unmap_vmas(struct mmu_gather *tlb,
 				start_addr, end_addr);
 	mmu_notifier_invalidate_range_start(&range);
 	for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
-		unmap_single_vma(tlb, vma, start_addr, end_addr, &details);
+		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
+				 lock_vma);
 	mmu_notifier_invalidate_range_end(&range);
 }
 
@@ -1753,7 +1762,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	update_hiwater_rss(vma->vm_mm);
 	mmu_notifier_invalidate_range_start(&range);
 	for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next)
-		unmap_single_vma(&tlb, vma, start, range.end, NULL);
+		unmap_single_vma(&tlb, vma, start, range.end, NULL, false);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
@@ -1768,7 +1777,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
  * The range must fit into one VMA.
 */
 static void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
-		unsigned long size, struct zap_details *details)
+		unsigned long size, struct zap_details *details, bool lock_vma)
 {
 	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
@@ -1779,7 +1788,7 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
 	mmu_notifier_invalidate_range_start(&range);
-	unmap_single_vma(&tlb, vma, address, range.end, details);
+	unmap_single_vma(&tlb, vma, address, range.end, details, lock_vma);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
@@ -1802,7 +1811,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 	    !(vma->vm_flags & VM_PFNMAP))
 		return;
 
-	zap_page_range_single(vma, address, size, NULL);
+	zap_page_range_single(vma, address, size, NULL, true);
 }
 EXPORT_SYMBOL_GPL(zap_vma_ptes);
 
@@ -3483,7 +3492,8 @@ static void unmap_mapping_range_vma(struct vm_area_struct *vma,
 		unsigned long start_addr, unsigned long end_addr,
 		struct zap_details *details)
 {
-	zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
+	zap_page_range_single(vma, start_addr, end_addr - start_addr, details,
+			      false);
 }
 
 static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
diff --git a/mm/mmap.c b/mm/mmap.c
index 121544fd90de..094678b4434b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -79,7 +79,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
 
 static void unmap_region(struct mm_struct *mm,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
-		unsigned long start, unsigned long end);
+		unsigned long start, unsigned long end, bool lock_vma);
 
 static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
 {
@@ -1866,7 +1866,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	vma->vm_file = NULL;
 
 	/* Undo any partial mapping done by a device driver. */
-	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
+	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end, true);
 	if (vm_flags & VM_SHARED)
 		mapping_unmap_writable(file->f_mapping);
 free_vma:
@@ -2626,7 +2626,7 @@ static void remove_vma_list(struct mm_struct *mm, struct vm_area_struct *vma)
  */
 static void unmap_region(struct mm_struct *mm,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
-		unsigned long start, unsigned long end)
+		unsigned long start, unsigned long end, bool lock_vma)
 {
 	struct vm_area_struct *next = vma_next(mm, prev);
 	struct mmu_gather tlb;
@@ -2634,9 +2634,10 @@ static void unmap_region(struct mm_struct *mm,
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm);
 	update_hiwater_rss(mm);
-	unmap_vmas(&tlb, vma, start, end);
+	unmap_vmas(&tlb, vma, start, end, lock_vma);
 	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
-				 next ? next->vm_start : USER_PGTABLES_CEILING);
+				 next ? next->vm_start : USER_PGTABLES_CEILING,
+				 lock_vma);
 	tlb_finish_mmu(&tlb);
 }
 
@@ -2849,7 +2850,7 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (downgrade)
 		mmap_write_downgrade(mm);
 
-	unmap_region(mm, vma, prev, start, end);
+	unmap_region(mm, vma, prev, start, end, !downgrade);
 
 	/* Fix up all other VM information */
 	remove_vma_list(mm, vma);
@@ -3129,8 +3130,8 @@ void exit_mmap(struct mm_struct *mm)
 	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
-	unmap_vmas(&tlb, vma, 0, -1);
-	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
+	unmap_vmas(&tlb, vma, 0, -1, true);
+	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING, true);
 	tlb_finish_mmu(&tlb);
 
 	/* Walk the list again, actually closing and freeing it. */
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 3c6cf9e3cd66..6ffa7c511aa3 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -549,7 +549,8 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 				ret = false;
 				continue;
 			}
-			unmap_page_range(&tlb, vma, range.start, range.end, NULL);
+			unmap_page_range(&tlb, vma, range.start, range.end,
+					 NULL, false);
 			mmu_notifier_invalidate_range_end(&range);
 			tlb_finish_mmu(&tlb);
 		}
-- 
2.37.2.672.g94769d06f0-goog
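
The condensed call-site sketch referenced in the commit message above: an
illustration assembled from the hunks in this patch, not additional patch
content. The comment annotations paraphrase the patch's locking rule — the
new lock_vma argument is true exactly when the caller holds mmap_lock for
write, so that VMAs are marked locked before teardown only when doing so is
safe.

	/* mmap_region(): undoing a partial mapping; mmap_lock is held for
	 * write here, so the VMAs are marked locked before being unmapped. */
	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end, true);

	/* __do_munmap(): mmap_lock may have just been downgraded to read
	 * mode, in which case the VMAs must not be marked. */
	if (downgrade)
		mmap_write_downgrade(mm);
	unmap_region(mm, vma, prev, start, end, !downgrade);

	/* exit_mmap(): final teardown runs with mmap_lock held for write,
	 * hence true for both the unmap and the page-table free. */
	unmap_vmas(&tlb, vma, 0, -1, true);
	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING, true);

	/* __oom_reap_task_mm(): the OOM reaper holds mmap_lock only for
	 * read, so it passes false and never marks the VMAs. */
	unmap_page_range(&tlb, vma, range.start, range.end, NULL, false);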