From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 5/6] tlb: mmu_gather: Introduce tlb_gather_mmu_fullmm()
Date: Fri, 20 Nov 2020 14:35:56 +0000
Message-Id: <20201120143557.6715-6-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120143557.6715-1-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org>
Cc: Yu Zhao, Anshuman Khandual, Peter Zijlstra, kernel-team@android.com,
    Linus Torvalds, linux-mm@kvack.org, Minchan Kim, Catalin Marinas,
    Will Deacon, linux-arm-kernel@lists.infradead.org

Passing the range '0, -1' to tlb_gather_mmu() sets the 'fullmm' flag,
which indicates that the mm_struct being operated on is going away. In
this case, some architectures (such as arm64) can elide TLB invalidation
by ensuring that the TLB tag (ASID) associated with this mm is not
immediately reclaimed. Although this behaviour is documented in
asm-generic/tlb.h, it's subtle and easily missed. Consequently, the
/proc walker for manipulating the young and soft-dirty bits passes this
range regardless.

Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for
the entire mm and WARN() if tlb_gather_mmu() is called with an 'end'
address greater than TASK_SIZE.

Signed-off-by: Will Deacon <will@kernel.org>
---
 fs/proc/task_mmu.c        |  2 +-
 include/asm-generic/tlb.h |  6 ++++--
 include/linux/mm_types.h  |  1 +
 mm/mmap.c                 |  2 +-
 mm/mmu_gather.c           | 16 ++++++++++++++--
 5 files changed, 21 insertions(+), 6 deletions(-)
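As a quick illustration for reviewers (not part of the patch, and placed
here where git-am ignores it), this is roughly how a full-mm teardown
call site looks with the new helper. The function and the surrounding
comments are a made-up sketch; the real VMA walk and locking are elided:

#include <asm/tlb.h>	/* struct mmu_gather, tlb_gather_mmu*() */

static void teardown_sketch(struct mm_struct *mm)
{
	struct mmu_gather tlb;

	/*
	 * Previously: tlb_gather_mmu(&tlb, mm, 0, -1);
	 * The magic '0, -1' range implicitly set tlb->fullmm, and
	 * would now trip the new WARN_ON(end > TASK_SIZE) check in
	 * tlb_gather_mmu().
	 */
	tlb_gather_mmu_fullmm(&tlb, mm);	/* sets tlb->fullmm explicitly */

	/* ... unmap_vmas()/free_pgtables() would run here ... */

	tlb_finish_mmu(&tlb);	/* final flush and free of queued pages */
}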
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3308292ee5c5..a76d339b5754 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			count = -EINTR;
 			goto out_mm;
 		}
-		tlb_gather_mmu(&tlb, mm, 0, -1);
+		tlb_gather_mmu_fullmm(&tlb, mm);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			for (vma = mm->mmap; vma; vma = vma->vm_next) {
 				if (!(vma->vm_flags & VM_SOFTDIRTY))
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 6661ee1cff47..2c68a545ffa7 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -46,7 +46,9 @@
  *
  * The mmu_gather API consists of:
  *
- *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *  - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_finish_mmu()
+ *
+ *    start and finish a mmu_gather
  *
  *    Finish in particular will issue a (final) TLB invalidate and free
  *    all (remaining) queued pages.
@@ -91,7 +93,7 @@
  *
  *  - mmu_gather::fullmm
  *
- *    A flag set by tlb_gather_mmu() to indicate we're going to free
+ *    A flag set by tlb_gather_mmu_fullmm() to indicate we're going to free
  *    the entire mm; this allows a number of optimizations.
  *
  *    - We can ignore tlb_{start,end}_vma(); because we don't
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7b90058a62be..42231729affe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -585,6 +585,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 			   unsigned long start, unsigned long end);
+extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
 extern void tlb_finish_mmu(struct mmu_gather *tlb);
 
 static inline void init_tlb_flush_pending(struct mm_struct *mm)
diff --git a/mm/mmap.c b/mm/mmap.c
index 6d94b2ee9c45..4b2809fbbd4a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3216,7 +3216,7 @@ void exit_mmap(struct mm_struct *mm)
 
 	lru_add_drain();
 	flush_cache_mm(mm);
-	tlb_gather_mmu(&tlb, mm, 0, -1);
+	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index b0be5a7aa08f..87b48444e7e5 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -261,8 +261,8 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
  * respectively when @mm is without users and we're going to destroy
  * the full address space (exit/execve).
  */
-void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		    unsigned long start, unsigned long end)
+static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+			     unsigned long start, unsigned long end)
 {
 	tlb->mm = mm;
 
@@ -287,6 +287,18 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	inc_tlb_flush_pending(tlb->mm);
 }
 
+void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
+		    unsigned long start, unsigned long end)
+{
+	WARN_ON(end > TASK_SIZE);
+	__tlb_gather_mmu(tlb, mm, start, end);
+}
+
+void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm)
+{
+	__tlb_gather_mmu(tlb, mm, 0, -1);
+}
+
 /**
  * tlb_finish_mmu - finish an mmu_gather structure
  * @tlb: the mmu_gather structure to finish
-- 
2.29.2.454.gaff20da3a2-goog
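For readers wondering what the 'fullmm' flag buys on the architecture
side, here is a hedged sketch of the optimisation the commit message
describes. It is modelled on the arm64 behaviour but uses made-up names
(example_tlb_flush(), example_flush_tlb_range()); it is not the real
arm64 implementation:

#include <asm/tlb.h>	/* struct mmu_gather */

static inline void example_tlb_flush(struct mmu_gather *tlb)
{
	/*
	 * fullmm means the whole address space is going away: rather
	 * than invalidating the TLB, the ASID for this mm can simply
	 * be retired and not handed out again until it is recycled,
	 * at which point any stale entries are dealt with.
	 */
	if (tlb->fullmm)
		return;

	/* Otherwise, invalidate only the gathered range. */
	example_flush_tlb_range(tlb->mm, tlb->start, tlb->end);
}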