Date: Mon, 20 Apr 2020 14:09:16 +0200
From: Peter Zijlstra
To: Zhenyu Ye
Cc: mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com,
 aneesh.kumar@linux.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com,
 arnd@arndb.de, rostedt@goodmis.org, maz@kernel.org, suzuki.poulose@arm.com,
 tglx@linutronix.de, yuzhao@google.com, Dave.Martin@arm.com,
 steven.price@arm.com, broonie@kernel.org, guohanjun@huawei.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org, arm@kernel.org,
 xiexiangyou@huawei.com, prime.zeng@hisilicon.com, zhangshaokun@hisilicon.com,
 kuhn.chenqun@huawei.com
Subject: Re: [PATCH v1 5/6] mm: tlb: Provide flush_*_tlb_range wrappers
Message-ID: <20200420120916.GE20696@hirez.programming.kicks-ass.net>
References: <20200403090048.938-1-yezhenyu2@huawei.com>
 <20200403090048.938-6-yezhenyu2@huawei.com>
In-Reply-To: <20200403090048.938-6-yezhenyu2@huawei.com>

On Fri, Apr 03, 2020 at 05:00:47PM +0800, Zhenyu Ye wrote:
> This patch provides flush_{pte|pmd|pud|p4d}_tlb_range() in generic
> code, which are expressed through the mmu_gather APIs. These
> interfaces set tlb->cleared_* and finally call tlb_flush(), so we
> can do the tlb invalidation according to the information in
> struct mmu_gather.
>
> Signed-off-by: Zhenyu Ye
> ---
>  include/asm-generic/pgtable.h | 12 +++++++--
>  mm/pgtable-generic.c          | 50 +++++++++++++++++++++++++++++++++++
>  2 files changed, 60 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index e2e2bef07dd2..2bedeee94131 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -1160,11 +1160,19 @@ static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
>   * invalidate the entire TLB which is not desirable.
>   * e.g. see arch/arc: flush_pmd_tlb_range
>   */
> -#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
> -#define flush_pud_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
> +extern void flush_pte_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
> +extern void flush_pmd_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
> +extern void flush_pud_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
> +extern void flush_p4d_tlb_range(struct vm_area_struct *vma,
> +				unsigned long addr, unsigned long end);
>  #else
> +#define flush_pte_tlb_range(vma, addr, end)	BUILD_BUG()
>  #define flush_pmd_tlb_range(vma, addr, end)	BUILD_BUG()
>  #define flush_pud_tlb_range(vma, addr, end)	BUILD_BUG()
> +#define flush_p4d_tlb_range(vma, addr, end)	BUILD_BUG()
>  #endif
>  #endif

Ideally you'd make __HAVE_ARCH_FLUSH_PMD_TLB_RANGE go away. Power
certainly doesn't need it with the below.
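For reference, the tlb_set_*_range() helpers used by the wrappers quoted
below are only expected to record the affected range and the page-table
level in the mmu_gather, per the "set tlb->cleared_* and finally call
tlb_flush()" description in the changelog. A minimal sketch for the pmd
case, assuming asm-generic/tlb.h style fields; illustrative only, not the
literal series code:

static inline void tlb_set_pmd_range(struct mmu_gather *tlb,
				     unsigned long address,
				     unsigned long size)
{
	/* Grow the tracked virtual range to cover [address, address + size). */
	__tlb_adjust_range(tlb, address, size);
	/* Note that a PMD-level entry changed, so the eventual
	 * tlb_flush() can invalidate at PMD granularity. */
	tlb->cleared_pmds = 1;
}
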
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 3d7c01e76efc..0f5414a4a2ec 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -101,6 +101,56 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>
> +#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> +void flush_pte_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_pte_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +
> +void flush_pmd_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_pmd_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +
> +void flush_pud_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_pud_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +
> +void flush_p4d_tlb_range(struct vm_area_struct *vma,
> +			 unsigned long addr, unsigned long end)
> +{
> +	struct mmu_gather tlb;
> +
> +	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
> +	tlb_start_vma(&tlb, vma);
> +	tlb_set_p4d_range(&tlb, addr, end - addr);
> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb, addr, end);
> +}
> +#endif /* __HAVE_ARCH_FLUSH_PMD_TLB_RANGE */

You're nowhere near lazy enough:

#define FLUSH_Pxx_TLB_RANGE(_pxx)					\
void flush_##_pxx##_tlb_range(struct vm_area_struct *vma,		\
			      unsigned long addr, unsigned long end)	\
{									\
	struct mmu_gather tlb;						\
									\
	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);			\
	tlb_start_vma(&tlb, vma);					\
	tlb_flush_##_pxx##_range(&tlb, addr, end-addr);			\
	tlb_end_vma(&tlb, vma);						\
	tlb_finish_mmu(&tlb, addr, end);				\
}

FLUSH_Pxx_TLB_RANGE(pte)
FLUSH_Pxx_TLB_RANGE(pmd)
FLUSH_Pxx_TLB_RANGE(pud)
FLUSH_Pxx_TLB_RANGE(p4d)
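
Expanded for the pmd case, the macro above produces the same shape as the
open-coded wrappers in the patch, just generated once per level. Note it
calls tlb_flush_pmd_range() where the patch used tlb_set_pmd_range(); under
either name the helper is assumed to record the range and set
tlb->cleared_pmds. Roughly:

void flush_pmd_tlb_range(struct vm_area_struct *vma,
			 unsigned long addr, unsigned long end)
{
	struct mmu_gather tlb;

	/* Start a gather covering [addr, end) of this mm. */
	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
	tlb_start_vma(&tlb, vma);
	/* Record the range and the PMD level in the mmu_gather. */
	tlb_flush_pmd_range(&tlb, addr, end - addr);
	tlb_end_vma(&tlb, vma);
	/* Performs the actual tlb_flush() based on tlb->cleared_*. */
	tlb_finish_mmu(&tlb, addr, end);
}

Generating all four wrappers from one macro keeps the pte/pmd/pud/p4d
variants from drifting apart as the mmu_gather sequence changes.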