Message-ID: <20200717111349.591934380@infradead.org>
User-Agent: quilt/0.66
Date: Fri, 17 Jul 2020 13:10:09 +0200
From: Peter Zijlstra
To: Will Deacon, "Aneesh Kumar K.V", Andrew Morton, Nick Piggin,
    Peter Zijlstra
Cc: linux-arch@vger.kernel.org, linux-sh@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yoshinori Sato, Rich Felker,
    "David S. Miller", Helge Deller, Geert Uytterhoeven, Paul Burton,
    Tony Luck, Richard Henderson, Nick Hu, Paul Walmsley,
    John Paul Adrian Glaubitz, Christoph Hellwig
Subject: [PATCH v2 04/11] sparc32/tlb: Fix __p*_free_tlb()
References: <20200717111005.024867618@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

Just like regular pages, page directories need to observe the following
order:

 1) unhook
 2) TLB invalidate
 3) free

to ensure freeing them is safe against concurrent accesses.

Because Sparc32 has non-page based page directories, use a custom table
free function.

Signed-off-by: Peter Zijlstra (Intel)
---
 arch/sparc/Kconfig                  |    1 +
 arch/sparc/include/asm/pgalloc_32.h |    7 +++++--
 arch/sparc/mm/srmmu.c               |   28 ++++++++++++++++++++++++++++
 3 files changed, 34 insertions(+), 2 deletions(-)

--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -57,6 +57,7 @@ config SPARC32
 	select CLZ_TAB
 	select HAVE_UID16
 	select OLD_SIGACTION
+	select MMU_GATHER_TABLE_FREE
 
 config SPARC64
 	def_bool 64BIT

--- a/arch/sparc/include/asm/pgalloc_32.h
+++ b/arch/sparc/include/asm/pgalloc_32.h
@@ -12,6 +12,9 @@
 
 struct page;
 
+extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int idx);
+extern void __tlb_remove_table(void *table);
+
 void *srmmu_get_nocache(int size, int align);
 void srmmu_free_nocache(void *addr, int size);
 
@@ -48,7 +51,7 @@ static inline void free_pmd_fast(pmd_t * pmd)
 }
 
 #define pmd_free(mm, pmd)		free_pmd_fast(pmd)
-#define __pmd_free_tlb(tlb, pmd, addr)	pmd_free((tlb)->mm, pmd)
+#define __pmd_free_tlb(tlb, pmd, addr)	pgtable_free_tlb((tlb), (pmd), 1)
 
 #define pmd_populate(mm, pmd, pte)	pmd_set(pmd, pte)
 #define pmd_pgtable(pmd)		(pgtable_t)__pmd_page(pmd)
 
@@ -73,6 +76,6 @@ static inline void free_pte_fast(pte_t *pte)
 }
 
 #define pte_free_kernel(mm, pte)	free_pte_fast(pte)
 
 void pte_free(struct mm_struct * mm, pgtable_t pte);
-#define __pte_free_tlb(tlb, pte, addr)	pte_free((tlb)->mm, pte)
+#define __pte_free_tlb(tlb, pte, addr)	pgtable_free_tlb((tlb), (pte), 0)
 
 #endif /* _SPARC_PGALLOC_H */

--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -38,6 +38,7 @@
 #include <asm/page.h>
 #include <asm/asi.h>
 #include <asm/smp.h>
+#include <asm/tlb.h>
 #include <asm/io.h>
 
 /* Now the cpu specific definitions. */
@@ -1831,3 +1832,30 @@ void __init load_mmu(void)
 		sun4m_init_smp();
 #endif
 }
+
+#define TLB_IDX_MASK	1UL
+
+void __tlb_remove_table(void *table)
+{
+	void *dir = (void *)((unsigned long)table & ~TLB_IDX_MASK);
+	int idx = (unsigned long)table & TLB_IDX_MASK;
+
+	switch (idx) {
+	case 1: /* PMD */
+		pmd_free(NULL, dir);
+		break;
+	case 0: /* PTE */
+		pte_free(NULL, dir);
+		break;
+	}
+}
+
+void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int idx)
+{
+	unsigned long pgf = (unsigned long)table;
+	BUG_ON(idx > TLB_IDX_MASK);
+	BUG_ON(pgf & TLB_IDX_MASK);
+	pgf |= idx;
+	tlb_remove_table(tlb, (void *)pgf);
+}
+
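
For readers unfamiliar with the trick used above: SRMMU page tables come from
aligned nocache allocations, so bit 0 of the table pointer is always clear and
can carry the table type (PMD vs PTE) through the deferred tlb_remove_table()
path, where __tlb_remove_table() recovers it after the TLB invalidate. Below
is a minimal userspace sketch of that tag/untag round trip; it is purely
illustrative and not part of the patch, and the helper names tag_table() /
untag_table() as well as the aligned_alloc() stand-in are invented for the
example.

/*
 * Userspace sketch (not kernel code) of the low-bit pointer tagging that
 * pgtable_free_tlb()/__tlb_remove_table() rely on: bit 0 of an aligned
 * table pointer encodes PMD (1) vs PTE (0) across the deferred free.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TLB_IDX_MASK 1UL

static void *tag_table(void *table, int idx)
{
        uintptr_t pgf = (uintptr_t)table;

        assert(idx <= (int)TLB_IDX_MASK);       /* only one tag bit available */
        assert((pgf & TLB_IDX_MASK) == 0);      /* pointer must be aligned */
        return (void *)(pgf | (uintptr_t)idx);
}

static void untag_table(void *tagged, void **dir, int *idx)
{
        *dir = (void *)((uintptr_t)tagged & ~TLB_IDX_MASK);
        *idx = (int)((uintptr_t)tagged & TLB_IDX_MASK);
}

int main(void)
{
        void *pmd = aligned_alloc(64, 64);      /* stand-in for an SRMMU PMD table */
        void *dir;
        int idx;

        assert(pmd);
        untag_table(tag_table(pmd, 1), &dir, &idx);
        printf("dir matches: %d, idx (1 == PMD): %d\n", dir == pmd, idx);

        free(pmd);
        return 0;
}

The same alignment assumption is why pgtable_free_tlb() can BUG_ON() a set low
bit: an unaligned table would silently corrupt the type tag.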