From: Christoph Hellwig <hch@lst.de>
Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , x86@kernel.org, David Airlie , Daniel Vetter , Laura Abbott , Sumit Semwal , Sakari Ailus , Minchan Kim , Nitin Gupta Cc: Robin Murphy , Christophe Leroy , Peter Zijlstra , linuxppc-dev@lists.ozlabs.org, linux-hyperv@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-s390@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 13/29] mm: remove vmap_page_range_noflush and vunmap_page_range Date: Tue, 14 Apr 2020 15:13:32 +0200 Message-Id: <20200414131348.444715-14-hch@lst.de> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200414131348.444715-1-hch@lst.de> References: <20200414131348.444715-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These have non-static aliases called map_kernel_range_noflush and unmap_kernel_range_noflush that just differ slightly in the calling conventions that pass addr + size instead of an end. Signed-off-by: Christoph Hellwig Acked-by: Peter Zijlstra (Intel) --- mm/vmalloc.c | 98 +++++++++++++++++++++------------------------------- 1 file changed, 40 insertions(+), 58 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index aada9e9144bd..55df5dc6a9fc 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -127,10 +127,24 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned l= ong addr, unsigned long end) } while (p4d++, addr =3D next, addr !=3D end); } =20 -static void vunmap_page_range(unsigned long addr, unsigned long end) +/** + * unmap_kernel_range_noflush - unmap kernel VM area + * @addr: start of the VM area to unmap + * @size: size of the VM area to unmap + * + * Unmap PFN_UP(@size) pages at @addr. The VM area @addr and @size spec= ify + * should have been allocated using get_vm_area() and its friends. + * + * NOTE: + * This function does NOT do any cache flushing. The caller is responsi= ble + * for calling flush_cache_vunmap() on to-be-mapped areas before calling= this + * function and flush_tlb_kernel_range() after. + */ +void unmap_kernel_range_noflush(unsigned long addr, unsigned long size) { - pgd_t *pgd; + unsigned long end =3D addr + size; unsigned long next; + pgd_t *pgd; =20 BUG_ON(addr >=3D end); pgd =3D pgd_offset_k(addr); @@ -219,18 +233,30 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long= addr, return 0; } =20 -/* - * Set up page tables in kva (addr, end). The ptes shall have prot "prot= ", and - * will have pfns corresponding to the "pages" array. +/** + * map_kernel_range_noflush - map kernel VM area with the specified page= s + * @addr: start of the VM area to map + * @size: size of the VM area to map + * @prot: page protection flags to use + * @pages: pages to map * - * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages= [N] + * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specif= y should + * have been allocated using get_vm_area() and its friends. + * + * NOTE: + * This function does NOT do any cache flushing. The caller is responsi= ble for + * calling flush_cache_vmap() on to-be-mapped areas before calling this + * function. 
+ *
+ * RETURNS:
+ * The number of pages mapped on success, -errno on failure.
  */
-static int vmap_page_range_noflush(unsigned long start, unsigned long end,
-		pgprot_t prot, struct page **pages)
+int map_kernel_range_noflush(unsigned long addr, unsigned long size,
+		pgprot_t prot, struct page **pages)
 {
-	pgd_t *pgd;
+	unsigned long end = addr + size;
 	unsigned long next;
-	unsigned long addr = start;
+	pgd_t *pgd;
 	int err = 0;
 	int nr = 0;
 
@@ -251,7 +277,7 @@ static int vmap_page_range(unsigned long start, unsigned long end,
 {
 	int ret;
 
-	ret = vmap_page_range_noflush(start, end, prot, pages);
+	ret = map_kernel_range_noflush(start, end - start, prot, pages);
 	flush_cache_vmap(start, end);
 	return ret;
 }
@@ -1226,7 +1252,7 @@ EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
  */
 static void unmap_vmap_area(struct vmap_area *va)
 {
-	vunmap_page_range(va->va_start, va->va_end);
+	unmap_kernel_range_noflush(va->va_start, va->va_end - va->va_start);
 }
 
 /*
@@ -1686,7 +1712,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	rcu_read_unlock();
 	BUG_ON(!vb);
 
-	vunmap_page_range(addr, addr + size);
+	unmap_kernel_range_noflush(addr, size);
 
 	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(addr, addr + size);
@@ -1984,50 +2010,6 @@ void __init vmalloc_init(void)
 	vmap_initialized = true;
 }
 
-/**
- * map_kernel_range_noflush - map kernel VM area with the specified pages
- * @addr: start of the VM area to map
- * @size: size of the VM area to map
- * @prot: page protection flags to use
- * @pages: pages to map
- *
- * Map PFN_UP(@size) pages at @addr.  The VM area @addr and @size
- * specify should have been allocated using get_vm_area() and its
- * friends.
- *
- * NOTE:
- * This function does NOT do any cache flushing.  The caller is
- * responsible for calling flush_cache_vmap() on to-be-mapped areas
- * before calling this function.
- *
- * RETURNS:
- * The number of pages mapped on success, -errno on failure.
- */
-int map_kernel_range_noflush(unsigned long addr, unsigned long size,
-			     pgprot_t prot, struct page **pages)
-{
-	return vmap_page_range_noflush(addr, addr + size, prot, pages);
-}
-
-/**
- * unmap_kernel_range_noflush - unmap kernel VM area
- * @addr: start of the VM area to unmap
- * @size: size of the VM area to unmap
- *
- * Unmap PFN_UP(@size) pages at @addr.  The VM area @addr and @size
- * specify should have been allocated using get_vm_area() and its
- * friends.
- *
- * NOTE:
- * This function does NOT do any cache flushing.  The caller is
- * responsible for calling flush_cache_vunmap() on to-be-mapped areas
- * before calling this function and flush_tlb_kernel_range() after.
- */
-void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
-{
-	vunmap_page_range(addr, addr + size);
-}
-
 /**
  * unmap_kernel_range - unmap kernel VM area and flush cache and TLB
  * @addr: start of the VM area to unmap
@@ -2041,7 +2023,7 @@ void unmap_kernel_range(unsigned long addr, unsigned long size)
 	unsigned long end = addr + size;
 
 	flush_cache_vunmap(addr, end);
-	vunmap_page_range(addr, end);
+	unmap_kernel_range_noflush(addr, size);
 	flush_tlb_kernel_range(addr, end);
 }
 
-- 
2.25.1
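
The kerneldoc NOTEs above put all cache and TLB flushing on the caller.  As
a concrete illustration of that contract, here is a minimal caller-side
sketch; it is not part of the patch.  example_map_pages() is a hypothetical
helper, VM_MAP and PAGE_KERNEL are assumed choices, and the flush ordering
mirrors what vmap_page_range() in the hunk above does.

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical helper, for illustration only: map @count pages into a
 * freshly reserved kernel VM area, doing the cache flush that
 * map_kernel_range_noflush() deliberately leaves to its callers.
 */
static void *example_map_pages(struct page **pages, unsigned int count)
{
        unsigned long size = (unsigned long)count << PAGE_SHIFT;
        struct vm_struct *area;
        unsigned long start;
        int ret;

        /* The docs require an area from get_vm_area() and its friends. */
        area = get_vm_area(size, VM_MAP);
        if (!area)
                return NULL;
        start = (unsigned long)area->addr;

        /* Returns the number of pages mapped, or -errno on failure. */
        ret = map_kernel_range_noflush(start, size, PAGE_KERNEL, pages);
        if (ret < 0) {
                free_vm_area(area);
                return NULL;
        }

        /* Cache flushing is the caller's job, as in vmap_page_range(). */
        flush_cache_vmap(start, start + size);
        return area->addr;
}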
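
Teardown, sketched under the same assumptions (example_unmap_pages() is
likewise a hypothetical name), follows the ordering the unmap NOTE
prescribes: flush_cache_vunmap() before the page tables are torn down,
flush_tlb_kernel_range() after.

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>

/* Hypothetical counterpart to example_map_pages() above. */
static void example_unmap_pages(void *addr, unsigned int count)
{
        unsigned long start = (unsigned long)addr;
        unsigned long size = (unsigned long)count << PAGE_SHIFT;

        flush_cache_vunmap(start, start + size);        /* before unmapping */
        unmap_kernel_range_noflush(start, size);
        flush_tlb_kernel_range(start, start + size);    /* after unmapping */

        /* The vm_struct itself would still be released with free_vm_area(). */
}

This sequence is exactly the body of the exported unmap_kernel_range() in
the last hunk, which is the helper to reach for when eager flushing is
acceptable; the _noflush variant exists for callers that batch or defer
those flushes.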