From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoph Hellwig
To: Andrew Morton,
Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , x86@kernel.org, David Airlie , Daniel Vetter , Laura Abbott , Sumit Semwal , Sakari Ailus , Minchan Kim , Nitin Gupta Cc: Robin Murphy , Christophe Leroy , Peter Zijlstra , linuxppc-dev@lists.ozlabs.org, linux-hyperv@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-s390@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/28] mm: remove vmap_page_range_noflush and vunmap_page_range Date: Wed, 8 Apr 2020 13:59:10 +0200 Message-Id: <20200408115926.1467567-13-hch@lst.de> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200408115926.1467567-1-hch@lst.de> References: <20200408115926.1467567-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: These have non-static aliases claled map_kernel_range_noflush and unmap_kernel_range_noflush that just differ slightly in the calling conventions that pass addr + size instead of an end. Signed-off-by: Christoph Hellwig --- mm/vmalloc.c | 98 +++++++++++++++++++++------------------------------- 1 file changed, 40 insertions(+), 58 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index aada9e9144bd..55df5dc6a9fc 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -127,10 +127,24 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned l= ong addr, unsigned long end) } while (p4d++, addr =3D next, addr !=3D end); } =20 -static void vunmap_page_range(unsigned long addr, unsigned long end) +/** + * unmap_kernel_range_noflush - unmap kernel VM area + * @addr: start of the VM area to unmap + * @size: size of the VM area to unmap + * + * Unmap PFN_UP(@size) pages at @addr. The VM area @addr and @size spec= ify + * should have been allocated using get_vm_area() and its friends. + * + * NOTE: + * This function does NOT do any cache flushing. The caller is responsi= ble + * for calling flush_cache_vunmap() on to-be-mapped areas before calling= this + * function and flush_tlb_kernel_range() after. + */ +void unmap_kernel_range_noflush(unsigned long addr, unsigned long size) { - pgd_t *pgd; + unsigned long end =3D addr + size; unsigned long next; + pgd_t *pgd; =20 BUG_ON(addr >=3D end); pgd =3D pgd_offset_k(addr); @@ -219,18 +233,30 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long= addr, return 0; } =20 -/* - * Set up page tables in kva (addr, end). The ptes shall have prot "prot= ", and - * will have pfns corresponding to the "pages" array. +/** + * map_kernel_range_noflush - map kernel VM area with the specified page= s + * @addr: start of the VM area to map + * @size: size of the VM area to map + * @prot: page protection flags to use + * @pages: pages to map * - * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages= [N] + * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specif= y should + * have been allocated using get_vm_area() and its friends. + * + * NOTE: + * This function does NOT do any cache flushing. The caller is responsi= ble for + * calling flush_cache_vmap() on to-be-mapped areas before calling this + * function. 
+ *
+ * RETURNS:
+ * The number of pages mapped on success, -errno on failure.
  */
-static int vmap_page_range_noflush(unsigned long start, unsigned long end,
-				   pgprot_t prot, struct page **pages)
+int map_kernel_range_noflush(unsigned long addr, unsigned long size,
+			     pgprot_t prot, struct page **pages)
 {
-	pgd_t *pgd;
+	unsigned long end = addr + size;
 	unsigned long next;
-	unsigned long addr = start;
+	pgd_t *pgd;
 	int err = 0;
 	int nr = 0;
 
@@ -251,7 +277,7 @@ static int vmap_page_range(unsigned long start, unsigned long end,
 {
 	int ret;
 
-	ret = vmap_page_range_noflush(start, end, prot, pages);
+	ret = map_kernel_range_noflush(start, end - start, prot, pages);
 	flush_cache_vmap(start, end);
 	return ret;
 }
@@ -1226,7 +1252,7 @@ EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
  */
 static void unmap_vmap_area(struct vmap_area *va)
 {
-	vunmap_page_range(va->va_start, va->va_end);
+	unmap_kernel_range_noflush(va->va_start, va->va_end - va->va_start);
 }
 
 /*
@@ -1686,7 +1712,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	rcu_read_unlock();
 	BUG_ON(!vb);
 
-	vunmap_page_range(addr, addr + size);
+	unmap_kernel_range_noflush(addr, size);
 
 	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(addr, addr + size);
@@ -1984,50 +2010,6 @@ void __init vmalloc_init(void)
 	vmap_initialized = true;
 }
 
-/**
- * map_kernel_range_noflush - map kernel VM area with the specified pages
- * @addr: start of the VM area to map
- * @size: size of the VM area to map
- * @prot: page protection flags to use
- * @pages: pages to map
- *
- * Map PFN_UP(@size) pages at @addr.  The VM area @addr and @size
- * specify should have been allocated using get_vm_area() and its
- * friends.
- *
- * NOTE:
- * This function does NOT do any cache flushing.  The caller is
- * responsible for calling flush_cache_vmap() on to-be-mapped areas
- * before calling this function.
- *
- * RETURNS:
- * The number of pages mapped on success, -errno on failure.
- */
-int map_kernel_range_noflush(unsigned long addr, unsigned long size,
-			     pgprot_t prot, struct page **pages)
-{
-	return vmap_page_range_noflush(addr, addr + size, prot, pages);
-}
-
-/**
- * unmap_kernel_range_noflush - unmap kernel VM area
- * @addr: start of the VM area to unmap
- * @size: size of the VM area to unmap
- *
- * Unmap PFN_UP(@size) pages at @addr.  The VM area @addr and @size
- * specify should have been allocated using get_vm_area() and its
- * friends.
- *
- * NOTE:
- * This function does NOT do any cache flushing.  The caller is
- * responsible for calling flush_cache_vunmap() on to-be-mapped areas
- * before calling this function and flush_tlb_kernel_range() after.
- */
-void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
-{
-	vunmap_page_range(addr, addr + size);
-}
-
 /**
  * unmap_kernel_range - unmap kernel VM area and flush cache and TLB
  * @addr: start of the VM area to unmap
@@ -2041,7 +2023,7 @@ void unmap_kernel_range(unsigned long addr, unsigned long size)
 	unsigned long end = addr + size;
 
 	flush_cache_vunmap(addr, end);
-	vunmap_page_range(addr, end);
+	unmap_kernel_range_noflush(addr, size);
 	flush_tlb_kernel_range(addr, end);
 }
 
-- 
2.25.1
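
To make the calling-convention change concrete, here is a minimal caller
sketch (not part of the patch) using only the two noflush interfaces this
patch keeps.  The flush ordering mirrors what vmap_page_range() and
unmap_kernel_range() do in the hunks above; the helper name example_remap
and its parameters are invented for illustration.

	#include <linux/mm.h>		/* PAGE_KERNEL */
	#include <linux/vmalloc.h>	/* map/unmap_kernel_range_noflush() */
	#include <asm/cacheflush.h>	/* flush_cache_vmap()/flush_cache_vunmap() */
	#include <asm/tlbflush.h>	/* flush_tlb_kernel_range() */

	/*
	 * Illustration only: map @size bytes worth of @pages at @addr,
	 * then tear the mapping down again, performing the caller-side
	 * flushes the kerneldoc comments above require.
	 */
	static int example_remap(unsigned long addr, unsigned long size,
				 struct page **pages)
	{
		int ret;

		/*
		 * Populate the page tables, then flush the cache for the
		 * new mapping, as vmap_page_range() does after this patch.
		 */
		ret = map_kernel_range_noflush(addr, size, PAGE_KERNEL, pages);
		flush_cache_vmap(addr, addr + size);
		if (ret < 0)
			return ret;

		/* ... use the mapping ... */

		/*
		 * Flush caches, unmap, then flush the TLB, as
		 * unmap_kernel_range() does after this patch.
		 */
		flush_cache_vunmap(addr, addr + size);
		unmap_kernel_range_noflush(addr, size);
		flush_tlb_kernel_range(addr, addr + size);
		return 0;
	}

Note how every former end-based call becomes an addr + size call: the old
vunmap_page_range(start, end) call sites translate mechanically to
unmap_kernel_range_noflush(start, end - start), as the hunks above show.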