From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: Nicholas Piggin <npiggin@gmail.com>, linux-kernel@vger.kernel.org,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Zefan Li,
    Jonathan Cameron, Christoph Hellwig, Christophe Leroy
Subject: [PATCH v7 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range
Date: Wed, 26 Aug 2020 00:57:44 +1000
Message-Id: <20200825145753.529284-4-npiggin@gmail.com>
In-Reply-To: <20200825145753.529284-1-npiggin@gmail.com>
References: <20200825145753.529284-1-npiggin@gmail.com>

The vmalloc mapper operates on a struct page * array rather than a
linear physical address; rename it to make this distinction clear.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/vmalloc.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4e9b21adc73d..45cd80ec7eeb 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -189,7 +189,7 @@ void unmap_kernel_range_noflush(unsigned long start, unsigned long size)
 		arch_sync_kernel_mappings(start, end);
 }
 
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -217,7 +217,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int vmap_pmd_range(pud_t *pud, unsigned long addr,
+static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -229,13 +229,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_pte_range(pmd, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
+static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -247,13 +247,13 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_pmd_range(pud, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
+static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr,
 		pgtbl_mod_mask *mask)
 {
@@ -265,7 +265,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_pud_range(p4d, addr, next, prot, pages, nr, mask))
+		if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr, mask))
 			return -ENOMEM;
 	} while (p4d++, addr = next, addr != end);
 	return 0;
@@ -306,7 +306,7 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
 		next = pgd_addr_end(addr, end);
 		if (pgd_bad(*pgd))
 			mask |= PGTBL_PGD_MODIFIED;
-		err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
 		if (err)
 			return err;
 	} while (pgd++, addr = next, addr != end);
-- 
2.23.0
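
[Editor's note, not part of the patch: a minimal sketch of a hypothetical caller of
the struct page * array interface that these renamed helpers back. It assumes the
v5.9-era map_kernel_range_noflush(addr, size, prot, pages) entry point visible in
the final hunk; the names demo_vmap_pages_roundtrip and DEMO_NR_PAGES are invented
for this sketch and do not exist in the kernel.]

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/cacheflush.h>

#define DEMO_NR_PAGES	4	/* arbitrary size, for the sketch only */

static int demo_vmap_pages_roundtrip(void)
{
	struct page *pages[DEMO_NR_PAGES] = { NULL };
	struct vm_struct *area = NULL;
	unsigned long addr, size = DEMO_NR_PAGES * PAGE_SIZE;
	int i, err = -ENOMEM;

	/*
	 * Discontiguous physical pages: the interface takes an array of
	 * struct page *, not a linear physical address.
	 */
	for (i = 0; i < DEMO_NR_PAGES; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto out;
	}

	/* Reserve a kernel virtual range to map the pages into. */
	area = get_vm_area(size, VM_MAP);
	if (!area)
		goto out;
	addr = (unsigned long)area->addr;

	/*
	 * Walks pgd -> p4d -> pud -> pmd -> pte through the
	 * vmap_pages_*_range() helpers renamed by this patch.
	 */
	err = map_kernel_range_noflush(addr, size, PAGE_KERNEL, pages);
	if (err)
		goto out;
	flush_cache_vmap(addr, addr + size);

	/* ... the pages are now accessible at area->addr ... */

	unmap_kernel_range(addr, size);
out:
	if (area)
		free_vm_area(area);
	for (i = 0; i < DEMO_NR_PAGES; i++)
		if (pages[i])
			__free_page(pages[i]);
	return err;
}

The rename is visible at the call site: the caller hands over struct page pointers
and the helpers fill in PTEs one page at a time, as opposed to mapping a linear
physical address range.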