Date: Wed, 23 Feb 2022 21:13:36 -0800
In-Reply-To: <20220224051439.640768-1-kaleshsingh@google.com>
Message-Id: <20220224051439.640768-2-kaleshsingh@google.com>
References: <20220224051439.640768-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.35.1.473.g83b2b277ed-goog
Subject: [PATCH v3 1/8] KVM: arm64: Introduce hyp_alloc_private_va_range()
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
    surenb@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Mark Rutland,
    Mark Brown, Masami Hiramatsu, Peter Collingbourne, "Madhavan T.
Venkataraman" , Andrew Scull , Paolo Bonzini , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org Content-Type: text/plain; charset="UTF-8" To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org hyp_alloc_private_va_range() can be used to reserve private VA ranges in the nVHE hypervisor. Also update __create_hyp_private_mapping() to allow specifying an alignment for the private VA mapping. These will be used to implement stack guard pages for KVM nVHE hypervisor (nVHE Hyp mode / not pKVM), in a subsequent patch in the series. Signed-off-by: Kalesh Singh --- Changes in v3: - Handle null ptr in IS_ERR_OR_NULL checks, per Mark arch/arm64/include/asm/kvm_mmu.h | 4 +++ arch/arm64/kvm/mmu.c | 62 ++++++++++++++++++++------------ 2 files changed, 43 insertions(+), 23 deletions(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index 81839e9a8a24..0b0c71302b92 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -153,6 +153,10 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v) int kvm_share_hyp(void *from, void *to); void kvm_unshare_hyp(void *from, void *to); int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot); +unsigned long hyp_alloc_private_va_range(size_t size, size_t align); +int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size, + size_t align, unsigned long *haddr, + enum kvm_pgtable_prot prot); int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size, void __iomem **kaddr, void __iomem **haddr); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index bc2aba953299..fc09536c8197 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -457,22 +457,16 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot) return 0; } -static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size, - unsigned long *haddr, - enum kvm_pgtable_prot prot) + +/* + * Allocates a private VA range below io_map_base. + * + * @size: The size of the VA range to reserve. + * @align: The required alignment for the allocation. + */ +unsigned long hyp_alloc_private_va_range(size_t size, size_t align) { unsigned long base; - int ret = 0; - - if (!kvm_host_owns_hyp_mappings()) { - base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping, - phys_addr, size, prot); - if (IS_ERR_OR_NULL((void *)base)) - return PTR_ERR((void *)base); - *haddr = base; - - return 0; - } mutex_lock(&kvm_hyp_pgd_mutex); @@ -484,8 +478,8 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size, * * The allocated size is always a multiple of PAGE_SIZE. */ - size = PAGE_ALIGN(size + offset_in_page(phys_addr)); - base = io_map_base - size; + base = io_map_base - PAGE_ALIGN(size); + base = ALIGN_DOWN(base, align); /* * Verify that BIT(VA_BITS - 1) hasn't been flipped by @@ -493,20 +487,42 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size, * overflowed the idmap/IO address range. 
 	 */
 	if ((base ^ io_map_base) & BIT(VA_BITS - 1))
-		ret = -ENOMEM;
+		base = (unsigned long)ERR_PTR(-ENOMEM);
 	else
 		io_map_base = base;
 
 	mutex_unlock(&kvm_hyp_pgd_mutex);
 
-	if (ret)
-		goto out;
+	return base;
+}
+
+int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
+				 size_t align, unsigned long *haddr,
+				 enum kvm_pgtable_prot prot)
+{
+	unsigned long addr;
+	int ret = 0;
+
+	if (!kvm_host_owns_hyp_mappings()) {
+		addr = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
+					 phys_addr, size, prot);
+		if (IS_ERR_OR_NULL((void *)addr))
+			return addr ? PTR_ERR((void *)addr) : -ENOMEM;
+		*haddr = addr;
+
+		return 0;
+	}
+
+	size += offset_in_page(phys_addr);
+	addr = hyp_alloc_private_va_range(size, align);
+	if (IS_ERR_OR_NULL((void *)addr))
+		return addr ? PTR_ERR((void *)addr) : -ENOMEM;
 
-	ret = __create_hyp_mappings(base, size, phys_addr, prot);
+	ret = __create_hyp_mappings(addr, size, phys_addr, prot);
 	if (ret)
 		goto out;
 
-	*haddr = base + offset_in_page(phys_addr);
+	*haddr = addr + offset_in_page(phys_addr);
 out:
 	return ret;
 }
@@ -537,7 +553,7 @@ int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 		return 0;
 	}
 
-	ret = __create_hyp_private_mapping(phys_addr, size,
+	ret = __create_hyp_private_mapping(phys_addr, size, PAGE_SIZE,
 					   &addr, PAGE_HYP_DEVICE);
 	if (ret) {
 		iounmap(*kaddr);
@@ -564,7 +580,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
 
 	BUG_ON(is_kernel_in_hyp_mode());
 
-	ret = __create_hyp_private_mapping(phys_addr, size,
+	ret = __create_hyp_private_mapping(phys_addr, size, PAGE_SIZE,
 					   &addr, PAGE_HYP_EXEC);
 	if (ret) {
 		*haddr = NULL;
-- 
2.35.1.473.g83b2b277ed-goog
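
Usage sketch (editor's note): the commit message says the new helper will
back stack guard pages in a later patch of this series. Below is a minimal
sketch of what such a caller could look like, under stated assumptions: the
function name map_hyp_stack_with_guard_page(), its parameters, and the
two-page layout are illustrative and not taken from this series, and the
code is written as if it lived in arch/arm64/kvm/mmu.c so that the static
__create_hyp_mappings() is in scope.

/*
 * Hypothetical caller (not part of this patch): reserve a
 * PAGE_SIZE-aligned private VA range of two pages, leave the lower
 * page unmapped as a guard, and map only the upper page as the
 * stack. A stack overflow then faults on the unmapped guard page
 * instead of silently corrupting whatever sits below the stack.
 */
static int map_hyp_stack_with_guard_page(phys_addr_t stack_pa,
					 unsigned long *stack_va)
{
	unsigned long base;
	int ret;

	/* [guard page][stack page], naturally page-aligned. */
	base = hyp_alloc_private_va_range(2 * PAGE_SIZE, PAGE_SIZE);
	if (IS_ERR_OR_NULL((void *)base))
		return base ? PTR_ERR((void *)base) : -ENOMEM;

	/* Map only the upper page; the guard page stays unmapped. */
	ret = __create_hyp_mappings(base + PAGE_SIZE, PAGE_SIZE,
				    stack_pa, PAGE_HYP);
	if (ret)
		return ret;

	/* The stack grows down from the top of the mapped page. */
	*stack_va = base + 2 * PAGE_SIZE;
	return 0;
}

The error check mirrors the v3 change noted in the changelog: because
PTR_ERR(NULL) evaluates to 0 (success), a NULL/0 return from the allocator
is mapped to -ENOMEM explicitly rather than passed through PTR_ERR().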