From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 15 Jul 2022 12:29:47 -0700
In-Reply-To: <20220715192956.1873315-1-pgonda@google.com>
Message-Id: <20220715192956.1873315-3-pgonda@google.com>
Mime-Version: 1.0
References: <20220715192956.1873315-1-pgonda@google.com>
X-Mailer: git-send-email 2.37.0.170.g444d1eabd0-goog
Subject: [RFC V1 01/10] KVM: selftests: move vm_phy_pages_alloc() earlier in file
From: Peter Gonda
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, marcorr@google.com, seanjc@google.com,
	michael.roth@amd.com, thomas.lendacky@amd.com, joro@8bytes.org,
	mizhang@google.com, pbonzini@redhat.com, Peter Gonda
Content-Type: text/plain; charset="UTF-8"

From: Michael Roth

Subsequent patches will break some of this code out into file-local
helper functions, which will be used by functions like
vm_vaddr_alloc(), which currently are defined earlier in
the file, so a forward declaration would be needed. Instead, move it
earlier in the file, just above vm_vaddr_alloc() and friends, which are
the main users.

Reviewed-by: Mingwei Zhang
Signed-off-by: Michael Roth
Signed-off-by: Peter Gonda
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 146 ++++++++++-----------
 1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 768f3bce0161..b07c372b9b37 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1076,6 +1076,79 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 	return vcpu;
 }
 
+/*
+ * Physical Contiguous Page Allocator
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   num - number of pages
+ *   paddr_min - Physical address minimum
+ *   memslot - Memory region to allocate page from
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Starting physical address
+ *
+ * Within the VM specified by vm, locates a range of available physical
+ * pages at or above paddr_min. If found, the pages are marked as in use
+ * and their base address is returned. A TEST_ASSERT failure occurs if
+ * not enough pages are available at or above paddr_min.
+ */
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot)
+{
+	struct userspace_mem_region *region;
+	sparsebit_idx_t pg, base;
+
+	TEST_ASSERT(num > 0, "Must allocate at least one page");
+
+	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
+		"not divisible by page size.\n"
+		" paddr_min: 0x%lx page_size: 0x%x",
+		paddr_min, vm->page_size);
+
+	region = memslot2region(vm, memslot);
+	base = pg = paddr_min >> vm->page_shift;
+
+	do {
+		for (; pg < base + num; ++pg) {
+			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
+				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
+				break;
+			}
+		}
+	} while (pg && pg != base + num);
+
+	if (pg == 0) {
+		fprintf(stderr, "No guest physical page available, "
+			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
+			paddr_min, vm->page_size, memslot);
+		fputs("---- vm dump ----\n", stderr);
+		vm_dump(stderr, vm, 2);
+		abort();
+	}
+
+	for (pg = base; pg < base + num; ++pg)
+		sparsebit_clear(region->unused_phy_pages, pg);
+
+	return base * vm->page_size;
+}
+
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+			     uint32_t memslot)
+{
+	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
+}
+
+/* Arbitrary minimum physical address used for virtual translation tables. */
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
+
+vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
+{
+	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+}
+
 /*
  * VM Virtual Address Unused Gap
  *
@@ -1722,79 +1795,6 @@ const char *exit_reason_str(unsigned int exit_reason)
 	return "Unknown";
 }
 
-/*
- * Physical Contiguous Page Allocator
- *
- * Input Args:
- *   vm - Virtual Machine
- *   num - number of pages
- *   paddr_min - Physical address minimum
- *   memslot - Memory region to allocate page from
- *
- * Output Args: None
- *
- * Return:
- *   Starting physical address
- *
- * Within the VM specified by vm, locates a range of available physical
- * pages at or above paddr_min. If found, the pages are marked as in use
- * and their base address is returned. A TEST_ASSERT failure occurs if
- * not enough pages are available at or above paddr_min.
- */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
-{
-	struct userspace_mem_region *region;
-	sparsebit_idx_t pg, base;
-
-	TEST_ASSERT(num > 0, "Must allocate at least one page");
-
-	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
-		"not divisible by page size.\n"
-		" paddr_min: 0x%lx page_size: 0x%x",
-		paddr_min, vm->page_size);
-
-	region = memslot2region(vm, memslot);
-	base = pg = paddr_min >> vm->page_shift;
-
-	do {
-		for (; pg < base + num; ++pg) {
-			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
-				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
-				break;
-			}
-		}
-	} while (pg && pg != base + num);
-
-	if (pg == 0) {
-		fprintf(stderr, "No guest physical page available, "
-			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
-			paddr_min, vm->page_size, memslot);
-		fputs("---- vm dump ----\n", stderr);
-		vm_dump(stderr, vm, 2);
-		abort();
-	}
-
-	for (pg = base; pg < base + num; ++pg)
-		sparsebit_clear(region->unused_phy_pages, pg);
-
-	return base * vm->page_size;
-}
-
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
-			     uint32_t memslot)
-{
-	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
-}
-
-/* Arbitrary minimum physical address used for virtual translation tables. */
-#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
-
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
-{
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
-}
-
 /*
  * Address Guest Virtual to Host Virtual
  *
-- 
2.37.0.170.g444d1eabd0-goog
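
As a rough illustration of how the moved allocator is meant to be called from
a test (this sketch is not part of the patch), a minimal selftest might look
like the following. It assumes the reworked selftest API this series builds on
(vm_create_with_one_vcpu(), kvm_vm_free(), pr_info()); the empty guest_code
stub, the 1MB paddr_min, and the four-page allocation are arbitrary example
values, not anything taken from this series:

#include "test_util.h"
#include "kvm_util.h"

static void guest_code(void)
{
}

int main(int argc, char *argv[])
{
	struct kvm_vcpu *vcpu;
	struct kvm_vm *vm;
	vm_paddr_t base, pt;

	/* Standard setup: one vCPU, default memory backing memslot 0. */
	vm = vm_create_with_one_vcpu(&vcpu, guest_code);

	/* Four contiguous guest-physical pages at or above 1MB in memslot 0. */
	base = vm_phy_pages_alloc(vm, 4, 0x100000, 0);

	/* One page suitable for guest translation tables. */
	pt = vm_alloc_page_table(vm);

	pr_info("data GPA: 0x%lx, page table GPA: 0x%lx\n", base, pt);

	kvm_vm_free(vm);
	return 0;
}

Note that vm_phy_pages_alloc() asserts and dumps the VM rather than returning
an error when the request cannot be satisfied, so callers do not need to check
its return value.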