Date: Wed, 19 May 2021 13:03:33 -0700
In-Reply-To: <20210519200339.829146-1-axelrasmussen@google.com>
Message-Id: <20210519200339.829146-5-axelrasmussen@google.com>
Mime-Version: 1.0
References: <20210519200339.829146-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
Subject: [PATCH v2 04/10] KVM: selftests: compute correct demand paging size
From: Axel Rasmussen <axelrasmussen@google.com>
To: Aaron Lewis, Alexander Graf, Andrew Jones,
	Andrew Morton, Ben Gardon, Emanuele Giuseppe Esposito, Eric Auger,
	Jacob Xu, Makarand Sonare, Oliver Upton, Paolo Bonzini, Peter Xu,
	Shuah Khan, Yanan Wang
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-kselftest@vger.kernel.org,
	Axel Rasmussen <axelrasmussen@google.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: kvm@vger.kernel.org

This is a preparatory commit needed before we can use different kinds of
backing pages for guest memory.

Previously, we used perf_test_args.host_page_size, which is the host's
native page size (commonly 4K). For VM_MEM_SRC_ANONYMOUS this turns out
to be okay, but in a follow-up commit we want to allow using different
kinds of backing memory.

Take VM_MEM_SRC_ANONYMOUS_HUGETLB for example. Without this change, if
we used that backing page type, then each UFFDIO_COPY ioctl would only
copy 4K, rather than the full 2M of a backing hugepage. In that case,
UFFDIO_COPY returns -EINVAL (__mcopy_atomic_hugetlb checks the size).

Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 tools/testing/selftests/kvm/demand_paging_test.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 601a1df24dd2..94cf047358d5 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -40,6 +40,7 @@
 
 static int nr_vcpus = 1;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+static size_t demand_paging_size;
 static char *guest_data_prototype;
 
 static void *vcpu_worker(void *data)
@@ -85,7 +86,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 
 	copy.src = (uint64_t)guest_data_prototype;
 	copy.dst = addr;
-	copy.len = perf_test_args.host_page_size;
+	copy.len = demand_paging_size;
 	copy.mode = 0;
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
@@ -102,7 +103,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
 		       timespec_to_ns(ts_diff));
 	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
-		       perf_test_args.host_page_size, addr, tid);
+		       demand_paging_size, addr, tid);
 
 	return 0;
 }
@@ -261,10 +262,12 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	perf_test_args.wr_fract = 1;
 
-	guest_data_prototype = malloc(perf_test_args.host_page_size);
+	demand_paging_size = get_backing_src_pagesz(VM_MEM_SRC_ANONYMOUS);
+
+	guest_data_prototype = malloc(demand_paging_size);
 	TEST_ASSERT(guest_data_prototype,
 		    "Failed to allocate buffer for guest data pattern");
-	memset(guest_data_prototype, 0xAB, perf_test_args.host_page_size);
+	memset(guest_data_prototype, 0xAB, demand_paging_size);
 
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
-- 
2.31.1.751.gd2f1c929bd-goog
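
For readers unfamiliar with the failure mode described above, the standalone
sketch below (not part of the patch, and independent of the KVM selftest
harness) reproduces it with plain userfaultfd: after registering a
hugetlb-backed mapping, a UFFDIO_COPY issued with the host's 4K page size
fails with EINVAL, while a copy of the full backing page size succeeds. It
assumes a 2M default hugepage size with at least one hugepage reserved
(e.g. via /proc/sys/vm/nr_hugepages); the file name and the HPAGE_SIZE
constant are illustrative only.

/* uffd_hugetlb_copy_sketch.c - illustrative only, not part of this patch.
 *
 * Build: gcc -o uffd_hugetlb_copy_sketch uffd_hugetlb_copy_sketch.c
 * Needs at least one free hugepage; adjust HPAGE_SIZE if your default
 * hugepage size is not 2M.
 */
#define _GNU_SOURCE
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL << 20)	/* assumed backing page size (2M) */

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = { 0 };
	struct uffdio_copy copy = { 0 };
	void *dst, *src;
	int uffd;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0)
		err(1, "userfaultfd");
	if (ioctl(uffd, UFFDIO_API, &api))
		err(1, "UFFDIO_API");

	/* Hugetlb-backed memory, analogous to VM_MEM_SRC_ANONYMOUS_HUGETLB. */
	dst = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (dst == MAP_FAILED)
		err(1, "mmap(MAP_HUGETLB)");

	reg.range.start = (unsigned long)dst;
	reg.range.len = HPAGE_SIZE;
	reg.mode = UFFDIO_REGISTER_MODE_MISSING;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		err(1, "UFFDIO_REGISTER");

	/* Data pattern, like guest_data_prototype in the selftest. */
	src = malloc(HPAGE_SIZE);
	if (!src)
		err(1, "malloc");
	memset(src, 0xAB, HPAGE_SIZE);

	copy.dst = (unsigned long)dst;
	copy.src = (unsigned long)src;
	copy.mode = 0;

	/* Wrong: len is the host page size (4K). The kernel's hugetlb path
	 * (__mcopy_atomic_hugetlb) requires the range to be a multiple of
	 * the backing page size, so this fails with EINVAL. */
	copy.len = getpagesize();
	if (ioctl(uffd, UFFDIO_COPY, &copy))
		printf("4K UFFDIO_COPY failed as expected: %s\n",
		       strerror(errno));

	/* Right: len is the backing page size, which is what the patch
	 * computes via get_backing_src_pagesz(). */
	copy.len = HPAGE_SIZE;
	copy.copy = 0;
	if (ioctl(uffd, UFFDIO_COPY, &copy))
		err(1, "2M UFFDIO_COPY");
	printf("2M UFFDIO_COPY copied %lld bytes\n", (long long)copy.copy);

	return 0;
}

In the selftest, demand_paging_size captures the same requirement, so the
UFFDIO_COPY in handle_uffd_page_request() works for any backing source type.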