References: <20210901211412.4171835-1-rananta@google.com>
 <20210901211412.4171835-13-rananta@google.com>
 <20210903110514.22x3txynin5hg46z@gator.home>
In-Reply-To: <20210903110514.22x3txynin5hg46z@gator.home>
From: Raghavendra Rao Ananta
Date: Fri, 3 Sep 2021 13:53:27 -0700
Subject: Re: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration
To: Andrew Jones
Cc: Paolo Bonzini, Marc Zyngier, James Morse, Alexandru Elisei,
 Suzuki K Poulose, kvm@vger.kernel.org, Catalin Marinas, Peter Shier,
 linux-kernel@vger.kernel.org, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 3, 2021 at 4:05 AM Andrew Jones wrote:
>
> On Wed, Sep 01, 2021 at 09:14:12PM +0000, Raghavendra Rao Ananta wrote:
> > Since the timer stack (hardware and KVM) is per-CPU, there
> > are potential chances for races to occur when the scheduler
> > decides to migrate a vCPU thread to a different physical CPU.
> > Hence, include an option to stress-test this part as well by
> > forcing the vCPUs to migrate across physical CPUs in the
> > system at a particular rate.
> >
> > Originally, the bug for the fix with commit 3134cc8beb69d0d
> > ("KVM: arm64: vgic: Resample HW pending state on deactivation")
> > was discovered using arch_timer test with vCPU migrations and
> > can be easily reproduced.
> >
> > Signed-off-by: Raghavendra Rao Ananta
> > ---
> >  .../selftests/kvm/aarch64/arch_timer.c        | 108 +++++++++++++++++-
> >  1 file changed, 107 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > index 1383f33850e9..de246c7afab2 100644
> > --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > @@ -14,6 +14,8 @@
> >   *
> >   * The test provides command-line options to configure the timer's
> >   * period (-p), number of vCPUs (-n), and iterations per stage (-i).
> > + * To stress-test the timer stack even more, an option to migrate the
> > + * vCPUs across pCPUs (-m), at a particular rate, is also provided.
> >   *
> >   * Copyright (c) 2021, Google LLC.
> >   */
> > @@ -24,6 +26,8 @@
> >  #include
> >  #include
> >  #include
> > +#include
> > +#include
> >
> >  #include "kvm_util.h"
> >  #include "processor.h"
> >
> > @@ -41,12 +45,14 @@ struct test_args {
> >         int nr_vcpus;
> >         int nr_iter;
> >         int timer_period_ms;
> > +       int migration_freq_ms;
> >  };
> >
> >  static struct test_args test_args = {
> >         .nr_vcpus = NR_VCPUS_DEF,
> >         .nr_iter = NR_TEST_ITERS_DEF,
> >         .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
> > +       .migration_freq_ms = 0, /* Turn off migrations by default */
> I'd rather we enable good tests like these by default.
>

Well, that was my original idea, but I was concerned about the ease of
diagnosing things, since it could become too noisy. And so I left it as a
personal preference. But I can include it back and see how it goes.

> >  };
> >
> >  #define msecs_to_usecs(msec)    ((msec) * 1000LL)
> >
> > @@ -81,6 +87,9 @@ struct test_vcpu {
> >  static struct test_vcpu test_vcpu[KVM_MAX_VCPUS];
> >  static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS];
> >
> > +static unsigned long *vcpu_done_map;
> > +static pthread_mutex_t vcpu_done_map_lock;
> > +
> >  static void
> >  guest_configure_timer_action(struct test_vcpu_shared_data *shared_data)
> >  {
> > @@ -216,6 +225,11 @@ static void *test_vcpu_run(void *arg)
> >
> >         vcpu_run(vm, vcpuid);
> >
> > +       /* Currently, any exit from guest is an indication of completion */
> > +       pthread_mutex_lock(&vcpu_done_map_lock);
> > +       set_bit(vcpuid, vcpu_done_map);
> > +       pthread_mutex_unlock(&vcpu_done_map_lock);
> > +
> >         switch (get_ucall(vm, vcpuid, &uc)) {
> >         case UCALL_SYNC:
> >         case UCALL_DONE:
> > @@ -235,9 +249,73 @@ static void *test_vcpu_run(void *arg)
> >         return NULL;
> >  }
> > +static uint32_t test_get_pcpu(void)
> > +{
> > +       uint32_t pcpu;
> > +       unsigned int nproc_conf;
> > +       cpu_set_t online_cpuset;
> > +
> > +       nproc_conf = get_nprocs_conf();
> > +       sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
> > +
> > +       /* Randomly find an available pCPU to place a vCPU on */
> > +       do {
> > +               pcpu = rand() % nproc_conf;
> > +       } while (!CPU_ISSET(pcpu, &online_cpuset));
> > +
> > +       return pcpu;
> > +}
> > +static int test_migrate_vcpu(struct test_vcpu *vcpu)
> > +{
> > +       int ret;
> > +       cpu_set_t cpuset;
> > +       uint32_t new_pcpu = test_get_pcpu();
> > +
> > +       CPU_ZERO(&cpuset);
> > +       CPU_SET(new_pcpu, &cpuset);
> > +       ret = pthread_setaffinity_np(vcpu->pt_vcpu_run,
> > +                                       sizeof(cpuset), &cpuset);
> > +
> > +       /* Allow the error where the vCPU thread is already finished */
> > +       TEST_ASSERT(ret == 0 || ret == ESRCH,
> > +               "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n",
> > +               vcpu->vcpuid, new_pcpu, ret);
>
> It'd be good to collect stats for the two cases so we know how many
> vcpus we actually migrated with a successful setaffinity and how many
> just completed too early. If our stats don't look good, then we can
> adjust our timeouts and frequencies.
>

I can do that, but since we don't attempt to migrate if the migration
thread learns that the vCPU is already done, I'm guessing we may not
hit ESRCH as much.

> > +
> > +       return ret;
> > +}
> > +static void *test_vcpu_migration(void *arg)
> > +{
> > +       unsigned int i, n_done;
> > +       bool vcpu_done;
> > +
> > +       do {
> > +               usleep(msecs_to_usecs(test_args.migration_freq_ms));
> > +
> > +               for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) {
> > +                       pthread_mutex_lock(&vcpu_done_map_lock);
> > +                       vcpu_done = test_bit(i, vcpu_done_map);
> > +                       pthread_mutex_unlock(&vcpu_done_map_lock);
> > +
> > +                       if (vcpu_done) {
> > +                               n_done++;
> > +                               continue;
> > +                       }
> > +
> > +                       test_migrate_vcpu(&test_vcpu[i]);
> > +               }
> > +       } while (test_args.nr_vcpus != n_done);
> > +
> > +       return NULL;
> > +}
> > +
> >  static void test_run(struct kvm_vm *vm)
> >  {
> >         int i, ret;
> > +       pthread_t pt_vcpu_migration;
> > +
> > +       pthread_mutex_init(&vcpu_done_map_lock, NULL);
> > +       vcpu_done_map = bitmap_alloc(test_args.nr_vcpus);
> > +       TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n");
> >
> >         for (i = 0; i < test_args.nr_vcpus; i++) {
> >                 ret = pthread_create(&test_vcpu[i].pt_vcpu_run, NULL,
> > @@ -245,8 +323,23 @@ static void test_run(struct kvm_vm *vm)
> >                 TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i);
> >         }
> >
> > +       /* Spawn a thread to control the vCPU migrations */
> > +       if (test_args.migration_freq_ms) {
> > +               srand(time(NULL));
> > +
> > +               ret = pthread_create(&pt_vcpu_migration, NULL,
> > +                                    test_vcpu_migration, NULL);
> > +               TEST_ASSERT(!ret, "Failed to create the migration pthread\n");
> > +       }
> > +
> > +
> >         for (i = 0; i < test_args.nr_vcpus; i++)
> >                 pthread_join(test_vcpu[i].pt_vcpu_run, NULL);
> > +
> > +       if (test_args.migration_freq_ms)
> > +               pthread_join(pt_vcpu_migration, NULL);
> > +
> > +       bitmap_free(vcpu_done_map);
> >  }
> >
> >  static struct kvm_vm *test_vm_create(void)
> > @@ -286,6 +379,7 @@ static void test_print_help(char *name)
> >                 NR_TEST_ITERS_DEF);
> >         pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n",
> >                 TIMER_TEST_PERIOD_MS_DEF);
> > +       pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: 0)\n");
> >         pr_info("\t-h: print this help screen\n");
> >  }
> >
> > @@ -293,7 +387,7 @@ static bool parse_args(int argc, char *argv[])
> >  {
> >         int opt;
> >
> > -       while ((opt = getopt(argc, argv, "hn:i:p:")) != -1) {
> > +       while ((opt = getopt(argc, argv, "hn:i:p:m:")) != -1) {
> >                 switch (opt) {
> >                 case 'n':
> >                         test_args.nr_vcpus = atoi(optarg);
> > @@ -320,6 +414,13 @@ static bool parse_args(int argc, char *argv[])
> >                         goto err;
> >                 }
> >                 break;
> > +               case 'm':
> > +                       test_args.migration_freq_ms = atoi(optarg);
> > +                       if (test_args.migration_freq_ms < 0) {
> > +                               pr_info("0 or positive value needed for -m\n");
> > +                               goto err;
> > +                       }
> > +                       break;
> >                 case 'h':
> >                 default:
> >                         goto err;
> > @@ -343,6 +444,11 @@ int main(int argc, char *argv[])
> >         if (!parse_args(argc, argv))
> >                 exit(KSFT_SKIP);
> >
> > +       if (get_nprocs() < 2) {
>
> if (test_args.migration_freq_ms && get_nprocs() < 2)
>
> > +               print_skip("At least two physical CPUs needed for vCPU migration");
> > +               exit(KSFT_SKIP);
> > +       }
> > +
> >         vm = test_vm_create();
> >         test_run(vm);
> >         kvm_vm_free(vm);
> > --
> > 2.33.0.153.gba50c8fa24-goog
> >
>
> Thanks,
> drew
>