Date: Tue, 11 Oct 2022 19:40:42 +0000
From: Sean Christopherson
To: Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, Paolo Bonzini, Wanpeng Li, Jim Mattson,
	Maxim Levitsky, Michael Kelley, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 6/6] KVM: selftests: Test Hyper-V invariant TSC control
References: <20220922143655.3721218-1-vkuznets@redhat.com>
	<20220922143655.3721218-7-vkuznets@redhat.com>
In-Reply-To: <20220922143655.3721218-7-vkuznets@redhat.com>
X-Mailing-List: linux-hyperv@vger.kernel.org

On Thu, Sep 22, 2022, Vitaly Kuznetsov wrote:
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> index d4bd18bc580d..18b44450dfb8 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> @@ -46,20 +46,33 @@ struct hcall_data {
>  
>  static
>  void guest_msr(struct msr_data *msr)
>  {
> -	uint64_t ignored;
> +	uint64_t msr_val = 0;
>  	uint8_t vector;
>  
>  	GUEST_ASSERT(msr->idx);
>  
> -	if (!msr->write)
> -		vector = rdmsr_safe(msr->idx, &ignored);
> -	else
> +	if (!msr->write) {
> +		vector = rdmsr_safe(msr->idx, &msr_val);

This is subtly going to do weird things if the RDMSR faults.  rdmsr_safe()
overwrites @val with whatever happens to be in EDX:EAX if the RDMSR faults,
i.e. this may yield garbage instead of '0'.

Arguably rdmsr_safe() is a bad API, but at the same time the caller really
shouldn't consume the result if RDMSR faults (though aligning with the
kernel is also valuable).

Aha!  Idea.  Assuming none of the MSRs are write-only, what about adding a
prep patch to rework this code so that it verifies RDMSR returns what was
written when a fault didn't occur.

	uint8_t vector = 0;
	uint64_t msr_val;

	GUEST_ASSERT(msr->idx);

	if (msr->write)
		vector = wrmsr_safe(msr->idx, msr->write_val);

	if (!vector)
		vector = rdmsr_safe(msr->idx, &msr_val);

	if (msr->fault_expected)
		GUEST_ASSERT_2(vector == GP_VECTOR, msr->idx, vector);
	else
		GUEST_ASSERT_2(!vector, msr->idx, vector);

	if (vector)
		goto done;

	GUEST_ASSERT_2(msr_val == msr->write_val, msr_val, msr->write_val);

done:
	GUEST_DONE();

and then this patch can just slot in the extra check:

	uint8_t vector = 0;
	uint64_t msr_val;

	GUEST_ASSERT(msr->idx);

	if (msr->write)
		vector = wrmsr_safe(msr->idx, msr->write_val);

	if (!vector)
		vector = rdmsr_safe(msr->idx, &msr_val);

	if (msr->fault_expected)
		GUEST_ASSERT_2(vector == GP_VECTOR, msr->idx, vector);
	else
		GUEST_ASSERT_2(!vector, msr->idx, vector);

	if (vector)
		goto done;

	GUEST_ASSERT_2(msr_val == msr->write_val, msr_val, msr->write_val);

	/* Invariant TSC bit appears when TSC invariant control MSR is written to */
	if (msr->idx == HV_X64_MSR_TSC_INVARIANT_CONTROL) {
		if (!this_cpu_has(HV_ACCESS_TSC_INVARIANT))
			GUEST_ASSERT(this_cpu_has(X86_FEATURE_INVTSC));
		else
			GUEST_ASSERT(this_cpu_has(X86_FEATURE_INVTSC) ==
				     !!(msr_val &
				       HV_INVARIANT_TSC_EXPOSED));
	}

done:
	GUEST_DONE();