From: Sean Christopherson <seanjc@google.com>
To: Like Xu <like.xu.linux@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Sandipan Das <sandipan.das@amd.com>
Subject: Re: [PATCH v4 11/12] KVM: x86/svm/pmu: Add AMD PerfMonV2 support
Date: Fri, 7 Apr 2023 07:44:57 -0700 [thread overview]
Message-ID: <ZDAsaXvx85x+n71S@google.com> (raw)
In-Reply-To: <dfc5cba8-5efb-8ad6-01e0-2800290a9ac1@gmail.com>
On Fri, Apr 07, 2023, Like Xu wrote:
> On 7/4/2023 9:35 am, Sean Christopherson wrote:
> > On Tue, Feb 14, 2023, Like Xu wrote:
> > > + case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
> > > + if (!msr_info->host_initiated)
> > > + return 0; /* Writes are ignored */
> >
> > Where is the "writes ignored" behavior documented? I can't find anything in the
> > APM that defines write behavior.
>
> KVM follows real hardware behavior whenever the specification stays silent
> on such details.
So is that a "this isn't actually documented anywhere" answer? It's not your
responsibility to get AMD to document their CPUs, but I want to clearly document
when KVM's behavior is based solely on observed hardware behavior, versus an
actual specification.
> How about this:
>
> /*
> * Note, AMD ignores writes to reserved bits and read-only PMU MSRs,
> * whereas Intel generates #GP on attempts to write reserved/RO MSRs.
> */
Looks good.
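For context, a minimal standalone sketch (not the actual KVM code; the struct
and function names here are simplified stand-ins) of how the agreed comment and
the ignore-writes behavior fit together in a set_msr-style handler:

```c
#include <stdbool.h>
#include <stdint.h>

/* MSR index per Linux's arch/x86/include/asm/msr-index.h */
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300

struct msr_data {
	uint32_t index;
	uint64_t data;
	bool host_initiated;	/* write came from userspace, not the guest */
};

struct pmu_state {
	uint64_t global_status;
};

/* Returns 0 on success, 1 if the write should #GP. */
static int pmu_set_msr(struct pmu_state *pmu, struct msr_data *msr)
{
	switch (msr->index) {
	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
		/*
		 * Note, AMD ignores writes to reserved bits and read-only PMU
		 * MSRs, whereas Intel generates #GP on attempts to write
		 * reserved/RO MSRs.
		 */
		if (!msr->host_initiated)
			return 0;	/* Writes are ignored */
		/* Host-initiated writes (e.g. migration restore) do land. */
		pmu->global_status = msr->data;
		return 0;
	default:
		return 1;
	}
}
```

The key asymmetry: a guest write is silently dropped (matching observed AMD
hardware), while a host-initiated write still updates the state so userspace
can restore it across migration.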
> > > + pmu->nr_arch_gp_counters = min_t(unsigned int,
> > > + ebx.split.num_core_pmc,
> > > + kvm_pmu_cap.num_counters_gp);
> > > + } else if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
> > > pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS_CORE;
> >
> > This needs to be sanitized, no? E.g. if KVM only has access to 4 counters, but
> > userspace sets X86_FEATURE_PERFCTR_CORE anyways. Hrm, unless I'm missing something,
> > that's a pre-existing bug.
>
> Now your point is that if userspace advertises more capability than KVM can
> support, KVM should constrain it.
> Your previous preference was that userspace can set capabilities even if KVM
> doesn't support them, as long as doing so doesn't break KVM or the host; the
> guest eats its own bad configuration.
Letting userspace define a "bad" configuration is perfectly ok, but KVM needs to
be careful not to endanger itself by consuming the bad state. A good example is
the handling of nested SVM features in svm_vcpu_after_set_cpuid(). KVM lets
userspace define anything and everything, but KVM only actually tries to utilize
a feature if the feature is actually supported in hardware.
In this case, it's not clear to me that putting a bogus value into "nr_arch_gp_counters"
is safe (for KVM). And AIUI, the guest can't actually use more than
kvm_pmu_cap.num_counters_gp counters, i.e. KVM isn't arbitrarily restricting the
setup.
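To make the sanitization concrete, here is a small model (hypothetical helper,
not the actual svm_pmu_refresh()) of the counter-count selection being
discussed: every branch, including the legacy PERFCTR_CORE one, is clamped to
what KVM itself supports, so a userspace that over-advertises can't push a
bogus value into nr_arch_gp_counters:

```c
/* Legacy counter counts per Linux's perf definitions. */
#define AMD64_NUM_COUNTERS	4
#define AMD64_NUM_COUNTERS_CORE	6

static unsigned int min_uint(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/*
 * Model of the counter selection: take the count implied by the guest
 * CPUID (PerfMonV2's num_core_pmc, or a fixed legacy count), then clamp
 * to kvm_num_counters_gp, i.e. what KVM/hardware actually provides.
 */
static unsigned int nr_gp_counters(int has_perfmon_v2,
				   unsigned int num_core_pmc,
				   int has_perfctr_core,
				   unsigned int kvm_num_counters_gp)
{
	unsigned int n;

	if (has_perfmon_v2)
		n = num_core_pmc;	/* from CPUID 0x80000022 EBX */
	else if (has_perfctr_core)
		n = AMD64_NUM_COUNTERS_CORE;
	else
		n = AMD64_NUM_COUNTERS;

	return min_uint(n, kvm_num_counters_gp);
}
```

With this shape, setting X86_FEATURE_PERFCTR_CORE when KVM only has 4 counters
yields 4, not 6, closing the pre-existing hole noted above.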