From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Marc Orr <marcorr@google.com>
Cc: kvm@vger.kernel.org, Jim Mattson <jmattson@google.com>,
	Peter Shier <pshier@google.com>
Subject: Re: [kvm-unit-tests PATCH] x86: nvmx: test max atomic switch MSRs
Date: Fri, 13 Sep 2019 11:30:40 -0700	[thread overview]
Message-ID: <20190913183040.GA8904@linux.intel.com> (raw)
In-Reply-To: <CAA03e5F3SNxcYxdeOg6ZUfxRA5gBe7qaMxSATL13sq1cUL63KQ@mail.gmail.com>

On Fri, Sep 13, 2019 at 11:02:39AM -0700, Marc Orr wrote:
> > > +static void *alloc_2m_page(void)
> > > +{
> > > +     return alloc_pages(PAGE_2M_ORDER);
> > > +}
> >
> > Allocating 2mb pages is complete overkill.  The absolute theoretical max
> > for the number of MSRs is (8 * 512) = 4096, for a total of 32kb per list.
> > We can even show the math so that it's obvious how the size is calculated.
> > Plus one order so we can test overrun.
> > /*
> >  * The max number of MSRs is specified in 3 bits, plus 1. I.e. 7+1==8.
> >  * Allocate 64k bytes of data to cover max_msr_list_size and then some.
> >  */
> > static const u32 msr_list_page_order = 4;
> 
> 8 * 512 should be 16 * 512, right [1]? Therefore, the theoretical
> maximum list size is 64 kB.

*sigh*  I stand by my assertion that "Math is hard".  :-)
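
So the corrected version would be something like this (sketch only, I
haven't compiled it, and the constant name is just carried over from my
earlier suggestion):

/*
 * The max number of MSRs is specified in 3 bits, plus 1, i.e. 7+1==8,
 * for a theoretical max of 8 * 512 = 4096 entries.  Each vmx_msr_entry
 * is 16 bytes, so the max list size is 64k.  Allocate one extra order
 * (128k) so that exceeding the limit can also be tested.
 */
static const u32 msr_list_page_order = 5;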

> I'll happily apply what you've suggested in v2. But I don't see why
> it's so terrible to over-allocate here. A generic 2 MB page allocator
> can be reused going forward, and it encourages uniformity across
> tests.

My main concern is avoiding allocating 6mb+ of memory.  I like to run the
tests in L1 and L2 and would prefer to keep overhead to a minimum.

As for the allocation itself, being precise in the allocation size is a
form of documentation, e.g. it conveys that the size/order was chosen to
ensure enough space for the maximum theoretical list size.  A purely
arbitrary size, especially one that corresponds with a large page size,
can lead to people looking for things that don't exist, e.g. the 2mb size
is partially what led me to believe that this test was deliberately
exceeding the limit, otherwise why allocate such a large amount of memory?
I also didn't know if 2mb was sufficient to handle the maximum theoretical
list size.

> [1]
> struct vmx_msr_entry {
>   u32 index;
>   u32 reserved;
>   u64 value;
> } __aligned(16);
> 
> > > +static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
> > > +{
> > > +     int i;
> > > +
> > > +     for (i = 0; i < count; i++) {
> > > +             msr_list[i].index = MSR_IA32_TSC;
> > > +             msr_list[i].reserved = 0;
> > > +             msr_list[i].value = 0x1234567890abcdef;
> >
> > Maybe overkill, but we can use a fast string op for this.  I think
> > I got the union right?
> >
> > static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
> > {
> >         union {
> >                 struct vmx_msr_entry msr;
> >                 u64 val;
> >         } tmp;
> >
> >         tmp.msr.index = MSR_IA32_TSC;
> >         tmp.msr.reserved = 0;
> >         tmp.msr.value = 0x1234567890abcdef;
> >
> >         asm volatile (
> >                 "rep stosq\n\t"
> >                 : "=c"(count), "=D"(msr_list)
> >                 : "a"(tmp.val), "c"(count), "D"(msr_list)
> >                 : "memory"
> >         );
> > }
> 
> :-). I do think it's overkill. However, I'm OK to apply this
> suggestion in v2 if everyone is OK with me adding a comment that
> explains it in terms of the original code, so that x86 noobs, like
> myself, can understand what's going on.

Never mind, I can't count.  This only works if the size of the entry
is 8 bytes, and vmx_msr_entry is 16 bytes.
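
For the record, the plain loop from your patch is the simplest correct
option; "rep stosq" stores count 8-byte qwords, but each 16-byte entry
needs two different qwords.  I.e. just keep something like (sketch):

static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
{
	int i;

	for (i = 0; i < count; i++) {
		msr_list[i].index = MSR_IA32_TSC;
		msr_list[i].reserved = 0;
		msr_list[i].value = 0x1234567890abcdef;
	}
}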

> > This is a misleading name, e.g. it took me quite a while to realize this
> > is testing only the passing scenario.  For me, "limit test" implies that
> > it'd be deliberately exceeding the limit, or at least testing both the
> > passing and failing cases.  I suppose we can't easily test the VMX abort
> > cases, but we can at least test VM_ENTER_LOAD.
> >
> > Distilling things down to the bare minimum yields something like the
> > following.
> 
> Looks excellent overall. Still not clear what the consensus is on
> whether or not to test the VM-entry failure. I think a flag seems like
> a reasonable compromise. I've never added a flag to a kvm-unit-test,
> so I'll see if I can figure that out.

No need for a flag if you want to go that route; just put it in a separate
VMX subtest and exclude said test from the [vmx] config, i.e. make the
test opt-in.
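
E.g. something along these lines in x86/unittests.cfg (sketch only, the
section and parameter names below are placeholders for whatever the
subtest ends up being called):

[vmx_atomic_switch_overflow]
file = vmx.flat
extra_params = -append "atomic_switch_overflow"
arch = x86_64

plus excluding the subtest from the default [vmx] entry, so it only runs
when someone asks for it explicitly.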


Thread overview: 9+ messages
2019-09-12 18:09 [kvm-unit-tests PATCH] x86: nvmx: test max atomic switch MSRs Marc Orr
2019-09-13 15:24 ` Sean Christopherson
2019-09-13 16:26   ` Jim Mattson
2019-09-13 17:15     ` Sean Christopherson
2019-09-13 17:21       ` Jim Mattson
2019-09-13 18:02   ` Marc Orr
2019-09-13 18:30     ` Sean Christopherson [this message]
2019-09-13 21:55       ` Marc Orr
2019-09-14  0:49   ` Marc Orr
