From: Marc Orr <marcorr@google.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: kvm@vger.kernel.org, Jim Mattson <jmattson@google.com>,
	Peter Shier <pshier@google.com>
Subject: Re: [kvm-unit-tests PATCH] x86: nvmx: test max atomic switch MSRs
Date: Fri, 13 Sep 2019 17:49:28 -0700
Message-ID: <CAA03e5FdHGoH2YUNba85hdY9_bzoFjReXFzL=TamB303yZw_tA@mail.gmail.com>
In-Reply-To: <20190913152442.GC31125@linux.intel.com>

> > +static void *alloc_2m_page(void)
> > +{
> > +     return alloc_pages(PAGE_2M_ORDER);
> > +}
>
> Allocating 2mb pages is complete overkill.  The absolute theoretical max
> for the number of MSRs is (8 * 512) = 4096, for a total of 32kb per list.
> We can even show the math so that it's obvious how the size is calculated.
> Plus one order so we can test overrun.
>
> /*
>  * The max number of MSRs is specified in 3 bits, plus 1. I.e. 7+1==8.
>  * Allocate 64k bytes of data to cover max_msr_list_size and then some.
>  */
> static const u32 msr_list_page_order = 4;
>

Done. Changed msr_list_page_order to 5, per our previous discussion
that you meant 16 * 512.
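For reference, the math behind the order-5 choice, written out as a sketch of
what the revised comment/constant looks like (exact wording may differ in the
posted patch):

/*
 * The max number of MSRs in an atomic switch MSR list is specified by a
 * 3-bit field, plus 1, times 512: (7 + 1) * 512 = 4096 entries.  Each
 * struct vmx_msr_entry is 16 bytes, so a full list is 64 KiB.  Allocate
 * 128 KiB (order 5) to cover max_msr_list_size() and then some.
 */
static const u32 msr_list_page_order = 5;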

> > +enum atomic_switch_msr_scenario {
> > +     VM_ENTER_LOAD,
> > +     VM_EXIT_LOAD,
> > +     VM_EXIT_STORE,
> > +     ATOMIC_SWITCH_MSR_SCENARIO_END,
> > +};
>
> How about:
>
> enum atomic_switch_msr_lists {
>         VM_ENTER_LOAD,
>         VM_EXIT_LOAD,
>         VM_EXIT_STORE,
>         NR_ATOMIC_SWITCH_MSR_LISTS,
> };
>
> IMO that yields a much more intuitive test loop:
>
>         for (i = 0; i < NR_ATOMIC_SWITCH_MSR_LISTS; i++) {
>         }
>
> But we probably don't even need a loop...

Ack. Got rid of the loop.

> > +static void atomic_switch_msr_limit_test_guest(void)
> > +{
> > +     vmcall();
> > +}
> > +
> > +static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
> > +{
> > +     int i;
> > +
> > +     for (i = 0; i < count; i++) {
> > +             msr_list[i].index = MSR_IA32_TSC;
> > +             msr_list[i].reserved = 0;
> > +             msr_list[i].value = 0x1234567890abcdef;
>
> Maybe overkill, but we can use a fast string op for this.  I think
> I got the union right?
>
> static void populate_msr_list(struct vmx_msr_entry *msr_list, int count)
> {
>         union {
>                 struct vmx_msr_entry msr;
>                 u64 val;
>         } tmp;
>
>         tmp.msr.index = MSR_IA32_TSC;
>         tmp.msr.reserved = 0;
>         tmp.msr.value = 0x1234567890abcdef;
>
>         asm volatile (
>                 "rep stosq\n\t"
>                 : "=c"(count), "=D"(msr_list)
>                 : "a"(tmp.val), "c"(count), "D"(msr_list)
>                 : "memory"
>         );
> }

Skipped, per our previous conversation that this doesn't work, since each
MSR-list entry is 16 bytes.
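For context, the entry layout (roughly as defined in x86/vmx.h, matching the
SDM) spans two qwords, which is why a single 8-byte rep stosq pattern can't
fill a whole entry:

struct vmx_msr_entry {
	u32 index;	/* MSR index, e.g. MSR_IA32_TSC */
	u32 reserved;	/* must be zero */
	u64 value;	/* value to load/store */
} __attribute__((aligned(16)));	/* 16 bytes per entry */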

> > +     for (s = 0; s < ATOMIC_SWITCH_MSR_SCENARIO_END; s++) {
> > +             switch (s) {
> > +             case VM_ENTER_LOAD:
> > +                     addr_field = ENTER_MSR_LD_ADDR;
> > +                     cnt_field = ENT_MSR_LD_CNT;
> > +                     break;
> > +             case VM_EXIT_LOAD:
> > +                     addr_field = EXIT_MSR_LD_ADDR;
> > +                     cnt_field = EXI_MSR_LD_CNT;
> > +                     break;
> > +             case VM_EXIT_STORE:
> > +                     addr_field = EXIT_MSR_ST_ADDR;
> > +                     cnt_field = EXI_MSR_ST_CNT;
> > +                     break;
> > +             default:
> > +                     TEST_ASSERT(false);
> > +             }
> > +
> > +             msr_list = (struct vmx_msr_entry *)vmcs_read(addr_field);
> > +             memset(msr_list, 0xff, two_mb);
>
> Writing 8mb of data for each test is a waste of time, i.e. 6mb to reset
> the three lists, and another 2mb to populate the target list.
>
> The for-loop in the helper is also confusing and superfluous.

Ack. Got rid of the helper.

> > +     /* Setup atomic MSR switch lists. */
> > +     msr_list = alloc_2m_page();
> > +     vmcs_write(ENTER_MSR_LD_ADDR, virt_to_phys(msr_list));
> > +     msr_list = alloc_2m_page();
> > +     vmcs_write(EXIT_MSR_LD_ADDR, virt_to_phys(msr_list));
> > +     msr_list = alloc_2m_page();
> > +     vmcs_write(EXIT_MSR_ST_ADDR, virt_to_phys(msr_list));
>
> This memory should really be freed.  Not holding pointers for each list
> also seems silly, e.g. requires a VMREAD just to get a pointer.

Done.
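In the next version the lists are held in local pointers and freed at the end,
along these lines (a sketch; the cleanup assumes the lib's free_pages(mem, size)
helper):

	struct vmx_msr_entry *vm_enter_load;
	struct vmx_msr_entry *vm_exit_load;
	struct vmx_msr_entry *vm_exit_store;

	vm_enter_load = alloc_pages(msr_list_page_order);
	vm_exit_load  = alloc_pages(msr_list_page_order);
	vm_exit_store = alloc_pages(msr_list_page_order);

	vmcs_write(ENTER_MSR_LD_ADDR, virt_to_phys(vm_enter_load));
	vmcs_write(EXIT_MSR_LD_ADDR, virt_to_phys(vm_exit_load));
	vmcs_write(EXIT_MSR_ST_ADDR, virt_to_phys(vm_exit_store));

	/* ... run the test ... */

	/* Cleanup; assumes lib/alloc_page's free_pages(mem, size). */
	free_pages(vm_enter_load, PAGE_SIZE << msr_list_page_order);
	free_pages(vm_exit_load, PAGE_SIZE << msr_list_page_order);
	free_pages(vm_exit_store, PAGE_SIZE << msr_list_page_order);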

> > +
> > +     /* Execute each test case. */
> > +     for (s = 0; s < ATOMIC_SWITCH_MSR_SCENARIO_END; s++) {
>
> Since you're testing the passing case, why not test all three at once?
> I.e. hammer KVM while also consuming less test cycles.  The "MSR switch"
> test already verifies the correctness of each list.

Done.
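I.e. the single-pass version fills all three lists to the max and does one
guest entry, roughly like this (a sketch; guest setup and exit handling via the
v2 framework's test_set_guest()/enter_guest()/assert_exit_reason() helpers are
assumed):

	int count = max_msr_list_size();

	populate_msr_list(vm_enter_load, count);
	populate_msr_list(vm_exit_load, count);
	populate_msr_list(vm_exit_store, count);

	vmcs_write(ENT_MSR_LD_CNT, count);
	vmcs_write(EXI_MSR_LD_CNT, count);
	vmcs_write(EXI_MSR_ST_CNT, count);

	/* A single entry/exit pair exercises all three lists at once. */
	enter_guest();
	assert_exit_reason(VMX_VMCALL);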

> > +             struct vmx_msr_entry *msr_list;
> > +             int count = max_msr_list_size();
> > +
> > +             switch (s) {
> > +             case VM_ENTER_LOAD:
> > +                     msr_list = (struct vmx_msr_entry *)vmcs_read(
> > +                                     ENTER_MSR_LD_ADDR);
>
> These should use phys_to_virt() since virt_to_phys() is used to write them.

Hmm. Actually, why don't we just use an explicit (u64) cast? That's
what I originally had, but when Jim pre-reviewed the patch before I
posted it to the list, he suggested virt_to_phys() (with a '?'). I didn't
really understand why that was better than the explicit cast. And your
suggestion to use phys_to_virt() doesn't work here, because it returns
a pointer rather than a u64.
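For reference, here's how the two conversions pair up (a sketch only; as far as
I can tell both helpers are thin identity conversions in the x86 kvm-unit-tests
environment, which is why a raw (u64) cast behaves the same in practice):

	/* Write side: the VMCS field holds a physical address. */
	vmcs_write(ENTER_MSR_LD_ADDR, virt_to_phys(msr_list));	/* or (u64)msr_list */

	/* Read side: convert the address back into a usable pointer. */
	msr_list = phys_to_virt(vmcs_read(ENTER_MSR_LD_ADDR));

With the lists held in local pointers, the read-back conversion goes away
entirely.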

> > +                     break;
> > +             case VM_EXIT_LOAD:
> > +                     msr_list = (struct vmx_msr_entry *)vmcs_read(
> > +                                     EXIT_MSR_LD_ADDR);
> > +                     break;
> > +             case VM_EXIT_STORE:
> > +                     msr_list = (struct vmx_msr_entry *)vmcs_read(
> > +                                     EXIT_MSR_ST_ADDR);
> > +                     break;
> > +             default:
> > +                     report("Bad test scenario, %d.", false, s);
> > +                     continue;
> > +             }
> > +
> > +             configure_atomic_switch_msr_limit_test(msr_list, count);
>
> Again, feeding the list into a helper that also iterates over the lists
> is not intuitive in terms of understanding what is being tested.

Done. Got rid of the loop.

> > +     /* Atomic MSR switch tests. */
> > +     TEST(atomic_switch_msr_limit_test),
>
> This is a misleading name, e.g. it took me quite a while to realize this
> is testing only the passing scenario.  For me, "limit test" implies that
> it'd be deliberately exceeding the limit, or at least testing both the
> passing and failing cases.  I suppose we can't easily test the VMX abort
> cases, but we can at least test VM_ENTER_LOAD.

Done. Renamed to atomic_switch_max_msrs_test.

> Distilling things down to the bare minimum yields something like the
> following.

Thanks!
