linux-kselftest.vger.kernel.org archive mirror
From: Ben Gardon <bgardon@google.com>
To: Peter Xu <peterx@redhat.com>
Cc: kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Cannon Matthews <cannonmatthews@google.com>,
	Andrew Jones <drjones@redhat.com>
Subject: Re: [PATCH 0/9] Create a userfaultfd demand paging test
Date: Mon, 30 Sep 2019 10:02:11 -0700	[thread overview]
Message-ID: <CANgfPd-=OWEiVkQMzh2mkH1ch9VeZsoc_KykPp6AqtkpQh-7EQ@mail.gmail.com>
In-Reply-To: <20190929072248.GB8903@xz-x1>

Hi Peter,
You're absolutely right that we could demonstrate more contention by
avoiding UFFD and just letting the kernel resolve page faults. I used
UFFD in this test, and in the benchmarking for the other MMU patch set,
because I believe it's a more realistic scenario. A simpler page-access
benchmark would be better for identifying further scaling problems
within the MMU, but the only situation I can think of where that would
be used is VM boot. However, we don't usually see many vCPUs touching
memory all over the place on boot. In a migration or restore without
demand paging, the memory would have to be pre-populated with the
contents of guest memory and the KVM MMU fault handler wouldn't be
taking a fault in get_user_pages. In the interest of eliminating the
delay from UFFD, I will add an option to use anonymous page faults or
to prefault memory instead.
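
To make that concrete, here is a minimal sketch of what the prefault
option could look like: the host writes to every page of the region
before the vCPUs start, so the KVM fault path finds the backing pages
already populated. The names here (prefault_region, guest_mem) are
illustrative, not taken from the actual selftest:

#include <stddef.h>
#include <stdint.h>

static void prefault_region(uint8_t *guest_mem, size_t size,
			    size_t page_size)
{
	size_t off;

	/* One write per page forces allocation of the backing page. */
	for (off = 0; off < size; off += page_size)
		guest_mem[off] = 0;
}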

I don't have any plans to customize the UFFD implementation at the
moment, but experimenting with UFFD strategies will be useful for
building higher performance post-copy in QEMU and other userspaces in
the future.
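
For anyone experimenting along those lines, the naive strategy the
test implements (described in the cover letter quoted below) boils
down to a thread that polls a userfaultfd and resolves each fault by
copying a static buffer into the faulting page with UFFDIO_COPY. The
following is a sketch of that approach, not the selftest's actual
code: registering the region with UFFDIO_REGISTER is assumed to have
happened elsewhere, and error handling is trimmed:

#include <poll.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

static void handle_uffd_faults(int uffd, const void *src,
			       unsigned long page_size)
{
	struct pollfd pollfd = { .fd = uffd, .events = POLLIN };
	struct uffd_msg msg;
	struct uffdio_copy copy;

	for (;;) {
		/* Wait for the next fault on this userfaultfd. */
		if (poll(&pollfd, 1, -1) <= 0)
			break;
		if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
			continue;
		if (msg.event != UFFD_EVENT_PAGEFAULT)
			continue;

		/*
		 * Resolve the fault by copying the static buffer into
		 * the faulting page; there are no guest memory contents
		 * to fetch, which keeps the handler's work minimal.
		 */
		copy.src = (unsigned long)src;
		copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
		copy.len = page_size;
		copy.mode = 0;
		ioctl(uffd, UFFDIO_COPY, &copy);
	}
}
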
Thank you for taking a look at these patches.
Ben

On Sun, Sep 29, 2019 at 12:23 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Sep 27, 2019 at 09:18:28AM -0700, Ben Gardon wrote:
> > When handling page faults for many vCPUs during demand paging, KVM's MMU
> > lock becomes highly contended. This series creates a test with a naive
> > userfaultfd based demand paging implementation to demonstrate that
> > contention. This test serves both as a functional test of userfaultfd
> > and as a microbenchmark of demand paging performance with a variable
> > number of vCPUs and amount of memory per vCPU.
> >
> > The test creates N userfaultfd threads, N vCPUs, and a region of memory
> > with M pages per vCPU. The N userfaultfd polling threads are each set up
> > to serve faults on a region of memory corresponding to one of the vCPUs.
> > Each vCPU is then started and sequentially touches each page of its
> > disjoint memory region. In response to faults, the userfaultfd
> > threads copy a static buffer into the guest's memory. This creates a
> > worst case for MMU lock contention as we have removed most of the
> > contention between the userfaultfd threads and there is no time required
> > to fetch the contents of guest memory.
>
> Hi, Ben,
>
> Even though I may not have enough MMU knowledge to say this... this of
> course looks like a good test at least to me.  I'm just curious about
> whether you have plans to customize the userfaultfd handler in the
> future with this infrastructure?
>
> I ask because, IIUC, with this series userfaultfd's only role is to
> introduce a relatively ad-hoc delay to page faults.  In other words,
> I'm also curious what the numbers would look like (as you mentioned
> in your MMU rework cover letter) if you simply started hundreds of
> vcpus and ran the same test, but took the default anonymous page
> faults rather than uffd page faults.  I feel like even without uffd
> there could already be huge contention.  Or did I miss anything
> important in your decision to use userfaultfd?
>
> Thanks,
>
> --
> Peter Xu


Thread overview: 15+ messages
2019-09-27 16:18 [PATCH 0/9] Create a userfaultfd demand paging test Ben Gardon
2019-09-27 16:18 ` [PATCH 1/9] KVM: selftests: Create a " Ben Gardon
2019-09-27 16:18 ` [PATCH 2/9] KVM: selftests: Add demand paging content to the " Ben Gardon
2019-09-29  7:11   ` Peter Xu
2019-09-27 16:18 ` [PATCH 3/9] KVM: selftests: Add memory size parameter " Ben Gardon
2019-09-27 16:18 ` [PATCH 4/9] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
2019-10-03  7:38   ` Andrew Jones
2019-09-27 16:18 ` [PATCH 5/9] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
2019-09-27 16:18 ` [PATCH 6/9] KVM: selftests: Time guest demand paging Ben Gardon
2019-09-27 16:18 ` [PATCH 7/9] KVM: selftests: Add parameter to _vm_create for memslot 0 base paddr Ben Gardon
2019-10-03  8:10   ` Andrew Jones
2019-09-27 16:18 ` [PATCH 8/9] KVM: selftests: Support large VMs in demand paging test Ben Gardon
2019-09-29  7:22 ` [PATCH 0/9] Create a userfaultfd " Peter Xu
2019-09-30 17:02   ` Ben Gardon [this message]
2019-12-16 21:35 Ben Gardon
