From: Peter Xu <>
To: Ben Gardon <>
	Paolo Bonzini <>,
	Cannon Matthews <>,
	Andrew Jones <>
Subject: Re: [PATCH 0/9] Create a userfaultfd demand paging test
Date: Sun, 29 Sep 2019 15:22:48 +0800	[thread overview]
Message-ID: <20190929072248.GB8903@xz-x1> (raw)
In-Reply-To: <>

On Fri, Sep 27, 2019 at 09:18:28AM -0700, Ben Gardon wrote:
> When handling page faults for many vCPUs during demand paging, KVM's MMU
> lock becomes highly contended. This series creates a test with a naive
> userfaultfd based demand paging implementation to demonstrate that
> contention. This test serves both as a functional test of userfaultfd
> and a microbenchmark of demand paging performance with a variable number
> of vCPUs and memory per vCPU.
>
> The test creates N userfaultfd threads, N vCPUs, and a region of memory
> with M pages per vCPU. The N userfaultfd polling threads are each set up
> to serve faults on a region of memory corresponding to one of the vCPUs.
> Each of the vCPUs is then started, and touches each page of its disjoint
> memory region, sequentially. In response to faults, the userfaultfd
> threads copy a static buffer into the guest's memory. This creates a
> worst case for MMU lock contention as we have removed most of the
> contention between the userfaultfd threads and there is no time required
> to fetch the contents of guest memory.

Hi, Ben,

Even though I may not have enough MMU knowledge to say this... this of
course looks like a good test at least to me.  I'm just curious
whether you have a plan to customize the userfaultfd handler in the
future with this infrastructure?

Asked because IIUC with this series userfaultfd only plays a role in
introducing a relatively ad hoc delay to page faults.  In other words,
I'm also curious what the numbers would look like (as you mentioned in
your MMU rework cover letter) if you simply started hundreds of vcpus
and ran the same test, but with the default anonymous page faults
rather than uffd page faults.  I feel like even without uffd there
could already be huge contention there.  Or did I miss anything
important in your decision to use userfaultfd?


Peter Xu


Thread overview: 15+ messages
2019-09-27 16:18 [PATCH 0/9] Create a userfaultfd demand paging test Ben Gardon
2019-09-27 16:18 ` [PATCH 1/9] KVM: selftests: Create a " Ben Gardon
2019-09-27 16:18 ` [PATCH 2/9] KVM: selftests: Add demand paging content to the " Ben Gardon
2019-09-29  7:11   ` Peter Xu
2019-09-27 16:18 ` [PATCH 3/9] KVM: selftests: Add memory size parameter " Ben Gardon
2019-09-27 16:18 ` [PATCH 4/9] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
2019-10-03  7:38   ` Andrew Jones
2019-09-27 16:18 ` [PATCH 5/9] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
2019-09-27 16:18 ` [PATCH 6/9] KVM: selftests: Time guest demand paging Ben Gardon
2019-09-27 16:18 ` [PATCH 7/9] KVM: selftests: Add parameter to _vm_create for memslot 0 base paddr Ben Gardon
2019-10-03  8:10   ` Andrew Jones
2019-09-27 16:18 ` [PATCH 8/9] KVM: selftests: Support large VMs in demand paging test Ben Gardon
2019-09-29  7:22 ` Peter Xu [this message]
2019-09-30 17:02   ` [PATCH 0/9] Create a userfaultfd " Ben Gardon
2019-12-16 21:35 Ben Gardon