From: Peter Xu <peterx@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
Date: Sun, 15 Dec 2019 12:21:24 -0500
Message-ID: <20191215172124.GA83861@xz-x1>
In-Reply-To: <3e6cb5ec-66c0-00ab-b75e-ad2beb1d216d@redhat.com>

On Tue, Dec 10, 2019 at 06:09:02PM +0100, Paolo Bonzini wrote:
> On 10/12/19 16:52, Peter Xu wrote:
> > On Tue, Dec 10, 2019 at 11:07:31AM +0100, Paolo Bonzini wrote:
> >>> I'm thinking whether I can start
> >>> to use this information in the next post on solving an issue I
> >>> encountered with the waitqueue.
> >>>
> >>> Current waitqueue is still problematic in that it could wait even with
> >>> the mmu lock held when with vcpu context.
> >>
> >> I think the idea of the soft limit is that the waiting just cannot
> >> happen.  That is, the number of dirtied pages _outside_ the guest (guest
> >> accesses are taken care of by PML, and are subtracted from the soft
> >> limit) cannot exceed hard_limit - (soft_limit + pml_size).
> > 
> > So the question goes back to whether this is guaranteed somehow?  Or
> > do you prefer us to keep the warn_on_once until it triggers then we
> > can analyze (which I doubt..)?
> 
> Yes, I would like to keep the WARN_ON_ONCE just because you never know.
> 
> Of course it would be much better to audit the calls to kvm_write_guest
> and figure out how many could trigger (e.g. two from the operands of an
> emulated instruction, 5 from a nested EPT walk, 1 from a page walk, etc.).

I would say we'd better either audit all the call sites to prove that
the ring can never overflow, or keep the waitqueue at least.  The
problem is that if we release KVM with only the WARN_ON_ONCE and later
find that it can be triggered and a full ring can't be avoided, then
the interface and design are broken, and it could even be too late to
fix after the interface is published.

(Actually I was not certain about the previous clear_dirty interface
 either, where we introduced a new capability for it.  I'm not sure
 whether that could have been avoided, because after all the initial
 version did not work at all and we fixed it up without changing the
 interface.  However, for this one, if we eventually prove the design
 wrong, then IMHO we must introduce another capability for it, and
 the interface is prone to change too.)
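
To make this concrete, here is a rough sketch of the no-waitqueue
direction when pushing an entry into the ring (illustrative only: the
struct layout, field names and helpers below are made up for this
mail, they are not the code from this series):

struct dirty_ring_sketch {
        u32 size;               /* hard limit (number of entries) */
        u32 soft_limit;         /* ask userspace to harvest at this point */
        u32 dirty_index;        /* producer index */
        u32 reset_index;        /* consumer (reset) index */
};

static inline u32 dirty_ring_used(struct dirty_ring_sketch *ring)
{
        return ring->dirty_index - ring->reset_index;
}

/*
 * Returns <0 on overflow, 1 if userspace should be asked to harvest,
 * 0 otherwise.
 */
static int dirty_ring_push(struct dirty_ring_sketch *ring,
                           u32 slot, u64 offset)
{
        /*
         * If this ever fires, some path kept dirtying pages after the
         * soft limit was reached and the ring could not absorb them,
         * i.e. the "no waitqueue needed" assumption is broken.
         */
        if (WARN_ON_ONCE(dirty_ring_used(ring) >= ring->size))
                return -EBUSY;

        /* ...fill the entry at dirty_index with (slot, offset)... */
        ring->dirty_index++;

        return dirty_ring_used(ring) >= ring->soft_limit;
}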

So, in the hope that we can avoid the waitqueue, I checked all the
callers of mark_page_dirty_in_slot().  Since this initial work is
x86-only, I didn't look into the other archs, assuming that can be
done later when the feature is implemented for them (and that will
for sure also cover the common code):

    mark_page_dirty_in_slot calls, per-vm (x86 only)
        __kvm_write_guest_page
            kvm_write_guest_page
                init_rmode_tss
                    vmx_set_tss_addr
                        kvm_vm_ioctl_set_tss_addr [*]
                init_rmode_identity_map
                    vmx_create_vcpu [*]
                vmx_write_pml_buffer
                    kvm_arch_write_log_dirty [&]
                kvm_write_guest
                    kvm_hv_setup_tsc_page
                        kvm_guest_time_update [&]
                    nested_flush_cached_shadow_vmcs12 [&]
                    kvm_write_wall_clock [&]
                    kvm_pv_clock_pairing [&]
                    kvmgt_rw_gpa [?]
                    kvm_write_guest_offset_cached
                        kvm_steal_time_set_preempted [&]
                        kvm_write_guest_cached
                            pv_eoi_put_user [&]
                            kvm_lapic_sync_to_vapic [&]
                            kvm_setup_pvclock_page [&]
                            record_steal_time [&]
                            apf_put_user [&]
                kvm_clear_guest_page
                    init_rmode_tss [*] (see above)
                    init_rmode_identity_map [*] (see above)
                    kvm_clear_guest
                        synic_set_msr
                            kvm_hv_set_msr [&]
        kvm_write_guest_offset_cached [&] (see above)
        mark_page_dirty
            kvm_hv_set_msr_pw [&]

We should only need to look at the leaves of the call tree, because
that is where the dirty requests originate.  I've marked each leaf
with one of the tags below so it's easier to focus:

Cases with [*]: should not matter much
           [&]: actually with a per-vcpu context in the upper layer
           [?]: uncertain...

I was a bit surprised after taking these notes, since I found that
besides the cases that can probably be ignored (marked as [*]), most
of the remaining per-vm dirty requests actually do have a vcpu
context.

Now that we have kvm_get_running_vcpu(), all the [&] cases should be
fine without changing anything.  Still, if no one disagrees, I tend to
add another patch in the next post that explicitly converts all the
[&] cases to pass a vcpu pointer instead of a kvm pointer, so that we
can then verify it against kvm_get_running_vcpu().
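
As a sketch of what I mean by that conversion (again illustrative
only, the push helpers below are placeholders rather than functions
from this series):

static void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_vcpu *vcpu,
                                    struct kvm_memory_slot *slot, gfn_t gfn)
{
        if (vcpu)
                /* An explicitly passed vcpu must be the running one. */
                WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());
        else
                vcpu = kvm_get_running_vcpu();

        if (vcpu)
                /* The [&] cases: a vcpu context exists, use its ring. */
                dirty_ring_push_vcpu(vcpu, slot, gfn);
        else
                /* True per-vm cases, e.g. the [*] ioctls at VM setup. */
                dirty_ring_push_vm(kvm, slot, gfn);
}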

So the only remaining uncertainty is kvmgt_rw_gpa(), which is marked
as [?].  Could this happen frequently?  I would guess the answer is
that we don't know (which means it can).

> 
> > One thing to mention is that for with-vcpu cases, we probably can even
> > stop KVM_RUN immediately as long as either the per-vm or per-vcpu ring
> > reaches the softlimit, then for vcpu case it should be easier to
> > guarantee that.  What I want to know is the rest of cases like ioctls
> > or even something not from the userspace (which I think I should read
> > more later..).
> 
> Which ioctls?  Most ioctls shouldn't dirty memory at all.

init_rmode_tss or init_rmode_identity_map.  But I've marked them as
unimportant because they should only happen once at boot.

> 
> >>> And if we see if the mark_page_dirty_in_slot() is not with a vcpu
> >>> context (e.g. kvm_mmu_page_fault) but with an ioctl context (those
> >>> cases we'll use per-vm dirty ring) then it's probably fine.
> >>>
> >>> My planned solution:
> >>>
> >>> - When kvm_get_running_vcpu() != NULL, we postpone the waitqueue waits
> >>>   until we finished handling this page fault, probably in somewhere
> >>>   around vcpu_enter_guest, so that we can do wait_event() after the
> >>>   mmu lock released
> >>
> >> I think this can cause a race:
> >>
> >> 	vCPU 1			vCPU 2		host
> >> 	---------------------------------------------------------------
> >> 	mark page dirty
> >> 				write to page
> >> 						treat page as not dirty
> >> 	add page to ring
> >>
> >> where vCPU 2 skips the clean-page slow path entirely.
> > 
> > If we're still with the rule in userspace that we first do RESET then
> > collect and send the pages (just like what we've discussed before),
> > then IMHO it's fine to have vcpu2 to skip the slow path?  Because
> > RESET happens at "treat page as not dirty", then if we are sure that
> > we only collect and send pages after that point, then the latest
> > "write to page" data from vcpu2 won't be lost even if vcpu2 is not
> > blocked by vcpu1's ring full?
> 
> Good point, the race would become
> 
>  	vCPU 1			vCPU 2		host
>  	---------------------------------------------------------------
>  	mark page dirty
>  				write to page
> 						reset rings
> 						  wait for mmu lock
>  	add page to ring
> 	release mmu lock
> 						  ...do reset...
> 						  release mmu lock
> 						page is now dirty

Hmm, the page will be dirty after the reset, but is that an issue?

Or, could you help me to identify what I've missed?

> 
> > Maybe we can also consider to let mark_page_dirty_in_slot() return a
> > value, then the upper layer could have a chance to skip the spte
> > update if mark_page_dirty_in_slot() fails to mark the dirty bit, so it
> > can return directly with RET_PF_RETRY.
> 
> I don't think that's possible, most writes won't come from a page fault
> path and cannot retry.

Yep, maybe I should put it the other way round: we only wait if
kvm_get_running_vcpu() == NULL.  Then, somewhere near
vcpu_enter_guest(), we add a check to wait if the per-vcpu ring is
full.  Would that work?
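
For what I have in mind, a minimal sketch (helper and field names are
made up here, and whether we wait or exit to userspace is still open):

/* Called with no locks held, somewhere near vcpu_enter_guest(). */
static void dirty_ring_check_before_enter_guest(struct kvm_vcpu *vcpu)
{
        /*
         * The vcpu never sleeps inside mark_page_dirty_in_slot()
         * itself; only here, outside the mmu lock, do we throttle.
         */
        if (!dirty_ring_soft_full(vcpu))
                return;

        /*
         * Either exit to userspace with a dedicated exit reason so the
         * ring can be harvested and reset, or wait here for the reset.
         */
        wait_event_interruptible(vcpu->dirty_ring_waitq,
                                 !dirty_ring_soft_full(vcpu));
}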

Thanks,

-- 
Peter Xu


Thread overview: 121+ messages
2019-11-29 21:34 [PATCH RFC 00/15] KVM: Dirty ring interface Peter Xu
2019-11-29 21:34 ` [PATCH RFC 01/15] KVM: Move running VCPU from ARM to common code Peter Xu
2019-12-03 19:01   ` Sean Christopherson
2019-12-04  9:42     ` Paolo Bonzini
2019-12-09 22:05       ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 02/15] KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot Peter Xu
2019-12-02 19:32   ` Sean Christopherson
2019-12-02 20:49     ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 03/15] KVM: Add build-time error check on kvm_run size Peter Xu
2019-12-02 19:30   ` Sean Christopherson
2019-12-02 20:53     ` Peter Xu
2019-12-02 22:19       ` Sean Christopherson
2019-12-02 22:40         ` Peter Xu
2019-12-03  5:50           ` Sean Christopherson
2019-12-03 13:41         ` Paolo Bonzini
2019-12-03 17:04           ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking Peter Xu
2019-12-02 20:10   ` Sean Christopherson
2019-12-02 21:16     ` Peter Xu
2019-12-02 21:50       ` Sean Christopherson
2019-12-02 23:09         ` Peter Xu
2019-12-03 13:48         ` Paolo Bonzini
2019-12-03 18:46           ` Sean Christopherson
2019-12-04 10:05             ` Paolo Bonzini
2019-12-07  0:29               ` Sean Christopherson
2019-12-09  9:37                 ` Paolo Bonzini
2019-12-09 21:54               ` Peter Xu
2019-12-10 10:07                 ` Paolo Bonzini
2019-12-10 15:52                   ` Peter Xu
2019-12-10 17:09                     ` Paolo Bonzini
2019-12-15 17:21                       ` Peter Xu [this message]
2019-12-16 10:08                         ` Paolo Bonzini
2019-12-16 18:54                           ` Peter Xu
2019-12-17  9:01                             ` Paolo Bonzini
2019-12-17 16:24                               ` Peter Xu
2019-12-17 16:28                                 ` Paolo Bonzini
2019-12-18 21:58                                   ` Peter Xu
2019-12-18 22:24                                     ` Sean Christopherson
2019-12-18 22:37                                       ` Paolo Bonzini
2019-12-18 22:49                                         ` Peter Xu
2019-12-17  2:28                           ` Tian, Kevin
2019-12-17 16:18                             ` Alex Williamson
2019-12-17 16:30                               ` Paolo Bonzini
2019-12-18  0:29                                 ` Tian, Kevin
     [not found]                           ` <AADFC41AFE54684AB9EE6CBC0274A5D19D645E5F@SHSMSX104.ccr.corp.intel.com>
2019-12-17  5:17                             ` Tian, Kevin
2019-12-17  5:25                               ` Yan Zhao
2019-12-17 16:24                                 ` Alex Williamson
2019-12-03 19:13   ` Sean Christopherson
2019-12-04 10:14     ` Paolo Bonzini
2019-12-04 14:33       ` Sean Christopherson
2019-12-04 10:38   ` Jason Wang
2019-12-04 11:04     ` Paolo Bonzini
2019-12-04 19:52       ` Peter Xu
2019-12-05  6:51         ` Jason Wang
2019-12-05 12:08           ` Peter Xu
2019-12-05 13:12             ` Jason Wang
2019-12-10 13:25       ` Michael S. Tsirkin
2019-12-10 13:31         ` Paolo Bonzini
2019-12-10 16:02           ` Peter Xu
2019-12-10 21:53             ` Michael S. Tsirkin
2019-12-11  9:05               ` Paolo Bonzini
2019-12-11 13:04                 ` Michael S. Tsirkin
2019-12-11 14:54                   ` Peter Xu
2019-12-10 21:48           ` Michael S. Tsirkin
2019-12-11 12:53   ` Michael S. Tsirkin
2019-12-11 14:14     ` Paolo Bonzini
2019-12-11 20:59     ` Peter Xu
2019-12-11 22:57       ` Michael S. Tsirkin
2019-12-12  0:08         ` Paolo Bonzini
2019-12-12  7:36           ` Michael S. Tsirkin
2019-12-12  8:12             ` Paolo Bonzini
2019-12-12 10:38               ` Michael S. Tsirkin
2019-12-15 17:33           ` Peter Xu
2019-12-16  9:47             ` Michael S. Tsirkin
2019-12-16 15:07               ` Peter Xu
2019-12-16 15:33                 ` Michael S. Tsirkin
2019-12-16 15:47                   ` Peter Xu
2019-12-11 17:24   ` Christophe de Dinechin
2019-12-13 20:23     ` Peter Xu
2019-12-14  7:57       ` Paolo Bonzini
2019-12-14 16:26         ` Peter Xu
2019-12-16  9:29           ` Paolo Bonzini
2019-12-16 15:26             ` Peter Xu
2019-12-16 15:31               ` Paolo Bonzini
2019-12-16 15:43                 ` Peter Xu
2019-12-17 12:16         ` Christophe de Dinechin
2019-12-17 12:19           ` Paolo Bonzini
2019-12-17 15:38             ` Peter Xu
2019-12-17 16:31               ` Paolo Bonzini
2019-12-17 16:42                 ` Peter Xu
2019-12-17 16:48                   ` Paolo Bonzini
2019-12-17 19:41                     ` Peter Xu
2019-12-18  0:33                       ` Paolo Bonzini
2019-12-18 16:32                         ` Peter Xu
2019-12-18 16:41                           ` Paolo Bonzini
2019-12-20 18:19       ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 05/15] KVM: Make dirty ring exclusive to dirty bitmap log Peter Xu
2019-11-29 21:34 ` [PATCH RFC 06/15] KVM: Introduce dirty ring wait queue Peter Xu
2019-11-29 21:34 ` [PATCH RFC 07/15] KVM: X86: Implement ring-based dirty memory tracking Peter Xu
2019-11-29 21:34 ` [PATCH RFC 08/15] KVM: selftests: Always clear dirty bitmap after iteration Peter Xu
2019-11-29 21:34 ` [PATCH RFC 09/15] KVM: selftests: Sync uapi/linux/kvm.h to tools/ Peter Xu
2019-11-29 21:35 ` [PATCH RFC 10/15] KVM: selftests: Use a single binary for dirty/clear log test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 11/15] KVM: selftests: Introduce after_vcpu_run hook for dirty " Peter Xu
2019-11-29 21:35 ` [PATCH RFC 12/15] KVM: selftests: Add dirty ring buffer test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 13/15] KVM: selftests: Let dirty_log_test async for dirty ring test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 14/15] KVM: selftests: Add "-c" parameter to dirty log test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 15/15] KVM: selftests: Test dirty ring waitqueue Peter Xu
2019-11-30  8:29 ` [PATCH RFC 00/15] KVM: Dirty ring interface Paolo Bonzini
2019-12-02  2:13   ` Peter Xu
2019-12-03 13:59     ` Paolo Bonzini
2019-12-05 19:30       ` Peter Xu
2019-12-05 19:59         ` Paolo Bonzini
2019-12-05 20:52           ` Peter Xu
2019-12-02 20:21   ` Sean Christopherson
2019-12-02 20:43     ` Peter Xu
2019-12-04 10:39 ` Jason Wang
2019-12-04 19:33   ` Peter Xu
2019-12-05  6:49     ` Jason Wang
2019-12-11 13:41 ` Christophe de Dinechin
2019-12-11 14:16   ` Paolo Bonzini
2019-12-11 17:15     ` Peter Xu
