From: Peter Xu <peterx@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>
Subject: Re: [PATCH RFC 00/15] KVM: Dirty ring interface
Date: Sun, 1 Dec 2019 21:13:37 -0500
Message-ID: <20191202021337.GB18887@xz-x1>
In-Reply-To: <b8f28d8c-2486-2d66-04fd-a2674b598cfd@redhat.com>

On Sat, Nov 30, 2019 at 09:29:42AM +0100, Paolo Bonzini wrote:
> Hi Peter,
> 
> thanks for the RFC!  Just a couple comments before I look at the series
> (for which I don't expect many surprises).
> 
> On 29/11/19 22:34, Peter Xu wrote:
> > I marked this series as RFC because I'm at least uncertain on this
> > change of vcpu_enter_guest():
> > 
> >         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
> >                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> >                 /*
> >                  * If this is requested, it means that we've
> >                  * marked the dirty bit in the dirty ring BUT
> >                  * we've not written the data.  Do it now.
> >                  */
> >                 r = kvm_emulate_instruction(vcpu, 0);
> >                 r = r >= 0 ? 0 : r;
> >                 goto out;
> >         }
> 
> This is not needed, it will just be a false negative (dirty page that
> actually isn't dirty).  The dirty bit will be cleared when userspace
> resets the ring buffer; then the instruction will be executed again and
> mark the page dirty again.  Since ring full is not a common condition,
> it's not a big deal.

Actually I added this only because it failed one of the unit tests
when verifying the dirty bits...  But on second thought, I probably
agree with you that we can change userspace instead to fix this.
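
To make that concrete, without the emulation the hunk could shrink to
something like this (an untested sketch on my side; per your point,
the faulting instruction is simply re-executed after userspace resets
the ring, marking the page dirty again):

        if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
                /*
                 * Exit to userspace to collect and reset the ring;
                 * the instruction that found the ring full will be
                 * re-executed on the next KVM_RUN and will re-mark
                 * the page dirty.
                 */
                vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
                r = 0;
                goto out;
        }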

I think the steps of the failing test case can be simplified into
something like this (assuming the QEMU migration context, which might
make it easier to understand):

  1. page P has data P1
  2. vcpu writes to page P with data P2
  3. vmexit (P still has data P1)
  4. mark P as dirty, ring full, user exit
  5. collect dirty bit of P, migrate P with data P1
  6. vcpu runs again for some reason, P is written with P2, user exit
     again (because the ring has already reached the soft limit)
  7. do KVM_RESET_DIRTY_RINGS
  8. never write to P again

Then P will always be P1 on the destination, while it's P2 on the
source.

I think maybe that's why we need to be very sure that when the
userspace exit happens (soft limit reached), we need to kick all the
vcpus out, and more importantly we must _not_ let them run again
before KVM_RESET_DIRTY_RINGS, otherwise we might face data
corruption.  I'm not sure whether we should mention this in the
documentation so that userspace is aware of the issue.
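
In other words, something like this rough sketch on the userspace
side (kick_all_vcpus() and collect_dirty_ring() are hypothetical
placeholders for the VMM's own machinery, not APIs from this series;
only the ordering matters):

  static void handle_dirty_ring_soft_full(struct vm *vm)
  {
          int i;

          /* 1) Force every vcpu out of KVM_RUN and keep it out. */
          kick_all_vcpus(vm);

          /* 2) Collect dirty GFNs from each vcpu's dirty ring. */
          for (i = 0; i < vm->nr_vcpus; i++)
                  collect_dirty_ring(vm->vcpus[i]);

          /*
           * 3) Only now reset the rings and let vcpus run again; if
           *    a vcpu re-enters the guest between 1) and 3), a write
           *    can land with its dirty bit already collected (the
           *    P1/P2 case above).
           */
          ioctl(vm->fd, KVM_RESET_DIRTY_RINGS);
  }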

On the other hand, I tried removing the kvm_emulate_instruction()
above and fixing the test case, but I found that the last address
dirtied before the user exit is not actually written again after the
next vmentry right after KVM_RESET_DIRTY_RINGS, so the dirty bit was
truly lost...  I'm pasting some traces below (I added some
tracepoints too; I think I'll just keep them for v2):

  ...
  dirty_log_test-29003 [001] 184503.384328: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384329: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384329: kvm_page_fault:       address 7fc036d000 error_code 582
  dirty_log_test-29003 [001] 184503.384331: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384332: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384332: kvm_page_fault:       address 7fc036d000 error_code 582
  dirty_log_test-29003 [001] 184503.384332: kvm_dirty_ring_push:  ring 1: dirty 0x37f reset 0x1c0 slot 1 offset 0x37e ret 0 (used 447)
  dirty_log_test-29003 [001] 184503.384333: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384334: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384334: kvm_page_fault:       address 7fc036e000 error_code 582
  dirty_log_test-29003 [001] 184503.384336: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384336: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384336: kvm_page_fault:       address 7fc036e000 error_code 582
  dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_push:  ring 1: dirty 0x380 reset 0x1c0 slot 1 offset 0x37f ret 1 (used 448)
  dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_exit:  vcpu 1
  dirty_log_test-29003 [001] 184503.384338: kvm_fpu:              unload
  dirty_log_test-29003 [001] 184503.384340: kvm_userspace_exit:   reason 0x1d (29)
  dirty_log_test-29000 [006] 184503.505103: kvm_dirty_ring_reset: ring 1: dirty 0x380 reset 0x380 (used 0)
  dirty_log_test-29003 [001] 184503.505184: kvm_fpu:              load
  dirty_log_test-29003 [001] 184503.505187: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.505193: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.505194: kvm_page_fault:       address 7fc036f000 error_code 582              <-------- [1]
  dirty_log_test-29003 [001] 184503.505206: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.505207: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.505207: kvm_page_fault:       address 7fc036f000 error_code 582
  dirty_log_test-29003 [001] 184503.505226: kvm_dirty_ring_push:  ring 1: dirty 0x381 reset 0x380 slot 1 offset 0x380 ret 0 (used 1)
  dirty_log_test-29003 [001] 184503.505226: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.505227: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.505228: kvm_page_fault:       address 7fc0370000 error_code 582
  dirty_log_test-29003 [001] 184503.505231: kvm_entry:            vcpu 1
  ...

The test was continuously writing to pages; in the log above it
starts from 7fc036d000.  Reason 0x1d (29) is the new dirty-ring-full
exit reason.

So far I'm still unsure of two things:

  1. Why does each page fault twice rather than once?  Take the page
     at 7fc036e000 above as an example: the first fault didn't
     trigger the mark-dirty path, and only on the 2nd EPT violation
     did we trigger kvm_dirty_ring_push.

  2. Why wasn't the last page written again after kvm_userspace_exit
     (the last page was 7fc036e000, and the test failed because
     7fc036e000 changed while its dirty bit was unset)?  In this case
     the first write after KVM_RESET_DIRTY_RINGS is the line marked
     [1].  I expected it to be a rewrite of page 7fc036e000, because
     when the user exit happens the write should logically not have
     completed yet and the rip should stay put; however at [1] it's
     already writing to a new page.

I'll continue to dig tomorrow, but quick answers would be greatly
welcome too. :)

> 
> > I did a kvm_emulate_instruction() when the dirty ring reaches the
> > soft limit and we want to exit to userspace, however I'm not really
> > sure whether it could have any side effects.  I'd appreciate any
> > comments on the above, or anything else.
> > 
> > Tests
> > ===========
> > 
> > I wanted to continue working on the QEMU part, but after I noticed
> > that the interface might still be prone to change, I posted this
> > series first.  However, to make sure it's at least working, I've
> > provided unit tests together with the series.  The unit tests should
> > be able to exercise the series on at least three major paths:
> > 
> >   (1) ./dirty_log_test -M dirty-ring
> > 
> >       This tests async ring operations: this should be the major
> >       working mode for the dirty ring interface, i.e. while the
> >       kernel is queuing more data, userspace is collecting at the
> >       same time.  The ring can hardly reach full when working like
> >       this, because in most cases the collection is fast.
> > 
> >   (2) ./dirty_log_test -M dirty-ring -c 1024
> > 
> >       This sets the ring size to be very small so that ring
> >       soft-full always triggers (soft-full is a soft limit on the
> >       ring state; when the dirty ring reaches the soft limit it'll
> >       do a userspace exit and let userspace collect the data).
> > 
> >   (3) ./dirty_log_test -M dirty-ring-wait-queue
> > 
> >       This solely tests the extreme case where the ring is full.
> >       When the ring is completely full, the thread (vcpu or not)
> >       will be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS
> >       will wake the threads up (assuming that by then the ring will
> >       no longer be full).
> 
> One question about this testcase: why does the task get into
> uninterruptible wait?

Because I'm using wait_event_killable() to wait when the ring is
completely full.  I thought we should be strict there because it's
rare after all (even rarer than reaching the soft limit), and with
that we will never have a chance to lose a dirty bit accidentally.
Or do you think we should still respond to non-fatal signals for some
reason even during that wait period?
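
To be explicit, the wait is conceptually like this simplified sketch
(the waitqueue and predicate names are illustrative, not necessarily
the exact ones in the series):

        /* vcpu side, when pushing into a completely full ring: */
        if (kvm_dirty_ring_is_full(ring)) {
                r = wait_event_killable(kvm->dirty_ring_waitq,
                                        !kvm_dirty_ring_is_full(ring));
                if (r)
                        return r;  /* only a fatal signal breaks the wait */
        }

        /* KVM_RESET_DIRTY_RINGS side, after making room: */
        wake_up_all(&kvm->dirty_ring_waitq);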

Thanks,

-- 
Peter Xu


