kvm.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Paolo Bonzini <pbonzini@redhat.com>
To: Christophe de Dinechin <dinechin@redhat.com>,
	Peter Xu <peterx@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>
Subject: Re: [PATCH RFC 00/15] KVM: Dirty ring interface
Date: Wed, 11 Dec 2019 15:16:30 +0100	[thread overview]
Message-ID: <f386bca9-b967-2e76-c580-465463843aa4@redhat.com> (raw)
In-Reply-To: <m1r21bgest.fsf@redhat.com>

On 11/12/19 14:41, Christophe de Dinechin wrote:
> 
> Peter Xu writes:
> 
>> Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring
>>
>> Overview
>> ============
>>
>> This is a continued work from Lei Cao <lei.cao@stratus.com> and Paolo
>> on the KVM dirty ring interface.  To make it simple, I'll still start
>> with version 1 as RFC.
>>
>> The new dirty ring interface is another way to collect dirty pages for
>> the virtual machine, but it is different from the existing dirty
>> logging interface in a few ways, majorly:
>>
>>   - Data format: The dirty data was in a ring format rather than a
>>     bitmap format, so the size of data to sync for dirty logging does
>>     not depend on the size of guest memory any more, but speed of
>>     dirtying.  Also, the dirty ring is per-vcpu (currently plus
>>     another per-vm ring, so total ring number is N+1), while the dirty
>>     bitmap is per-vm.
> 
> I like Sean's suggestion to fetch rings when dirtying. That could reduce
> the number of dirty rings to examine.

What do you mean by "fetch rings"?

> Also, as is, this means that the same gfn may be present in multiple
> rings, right?

I think the actual marking of a page as dirty is protected by a spinlock
but I will defer to Peter on this.

Paolo

>>
>>   - Data copy: The sync of dirty pages does not need data copy any more,
>>     but instead the ring is shared between the userspace and kernel by
>>     page sharings (mmap() on either the vm fd or vcpu fd)
>>
>>   - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
>>     KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
>>     called KVM_RESET_DIRTY_RINGS when we want to reset the collected
>>     dirty pages to protected mode again (works like
>>     KVM_CLEAR_DIRTY_LOG, but ring based)
>>
>> And more.
>>
>> I would appreciate if the reviewers can start with patch "KVM:
>> Implement ring-based dirty memory tracking", especially the document
>> update part for the big picture.  Then I'll avoid copying into most of
>> them into cover letter again.
>>
>> I marked this series as RFC because I'm at least uncertain on this
>> change of vcpu_enter_guest():
>>
>>         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
>>                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
>>                 /*
>>                         * If this is requested, it means that we've
>>                         * marked the dirty bit in the dirty ring BUT
>>                         * we've not written the date.  Do it now.
> 
> not written the "data" ?
> 
>>                         */
>>                 r = kvm_emulate_instruction(vcpu, 0);
>>                 r = r >= 0 ? 0 : r;
>>                 goto out;
>>         }
>>
>> I did a kvm_emulate_instruction() when dirty ring reaches softlimit
>> and want to exit to userspace, however I'm not really sure whether
>> there could have any side effect.  I'd appreciate any comment of
>> above, or anything else.
>>
>> Tests
>> ===========
>>
>> I wanted to continue work on the QEMU part, but after I noticed that
>> the interface might still prone to change, I posted this series first.
>> However to make sure it's at least working, I've provided unit tests
>> together with the series.  The unit tests should be able to test the
>> series in at least three major paths:
>>
>>   (1) ./dirty_log_test -M dirty-ring
>>
>>       This tests async ring operations: this should be the major work
>>       mode for the dirty ring interface, say, when the kernel is
>>       queuing more data, the userspace is collecting too.  Ring can
>>       hardly reaches full when working like this, because in most
>>       cases the collection could be fast.
>>
>>   (2) ./dirty_log_test -M dirty-ring -c 1024
>>
>>       This set the ring size to be very small so that ring soft-full
>>       always triggers (soft-full is a soft limit of the ring state,
>>       when the dirty ring reaches the soft limit it'll do a userspace
>>       exit and let the userspace to collect the data).
>>
>>   (3) ./dirty_log_test -M dirty-ring-wait-queue
>>
>>       This sololy test the extreme case where ring is full.  When the
>>       ring is completely full, the thread (no matter vcpu or not) will
>>       be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS will
>>       wake the threads up (assuming until which the ring will not be
>>       full any more).
> 
> Am I correct assuming that guest memory can be dirtied by DMA operations?
> Should
> 
> Not being that familiar with the current implementation of dirty page
> tracking, I wonder who marks the pages dirty in that case, and when?
> If the VM ring is used for I/O threads, isn't it possible that a large
> DMA could dirty a sufficiently large number of GFNs to overflow the
> associated ring? Does this case need a separate way to queue the
> dirtying I/O thread?
> 
>>
>> Thanks,
>>
>> Cao, Lei (2):
>>   KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot
>>   KVM: X86: Implement ring-based dirty memory tracking
>>
>> Paolo Bonzini (1):
>>   KVM: Move running VCPU from ARM to common code
>>
>> Peter Xu (12):
>>   KVM: Add build-time error check on kvm_run size
>>   KVM: Implement ring-based dirty memory tracking
>>   KVM: Make dirty ring exclusive to dirty bitmap log
>>   KVM: Introduce dirty ring wait queue
>>   KVM: selftests: Always clear dirty bitmap after iteration
>>   KVM: selftests: Sync uapi/linux/kvm.h to tools/
>>   KVM: selftests: Use a single binary for dirty/clear log test
>>   KVM: selftests: Introduce after_vcpu_run hook for dirty log test
>>   KVM: selftests: Add dirty ring buffer test
>>   KVM: selftests: Let dirty_log_test async for dirty ring test
>>   KVM: selftests: Add "-c" parameter to dirty log test
>>   KVM: selftests: Test dirty ring waitqueue
>>
>>  Documentation/virt/kvm/api.txt                | 116 +++++
>>  arch/arm/include/asm/kvm_host.h               |   2 -
>>  arch/arm64/include/asm/kvm_host.h             |   2 -
>>  arch/x86/include/asm/kvm_host.h               |   5 +
>>  arch/x86/include/uapi/asm/kvm.h               |   1 +
>>  arch/x86/kvm/Makefile                         |   3 +-
>>  arch/x86/kvm/mmu/mmu.c                        |   6 +
>>  arch/x86/kvm/vmx/vmx.c                        |   7 +
>>  arch/x86/kvm/x86.c                            |  12 +
>>  include/linux/kvm_dirty_ring.h                |  67 +++
>>  include/linux/kvm_host.h                      |  37 ++
>>  include/linux/kvm_types.h                     |   1 +
>>  include/uapi/linux/kvm.h                      |  36 ++
>>  tools/include/uapi/linux/kvm.h                |  47 ++
>>  tools/testing/selftests/kvm/Makefile          |   2 -
>>  .../selftests/kvm/clear_dirty_log_test.c      |   2 -
>>  tools/testing/selftests/kvm/dirty_log_test.c  | 452 ++++++++++++++++--
>>  .../testing/selftests/kvm/include/kvm_util.h  |   6 +
>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 103 ++++
>>  .../selftests/kvm/lib/kvm_util_internal.h     |   5 +
>>  virt/kvm/arm/arm.c                            |  29 --
>>  virt/kvm/arm/perf.c                           |   6 +-
>>  virt/kvm/arm/vgic/vgic-mmio.c                 |  15 +-
>>  virt/kvm/dirty_ring.c                         | 156 ++++++
>>  virt/kvm/kvm_main.c                           | 315 +++++++++++-
>>  25 files changed, 1329 insertions(+), 104 deletions(-)
>>  create mode 100644 include/linux/kvm_dirty_ring.h
>>  delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>>  create mode 100644 virt/kvm/dirty_ring.c
> 
> 
> --
> Cheers,
> Christophe de Dinechin (IRC c3d)
> 


  reply	other threads:[~2019-12-11 14:16 UTC|newest]

Thread overview: 123+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-11-29 21:34 [PATCH RFC 00/15] KVM: Dirty ring interface Peter Xu
2019-11-29 21:34 ` [PATCH RFC 01/15] KVM: Move running VCPU from ARM to common code Peter Xu
2019-12-03 19:01   ` Sean Christopherson
2019-12-04  9:42     ` Paolo Bonzini
2019-12-09 22:05       ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 02/15] KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot Peter Xu
2019-12-02 19:32   ` Sean Christopherson
2019-12-02 20:49     ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 03/15] KVM: Add build-time error check on kvm_run size Peter Xu
2019-12-02 19:30   ` Sean Christopherson
2019-12-02 20:53     ` Peter Xu
2019-12-02 22:19       ` Sean Christopherson
2019-12-02 22:40         ` Peter Xu
2019-12-03  5:50           ` Sean Christopherson
2019-12-03 13:41         ` Paolo Bonzini
2019-12-03 17:04           ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking Peter Xu
2019-12-02 20:10   ` Sean Christopherson
2019-12-02 21:16     ` Peter Xu
2019-12-02 21:50       ` Sean Christopherson
2019-12-02 23:09         ` Peter Xu
2019-12-03 13:48         ` Paolo Bonzini
2019-12-03 18:46           ` Sean Christopherson
2019-12-04 10:05             ` Paolo Bonzini
2019-12-07  0:29               ` Sean Christopherson
2019-12-09  9:37                 ` Paolo Bonzini
2019-12-09 21:54               ` Peter Xu
2019-12-10 10:07                 ` Paolo Bonzini
2019-12-10 15:52                   ` Peter Xu
2019-12-10 17:09                     ` Paolo Bonzini
2019-12-15 17:21                       ` Peter Xu
2019-12-16 10:08                         ` Paolo Bonzini
2019-12-16 18:54                           ` Peter Xu
2019-12-17  9:01                             ` Paolo Bonzini
2019-12-17 16:24                               ` Peter Xu
2019-12-17 16:28                                 ` Paolo Bonzini
2019-12-18 21:58                                   ` Peter Xu
2019-12-18 22:24                                     ` Sean Christopherson
2019-12-18 22:37                                       ` Paolo Bonzini
2019-12-18 22:49                                         ` Peter Xu
2019-12-17  2:28                           ` Tian, Kevin
2019-12-17 16:18                             ` Alex Williamson
2019-12-17 16:30                               ` Paolo Bonzini
2019-12-18  0:29                                 ` Tian, Kevin
     [not found]                           ` <AADFC41AFE54684AB9EE6CBC0274A5D19D645E5F@SHSMSX104.ccr.corp.intel.com>
2019-12-17  5:17                             ` Tian, Kevin
2019-12-17  5:25                               ` Yan Zhao
2019-12-17 16:24                                 ` Alex Williamson
2019-12-03 19:13   ` Sean Christopherson
2019-12-04 10:14     ` Paolo Bonzini
2019-12-04 14:33       ` Sean Christopherson
2019-12-04 10:38   ` Jason Wang
2019-12-04 11:04     ` Paolo Bonzini
2019-12-04 19:52       ` Peter Xu
2019-12-05  6:51         ` Jason Wang
2019-12-05 12:08           ` Peter Xu
2019-12-05 13:12             ` Jason Wang
2019-12-10 13:25       ` Michael S. Tsirkin
2019-12-10 13:31         ` Paolo Bonzini
2019-12-10 16:02           ` Peter Xu
2019-12-10 21:53             ` Michael S. Tsirkin
2019-12-11  9:05               ` Paolo Bonzini
2019-12-11 13:04                 ` Michael S. Tsirkin
2019-12-11 14:54                   ` Peter Xu
2019-12-10 21:48           ` Michael S. Tsirkin
2019-12-11 12:53   ` Michael S. Tsirkin
2019-12-11 14:14     ` Paolo Bonzini
2019-12-11 20:59     ` Peter Xu
2019-12-11 22:57       ` Michael S. Tsirkin
2019-12-12  0:08         ` Paolo Bonzini
2019-12-12  7:36           ` Michael S. Tsirkin
2019-12-12  8:12             ` Paolo Bonzini
2019-12-12 10:38               ` Michael S. Tsirkin
2019-12-15 17:33           ` Peter Xu
2019-12-16  9:47             ` Michael S. Tsirkin
2019-12-16 15:07               ` Peter Xu
2019-12-16 15:33                 ` Michael S. Tsirkin
2019-12-16 15:47                   ` Peter Xu
2019-12-11 17:24   ` Christophe de Dinechin
2019-12-13 20:23     ` Peter Xu
2019-12-14  7:57       ` Paolo Bonzini
2019-12-14 16:26         ` Peter Xu
2019-12-16  9:29           ` Paolo Bonzini
2019-12-16 15:26             ` Peter Xu
2019-12-16 15:31               ` Paolo Bonzini
2019-12-16 15:43                 ` Peter Xu
2019-12-17 12:16         ` Christophe de Dinechin
2019-12-17 12:19           ` Paolo Bonzini
2019-12-17 15:38             ` Peter Xu
2019-12-17 16:31               ` Paolo Bonzini
2019-12-17 16:42                 ` Peter Xu
2019-12-17 16:48                   ` Paolo Bonzini
2019-12-17 19:41                     ` Peter Xu
2019-12-18  0:33                       ` Paolo Bonzini
2019-12-18 16:32                         ` Peter Xu
2019-12-18 16:41                           ` Paolo Bonzini
2019-12-20 18:19       ` Peter Xu
2019-11-29 21:34 ` [PATCH RFC 05/15] KVM: Make dirty ring exclusive to dirty bitmap log Peter Xu
2019-11-29 21:34 ` [PATCH RFC 06/15] KVM: Introduce dirty ring wait queue Peter Xu
2019-11-29 21:34 ` [PATCH RFC 07/15] KVM: X86: Implement ring-based dirty memory tracking Peter Xu
2019-11-29 21:34 ` [PATCH RFC 08/15] KVM: selftests: Always clear dirty bitmap after iteration Peter Xu
2019-11-29 21:34 ` [PATCH RFC 09/15] KVM: selftests: Sync uapi/linux/kvm.h to tools/ Peter Xu
2019-11-29 21:35 ` [PATCH RFC 10/15] KVM: selftests: Use a single binary for dirty/clear log test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 11/15] KVM: selftests: Introduce after_vcpu_run hook for dirty " Peter Xu
2019-11-29 21:35 ` [PATCH RFC 12/15] KVM: selftests: Add dirty ring buffer test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 13/15] KVM: selftests: Let dirty_log_test async for dirty ring test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 14/15] KVM: selftests: Add "-c" parameter to dirty log test Peter Xu
2019-11-29 21:35 ` [PATCH RFC 15/15] KVM: selftests: Test dirty ring waitqueue Peter Xu
2019-11-30  8:29 ` [PATCH RFC 00/15] KVM: Dirty ring interface Paolo Bonzini
2019-12-02  2:13   ` Peter Xu
2019-12-03 13:59     ` Paolo Bonzini
2019-12-05 19:30       ` Peter Xu
2019-12-05 19:59         ` Paolo Bonzini
2019-12-05 20:52           ` Peter Xu
2019-12-02 20:21   ` Sean Christopherson
2019-12-02 20:43     ` Peter Xu
2019-12-04 10:39 ` Jason Wang
2019-12-04 19:33   ` Peter Xu
2019-12-05  6:49     ` Jason Wang
2019-12-11 13:41 ` Christophe de Dinechin
2019-12-11 14:16   ` Paolo Bonzini [this message]
2019-12-11 17:15     ` Peter Xu
  -- strict thread matches above, loose matches on Subject: below --
2019-11-29 21:33 Peter Xu
2019-11-29 21:32 Peter Xu

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=f386bca9-b967-2e76-c580-465463843aa4@redhat.com \
    --to=pbonzini@redhat.com \
    --cc=dgilbert@redhat.com \
    --cc=dinechin@redhat.com \
    --cc=kvm@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=peterx@redhat.com \
    --cc=sean.j.christopherson@intel.com \
    --cc=vkuznets@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).