From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Christophe de Dinechin <dinechin@redhat.com>,
	Christophe de Dinechin <christophe.de.dinechin@gmail.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Eric Auger <eric.auger@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
Date: Wed, 18 Dec 2019 01:33:01 +0100
Message-ID: <838084bf-efd7-009c-62ce-f11493242867@redhat.com>
In-Reply-To: <20191217194114.GG7258@xz-x1>

On 17/12/19 20:41, Peter Xu wrote:
> On Tue, Dec 17, 2019 at 05:48:58PM +0100, Paolo Bonzini wrote:
>> On 17/12/19 17:42, Peter Xu wrote:
>>>
>>> However I just noticed something...  Note that we still haven't
>>> looked into the non-x86 archs.  I think it's the same question as
>>> when I asked whether we could unify the kvm[_vcpu]_write()
>>> interfaces and you asked me to read the non-x86 archs first - I
>>> think it's time I read them, because it's still possible that
>>> non-x86 archs will need the per-vm ring... and that could be
>>> another problem if we eventually want to spread the dirty ring
>>> idea outside of x86.
>>
>> We can take a look, but based on the x86 experience I think it's
>> okay if we restrict the dirty ring to arches that do no VM-wide
>> accesses.
> 
> Here it is - a quick update on the callers of mark_page_dirty_in_slot().
> The same reverse trace, but ignoring all common and x86 code paths
> (which I covered in the other thread):
> 
> ==================================
> 
>    mark_page_dirty_in_slot (non-x86)
>         mark_page_dirty
>             kvm_write_guest_page
>                 kvm_write_guest
>                     kvm_write_guest_lock
>                         vgic_its_save_ite [?]
>                         vgic_its_save_dte [?]
>                         vgic_its_save_cte [?]
>                         vgic_its_save_collection_table [?]
>                         vgic_v3_lpi_sync_pending_status [?]
>                         vgic_v3_save_pending_tables [?]
>                     kvmppc_rtas_hcall [&]
>                     kvmppc_st [&]
>                     access_guest [&]
>                     put_guest_lc [&]
>                     write_guest_lc [&]
>                     write_guest_abs [&]
>             mark_page_dirty
>                 _kvm_mips_map_page_fast [&]
>                 kvm_mips_map_page [&]
>                 kvmppc_mmu_map_page [&]
>                 kvmppc_copy_guest
>                     kvmppc_h_page_init [&]
>                 kvmppc_xive_native_vcpu_eq_sync [&]
>                 adapter_indicators_set [?] (from kvm_set_irq)
>                 kvm_s390_sync_dirty_log [?]
>                 unpin_guest_page
>                     unpin_blocks [&]
>                     unpin_scb [&]
> 
> Cases with [&]: should be able to change to per-vcpu context
>            [?]: uncertain...
> 
> ==================================
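
To make the "[&]" category concrete, here is a minimal sketch of what
the per-vcpu conversion amounts to (the types below are stand-ins, and
dirty_ring_push() is hypothetical, not the RFC's actual interface):

#include <stdint.h>

/* Minimal stand-ins for the kernel types, for illustration only. */
struct vcpu;                         /* per-vcpu context, when one exists */
struct memslot {
    uint64_t base_gfn;
    uint64_t *dirty_bitmap;          /* one bit per guest page */
};

void set_bit_le(uint64_t nr, uint64_t *addr);           /* stand-in */
void dirty_ring_push(struct vcpu *v, uint64_t rel_gfn); /* hypothetical */

/* The "[&]" conversion: thread a vcpu pointer down to the site that
 * marks the page dirty, so the page can be pushed into that vcpu's
 * ring in addition to being set in the per-VM bitmap. */
static void mark_page_dirty_in_slot_sketch(struct vcpu *vcpu,
                                           struct memslot *slot,
                                           uint64_t gfn)
{
    if (slot && slot->dirty_bitmap) {
        uint64_t rel_gfn = gfn - slot->base_gfn;

        set_bit_le(rel_gfn, slot->dirty_bitmap);
        if (vcpu)
            dirty_ring_push(vcpu, rel_gfn);
    }
}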
> 
> This time we've got 8 leaves with "[?]".
> 
> I'm starting with these:
> 
>         vgic_its_save_ite [?]
>         vgic_its_save_dte [?]
>         vgic_its_save_cte [?]
>         vgic_its_save_collection_table [?]
>         vgic_v3_lpi_sync_pending_status [?]
>         vgic_v3_save_pending_tables [?]
> 
> These come from ARM-specific ioctls like KVM_DEV_ARM_ITS_SAVE_TABLES,
> KVM_DEV_ARM_ITS_RESTORE_TABLES, KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES.
> IIUC ARM needs these to allow proper migration, and they indeed run
> without a vcpu context.
> 
> (Though I'm a bit curious why ARM didn't simply migrate this
>  information explicitly from userspace; instead it seems to me that
>  ARM guests dump something into guest RAM and then try to recover
>  it from there, which seems a bit weird)
>  
> Then it's this:
> 
>         adapter_indicators_set [?]
> 
> This is s390-specific and should come from kvm_set_irq.  I'm not sure
> whether we can remove this mark_page_dirty() call if the indicator is
> applied from another kernel structure (which should be migrated
> properly IIUC).  But I might be completely wrong.
> 
>         kvm_s390_sync_dirty_log [?]
>         
> This is also s390-specific; it should collect dirty state from the
> hardware PGSTE_UC_BIT.  No vcpu context for sure.
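
For the shape of it, a hedged sketch (test_and_clear_uc_bit() below is
a hypothetical stand-in for the PGSTE_UC_BIT accessor; only
mark_page_dirty() is the real KVM helper, with gfn_t approximated as
uint64_t):

#include <stdbool.h>
#include <stdint.h>

struct kvm;                                                /* stand-in */
bool test_and_clear_uc_bit(struct kvm *kvm, uint64_t gfn); /* hypothetical */
void mark_page_dirty(struct kvm *kvm, uint64_t gfn);       /* real KVM API */

/* A hardware-driven dirty sync walks guest frames per-VM, driven by
 * the change bit, so there is no vcpu whose dirty ring the pages
 * could be pushed into. */
static void sync_dirty_log_sketch(struct kvm *kvm,
                                  uint64_t base_gfn, uint64_t npages)
{
    for (uint64_t gfn = base_gfn; gfn < base_gfn + npages; gfn++)
        if (test_and_clear_uc_bit(kvm, gfn))
            mark_page_dirty(kvm, gfn);   /* per-VM, no vcpu context */
}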
> 
> (I'd also be glad if anyone could give me a hint on why x86 cannot
>  use page table dirty bits for dirty tracking, if there's a short
>  answer...)

With PML it does.  Without PML, however, it would be much slower to
synchronize the dirty bitmap from KVM to userspace (one atomic
operation per page instead of one per 64 pages), and it would even be
impossible to implement the dirty ring.
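
To make the cost difference concrete, here is a user-space sketch
(illustrative only, not KVM code) of bitmap harvesting: a single
atomic exchange transfers and clears the dirty state of 64 pages at
once, whereas scanning hardware dirty bits would need one atomic
test-and-clear per page table entry, i.e. per page:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* Each 64-bit word of the bitmap covers 64 guest pages. */
static void harvest_dirty_bitmap(_Atomic uint64_t *bitmap,
                                 uint64_t *snapshot, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        /* one atomic operation per 64 pages */
        snapshot[i] = atomic_exchange(&bitmap[i], 0);
}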

> I think my conclusion so far...
> 
>   - for s390 I don't think we even need the dirty ring at all,
>     because hardware tracking should be more efficient, so we don't
>     need to care much about s390 in the design of the dirty ring
>     either,

I would be surprised if it's more efficient without something like PML,
but in any case the gist is correct: without write-protection-based
dirty page logging, s390 cannot use the dirty page ring buffer.

>   - for ARM, the no-vcpu-context dirty tracking probably needs to be
>     considered, but hopefully that's a very special path, so it
>     rarely happens.  The bad thing is that I didn't dig into how many
>     pages will be dirtied when an ARM guest starts to dump all these
>     things, so it could be a burst...  If it is, there's a risk of
>     triggering the ring-full condition (which we wanted to avoid..)

The comment says all vCPU locks must be held, so those paths could
just use any vCPU.  I am not sure what the upper limit on the number
of entries is, whether userspace could just dirty those pages itself,
or whether there could be a different ioctl that gets the pages into
userspace memory (and then, if needed, userspace could copy them into
guest memory itself; I don't know why it was designed like that).
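
On the burst risk above, here is a sketch of the kind of ring-fullness
accounting involved (field and function names are illustrative, not
the RFC's actual structures):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative dirty-ring bookkeeping: the producer (KVM) advances
 * dirty_index, the consumer (userspace) advances reset_index after
 * collecting entries; unsigned wrap-around keeps the math correct. */
struct dirty_ring {
    uint32_t dirty_index;   /* next entry the kernel will fill */
    uint32_t reset_index;   /* next entry userspace will collect */
    uint32_t size;          /* total number of entries */
    uint32_t soft_limit;    /* exit to userspace before truly full */
};

static inline uint32_t ring_used(const struct dirty_ring *r)
{
    return r->dirty_index - r->reset_index;
}

/* A burst of no-vcpu-context dirtying (e.g. saving the ITS tables)
 * could overshoot soft_limit with no vCPU exit through which
 * userspace would normally be told to drain the ring. */
static inline bool ring_soft_full(const struct dirty_ring *r)
{
    return ring_used(r) >= r->soft_limit;
}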

Paolo

