From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	"Tian, Kevin" <kevin.tian@intel.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
Date: Mon, 16 Dec 2019 11:08:15 +0100
Message-ID: <f117d46a-7528-ce32-8e46-4f3f35937079@redhat.com>
In-Reply-To: <20191215172124.GA83861@xz-x1>

[Alex and Kevin: there are doubts below regarding dirty page tracking
from VFIO and mdev devices, which perhaps you can help with]

On 15/12/19 18:21, Peter Xu wrote:
>                 init_rmode_tss
>                     vmx_set_tss_addr
>                         kvm_vm_ioctl_set_tss_addr [*]
>                 init_rmode_identity_map
>                     vmx_create_vcpu [*]

These don't matter because their content is not visible to userspace
(the backing storage is mmap-ed by __x86_set_memory_region).

>                 vmx_write_pml_buffer
>                     kvm_arch_write_log_dirty [&]
>                 kvm_write_guest
>                     kvm_hv_setup_tsc_page
>                         kvm_guest_time_update [&]
>                     nested_flush_cached_shadow_vmcs12 [&]
>                     kvm_write_wall_clock [&]
>                     kvm_pv_clock_pairing [&]
>                     kvmgt_rw_gpa [?]

This then expands (partially) to

intel_gvt_hypervisor_write_gpa
    emulate_csb_update
        emulate_execlist_ctx_schedule_out
            complete_execlist_workload
                complete_current_workload
                    workload_thread
        emulate_execlist_ctx_schedule_in
            prepare_execlist_workload
                prepare_workload
                    dispatch_workload
                        workload_thread

So KVMGT is always writing to GPAs instead of IOVAs, basically
bypassing a guest IOMMU.  It would be better if kvmgt were changed not
to use kvm_write_guest (also because I'd probably have nacked that if I
had known :)).

As far as I know, there is some work on live migration with both VFIO
and mdev, and that probably includes some dirty page tracking API.
kvmgt could switch to that API, or there could be VFIO APIs similar to
kvm_write_guest but taking IOVAs instead of GPAs.  Advantage: this would
fix the GPA/IOVA confusion.  Disadvantage: userspace would lose the
tracking of writes from mdev devices.  Kevin, are these writes used in
any way?  Do the calls to intel_gvt_hypervisor_write_gpa cover all
writes from kvmgt vGPUs, or can the hardware write to memory as well
(which would be my guess if I didn't know anything about kvmgt, which I
pretty much don't)?
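
To make the idea concrete, here is a purely hypothetical sketch of what
an IOVA-based helper could look like; vfio_iova_rw() and the vgpu
fields are invented names, not an existing API:

    /*
     * Hypothetical sketch: write guest memory through the VFIO
     * container, translating the IOVA via the container's IOMMU
     * mappings so a guest vIOMMU is honored instead of bypassed.
     */
    int vfio_iova_rw(struct vfio_group *group, dma_addr_t iova,
                     void *data, size_t len, bool is_write);

    /* kvmgt would then replace kvm_write_guest() with something like: */
    ret = vfio_iova_rw(vgpu->vfio_group, iova, buf, len, true);

Whether the dirty tracking for such writes would then be reported
through a VFIO migration API, or not reported at all, is exactly the
question above.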

> We should only need to look at the leaves of the traces because
> they're where the dirty request starts.  I'm marking all the leaves
> with below criteria then it'll be easier to focus:
> 
> Cases with [*]: should not matter much
>            [&]: actually with a per-vcpu context in the upper layer
>            [?]: uncertain...
> 
> I'm a bit amazed after I took these notes, since I found that besides
> those that could probably be ignored (marked as [*]), most of the rest
> per-vm dirty requests are actually with a vcpu context.
> 
> Now that we have kvm_get_running_vcpu(), all the [&] cases should be
> fine without changing anything.  Still, if no one disagrees, I tend to
> add another patch in the next post that explicitly converts the [&]
> cases to pass a vcpu pointer instead of a kvm pointer, and then we can
> verify that against kvm_get_running_vcpu().

This is a good idea but remember not to convert those to
kvm_vcpu_write_guest, because you _don't_ want these writes to touch
SMRAM (most of the addresses are OS-controlled rather than
firmware-controlled).
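
To be concrete, for a case like kvm_hv_setup_tsc_page the conversion
should keep the write on the VM-wide address space and only use the
vcpu as the dirty-tracking context; roughly (illustrative only, the
gpa/tsc_ref names are shorthand):

    /* Before: per-VM write, no vcpu context for dirty tracking. */
    kvm_write_guest(kvm, gpa, &tsc_ref, sizeof(tsc_ref));

    /*
     * After: pass the vcpu down for context, but keep kvm_write_guest();
     * kvm_vcpu_write_guest() would resolve the gfn through the vcpu's
     * current address space and could end up writing to SMRAM.
     */
    kvm_write_guest(vcpu->kvm, gpa, &tsc_ref, sizeof(tsc_ref));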

> init_rmode_tss or init_rmode_identity_map.  But I've marked them as
> unimportant because they should only happen once at boot.

We need to check whether userspace can add an arbitrary number of
entries by calling KVM_SET_TSS_ADDR repeatedly.  I think it can; we'd
have to forbid multiple calls to KVM_SET_TSS_ADDR, which is not a
problem in general.
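
Something along these lines in vmx_set_tss_addr would do; this is an
untested sketch on top of the existing code, and the -EEXIST choice is
arbitrary:

    static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
    {
            int ret;

            if (enable_unrestricted_guest)
                    return 0;

            /* Sketch: reject a second KVM_SET_TSS_ADDR so userspace
             * cannot retrigger init_rmode_tss() arbitrarily often. */
            if (to_kvm_vmx(kvm)->tss_addr)
                    return -EEXIST;

            mutex_lock(&kvm->slots_lock);
            ret = __x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT,
                                          addr, PAGE_SIZE * 3);
            mutex_unlock(&kvm->slots_lock);
            if (ret)
                    return ret;

            to_kvm_vmx(kvm)->tss_addr = addr;
            return init_rmode_tss(kvm);
    }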

>>> If we're still with the rule in userspace that we first do RESET then
>>> collect and send the pages (just like what we've discussed before),
>>> then IMHO it's fine to have vcpu2 to skip the slow path?  Because
>>> RESET happens at "treat page as not dirty", then if we are sure that
>>> we only collect and send pages after that point, then the latest
>>> "write to page" data from vcpu2 won't be lost even if vcpu2 is not
>>> blocked by vcpu1's ring full?
>>
>> Good point, the race would become
>>
>>  	vCPU 1			vCPU 2		host
>>  	---------------------------------------------------------------
>>  	mark page dirty
>>  				write to page
>> 						reset rings
>> 						  wait for mmu lock
>>  	add page to ring
>> 	release mmu lock
>> 						  ...do reset...
>> 						  release mmu lock
>> 						page is now dirty
> 
> Hmm, the page will be dirty after the reset, but is that an issue?
> 
> Or, could you help me to identify what I've missed?

Nothing: the race is always solved in such a way that there's no issue.

>> I don't think that's possible, most writes won't come from a page fault
>> path and cannot retry.
> 
> Yep, maybe I should say it the other way round: we only wait if
> kvm_get_running_vcpu() == NULL.  Then, somewhere near
> vcpu_enter_guest(), we add a check to wait if the per-vcpu ring is full.
> Would that work?

Yes, that should work, especially if we know that kvmgt is the only case
that can wait.  And since:

1) kvmgt doesn't really need dirty page tracking (because VFIO devices
generally don't track dirty pages, and because kvmgt shouldn't be using
kvm_write_guest anyway)

2) the real mode TSS and identity map shouldn't even be tracked, as they
are invisible to userspace

it seems to me that kvm_get_running_vcpu() lets us get rid of the per-VM
ring altogether.
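
For completeness, the check Peter describes could look roughly like
this at the top of vcpu_enter_guest(); kvm_dirty_ring_soft_full() and
KVM_EXIT_DIRTY_RING_FULL are names from the proposed series, and this
is only a sketch:

    /* Forbid vmenter while this vcpu's dirty ring is soft-full, so the
     * vcpu exits to userspace, which can then harvest and reset the
     * ring before re-entering. */
    if (unlikely(vcpu->kvm->dirty_ring_size &&
                 kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
            vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
            r = 0;
            goto out;
    }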

Paolo

