From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jim Mattson
Subject: Re: [PATCH 7/8] kvm: nVMX: Introduce KVM_CAP_VMX_STATE
Date: Mon, 8 Jan 2018 09:25:08 -0800
Message-ID:
References: <1480536229-11754-1-git-send-email-jmattson@google.com>
 <1480536229-11754-8-git-send-email-jmattson@google.com>
 <555d6a5a-fd6d-e1c2-6a40-3ecfbb09b379@redhat.com>
 <7d9c263c-3d21-8535-5fed-35adfdfb71be@redhat.com>
 <89880fbd-2d47-d994-81f5-2073eec96ce0@redhat.com>
 <3bbb7742-ece1-0e8a-cc73-45d5160bc6c4@redhat.com>
 <75bde61a-34ff-c0af-436e-c8328fe7e870@redhat.com>
 <67833a23-5dd4-c502-d724-fea5598c41df@redhat.com>
 <2301ab3e-c318-4494-4c1e-bd85e24ef78c@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Cc: Paolo Bonzini, kvm list, David Hildenbrand
To: David Hildenbrand
Return-path:
Received: from mail-io0-f178.google.com ([209.85.223.178]:37607 "EHLO
 mail-io0-f178.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1753377AbeAHRZJ (ORCPT );
 Mon, 8 Jan 2018 12:25:09 -0500
Received: by mail-io0-f178.google.com with SMTP id n14so15229795iob.4
 for ; Mon, 08 Jan 2018 09:25:09 -0800 (PST)
In-Reply-To: <2301ab3e-c318-4494-4c1e-bd85e24ef78c@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

How do we eliminate nested_run_pending? Do we enforce the invariant
that nested_run_pending is never set on return to userspace, or do we
return an error if GET_NESTED_STATE is called while nested_run_pending
is set?

On Mon, Jan 8, 2018 at 2:35 AM, David Hildenbrand wrote:
> On 19.12.2017 22:29, Paolo Bonzini wrote:
>> On 19/12/2017 20:21, Jim Mattson wrote:
>>> One reason is that it is a bit awkward for GET_NESTED_STATE to modify
>>> guest memory. I don't know about qemu, but our userspace agent expects
>>> guest memory to be quiesced by the time it starts going through its
>>> sequence of GET_* ioctls. Sure, we could introduce a pre-migration
>>> ioctl, but is that the best way to handle this? Another reason is that
>>> it is a bit awkward for SET_NESTED_STATE to require guest memory.
>>> Again, I don't know about qemu, but our userspace agent does not
>>> expect any guest memory to be available when it starts going through
>>> its sequence of SET_* ioctls. Sure, we could prefetch the guest page
>>> containing the current VMCS12, but is that better than simply
>>> including the current VMCS12 in the NESTED_STATE payload? Moreover,
>>> these unpredictable (from the guest's point of view) updates to guest
>>> memory leave a bad taste in my mouth (much like SMM).
>>
>> IIRC QEMU has no problem with either, but I think your concerns are
>> valid. The active VMCS is processor state, not memory state. Same for
>> the host save data in SVM.
>>
>> The unstructured "blob" of data is not an issue. If it becomes a
>> problem, we can always document the structure...
>
> Thinking about it, I agree. It might be simpler/cleaner to transfer the
> "loaded" VMCS. But I think we should take care to transfer only data
> that actually is CPU state and not specific to our current
> implementation. (e.g. nested_run_pending I would say is specific to our
> current implementation, but we can discuss)
>
> So what I would consider VMX state:
> - vmxon
> - vmxon_ptr
> - vmptr
> - cached_vmcs12
> - ... ?
>
>>
>> Paolo
>
>
> --
>
> Thanks,
>
> David / dhildenb
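
[Editorial illustration: a rough C sketch of what a userspace-visible
payload along the lines of David's list might look like. The struct and
field names below are hypothetical, not a proposed or agreed-upon ABI,
and whether nested_run_pending belongs in it is exactly the open
question raised at the top of this mail.]

    #include <stdint.h>

    /*
     * Illustrative sketch only: one possible layout for the VMX portion
     * of a GET/SET_NESTED_STATE payload, following the list quoted
     * above. Names and sizes are hypothetical.
     */
    struct vmx_nested_state {
            uint8_t  vmxon;                /* guest has executed VMXON */
            uint64_t vmxon_ptr;            /* guest-physical address of the VMXON region */
            uint64_t vmptr;                /* guest-physical address of the current VMCS12 */
            uint8_t  cached_vmcs12[4096];  /* kernel's cached copy of the current VMCS12 */
    };

Carrying the cached VMCS12 in the payload itself, rather than flushing
it to guest memory on GET and reloading it on SET, is the design choice
debated in the quoted discussion above.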