From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
Date: Sat, 17 Dec 2016 09:19:29 -0500 (EST)
Message-ID: <1942485779.4450132.1481984369832.JavaMail.zimbra@redhat.com>
References: <1481071577-40250-8-git-send-email-junaids@google.com>
 <93b5692a-0f76-a31d-46f3-b85d19298d92@linux.intel.com>
 <4157789.R9cn7kUSZu@js-desktop.mtv.corp.google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8BIT
Cc: Xiao Guangrong, kvm@vger.kernel.org, andreslc@google.com, pfeiner@google.com
To: Junaid Shahid
Received: from mx5-phx2.redhat.com ([209.132.183.37]:54679 "EHLO
 mx5-phx2.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1752435AbcLQOTf (ORCPT); Sat, 17 Dec 2016 09:19:35 -0500
In-Reply-To: <4157789.R9cn7kUSZu@js-desktop.mtv.corp.google.com>
Sender: kvm-owner@vger.kernel.org

----- Original Message -----
> From: "Junaid Shahid"
> To: "Xiao Guangrong"
> Cc: kvm@vger.kernel.org, andreslc@google.com, pfeiner@google.com,
>     pbonzini@redhat.com
> Sent: Saturday, December 17, 2016 3:04:22 AM
> Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
>
> On Friday, December 16, 2016 09:04:56 PM Xiao Guangrong wrote:
> > >  void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
> > > -		u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask);
> > > +		u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask,
> > > +		u64 acc_track_mask);
> >
> > Actually, this is the mask cleared by acc-track rather than _set_ by
> > acc-track; maybe suppress_by_acc_track_mask is a better name.
>
> Well, the original reason behind it was that a PTE is an access-track PTE
> if, when masked by acc_track_mask, it yields acc_track_value. But we can
> change the name if it is confusing. Though suppress_by_acc_track_mask isn't
> quite right either, since only the RWX bits are cleared while the Special
> bit is set, and the mask includes both of these.

I agree.  The MMIO mask argument of kvm_mmu_set_mask_ptes requires some
knowledge of the inner workings of mmu.c, and acc_track_mask is the same.

> > > +#define VMX_EPT_MT_MASK			(7ull << VMX_EPT_MT_EPTE_SHIFT)
> >
> > I saw no place using this mask, so it can be dropped.
>
> Ok. I'll drop it.

Ok, I can do it too.

> > > +/* The mask for the R/X bits in EPT PTEs */
> > > +#define PT64_EPT_READABLE_MASK		0x1ull
> > > +#define PT64_EPT_EXECUTABLE_MASK		0x4ull
> > > +
> >
> > Can we move this EPT-specific stuff out of mmu.c?
>
> We need these in order to define shadow_acc_track_saved_bits_mask, and
> since we don't have vmx.h included in mmu.c I had to define them here.
> Is adding an #include for vmx.h better? Alternatively, we can have
> shadow_acc_track_saved_bits_mask passed in by kvm_intel when it loads,
> which was the case in the original version, but I changed it to a constant
> based on previous feedback.

It is a constant, and it's more efficient to treat it as such.  Unless
someone else needs access tracking (they shouldn't), it's okay to have a
minor layering violation.

> > > +static inline bool is_access_track_spte(u64 spte)
> > > +{
> > > +	return shadow_acc_track_mask != 0 &&
> > > +	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> > > +}
> >
> > spte & SPECIAL_MASK && !is_mmio(spte) is clearer.
>
> We can change to that. But it seems less flexible, as it assumes that there
> is never going to be a third type of Special PTE.
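As a side note, the difference between the two forms of the check can be
seen in a small standalone sketch (the constants below are illustrative
placeholders, not the real SPTE/VMX bit layout, and the helpers are
simplified stand-ins for the mmu.c functions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_SPECIAL_MASK	(1ull << 62)	/* placeholder bit position */
#define EPT_RWX_MASK		0x7ull		/* R/W/X permission bits */

/* Access-track SPTEs: Special bit set, all RWX bits cleared. */
static const uint64_t shadow_acc_track_mask  = SPTE_SPECIAL_MASK | EPT_RWX_MASK;
static const uint64_t shadow_acc_track_value = SPTE_SPECIAL_MASK;

/* MMIO SPTEs: Special bit plus the W/X misconfiguration pattern. */
static const uint64_t shadow_mmio_mask  = SPTE_SPECIAL_MASK | EPT_RWX_MASK;
static const uint64_t shadow_mmio_value = SPTE_SPECIAL_MASK | 0x6ull;

/* The check as written in the patch: mask and compare. */
static bool is_access_track_spte(uint64_t spte)
{
	return shadow_acc_track_mask != 0 &&
	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
}

static bool is_mmio_spte(uint64_t spte)
{
	return (spte & shadow_mmio_mask) == shadow_mmio_value;
}

/* The alternative suggested in the review: "Special and not MMIO". */
static bool is_access_track_spte_alt(uint64_t spte)
{
	return (spte & SPTE_SPECIAL_MASK) && !is_mmio_spte(spte);
}

int main(void)
{
	uint64_t acc_track = SPTE_SPECIAL_MASK | (1ull << 12);	/* RWX clear */
	uint64_t mmio      = SPTE_SPECIAL_MASK | 0x6ull;	/* W/X set   */

	printf("acc_track: %d %d\n", is_access_track_spte(acc_track),
	       is_access_track_spte_alt(acc_track));		/* 1 1 */
	printf("mmio:      %d %d\n", is_access_track_spte(mmio),
	       is_access_track_spte_alt(mmio));			/* 0 0 */
	return 0;
}

Both forms agree for the two existing kinds of Special SPTE; the
mask-and-compare form is the one that keeps working if a third kind is
ever added.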
>
> > > +	/*
> > > +	 * Verify that the write-protection that we do below will be fixable
> > > +	 * via the fast page fault path. Currently, that is always the case, at
> > > +	 * least when using EPT (which is when access tracking would be used).
> > > +	 */
> > > +	WARN_ONCE((spte & PT_WRITABLE_MASK) &&
> > > +		  !spte_can_locklessly_be_made_writable(spte),
> > > +		  "kvm: Writable SPTE is not locklessly dirty-trackable\n");
> >
> > This code is right but I cannot understand the comment here... :(
>
> Basically, I was just trying to say that since making the PTE an acc-track
> PTE removes the write access as well, we had better have the ability to
> restore the write access later in fast_page_fault. I'll try to make the
> comment clearer.
>
> >
> > > -		/*
> > > -		 * Currently, to simplify the code, only the spte
> > > -		 * write-protected by dirty-log can be fast fixed.
> > > -		 */
> > > -		if (!spte_can_locklessly_be_made_writable(spte))
> > > +		remove_acc_track = is_access_track_spte(spte);
> > > +
> >
> > Why not check whether the cached R/X permissions can satisfy the R/X
> > access before going to the atomic path?
>
> Yes, I guess we can do that, since if the restored PTE doesn't satisfy the
> access we are just going to get another fault anyway.

Please do it as a follow-up, since it complicates the logic a bit.

> > > +void vmx_enable_tdp(void)
> > > +{
> > > +	kvm_mmu_set_mask_ptes(VMX_EPT_READABLE_MASK,
> > > +		enable_ept_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull,
> > > +		enable_ept_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull,
> > > +		0ull, VMX_EPT_EXECUTABLE_MASK,
> > > +		cpu_has_vmx_ept_execute_only() ? 0ull : VMX_EPT_READABLE_MASK,
> > > +		enable_ept_ad_bits ? 0ull : SPTE_SPECIAL_MASK | VMX_EPT_RWX_MASK);
> >
> > I think commonly setting SPTE_SPECIAL_MASK (i.e. moving the setting of
> > SPTE_SPECIAL_MASK into mmu.c) for the mmio-mask and acc-track-mask can
> > make the code clearer...
>
> Ok. So you mean that vmx.c should just pass VMX_EPT_RWX_MASK here and
> VMX_EPT_MISCONFIG_WX_VALUE for the mmio mask, and then mmu.c should add in
> SPTE_SPECIAL_MASK before storing these values in shadow_acc_track_mask and
> shadow_mmio_mask?

I think I agree, but we can do this too as a separate follow-up cleanup
patch.

Paolo
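P.S. A rough standalone sketch of what that follow-up cleanup could look
like (hypothetical code with simplified helpers, not the actual
kvm_mmu_set_mask_ptes() interface, and a placeholder SPTE_SPECIAL_MASK bit):

#include <stdint.h>
#include <stdio.h>

#define SPTE_SPECIAL_MASK		(1ull << 62)	/* placeholder bit */
#define VMX_EPT_RWX_MASK		0x7ull
#define VMX_EPT_MISCONFIG_WX_VALUE	0x6ull		/* W=1, X=1, R=0 */

static uint64_t shadow_acc_track_mask;
static uint64_t shadow_mmio_mask;

/*
 * "mmu.c" side: OR in SPTE_SPECIAL_MASK here, so callers only pass the
 * architectural EPT bits and need no knowledge of the special bit.
 */
static void mmu_set_acc_track_mask(uint64_t acc_track_mask)
{
	/* Keep zero when access tracking is not in use. */
	shadow_acc_track_mask = acc_track_mask ?
				acc_track_mask | SPTE_SPECIAL_MASK : 0;
}

static void mmu_set_mmio_mask(uint64_t mmio_mask)
{
	shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
}

int main(void)
{
	int enable_ept_ad_bits = 0;	/* pretend A/D bits are unavailable */

	/* "vmx.c" side: no SPTE_SPECIAL_MASK needed in the caller. */
	mmu_set_acc_track_mask(enable_ept_ad_bits ? 0ull : VMX_EPT_RWX_MASK);
	mmu_set_mmio_mask(VMX_EPT_MISCONFIG_WX_VALUE);

	printf("acc_track_mask=%#llx mmio_mask=%#llx\n",
	       (unsigned long long)shadow_acc_track_mask,
	       (unsigned long long)shadow_mmio_mask);
	return 0;
}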