On 09/08/18 10:25, speck for Paolo Bonzini wrote:
> On 09/08/2018 01:21, speck for Jim Mattson wrote:
>> [PATCH] kvm: x86: Set highest physical address bit in non-present/reserved SPTEs
>>
>> Always set the upper-most supported physical address bit to 1 for SPTEs
>> that are marked as non-present or reserved, to make them unusable for
>> L1TF attacks from the guest.  Currently, this just applies to MMIO SPTEs.
>> (We do not need to mark PTEs that are completely 0, as physical page 0
>> is already reserved.)
>>
>> This allows mitigation of L1TF without disabling hyper-threading, by
>> using shadow paging mode instead of EPT.
>
> I don't understand why the big patch is needed.  MMIO SPTEs already have
> a mask applied that includes the top bit on all processors that have
> MAXPHYADDR < 52.  I would hope that all processors with MAXPHYADDR = 52
> will have the bug fixed (and AFAIK none are being sold right now), but in
> any case something like
>
>     if (maxphyaddr == 52) {
>         kvm_mmu_set_mmio_spte_mask((1ull << 51) | 1, 1ull << 51);
>         return;
>     }
>
> in kvm_set_mmio_spte_mask should do, or alternatively the nicer patch
> after my signature (untested and unthought).

Setting bit 51 doesn't mitigate L1TF on any current processor.  You need to
set an address bit which is inside L1D-maxphysaddr, and which isn't
cacheable on the current system.

Attached is my patch for doing this generally in Xen, along with some safety
heuristics for nesting.  In Xen, we need to audit each PTE a PV guest tries
to write, and the bottom-line safety check for that is:

static inline bool is_l1tf_safe_maddr(intpte_t pte)
{
    paddr_t maddr = pte & l1tf_addr_mask;
    return maddr == 0 || maddr >= l1tf_safe_maddr;
}

~Andrew
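[For readers following along: the check quoted above can be exercised as a
self-contained sketch.  The typedefs and the mask/boundary values below are
purely illustrative assumptions (a 46-bit physical address space with
everything at or above 2^45 presumed non-cacheable), not Xen's actual
configuration, which is computed at boot from the CPU and memory map.]

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for Xen's types. */
    typedef uint64_t intpte_t;
    typedef uint64_t paddr_t;

    /* Hypothetical values: address field covers bits 12-45 (MAXPHYADDR
     * == 46), and everything at or above 2^45 is assumed safe, i.e.
     * guaranteed not cacheable on this imagined system. */
    static const paddr_t l1tf_addr_mask  = ((1ull << 46) - 1) & ~0xfffull;
    static const paddr_t l1tf_safe_maddr = 1ull << 45;

    /* As quoted above: a PTE is L1TF-safe if its address field is zero
     * (page 0 is reserved anyway), or points at/above the first address
     * that cannot hit in the L1D. */
    static inline bool is_l1tf_safe_maddr(intpte_t pte)
    {
        paddr_t maddr = pte & l1tf_addr_mask;
        return maddr == 0 || maddr >= l1tf_safe_maddr;
    }

    int main(void)
    {
        assert(is_l1tf_safe_maddr(0));           /* all-zero PTE */
        assert(is_l1tf_safe_maddr(1ull << 45));  /* at the safe boundary */
        assert(!is_l1tf_safe_maddr(0x1000));     /* real, cacheable frame */
        printf("ok\n");
        return 0;
    }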