On 09/08/2018 01:21, speck for Jim Mattson wrote:
> [PATCH] kvm: x86: Set highest physical address bit in non-present/reserved SPTEs
>
> Always set the upper-most supported physical address bit to 1 for SPTEs
> that are marked as non-present or reserved, to make them unusable for
> L1TF attacks from the guest.  Currently, this just applies to MMIO SPTEs.
> (We do not need to mark PTEs that are completely 0, as physical page 0
> is already reserved.)
>
> This allows mitigation of L1TF without disabling hyper-threading by using
> shadow paging mode instead of EPT.

I don't understand why the big patch is needed.  MMIO SPTEs already have a
mask applied that includes the top bit on all processors that have
MAXPHYADDR<52.  I would hope that all processors with MAXPHYADDR=52 will
have the bug fixed (and AFAIK none are being sold right now), but in any
case something like

	if (maxphyaddr == 52) {
		kvm_mmu_set_mmio_spte_mask((1ull << 51) | 1, 1ull << 51);
		return;
	}

in kvm_set_mmio_spte_mask should do, or alternatively the nicer patch
after my signature (untested and unthought).

Paolo

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6529,29 +6529,25 @@ static unsigned long kvm_get_guest_ip(void)
 static void kvm_set_mmio_spte_mask(void)
 {
-	u64 mask;
+	u64 mask, value;
 	int maxphyaddr = boot_cpu_data.x86_phys_bits;
 
 	/*
 	 * Set the reserved bits and the present bit of an paging-structure
 	 * entry to generate page fault with PFER.RSV = 1.
 	 */
 
-	/* Mask the reserved physical address bits. */
-	mask = rsvd_bits(maxphyaddr, 51);
+	mask = value = PT_PRESENT_MASK | (1ull << 51);
 
-	/* Set the present bit. */
-	mask |= 1ull;
-
-#ifdef CONFIG_X86_64
-	/*
-	 * If reserved bit is not supported, clear the present bit to disable
-	 * mmio page fault.
-	 */
-	if (maxphyaddr == 52)
-		mask &= ~1ull;
-#endif
+	if (maxphyaddr == 52) {
+		/*
+		 * If reserved bit is not supported, clear the present bit to disable
+		 * mmio page fault.  Leave the topmost bit set to separate MMIO sptes
+		 * from other nonpresent sptes, and to protect against the L1TF bug.
+		 */
+		value &= ~PT_PRESENT_MASK;
+	}
 
-	kvm_mmu_set_mmio_spte_mask(mask, mask);
+	kvm_mmu_set_mmio_spte_mask(mask, value);
 }
 
 #ifdef CONFIG_X86_64
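
For readers unfamiliar with the mask arithmetic discussed above, here is a
minimal stand-alone sketch (plain user-space C, not kernel code) of why the
pre-existing MMIO SPTE mask already covers bit 51 whenever MAXPHYADDR < 52,
and why MAXPHYADDR == 52 is the one case that needs special handling.
rsvd_bits() is assumed to behave like the KVM helper of the same name, and
the maxphyaddr values in the loop are just examples.

#include <stdio.h>
#include <stdint.h>

/*
 * Assumed to match KVM's rsvd_bits() helper: build a mask with
 * bits s..e (inclusive) set.
 */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	for (int maxphyaddr = 36; maxphyaddr <= 52; maxphyaddr += 4) {
		/*
		 * Roughly what the old kvm_set_mmio_spte_mask() computed:
		 * the reserved physical address bits plus the present bit.
		 * (The real function then clears the present bit again when
		 * maxphyaddr == 52.)
		 */
		uint64_t mask = rsvd_bits(maxphyaddr, 51) | 1ULL;

		printf("maxphyaddr=%2d  mask=%016llx  bit 51 %s\n",
		       maxphyaddr, (unsigned long long)mask,
		       (mask & (1ULL << 51)) ? "set" : "clear");
	}
	return 0;
}

Every MAXPHYADDR < 52 row prints bit 51 as set, i.e. the existing MMIO mask
already places the SPTE above the supported physical address range, while the
MAXPHYADDR == 52 row leaves it clear, which is exactly the case the snippet
and the patch above handle by forcing bit 51 into the MMIO value.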