* Re: [PATCH 20/30] KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes [not found] <201701111154365057441@zte.com.cn> @ 2017-01-11 8:34 ` Ralf Baechle 2017-01-16 16:07 ` James Hogan 1 sibling, 0 replies; 5+ messages in thread From: Ralf Baechle @ 2017-01-11 8:34 UTC (permalink / raw) To: jiang.biao2; +Cc: james.hogan, linux-mips, pbonzini, rkrcmar, kvm On Wed, Jan 11, 2017 at 11:54:36AM +0800, jiang.biao2@zte.com.cn wrote: > Date: Wed, 11 Jan 2017 11:54:36 +0800 (CST) > From: jiang.biao2@zte.com.cn > To: james.hogan@imgtec.com > Cc: linux-mips@linux-mips.org, james.hogan@imgtec.com, pbonzini@redhat.com, > rkrcmar@redhat.com, ralf@linux-mips.org, kvm@vger.kernel.org > Subject: Re: [PATCH 20/30] KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes > Content-Type: multipart/mixed; boundary="=====_001_next=====" > > Hi, > > > > +void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags) > > > +{ > > > + if (flags & KMF_GPA) { > > > + /* all of guest virtual address space could be affected */ > > + if (flags & KMF_KERN) > > + /* useg, kseg0, seg2/3 */ > > + kvm_mips_flush_gva_pgd(pgd, 0, 0x7fffffff); > > + else > > + /* useg */ > > + kvm_mips_flush_gva_pgd(pgd, 0, 0x3fffffff); > > + } else { > > + /* useg */ > > + kvm_mips_flush_gva_pgd(pgd, 0, 0x3fffffff); > > + > > + /* kseg2/3 */ > > + if (flags & KMF_KERN) > > + kvm_mips_flush_gva_pgd(pgd, 0x60000000, 0x7fffffff); > > + } > > > +} > > > Would it be better to replace the hard-coded *0x7fffffff*, *0x60000000*, > *0x3fffffff* with macros? I think to anybody familiar with the architecture the raw numbers are easier to understand than weird defines. Ralf ^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH 20/30] KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes [not found] <201701111154365057441@zte.com.cn> 2017-01-11 8:34 ` [PATCH 20/30] KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes Ralf Baechle @ 2017-01-16 16:07 ` James Hogan 2017-01-16 16:07 ` James Hogan 1 sibling, 1 reply; 5+ messages in thread From: James Hogan @ 2017-01-16 16:07 UTC (permalink / raw) To: jiang.biao2; +Cc: linux-mips, pbonzini, rkrcmar, ralf, kvm Hi, On Wed, Jan 11, 2017 at 11:54:36AM +0800, jiang.biao2@zte.com.cn wrote: > > +void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags) > > +{ > > > + if (flags & KMF_GPA) { > > > + /* all of guest virtual address space could be affected */ > > + if (flags & KMF_KERN) > > + /* useg, kseg0, seg2/3 */ > > + kvm_mips_flush_gva_pgd(pgd, 0, 0x7fffffff); > > + else > > + /* useg */ > > + kvm_mips_flush_gva_pgd(pgd, 0, 0x3fffffff); > > + } else { > > + /* useg */ > > + kvm_mips_flush_gva_pgd(pgd, 0, 0x3fffffff); > > + > > + /* kseg2/3 */ > > + if (flags & KMF_KERN) > > + kvm_mips_flush_gva_pgd(pgd, 0x60000000, 0x7fffffff); > > + } > > +} > > > > > Would it be better to replace the hard-coded *0x7fffffff*, *0x60000000*, *0x3fffffff* with macros? I did consider it. E.g. 
there are definitions in kvm_host.h: #define KVM_GUEST_KUSEG 0x00000000UL #define KVM_GUEST_KSEG0 0x40000000UL #define KVM_GUEST_KSEG1 0x40000000UL #define KVM_GUEST_KSEG23 0x60000000UL and conditional definitions in asm/addrspace.h: 64-bit: #define CKSEG0 _CONST64_(0xffffffff80000000) #define CKSEG1 _CONST64_(0xffffffffa0000000) #define CKSSEG _CONST64_(0xffffffffc0000000) #define CKSEG3 _CONST64_(0xffffffffe0000000) 32-bit: #define KUSEG 0x00000000 #define KSEG0 0x80000000 #define KSEG1 0xa0000000 #define KSEG2 0xc0000000 #define KSEG3 0xe0000000 #define CKUSEG 0x00000000 #define CKSEG0 0x80000000 #define CKSEG1 0xa0000000 #define CKSEG2 0xc0000000 #define CKSEG3 0xe0000000 So (u32)CKSEG0 - 1, KVM_GUEST_KSEG23, and KVM_GUEST_KSEG0 - 1 would have worked, but given that the ranges sometimes cover multiple segments it just seemed more readable to use the hex literals rather than a sprinkling of opaque definitions from different places. Thanks for reviewing! Cheers James ^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH 0/30] KVM: MIPS: Implement GVA page tables @ 2017-01-06 1:32 James Hogan 2017-01-06 1:32 ` [PATCH 20/30] KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes James Hogan 0 siblings, 1 reply; 5+ messages in thread From: James Hogan @ 2017-01-06 1:32 UTC (permalink / raw) To: linux-mips Cc: James Hogan, Paolo Bonzini, Radim Krčmář, Ralf Baechle, kvm, linux-mm Note: My intention is to take this series via the MIPS KVM tree, with a topic branch for the MIPS architecture changes, so acks welcome for the relevant parts (mainly patches 1-4, 15, 28), and please don't apply yet. This series primarily implements GVA->HPA page tables for MIPS T&E KVM implementation, and a fast TLB refill handler generated at runtime using uasm (sharing MIPS arch code to do this), accompanied by a bunch of related cleanups. There are several solid advantages of this: - An optimised TLB refill handler will be much faster than using the slow exit path through C code. It also avoids repeated guest TLB lookups for guest mapped addresses that are evicted from the host TLB (which are currently implemented as a linear walk through the guest TLB array). - The TLB refill handler can be pretty much reused in future for VZ, to fill the root TLB with GPA->HPA mappings from a soon to be implemented GPA page table. - Although not enabled yet, it potentially allows page table walker hardware (HTW) to be used during guest execution (both for VZ GPA mappings, and potentially T&E GVA mappings) further reducing TLB refill overhead. - It improves the robustness of direct access to guest memory by KVM, i.e. reading guest instructions for emulation, writing guest instructions for dynamic translation, and emulating CACHE instructions. 
This is because the standard userland memory accessors can be used, allowing the host kernel TLB refill handler to safely fill from the GVA page table, allowing faults to be sanely detected, and allowing it to work when EVA is enabled (which requires different instructions to be used when accessing the user address space). The main disadvantage is a higher flushing overhead when the guest ASID is changed, due to the need to walk and invalidate GVA page tables (since we only manage a single GVA page table for each guest privilege mode, across all guest ASIDs). The patches are roughly grouped as follows: Patches 1-4: These are generic or MIPS architecture changes needed by the later patches, mainly to expose the existing MIPS TLB exception generation code to KVM. As I mentioned above, I intend to combine the MIPS ones into a topic branch which can be merged into both the MIPS architecture tree and the MIPS KVM tree. Patches 5-13: These are preliminary MIPS KVM changes and cleanups. Patches 14-25: These incrementally add GVA page table support, allocating the GVA page tables, adding the fast TLB refill handler, adding page table invalidation, and finally converting guest fault handling (KSeg0, TLB mapped, and commpage) to use the GVA page table rather than injecting entries directly into the host TLB. Patches 26-27: These switch to using uaccess and protected cache ops, which fixes KVM on EVA-enabled host kernels. Patches 28-30: These make some final cleanups. 
Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org Cc: linux-mm@kvack.org James Hogan (30): mm: Export init_mm for MIPS KVM use of pgd_alloc() MIPS: Export pgd/pmd symbols for KVM MIPS: uasm: Add include guards in asm/uasm.h MIPS: Export some tlbex internals for KVM to use KVM: MIPS: Drop partial KVM_NMI implementation KVM: MIPS/MMU: Simplify ASID restoration KVM: MIPS: Convert get/set_regs -> vcpu_load/put KVM: MIPS/MMU: Move preempt/ASID handling to implementation KVM: MIPS: Remove duplicated ASIDs from vcpu KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks KVM: MIPS/T&E: Restore host asid on return to host KVM: MIPS/T&E: active_mm = init_mm in guest context KVM: MIPS: Wire up vcpu uninit KVM: MIPS/T&E: Allocate GVA -> HPA page tables KVM: MIPS/T&E: Activate GVA page tables in guest context KVM: MIPS: Support NetLogic KScratch registers KVM: MIPS: Add fast path TLB refill handler KVM: MIPS/TLB: Fix off-by-one in TLB invalidate KVM: MIPS/TLB: Generalise host TLB invalidate to kernel ASID KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW KVM: MIPS/MMU: Convert KSeg0 faults to page tables KVM: MIPS/MMU: Convert TLB mapped faults to page tables KVM: MIPS/MMU: Convert commpage fault handling to page tables KVM: MIPS: Drop vm_init() callback KVM: MIPS: Use uaccess to read/modify guest instructions KVM: MIPS/Emulate: Fix CACHE emulation for EVA hosts KVM: MIPS/TLB: Drop kvm_local_flush_tlb_all() KVM: MIPS/Emulate: Drop redundant TLB flushes on exceptions KVM: MIPS/MMU: Drop kvm_get_new_mmu_context() arch/mips/include/asm/kvm_host.h | 76 ++-- arch/mips/include/asm/mmu_context.h | 9 +- arch/mips/include/asm/tlbex.h | 26 +- arch/mips/include/asm/uasm.h | 5 +- arch/mips/kvm/dyntrans.c | 28 +- arch/mips/kvm/emulate.c | 59 +-- arch/mips/kvm/entry.c | 141 +++++++- arch/mips/kvm/mips.c | 130 +------ 
arch/mips/kvm/mmu.c | 545 +++++++++++++++++------------ arch/mips/kvm/tlb.c | 225 +----------- arch/mips/kvm/trap_emul.c | 220 +++++++++++- arch/mips/mm/init.c | 1 +- arch/mips/mm/pgtable-32.c | 1 +- arch/mips/mm/pgtable-64.c | 3 +- arch/mips/mm/tlbex.c | 38 +- mm/init-mm.c | 2 +- 16 files changed, 861 insertions(+), 648 deletions(-) create mode 100644 arch/mips/include/asm/tlbex.h -- git-series 0.8.10 ^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH 20/30] KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes 2017-01-06 1:32 [PATCH 0/30] KVM: MIPS: Implement GVA page tables James Hogan @ 2017-01-06 1:32 ` James Hogan 2017-01-06 1:32 ` James Hogan 0 siblings, 1 reply; 5+ messages in thread From: James Hogan @ 2017-01-06 1:32 UTC (permalink / raw) To: linux-mips Cc: James Hogan, Paolo Bonzini, Radim Krčmář, Ralf Baechle, kvm Implement invalidation of large ranges of virtual addresses from GVA page tables in response to a guest ASID change (immediately for guest kernel page table, lazily for guest user page table). We iterate through a range of page tables invalidating entries and freeing fully invalidated tables. To minimise overhead, the exact ranges invalidated depend on the flags argument to kvm_mips_flush_gva_pt(), which also allows it to be used in future KVM_CAP_SYNC_MMU patches in response to GPA changes, which, unlike guest TLB mapping changes, affect guest KSeg0 mappings. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: kvm@vger.kernel.org --- arch/mips/include/asm/kvm_host.h | 18 ++++- arch/mips/kvm/emulate.c | 11 +++- arch/mips/kvm/mmu.c | 134 ++++++++++++++++++++++++++++++++- arch/mips/kvm/trap_emul.c | 5 +- 4 files changed, 166 insertions(+), 2 deletions(-) diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h index e2bbcfbf2d34..44554241f158 100644 --- a/arch/mips/include/asm/kvm_host.h +++ b/arch/mips/include/asm/kvm_host.h @@ -610,6 +610,24 @@ extern int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long entryhi, extern int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long entryhi); extern int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr); + +/* MMU handling */ + +/** + * enum kvm_mips_flush - Types of MMU flushes. 
+ * @KMF_USER: Flush guest user virtual memory mappings. + * Guest USeg only. + * @KMF_KERN: Flush guest kernel virtual memory mappings. + * Guest USeg and KSeg2/3. + * @KMF_GPA: Flush guest physical memory mappings. + * Also includes KSeg0 if KMF_KERN is set. + */ +enum kvm_mips_flush { + KMF_USER = 0x0, + KMF_KERN = 0x1, + KMF_GPA = 0x2, +}; +void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags); extern unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu, unsigned long gva); extern void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu, diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c index 611b8996ca0c..1d399396e486 100644 --- a/arch/mips/kvm/emulate.c +++ b/arch/mips/kvm/emulate.c @@ -1172,6 +1172,17 @@ enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst, nasid); /* + * Flush entries from the GVA page + * tables. + * Guest user page table will get + * flushed lazily on re-entry to guest + * user if the guest ASID actually + * changes. + */ + kvm_mips_flush_gva_pt(kern_mm->pgd, + KMF_KERN); + + /* * Regenerate/invalidate kernel MMU * context. * The user MMU context will be diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c index 27d6d0dbfeb4..09146b62552f 100644 --- a/arch/mips/kvm/mmu.c +++ b/arch/mips/kvm/mmu.c @@ -12,6 +12,7 @@ #include <linux/highmem.h> #include <linux/kvm_host.h> #include <asm/mmu_context.h> +#include <asm/pgalloc.h> static u32 kvm_mips_get_kernel_asid(struct kvm_vcpu *vcpu) { @@ -80,6 +81,139 @@ unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu, return (kvm->arch.guest_pmap[gfn] << PAGE_SHIFT) + offset; } +/* + * kvm_mips_flush_gva_{pte,pmd,pud,pgd,pt}. + * Flush a range of guest virtual address space from the VM's GVA page tables. 
+ */ + +static bool kvm_mips_flush_gva_pte(pte_t *pte, unsigned long start_gva, + unsigned long end_gva) +{ + int i_min = __pte_offset(start_gva); + int i_max = __pte_offset(end_gva); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PTE - 1); + int i; + + /* + * There's no freeing to do, so there's no point clearing individual + * entries unless only part of the last level page table needs flushing. + */ + if (safe_to_remove) + return true; + + for (i = i_min; i <= i_max; ++i) { + if (!pte_present(pte[i])) + continue; + + set_pte(pte + i, __pte(0)); + } + return false; +} + +static bool kvm_mips_flush_gva_pmd(pmd_t *pmd, unsigned long start_gva, + unsigned long end_gva) +{ + pte_t *pte; + unsigned long end = ~0ul; + int i_min = __pmd_offset(start_gva); + int i_max = __pmd_offset(end_gva); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PMD - 1); + int i; + + for (i = i_min; i <= i_max; ++i, start_gva = 0) { + if (!pmd_present(pmd[i])) + continue; + + pte = pte_offset(pmd + i, 0); + if (i == i_max) + end = end_gva; + + if (kvm_mips_flush_gva_pte(pte, start_gva, end)) { + pmd_clear(pmd + i); + pte_free_kernel(NULL, pte); + } else { + safe_to_remove = false; + } + } + return safe_to_remove; +} + +static bool kvm_mips_flush_gva_pud(pud_t *pud, unsigned long start_gva, + unsigned long end_gva) +{ + pmd_t *pmd; + unsigned long end = ~0ul; + int i_min = __pud_offset(start_gva); + int i_max = __pud_offset(end_gva); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PUD - 1); + int i; + + for (i = i_min; i <= i_max; ++i, start_gva = 0) { + if (!pud_present(pud[i])) + continue; + + pmd = pmd_offset(pud + i, 0); + if (i == i_max) + end = end_gva; + + if (kvm_mips_flush_gva_pmd(pmd, start_gva, end)) { + pud_clear(pud + i); + pmd_free(NULL, pmd); + } else { + safe_to_remove = false; + } + } + return safe_to_remove; +} + +static bool kvm_mips_flush_gva_pgd(pgd_t *pgd, unsigned long start_gva, + unsigned long end_gva) +{ + pud_t *pud; + unsigned long end = 
~0ul; + int i_min = pgd_index(start_gva); + int i_max = pgd_index(end_gva); + bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PGD - 1); + int i; + + for (i = i_min; i <= i_max; ++i, start_gva = 0) { + if (!pgd_present(pgd[i])) + continue; + + pud = pud_offset(pgd + i, 0); + if (i == i_max) + end = end_gva; + + if (kvm_mips_flush_gva_pud(pud, start_gva, end)) { + pgd_clear(pgd + i); + pud_free(NULL, pud); + } else { + safe_to_remove = false; + } + } + return safe_to_remove; +} + +void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags) +{ + if (flags & KMF_GPA) { + /* all of guest virtual address space could be affected */ + if (flags & KMF_KERN) + /* useg, kseg0, seg2/3 */ + kvm_mips_flush_gva_pgd(pgd, 0, 0x7fffffff); + else + /* useg */ + kvm_mips_flush_gva_pgd(pgd, 0, 0x3fffffff); + } else { + /* useg */ + kvm_mips_flush_gva_pgd(pgd, 0, 0x3fffffff); + + /* kseg2/3 */ + if (flags & KMF_KERN) + kvm_mips_flush_gva_pgd(pgd, 0x60000000, 0x7fffffff); + } +} + /* XXXKYMA: Must be called with interrupts disabled */ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr, struct kvm_vcpu *vcpu) diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c index 2c4b4ccecbcd..7ef7b77834ed 100644 --- a/arch/mips/kvm/trap_emul.c +++ b/arch/mips/kvm/trap_emul.c @@ -776,14 +776,15 @@ static void kvm_trap_emul_vcpu_reenter(struct kvm_run *run, unsigned int gasid; /* - * Lazy host ASID regeneration for guest user mode. + * Lazy host ASID regeneration / PT flush for guest user mode. * If the guest ASID has changed since the last guest usermode * execution, regenerate the host ASID so as to invalidate stale TLB - * entries. + * entries and flush GVA PT entries too. 
*/ if (!KVM_GUEST_KERNEL_MODE(vcpu)) { gasid = kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID; if (gasid != vcpu->arch.last_user_gasid) { + kvm_mips_flush_gva_pt(user_mm->pgd, KMF_USER); kvm_get_new_mmu_context(user_mm, cpu, vcpu); for_each_possible_cpu(i) if (i != cpu) -- git-series 0.8.10 ^ permalink raw reply related [flat|nested] 5+ messages in thread