From: Mike Rapoport <rppt@kernel.org>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Dave Hansen <dave.hansen@linux.intel.com>, Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	David Rientjes <rientjes@google.com>, Andrea Arcangeli <aarcange@redhat.com>,
	Kees Cook <keescook@chromium.org>, Will Drewry <wad@chromium.org>,
	"Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
	"Kleen, Andi" <andi.kleen@intel.com>, Liran Alon <liran.alon@oracle.com>,
	x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [RFCv2 15/16] KVM: Unmap protected pages from direct mapping
Date: Fri, 23 Oct 2020 15:37:12 +0300
Message-ID: <20201023123712.GC392079@kernel.org>
In-Reply-To: <20201020061859.18385-16-kirill.shutemov@linux.intel.com>

On Tue, Oct 20, 2020 at 09:18:58AM +0300, Kirill A. Shutemov wrote:
> If the protected memory feature is enabled, unmap guest memory from
> the kernel's direct mappings.
> 
> Migration and KSM are disabled for protected memory as they would
> require special treatment.
> 
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  include/linux/mm.h       |  3 +++
>  mm/huge_memory.c         |  8 ++++++++
>  mm/ksm.c                 |  2 ++
>  mm/memory.c              | 12 ++++++++++++
>  mm/rmap.c                |  4 ++++
>  virt/lib/mem_protected.c | 21 +++++++++++++++++++++
>  6 files changed, 50 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ee274d27e764..74efc51e63f0 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -671,6 +671,9 @@ static inline bool vma_is_kvm_protected(struct vm_area_struct *vma)
>  	return vma->vm_flags & VM_KVM_PROTECTED;
>  }
>  
> +void kvm_map_page(struct page *page, int nr_pages);
> +void kvm_unmap_page(struct page *page, int nr_pages);

This still does not seem right ;-)

And I still think that the map/unmap primitives should be a part of the
generic mm rather than exported by KVM.

> +
>  #ifdef CONFIG_SHMEM
>  /*
>   * The vma_is_shmem is not inline because it is used only by slow
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ec8cf9a40cfd..40974656cb43 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -627,6 +627,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>  		spin_unlock(vmf->ptl);
>  		count_vm_event(THP_FAULT_ALLOC);
>  		count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
> +
> +		/* Unmap page from direct mapping */
> +		if (vma_is_kvm_protected(vma))
> +			kvm_unmap_page(page, HPAGE_PMD_NR);
>  	}
>  
>  	return 0;
> @@ -1689,6 +1693,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			page_remove_rmap(page, true);
>  			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>  			VM_BUG_ON_PAGE(!PageHead(page), page);
> +
> +			/* Map the page back to the direct mapping */
> +			if (vma_is_kvm_protected(vma))
> +				kvm_map_page(page, HPAGE_PMD_NR);
>  		} else if (thp_migration_supported()) {
>  			swp_entry_t entry;
>  
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 9afccc36dbd2..c720e271448f 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -528,6 +528,8 @@ static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
>  		return NULL;
>  	if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
>  		return NULL;
> +	if (vma_is_kvm_protected(vma))
> +		return NULL;
>  	return vma;
>  }
>  
> diff --git a/mm/memory.c b/mm/memory.c
> index 2c9756b4e52f..e28bd5f902a7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1245,6 +1245,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>  				    likely(!(vma->vm_flags & VM_SEQ_READ)))
>  					mark_page_accessed(page);
>  			}
> +
> +			/* Map the page back to the direct mapping */
> +			if (vma_is_anonymous(vma) && vma_is_kvm_protected(vma))
> +				kvm_map_page(page, 1);
> +
>  			rss[mm_counter(page)]--;
>  			page_remove_rmap(page, false);
>  			if (unlikely(page_mapcount(page) < 0))
> @@ -3466,6 +3471,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	struct page *page;
>  	vm_fault_t ret = 0;
>  	pte_t entry;
> +	bool set = false;
>  
>  	/* File mapping without ->vm_ops ? */
>  	if (vma->vm_flags & VM_SHARED)
> @@ -3554,6 +3560,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>  	page_add_new_anon_rmap(page, vma, vmf->address, false);
>  	lru_cache_add_inactive_or_unevictable(page, vma);
> +	set = true;
>  setpte:
>  	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
>  
> @@ -3561,6 +3568,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	update_mmu_cache(vma, vmf->address, vmf->pte);
> unlock:
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> +
> +	/* Unmap page from direct mapping */
> +	if (vma_is_kvm_protected(vma) && set)
> +		kvm_unmap_page(page, 1);
> +
>  	return ret;
> release:
>  	put_page(page);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 9425260774a1..247548d6d24b 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1725,6 +1725,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  
>  static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg)
>  {
> +	/* TODO */
> +	if (vma_is_kvm_protected(vma))
> +		return true;
> +
>  	return vma_is_temporary_stack(vma);
>  }
>  
> diff --git a/virt/lib/mem_protected.c b/virt/lib/mem_protected.c
> index 1dfe82534242..9d2ef99285e5 100644
> --- a/virt/lib/mem_protected.c
> +++ b/virt/lib/mem_protected.c
> @@ -30,6 +30,27 @@ void kvm_unmap_page_atomic(void *vaddr)
>  }
>  EXPORT_SYMBOL_GPL(kvm_unmap_page_atomic);
>  
> +void kvm_map_page(struct page *page, int nr_pages)
> +{
> +	int i;
> +
> +	/* Clear page before returning it to the direct mapping */
> +	for (i = 0; i < nr_pages; i++) {
> +		void *p = kvm_map_page_atomic(page + i);
> +		memset(p, 0, PAGE_SIZE);
> +		kvm_unmap_page_atomic(p);
> +	}
> +
> +	kernel_map_pages(page, nr_pages, 1);
> +}
> +EXPORT_SYMBOL_GPL(kvm_map_page);
> +
> +void kvm_unmap_page(struct page *page, int nr_pages)
> +{
> +	kernel_map_pages(page, nr_pages, 0);
> +}
> +EXPORT_SYMBOL_GPL(kvm_unmap_page);
> +
>  int kvm_init_protected_memory(void)
>  {
>  	guest_map_ptes = kmalloc_array(num_possible_cpus(),
> -- 
> 2.26.2
> 

-- 
Sincerely yours,
Mike.