From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>
Cc: David Rientjes <rientjes@google.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Will Drewry <wad@chromium.org>,
	"Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
	"Kleen, Andi" <andi.kleen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	Mike Rapoport <rppt@kernel.org>,
	x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [RFCv2 16/16] mm: Do not use zero page for VM_KVM_PROTECTED VMAs
Date: Tue, 20 Oct 2020 09:18:59 +0300
Message-ID: <20201020061859.18385-17-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

The presence of zero pages in a mapping would disclose the content of
the mapping. Don't use them if KVM memory protection is enabled.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/s390/include/asm/pgtable.h | 2 +-
 include/linux/mm.h              | 4 ++--
 mm/huge_memory.c                | 3 +--
 mm/memory.c                     | 3 +--
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index b55561cc8786..72ca3b3f04cb 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -543,7 +543,7 @@ static inline int mm_alloc_pgste(struct mm_struct *mm)
  * In the case that a guest uses storage keys
  * faults should no longer be backed by zero pages
  */
-#define mm_forbids_zeropage	mm_has_pgste
+#define vma_forbids_zeropage(vma)	mm_has_pgste(vma->vm_mm)
 static inline int mm_uses_skeys(struct mm_struct *mm)
 {
 #ifdef CONFIG_PGSTE
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 74efc51e63f0..ee713b7c2819 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -130,8 +130,8 @@ extern int mmap_rnd_compat_bits __read_mostly;
  * s390 does this to prevent multiplexing of hardware bits
  * related to the physical page in case of virtualization.
  */
-#ifndef mm_forbids_zeropage
-#define mm_forbids_zeropage(X)	(0)
+#ifndef vma_forbids_zeropage
+#define vma_forbids_zeropage(vma)	vma_is_kvm_protected(vma)
 #endif
 
 /*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40974656cb43..383614b24c4f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -709,8 +709,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		return VM_FAULT_OOM;
 	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
 		return VM_FAULT_OOM;
-	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
-			!mm_forbids_zeropage(vma->vm_mm) &&
+	if (!(vmf->flags & FAULT_FLAG_WRITE) && !vma_forbids_zeropage(vma) &&
 			transparent_hugepage_use_zero_page()) {
 		pgtable_t pgtable;
 		struct page *zero_page;
diff --git a/mm/memory.c b/mm/memory.c
index e28bd5f902a7..9907ffe00490 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3495,8 +3495,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 		return 0;
 
 	/* Use the zero-page for reads */
-	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
-			!mm_forbids_zeropage(vma->vm_mm)) {
+	if (!(vmf->flags & FAULT_FLAG_WRITE) && !vma_forbids_zeropage(vma)) {
 		entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address),
 						vma->vm_page_prot));
 		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-- 
2.26.2