From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov"
Subject: [RFC 09/16] KVM: Protected memory extension
Date: Fri, 22 May 2020 15:52:07 +0300
Message-Id: <20200522125214.31348-10-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Add infrastructure that handles the protected memory extension.

Arch-specific code has to provide the hypercalls and define a non-zero
VM_KVM_PROTECTED.
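For illustration, a minimal sketch of what an arch port would provide.
This is not part of the patch: the flag bit chosen and the handler name
are assumptions, and a real port picks its own hypercall plumbing.

	/*
	 * Hypothetical arch-side wiring (illustrative only): define a
	 * non-zero VM_KVM_PROTECTED using an unused high VMA flag bit,
	 * and route a guest hypercall to the new entry point.
	 */
	#define VM_KVM_PROTECTED	VM_HIGH_ARCH_4

	/*
	 * Made-up hypercall handler: the guest asks the host to
	 * (un)protect a range of its memory. The name and argument
	 * layout are assumptions for the sketch.
	 */
	static int handle_mem_protect(struct kvm_vcpu *vcpu,
				      unsigned long gfn,
				      unsigned long npages,
				      bool protect)
	{
		return kvm_protect_memory(vcpu->kvm, gfn, npages, protect);
	}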
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/kvm_host.h |   4 ++
 mm/mprotect.c            |   1 +
 virt/kvm/kvm_main.c      | 131 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 136 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bd0bb600f610..d7072f6d6aa0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -700,6 +700,10 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm);
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *slot);
 
+int kvm_protect_all_memory(struct kvm *kvm);
+int kvm_protect_memory(struct kvm *kvm,
+		       unsigned long gfn, unsigned long npages, bool protect);
+
 int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 			    struct page **pages, int nr_pages);
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 494192ca954b..552be3b4c80a 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -505,6 +505,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	vm_unacct_memory(charged);
 	return error;
 }
+EXPORT_SYMBOL_GPL(mprotect_fixup);
 
 /*
  * pkey==-1 when doing a legacy mprotect()
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 530af95efdf3..07d45da5d2aa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -155,6 +155,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
 static unsigned long long kvm_createvm_count;
 static unsigned long long kvm_active_vms;
 
+static int protect_memory(unsigned long start, unsigned long end, bool protect);
+
 __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 		unsigned long start, unsigned long end, bool blockable)
 {
@@ -1309,6 +1311,14 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (r)
 		goto out_bitmap;
 
+	if (mem->memory_size && kvm->mem_protected) {
+		r = protect_memory(new.userspace_addr,
+				   new.userspace_addr + new.npages * PAGE_SIZE,
+				   true);
+		if (r)
+			goto out_bitmap;
+	}
+
 	if (old.dirty_bitmap && !new.dirty_bitmap)
 		kvm_destroy_dirty_bitmap(&old);
 	return 0;
@@ -2652,6 +2662,127 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
 
+static int protect_memory(unsigned long start, unsigned long end, bool protect)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma, *prev;
+	int ret;
+
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	ret = -ENOMEM;
+	vma = find_vma(current->mm, start);
+	if (!vma)
+		goto out;
+
+	ret = -EINVAL;
+	if (vma->vm_start > start)
+		goto out;
+
+	if (start > vma->vm_start)
+		prev = vma;
+	else
+		prev = vma->vm_prev;
+
+	ret = 0;
+	while (true) {
+		unsigned long newflags, tmp;
+
+		tmp = vma->vm_end;
+		if (tmp > end)
+			tmp = end;
+
+		newflags = vma->vm_flags;
+		if (protect)
+			newflags |= VM_KVM_PROTECTED;
+		else
+			newflags &= ~VM_KVM_PROTECTED;
+
+		/* The VMA has been handled as part of another memslot */
+		if (newflags == vma->vm_flags)
+			goto next;
+
+		ret = mprotect_fixup(vma, &prev, start, tmp, newflags);
+		if (ret)
+			goto out;
+
+next:
+		start = tmp;
+		if (start < prev->vm_end)
+			start = prev->vm_end;
+
+		if (start >= end)
+			goto out;
+
+		vma = prev->vm_next;
+		if (!vma || vma->vm_start != start) {
+			ret = -ENOMEM;
+			goto out;
+		}
+	}
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+int kvm_protect_memory(struct kvm *kvm,
+		       unsigned long gfn, unsigned long npages, bool protect)
+{
+	struct kvm_memory_slot *memslot;
+	unsigned long start, end;
+	gfn_t numpages;
+
+	if (!VM_KVM_PROTECTED)
+		return -KVM_ENOSYS;
+
+	if (!npages)
+		return 0;
+
+	memslot = gfn_to_memslot(kvm, gfn);
+	/* Not backed by memory. It's okay. */
+	if (!memslot)
+		return 0;
+
+	start = gfn_to_hva_many(memslot, gfn, &numpages);
+	end = start + npages * PAGE_SIZE;
+
+	/* XXX: Share range across memory slots? */
+	if (WARN_ON(numpages < npages))
+		return -EINVAL;
+
+	return protect_memory(start, end, protect);
+}
+EXPORT_SYMBOL_GPL(kvm_protect_memory);
+
+int kvm_protect_all_memory(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	unsigned long start, end;
+	int i, ret = 0;
+
+	if (!VM_KVM_PROTECTED)
+		return -KVM_ENOSYS;
+
+	mutex_lock(&kvm->slots_lock);
+	kvm->mem_protected = true;
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			start = memslot->userspace_addr;
+			end = start + memslot->npages * PAGE_SIZE;
+			ret = protect_memory(start, end, true);
+			if (ret)
+				goto out;
+		}
+	}
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_protect_all_memory);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu)
 {
 	if (!vcpu->sigset_active)
-- 
2.26.2
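A possible call path for the two exported entry points, sketched from a
hypothetical arch hypercall dispatcher. The KVM_HC_* names and the a0/a1
argument registers below are assumptions for the sketch, not defined by
this patch:

	/* In the arch's hypercall dispatch switch (names are made up): */
	case KVM_HC_ENABLE_MEM_PROTECTED:
		/* Guest opts in: mark every memslot's VMAs protected. */
		ret = kvm_protect_all_memory(vcpu->kvm);
		break;
	case KVM_HC_MEM_SHARE:
		/* Guest shares a range back: a0 = gfn, a1 = npages. */
		ret = kvm_protect_memory(vcpu->kvm, a0, a1, false);
		break;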