From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 05/39] KVM: arm64: Perform CMOs on locked memslots when userspace resets VCPUs
Date: Wed, 25 Aug 2021 17:17:41 +0100
Message-Id: <20210825161815.266051-6-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.33.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Userspace resets a VCPU that has already run by means of a
KVM_ARM_VCPU_INIT ioctl. This is usually done after a VM shutdown and
before the same VM is rebooted, and during this interval the VM memory can
be modified by userspace (for example, to copy the original guest kernel
image). In this situation, KVM unmaps the entire stage 2 to trigger stage 2
faults, which ensures that the guest has the same view of memory as the
host's userspace.

Unmapping stage 2 is not an option for locked memslots, so instead do the
cache maintenance the first time a VCPU is run, similar to what KVM does
when a memslot is locked.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_host.h |  3 ++-
 arch/arm64/kvm/mmu.c              | 13 ++++++++++++-
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ed67f914d169..68905bd47f85 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -114,7 +114,8 @@ struct kvm_arch_memory_slot {
 
 /* kvm->arch.mmu_pending_ops flags */
 #define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE	0
-#define KVM_MAX_MMU_PENDING_OPS	1
+#define KVM_LOCKED_MEMSLOT_INVAL_ICACHE	1
+#define KVM_MAX_MMU_PENDING_OPS	2
 
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 94fa08f3d9d3..f1f8a87550d1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -560,8 +560,16 @@ void stage2_unmap_vm(struct kvm *kvm)
 	spin_lock(&kvm->mmu_lock);
 
 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots)
+	kvm_for_each_memslot(memslot, slots) {
+		if (memslot_is_locked(memslot)) {
+			set_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE,
+				&kvm->arch.mmu_pending_ops);
+			set_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE,
+				&kvm->arch.mmu_pending_ops);
+			continue;
+		}
 		stage2_unmap_memslot(kvm, memslot);
+	}
 
 	spin_unlock(&kvm->mmu_lock);
 	mmap_read_unlock(current->mm);
@@ -1281,6 +1289,9 @@ void kvm_mmu_perform_pending_ops(struct kvm *kvm)
 		}
 	}
 
+	if (test_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE, &kvm->arch.mmu_pending_ops))
+		icache_inval_all_pou();
+
 	bitmap_zero(&kvm->arch.mmu_pending_ops, KVM_MAX_MMU_PENDING_OPS);
 
 out_unlock:
-- 
2.33.0