From: Andrew Jones <drjones@redhat.com>
To: Marc Zyngier <maz@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org,
kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org, will@kernel.org,
qperret@google.com, dbrazdil@google.com,
Steven Price <steven.price@arm.com>,
Fuad Tabba <tabba@google.com>,
Srivatsa Vaddagiri <vatsa@codeaurora.org>,
Shanker R Donthineni <sdonthineni@nvidia.com>,
James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Alexandru Elisei <alexandru.elisei@arm.com>,
kernel-team@android.com
Subject: Re: [PATCH v2 04/16] KVM: arm64: Add MMIO checking infrastructure
Date: Wed, 6 Oct 2021 13:37:51 +0200
Message-ID: <20211006113751.damskwaz7akpk5fc@gator.home>
In-Reply-To: <20211004174849.2831548-5-maz@kernel.org>
On Mon, Oct 04, 2021 at 06:48:37PM +0100, Marc Zyngier wrote:
> Introduce the infrastructure required to identify an IPA region
> that is expected to be used as an MMIO window.
>
> This include mapping, unmapping and checking the regions. Nothing
> calls into it yet, so no expected functional change.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_host.h | 2 +
> arch/arm64/include/asm/kvm_mmu.h | 5 ++
> arch/arm64/kvm/mmu.c | 109 ++++++++++++++++++++++++++++++
> 3 files changed, 116 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index f63ca8fb4e58..ba9781eb84d6 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -125,6 +125,8 @@ struct kvm_arch {
> #define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER 0
> /* Memory Tagging Extension enabled for the guest */
> #define KVM_ARCH_FLAG_MTE_ENABLED 1
> + /* Gues has bought into the MMIO guard extension */
> +#define KVM_ARCH_FLAG_MMIO_GUARD 2
> unsigned long flags;
>
> /*
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 02d378887743..454a6265d45d 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -170,6 +170,11 @@ phys_addr_t kvm_mmu_get_httbr(void);
> phys_addr_t kvm_get_idmap_vector(void);
> int kvm_mmu_init(u32 *hyp_va_bits);
>
> +/* MMIO guard */
> +bool kvm_install_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa);
> +bool kvm_remove_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa);
> +bool kvm_check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa);
> +
> static inline void *__kvm_vector_slot2addr(void *base,
> enum arm64_hyp_spectre_vector slot)
> {
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 1a94a7ca48f2..2470a55ca675 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1172,6 +1172,115 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
> kvm_set_pfn_accessed(pte_pfn(pte));
> }
>
> +/* Replace this with something more structured once day */
one day
> +#define MMIO_NOTE (('M' << 24 | 'M' << 16 | 'I' << 8 | 'O') << 1)
Would it be better to have kvm_pgtable_stage2_annotate() shift its
input (<< 1) internally, instead of requiring every annotation to
remember that requirement? That said, the owner id is shifted by 2
bits, and I'm not sure why.
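Not a kernel patch, just an illustrative userspace sketch (all names
here are hypothetical stand-ins): the point of the << 1 is to keep
bit 0 -- the PTE valid bit -- clear, so a variant of the annotate
helper that shifts internally might look like:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t kvm_pte_t;

/* Bit 0 of a PTE is the valid bit: an annotation must leave it clear
 * so the walker sees an invalid (i.e. annotated) entry. */
#define PTE_VALID ((kvm_pte_t)1)

/* The raw 'MMIO' magic, without the explicit << 1 from the patch */
#define MMIO_MAGIC ((kvm_pte_t)('M' << 24 | 'M' << 16 | 'I' << 8 | 'O'))

/* Hypothetical variant of the annotate helper that shifts internally,
 * so individual annotations don't each need to remember the << 1. */
static kvm_pte_t stage2_make_annotation(kvm_pte_t value)
{
	return value << 1;
}
```

(Untested; only meant to show where the shift could live.)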
> +
> +bool kvm_install_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
> +{
> + struct kvm_mmu_memory_cache *memcache;
> + struct kvm_memory_slot *memslot;
> + struct kvm *kvm = vcpu->kvm;
> + int ret, idx;
> +
> + if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &kvm->arch.flags))
> + return false;
> +
> + /* Must be page-aligned */
> + if (ipa & ~PAGE_MASK)
> + return false;
> +
> + /*
> + * The page cannot be in a memslot. At some point, this will
> + * have to deal with device mappings though.
> + */
> + idx = srcu_read_lock(&kvm->srcu);
> + mutex_lock(&kvm->slots_arch_lock);
> + memslot = gfn_to_memslot(kvm, ipa >> PAGE_SHIFT);
> + if (memslot) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + /* Guest has direct access to the GICv2 virtual CPU interface */
> + if (irqchip_in_kernel(kvm) &&
> + kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2 &&
> + ipa == kvm->arch.vgic.vgic_cpu_base) {
> + ret = 0;
> + goto out;
> + }
> +
> + memcache = &vcpu->arch.mmu_page_cache;
> + if (kvm_mmu_topup_memory_cache(memcache,
> + kvm_mmu_cache_min_pages(kvm))) {
> + ret = -ENOMEM;
> + goto out;
> + }
> +
> + spin_lock(&kvm->mmu_lock);
> + ret = kvm_pgtable_stage2_annotate(vcpu->arch.hw_mmu->pgt,
> + ipa, PAGE_SIZE, memcache,
> + MMIO_NOTE);
> + spin_unlock(&kvm->mmu_lock);
> +
> +out:
> + mutex_unlock(&kvm->slots_arch_lock);
> + srcu_read_unlock(&kvm->srcu, idx);
> + return ret == 0;
I guess the callers need this to return a boolean? It just seems odd
that pains were taken above to set ret to -EINVAL/-ENOMEM, only to
collapse that to true/false here.
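For what it's worth, a userspace sketch of the alternative (hypothetical
names throughout): keep the int error code in the helper and let the
hypercall boundary be the one place that flattens it to success/failure:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for the real helper: returns 0 on success or a
 * negative errno, rather than collapsing to bool internally. */
static int install_ioguard_page(unsigned long ipa)
{
	if (ipa & 0xfffUL)	/* not page-aligned (assuming 4K pages) */
		return -EINVAL;
	return 0;
}

/* The hypercall boundary is then the single place that flattens the
 * error code into the guest-visible success/failure. */
static bool handle_ioguard_map(unsigned long ipa)
{
	return install_ioguard_page(ipa) == 0;
}
```

That way the -EINVAL/-ENOMEM distinction survives in case it's ever
wanted for tracing or a richer return value.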
> +}
> +
> +static bool __check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
> +{
> + kvm_pte_t pte = 0;
> + u32 level = 0;
> + int ret;
> +
> + lockdep_assert_held(&vcpu->kvm->mmu_lock);
> +
> + ret = kvm_pgtable_get_leaf(vcpu->arch.hw_mmu->pgt, ipa, &pte, &level);
> + VM_BUG_ON(ret);
> + VM_BUG_ON(level >= KVM_PGTABLE_MAX_LEVELS);
> +
> + /* Must be a PAGE_SIZE mapping with our annotation */
> + return (BIT(ARM64_HW_PGTABLE_LEVEL_SHIFT(level)) == PAGE_SIZE &&
> + pte == MMIO_NOTE);
> +}
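(Aside for readers following along: with 4K pages, only a level-3 entry
has a block size equal to PAGE_SIZE, so the check above effectively
demands a final-level mapping. A standalone sketch of the arithmetic,
mirroring the kernel's ARM64_HW_PGTABLE_LEVEL_SHIFT and assuming a 4K
granule:)

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12			/* assuming a 4K granule */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Mirrors asm/pgtable-hwdef.h: each level resolves PAGE_SHIFT - 3 bits
 * of the address, on top of the final PAGE_SHIFT page-offset bits. */
#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n) ((PAGE_SHIFT - 3) * (4 - (n)) + 3)

/* Size of the block mapped by one entry at the given level */
static uint64_t block_size(uint32_t level)
{
	return 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(level);
}
```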
> +
> +bool kvm_remove_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
> +{
> + bool ret;
> +
> + if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
> + return false;
> +
> + /* Keep the PT locked across the two walks */
> + spin_lock(&vcpu->kvm->mmu_lock);
> +
> + ret = __check_ioguard_page(vcpu, ipa);
> + if (ret) /* Drop the annotation */
> + kvm_pgtable_stage2_unmap(vcpu->arch.hw_mmu->pgt,
> + ALIGN_DOWN(ipa, PAGE_SIZE), PAGE_SIZE);
How about
if (ret) {
/* Drop the annotation */
kvm_pgtable_stage2_unmap(vcpu->arch.hw_mmu->pgt,
ALIGN_DOWN(ipa, PAGE_SIZE), PAGE_SIZE);
}
to be a bit easier to read.
> +
> + spin_unlock(&vcpu->kvm->mmu_lock);
> + return ret;
> +}
> +
> +bool kvm_check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
> +{
> + bool ret;
> +
> + if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
> + return true;
> +
> + spin_lock(&vcpu->kvm->mmu_lock);
> + ret = __check_ioguard_page(vcpu, ipa & PAGE_MASK);
> + spin_unlock(&vcpu->kvm->mmu_lock);
> +
> + if (!ret)
> + kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> +
> + return ret;
> +}
> +
> /**
> * kvm_handle_guest_abort - handles all 2nd stage aborts
> * @vcpu: the VCPU pointer
> --
> 2.30.2
>
Besides the nits,
Reviewed-by: Andrew Jones <drjones@redhat.com>
Thanks,
drew