From: Christoffer Dall <christoffer.dall@arm.com>
To: Marc Zyngier <maz@kernel.org>
Cc: kvm@vger.kernel.org, Suzuki K Poulose <suzuki.poulose@arm.com>,
	stable@vger.kernel.org, James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 1/3] KVM: arm/arm64: Properly handle faulting of device mappings
Date: Fri, 13 Dec 2019 12:14:00 +0100	[thread overview]
Message-ID: <20191213111400.GI28840@e113682-lin.lund.arm.com> (raw)
In-Reply-To: <7f86824f4cbd17cd75ef347473e34278@www.loen.fr>

On Fri, Dec 13, 2019 at 09:28:59AM +0000, Marc Zyngier wrote:
> Hi Christoffer,
> 
> On 2019-12-13 08:29, Christoffer Dall wrote:
> > Hi Marc,
> > 
> > On Wed, Dec 11, 2019 at 04:56:48PM +0000, Marc Zyngier wrote:
> > > A device mapping is normally always mapped at Stage-2, since there
> > > is very little gain in having it faulted in.
> > 
> > It is actually becoming less clear to me what the real benefits of
> > pre-populating the stage 2 page table are, especially given that we can
> > provoke a situation where they're faulted in anyhow.  Do you recall if
> > we had any specific case that motivated us to pre-fault in the pages?
> 
> It's only a minor performance optimization that was introduced by Ard in
> 8eef91239e57d. Which makes sense for platform devices that have a single
> fixed location in memory. It makes slightly less sense for PCI, where
> you can move things around.

User space could still decide to move things around in its VA map even
if the device is fixed.

Anyway, I was thinking more about whether there was some sort of
device, like a framebuffer, which for example crosses page boundaries,
and where it would be visible to the user that there's a sudden
performance drop when operating the device across those boundaries.
Anything like that?

> 
> > > Nonetheless, it is possible to end-up in a situation where the
> > > device
> > > mapping has been removed from Stage-2 (userspace munmaped the VFIO
> > > region, and the MMU notifier did its job), but present in a
> > > userspace
> > > mapping (userpace has mapped it back at the same address). In such
> > > a situation, the device mapping will be demand-paged as the guest
> > > performs memory accesses.
> > > 
> > > This requires to be careful when dealing with mapping size, cache
> > > management, and to handle potential execution of a device mapping.
> > > 
> > > Cc: stable@vger.kernel.org
> > > Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  virt/kvm/arm/mmu.c | 21 +++++++++++++++++----
> > >  1 file changed, 17 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > > index a48994af70b8..0b32a904a1bb 100644
> > > --- a/virt/kvm/arm/mmu.c
> > > +++ b/virt/kvm/arm/mmu.c
> > > @@ -38,6 +38,11 @@ static unsigned long io_map_base;
> > >  #define KVM_S2PTE_FLAG_IS_IOMAP		(1UL << 0)
> > >  #define KVM_S2_FLAG_LOGGING_ACTIVE	(1UL << 1)
> > > 
> > > +static bool is_iomap(unsigned long flags)
> > > +{
> > > +	return flags & KVM_S2PTE_FLAG_IS_IOMAP;
> > > +}
> > > +
> > 
> > nit: I'm not really sure this indirection makes the code more readable,
> > but I guess that's a matter of taste.
> > 
> > >  static bool memslot_is_logging(struct kvm_memory_slot *memslot)
> > >  {
> > >  	return memslot->dirty_bitmap && !(memslot->flags &
> > > KVM_MEM_READONLY);
> > > @@ -1698,6 +1703,7 @@ static int user_mem_abort(struct kvm_vcpu
> > > *vcpu, phys_addr_t fault_ipa,
> > > 
> > >  	vma_pagesize = vma_kernel_pagesize(vma);
> > >  	if (logging_active ||
> > > +	    (vma->vm_flags & VM_PFNMAP) ||
> > 
> > WHat is actually the rationale for this?
> > 
> > Why is a huge mapping not permitted to device memory?
> > 
> > Are we guaranteed that VM_PFNMAP on the vma results in device mappings?
> > I'm not convinced this is the case, and it would be better if we can
> > stick to a single primitive (either kvm_is_device_pfn, or VM_PFNMAP) to
> > detect device mappings.
> 
> For now, I've tried to keep the two paths that deal with mapping devices
> (or rather, things that we interpret as devices) as close as possible.
> If we drop the "eager" mapping, then we're at liberty to restructure
> this in creative ways.
> 
> This includes potential huge mappings, but I'm not sure the rest of the
> kernel uses them for devices anyway (I need to find out).
> 
> > As a subsequent patch, I'd like to make sure that at the very least our
> > memslot prepare function follows the exact same logic for mapping device
> > memory as a fault-in approach does, or that we simply always fault pages
> > in.
> 
> As far as I can see, the two approaches are now identical. Am I missing
> something?
> And yes, getting rid of the eager mapping works for me.
> 

As far as I can tell, our user_mem_abort() uses gfn_to_pfn_prot(),
which goes down a long trail ending up at hva_to_pfn_remapped().
That might result in doing the same offset calculation that we do in
kvm_arch_prepare_memory_region(), but it also considers other scenarios.
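
For the sake of discussion, here's roughly the arithmetic I mean on the
kvm_arch_prepare_memory_region() side, written from memory as a sketch
rather than verbatim kernel code (the helper name is made up):

#include <linux/mm.h>

/*
 * Sketch only (helper name invented): how the eager VM_PFNMAP path
 * derives the physical address to map at Stage-2 from the VMA's
 * vm_pgoff, i.e. the calculation kvm_arch_prepare_memory_region()
 * effectively performs for each VM_PFNMAP VMA covering the memslot.
 */
static phys_addr_t pfnmap_hva_to_pa(struct vm_area_struct *vma,
				    unsigned long hva)
{
	/* Physical base described by the VMA ... */
	phys_addr_t pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;

	/* ... plus the offset of this hva within the VMA. */
	return pa + (hva - vma->vm_start);
}

hva_to_pfn_remapped(), by contrast, I believe resolves the same hva
through follow_pfn() on the VMA, so the two should normally agree, but
they take very different routes to get there.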

Even if we analyze all that and convince ourselves it's always the
same on arm64, the two code paths could change independently, leading
to really hard-to-debug differences in behavior, and nobody will
actively keep the two paths in sync.  I'd be fine with keeping the
performance optimization if we have good grounds for it, though,
provided we use the same translation mechanism for VM_PFNMAP as
user_mem_abort() does.

Am I missing something?

> > 
> > >  	    !fault_supports_stage2_huge_mapping(memslot, hva,
> > > vma_pagesize)) {
> > >  		force_pte = true;
> > >  		vma_pagesize = PAGE_SIZE;
> > > @@ -1760,6 +1766,9 @@ static int user_mem_abort(struct kvm_vcpu
> > > *vcpu, phys_addr_t fault_ipa,
> > >  			writable = false;
> > >  	}
> > > 
> > > +	if (exec_fault && is_iomap(flags))
> > > +		return -ENOEXEC;
> > > +
> > 
> > nit: why don't you just do this when checking kvm_is_device_pfn() and
> > avoid having logic in two places to deal with this case?
> 
> Good point. I've already sent the PR, but that could be a further cleanup.
> 

Sure, I can have a look when we agree on the above.

> > 
> > >  	spin_lock(&kvm->mmu_lock);
> > >  	if (mmu_notifier_retry(kvm, mmu_seq))
> > >  		goto out_unlock;
> > > @@ -1781,7 +1790,7 @@ static int user_mem_abort(struct kvm_vcpu
> > > *vcpu, phys_addr_t fault_ipa,
> > >  	if (writable)
> > >  		kvm_set_pfn_dirty(pfn);
> > > 
> > > -	if (fault_status != FSC_PERM)
> > > +	if (fault_status != FSC_PERM && !is_iomap(flags))
> > >  		clean_dcache_guest_page(pfn, vma_pagesize);
> > > 
> > >  	if (exec_fault)
> > > @@ -1948,9 +1957,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu
> > > *vcpu, struct kvm_run *run)
> > >  	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> > >  		if (is_iabt) {
> > >  			/* Prefetch Abort on I/O address */
> > > -			kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> > > -			ret = 1;
> > > -			goto out_unlock;
> > > +			ret = -ENOEXEC;
> > > +			goto out;
> > >  		}
> > > 
> > >  		/*
> > > @@ -1992,6 +2000,11 @@ int kvm_handle_guest_abort(struct kvm_vcpu
> > > *vcpu, struct kvm_run *run)
> > >  	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
> > >  	if (ret == 0)
> > >  		ret = 1;
> > > +out:
> > > +	if (ret == -ENOEXEC) {
> > > +		kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> > > +		ret = 1;
> > > +	}
> > >  out_unlock:
> > >  	srcu_read_unlock(&vcpu->kvm->srcu, idx);
> > >  	return ret;
> > > --
> > > 2.20.1
> > > 
> > 
> > I can't seem to decide for myself if I think there's a semantic
> > difference between trying to execute from somewhere the VMM has
> > explicitly told us is device memory and from somewhere which we happen
> > to have mapped with VM_PFNMAP from user space.  But I also can't seem to
> > really fault it (pun intended).  Thoughts?
> 
> The issue is that the VMM never really tells us whether something is a
> device mapping or not (the only exception being the GICv2 cpuif). Even
> with PFNMAP, we guess it (it could well be memory that lives outside
> of the linear mapping). I don't see a way to lift this ambiguity.
> 
> Ideally, faulting on executing a non-mapping should be offloaded to
> userspace for emulation, in line with your patches that offload
> non-emulated data accesses. That'd be a new ABI, and I can't imagine
> anyone willing to deal with it.

So what I was asking was whether it makes sense to inject the Prefetch
Abort in the case where the VMM has already told us that it doesn't
want to register anything backing the IPA (no memslot), or whether we
should instead return an error to user space, so that it can make a
decision (for example inject an external abort, which may have been
the right thing to do in the former case as well, but that could be
considered ABI now, so let's not
kick that hornet's nest).
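
To make that concrete, something along these lines is what I have in
mind, in the spirit of the NISV exits; the exit reason and run-struct
field below are invented purely for illustration and not a concrete
ABI proposal:

	/* in kvm_handle_guest_abort(), no-memslot handling (sketch) */
	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
		if (is_iabt) {
			/*
			 * Hypothetical (names invented): let the VMM
			 * decide what an instruction fetch from an IPA
			 * with no memslot means, e.g. inject an external
			 * abort, rather than KVM unconditionally
			 * injecting a prefetch abort itself.
			 */
			run->exit_reason = KVM_EXIT_ARM_IABT_NO_MEMSLOT;
			run->arm_iabt.fault_ipa = fault_ipa;
			ret = 0;	/* 0 == return to userspace */
			goto out_unlock;
		}

		/* data aborts keep following the existing MMIO path */
	}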

In any case, no strong feelings here, I just have a vague feeling that
injecting more prefetch aborts on execute-from-some-device is not
necessarily the right thing to do.


Thanks,

    Christoffer
