Date: Fri, 8 Nov 2019 12:19:20 +0100
From: Christoffer Dall
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Marc Zyngier, Ard Biesheuvel, sean.j.christopherson@intel.com, borntraeger@de.ibm.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: Memory regions and VMAs across architectures
Message-ID: <20191108111920.GD17608@e113682-lin.lund.arm.com>

Hi,

I had a look at our
relatively complicated logic in kvm_arch_prepare_memory_region(), and was
wondering if there was room to unify some of this handling between
architectures.  (If you haven't seen our implementation, you can find it
in virt/kvm/arm/mmu.c, and it has lovely ASCII art!)

I then had a look at the x86 code, but that doesn't actually do anything
when creating memory regions, which makes me wonder why the architectures
differ in this aspect.

The reason we added the logic that we have for arm/arm64 is that we don't
really want to take faults for I/O accesses.  I'm not actually sure if
this is a correctness thing or an optimization effort, and the original
commit message doesn't really explain.  Ard, you wrote that code, do you
recall the details?

In any case, what we do is, for each VMA backing a memslot, check whether
the memslot flags and the VMA flags are a reasonable match, try to detect
I/O mappings by looking for the VM_PFNMAP flag on the VMA, and
pre-populate the stage 2 page tables (our equivalent of EPT/NPT/...).

However, there are some things which are not clear to me:

First, what prevents user space from messing around with the VMAs after
kvm_arch_prepare_memory_region() completes?  If nothing, then what is the
value of the checks we perform wrt. the VMAs?

Second, why would arm/arm64 need special handling for I/O mappings
compared to other architectures, and how is this dealt with for
x86/s390/power/... ?

Thanks,

Christoffer