From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 20 May 2022 17:03:29 +0100
From: Alexandru Elisei
To: Will Deacon
Cc: kvmarm@lists.cs.columbia.edu, Ard Biesheuvel, Sean Christopherson,
	Andy Lutomirski, Catalin Marinas, James Morse, Chao Peng,
	Quentin Perret, Suzuki K Poulose, Michael Roth, Mark Rutland,
	Fuad Tabba, Oliver Upton, Marc Zyngier, kernel-team@android.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 33/89] KVM: arm64: Handle guest stage-2 page-tables entirely at EL2
Message-ID:
References: <20220519134204.5379-1-will@kernel.org> <20220519134204.5379-34-will@kernel.org>
In-Reply-To: <20220519134204.5379-34-will@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: kvm@vger.kernel.org

Hi,

On
Thu, May 19, 2022 at 02:41:08PM +0100, Will Deacon wrote:
> Now that EL2 is able to manage guest stage-2 page-tables, avoid
> allocating a separate MMU structure in the host and instead introduce a
> new fault handler which responds to guest stage-2 faults by sharing
> GUP-pinned pages with the guest via a hypercall. These pages are
> recovered (and unpinned) on guest teardown via the page reclaim
> hypercall.
>
> Signed-off-by: Will Deacon
> ---

[..]

> +static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> +			  unsigned long hva)
> +{
> +	struct kvm_hyp_memcache *hyp_memcache = &vcpu->arch.pkvm_memcache;
> +	struct mm_struct *mm = current->mm;
> +	unsigned int flags = FOLL_HWPOISON | FOLL_LONGTERM | FOLL_WRITE;
> +	struct kvm_pinned_page *ppage;
> +	struct kvm *kvm = vcpu->kvm;
> +	struct page *page;
> +	u64 pfn;
> +	int ret;
> +
> +	ret = topup_hyp_memcache(hyp_memcache, kvm_mmu_cache_min_pages(kvm));
> +	if (ret)
> +		return -ENOMEM;
> +
> +	ppage = kmalloc(sizeof(*ppage), GFP_KERNEL_ACCOUNT);
> +	if (!ppage)
> +		return -ENOMEM;
> +
> +	ret = account_locked_vm(mm, 1, true);
> +	if (ret)
> +		goto free_ppage;
> +
> +	mmap_read_lock(mm);
> +	ret = pin_user_pages(hva, 1, flags, &page, NULL);

When I implemented memory pinning via GUP for the KVM SPE series, I
discovered that the pages were regularly unmapped at stage 2 because of
automatic NUMA balancing, as change_prot_numa() ends up calling
mmu_notifier_invalidate_range_start(). I'm curious how you managed to
avoid that; I don't know my way around pKVM and can't seem to find where
that's handled.
Thanks,
Alex