From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 12/18] KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks
To: Sean Christopherson, Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
References: <20210326021957.1424875-1-seanjc@google.com> <20210326021957.1424875-13-seanjc@google.com>
From: Paolo Bonzini
Message-ID: <26c87b3e-7a89-6cfa-1410-25486b114f32@redhat.com>
Date: Wed, 31 Mar 2021 09:41:34 +0200
In-Reply-To: <20210326021957.1424875-13-seanjc@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Mailing-List: linux-kernel@vger.kernel.org

On 26/03/21 03:19, Sean Christopherson wrote:
> Move MIPS to the gfn-based MMU notifier APIs, which do the hva->gfn
> lookup in common code, and whose code is nearly identical to MIPS'
> lookup.
>
> No meaningful functional change intended, though the exact order of
> operations is slightly different since the memslot lookups occur before
> calling into arch code.
>
> Signed-off-by: Sean Christopherson

I'll post a couple patches to enable more coalescing of the flushes,
but this particular patch is okay.
Paolo

> ---
>  arch/mips/include/asm/kvm_host.h |  1 +
>  arch/mips/kvm/mmu.c              | 97 ++++++--------------------------
>  2 files changed, 17 insertions(+), 81 deletions(-)
>
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index feaa77036b67..374a3c8806e8 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -967,6 +967,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
>  					       bool write);
>  
>  #define KVM_ARCH_WANT_MMU_NOTIFIER
> +#define KVM_ARCH_WANT_NEW_MMU_NOTIFIER_APIS
>  
>  /* Emulation */
>  int kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu, u32 *out);
> diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
> index 3dabeda82458..3dc885df2e32 100644
> --- a/arch/mips/kvm/mmu.c
> +++ b/arch/mips/kvm/mmu.c
> @@ -439,85 +439,36 @@ static int kvm_mips_mkold_gpa_pt(struct kvm *kvm, gfn_t start_gfn,
>  			       end_gfn << PAGE_SHIFT);
>  }
>  
> -static int handle_hva_to_gpa(struct kvm *kvm,
> -			     unsigned long start,
> -			     unsigned long end,
> -			     int (*handler)(struct kvm *kvm, gfn_t gfn,
> -					    gpa_t gfn_end,
> -					    struct kvm_memory_slot *memslot,
> -					    void *data),
> -			     void *data)
> +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	struct kvm_memslots *slots;
> -	struct kvm_memory_slot *memslot;
> -	int ret = 0;
> -
> -	slots = kvm_memslots(kvm);
> -
> -	/* we only care about the pages that the guest sees */
> -	kvm_for_each_memslot(memslot, slots) {
> -		unsigned long hva_start, hva_end;
> -		gfn_t gfn, gfn_end;
> -
> -		hva_start = max(start, memslot->userspace_addr);
> -		hva_end = min(end, memslot->userspace_addr +
> -					(memslot->npages << PAGE_SHIFT));
> -		if (hva_start >= hva_end)
> -			continue;
> -
> -		/*
> -		 * {gfn(page) | page intersects with [hva_start, hva_end)} =
> -		 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
> -		 */
> -		gfn = hva_to_gfn_memslot(hva_start, memslot);
> -		gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
> -
> -		ret |= handler(kvm, gfn, gfn_end, memslot, data);
> -	}
> -
> -	return ret;
> -}
> -
> -
> -static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -				 struct kvm_memory_slot *memslot, void *data)
> -{
> -	kvm_mips_flush_gpa_pt(kvm, gfn, gfn_end);
> -	return 1;
> -}
> -
> -int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
> -			unsigned flags)
> -{
> -	handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
> +	kvm_mips_flush_gpa_pt(kvm, range->start, range->end);
>  
>  	kvm_mips_callbacks->flush_shadow_all(kvm);
>  	return 0;
>  }
>  
> -static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -				struct kvm_memory_slot *memslot, void *data)
> +static bool __kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	gpa_t gpa = gfn << PAGE_SHIFT;
> -	pte_t hva_pte = *(pte_t *)data;
> +	gpa_t gpa = range->start << PAGE_SHIFT;
> +	pte_t hva_pte = range->pte;
>  	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
>  	pte_t old_pte;
>  
>  	if (!gpa_pte)
> -		return 0;
> +		return false;
>  
>  	/* Mapping may need adjusting depending on memslot flags */
>  	old_pte = *gpa_pte;
> -	if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
> +	if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
>  		hva_pte = pte_mkclean(hva_pte);
> -	else if (memslot->flags & KVM_MEM_READONLY)
> +	else if (range->slot->flags & KVM_MEM_READONLY)
>  		hva_pte = pte_wrprotect(hva_pte);
>  
>  	set_pte(gpa_pte, hva_pte);
>  
>  	/* Replacing an absent or old page doesn't need flushes */
>  	if (!pte_present(old_pte) || !pte_young(old_pte))
> -		return 0;
> +		return false;
>  
>  	/* Pages swapped, aged, moved, or cleaned require flushes */
>  	return !pte_present(hva_pte) ||
> @@ -526,27 +477,21 @@ static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
>  	       (pte_dirty(old_pte) && !pte_dirty(hva_pte));
>  }
>  
> -int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
> +bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	unsigned long end = hva + PAGE_SIZE;
> -	int ret;
> -
> -	ret = handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pte);
> -	if (ret)
> +	if (__kvm_set_spte_gfn(kvm, range))
>  		kvm_mips_callbacks->flush_shadow_all(kvm);
> -	return 0;
> +	return false;
>  }
>  
> -static int kvm_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -			       struct kvm_memory_slot *memslot, void *data)
> +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	return kvm_mips_mkold_gpa_pt(kvm, gfn, gfn_end);
> +	return kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
>  }
>  
> -static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -				    struct kvm_memory_slot *memslot, void *data)
> +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	gpa_t gpa = gfn << PAGE_SHIFT;
> +	gpa_t gpa = range->start << PAGE_SHIFT;
>  	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
>  
>  	if (!gpa_pte)
> @@ -554,16 +499,6 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
>  	return pte_young(*gpa_pte);
>  }
>  
> -int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
> -{
> -	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
> -}
> -
> -int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
> -{
> -	return handle_hva_to_gpa(kvm, hva, hva, kvm_test_age_hva_handler, NULL);
> -}
> -
>  /**
>   * _kvm_mips_map_page_fast() - Fast path GPA fault handler.
>   * @vcpu:		VCPU pointer.
> From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.1 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,NICE_REPLY_A,SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1 autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85A69C433DB for ; Wed, 31 Mar 2021 07:44:32 +0000 (UTC) Received: from mm01.cs.columbia.edu (mm01.cs.columbia.edu [128.59.11.253]) by mail.kernel.org (Postfix) with ESMTP id E1D6E619BD for ; Wed, 31 Mar 2021 07:44:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E1D6E619BD Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvmarm-bounces@lists.cs.columbia.edu Received: from localhost (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id 5B6E54B394; Wed, 31 Mar 2021 03:44:31 -0400 (EDT) X-Virus-Scanned: at lists.cs.columbia.edu Authentication-Results: mm01.cs.columbia.edu (amavisd-new); dkim=softfail (fail, message has been altered) header.i=@redhat.com Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YHgNLk4Uih24; Wed, 31 Mar 2021 03:44:29 -0400 (EDT) Received: from mm01.cs.columbia.edu (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id D94104B37F; Wed, 31 Mar 2021 03:44:29 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id F11C54B2C7 for ; Wed, 31 Mar 2021 03:44:28 -0400 (EDT) X-Virus-Scanned: at lists.cs.columbia.edu Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu 
[127.0.0.1]) (amavisd-new, port 10024) with ESMTP id fgFYDvzyGs+O for ; Wed, 31 Mar 2021 03:44:27 -0400 (EDT) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mm01.cs.columbia.edu (Postfix) with ESMTP id B93214B294 for ; Wed, 31 Mar 2021 03:44:27 -0400 (EDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1617176667; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=HCixOYNQFJEmCwNjkKeRd8lCZMOXCEBzJPf2kb2CuW8=; b=SkR7Nw7IyJ+G51SVOif8y2tkitAY+0SvmBDqn8EOhG0SOdrh6U9GnriE2ZEANeDGVg1D6J mwgG8wj0qxP+e1I6sjTvKcsxkxeZhtkn/o6n1sakCnrJ9D27UkTSXSSfHgYzUxYHUQ4tA9 fLITZPiESpKTCbP1KKBY1dV2iRGbniE= Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-386-wVBasrAyPKmr3fhwN5Zbfg-1; Wed, 31 Mar 2021 03:41:41 -0400 X-MC-Unique: wVBasrAyPKmr3fhwN5Zbfg-1 Received: by mail-wm1-f69.google.com with SMTP id a3so116027wmm.0 for ; Wed, 31 Mar 2021 00:41:40 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:subject:to:cc:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-language :content-transfer-encoding; bh=HCixOYNQFJEmCwNjkKeRd8lCZMOXCEBzJPf2kb2CuW8=; b=O86AfIAWKTyRr2O92NSZCXbrSz7W9rBLkOV6ZIhqppMVe3ihE7WJL8wGNEWDJecHkD qUClszo7nNyPjuFMHEyYj9rEqQM8Nx4d+wAhcGffBpo0chY7PZL/SyYGrof44Hsv1IPD Vg6/32LaqyxJyv1oK/v4Dt3iD0m6THmDANS+D4RgDDKFSZhkw7fnw56WwgRWwpeL+com iWaZpNpYQSQxj8Z7WDYJs/+qt6MfL7TfdJ2AVFYkP9pceK4b1CeGdc1pI06GAKLpRCz5 3oVAlUJSoPiYTcPt3aW1Va01bJMUv/MGQtemrygriA/5vU89iiPh7qryeePKjfV/33dG EOfA== X-Gm-Message-State: AOAM530N6+4lvqnH8WyeEI7kvVxsZjRySVkw5IZpR81WGm5YKHdHjr6P 
goZA3RW6Qg5QLCCuCC6UnPUQD/vqvSRvLX9t11zHUpwmuU/4iXDjLD3W941Ag71OoSHVlj/l8bx cZBTAXwWkwXVFjWqVtEJIP7Q+ X-Received: by 2002:a1c:b789:: with SMTP id h131mr1904509wmf.106.1617176499782; Wed, 31 Mar 2021 00:41:39 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxiHmYD2Le+p2rRCHafUQsuCWJvR5BP1vyRx1fx+QhpZoWEEC9/7YIvjmyyJZB+n8xVLiTOmA== X-Received: by 2002:a1c:b789:: with SMTP id h131mr1904488wmf.106.1617176499531; Wed, 31 Mar 2021 00:41:39 -0700 (PDT) Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e? ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e]) by smtp.gmail.com with ESMTPSA id a131sm2662492wmc.48.2021.03.31.00.41.36 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Wed, 31 Mar 2021 00:41:38 -0700 (PDT) Subject: Re: [PATCH 12/18] KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks To: Sean Christopherson , Marc Zyngier , Huacai Chen , Aleksandar Markovic , Paul Mackerras References: <20210326021957.1424875-1-seanjc@google.com> <20210326021957.1424875-13-seanjc@google.com> From: Paolo Bonzini Message-ID: <26c87b3e-7a89-6cfa-1410-25486b114f32@redhat.com> Date: Wed, 31 Mar 2021 09:41:34 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.0 MIME-Version: 1.0 In-Reply-To: <20210326021957.1424875-13-seanjc@google.com> Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Language: en-US Cc: Wanpeng Li , kvm@vger.kernel.org, Joerg Roedel , linux-mips@vger.kernel.org, kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Ben Gardon , Vitaly Kuznetsov , kvmarm@lists.cs.columbia.edu, Jim Mattson X-BeenThere: kvmarm@lists.cs.columbia.edu X-Mailman-Version: 2.1.14 Precedence: list List-Id: Where KVM/ARM decisions are made List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Transfer-Encoding: 7bit Content-Type: text/plain; 
charset="us-ascii"; Format="flowed" Errors-To: kvmarm-bounces@lists.cs.columbia.edu Sender: kvmarm-bounces@lists.cs.columbia.edu On 26/03/21 03:19, Sean Christopherson wrote: > Move MIPS to the gfn-based MMU notifier APIs, which do the hva->gfn > lookup in common code, and whose code is nearly identical to MIPS' > lookup. > > No meaningful functional change intended, though the exact order of > operations is slightly different since the memslot lookups occur before > calling into arch code. > > Signed-off-by: Sean Christopherson I'll post a couple patches to enable more coalescing of the flushes, but this particular patch is okay. Paolo > --- > arch/mips/include/asm/kvm_host.h | 1 + > arch/mips/kvm/mmu.c | 97 ++++++-------------------------- > 2 files changed, 17 insertions(+), 81 deletions(-) > > diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h > index feaa77036b67..374a3c8806e8 100644 > --- a/arch/mips/include/asm/kvm_host.h > +++ b/arch/mips/include/asm/kvm_host.h > @@ -967,6 +967,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu, > bool write); > > #define KVM_ARCH_WANT_MMU_NOTIFIER > +#define KVM_ARCH_WANT_NEW_MMU_NOTIFIER_APIS > > /* Emulation */ > int kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu, u32 *out); > diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c > index 3dabeda82458..3dc885df2e32 100644 > --- a/arch/mips/kvm/mmu.c > +++ b/arch/mips/kvm/mmu.c > @@ -439,85 +439,36 @@ static int kvm_mips_mkold_gpa_pt(struct kvm *kvm, gfn_t start_gfn, > end_gfn << PAGE_SHIFT); > } > > -static int handle_hva_to_gpa(struct kvm *kvm, > - unsigned long start, > - unsigned long end, > - int (*handler)(struct kvm *kvm, gfn_t gfn, > - gpa_t gfn_end, > - struct kvm_memory_slot *memslot, > - void *data), > - void *data) > +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) > { > - struct kvm_memslots *slots; > - struct kvm_memory_slot *memslot; > - int ret = 0; > - > - slots = 
kvm_memslots(kvm); > - > - /* we only care about the pages that the guest sees */ > - kvm_for_each_memslot(memslot, slots) { > - unsigned long hva_start, hva_end; > - gfn_t gfn, gfn_end; > - > - hva_start = max(start, memslot->userspace_addr); > - hva_end = min(end, memslot->userspace_addr + > - (memslot->npages << PAGE_SHIFT)); > - if (hva_start >= hva_end) > - continue; > - > - /* > - * {gfn(page) | page intersects with [hva_start, hva_end)} = > - * {gfn_start, gfn_start+1, ..., gfn_end-1}. > - */ > - gfn = hva_to_gfn_memslot(hva_start, memslot); > - gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot); > - > - ret |= handler(kvm, gfn, gfn_end, memslot, data); > - } > - > - return ret; > -} > - > - > -static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, > - struct kvm_memory_slot *memslot, void *data) > -{ > - kvm_mips_flush_gpa_pt(kvm, gfn, gfn_end); > - return 1; > -} > - > -int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, > - unsigned flags) > -{ > - handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL); > + kvm_mips_flush_gpa_pt(kvm, range->start, range->end); > > kvm_mips_callbacks->flush_shadow_all(kvm); > return 0; > } > > -static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, > - struct kvm_memory_slot *memslot, void *data) > +static bool __kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) > { > - gpa_t gpa = gfn << PAGE_SHIFT; > - pte_t hva_pte = *(pte_t *)data; > + gpa_t gpa = range->start << PAGE_SHIFT; > + pte_t hva_pte = range->pte; > pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa); > pte_t old_pte; > > if (!gpa_pte) > - return 0; > + return false; > > /* Mapping may need adjusting depending on memslot flags */ > old_pte = *gpa_pte; > - if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte)) > + if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte)) > hva_pte = pte_mkclean(hva_pte); > - else if 
(memslot->flags & KVM_MEM_READONLY) > + else if (range->slot->flags & KVM_MEM_READONLY) > hva_pte = pte_wrprotect(hva_pte); > > set_pte(gpa_pte, hva_pte); > > /* Replacing an absent or old page doesn't need flushes */ > if (!pte_present(old_pte) || !pte_young(old_pte)) > - return 0; > + return false; > > /* Pages swapped, aged, moved, or cleaned require flushes */ > return !pte_present(hva_pte) || > @@ -526,27 +477,21 @@ static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, > (pte_dirty(old_pte) && !pte_dirty(hva_pte)); > } > > -int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte) > +bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) > { > - unsigned long end = hva + PAGE_SIZE; > - int ret; > - > - ret = handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pte); > - if (ret) > + if (__kvm_set_spte_gfn(kvm, range)) > kvm_mips_callbacks->flush_shadow_all(kvm); > - return 0; > + return false; > } > > -static int kvm_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, > - struct kvm_memory_slot *memslot, void *data) > +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) > { > - return kvm_mips_mkold_gpa_pt(kvm, gfn, gfn_end); > + return kvm_mips_mkold_gpa_pt(kvm, range->start, range->end); > } > > -static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, > - struct kvm_memory_slot *memslot, void *data) > +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) > { > - gpa_t gpa = gfn << PAGE_SHIFT; > + gpa_t gpa = range->start << PAGE_SHIFT; > pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa); > > if (!gpa_pte) > @@ -554,16 +499,6 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, > return pte_young(*gpa_pte); > } > > -int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end) > -{ > - return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL); > -} > - > -int kvm_test_age_hva(struct kvm *kvm, unsigned 
long hva) > -{ > - return handle_hva_to_gpa(kvm, hva, hva, kvm_test_age_hva_handler, NULL); > -} > - > /** > * _kvm_mips_map_page_fast() - Fast path GPA fault handler. > * @vcpu: VCPU pointer. > _______________________________________________ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.3 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,NICE_REPLY_A,SPF_HELO_NONE,SPF_PASS, USER_AGENT_SANE_1 autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8697DC433DB for ; Wed, 31 Mar 2021 07:44:22 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C2064619B1 for ; Wed, 31 Mar 2021 07:44:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C2064619B1 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Type: Content-Transfer-Encoding:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:In-Reply-To:MIME-Version:Date:Message-ID:From: References:Cc:To:Subject:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; 
bh=dZgchw5F3EGOxrRjjaySNN+Y5wn7uaAInBDIggl3t6E=; b=VHtY9ILls8pdGH+7Na95jpYf3 VurDHKtg88b7fY54zi2Kp5WpyKit49EmasGJGltmmZGXeiGLyZtkVrHE6TSxmJjFpcxt+5tl30UEy HYqPGYyM/IegyNjSJYYv615XiKt8oU9Qdo3tj5AWL/X2X6LdWU+8sJnvmgy9ZLqGwdUSPMiu0dCCd xwOsUG792z/r/4Kxrlb/NpEK7iMP/RZLVV20YJAnKBVIj2EU2W5MEnZef3LCL7CyraCr/M0w06UMe kKhhKjxjU3A4Q6gp9OyeYEqmArUEBYH7Fkxrp9yt9k5UVlwZhLdrSGpwe9LBAM/TD3EBv7QsvSqtd UHwyd9b0Q==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lRVUi-005p9A-AY; Wed, 31 Mar 2021 07:42:29 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lRVUd-005p7K-SY for linux-arm-kernel@lists.infradead.org; Wed, 31 Mar 2021 07:42:26 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1617176542; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=HCixOYNQFJEmCwNjkKeRd8lCZMOXCEBzJPf2kb2CuW8=; b=h4MJeym0pQC58EWxdfbCrpqhZJh7EReaizmu8gFJBuT+9wMqmeyfdkO1WfNK0ytwWuqZA4 9Xufpt2PJeA6QK45I/7jmLio5D4h6qXL/kSIbX2ie7okhvSP6j6cxqxQJCyOOdWifBf5WJ Iot8M3UbofT4aKEC1jeTampm/8P0MrQ= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-386-eUpeOi6VNAyDWByAXzfpOw-1; Wed, 31 Mar 2021 03:41:40 -0400 X-MC-Unique: eUpeOi6VNAyDWByAXzfpOw-1 Received: by mail-wm1-f70.google.com with SMTP id n17so115797wmi.2 for ; Wed, 31 Mar 2021 00:41:40 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:subject:to:cc:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-language :content-transfer-encoding; 
bh=HCixOYNQFJEmCwNjkKeRd8lCZMOXCEBzJPf2kb2CuW8=; b=EBcje7m9JyARVS2riX9vSdt4sDTAq8vvxGn1mGua2d71RTMqqITOHszNikKOQp2lXm 3kAbe+a8T8ecGRcZlhSyEH/c8r+RUekgjIAzB5rsWamjVNpEa6Qq36vztLEn5qemgeYn y2Ofb87pxT+oyUaij14dkSMXSVe81+PPlNulIZ01XnCq/9E+XmLSLzg1It3JuiJwlgjr OB5uEA1DtFysYhueFaG/Pxw5FHpaZ1SJhKrfkWLjVceKze6tzbBF6qAf8UjeaseXD8Gm Vc1T2S+C9C60HuuV2f1gDJTlmz4Og/bSHhAUGSHfXC0J99JgcepeBHWY19m02V7kUBcA 6xag== X-Gm-Message-State: AOAM531N0RYDHInn3nblZMlaWiabW5D2h/GwE4d4xXQ/Gcf7BlLhJImH QiaqGC5vtvNwuvgoGw5/B0L3q3WQ4U3l6A1NVoaqxBdNRNT+xBUy4BQCUpVrC50SRb1C8FSrGNB AfFuoZlmaYlJMOKbmTqM2w/3AJMUA2UTyyFM= X-Received: by 2002:a1c:b789:: with SMTP id h131mr1904515wmf.106.1617176499784; Wed, 31 Mar 2021 00:41:39 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxiHmYD2Le+p2rRCHafUQsuCWJvR5BP1vyRx1fx+QhpZoWEEC9/7YIvjmyyJZB+n8xVLiTOmA== X-Received: by 2002:a1c:b789:: with SMTP id h131mr1904488wmf.106.1617176499531; Wed, 31 Mar 2021 00:41:39 -0700 (PDT) Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e? ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e]) by smtp.gmail.com with ESMTPSA id a131sm2662492wmc.48.2021.03.31.00.41.36 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Wed, 31 Mar 2021 00:41:38 -0700 (PDT) Subject: Re: [PATCH 12/18] KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks To: Sean Christopherson , Marc Zyngier , Huacai Chen , Aleksandar Markovic , Paul Mackerras Cc: James Morse , Julien Thierry , Suzuki K Poulose , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon References: <20210326021957.1424875-1-seanjc@google.com> <20210326021957.1424875-13-seanjc@google.com> From: Paolo Bonzini Message-ID: <26c87b3e-7a89-6cfa-1410-25486b114f32@redhat.com> Date: Wed, 31 Mar 2021 09:41:34 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.0 
MIME-Version: 1.0 In-Reply-To: <20210326021957.1424875-13-seanjc@google.com> Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Language: en-US X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210331_084224_501797_BB09BBC4 X-CRM114-Status: GOOD ( 24.04 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Transfer-Encoding: 7bit Content-Type: text/plain; charset="us-ascii"; Format="flowed" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On 26/03/21 03:19, Sean Christopherson wrote: > Move MIPS to the gfn-based MMU notifier APIs, which do the hva->gfn > lookup in common code, and whose code is nearly identical to MIPS' > lookup. > > No meaningful functional change intended, though the exact order of > operations is slightly different since the memslot lookups occur before > calling into arch code. > > Signed-off-by: Sean Christopherson I'll post a couple patches to enable more coalescing of the flushes, but this particular patch is okay. 
Paolo

> ---
>  arch/mips/include/asm/kvm_host.h |  1 +
>  arch/mips/kvm/mmu.c              | 97 ++++++-------------------------
>  2 files changed, 17 insertions(+), 81 deletions(-)
> 
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index feaa77036b67..374a3c8806e8 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -967,6 +967,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
>  						   bool write);
>  
>  #define KVM_ARCH_WANT_MMU_NOTIFIER
> +#define KVM_ARCH_WANT_NEW_MMU_NOTIFIER_APIS
>  
>  /* Emulation */
>  int kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu, u32 *out);
> 
> diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
> index 3dabeda82458..3dc885df2e32 100644
> --- a/arch/mips/kvm/mmu.c
> +++ b/arch/mips/kvm/mmu.c
> @@ -439,85 +439,36 @@ static int kvm_mips_mkold_gpa_pt(struct kvm *kvm, gfn_t start_gfn,
>  				      end_gfn << PAGE_SHIFT);
>  }
>  
> -static int handle_hva_to_gpa(struct kvm *kvm,
> -			     unsigned long start,
> -			     unsigned long end,
> -			     int (*handler)(struct kvm *kvm, gfn_t gfn,
> -					    gpa_t gfn_end,
> -					    struct kvm_memory_slot *memslot,
> -					    void *data),
> -			     void *data)
> +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	struct kvm_memslots *slots;
> -	struct kvm_memory_slot *memslot;
> -	int ret = 0;
> -
> -	slots = kvm_memslots(kvm);
> -
> -	/* we only care about the pages that the guest sees */
> -	kvm_for_each_memslot(memslot, slots) {
> -		unsigned long hva_start, hva_end;
> -		gfn_t gfn, gfn_end;
> -
> -		hva_start = max(start, memslot->userspace_addr);
> -		hva_end = min(end, memslot->userspace_addr +
> -					(memslot->npages << PAGE_SHIFT));
> -		if (hva_start >= hva_end)
> -			continue;
> -
> -		/*
> -		 * {gfn(page) | page intersects with [hva_start, hva_end)} =
> -		 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
> -		 */
> -		gfn = hva_to_gfn_memslot(hva_start, memslot);
> -		gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
> -
> -		ret |= handler(kvm, gfn, gfn_end, memslot, data);
> -	}
> -
> -	return ret;
> -}
> -
> -
> -static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -				 struct kvm_memory_slot *memslot, void *data)
> -{
> -	kvm_mips_flush_gpa_pt(kvm, gfn, gfn_end);
> -	return 1;
> -}
> -
> -int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
> -			unsigned flags)
> -{
> -	handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL);
> +	kvm_mips_flush_gpa_pt(kvm, range->start, range->end);
>  
>  	kvm_mips_callbacks->flush_shadow_all(kvm);
>  	return 0;
>  }
>  
> -static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -				struct kvm_memory_slot *memslot, void *data)
> +static bool __kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	gpa_t gpa = gfn << PAGE_SHIFT;
> -	pte_t hva_pte = *(pte_t *)data;
> +	gpa_t gpa = range->start << PAGE_SHIFT;
> +	pte_t hva_pte = range->pte;
>  	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
>  	pte_t old_pte;
>  
>  	if (!gpa_pte)
> -		return 0;
> +		return false;
>  
>  	/* Mapping may need adjusting depending on memslot flags */
>  	old_pte = *gpa_pte;
> -	if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
> +	if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
>  		hva_pte = pte_mkclean(hva_pte);
> -	else if (memslot->flags & KVM_MEM_READONLY)
> +	else if (range->slot->flags & KVM_MEM_READONLY)
>  		hva_pte = pte_wrprotect(hva_pte);
>  
>  	set_pte(gpa_pte, hva_pte);
>  
>  	/* Replacing an absent or old page doesn't need flushes */
>  	if (!pte_present(old_pte) || !pte_young(old_pte))
> -		return 0;
> +		return false;
>  
>  	/* Pages swapped, aged, moved, or cleaned require flushes */
>  	return !pte_present(hva_pte) ||
> @@ -526,27 +477,21 @@ static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
>  	       (pte_dirty(old_pte) && !pte_dirty(hva_pte));
>  }
>  
> -int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
> +bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	unsigned long end = hva + PAGE_SIZE;
> -	int ret;
> -
> -	ret = handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pte);
> -	if (ret)
> +	if (__kvm_set_spte_gfn(kvm, range))
>  		kvm_mips_callbacks->flush_shadow_all(kvm);
> -	return 0;
> +	return false;
>  }
>  
> -static int kvm_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -			       struct kvm_memory_slot *memslot, void *data)
> +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	return kvm_mips_mkold_gpa_pt(kvm, gfn, gfn_end);
> +	return kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
>  }
>  
> -static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
> -				    struct kvm_memory_slot *memslot, void *data)
> +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
> -	gpa_t gpa = gfn << PAGE_SHIFT;
> +	gpa_t gpa = range->start << PAGE_SHIFT;
>  	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
>  
>  	if (!gpa_pte)
> @@ -554,16 +499,6 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
>  	return pte_young(*gpa_pte);
>  }
>  
> -int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
> -{
> -	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
> -}
> -
> -int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
> -{
> -	return handle_hva_to_gpa(kvm, hva, hva, kvm_test_age_hva_handler, NULL);
> -}
> -
>  /**
>   * _kvm_mips_map_page_fast() - Fast path GPA fault handler.
>   * @vcpu:	VCPU pointer.
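[Aside for readers comparing the two sides of the diff: the per-slot hva->gfn clamping that the removed handle_hva_to_gpa() performed, and that the common gfn-based notifier code now does before calling into arch code, boils down to the arithmetic below. This is a stand-alone sketch with simplified, illustrative types and names, not the kernel's actual definitions.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Illustrative stand-in for struct kvm_memory_slot; the field names
 * follow the ones used in the removed handle_hva_to_gpa(). */
struct memslot {
	unsigned long userspace_addr;	/* HVA where the slot is mapped */
	unsigned long npages;		/* slot size in pages */
	gfn_t base_gfn;			/* first guest frame of the slot */
};

/* Same arithmetic as the kernel's hva_to_gfn_memslot(): offset of the
 * hva within the slot, converted to pages, added to the base gfn. */
static gfn_t hva_to_gfn_memslot(unsigned long hva, const struct memslot *slot)
{
	return slot->base_gfn + ((hva - slot->userspace_addr) >> PAGE_SHIFT);
}

/*
 * Clamp [start, end) to one memslot and convert the intersection to a
 * gfn range, as handle_hva_to_gpa() did per slot.  Returns 0 when the
 * ranges do not intersect, 1 otherwise.
 */
static int hva_range_to_gfn_range(unsigned long start, unsigned long end,
				  const struct memslot *slot,
				  gfn_t *gfn, gfn_t *gfn_end)
{
	unsigned long slot_end = slot->userspace_addr +
				 (slot->npages << PAGE_SHIFT);
	unsigned long hva_start = start > slot->userspace_addr ?
				  start : slot->userspace_addr;
	unsigned long hva_end = end < slot_end ? end : slot_end;

	if (hva_start >= hva_end)
		return 0;

	/* Round hva_end up so a partially covered page is included. */
	*gfn = hva_to_gfn_memslot(hva_start, slot);
	*gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
	return 1;
}
```

With a slot of 16 pages mapped at HVA 0x100000 and base_gfn 0x40, the HVA range covering pages 2..4 of the slot resolves to gfns [0x42, 0x45), and a range below the slot does not intersect at all.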
From: Paolo Bonzini
Date: Wed, 31 Mar 2021 07:41:34 +0000
Subject: Re: [PATCH 12/18] KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks
Message-Id: <26c87b3e-7a89-6cfa-1410-25486b114f32@redhat.com>
References: <20210326021957.1424875-1-seanjc@google.com> <20210326021957.1424875-13-seanjc@google.com>
In-Reply-To: <20210326021957.1424875-13-seanjc@google.com>
To: Sean Christopherson, Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon

On 26/03/21 03:19, Sean Christopherson wrote:
> Move MIPS to the gfn-based MMU notifier APIs, which do the hva->gfn
> lookup in common code, and whose code is nearly identical to MIPS'
> lookup.
> 
> No meaningful functional change intended, though the exact order of
> operations is slightly different since the memslot lookups occur before
> calling into arch code.
> 
> Signed-off-by: Sean Christopherson

I'll post a couple patches to enable more coalescing of the flushes, but
this particular patch is okay.
Paolo
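[For readers who want to poke at the new API shape outside the kernel: the struct and function below are a minimal stand-alone model mirroring the kvm_gfn_range fields and the control flow of the converted kvm_unmap_gfn_range() from the patch. Types and names are simplified sketches, not the kernel's actual definitions.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

struct kvm_memory_slot {
	unsigned int flags;
};

/* Simplified model of the struct the new callbacks receive: common
 * notifier code has already resolved the hva range to a per-slot gfn
 * range before arch code is called. */
struct kvm_gfn_range {
	struct kvm_memory_slot *slot;
	gfn_t start;	/* first gfn of the range */
	gfn_t end;	/* last gfn + 1 */
};

/* Record what was flushed so the callback's effect is observable. */
static gfn_t flushed_start, flushed_end;
static int shadow_flushes;

static void flush_gpa_pt(gfn_t start, gfn_t end)
{
	flushed_start = start;
	flushed_end = end;
}

static void flush_shadow_all(void)
{
	shadow_flushes++;
}

/* Mirrors the control flow of the converted kvm_unmap_gfn_range():
 * drop the GPA page tables for the range, flush the shadow state
 * itself, and report that the caller need not flush further. */
static bool unmap_gfn_range(struct kvm_gfn_range *range)
{
	flush_gpa_pt(range->start, range->end);
	flush_shadow_all();
	return false;
}
```

Invoking the model on a range [0x42, 0x45) records exactly that range as flushed and performs one shadow flush, matching how the real callback folds what used to be kvm_unmap_hva_handler() plus kvm_unmap_hva_range() into a single function.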