From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20210326021957.1424875-1-seanjc@google.com>
In-Reply-To: <20210326021957.1424875-1-seanjc@google.com>
From: Ben Gardon
Date: Tue, 30 Mar 2021 11:32:22 -0700
Subject: Re: [PATCH 00/18] KVM: Consolidate and optimize MMU notifiers
To: Sean Christopherson
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Paolo Bonzini, James Morse, Julien Thierry, Suzuki K Poulose,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-mips@vger.kernel.org, kvm, kvm-ppc@vger.kernel.org, LKML
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Mar 25, 2021 at 7:20 PM Sean Christopherson wrote:
>
> The end goal of this series is to optimize the MMU notifiers to take
> mmu_lock if and only if the notification is relevant to KVM, i.e. the hva
> range overlaps a memslot.
> Large VMs (hundreds of vCPUs) are very
> sensitive to mmu_lock being taken for write at inopportune times, and
> such VMs also tend to be "static", e.g. backed by HugeTLB with minimal
> page shenanigans. The vast majority of notifications for these VMs will
> be spurious (for KVM), and eliding mmu_lock for spurious notifications
> avoids an otherwise unacceptable disruption to the guest.
>
> To get there without potentially degrading performance, e.g. due to
> multiple memslot lookups, especially on non-x86 where the use cases are
> largely unknown (from my perspective), first consolidate the MMU notifier
> logic by moving the hva->gfn lookups into common KVM.
>
> Applies on my TDP MMU TLB flushing bug fixes[*], which conflict horribly
> with the TDP MMU changes in this series. That code applies on kvm/queue
> (commit 4a98623d5d90, "KVM: x86/mmu: Mark the PAE roots as decrypted for
> shadow paging").
>
> Speaking of conflicts, Ben will soon be posting a series to convert a
> bunch of TDP MMU flows to take mmu_lock only for read. Presumably there
> will be an absurd number of conflicts; Ben and I will sort out the
> conflicts in whichever series loses the race.
>
> Well tested on Intel and AMD. Compile tested for arm64, MIPS, PPC,
> PPC e500, and s390. Absolutely needs to be tested for real on non-x86,
> I give it even odds that I introduced an off-by-one bug somewhere.
>
> [*] https://lkml.kernel.org/r/20210325200119.1359384-1-seanjc@google.com
>
> Patches 1-7 are x86 specific prep patches to play nice with moving
> the hva->gfn memslot lookups into common code. There ended up being waaay
> more of these than I expected/wanted, but I had a hell of a time getting
> the flushing logic right when shuffling the memslot and address space
> loops. In the end, I was more confident I got things correct by batching
> the flushes.
>
> Patch 8 moves the existing API prototypes into common code. It could
> technically be dropped since the old APIs are gone in the end, but I
> thought the switch to the new APIs would suck a bit less this way.

Patches 1-8 look good to me. Feel free to add my Reviewed-by tag to
those. I appreciate the care you took to make all those changes tiny
and reviewable.

> Patch 9 moves arm64's MMU notifier tracepoints into common code so that
> they are not lost when arm64 is converted to the new APIs, and so that all
> architectures can benefit.
>
> Patch 10 moves x86's memslot walkers into common KVM. I chose x86 purely
> because I could actually test it. All architectures use nearly identical
> code, so I don't think it actually matters in the end.

I'm still reviewing 10 and 14-18. 10 is a huge change and the diff is
pretty hard to parse.

> Patches 11-13 move arm64, MIPS, and PPC to the new APIs.
>
> Patch 14 yanks out the old APIs.
>
> Patch 15 adds the mmu_lock elision, but only for unpaired notifications.

Reading through all this code and considering the changes I'm
preparing for the TDP MMU have me wondering if it might help to have a
more general purpose MMU lock context struct which could be embedded
in the structs added in this patch. I'm thinking something like:

enum kvm_mmu_lock_mode {
	KVM_MMU_LOCK_NONE,
	KVM_MMU_LOCK_READ,
	KVM_MMU_LOCK_WRITE,
};

struct kvm_mmu_lock_context {
	enum kvm_mmu_lock_mode lock_mode;
	bool can_block;
	bool can_yield;
	bool flush;
};

This could yield some grossly long lines, but it would also have
potential to unify a bunch of ad-hoc handling. The above struct could
also fit into a single byte, so it'd be pretty easy to pass it around.

> Patch 16 adds mmu_lock elision for paired .invalidate_range_{start,end}().
> This is quite nasty and no small part of me thinks the patch should be
> burned with fire (I won't spoil it any further), but it's also the most
> problematic scenario for our particular use case. :-/
>
> Patches 17-18 are additional x86 cleanups.
>
> Sean Christopherson (18):
>   KVM: x86/mmu: Coalesce TDP MMU TLB flushes when zapping collapsible
>     SPTEs
>   KVM: x86/mmu: Move flushing for "slot" handlers to caller for legacy
>     MMU
>   KVM: x86/mmu: Coalesce TLB flushes when zapping collapsible SPTEs
>   KVM: x86/mmu: Coalesce TLB flushes across address spaces for gfn range
>     zap
>   KVM: x86/mmu: Pass address space ID to __kvm_tdp_mmu_zap_gfn_range()
>   KVM: x86/mmu: Pass address space ID to TDP MMU root walkers
>   KVM: x86/mmu: Use leaf-only loop for walking TDP SPTEs when changing
>     SPTE
>   KVM: Move prototypes for MMU notifier callbacks to generic code
>   KVM: Move arm64's MMU notifier trace events to generic code
>   KVM: Move x86's MMU notifier memslot walkers to generic code
>   KVM: arm64: Convert to the gfn-based MMU notifier callbacks
>   KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks
>   KVM: PPC: Convert to the gfn-based MMU notifier callbacks
>   KVM: Kill off the old hva-based MMU notifier callbacks
>   KVM: Take mmu_lock when handling MMU notifier iff the hva hits a
>     memslot
>   KVM: Don't take mmu_lock for range invalidation unless necessary
>   KVM: x86/mmu: Allow yielding during MMU notifier unmap/zap, if
>     possible
>   KVM: x86/mmu: Drop trace_kvm_age_page() tracepoint
>
>  arch/arm64/include/asm/kvm_host.h          |   5 -
>  arch/arm64/kvm/mmu.c                       | 118 ++----
>  arch/arm64/kvm/trace_arm.h                 |  66 ----
>  arch/mips/include/asm/kvm_host.h           |   5 -
>  arch/mips/kvm/mmu.c                        |  97 +----
>  arch/powerpc/include/asm/kvm_book3s.h      |  12 +-
>  arch/powerpc/include/asm/kvm_host.h        |   7 -
>  arch/powerpc/include/asm/kvm_ppc.h         |   9 +-
>  arch/powerpc/kvm/book3s.c                  |  18 +-
>  arch/powerpc/kvm/book3s.h                  |  10 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c        |  98 ++---
>  arch/powerpc/kvm/book3s_64_mmu_radix.c     |  25 +-
>  arch/powerpc/kvm/book3s_hv.c               |  12 +-
>  arch/powerpc/kvm/book3s_pr.c               |  56 +--
>  arch/powerpc/kvm/e500_mmu_host.c           |  29 +-
>  arch/powerpc/kvm/trace_booke.h             |  15 -
>  arch/x86/include/asm/kvm_host.h            |   6 +-
>  arch/x86/kvm/mmu/mmu.c                     | 180 ++-----
>  arch/x86/kvm/mmu/mmu_internal.h            |  10 +
>  arch/x86/kvm/mmu/tdp_mmu.c                 | 344 +++++++-----------
>  arch/x86/kvm/mmu/tdp_mmu.h                 |  31 +-
>  include/linux/kvm_host.h                   |  22 +-
>  include/trace/events/kvm.h                 |  90 +++--
>  tools/testing/selftests/kvm/lib/kvm_util.c |   4 -
>  .../selftests/kvm/lib/x86_64/processor.c   |   2 +
>  virt/kvm/kvm_main.c                        | 312 ++++++++++++----
>  26 files changed, 697 insertions(+), 886 deletions(-)
>
> --
> 2.31.0.291.g576ba9dcdaf-goog
>
from mm01.cs.columbia.edu (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id E594D4B330; Tue, 30 Mar 2021 16:21:04 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by mm01.cs.columbia.edu (Postfix) with ESMTP id 16B834B2D2 for ; Tue, 30 Mar 2021 14:32:36 -0400 (EDT) X-Virus-Scanned: at lists.cs.columbia.edu Received: from mm01.cs.columbia.edu ([127.0.0.1]) by localhost (mm01.cs.columbia.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id S72FDh9fzjSG for ; Tue, 30 Mar 2021 14:32:34 -0400 (EDT) Received: from mail-io1-f48.google.com (mail-io1-f48.google.com [209.85.166.48]) by mm01.cs.columbia.edu (Postfix) with ESMTPS id A33364B28F for ; Tue, 30 Mar 2021 14:32:34 -0400 (EDT) Received: by mail-io1-f48.google.com with SMTP id k8so17434448iop.12 for ; Tue, 30 Mar 2021 11:32:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=Ry6+wSaahnTAtySVVADqpebYg77xvyK52stUUFNcEOE=; b=LdVURA+kcNmnzr/OmxvAldswhg5Hv5K+yZBcwCjQoMugQeFaMm3UMyYuNoBFuURK7x wW9jUPDk/lN2NWOWWx0C+HM8fHbdwPVXOoGeR3GF2krU9rJukM386cGHQW8I5mKRlRhl FK944SqBcaNxe9bg0srb4MmcOByctpvvrYF/uEDGOooXDy9x7lW0dq8OJxF2/6wG6mh8 QUtdaXz6D2omEiXIzbRACVWnf7gARf2rCWDNFLpUCSjV3bjZ6TrZFZVX7zDhTaQ1xch3 f/yGaKwbqEli9Uvqsxq31Y2h/cqT4iY2RLC+2faOt9pr4StzcjyvoFLtPEvayIi0J9vr 40BA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=Ry6+wSaahnTAtySVVADqpebYg77xvyK52stUUFNcEOE=; b=VwBC8wU+7FpEivrNJej62Ag6qS4VEEMIU7F+fO3wrr/FL5NvVM5XN5H69mDb9yYcXw jfK0nN/BIeFvfmwG0iY7J03C+M1pYdVNfWR4lxnBuMsN59HvyTa1bruZvYjweu14w6Zq NaPIxJnXRBenONgJkZckNWEW4pKPM8gTvFz9Eb8S7VZ8Q33r6drEksEfoOL7YBCANACj gfXnZvlUdiW2WRpoh0lYJ8AXymTrXGgTgS+emIPJ5SWgguaOVfyuEKffIlAvbzwZ5DiA 1domjJHpZq1y3cnrxp8rU/8YFhgSG62wUcTkM9bjCWJ1r5P8p79yjzq5GyecC2sts7NV wnfw== X-Gm-Message-State: 
AOAM531ls8JdUZ/hgqWMVzCKjd3bPVOOEndv7FuI/TawRp/xgnzYl/eA b914rTXtVOKGM0LUxmvnrwiXKy1Up3VSjjgMtvzvHQ== X-Google-Smtp-Source: ABdhPJydba/r9kFe0x64VHG21TaCqRbVPKRpUh3kGFyYZT4VeQOXcqv0jV6nWOmc06Khu/5vhjH/mZsKNTnmLddBUEQ= X-Received: by 2002:a5d:9959:: with SMTP id v25mr26278546ios.189.1617129153795; Tue, 30 Mar 2021 11:32:33 -0700 (PDT) MIME-Version: 1.0 References: <20210326021957.1424875-1-seanjc@google.com> In-Reply-To: <20210326021957.1424875-1-seanjc@google.com> From: Ben Gardon Date: Tue, 30 Mar 2021 11:32:22 -0700 Message-ID: Subject: Re: [PATCH 00/18] KVM: Consolidate and optimize MMU notifiers To: Sean Christopherson X-Mailman-Approved-At: Tue, 30 Mar 2021 16:21:03 -0400 Cc: Wanpeng Li , kvm , Marc Zyngier , Joerg Roedel , Huacai Chen , linux-mips@vger.kernel.org, kvm-ppc@vger.kernel.org, LKML , Paul Mackerras , Aleksandar Markovic , linux-arm-kernel@lists.infradead.org, Paolo Bonzini , Vitaly Kuznetsov , kvmarm@lists.cs.columbia.edu, Jim Mattson X-BeenThere: kvmarm@lists.cs.columbia.edu X-Mailman-Version: 2.1.14 Precedence: list List-Id: Where KVM/ARM decisions are made List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: kvmarm-bounces@lists.cs.columbia.edu Sender: kvmarm-bounces@lists.cs.columbia.edu On Thu, Mar 25, 2021 at 7:20 PM Sean Christopherson wrote: > > The end goal of this series is to optimize the MMU notifiers to take > mmu_lock if and only if the notification is relevant to KVM, i.e. the hva > range overlaps a memslot. Large VMs (hundreds of vCPUs) are very > sensitive to mmu_lock being taken for write at inopportune times, and > such VMs also tend to be "static", e.g. backed by HugeTLB with minimal > page shenanigans. The vast majority of notifications for these VMs will > be spurious (for KVM), and eliding mmu_lock for spurious notifications > avoids an otherwise unacceptable disruption to the guest. 
> > To get there without potentially degrading performance, e.g. due to > multiple memslot lookups, especially on non-x86 where the use cases are > largely unknown (from my perspective), first consolidate the MMU notifier > logic by moving the hva->gfn lookups into common KVM. > > Applies on my TDP MMU TLB flushing bug fixes[*], which conflict horribly > with the TDP MMU changes in this series. That code applies on kvm/queue > (commit 4a98623d5d90, "KVM: x86/mmu: Mark the PAE roots as decrypted for > shadow paging"). > > Speaking of conflicts, Ben will soon be posting a series to convert a > bunch of TDP MMU flows to take mmu_lock only for read. Presumably there > will be an absurd number of conflicts; Ben and I will sort out the > conflicts in whichever series loses the race. > > Well tested on Intel and AMD. Compile tested for arm64, MIPS, PPC, > PPC e500, and s390. Absolutely needs to be tested for real on non-x86, > I give it even odds that I introduced an off-by-one bug somewhere. > > [*] https://lkml.kernel.org/r/20210325200119.1359384-1-seanjc@google.com > > > Patches 1-7 are x86 specific prep patches to play nice with moving > the hva->gfn memslot lookups into common code. There ended up being waaay > more of these than I expected/wanted, but I had a hell of a time getting > the flushing logic right when shuffling the memslot and address space > loops. In the end, I was more confident I got things correct by batching > the flushes. > > Patch 8 moves the existing API prototypes into common code. It could > technically be dropped since the old APIs are gone in the end, but I > thought the switch to the new APIs would suck a bit less this way. Patches 1-8 look good to me. Feel free to add my Reviewed-by tag to those. I appreciate the care you took to make all those changes tiny and reviewable. 
> > Patch 9 moves arm64's MMU notifier tracepoints into common code so that > they are not lost when arm64 is converted to the new APIs, and so that all > architectures can benefit. > > Patch 10 moves x86's memslot walkers into common KVM. I chose x86 purely > because I could actually test it. All architectures use nearly identical > code, so I don't think it actually matters in the end. I'm still reviewing 10 and 14-18. 10 is a huge change and the diff is pretty hard to parse. > > Patches 11-13 move arm64, MIPS, and PPC to the new APIs. > > Patch 14 yanks out the old APIs. > > Patch 15 adds the mmu_lock elision, but only for unpaired notifications. Reading through all this code and considering the changes I'm preparing for the TDP MMU have me wondering if it might help to have a more general purpose MMU lock context struct which could be embedded in the structs added in this patch. I'm thinking something like: enum kvm_mmu_lock_mode { KVM_MMU_LOCK_NONE, KVM_MMU_LOCK_READ, KVM_MMU_LOCK_WRITE, }; struct kvm_mmu_lock_context { enum kvm_mmu_lock_mode lock_mode; bool can_block; bool can_yield; bool flush; }; This could yield some grossly long lines, but it would also have potential to unify a bunch of ad-hoc handling. The above struct could also fit into a single byte, so it'd be pretty easy to pass it around. > > Patch 16 adds mmu_lock elision for paired .invalidate_range_{start,end}(). > This is quite nasty and no small part of me thinks the patch should be > burned with fire (I won't spoil it any further), but it's also the most > problematic scenario for our particular use case. :-/ > > Patches 17-18 are additional x86 cleanups. 
> > Sean Christopherson (18): > KVM: x86/mmu: Coalesce TDP MMU TLB flushes when zapping collapsible > SPTEs > KVM: x86/mmu: Move flushing for "slot" handlers to caller for legacy > MMU > KVM: x86/mmu: Coalesce TLB flushes when zapping collapsible SPTEs > KVM: x86/mmu: Coalesce TLB flushes across address spaces for gfn range > zap > KVM: x86/mmu: Pass address space ID to __kvm_tdp_mmu_zap_gfn_range() > KVM: x86/mmu: Pass address space ID to TDP MMU root walkers > KVM: x86/mmu: Use leaf-only loop for walking TDP SPTEs when changing > SPTE > KVM: Move prototypes for MMU notifier callbacks to generic code > KVM: Move arm64's MMU notifier trace events to generic code > KVM: Move x86's MMU notifier memslot walkers to generic code > KVM: arm64: Convert to the gfn-based MMU notifier callbacks > KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks > KVM: PPC: Convert to the gfn-based MMU notifier callbacks > KVM: Kill off the old hva-based MMU notifier callbacks > KVM: Take mmu_lock when handling MMU notifier iff the hva hits a > memslot > KVM: Don't take mmu_lock for range invalidation unless necessary > KVM: x86/mmu: Allow yielding during MMU notifier unmap/zap, if > possible > KVM: x86/mmu: Drop trace_kvm_age_page() tracepoint > > arch/arm64/include/asm/kvm_host.h | 5 - > arch/arm64/kvm/mmu.c | 118 ++---- > arch/arm64/kvm/trace_arm.h | 66 ---- > arch/mips/include/asm/kvm_host.h | 5 - > arch/mips/kvm/mmu.c | 97 +---- > arch/powerpc/include/asm/kvm_book3s.h | 12 +- > arch/powerpc/include/asm/kvm_host.h | 7 - > arch/powerpc/include/asm/kvm_ppc.h | 9 +- > arch/powerpc/kvm/book3s.c | 18 +- > arch/powerpc/kvm/book3s.h | 10 +- > arch/powerpc/kvm/book3s_64_mmu_hv.c | 98 ++--- > arch/powerpc/kvm/book3s_64_mmu_radix.c | 25 +- > arch/powerpc/kvm/book3s_hv.c | 12 +- > arch/powerpc/kvm/book3s_pr.c | 56 +-- > arch/powerpc/kvm/e500_mmu_host.c | 29 +- > arch/powerpc/kvm/trace_booke.h | 15 - > arch/x86/include/asm/kvm_host.h | 6 +- > arch/x86/kvm/mmu/mmu.c | 180 ++++----- > 
arch/x86/kvm/mmu/mmu_internal.h | 10 + > arch/x86/kvm/mmu/tdp_mmu.c | 344 +++++++----------- > arch/x86/kvm/mmu/tdp_mmu.h | 31 +- > include/linux/kvm_host.h | 22 +- > include/trace/events/kvm.h | 90 +++-- > tools/testing/selftests/kvm/lib/kvm_util.c | 4 - > .../selftests/kvm/lib/x86_64/processor.c | 2 + > virt/kvm/kvm_main.c | 312 ++++++++++++---- > 26 files changed, 697 insertions(+), 886 deletions(-) > > -- > 2.31.0.291.g576ba9dcdaf-goog > _______________________________________________ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_ADSP_CUSTOM_MED,DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06BF3C433C1 for ; Tue, 30 Mar 2021 18:34:17 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 64E4D61924 for ; Tue, 30 Mar 2021 18:34:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 64E4D61924 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding 
:Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:Cc:To:Subject:Message-ID:Date:From:In-Reply-To: References:MIME-Version:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=0J/gM5jDFOtkjFyjskntYsj9FiMWVPRGXN/0ampfbvc=; b=bomWweGZ7WsdlpTAZXNN/T0Yx OrPMfZ4gHM7gNzGxUMkyzLNFyn1FDhEU4yDgdz1XZtU4YJxt4rgWr7YqQqlEs/GTjjy7rNSH2gkQo PncZezCa1H9AFEgeHGjM4Mv5btTf53QgidtgegDUcJfLRn/FrrxfVRrvJqGEppgpM5gxj8Co044eo 0vEuZK3txlrNvYlFKK3j2G1Zg9axhTmgjoEoI/T9Y/mOE5YBzdggvx7LYlsHAo8Ua6mf/rs3qz/gY HEoUUnYPZPYMByk88Fvad2trQh13MQN3YQzH2RX+CSKLSVVwOyqT2REngwdAEKq7BK4b+KCqPBvSz +71loWKYQ==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lRJAQ-004czi-1F; Tue, 30 Mar 2021 18:32:42 +0000 Received: from mail-io1-xd33.google.com ([2607:f8b0:4864:20::d33]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lRJAK-004cyF-KZ for linux-arm-kernel@lists.infradead.org; Tue, 30 Mar 2021 18:32:39 +0000 Received: by mail-io1-xd33.google.com with SMTP id k8so17434446iop.12 for ; Tue, 30 Mar 2021 11:32:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=Ry6+wSaahnTAtySVVADqpebYg77xvyK52stUUFNcEOE=; b=LdVURA+kcNmnzr/OmxvAldswhg5Hv5K+yZBcwCjQoMugQeFaMm3UMyYuNoBFuURK7x wW9jUPDk/lN2NWOWWx0C+HM8fHbdwPVXOoGeR3GF2krU9rJukM386cGHQW8I5mKRlRhl FK944SqBcaNxe9bg0srb4MmcOByctpvvrYF/uEDGOooXDy9x7lW0dq8OJxF2/6wG6mh8 QUtdaXz6D2omEiXIzbRACVWnf7gARf2rCWDNFLpUCSjV3bjZ6TrZFZVX7zDhTaQ1xch3 f/yGaKwbqEli9Uvqsxq31Y2h/cqT4iY2RLC+2faOt9pr4StzcjyvoFLtPEvayIi0J9vr 40BA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; 
bh=Ry6+wSaahnTAtySVVADqpebYg77xvyK52stUUFNcEOE=; b=t+tUOtDdvheSYuquulDH7wGsejUpvLTQ4DpfbZv1Rmk342QhXJyiW7JRHz8ENzxELG Idhxp5JJakzLz0bOenqjiC3v+JAvOy6e2La0uUSbZQKIUtsKLqkgDIWpxG/N4jG8ufga uxtFetWm8YgAORlXSQUV+CxoRi/SrXh+kcjwLaC3CFC84Zyb//ZcN4XgsDgunIQSA1+y 0YRcLSuTRdETeGLViwwnrug4VBr8GPVUQoTOfFPyqrMTB6Z1TNlx6GkGRyC8rfu3qKhn Muk/a2pSdsljszJqwmB+8nd8UR7xqI+YJ5N6RWQUwkXN1ATSMlfsvEG9VaG7mJ/HI+qY Hk6g== X-Gm-Message-State: AOAM532u2KLeC3K6A7Y0Kb0HReWY5DVm5fI/Fme7zyQG6y9S3BFjs1bj eepDmToi4kAtBw6eilyB6fYh/bd3hBP7+hMj+zfTKQ== X-Google-Smtp-Source: ABdhPJydba/r9kFe0x64VHG21TaCqRbVPKRpUh3kGFyYZT4VeQOXcqv0jV6nWOmc06Khu/5vhjH/mZsKNTnmLddBUEQ= X-Received: by 2002:a5d:9959:: with SMTP id v25mr26278546ios.189.1617129153795; Tue, 30 Mar 2021 11:32:33 -0700 (PDT) MIME-Version: 1.0 References: <20210326021957.1424875-1-seanjc@google.com> In-Reply-To: <20210326021957.1424875-1-seanjc@google.com> From: Ben Gardon Date: Tue, 30 Mar 2021 11:32:22 -0700 Message-ID: Subject: Re: [PATCH 00/18] KVM: Consolidate and optimize MMU notifiers To: Sean Christopherson Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Paul Mackerras , Paolo Bonzini , James Morse , Julien Thierry , Suzuki K Poulose , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm , kvm-ppc@vger.kernel.org, LKML X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210330_193237_060093_1F560D77 X-CRM114-Status: GOOD ( 37.71 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On Thu, Mar 25, 2021 at 7:20 PM Sean Christopherson wrote: > > The end goal of this 
series is to optimize the MMU notifiers to take > mmu_lock if and only if the notification is relevant to KVM, i.e. the hva > range overlaps a memslot. Large VMs (hundreds of vCPUs) are very > sensitive to mmu_lock being taken for write at inopportune times, and > such VMs also tend to be "static", e.g. backed by HugeTLB with minimal > page shenanigans. The vast majority of notifications for these VMs will > be spurious (for KVM), and eliding mmu_lock for spurious notifications > avoids an otherwise unacceptable disruption to the guest. > > To get there without potentially degrading performance, e.g. due to > multiple memslot lookups, especially on non-x86 where the use cases are > largely unknown (from my perspective), first consolidate the MMU notifier > logic by moving the hva->gfn lookups into common KVM. > > Applies on my TDP MMU TLB flushing bug fixes[*], which conflict horribly > with the TDP MMU changes in this series. That code applies on kvm/queue > (commit 4a98623d5d90, "KVM: x86/mmu: Mark the PAE roots as decrypted for > shadow paging"). > > Speaking of conflicts, Ben will soon be posting a series to convert a > bunch of TDP MMU flows to take mmu_lock only for read. Presumably there > will be an absurd number of conflicts; Ben and I will sort out the > conflicts in whichever series loses the race. > > Well tested on Intel and AMD. Compile tested for arm64, MIPS, PPC, > PPC e500, and s390. Absolutely needs to be tested for real on non-x86, > I give it even odds that I introduced an off-by-one bug somewhere. > > [*] https://lkml.kernel.org/r/20210325200119.1359384-1-seanjc@google.com > > > Patches 1-7 are x86 specific prep patches to play nice with moving > the hva->gfn memslot lookups into common code. There ended up being waaay > more of these than I expected/wanted, but I had a hell of a time getting > the flushing logic right when shuffling the memslot and address space > loops. 
In the end, I was more confident I got things correct by batching > the flushes. > > Patch 8 moves the existing API prototypes into common code. It could > technically be dropped since the old APIs are gone in the end, but I > thought the switch to the new APIs would suck a bit less this way. Patches 1-8 look good to me. Feel free to add my Reviewed-by tag to those. I appreciate the care you took to make all those changes tiny and reviewable. > > Patch 9 moves arm64's MMU notifier tracepoints into common code so that > they are not lost when arm64 is converted to the new APIs, and so that all > architectures can benefit. > > Patch 10 moves x86's memslot walkers into common KVM. I chose x86 purely > because I could actually test it. All architectures use nearly identical > code, so I don't think it actually matters in the end. I'm still reviewing 10 and 14-18. 10 is a huge change and the diff is pretty hard to parse. > > Patches 11-13 move arm64, MIPS, and PPC to the new APIs. > > Patch 14 yanks out the old APIs. > > Patch 15 adds the mmu_lock elision, but only for unpaired notifications. Reading through all this code and considering the changes I'm preparing for the TDP MMU have me wondering if it might help to have a more general purpose MMU lock context struct which could be embedded in the structs added in this patch. I'm thinking something like: enum kvm_mmu_lock_mode { KVM_MMU_LOCK_NONE, KVM_MMU_LOCK_READ, KVM_MMU_LOCK_WRITE, }; struct kvm_mmu_lock_context { enum kvm_mmu_lock_mode lock_mode; bool can_block; bool can_yield; bool flush; }; This could yield some grossly long lines, but it would also have potential to unify a bunch of ad-hoc handling. The above struct could also fit into a single byte, so it'd be pretty easy to pass it around. > > Patch 16 adds mmu_lock elision for paired .invalidate_range_{start,end}(). 
> Large VMs (hundreds of vCPUs) are very sensitive to mmu_lock being
> taken for write at inopportune times, and such VMs also tend to be
> "static", e.g. backed by HugeTLB with minimal page shenanigans.  The
> vast majority of notifications for these VMs will be spurious (for
> KVM), and eliding mmu_lock for spurious notifications avoids an
> otherwise unacceptable disruption to the guest.
>
> To get there without potentially degrading performance, e.g. due to
> multiple memslot lookups, especially on non-x86 where the use cases
> are largely unknown (from my perspective), first consolidate the MMU
> notifier logic by moving the hva->gfn lookups into common KVM.
>
> Applies on my TDP MMU TLB flushing bug fixes[*], which conflict
> horribly with the TDP MMU changes in this series.  That code applies
> on kvm/queue (commit 4a98623d5d90, "KVM: x86/mmu: Mark the PAE roots
> as decrypted for shadow paging").
>
> Speaking of conflicts, Ben will soon be posting a series to convert a
> bunch of TDP MMU flows to take mmu_lock only for read.  Presumably
> there will be an absurd number of conflicts; Ben and I will sort out
> the conflicts in whichever series loses the race.
>
> Well tested on Intel and AMD.  Compile tested for arm64, MIPS, PPC,
> PPC e500, and s390.  Absolutely needs to be tested for real on
> non-x86, I give it even odds that I introduced an off-by-one bug
> somewhere.
>
> [*] https://lkml.kernel.org/r/20210325200119.1359384-1-seanjc@google.com
>
>
> Patches 1-7 are x86 specific prep patches to play nice with moving
> the hva->gfn memslot lookups into common code.  There ended up being
> waaay more of these than I expected/wanted, but I had a hell of a time
> getting the flushing logic right when shuffling the memslot and
> address space loops.  In the end, I was more confident I got things
> correct by batching the flushes.
>
> Patch 8 moves the existing API prototypes into common code.
> It could technically be dropped since the old APIs are gone in the
> end, but I thought the switch to the new APIs would suck a bit less
> this way.

Patches 1-8 look good to me. Feel free to add my Reviewed-by tag to
those. I appreciate the care you took to make all those changes tiny
and reviewable.

>
> Patch 9 moves arm64's MMU notifier tracepoints into common code so
> that they are not lost when arm64 is converted to the new APIs, and
> so that all architectures can benefit.
>
> Patch 10 moves x86's memslot walkers into common KVM.  I chose x86
> purely because I could actually test it.  All architectures use
> nearly identical code, so I don't think it actually matters in the
> end.

I'm still reviewing 10 and 14-18. 10 is a huge change and the diff is
pretty hard to parse.

>
> Patches 11-13 move arm64, MIPS, and PPC to the new APIs.
>
> Patch 14 yanks out the old APIs.
>
> Patch 15 adds the mmu_lock elision, but only for unpaired
> notifications.

Reading through all this code and considering the changes I'm preparing
for the TDP MMU have me wondering if it might help to have a more
general purpose MMU lock context struct which could be embedded in the
structs added in this patch. I'm thinking something like:

enum kvm_mmu_lock_mode {
	KVM_MMU_LOCK_NONE,
	KVM_MMU_LOCK_READ,
	KVM_MMU_LOCK_WRITE,
};

struct kvm_mmu_lock_context {
	enum kvm_mmu_lock_mode lock_mode;
	bool can_block;
	bool can_yield;
	bool flush;
};

This could yield some grossly long lines, but it would also have
potential to unify a bunch of ad-hoc handling. The above struct could
also fit into a single byte, so it'd be pretty easy to pass it around.

> Patch 16 adds mmu_lock elision for paired .invalidate_range_{start,end}().
> This is quite nasty and no small part of me thinks the patch should be
> burned with fire (I won't spoil it any further), but it's also the most
> problematic scenario for our particular use case. :-/
>
> Patches 17-18 are additional x86 cleanups.
>
> Sean Christopherson (18):
>   KVM: x86/mmu: Coalesce TDP MMU TLB flushes when zapping collapsible SPTEs
>   KVM: x86/mmu: Move flushing for "slot" handlers to caller for legacy MMU
>   KVM: x86/mmu: Coalesce TLB flushes when zapping collapsible SPTEs
>   KVM: x86/mmu: Coalesce TLB flushes across address spaces for gfn range zap
>   KVM: x86/mmu: Pass address space ID to __kvm_tdp_mmu_zap_gfn_range()
>   KVM: x86/mmu: Pass address space ID to TDP MMU root walkers
>   KVM: x86/mmu: Use leaf-only loop for walking TDP SPTEs when changing SPTE
>   KVM: Move prototypes for MMU notifier callbacks to generic code
>   KVM: Move arm64's MMU notifier trace events to generic code
>   KVM: Move x86's MMU notifier memslot walkers to generic code
>   KVM: arm64: Convert to the gfn-based MMU notifier callbacks
>   KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks
>   KVM: PPC: Convert to the gfn-based MMU notifier callbacks
>   KVM: Kill off the old hva-based MMU notifier callbacks
>   KVM: Take mmu_lock when handling MMU notifier iff the hva hits a memslot
>   KVM: Don't take mmu_lock for range invalidation unless necessary
>   KVM: x86/mmu: Allow yielding during MMU notifier unmap/zap, if possible
>   KVM: x86/mmu: Drop trace_kvm_age_page() tracepoint
>
>  arch/arm64/include/asm/kvm_host.h          |   5 -
>  arch/arm64/kvm/mmu.c                       | 118 ++----
>  arch/arm64/kvm/trace_arm.h                 |  66 ----
>  arch/mips/include/asm/kvm_host.h           |   5 -
>  arch/mips/kvm/mmu.c                        |  97 +----
>  arch/powerpc/include/asm/kvm_book3s.h      |  12 +-
>  arch/powerpc/include/asm/kvm_host.h        |   7 -
>  arch/powerpc/include/asm/kvm_ppc.h         |   9 +-
>  arch/powerpc/kvm/book3s.c                  |  18 +-
>  arch/powerpc/kvm/book3s.h                  |  10 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c        |  98 ++---
>  arch/powerpc/kvm/book3s_64_mmu_radix.c     |  25 +-
>  arch/powerpc/kvm/book3s_hv.c               |  12 +-
>  arch/powerpc/kvm/book3s_pr.c               |  56 +--
>  arch/powerpc/kvm/e500_mmu_host.c           |  29 +-
>  arch/powerpc/kvm/trace_booke.h             |  15 -
>  arch/x86/include/asm/kvm_host.h            |   6 +-
>  arch/x86/kvm/mmu/mmu.c                     | 180 ++++-----
>  arch/x86/kvm/mmu/mmu_internal.h            |  10 +
>  arch/x86/kvm/mmu/tdp_mmu.c                 | 344 +++++++-----------
>  arch/x86/kvm/mmu/tdp_mmu.h                 |  31 +-
>  include/linux/kvm_host.h                   |  22 +-
>  include/trace/events/kvm.h                 |  90 +++--
>  tools/testing/selftests/kvm/lib/kvm_util.c |   4 -
>  .../selftests/kvm/lib/x86_64/processor.c   |   2 +
>  virt/kvm/kvm_main.c                        | 312 ++++++++++++----
>  26 files changed, 697 insertions(+), 886 deletions(-)
>
> --
> 2.31.0.291.g576ba9dcdaf-goog
>