From: Ben Gardon
Date: Thu, 25 Mar 2021 14:47:53 -0700
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
To: Sean Christopherson
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, LKML
In-Reply-To: <20210325200119.1359384-3-seanjc@google.com>
References: <20210325200119.1359384-1-seanjc@google.com> <20210325200119.1359384-3-seanjc@google.com>

On Thu, Mar 25, 2021 at 1:01
PM Sean Christopherson wrote:
>
> Honor the "flush needed" return from kvm_tdp_mmu_zap_gfn_range(), which
> does the flush itself if and only if it yields (which it will never do in
> this particular scenario), and otherwise expects the caller to do the
> flush.  If pages are zapped from the TDP MMU but not the legacy MMU, then
> no flush will occur.
>
> Fixes: 29cf0f5007a2 ("kvm: x86/mmu: NX largepage recovery for TDP MMU")
> Cc: stable@vger.kernel.org
> Cc: Ben Gardon
> Signed-off-by: Sean Christopherson

Reviewed-by: Ben Gardon

> ---
>  arch/x86/kvm/mmu/mmu.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c6ed633594a2..5a53743b37bc 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5939,6 +5939,8 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
>         struct kvm_mmu_page *sp;
>         unsigned int ratio;
>         LIST_HEAD(invalid_list);
> +       bool flush = false;
> +       gfn_t gfn_end;
>         ulong to_zap;
>
>         rcu_idx = srcu_read_lock(&kvm->srcu);
> @@ -5960,19 +5962,20 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
>                                       lpage_disallowed_link);
>                 WARN_ON_ONCE(!sp->lpage_disallowed);
>                 if (is_tdp_mmu_page(sp)) {
> -                       kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn,
> -                               sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level));
> +                       gfn_end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
> +                       flush = kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, gfn_end);
>                 } else {
>                         kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
>                         WARN_ON_ONCE(sp->lpage_disallowed);
>                 }
>
>                 if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
> -                       kvm_mmu_commit_zap_page(kvm, &invalid_list);
> +                       kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
>                         cond_resched_rwlock_write(&kvm->mmu_lock);
> +                       flush = false;
>                 }
>         }
> -       kvm_mmu_commit_zap_page(kvm, &invalid_list);
> +       kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
>
>         write_unlock(&kvm->mmu_lock);
>         srcu_read_unlock(&kvm->srcu, rcu_idx);
> --
> 2.31.0.291.g576ba9dcdaf-goog
>
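
For anyone skimming the thread, the contract the fix honors is easy to model
in isolation: the zap helper returns true when the caller still owes a TLB
flush (it only flushes itself when it yields), and the caller must clear its
local flag once it flushes at a reschedule point, exactly as the hunk above
does with "flush = false". Below is a standalone userspace sketch of that
pattern; zap_range() and flush_remote_tlbs() are hypothetical stand-ins for
the real KVM functions, not kernel code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Zaps one range; returns true when the caller still owes a flush. */
    static bool zap_range(int i)
    {
        printf("zap range %d\n", i);
        return true;
    }

    /* Stand-in for the remote TLB flush the zap deferred to the caller. */
    static void flush_remote_tlbs(void)
    {
        printf("remote TLB flush\n");
    }

    int main(void)
    {
        bool flush = false;

        for (int i = 0; i < 4; i++) {
            flush = zap_range(i);
            if (i == 1) {               /* pretend we must reschedule here */
                if (flush)
                    flush_remote_tlbs();
                flush = false;          /* flush done; don't flush twice */
            }
        }
        if (flush)                      /* final flush if one is still owed */
            flush_remote_tlbs();
        return 0;
    }

The bug being fixed is the case where the "flush = false" bookkeeping (and
the final flush) is missing entirely, so a zap on the deferred-flush path
leaves stale TLB entries behind.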