Date: Tue, 12 Jan 2021 10:10:33 -0800
Message-Id: <20210112181041.356734-17-bgardon@google.com>
In-Reply-To: <20210112181041.356734-1-bgardon@google.com>
References: <20210112181041.356734-1-bgardon@google.com>
Subject: [PATCH 16/24] kvm: mmu: Wrap mmu_lock assertions
From: Ben Gardon <bgardon@google.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
	Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
	Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Wrap assertions and warnings checking the MMU lock state in a function
which uses lockdep_assert_held. While the existing checks use a few
different functions to check the lock state, they are all better off
using lockdep_assert_held. This will support a refactoring to move the
mmu_lock to struct kvm_arch so that it can be replaced with an rwlock for
x86.

Reviewed-by: Peter Feiner
Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/arm64/kvm/mmu.c                     | 2 +-
 arch/powerpc/include/asm/kvm_book3s_64.h | 7 +++----
 arch/powerpc/kvm/book3s_hv_nested.c      | 3 +--
 arch/x86/kvm/mmu/mmu_internal.h          | 4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c               | 8 ++++----
 include/linux/kvm_host.h                 | 1 +
 virt/kvm/kvm_main.c                      | 5 +++++
 7 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 57ef1ec23b56..8b54eb58bf47 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -130,7 +130,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 	struct kvm *kvm = mmu->kvm;
 	phys_addr_t end = start + size;
 
-	assert_spin_locked(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 	WARN_ON(size & ~PAGE_MASK);
 	WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
 				   may_block));
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 9bb9bb370b53..db2e437cd97c 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -650,8 +650,8 @@ static inline pte_t *find_kvm_secondary_pte(struct kvm *kvm, unsigned long ea,
 {
 	pte_t *pte;
 
-	VM_WARN(!spin_is_locked(&kvm->mmu_lock),
-		"%s called with kvm mmu_lock not held \n", __func__);
+	kvm_mmu_lock_assert_held(kvm);
+
 	pte = __find_linux_pte(kvm->arch.pgtable, ea, NULL, hshift);
 
 	return pte;
@@ -662,8 +662,7 @@ static inline pte_t *find_kvm_host_pte(struct kvm *kvm, unsigned long mmu_seq,
 {
 	pte_t *pte;
 
-	VM_WARN(!spin_is_locked(&kvm->mmu_lock),
-		"%s called with kvm mmu_lock not held \n", __func__);
+	kvm_mmu_lock_assert_held(kvm);
 
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		return NULL;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 18890dca9476..6d5987d1eee7 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -767,8 +767,7 @@ pte_t *find_kvm_nested_guest_pte(struct kvm *kvm, unsigned long lpid,
 	if (!gp)
 		return NULL;
 
-	VM_WARN(!spin_is_locked(&kvm->mmu_lock),
-		"%s called with kvm mmu_lock not held \n", __func__);
+	kvm_mmu_lock_assert_held(kvm);
 	pte = __find_linux_pte(gp->shadow_pgtable, ea, NULL, hshift);
 
 	return pte;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 7f599cc64178..cc8268cf28d2 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -101,14 +101,14 @@ void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 static inline void kvm_mmu_get_root(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	BUG_ON(!sp->root_count);
-	lockdep_assert_held(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 
 	++sp->root_count;
 }
 
 static inline bool kvm_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	lockdep_assert_held(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 	--sp->root_count;
 
 	return !sp->root_count;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fb911ca428b2..1d7c01300495 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -117,7 +117,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
 {
 	gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT);
 
-	lockdep_assert_held(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 
 	WARN_ON(root->root_count);
 	WARN_ON(!root->tdp_mmu_page);
@@ -425,7 +425,7 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 	struct kvm_mmu_page *root = sptep_to_sp(root_pt);
 	int as_id = kvm_mmu_page_as_id(root);
 
-	lockdep_assert_held(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 
 	WRITE_ONCE(*iter->sptep, new_spte);
 
@@ -1139,7 +1139,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	struct kvm_mmu_page *root;
 	int root_as_id;
 
-	lockdep_assert_held(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 	for_each_tdp_mmu_root(kvm, root) {
 		root_as_id = kvm_mmu_page_as_id(root);
 		if (root_as_id != slot->as_id)
@@ -1324,7 +1324,7 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 	int root_as_id;
 	bool spte_set = false;
 
-	lockdep_assert_held(&kvm->mmu_lock);
+	kvm_mmu_lock_assert_held(kvm);
 	for_each_tdp_mmu_root(kvm, root) {
 		root_as_id = kvm_mmu_page_as_id(root);
 		if (root_as_id != slot->as_id)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6e2773fc406c..022e3522788f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1499,5 +1499,6 @@ void kvm_mmu_lock(struct kvm *kvm);
 void kvm_mmu_unlock(struct kvm *kvm);
 int kvm_mmu_lock_needbreak(struct kvm *kvm);
 int kvm_mmu_lock_cond_resched(struct kvm *kvm);
+void kvm_mmu_lock_assert_held(struct kvm *kvm);
 
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b4c49a7e0556..c504f876176b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -452,6 +452,11 @@ int kvm_mmu_lock_cond_resched(struct kvm *kvm)
 	return cond_resched_lock(&kvm->mmu_lock);
 }
 
+void kvm_mmu_lock_assert_held(struct kvm *kvm)
+{
+	lockdep_assert_held(&kvm->mmu_lock);
+}
+
 #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
 static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
 {
-- 
2.30.0.284.gd98b1dd5eaa7-goog
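
For illustration, a minimal sketch of how the wrapper could later absorb
the rwlock conversion the commit message anticipates. This is not part of
the patch: the KVM_HAVE_MMU_RWLOCK symbol and the kvm->arch.mmu_lock field
are hypothetical names. lockdep_assert_held() works on rwlock_t as well as
spinlock_t, since both embed a struct lockdep_map, so none of the callers
converted above would need to change again:

	/*
	 * Hypothetical sketch only; KVM_HAVE_MMU_RWLOCK and
	 * kvm->arch.mmu_lock are illustrative, not from this series.
	 */
	void kvm_mmu_lock_assert_held(struct kvm *kvm)
	{
	#ifdef KVM_HAVE_MMU_RWLOCK
		/* lockdep_assert_held() accepts any lock with a dep_map. */
		lockdep_assert_held(&kvm->arch.mmu_lock);
	#else
		lockdep_assert_held(&kvm->mmu_lock);
	#endif
	}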