From: Vitaly Kuznetsov
To: Stephen Zhang
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Stephen Zhang,
    pbonzini@redhat.com, seanjc@google.com, wanpengli@tencent.com,
    jmattson@google.com, joro@8bytes.org, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com
Subject: Re: [PATCH] KVM: x86/mmu: improve robustness of some functions
In-Reply-To: <1611314323-2770-1-git-send-email-stephenzhangzsd@gmail.com>
References: <1611314323-2770-1-git-send-email-stephenzhangzsd@gmail.com>
Date: Mon, 25 Jan 2021 10:54:18 +0100
Message-ID: <87a6sx4a0l.fsf@vitty.brq.redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Stephen Zhang writes:

> If the name of this function changes, you can easily
> forget to modify the code in the corresponding place.
> In fact, such errors already exist in spte_write_protect
> and spte_clear_dirty.
>

What if we do something like (completely untested):

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bfc6389edc28..5ec15e4160b1 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -12,7 +12,7 @@ extern bool dbg;
 
 #define pgprintk(x...) do { if (dbg) printk(x); } while (0)
-#define rmap_printk(x...) do { if (dbg) printk(x); } while (0)
+#define rmap_printk(fmt, args...) do { if (dbg) printk("%s: " fmt, __func__, ## args); } while (0)
 #define MMU_WARN_ON(x) WARN_ON(x)
 #else
 #define pgprintk(x...) do { } while (0)

and eliminate the need to pass '__func__,' explicitly? We can probably
do the same to pgprintk().
> Signed-off-by: Stephen Zhang
> ---
>  arch/x86/kvm/mmu/mmu.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6d16481..09462c3d 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -844,17 +844,17 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
>  	int i, count = 0;
>
>  	if (!rmap_head->val) {
> -		rmap_printk("pte_list_add: %p %llx 0->1\n", spte, *spte);
> +		rmap_printk("%s: %p %llx 0->1\n", __func__, spte, *spte);
>  		rmap_head->val = (unsigned long)spte;
>  	} else if (!(rmap_head->val & 1)) {
> -		rmap_printk("pte_list_add: %p %llx 1->many\n", spte, *spte);
> +		rmap_printk("%s: %p %llx 1->many\n", __func__, spte, *spte);
>  		desc = mmu_alloc_pte_list_desc(vcpu);
>  		desc->sptes[0] = (u64 *)rmap_head->val;
>  		desc->sptes[1] = spte;
>  		rmap_head->val = (unsigned long)desc | 1;
>  		++count;
>  	} else {
> -		rmap_printk("pte_list_add: %p %llx many->many\n", spte, *spte);
> +		rmap_printk("%s: %p %llx many->many\n", __func__, spte, *spte);
>  		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
>  		while (desc->sptes[PTE_LIST_EXT-1]) {
>  			count += PTE_LIST_EXT;
> @@ -1115,7 +1115,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
>  	    !(pt_protect && spte_can_locklessly_be_made_writable(spte)))
>  		return false;
>
> -	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
> +	rmap_printk("%s: spte %p %llx\n", __func__, sptep, *sptep);
>
>  	if (pt_protect)
>  		spte &= ~SPTE_MMU_WRITEABLE;
> @@ -1142,7 +1142,7 @@ static bool spte_clear_dirty(u64 *sptep)
>  {
>  	u64 spte = *sptep;
>
> -	rmap_printk("rmap_clear_dirty: spte %p %llx\n", sptep, *sptep);
> +	rmap_printk("%s: spte %p %llx\n", __func__, sptep, *sptep);
>
>  	MMU_WARN_ON(!spte_ad_enabled(spte));
>  	spte &= ~shadow_dirty_mask;
> @@ -1184,7 +1184,7 @@ static bool spte_set_dirty(u64 *sptep)
>  {
>  	u64 spte = *sptep;
>
> -	rmap_printk("rmap_set_dirty: spte %p %llx\n", sptep, *sptep);
> +	rmap_printk("%s: spte %p %llx\n", __func__, sptep, *sptep);
>
>  	/*
>  	 * Similar to the !kvm_x86_ops.slot_disable_log_dirty case,
> @@ -1363,8 +1363,8 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
>
>  restart:
>  	for_each_rmap_spte(rmap_head, &iter, sptep) {
> -		rmap_printk("kvm_set_pte_rmapp: spte %p %llx gfn %llx (%d)\n",
> -			    sptep, *sptep, gfn, level);
> +		rmap_printk("%s: spte %p %llx gfn %llx (%d)\n",
> +			    __func__, sptep, *sptep, gfn, level);
>
>  		need_flush = 1;

-- 
Vitaly