Subject: Re: [PATCH] kvm: don't lose the higher 32 bits of tlbs_dirty
To: Sean Christopherson, Lai Jiangshan
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Lai Jiangshan
References: <20201213044913.15137-1-jiangshanlai@gmail.com>
From: Paolo Bonzini
Date: Tue, 15 Dec 2020 11:32:23 +0100

On 14/12/20 18:20, Sean Christopherson wrote:
> On Sun, Dec 13, 2020, Lai Jiangshan wrote:
>> From: Lai Jiangshan
>>
>> In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:
>>         need_tlb_flush |= kvm->tlbs_dirty;
>> with need_tlb_flush's type being int and tlbs_dirty's type being long.
>>
>> It means that tlbs_dirty is always used as an int and the higher 32
>> bits are useless.
>
> It's probably worth noting in the changelog that it's _extremely_ unlikely
> this bug can cause problems in practice.  It would require encountering
> tlbs_dirty on a 4 billion count boundary, and KVM would need to be using
> shadow paging or be running a nested guest.
>
>> We can just change need_tlb_flush's type to long to
>> make full use of tlbs_dirty.
>
> Hrm, this does solve the problem, but I'm not a fan of continuing to use
> an integer variable as a boolean.  Rather than propagate tlbs_dirty to
> need_tlb_flush, what if this bug-fix patch checks tlbs_dirty directly, and
> then a follow-up patch converts need_tlb_flush to a bool and removes the
> unnecessary initialization (see below)?

Indeed, the compiler should be able to convert || to | when that is useful
and valid (it may or may not do so depending on the sizes of the types
involved, but that's Someone Else's Problem, and this is not really a path
where every instruction matters).

Paolo
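To make the truncation under discussion concrete, here is a minimal
userspace sketch (illustrative only, not kernel code; it assumes an LP64
target where int is 32 bits and long is 64, and the variable names merely
mirror the kernel's):

#include <stdio.h>

int main(void)
{
        long tlbs_dirty = 1L << 32;     /* exactly on a 4-billion boundary */
        int need_tlb_flush = 0;

        /* the |= narrows the long back to int, dropping the high 32 bits */
        need_tlb_flush |= tlbs_dirty;

        if (need_tlb_flush)
                printf("flush\n");
        else
                printf("no flush, tlbs_dirty = %ld was lost\n", tlbs_dirty);

        /* checking the long directly, as suggested above, cannot truncate */
        if (need_tlb_flush || tlbs_dirty)
                printf("flush (direct check)\n");

        return 0;
}

On x86_64 this prints the "no flush" line first, which is exactly the
lost-flush window the patch closes, and then flushes on the direct check.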
> E.g. the net result of both patches would be:
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 3abcb2ce5b7d..93b6986d3dfc 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -473,7 +473,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>                                         const struct mmu_notifier_range *range)
>  {
>         struct kvm *kvm = mmu_notifier_to_kvm(mn);
> -       int need_tlb_flush = 0, idx;
> +       bool need_tlb_flush;
> +       int idx;
>
>         idx = srcu_read_lock(&kvm->srcu);
>         spin_lock(&kvm->mmu_lock);
> @@ -483,11 +484,10 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>          * count is also read inside the mmu_lock critical section.
>          */
>         kvm->mmu_notifier_count++;
> -       need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
> -                                            range->flags);
> -       need_tlb_flush |= kvm->tlbs_dirty;
> +       need_tlb_flush = !!kvm_unmap_hva_range(kvm, range->start, range->end,
> +                                              range->flags);
>         /* we've to flush the tlb before the pages can be freed */
> -       if (need_tlb_flush)
> +       if (need_tlb_flush || kvm->tlbs_dirty)
>                 kvm_flush_remote_tlbs(kvm);
>
>         spin_unlock(&kvm->mmu_lock);
>
> Cc: stable@vger.kernel.org
> Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
>
>> Signed-off-by: Lai Jiangshan
>> ---
>>  virt/kvm/kvm_main.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 2541a17ff1c4..4e519a517e9f 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -470,7 +470,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>>                                         const struct mmu_notifier_range *range)
>>  {
>>         struct kvm *kvm = mmu_notifier_to_kvm(mn);
>> -       int need_tlb_flush = 0, idx;
>> +       long need_tlb_flush = 0;
>
> need_tlb_flush doesn't need to be initialized here, it's explicitly set via
> the call to kvm_unmap_hva_range().
>
>> +       int idx;
>>
>>         idx = srcu_read_lock(&kvm->srcu);
>>         spin_lock(&kvm->mmu_lock);
>> --
>> 2.19.1.6.gb485710b
>>
>
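As a footnote on the suggested follow-up patch, here is a standalone sketch
of the !! idiom used above (illustrative only; unmap_count() is a
hypothetical stand-in for kvm_unmap_hva_range(), whose return value is a
count of unmapped pages):

#include <stdbool.h>
#include <stdio.h>

/* hypothetical stand-in for kvm_unmap_hva_range() */
static long unmap_count(void)
{
        return 3;       /* pretend three pages were unmapped */
}

int main(void)
{
        long tlbs_dirty = 1L << 32;     /* the problematic boundary value */

        /* !! collapses any non-zero count to 1 before it is stored */
        bool need_tlb_flush = !!unmap_count();

        /* || tests tlbs_dirty at its full 64-bit width; nothing is lost */
        if (need_tlb_flush || tlbs_dirty)
                printf("flush\n");

        return 0;
}

The bool conversion is stylistic hygiene; the correctness fix is that
tlbs_dirty is now tested at its native width instead of being squeezed
through an int.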