From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 7 Feb 2020 07:24:09 +0100
From: Mauro Carvalho Chehab
To: Paolo Bonzini
Cc: Cornelia Huck, Linux Media Mailing List, Jonathan Corbet,
 kvm@vger.kernel.org, linux-doc@vger.kernel.org
Subject: Re: [PATCH v2 21/27] docs: kvm: Convert locking.txt to ReST format
Message-ID: <20200207072409.2cb038da@infradead.org>
In-Reply-To: <20200206234736.196ef417@kernel.org>
References: <1464d69fe780940cec6ecec4ac2505b9701a1e01.1581000481.git.mchehab+huawei@kernel.org>
 <20200206171132.4f51f17a.cohuck@redhat.com>
 <20200206234736.196ef417@kernel.org>
X-Mailing-List: linux-media@vger.kernel.org

>
> >
> > Would be nicer but this is acceptable too I think. Especially, the
> > monospaced font allows breaking the table and keeping the parts aligned.

I couldn't resist trying to use a table ;-)

The following patch does that. IMO, it looks nice on both text and html
outputs.
Cheers,
Mauro

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 428cb3412ecc..c02291beac3f 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -59,30 +59,39 @@ The mapping from gfn to pfn may be changed since we can only ensure the pfn
 is not changed during cmpxchg. This is a ABA problem, for example, below case
 will happen:
 
-At the beginning::
-
-        gpte = gfn1
-        gfn1 is mapped to pfn1 on host
-        spte is the shadow page table entry corresponding with gpte and
-        spte = pfn1
-
-   VCPU 0                           VCPU0
-
-on fast page fault path::
-
-   old_spte = *spte;
-   pfn1 is swapped out:
-      spte = 0;
-
-   pfn1 is re-alloced for gfn2.
-
-   gpte is changed to point to
-   gfn2 by the guest:
-      spte = pfn1;
-
-   if (cmpxchg(spte, old_spte, old_spte+W)
-        mark_page_dirty(vcpu->kvm, gfn1)
-             OOPS!!!
++------------------------------------------------------------------------+
+| At the beginning::                                                      |
+|                                                                        |
+|        gpte = gfn1                                                      |
+|        gfn1 is mapped to pfn1 on host                                  |
+|        spte is the shadow page table entry corresponding with gpte and |
+|        spte = pfn1                                                      |
++------------------------------------------------------------------------+
+| On fast page fault path:                                               |
++------------------------------------+-----------------------------------+
+| CPU 0:                             | CPU 1:                            |
++------------------------------------+-----------------------------------+
+| ::                                 |                                   |
+|                                    |                                   |
+|  old_spte = *spte;                 |                                   |
++------------------------------------+-----------------------------------+
+|                                    | pfn1 is swapped out::             |
+|                                    |                                   |
+|                                    |     spte = 0;                     |
+|                                    |                                   |
+|                                    | pfn1 is re-alloced for gfn2.      |
+|                                    |                                   |
+|                                    | gpte is changed to point to       |
+|                                    | gfn2 by the guest::               |
+|                                    |                                   |
+|                                    |     spte = pfn1;                  |
++------------------------------------+-----------------------------------+
+| ::                                                                     |
+|                                                                        |
+|   if (cmpxchg(spte, old_spte, old_spte+W)                              |
+|      mark_page_dirty(vcpu->kvm, gfn1)                                  |
+|            OOPS!!!                                                     |
++------------------------------------------------------------------------+
 
 We dirty-log for gfn1, that means gfn2 is lost in dirty-bitmap.
 
@@ -109,36 +118,42 @@ Accessed bit and Dirty bit can not be lost. But it is not true after
 fast page fault since the spte can be marked writable between reading
 spte and updating spte. Like below case:
 
-At the beginning::
-
-        spte.W = 0
-        spte.Accessed = 1
-
-   VCPU 0                           VCPU0
-
-In mmu_spte_clear_track_bits()::
-
-   old_spte = *spte;
-
-   /* 'if' condition is satisfied. */
-   if (old_spte.Accessed == 1 &&
-        old_spte.W == 0)
-      spte = 0ull;
-                                         on fast page fault path:
-                                             spte.W = 1
-                                         memory write on the spte:
-                                             spte.Dirty = 1
-
-
-   else
-      old_spte = xchg(spte, 0ull)
-
-
-   if (old_spte.Accessed == 1)
-      kvm_set_pfn_accessed(spte.pfn);
-   if (old_spte.Dirty == 1)
-      kvm_set_pfn_dirty(spte.pfn);
-         OOPS!!!
++------------------------------------------------------------------------+
+| At the beginning::                                                      |
+|                                                                        |
+|        spte.W = 0                                                      |
+|        spte.Accessed = 1                                               |
++------------------------------------+-----------------------------------+
+| CPU 0:                             | CPU 1:                            |
++------------------------------------+-----------------------------------+
+| In mmu_spte_clear_track_bits()::   |                                   |
+|                                    |                                   |
+|  old_spte = *spte;                 |                                   |
+|                                    |                                   |
+|                                    |                                   |
+|  /* 'if' condition is satisfied. */|                                   |
+|  if (old_spte.Accessed == 1 &&     |                                   |
+|       old_spte.W == 0)             |                                   |
+|     spte = 0ull;                   |                                   |
++------------------------------------+-----------------------------------+
+|                                    | on fast page fault path::         |
+|                                    |                                   |
+|                                    |     spte.W = 1                    |
+|                                    |                                   |
+|                                    | memory write on the spte::        |
+|                                    |                                   |
+|                                    |     spte.Dirty = 1                |
++------------------------------------+-----------------------------------+
+| ::                                 |                                   |
+|                                    |                                   |
+|  else                              |                                   |
+|     old_spte = xchg(spte, 0ull)    |                                   |
+|  if (old_spte.Accessed == 1)       |                                   |
+|     kvm_set_pfn_accessed(spte.pfn);|                                   |
+|  if (old_spte.Dirty == 1)          |                                   |
+|     kvm_set_pfn_dirty(spte.pfn);   |                                   |
+|        OOPS!!!                     |                                   |
++------------------------------------+-----------------------------------+
 
 The Dirty bit is lost in this case.
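
PS: for anyone skimming the thread, the race in the first table is the
classic ABA pattern: cmpxchg only proves that the *value* of the spte did
not change, not that its *meaning* did not. Below is a minimal user-space
sketch of that pattern with C11 atomics. It is purely an illustration, not
KVM code: PTE_W, PFN1 and the bit layout are made up for the demo.

/* aba.c - illustration only; build with: cc -std=c11 aba.c */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_W  1ULL          /* pretend write-permission bit */
#define PFN1   0x1000ULL     /* the pfn that backs gfn1 at first */

static _Atomic uint64_t spte = PFN1;

int main(void)
{
	/* CPU 0, fast page fault path: snapshot the spte. */
	uint64_t old_spte = atomic_load(&spte);

	/* CPU 1: pfn1 is swapped out, then re-allocated for gfn2 and
	 * the guest maps gfn2 to it: same bits, new meaning (A->B->A).
	 */
	atomic_store(&spte, 0);
	atomic_store(&spte, PFN1);      /* now backs gfn2, not gfn1 */

	/* CPU 0: the value still matches the snapshot, so the cmpxchg
	 * succeeds even though the gfn behind the pfn has changed.
	 */
	uint64_t expected = old_spte;
	if (atomic_compare_exchange_strong(&spte, &expected,
					   old_spte | PTE_W))
		printf("cmpxchg ok -> mark_page_dirty() for gfn1: OOPS\n");
	return 0;
}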
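
The second table is the same read-check-update hole seen from the other
side: the cheap non-atomic clear (spte = 0ull) is only safe if nothing can
set the Dirty bit between the check and the store. Another sketch, with the
same caveat that the names and bit layout are invented for the demo:

/* lost_dirty.c - illustration only; build with: cc -std=c11 lost_dirty.c */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_W        (1ULL << 0)
#define SPTE_ACCESSED (1ULL << 1)
#define SPTE_DIRTY    (1ULL << 2)

static _Atomic uint64_t spte = SPTE_ACCESSED;    /* W=0, Accessed=1 */

int main(void)
{
	uint64_t old_spte = atomic_load(&spte);

	if ((old_spte & SPTE_ACCESSED) && !(old_spte & SPTE_W)) {
		/* The stale snapshot says read-only + Accessed, so the
		 * non-atomic clear looks safe.  But before the store
		 * lands, CPU 1's fast page fault path makes the spte
		 * writable and a guest write sets Dirty:
		 */
		atomic_fetch_or(&spte, SPTE_W);
		atomic_fetch_or(&spte, SPTE_DIRTY);

		/* The non-atomic store now throws that Dirty bit away. */
		atomic_store(&spte, 0);
	} else {
		/* The safe variant reads and clears in one shot. */
		old_spte = atomic_exchange(&spte, 0);
	}

	if (!(old_spte & SPTE_DIRTY))
		printf("guest write lost from dirty tracking: OOPS\n");
	return 0;
}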