From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4C73CF8D.5060405@codemonkey.ws>
Date: Tue, 24 Aug 2010 08:56:29 -0500
From: Anthony Liguori
MIME-Version: 1.0
References: <1282646430-5777-1-git-send-email-kwolf@redhat.com> <4C73C2BF.8050300@codemonkey.ws> <4C73C622.7080808@redhat.com> <4C73C926.3010901@codemonkey.ws> <4C73C9CF.7090800@redhat.com> <4C73CAA9.2060104@codemonkey.ws> <4C73CB85.9010306@redhat.com> <4C73CBD6.7000900@codemonkey.ws> <4C73CCCB.6050704@redhat.com>
In-Reply-To: <4C73CCCB.6050704@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Re: [RFC][STABLE 0.13] Revert "qcow2: Use bdrv_(p)write_sync for metadata writes"
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: Kevin Wolf, stefanha@gmail.com, mjt@tls.msk.ru, qemu-devel@nongnu.org, hch@lst.de

On 08/24/2010 08:44 AM, Avi Kivity wrote:
> On 08/24/2010 04:40 PM, Anthony Liguori wrote:
>>> 1. Allocate a cluster (increase refcount table)
>>>
>>> 2. Link cluster to L2 table
>>>
>>> 3. Second operation makes it to disk; first still in pagecache
>>>
>>> 4. Crash
>>>
>>> 5. Dangling pointer from L2 to freed cluster
>>
>> Yes, having this discussion in IRC.
>>
>> The problem is that we maintain a refcount table.
>
> Are you sure that's the only issue?

No.

>> If we didn't do internal disk snapshots, we wouldn't have this
>> problem.  IOW, VMDK doesn't have this problem, so the answer to my
>> very first question is that qcow2 is too difficult a format to get
>> right.
>
> One doesn't follow from the other (though I'm no fan of internal
> snapshots, myself).

It does.  Let's consider the failure scenarios:

1) guest submits write request
2) allocate extent
3) write data to disk (a)
4) write (a) completes
5) update reference count table for new extent (b)
6) write (b) completes
7) write extent table (c)
8) write (c) completes
9) complete guest write request

If this all happened in order and we lost power, the worst-case error
is that we leak a block, which isn't terrible.

But we're not guaranteed that this happens in order.  If (b) or (c)
happens before (a), the image is not corrupted but data is lost.
That's okay because it's part of the guest contract.

If (c) happens before (b), then we've created an extent that's
attached to a table with a zero reference count.  This is a corrupt
image.

Now let's consider what happens if we eliminate the reference count
table, which means eliminating internal snapshots:

1) guest submits write request
2) allocate extent
3) write data to disk (a)
4) write (a) completes
5) write extent table (c)
6) write (c) completes
7) complete guest write request

If this all happens in order and we lose power, we just leak a block.
It means we need a periodic fsck.

If (c) completes before (a), the image is not corrupted but data is
lost.  This is okay based on the guest contract.

And that's it.  There is no scenario where the disk is corrupted.

So in summary, neither situation is perfect, but scenario (1) can
result in a corrupted image, whereas scenario (2) only results in
leakage.  The classic solution to this is fsck.

Regards,

Anthony Liguori
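P.S.  The two orderings above can be checked by brute force.  This is an illustrative sketch, not qcow2 code: the write labels (a/b/c) and the classification rules simply restate the argument in this mail, and all names are mine.

```python
from itertools import chain, combinations

def crash_states(writes):
    # With no ordering guarantees, a crash can leave any subset of the
    # issued writes persisted on disk.
    return chain.from_iterable(
        combinations(writes, r) for r in range(len(writes) + 1))

def classify(done, refcounted):
    # a = guest data, b = refcount table update, c = extent (L2) table update
    done = set(done)
    if refcounted and 'c' in done and 'b' not in done:
        # Extent table points at a cluster whose refcount is still zero.
        return 'corrupt image'
    if 'a' in done and 'c' in done:
        return 'ok'
    # Guest write not yet visible: allowed by the guest contract.
    # (b without c additionally leaks a block, which fsck can reclaim.)
    return 'data loss (allowed)'

# Scenario (1): data + refcount + extent table writes.
with_refcount = {classify(s, True) for s in crash_states(['a', 'b', 'c'])}
# Scenario (2): no refcount table, so only data + extent table writes.
without_refcount = {classify(s, False) for s in crash_states(['a', 'c'])}

print('corrupt image' in with_refcount)     # True: scheme (1) can corrupt
print('corrupt image' in without_refcount)  # False: scheme (2) only leaks/loses
```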