From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 12 May 2017 14:42:02 +0100
From: Joe Thornber
Message-ID: <20170512134157.GA2523@nim>
References: <4b4d56ef-3127-212b-0e68-00b595faa241@redhat.com> <0535f3d744145eceea9121b1e68b622d@assyoma.it> <4fb6f017d9734892eff6b0ef544d99fc@assyoma.it> <20ddda25-dacf-f4e2-8df4-f9bed1c62fe7@redhat.com> <921a6b9c-103e-3c71-97d2-44ceb5a6bf87@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
Subject: Re: [linux-lvm] Snapshot behavior on classic LVM vs ThinLVM
Reply-To: LVM general discussion and development
List-Id: LVM general discussion and development
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: LVM general discussion and development
Cc: Zdenek Kabelac

On Fri, May 12, 2017 at 03:02:58PM +0200, Gionatan Danti wrote:
> On 02/05/2017 13:00, Gionatan Danti wrote:
> >> Anyway, I think (and maybe I am wrong...) that the better solution is to
> >> fail *all* writes to a full pool, even the ones directed to allocated
> >> space. This will effectively "freeze" the pool and avoid any
> >> long-standing inconsistencies.

I think dm-thin behaviour is fine given the semantics of write and flush IOs.

A block device can complete a write even if it hasn't hit the physical media; a flush request needs to come in at a later time, which means 'flush all IOs that you've previously completed'.

So any software using a block device (fs, database, etc.) tends to generate batches of writes, followed by a flush to commit the changes. For example, if there were a power failure between the batch of write IOs completing and the flush completing, you would not know how many of the writes will be visible when the machine comes back.

When a pool is full it will allow writes to provisioned areas of a thin device to succeed. But if any writes failed due to an inability to provision, then a REQ_FLUSH io to that thin device will *not* succeed.

- Joe