From: Gionatan Danti
Date: Wed, 26 Apr 2017 18:37:37 +0200
Subject: Re: [linux-lvm] Snapshot behavior on classic LVM vs ThinLVM
To: Zdenek Kabelac
Cc: LVM general discussion and development

On 26-04-2017 16:33 Zdenek Kabelac wrote:
> But you get correct 'write' error - so from application POV - you get
> failing transaction update/write - so app knows 'data' were lost and
> should not proceed with next transaction - so it's in line with 'no
> data is lost' and filesystem is not damaged and is in correct state
> (mountable).

True, but even on a full pool an application with multiple outstanding 
writes can have some of them completed/committed while others get an 
I/O error, because writes to already-allocated space are permitted 
while writes to unallocated space fail. If, for example, I overwrite 
some already-allocated files, those writes will be committed even if 
the pool is completely full.
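The distinction above can be sketched with a toy model (a hypothetical 
Python class, not actual lvm2 code): a first write to a block must 
allocate backing storage and fails with ENOSPC when the pool is full, 
while an overwrite of an already-provisioned block succeeds regardless.

```python
import errno


class ThinPool:
    """Toy model of thin-pool allocation semantics (illustrative only)."""

    def __init__(self, total_blocks):
        self.total_blocks = total_blocks  # pool capacity, in blocks
        self.allocated = set()            # block numbers with backing storage

    def write(self, block):
        if block in self.allocated:
            # Overwrite of provisioned space: succeeds even on a full pool.
            return True
        if len(self.allocated) >= self.total_blocks:
            # First write to this block needs a fresh allocation -> I/O error.
            raise OSError(errno.ENOSPC, "no space left in pool")
        self.allocated.add(block)         # first write provisions the block
        return True


pool = ThinPool(total_blocks=2)
pool.write(0)
pool.write(1)        # the pool is now completely full
pool.write(0)        # overwrite of allocated space: still succeeds
try:
    pool.write(2)    # needs a new allocation: fails
except OSError as e:
    print("write failed:", e.strerror)
```

This is exactly the mixed outcome described above: an application with 
writes outstanding against both kinds of blocks sees some succeed and 
some fail.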
In past discussions, I had the impression that the only filesystem you 
feel safe running on a thin pool is ext4 with errors=remount-ro, on 
the assumption that *any* failed write will trigger read-only mode. 
But from my tests it seems that only *failed metadata updates* trigger 
read-only mode. If this is really the case, remount-ro is indeed a 
mandatory option; however, as metadata can reside on already-allocated 
blocks, even on a full pool they have a chance to be committed without 
triggering the remount-ro.

At the same time, I thought you considered the thin pool + XFS combo 
somewhat "risky", as XFS does not have a remount-ro option. Actually, 
XFS seems to *always* shut down the filesystem on a failed metadata 
update.

Maybe I misunderstood some of your messages; if so, sorry for that.

Anyway, I think (and maybe I am wrong...) that the better solution is 
to fail *all* writes to a full pool, even those directed to 
already-allocated space. This would effectively "freeze" the pool and 
avoid any long-standing inconsistencies.

Thanks.

-- 
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8