From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from bella.media.mit.edu (bella.media.mit.edu [18.85.58.176])
	by mail.server123.net (Postfix) with ESMTP for ;
	Mon, 8 Feb 2016 04:43:25 +0100 (CET)
From: f-dm-c@media.mit.edu
In-reply-to: <56B80183.60006@whgl.uni-frankfurt.de> (message from Sven
	Eschenberg on Mon, 8 Feb 2016 03:46:27 +0100)
References: <56B30DE8.1060502@gmail.com>
	<20160204092017.GA25029@yeono.kjorling.se>
	<56B37D92.2030306@whgl.uni-frankfurt.de>
	<20160204172311.GB20874@tansi.org>
	<20160205155743.GA32705@tansi.org>
	<56B5356B.3030704@whgl.uni-frankfurt.de>
	<20160206025854.GA5986@tansi.org>
	<56B56605.4030907@whgl.uni-frankfurt.de>
	<20160207070958.35502402ED@darkstar.media.mit.edu>
	<20160207231750.GB29215@tansi.org>
	<20160208020631.E16BA402ED@darkstar.media.mit.edu>
	<56B80183.60006@whgl.uni-frankfurt.de>
Message-Id: <20160208034324.790B2402ED@darkstar.media.mit.edu>
Date: Sun, 7 Feb 2016 22:43:24 -0500 (EST)
Subject: Re: [dm-crypt] The future of disk encryption with LUKS2
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: dm-crypt@saout.de

> Date: Mon, 8 Feb 2016 03:46:27 +0100
> From: Sven Eschenberg

> If a sector fails, it is not that uncommon that a whole chunk of
> consecutive sectors fail (for rotating disks that is).

Oh, come on.  A one-meg gap is 256 4K sectors and 1024 1K sectors.
I've never seen anything take out more than a handful of adjacent
sectors unless the disk has failed completely.  Anything that chews up
multiple megs or tens of megs at the start of your FS is likely to
destroy random other parts of it as well.  Okay, how about a -10- meg
gap?  Is that enough?  If you need resilience against massive
corruption like that, use a header backup -on other media-, and -also-
an actual backup of the FS.
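For concreteness, the gap arithmetic above is easy to check; the figures here (1 MiB and 10 MiB gaps, 4 KiB and 1 KiB sectors) are just the ones from this thread:

```shell
# How many whole sectors does a gap of a given size span?
MiB=$((1024 * 1024))

echo $(( MiB / 4096 ))        # 1 MiB gap in 4 KiB sectors -> 256
echo $(( MiB / 1024 ))        # 1 MiB gap in 1 KiB sectors -> 1024
echo $(( 10 * MiB / 4096 ))   # 10 MiB gap in 4 KiB sectors -> 2560
```

Even the 10-meg variant is only a few thousand sectors---still nothing on a modern disk.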
Complicating LUKS to the point where resizing becomes fraught and
difficult to handle, and other tools need all kinds of special
instructions, just to solve a problem where the disk is already in
severe distress or something has written tens of megs of garbage all
over it, seems pointless.  The (potentially solvable) problem we've
seen most on this list is not massive disk failure, but OSes that
decide to overwrite a sector or two near the front.

So maybe we'll be extravagant and use 10 megs of clear space between
the two copies---that's still absolutely in the noise on any
reasonable disk, while being dead simple to implement.  It does not
require any knowledge of the ultimate container size, does not require
motion if the size changes, and will withstand almost any conceivable
failure except someone doing "dd if=/dev/zero of=part" and then not
noticing until a minute later---at which point it's time to go to the
backups anyway.  And it doesn't involve hairing up the options to
enable/disable/move around/dance a jig with where the backup header is
stored.  Keep it simple.
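For the backup-header-on-other-media approach, cryptsetup already ships the commands; this is just a sketch, with the device path (/dev/sdX2) and the backup location on removable media as placeholders you'd substitute for your own:

```shell
# Hypothetical device and removable-media paths -- substitute your own.
DEV=/dev/sdX2
BACKUP=/mnt/usb/sdX2-luks-header.img

# Save the LUKS header and keyslot area to other media:
cryptsetup luksHeaderBackup "$DEV" --header-backup-file "$BACKUP"

# Later, after something has scribbled over the front of the device:
cryptsetup luksHeaderRestore "$DEV" --header-backup-file "$BACKUP"
```

Note that the backup file contains the keyslots, so it needs to be protected as carefully as the device itself.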