* how do i bring this disk back into the fold?
From: David T-G @ 2021-03-28  2:12 UTC
  To: Linux RAID list

Hi, all --

I recently migrated our disk farm to a new server (see the next email),
and I see that one of the partitions in a RAID5 set is inactive:

  diskfarm:~ # cat /proc/mdstat 
  Personalities : [raid6] [raid5] [raid4] 
  md127 : inactive sdf2[1](S) sdl2[0](S) sdj2[3](S)
        2196934199 blocks super 1.2
         
  md0 : active raid5 sdc1[3] sdd1[4] sdb1[0]
        11720265216 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]

  diskfarm:~ # mdadm --examine /dev/sd[bcde]1 | egrep '/dev|Name|Role|State|Checksum|Events|UUID'
  /dev/sdb1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : clean
      Device UUID : bbcf5aff:e4a928b8:4fd788c2:c3f298da
         Checksum : 4aa669d5 - correct
           Events : 77944
     Device Role : Active device 0
     Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
  /dev/sdc1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : clean
      Device UUID : c0a32425:2d206e98:78f9c264:d39e9720
         Checksum : 38ee846d - correct
           Events : 77944
     Device Role : Active device 2
     Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
  /dev/sdd1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : clean
      Device UUID : f05a143b:50c9b024:36714b9a:44b6a159
         Checksum : 49b381d8 - correct
           Events : 77944
     Device Role : Active device 3
     Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
  /dev/sde1:
       Array UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
             Name : diskfarm:0  (local to host diskfarm)
            State : active
      Device UUID : 835389bd:c065c575:0b9f2357:9070a400
         Checksum : 80e1b8c4 - correct
           Events : 60360
     Device Role : Active device 1
     Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

  diskfarm:~ # mdadm --detail /dev/md0
  /dev/md0:
             Version : 1.2
       Creation Time : Mon Feb  6 05:56:35 2017
          Raid Level : raid5
          Array Size : 11720265216 (10.92 TiB 12.00 TB)
       Used Dev Size : 3906755072 (3.64 TiB 4.00 TB)
        Raid Devices : 4
       Total Devices : 3
         Persistence : Superblock is persistent

         Update Time : Sun Mar 28 01:44:43 2021
               State : clean, degraded 
      Active Devices : 3
     Working Devices : 3
      Failed Devices : 0
       Spare Devices : 0

              Layout : left-symmetric
          Chunk Size : 512K

  Consistency Policy : resync

                Name : diskfarm:0  (local to host diskfarm)
                UUID : ca7008ef:90693dae:6c231ad7:08b3f92d
              Events : 77944

      Number   Major   Minor   RaidDevice State
         0       8       17        0      active sync   /dev/sdb1
         -       0        0        1      removed
         3       8       33        2      active sync   /dev/sdc1
         4       8       49        3      active sync   /dev/sdd1

Before I go too crazy ...  What do I need to do to bring sde1 back into
the RAID volume, either to catch up the missing 17k events (probably
preferred) or just to rebuild it?


TIA & HANN

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



* Re: how do i bring this disk back into the fold?
From: David T-G @ 2021-04-02  0:40 UTC
  To: Linux RAID list

Hi again, all --

...and then David T-G home said...
% 
...
%   diskfarm:~ # cat /proc/mdstat 
%   Personalities : [raid6] [raid5] [raid4] 
...
%   md0 : active raid5 sdc1[3] sdd1[4] sdb1[0]
%         11720265216 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
% 
%   diskfarm:~ # mdadm --examine /dev/sd[bcde]1 | egrep '/dev|Name|Role|State|Checksum|Events|UUID'
%   /dev/sdb1:
...
%             State : clean
...
%            Events : 77944
%      Device Role : Active device 0
%      Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
%   /dev/sdc1:
...
%             State : clean
...
%            Events : 77944
%      Device Role : Active device 2
%      Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
%   /dev/sdd1:
...
%             State : clean
...
%            Events : 77944
%      Device Role : Active device 3
%      Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
%   /dev/sde1:
...
%             State : active
...
%            Events : 60360
%      Device Role : Active device 1
%      Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
% 
...
% Before I go too crazy ...  What do I need to do to bring sde1 back into
% the RAID volume, either to catch up the missing 17k events (probably
% preferred) or just to rebuild it?
[snip]

Any advice?


Thanks again & HANW

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



* Re: how do i bring this disk back into the fold?
From: antlists @ 2021-04-02  0:46 UTC
  To: David T-G, Linux RAID list

On 02/04/2021 01:40, David T-G wrote:
> % Before I go too crazy ...  What do I need to do to bring sde1 back into
> % the RAID volume, either to catch up the missing 17k events (probably
> % preferred) or just to rebuild it?
> [snip]
> 
> Any advice?

mdadm --re-add?

Re-add will replay all the missed updates if it can; if it can't, it
just does an add and rebuilds.
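
Something like this, as a sketch using the device names from your
earlier output (double-check them on your own system first):

  mdadm /dev/md0 --re-add /dev/sde1

  # if the re-add is refused (no bitmap, or too stale), fall back
  # to a full add, which rebuilds the member from scratch
  mdadm /dev/md0 --add /dev/sde1

  # then watch the recovery
  cat /proc/mdstat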

Check the man page for details, and come back if it looks scary ...

Cheers,
Wol


* Re: how do i bring this disk back into the fold?
From: David T-G @ 2021-04-02  5:05 UTC
  To: antlists; +Cc: Linux RAID list

Wol, et al --

...and then antlists said...
% 
% mdadm --re-add?
[snip]

Thanks!  Sure enough, --re-add didn't work, but --add happily got under
way.  Only nine hours to go ... :-)


Thanks again & HANW

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



* Re: how do i bring this disk back into the fold?
From: Roger Heflin @ 2021-04-02 19:41 UTC
  To: David T-G; +Cc: antlists, Linux RAID list

The re-add will only work if the array has bitmaps.  For quick disk
hiccups the re-add is nice because instead of 9 hours, often it
finishes in only a few minutes assuming the disk has not been out of
the array for long.
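
If the array doesn't have a bitmap yet, an internal write-intent
bitmap can be added on the fly; a sketch (substitute your own array
name):

  mdadm --grow /dev/md0 --bitmap=internal
  mdadm --detail /dev/md0 | grep -i bitmap   # confirm it took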

On Fri, Apr 2, 2021 at 12:08 AM David T-G <davidtg-robot@justpickone.org> wrote:
>
> Thanks!  Sure enough, --re-add didn't work, but --add happily got under
> way.  Only nine hours to go ... :-)
[snip]


* Re: bitmaps on xfs (was "Re: how do i bring this disk back into the fold?")
From: David T-G @ 2021-04-05  3:46 UTC
  To: Linux RAID list

Roger, et al --

...and then Roger Heflin said...
% 
% The re-add will only work if the array has bitmaps.  For quick disk

Ahhhhh...  Good point.

It didn't really take 9 hours; a few minutes later it was up to 60+
hours, and then it dropped to a couple of hours and was done the next
time I looked.  I also forced the other array using just the last two
drives and saw everything happy, so I then added the "first" drive and
now it's all happy as well.  Woo hoo.
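
(For the record, the forced assembly was along these lines; the exact
device names are from memory, so treat this as a sketch rather than a
transcript.)

  mdadm --stop /dev/md127
  mdadm --assemble --force /dev/md127 /dev/sdf2 /dev/sdj2
  mdadm /dev/md127 --add /dev/sdl2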


% hiccups the re-add is nice because instead of 9 hours, often it
% finishes in only a few minutes assuming the disk has not been out of
% the array for long.

I love the idea.  I've been reading up, and in addition to questions of
what size bitmap I need for my sizes

  diskfarm:~ # df -kh /mnt/4Traid5md/ /mnt/750Graid5md/
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/md0p1       11T   11T  309G  98% /mnt/4Traid5md
  /dev/md127p1    1.4T  1.4T   14G 100% /mnt/750Graid5md

and how to tell it (or *if* I tell it; that still isn't clear) there's
also the question of whether or not xfs

  diskfarm:~ # grep /mnt/ssd /etc/fstab
  LABEL=diskfarm-ssd      /mnt/ssd        xfs     defaults        0  0

will work for my bitmap files target, since all I see is that it must be
an ext2 or ext3 (not ext4? old news?) device.
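
(For concreteness, what I have in mind is the external-bitmap form,
something like

  mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap

with the file name made up for illustration; whether an xfs-backed
file is acceptable there is exactly the open question.)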

Anyway, thanks again for the sanity checks and pointers.  It's good to be
whole again :-)  I look forward to the day when I can dig into growing to
more, larger disks and have to contemplate reshaping from RAID5 to RAID6 :-)


HAND

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



* Re: bitmaps on xfs (was "Re: how do i bring this disk back into the fold?")
From: Roger Heflin @ 2021-04-05 11:30 UTC
  To: David T-G; +Cc: Linux RAID list

On Sun, Apr 4, 2021 at 10:47 PM David T-G <davidtg-robot@justpickone.org> wrote:
> [snip]
>
> I love the idea.  I've been reading up, and in addition to questions of
> what size bitmap I need for my sizes
>
>   diskfarm:~ # df -kh /mnt/4Traid5md/ /mnt/750Graid5md/
>   Filesystem      Size  Used Avail Use% Mounted on
>   /dev/md0p1       11T   11T  309G  98% /mnt/4Traid5md
>   /dev/md127p1    1.4T  1.4T   14G 100% /mnt/750Graid5md
>
> and how to tell it (or *if* I tell it; that still isn't clear) there's
> also the question of whether or not xfs

Easy enough to tell if it is working:
md14 : active raid6 sdh4[11] sdg4[6] sdf4[10] sdd4[5] sdc4[9] sdb4[7] sde4[1]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

I also hack my disks into several partitions such that I have 4 raid6
arrays.   This helps because the rebuild time on the entire disk is
days, and it makes me feel better when expanding the arrays as it
makes the chunks smaller.    The biggest help is that when I start
getting bad blocks on one of the disks, typically only 1 of the 4
arrays/disk sections is having bad blocks.  I also made sure that
md*4 always has partition sd*4, to reduce the thinking about what
was where.
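
Illustratively, the convention means each array is built from the
same-numbered partition on every disk, along the lines of (made-up
example, not my actual create commands):

  mdadm --create /dev/md1 --level=6 --raid-devices=7 /dev/sd[b-h]1
  mdadm --create /dev/md4 --level=6 --raid-devices=7 /dev/sd[b-h]4

so a bad-block problem in one region of a disk only degrades the one
array built from that region.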

>
>   diskfarm:~ # grep /mnt/ssd /etc/fstab
>   LABEL=diskfarm-ssd      /mnt/ssd        xfs     defaults        0  0
>
> will work for my bitmap files target, since all I see is that it must be
> an ext2 or ext3 (not ext4? old news?) device.
>
I don't know; I have always done mine internal.  I could see some
advantage to having it on an SSD vs internally, so I may have to try
that; I am about to do some array reworks to go from all 3tb disks to
start using some 6tb disks.  If the file were pre-allocated I would
not think it would matter which.  The page is dated 2011, which would
be old enough that no one had tested ext4/xfs.

I was going to tell you that you could just create an LV and format
it ext3 and use it, but I see it appears you are using direct
partitions only.


* Re: bitmaps on xfs (was "Re: how do i bring this disk back into the fold?")
From: antlists @ 2021-04-05 17:29 UTC
  To: Roger Heflin, David T-G; +Cc: Linux RAID list

On 05/04/2021 12:30, Roger Heflin wrote:
>>    diskfarm:~ # grep /mnt/ssd /etc/fstab
>>    LABEL=diskfarm-ssd      /mnt/ssd        xfs     defaults        0  0
>>
>> will work for my bitmap files target, since all I see is that it must be
>> an ext2 or ext3 (not ext4? old news?) device.

Bear in mind you're better off using a journal (and bitmaps and journals 
are incompatible).

"not ext4" seems odd to me because - from a kernel point of view - ext's 
2 and 3 no longer longer exist.
>>
> I don't know; I have always done mine internal.  I could see some
> advantage to having it on an SSD vs internally, so I may have to try
> that; I am about to do some array reworks to go from all 3tb disks to
> start using some 6tb disks.  If the file were pre-allocated I would
> not think it would matter which.  The page is dated 2011, which would
> be old enough that no one had tested ext4/xfs.
> 
Umm... don't use all the space on your 6TB disks. I'm planning to build 
my arrays on dm-integrity, which will make raid 5 a bit more trustworthy.

> I was going to tell you that you could just create an LV and format
> it ext3 and use it, but I see it appears you are using direct
> partitions only.

My new system? 4TB disks, with one terabyte raided and lvm on top for 
root partitions (I'll be configuring it multi-boot or VMs...). Then 
three terabytes with dm-integrity at the bottom, then raid, then lvm on 
top for /home and backup snapshots.

Cheers,
Wol


* Re: bitmaps on xfs
From: David T-G @ 2021-04-05 17:46 UTC
  To: Linux RAID list

Wol & Roger, et al --

...and then antlists said...
% 
% On 05/04/2021 12:30, Roger Heflin wrote:
% >>   diskfarm:~ # grep /mnt/ssd /etc/fstab
% >>   LABEL=diskfarm-ssd      /mnt/ssd        xfs     defaults        0  0
% >>
% >>will work for my bitmap files target, since all I see is that it must be
% >>an ext2 or ext3 (not ext4? old news?) device.
% 
% Bear in mind you're better off using a journal (and bitmaps and
% journals are incompatible).

A journal of the filesystem (XFS or ReiserFS) on the RAID5 device?  Or a journal
of the actual md?

  diskfarm:~ # df -kh /mnt/4Traid5md/ /mnt/750Graid5md/
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/md0p1       11T   11T  309G  98% /mnt/4Traid5md
  /dev/md127p1    1.4T  1.4T   14G 100% /mnt/750Graid5md

  diskfarm:~ # mount | grep /dev/md
  /dev/md0p1 on /mnt/4Traid5md type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=2048,noquota)
  /dev/md127p1 on /mnt/750Graid5md type reiserfs (rw,relatime)


% 
...
% >I am about to do some array reworks to go from all 3tb disks to start
% >using some 6tb disks.   If the file was pre-allocated I would not
...
% >
% Umm... don't use all the space on your 6TB disks. I'm planning to
% build my arrays on dm-integrity, which will make raid 5 a bit more
% trustworthy.
[snip]

Oooh, something else to learn :-)  I hope to go from 4 drives to 6 when I
do, and I'll be buying the best GB/$ at the time, but it will also be a
grow-over-time thing.


Thanks again & HAND

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



* Re: bitmaps on xfs
From: antlists @ 2021-04-05 17:58 UTC
  To: David T-G, Linux RAID list

On 05/04/2021 18:46, David T-G wrote:
> Wol & Roger, et al --
> 
> ...and then antlists said...
> %
> % On 05/04/2021 12:30, Roger Heflin wrote:
> % >>   diskfarm:~ # grep /mnt/ssd /etc/fstab
> % >>   LABEL=diskfarm-ssd      /mnt/ssd        xfs     defaults        0  0
> % >>
> % >>will work for my bitmap files target, since all I see is that it must be
> % >>an ext2 or ext3 (not ext4? old news?) device.
> %
> % Bear in mind you're better off using a journal (and bitmaps and
> % journals are incompatible).
> 
> A journal of the filesystem (XFS or ReiserFS) on the RAID5 device?  Or a journal
> of the actual md?

Journal of the md. I'm thinking raid journal, which fixes the raid-5 
write hole (I don't understand it, but if a system crashes in the middle 
of a raid-5 write it can apparently mess things up something horrid).
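
In mdadm terms that's the journal device; a sketch (device name
purely illustrative, and remember a journal and a bitmap are
mutually exclusive):

  mdadm --create /dev/mdX --level=5 --raid-devices=4 \
        --write-journal /dev/nvme0n1p1 /dev/sd[bcde]1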
> 
>    diskfarm:~ # df -kh /mnt/4Traid5md/ /mnt/750Graid5md/
>    Filesystem      Size  Used Avail Use% Mounted on
>    /dev/md0p1       11T   11T  309G  98% /mnt/4Traid5md
>    /dev/md127p1    1.4T  1.4T   14G 100% /mnt/750Graid5md
> 
>    diskfarm:~ # mount | grep /dev/md
>    /dev/md0p1 on /mnt/4Traid5md type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=2048,noquota)
>    /dev/md127p1 on /mnt/750Graid5md type reiserfs (rw,relatime)
> 
> 
> %
> ...
> % >I am about to do some array reworks to go from all 3tb disks to start
> % >using some 6tb disks.   If the file was pre-allocated I would not
> ...
> % >
> % Umm... don't use all the space on your 6TB disks. I'm planning to
> % build my arrays on dm-integrity, which will make raid 5 a bit more
> % trustworthy.
> [snip]
> 
> Oooh, something else to learn :-)  I hope to go from 4 drives to 6 when I
> do, and I'll be buying the best GB/$ at the time, but it will also be a
> grow-over-time thing.
> 
dm-integrity has nothing to do with raid per se, but it does checksum 
the data on disk. If your data is corrupted (rather than lost) there's 
no way you can get it back with raid-5; dm-integrity turns corruption 
into data loss, allowing raid-5 to recover.
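
A minimal sketch of that stacking (device names illustrative;
integritysetup ships with cryptsetup):

  # give each member its own checksumming layer
  integritysetup format /dev/sdb3
  integritysetup open /dev/sdb3 int-sdb3
  # ...repeat for the other member disks...

  # then build the raid on top of the dm-integrity devices
  mdadm --create /dev/md1 --level=5 --raid-devices=4 \
        /dev/mapper/int-sd{b,c,d,e}3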

Read my journey building my new system ... :-)
https://raid.wiki.kernel.org/index.php/System2020

I've got a little more to add, but it's stalled for the perennial 
problem of finding time to do and concentrate.

Cheers,
Wol


* Re: bitmaps on xfs (was "Re: how do i bring this disk back into the fold?")
From: Roger Heflin @ 2021-04-05 21:02 UTC
  To: antlists; +Cc: David T-G, Linux RAID list

I have read the analyses on data corruption in disk setups.    From
what I can tell they use incorrect assumptions and models, and the
estimates of risk are a fair number of orders of magnitude higher
than is actually the case.

Because of that I am not that worried about the risks to my data,
and the extra checking comes with a performance impact.  If I lose a
few blocks of data it is not the end of the world; losing parts of
the data I have would be annoying, but it is only annoying.

I have been managing large environments for 20+ years, amounting to
1000's of petabyte-years.    I have seen a total of 3 events where
undetected corruption happened, and the way they were detected makes
it pretty unlikely there are more than 2x that number of undetected
corruptions total in the environment.

One was a bad PCI controller corrupting reads (confirmed writes
failed at least 100x less often).
One was a PCI bus set too fast, corrupting reads (confirmed writes
failed at least 100x less often).

The 3rd was the worst.  It was an enterprise/near-enterprise array
from a 1st-tier vendor where a very small number of SSDs (5 out of
1000's) would reboot/reset unexpectedly while holding data that had
not been completely written.  I believe they did have the drive write
caches disabled, but the way the SSDs' firmware worked, if a drive
lost power at the wrong time the data was not yet really written and
was lost.  The 5 disks themselves seem to have been broken.   The
biggest mistake the vendor made was not immediately ejecting from the
array any device that randomly "rebooted" and/or reset without being
told to.

On Mon, Apr 5, 2021 at 12:29 PM antlists <antlists@youngman.org.uk> wrote:
>
> Umm... don't use all the space on your 6TB disks. I'm planning to build
> my arrays on dm-integrity, which will make raid 5 a bit more trustworthy.
[snip]


* Re: bitmaps on xfs
From: Mark Wagner @ 2021-04-07 20:14 UTC
  To: Linux RAID list

On Tue, Apr 6, 2021 at 12:14 AM antlists <antlists@youngman.org.uk> wrote:

> (I don't understand it, but if a system crashes in the middle
> of a raid-5 write it can apparently mess things up something horrid).

Short version is that the disks making up a RAID stripe don't always
get written simultaneously.  If things crash just wrong, you can get
half the stripe with old data, half the stripe with new data, and no
way to tell which is which.  A journal fixes this by writing the data
twice: first to the journal, then to the array.  If the system crashes
while writing to the journal, you've still got the entire old data on
the array; if it crashes while writing to the array, you've got the
entire new data on the journal.  You're never in an inconsistent
half-and-half situation.

-- 
Mark

