From: Chris Murphy <lists@colorremedies.com>
To: Daniel Sanabria <sanabria.d@gmail.com>
Cc: Chris Murphy <lists@colorremedies.com>,
	Linux-RAID <linux-raid@vger.kernel.org>
Subject: Re: Inactive arrays
Date: Wed, 14 Sep 2016 12:37:16 -0600	[thread overview]
Message-ID: <CAJCQCtS_bzN5v_+s3SSD5tagq+zq7DzHgH3fXufvMden6K78_g@mail.gmail.com> (raw)
In-Reply-To: <CAHscji2BKxNDLzZUonGHVB8PzeFbtuwn32NBaX315QPvvhOxyg@mail.gmail.com>

On Wed, Sep 14, 2016 at 12:16 PM, Daniel Sanabria <sanabria.d@gmail.com> wrote:
> BRAVO!!!!!!
>
> Thanks a million Chris! After following your advice on recovering the
> MBR and the GPT the arrays re-assembled automatically and all data is
> there.
>
> I already changed the type to make it consistent (FD00 on both
> partitions) and working on setting up the timeouts to 180 at boot
> time. Other than replacing the green drives with something more
> suitable (any suggestions are welcome), what else would you suggest to
> change to make the setup a bit more consistent and upgrade proof (i.e.
> having different metadata versions doesn't look right to me)?

Like I mentioned, there's something about the Greens spinning down that you
might look at. I'm not sure whether delays in spinning back up are a
contributing factor to anything. I'd expect that if the kernel/libata
sends a command to the drive and one spins up slowly, the kernel will
just wait up to whatever the command timer is set to. So if you set that
to 180 seconds it should be fine, because no drive takes 3 minutes to
spin up. But I don't know whether there's some other vector for these
drives to cause confusion.
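
For what it's worth, this is roughly how I'd check and set that (just a
sketch; the drive letters are examples, adjust for your devices):

    # Check whether a drive supports SCT ERC (configurable error recovery).
    # The Greens typically don't, which is why the long command timer matters.
    smartctl -l scterc /dev/sdX

    # Raise the kernel command timer to 180s for one drive (not persistent):
    echo 180 > /sys/block/sdX/device/timeout

    # Or at boot (e.g. from rc.local or a udev rule), bump every sd device:
    for t in /sys/block/sd*/device/timeout; do echo 180 > "$t"; done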

Umm, yeah, I don't think you need to worry too much about the metadata.
0.90 is deprecated, uses kernel autodetect rather than initrd-based
detection like metadata 1.x does, and can be more complex to troubleshoot.
But as long as it's working I honestly wouldn't mess with it. If you do
want to simplify it, make sure you have current backups first, because
changes like that are ripe for mistakes that end up in user data loss. I'd
pretty much assume the user will break something; your layout is not too
complex compared to others I've seen, but there are still opportunities to
make simple mistakes that will just blow shit up, and then you're screwed.
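
If you just want to see what each array is using before deciding anything,
something like this will show it (md0 and sda1 are examples; substitute
your own arrays and member devices):

    # Superblock version of a running array:
    mdadm --detail /dev/md0 | grep -i version

    # Or read it off a member device directly:
    mdadm --examine /dev/sda1 | grep -i version

    # /proc/mdstat also notes the superblock format for anything newer than 0.90:
    cat /proc/mdstat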

So I'd say it's easier to just plan for a future point where you buy a
bunch of new drives and do a complete migration, rather than changing the
existing setup's metadata just for the sake of changing it.

And one thing to incorporate in the planning stage is LVM RAID. You could
pool all of your drives into one big volume group and create LVs as if
they were individual RAIDs, with each LV having its own RAID level. In
many ways it's easier, because you're already using LVM on top of RAID on
top of partitioning. Instead you can create basically one partition per
drive, add them all to LVM, and then manage the LV and the RAID level at
the same time. The main issue is familiarity with the tools. If you're
more comfortable with mdadm, then use that. The hurdle is the LVM tools
themselves (it's like emacs for storage: metric piles of flags,
documentation, and features, and it still doesn't have all the same
features as mdadm for the RAID stuff). But it'll do scrubs and device
replacements; all the basic stuff is there. Monitoring for drive failures
is a little different: I don't think it has a way to email you like mdadm
does in case of drive failures/ejections, so you'll have to look at that
as well. Note that on the backend LVM RAID uses the md kernel driver just
like mdadm does; it's just the user space tools and on-disk metadata that
differ.
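
Roughly, the workflow looks something like this (just a sketch; the VG
name, devices, sizes, and LV names are all made up, so adjust to taste):

    # One partition per drive, all pooled into a single VG:
    pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    vgcreate bigvg /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Each LV gets its own RAID level:
    lvcreate --type raid1 -m 1 -L 100G -n home  bigvg
    lvcreate --type raid5 -i 3 -L 500G -n media bigvg

    # Scrub (the equivalent of an md check):
    lvchange --syncaction check bigvg/media

    # Replace a failing member of one LV with another PV in the VG:
    lvconvert --replace /dev/sdb1 bigvg/media /dev/sdd1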



-- 
Chris Murphy


Thread overview: 37+ messages
2016-08-02  7:36 Inactive arrays Daniel Sanabria
2016-08-02 10:17 ` Wols Lists
2016-08-02 10:45   ` Daniel Sanabria
2016-08-03 19:18     ` Daniel Sanabria
2016-08-03 21:31       ` Wols Lists
2016-09-11 18:48     ` Daniel Sanabria
2016-09-11 20:06       ` Daniel Sanabria
2016-09-12 19:41         ` Daniel Sanabria
2016-09-12 21:13           ` Daniel Sanabria
2016-09-12 21:37             ` Chris Murphy
2016-09-13  6:51               ` Daniel Sanabria
2016-09-13 15:04                 ` Chris Murphy
2016-09-12 21:39             ` Wols Lists
2016-09-13  6:56               ` Daniel Sanabria
2016-09-13  7:02                 ` Adam Goryachev
2016-09-13 15:20                 ` Chris Murphy
2016-09-13 19:43                   ` Daniel Sanabria
2016-09-13 19:52                     ` Chris Murphy
2016-09-13 20:04                       ` Daniel Sanabria
2016-09-13 20:13                         ` Chris Murphy
2016-09-13 20:29                           ` Daniel Sanabria
2016-09-13 20:36                           ` Daniel Sanabria
2016-09-13 21:10                             ` Chris Murphy
2016-09-13 21:46                               ` Daniel Sanabria
2016-09-13 21:26                             ` Wols Lists
2016-09-14  4:33                         ` Chris Murphy
2016-09-14 10:36                           ` Daniel Sanabria
2016-09-14 14:32                             ` Chris Murphy
2016-09-14 14:57                               ` Daniel Sanabria
2016-09-14 15:15                                 ` Chris Murphy
2016-09-14 15:47                                   ` Daniel Sanabria
2016-09-14 16:10                                     ` Chris Murphy
2016-09-14 16:13                                       ` Chris Murphy
2016-09-14 18:16                                         ` Daniel Sanabria
2016-09-14 18:37                                           ` Chris Murphy [this message]
2016-09-14 18:42                                           ` Wols Lists
2016-09-15  9:21                                             ` Brad Campbell
