linux-lvm.redhat.com archive mirror
From: "John Stoffel" <john@stoffel.org>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] raid10 with missing redundancy, but health status claims it is ok.
Date: Wed, 1 Jun 2022 17:58:31 -0400	[thread overview]
Message-ID: <25239.57607.208850.622855@quad.stoffel.home> (raw)
In-Reply-To: <617ddb5b-3993-8d9a-fac2-32f457077c60@syseleven.de>

>>>>> "Olaf" == Olaf Seibert <o.seibert@syseleven.de> writes:

Olaf> Replying to myself:
Olaf> On 30.05.22 10:16, Olaf Seibert wrote:
>> First, John, thanks for your reply.

Olaf> I contacted the customer and it turned out their VM's disk (this
Olaf> LV) was broken anyway. So there is no need any more to try to
Olaf> repair it...

So I'm not really surprised: when that disk died, it probably took out
their data, or at least a chunk of it.  Even though the array looked
like it kept running, the data likely got corrupted in a big way too.


So I think you guys need to re-architect your storage design.  If you
have paying customers on there, you should really be using MD with
RAID10, plus a hot spare disk, so that when a disk dies it can be
replaced automatically, even if it fails at 2am.  It's not cheap, but
neither is a customer losing data.
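As a rough sketch of that layout (device names /dev/sdb..sdf and the
alert address are placeholders, adjust for your hardware; assumes mdadm
is installed and you run as root):

```shell
# Create a 4-disk RAID10 array with one hot spare; MD pulls the
# spare in automatically when a member disk fails.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Persist the array definition, and set a mail address so the
# mdadm monitor can alert you about that 2am failure.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo "MAILADDR ops@example.com" >> /etc/mdadm/mdadm.conf

# Verify the spare is actually registered as a spare.
mdadm --detail /dev/md0 | grep -i spare
```

You can then put LVM on top of /dev/md0 as before; the redundancy and
rebuild handling lives in MD, where the failure reporting is solid.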

The other critical thing to do here is to make sure you're using disks
with proper SCT ERC (error recovery control) timeouts, so that when
they hit bad sectors they fail the request quickly instead of retrying
for minutes, blocking the system and causing outages.
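To check and set those timeouts you can use smartctl from
smartmontools (the 70 here means 7.0 seconds, a commonly used value,
not the only valid one; /dev/sdb is a placeholder):

```shell
# Read the current SCT Error Recovery Control setting.
smartctl -l scterc /dev/sdb

# Cap read and write error recovery at 7.0 seconds (the value is
# in tenths of a second), so a failing sector errors out quickly
# and MD can eject the disk instead of the whole box stalling.
smartctl -l scterc,70,70 /dev/sdb
```

Note that on many drives this setting does not survive a power cycle,
so you'll want to reapply it at boot, e.g. from a udev rule.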

Look back in the linux-raid mailing list archives for discussions on
this.

And of course I'd also try to setup a remote backup server with even
bigger disks, so that you can replicate customer data onto other
storage just in case.  
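One minimal way to do that replication (hostnames and paths are
placeholders; rsync over ssh from cron, with hardlink-based snapshots
via --link-dest):

```shell
# Pull customer data onto the backup host; unchanged files are
# hardlinked against the previous snapshot, so each run is cheap.
snap=/backup/customers/$(date +%F)
rsync -aHAX --delete \
      --link-dest=/backup/customers/latest \
      storage1:/srv/customers/ "$snap/"

# Point "latest" at the snapshot we just made.
ln -sfn "$snap" /backup/customers/latest
```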

Olaf> Thanks for your thoughts anyway.

Glad I could try to help.  I've been flat out busy with $WORK and am
only now following up here.  Sorry!


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Thread overview: 7+ messages  [~2022-06-01 21:58 UTC | newest]
2022-05-27 13:56 Olaf Seibert
2022-05-28 16:15 ` John Stoffel
2022-05-30  8:16   ` Olaf Seibert
2022-05-30  8:49     ` Olaf Seibert
2022-06-01 21:58       ` John Stoffel [this message]
2022-05-30 14:07     ` Demi Marie Obenour
2022-05-31 11:27       ` Olaf Seibert
