From: NeilBrown <neilb@suse.de>
To: Tejas Rao <raot@bnl.gov>
Cc: Scott Sinno <scott.sinno@nasa.gov>,
	linux-raid@vger.kernel.org, "Knister,
	Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP]"
	<aaron.s.knister@nasa.gov>
Subject: Re: clustered MD - beyond RAID1
Date: Tue, 22 Dec 2015 15:13:59 +1100	[thread overview]
Message-ID: <87wps72k8o.fsf@notabene.neil.brown.name> (raw)
In-Reply-To: <5678A908.6070401@bnl.gov>


On Tue, Dec 22 2015, Tejas Rao wrote:

> Each GPFS disk (block device) has a list of servers associated with it. 
> When the first storage server fails (expired disk lease), the storage 
> node is expelled and a different server which also sees the shared 
> storage will do I/O.

In that case something probably could be made to work with md/raid5
using much of the cluster support developed for md/raid1.

The raid5 module would take a cluster lock that covered some region of
the array and would not need to release it until a fail-over happened.
So there would be little performance penalty.

The simplest approach would be to lock the whole array.  This would
preclude the possibility of different partitions being accessed from
different nodes.  Maybe that is not a problem.  If it were, a solution
could probably be found, but there would be little point searching for
one before a clear need was presented.

>
> In the future, we would prefer to use linux raid (RAID6) in a shared
> environment shielding us against server failures. Unfortunately we can
> only do this after Redhat supports such an environment with linux raid.
> Currently they do not support this even in an active/passive environment
> (only one server can have a md device assembled and active regardless).

Obviously that is something you would need to discuss with Redhat.

Thanks,
NeilBrown

