* (unknown)
@ 2011-06-18 20:39 Dragon
2011-06-19 18:40 ` Phil Turmel
2011-06-21 14:46 ` Mail etiquette Phillip Susi
0 siblings, 2 replies; 3+ messages in thread
From: Dragon @ 2011-06-18 20:39 UTC (permalink / raw)
To: philip; +Cc: linux-raid
Monitor your background reshape with "cat /proc/mdstat".
When the reshape is complete, the extra disk will be marked "spare".
Then you can use "mdadm --remove".
--> after a few days the reshape was done and I took the disk out of the raid -> many thanks for that
> At this point I think I'll take the disk out of the raid, because I need the
> space on the disk.
Understood, but you are living on the edge. You have no backup, and only one drive
of redundancy. If one of your drives does fail, the odds of losing the whole array
while replacing it are significant. Your Samsung drives claim a non-recoverable read
error rate of 1 per 1x10^15 bits. Your eleven data disks contain 1.32x10^14 bits,
all of which must be read during rebuild. That means a _13%_ chance of total
failure while replacing a failed drive.
I hope your 16T of data is not terribly important to you, or is otherwise replaceable.
--> nice calculation, where did you get the data from?
--> most of it is important, I will look for a better solution
> I need another piece of advice from you. The computer is currently built with 13
> disks, I will get more data in the next months, and the limit of power-supply
> connectors is reached, so I am looking for another solution. One possibility is to
> build a better computer with more SATA and SAS connectors and add further RAID
> controller cards. Another idea is to build a kind of cluster or DFS with two and
> later 3, 4... computers. I read something about gluster.org. Do you have a tip for
> me, or experience with this?
Unfortunately, no. Although I skirt the edges in my engineering work, I'm primarily
an end-user. Both personal and work projects have relatively modest needs. From
the engineering side, I do recommend you spend extra on power supplies & UPS.
Phil
--> and then: ext4's maximum size is currently 16TB, what should I do?
--> for an end-user you have a lot of knowledge about swraid ;)
sunny
* Re:
From: Phil Turmel @ 2011-06-19 18:40 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid
Hi Dragon,
On 06/18/2011 04:39 PM, Dragon wrote:
> Monitor your background reshape with "cat /proc/mdstat".
>
> When the reshape is complete, the extra disk will be marked "spare".
>
> Then you can use "mdadm --remove".
> --> after a few days the reshape was done and I took the disk out of the raid -> many thanks for that
Good to hear.
>> At this point I think I'll take the disk out of the raid, because I need the
>> space on the disk.
>
> Understood, but you are living on the edge. You have no backup, and only one drive
> of redundancy. If one of your drives does fail, the odds of losing the whole array
> while replacing it are significant. Your Samsung drives claim a non-recoverable read
> error rate of 1 per 1x10^15 bits. Your eleven data disks contain 1.32x10^14 bits,
> all of which must be read during rebuild. That means a _13%_ chance of total
> failure while replacing a failed drive.
>
> I hope your 16T of data is not terribly important to you, or is otherwise replaceable.
> --> nice calculation, where did you get the data from?
> --> most of it is important, I will look for a better solution
The error rate is from Samsung, for your HD154UI drives:
http://www.samsung.com/latin_en/consumer/monitor-peripherals-printer/hard-disk-drives/internal/HD154UI/CKW/index.idx?pagetype=prd_detail&tab=specification
error rate = 1 / (1x10^15 bits) = 1x10^-15 per bit
The rest comes from your setup:
11 disks * (1465138496 * 1024) bytes/disk * 8 bits/byte = 1.32026560152e+14 bits
% odds of failure = (data quantity * error rate) * 100%
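Phil's figure is easy to reproduce. A minimal sketch, using the disk size and URE rate from this thread; the `1 - (1 - p)^n` form is the standard exact version of the linear `n * p` estimate above:

```python
import math

def rebuild_failure_probability(data_disks, disk_kib, ure_per_bit=1e-15):
    """Chance of hitting at least one unrecoverable read error (URE)
    while reading every bit of the data disks during a rebuild."""
    bits = data_disks * disk_kib * 1024 * 8      # total bits read
    # Exact model: 1 - (1 - p)^bits, computed stably with log1p/expm1.
    # For small p this reduces to the linear estimate bits * p used above.
    return -math.expm1(bits * math.log1p(-ure_per_bit))

# 11 data disks of 1465138496 KiB each (Samsung HD154UI), URE 1 per 10^15 bits:
p = rebuild_failure_probability(11, 1465138496)
print(f"{p * 100:.1f}%")   # exact model ~12.4%; the linear n*p estimate is ~13.2%
```

The ~13% quoted above is the linear approximation; the exact model comes out slightly lower because it does not double-count overlapping error events.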
[...]
> --> and then: ext4's maximum size is currently 16TB, what should I do?
I've been playing with XFS. The only significant maintenance drawback I've identified is that it cannot be shrunk. Not even offline. It's not really holding me back, though, as I tend to layer LVM on top of my raid arrays, then allocate to specific volumes. I always hold back a substantial fraction of the space for future use of "lvextend".
> --> for an end-user you have a lot of knowledge about swraid ;)
Thank you. I was a geek before I became an engineer :) .
Phil
* Mail etiquette
From: Phillip Susi @ 2011-06-21 14:46 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid
You keep creating new threads with no subject when you reply. This is
rude and annoying. When replying to a message you should be using your
mail client's reply function, rather than start a new message and paste
in your quotations. This should preserve the subject line and add the
proper In-Reply-To: header so that the message is properly sorted into
the existing thread. If you have been using your mail client's reply
function instead of composing a new message, then your mail client is
broken, so please use another one.
On 6/18/2011 4:39 PM, Dragon wrote:
> Monitor your background reshape with "cat /proc/mdstat".
>
> When the reshape is complete, the extra disk will be marked "spare".
>
> Then you can use "mdadm --remove".
> --> after a few days the reshape was done and I took the disk out of the raid -> many thanks for that
>
>> At this point I think I'll take the disk out of the raid, because I need the
>> space on the disk.
>
> Understood, but you are living on the edge. You have no backup, and only one drive
> of redundancy. If one of your drives does fail, the odds of losing the whole array
> while replacing it are significant. Your Samsung drives claim a non-recoverable read
> error rate of 1 per 1x10^15 bits. Your eleven data disks contain 1.32x10^14 bits,
> all of which must be read during rebuild. That means a _13%_ chance of total
> failure while replacing a failed drive.
>
> I hope your 16T of data is not terribly important to you, or is otherwise replaceable.
> --> nice calculation, where did you get the data from?
> --> most of it is important, I will look for a better solution
>
>> I need another piece of advice from you. The computer is currently built with 13
>> disks, I will get more data in the next months, and the limit of power-supply
>> connectors is reached, so I am looking for another solution. One possibility is to
>> build a better computer with more SATA and SAS connectors and add further RAID
>> controller cards. Another idea is to build a kind of cluster or DFS with two and
>> later 3, 4... computers. I read something about gluster.org. Do you have a tip for
>> me, or experience with this?
>
> Unfortunately, no. Although I skirt the edges in my engineering work, I'm primarily
> an end-user. Both personal and work projects have relatively modest needs. From
> the engineering side, I do recommend you spend extra on power supplies & UPS.
>
> Phil
> --> and then: ext4's maximum size is currently 16TB, what should I do?
> --> for an end-user you have a lot of knowledge about swraid ;)
> sunny