* RAID 5 On Linux
@ 2003-07-25  0:54 c4c3m
  2003-07-25  1:05 ` Neil Brown
  0 siblings, 1 reply; 6+ messages in thread
From: c4c3m @ 2003-07-25  0:54 UTC (permalink / raw)
  To: linux-raid

Dear All,

Let me introduce myself: my name is Hendri, and I am an application
administrator at www.6221.net. I found your email address in the source
code; I tried to solve this myself, but it seems I must contact you
personally to ask for your help with my problem.

The problem: I installed RH 6.2 on an IBM Netfinity 5100 and it worked
fine for a year, but today six partitions stopped coming up. Four of them
contain Oracle data, and two more hold /usr (from now on, I think it is a
very bad idea to put /usr on RAID 5 :(( ). I discovered that /dev/md1 -
/dev/md6 won't start, and no error message for them shows up in
/proc/mdstat. I can't "reconfigure" them at all. What should I do to save
all that data? I don't have any backup.

The md modules are already loaded, because some arrays, for example md0,
are up and can be mounted (md0 as /), and /proc/mdstat shows information
for every array that loaded successfully, such as / -> md0 and
/home -> md9. The other md devices show no information at all.
Can I recover the arrays that won't come up?

Created md5
md: superblock update time inconsistency -- using the most recent one
md: kicking non-fresh sdc10 from array!
md: kicking non-fresh sdc10 from array!
md: md5: raid array is not clean -- starting background reconstruction
md5: max total readahead window set to 512k
raid5: not enough operational devices for md5 (2/3 failed)
raid5: failed to run raid set md5
do_md_run() returned -22
md5 stopped

/proc/mdstat

Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 sdb1[0] sda1[1]
      1566208 blocks [2/2] [UU]
md9 : active raid5 sdc4[2] sdc3[1] sdc1[0]
      995140 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md2 : active raid5 sdc7[2] sda7[1]
      1043968 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
md7 : active raid5 sdc12[2] sdb12[0] sda12[1]
      819072 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
md8 : active raid5 sdc13[2] sdb13[0] sda13[1]
      1268864 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
unused devices: <none>

But my /etc/fstab lists /dev/md0 through /dev/md9, yet nothing besides
md0, md9, md2, md7 and md8 appears in /proc/mdstat.

When I look at the source,

#define OUT_OF_DATE KERN_ERR \
"md: superblock update time inconsistency -- using the most recent one\n"

and the messages

md: kicking non-fresh sdc10 from array!
md: kicking non-fresh sdc10 from array!

point to this code:

	if (ev1 < ev2) {
		printk(KERN_WARNING "md: kicking non-fresh %s from array!\n",
		       partition_name(rdev->dev));
		kick_rdev_from_array(rdev);
		continue;
	}

Thanks for your attention.




-----------------------------------------
This email was sent using OkeMail.
 "Your Communication LifeStyle!"
      http://www.oke.net.id/




* Re: RAID 5 On Linux
  2003-07-25  0:54 RAID 5 On Linux c4c3m
@ 2003-07-25  1:05 ` Neil Brown
  0 siblings, 0 replies; 6+ messages in thread
From: Neil Brown @ 2003-07-25  1:05 UTC (permalink / raw)
  To: c4c3m; +Cc: linux-raid

On Friday July 25, c4c3m@oke.net.id wrote:
> Dear All,
> 
> Let me introduce myself: my name is Hendri, and I am an application
> administrator at www.6221.net. I found your email address in the source
> code; I tried to solve this myself, but it seems I must contact you
> personally to ask for your help with my problem.
>
> The problem: I installed RH 6.2 on an IBM Netfinity 5100 and it worked
> fine for a year, but today six partitions stopped coming up. Four of them
> contain Oracle data, and two more hold /usr (from now on, I think it is a
> very bad idea to put /usr on RAID 5 :(( ). I discovered that /dev/md1 -
> /dev/md6 won't start, and no error message for them shows up in
> /proc/mdstat. I can't "reconfigure" them at all. What should I do to save
> all that data? I don't have any backup.
>
> The md modules are already loaded, because some arrays, for example md0,
> are up and can be mounted (md0 as /), and /proc/mdstat shows information
> for every array that loaded successfully, such as / -> md0 and
> /home -> md9. The other md devices show no information at all.
> Can I recover the arrays that won't come up?

There isn't really enough information here to see what is happening.
Could you run
   mdadm -Eb /dev/sd*
and
   mdadm -E /dev/sd*

and send the output.

If you don't have mdadm, you can get it from
   http://www.kernel.org/pub/linux/utils/raid/mdadm/

NeilBrown

> 
> Created md5
> md: superblock update time inconsistency -- using the most recent one
> md: kicking non-fresh sdc10 from array!
> md: kicking non-fresh sdc10 from array!
> md: md5: raid array is not clean -- starting background reconstruction
> md5: max total readahead window set to 512k
> raid5: not enough operational devices for md5 (2/3 failed)
> raid5: failed to run raid set md5
> do_md_run() returned -22
> md5 stopped
> 
> /proc/mdstat
> 
> Personalities : [raid1] [raid5]
> read_ahead 1024 sectors
> md0 : active raid1 sdb1[0] sda1[1]
>       1566208 blocks [2/2] [UU]
> md9 : active raid5 sdc4[2] sdc3[1] sdc1[0]
>       995140 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
> md2 : active raid5 sdc7[2] sda7[1]
>       1043968 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
> md7 : active raid5 sdc12[2] sdb12[0] sda12[1]
>       819072 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
> md8 : active raid5 sdc13[2] sdb13[0] sda13[1]
>       1268864 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
> unused devices: <none>
> 
> But my /etc/fstab lists /dev/md0 through /dev/md9, yet nothing besides
> md0, md9, md2, md7 and md8 appears in /proc/mdstat.
> 
> When I look at the source,
>
> #define OUT_OF_DATE KERN_ERR \
> "md: superblock update time inconsistency -- using the most recent one\n"
>
> and the messages
>
> md: kicking non-fresh sdc10 from array!
> md: kicking non-fresh sdc10 from array!
>
> point to this code:
>
> 	if (ev1 < ev2) {
> 		printk(KERN_WARNING "md: kicking non-fresh %s from array!\n",
> 		       partition_name(rdev->dev));
> 		kick_rdev_from_array(rdev);
> 		continue;
> 	}
> 
> Thanks for your attention.
> 
> 
> 
> 
> 
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: RAID 5 On Linux
  2003-07-25  3:18     ` c4c3m
@ 2003-07-25  3:24       ` Neil Brown
  0 siblings, 0 replies; 6+ messages in thread
From: Neil Brown @ 2003-07-25  3:24 UTC (permalink / raw)
  To: c4c3m; +Cc: linux-raid

On Friday July 25, c4c3m@oke.net.id wrote:
> The version of mdadm I use is mdadm-1.2.0. I ran "make mdadm.static" and
> then "make install", but the binary still needs libraries when I run it on
> the faulty machine (I tried building mdadm-1.2.0 on a Debian 3.0 machine
> and on a Red Hat 9 machine). Any suggestions?

   make mdadm.static
makes a program called "mdadm.static";
   make install
installs "mdadm".

You have to run
   make mdadm.static

and then run the program called "mdadm.static" that was created.

NeilBrown


* Re: RAID 5 On Linux
  2003-07-25  2:21   ` Neil Brown
@ 2003-07-25  3:18     ` c4c3m
  2003-07-25  3:24       ` Neil Brown
  0 siblings, 1 reply; 6+ messages in thread
From: c4c3m @ 2003-07-25  3:18 UTC (permalink / raw)
  To: neilb; +Cc: c4c3m, linux-raid

The version of mdadm I use is mdadm-1.2.0. I ran "make mdadm.static" and
then "make install", but the binary still needs libraries when I run it on
the faulty machine (I tried building mdadm-1.2.0 on a Debian 3.0 machine
and on a Red Hat 9 machine). Any suggestions?

Thanks a lot.
Regards,

Hendri


> On Friday July 25, c4c3m@oke.net.id wrote:
>> I did what you said: I built mdadm on another machine and loaded it
>> onto the faulty machine, but another error shows up (mdadm:
>> /lib/libc.so.6: version 'GLIBC_2.2.3' not found (required by mdadm)).
>> I'm using Red Hat 6.2, and when I check, the file (libc.so.6) is
>> there. What should I do?
>> Note:
>> * I can't access the /usr and /var partitions.
>>
>
> You can build a statically linked mdadm with
>
>   make mdadm.static
>
> If you copy this across to your faulty machine, it should run without
> needing to find any libraries.
>
> NeilBrown







* Re: RAID 5 On Linux
  2003-07-25  2:14 ` c4c3m
@ 2003-07-25  2:21   ` Neil Brown
  2003-07-25  3:18     ` c4c3m
  0 siblings, 1 reply; 6+ messages in thread
From: Neil Brown @ 2003-07-25  2:21 UTC (permalink / raw)
  To: c4c3m; +Cc: linux-raid

On Friday July 25, c4c3m@oke.net.id wrote:
> I did what you said: I built mdadm on another machine and loaded it onto
> the faulty machine, but another error shows up (mdadm: /lib/libc.so.6:
> version 'GLIBC_2.2.3' not found (required by mdadm)).
> I'm using Red Hat 6.2, and when I check, the file (libc.so.6) is there.
> What should I do?
> Note:
> * I can't access the /usr and /var partitions.
> 

You can build a statically linked mdadm with

   make mdadm.static

If you copy this across to your faulty machine, it should run without
needing to find any libraries.

NeilBrown


* Re: RAID 5 On Linux
       [not found] <16160.34422.833142.148043@gargle.gargle.HOWL>
@ 2003-07-25  2:14 ` c4c3m
  2003-07-25  2:21   ` Neil Brown
  0 siblings, 1 reply; 6+ messages in thread
From: c4c3m @ 2003-07-25  2:14 UTC (permalink / raw)
  To: linux-raid

I did what you said: I built mdadm on another machine and loaded it onto
the faulty machine, but another error shows up (mdadm: /lib/libc.so.6:
version 'GLIBC_2.2.3' not found (required by mdadm)).
I'm using Red Hat 6.2, and when I check, the file (libc.so.6) is there.
What should I do?
Note:
* I can't access the /usr and /var partitions.


Thanks a lot.
Regards,

Hendri


> On Friday July 25, c4c3m@oke.net.id wrote:
>>
>> > There isn't really enough information here to see what is happening.
>> > Could you run
>> >   mdadm -Eb /dev/sd*
>> > and
>> >   mdadm -E /dev/sd*
>> >
>> > and send the output.
>> >
>> > If you don't have mdadm, you can get it from
>> >   http://www.kernel.org/pub/linux/utils/raid/mdadm/
>> >
>> > NeilBrown
>> I can't run mdadm -E /dev/sd* because I guess it lives in /var or
>> /usr, and the bad news is that /usr is on a RAID 5 partition, so I
>> can't access it.
>
> 1/ Please reply to the list as well.
>
> 2/ Do you have another computer that you can build mdadm on, then
> copy it onto a diskette and load it on the faulty computer?
>
> NeilBrown







end of thread, other threads:[~2003-07-25  3:24 UTC | newest]

Thread overview: 6 messages
2003-07-25  0:54 RAID 5 On Linux c4c3m
2003-07-25  1:05 ` Neil Brown
     [not found] <16160.34422.833142.148043@gargle.gargle.HOWL>
2003-07-25  2:14 ` c4c3m
2003-07-25  2:21   ` Neil Brown
2003-07-25  3:18     ` c4c3m
2003-07-25  3:24       ` Neil Brown
