linux-raid.vger.kernel.org archive mirror
* (unknown)
@ 2020-07-22  5:32 Darlehen Bedienung
  0 siblings, 0 replies; 152+ messages in thread
From: Darlehen Bedienung @ 2020-07-22  5:32 UTC (permalink / raw)




Good day. We are reliable, trustworthy lenders. We offer loans to companies and private individuals at a low and favourable interest rate of 2%. Are you looking for a business loan, a personal loan, debt consolidation, an unsecured loan, or venture capital? Contact us with your name, country, loan amount, duration and telephone number. Regards, Mr DA COSTA DARREN FAY

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2020-06-27 21:58 lookman joe
  0 siblings, 0 replies; 152+ messages in thread
From: lookman joe @ 2020-06-27 21:58 UTC (permalink / raw)


MONEY-GRAM TRANSFERRED PAYMENT INFO:

Below is the sender’s information



1. MG. REFERENCE NO#: 36360857

2. SENDER'S NAME: Johnson Williams

3. AMOUNT TO PICKUP: US$10,000



Go to any MoneyGram office near you and pick up the payment. Track the
reference number by visiting the link below
(https://secure.moneygram.com/embed/track) and entering the Reference
Number: 36360857 and the Last Name: Williams; you will find the payment
available for pickup instantly.

Yours Sincerely,

Mrs. Helen Marvis
United Nations Liaison Office
Directorate for International Payments

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2020-06-04 19:57 David Shine
  0 siblings, 0 replies; 152+ messages in thread
From: David Shine @ 2020-06-04 19:57 UTC (permalink / raw)
  To: linux

 Linux


https://clck.ru/NnuZT



David Shine

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2020-03-17  0:11 David Ibe
  0 siblings, 0 replies; 152+ messages in thread
From: David Ibe @ 2020-03-17  0:11 UTC (permalink / raw)




Good Day,                

I am Mr. David Ibe. I work with the International Standards on Auditing. I have seen in the records that, several times, people have diverted your funds into their own personal accounts.

Now I am writing to you in respect of the amount which I have been able to send to you through our International United Nations accredited and approved diplomat, who has arrived in Africa. I want you to know that the diplomat will deliver the funds, which I have packaged as diplomatic compensation to you, and the amount in the consignment is $10,000,000.00 United States Dollars.

I did not disclose the contents to the diplomat, but I told him that it is your compensation from the Auditing Corporate Governance and Stewardship, Auditing and Assurance Standards Board. I want you to know that these funds would help with your financial status, as I have seen in the records that you have spent a lot trying to receive these funds, and I am not demanding much from you, only 30% for my stress and logistics.

I would like you to get back to me with your personal contact details, so that I can give you the contact information of the diplomat who has arrived in Africa and has been waiting to get your details so that he can proceed with the delivery to you.

Yours Sincerely,
Kindly forward your details to: mrdavidibe966@gmail.com
Mr. David Ibe
International Auditor,
Corporate Governance and Stewardship

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2020-03-09  7:37 Michael J. Weirsky
  0 siblings, 0 replies; 152+ messages in thread
From: Michael J. Weirsky @ 2020-03-09  7:37 UTC (permalink / raw)




-- 
My name is Michael J. Weirsky, I'm an unemployed handyman and the winner of 
the $273 million jackpot on March 8, 2019. I am donating $1,000,000.00 to you. 
Contact me via email: micjsky@aol.com for info / claim.
Continue reading: 
https://abcnews.go.com/WNT/video/jersey-handyman-forward-273m-lottery-winner-61544244

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2020-03-05 10:46 Juanito S. Galang
  0 siblings, 0 replies; 152+ messages in thread
From: Juanito S. Galang @ 2020-03-05 10:46 UTC (permalink / raw)




Congratulations, dear beneficiary. You are receiving this email from the Robert Bailey Foundation. I am a retired government employee from Harlem and a winner of the Powerball lottery jackpot worth 343.8 million US dollars. I am the biggest jackpot winner in the history of the New York Lottery in the United States of America. I won this lottery on 27 October 2018 and would like to inform you that Google, in cooperation with Microsoft, submitted your email address at my request to receive a donation of 3,000,000.00 euros. I am donating these 3 million euros to you to help the charity homes and poor people in your community, so that we can make the world better for everyone. You can find more information on the following website, so that you are not skeptical about this donation of EUR 3 million: https://nypost.com/2018/11/14/meet-the-winner-of-the-biggest-lottery-jackpot-in-new-york-history/
You can also watch my YouTube for further confirmation:
https://www.youtube.com/watch?v=H5vT18Ysavc
Please note that all replies should be sent to (robertdonation7@gmail.com) so that we can proceed with transferring the donated money to you. Email: robertdonation7@gmail.com
Kind regards,
Robert Bailey
* * * * * * * * * * * * * * * *
Powerball Jackpot Winner

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-09-02  2:39 een
  0 siblings, 0 replies; 152+ messages in thread
From: een @ 2017-09-02  2:39 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 8088665.doc --]
[-- Type: application/msword, Size: 40147 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-08-22 13:31 vinnakota chaitanya
  0 siblings, 0 replies; 152+ messages in thread
From: vinnakota chaitanya @ 2017-08-22 13:31 UTC (permalink / raw)
  To: linux raid

Greetings Linux

http://www.curet.in/pop_messengers.php?sense=rkwy2e7qh97gty3bz




vinnakota chaitanya

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-08-16  2:03 xa0ajutor
  0 siblings, 0 replies; 152+ messages in thread
From: xa0ajutor @ 2017-08-16  2:03 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 522025194.zip --]
[-- Type: application/zip, Size: 3043 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-08-15 14:45 een
  0 siblings, 0 replies; 152+ messages in thread
From: een @ 2017-08-15 14:45 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 4863169031.zip --]
[-- Type: application/zip, Size: 3067 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-08-08 19:40 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-08-08 19:40 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 4143572985.zip --]
[-- Type: application/zip, Size: 2790 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-08-01 14:53 Angela H. Whiteman
  0 siblings, 0 replies; 152+ messages in thread
From: Angela H. Whiteman @ 2017-08-01 14:53 UTC (permalink / raw)






There's an unclaimed inheritance with your last name. Reply to abailey456789@gmail.com with your full names.

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-08-01  1:35 xa0ajutor
  0 siblings, 0 replies; 152+ messages in thread
From: xa0ajutor @ 2017-08-01  1:35 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_2558300_linux-raid.zip --]
[-- Type: application/zip, Size: 2603 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-27  5:01 hp
  0 siblings, 0 replies; 152+ messages in thread
From: hp @ 2017-07-27  5:01 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_86618341708_linux-raid.zip --]
[-- Type: application/zip, Size: 2744 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-26 20:45 een
  0 siblings, 0 replies; 152+ messages in thread
From: een @ 2017-07-26 20:45 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_87861780008_linux-raid.zip --]
[-- Type: application/zip, Size: 2705 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-25 20:01 hp
  0 siblings, 0 replies; 152+ messages in thread
From: hp @ 2017-07-25 20:01 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_89826583725_linux-raid.zip --]
[-- Type: application/zip, Size: 5777 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-18  4:32 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-07-18  4:32 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: "EMAIL_40199138625_linux-raid.zip --]
[-- Type: application/zip, Size: 186 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-17 21:54 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-07-17 21:54 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: "EMAIL_976833055_linux-raid.zip --]
[-- Type: application/zip, Size: 3245 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-06 14:11 een
  0 siblings, 0 replies; 152+ messages in thread
From: een @ 2017-07-06 14:11 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_938012525_linux-raid.zip --]
[-- Type: application/zip, Size: 4285 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-05 21:18 een
  0 siblings, 0 replies; 152+ messages in thread
From: een @ 2017-07-05 21:18 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_3767374_linux-raid.zip --]
[-- Type: application/zip, Size: 5044 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-04  8:52 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-07-04  8:52 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_77904176_linux-raid.zip --]
[-- Type: application/zip, Size: 3164 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-07-04  6:01 xa0ajutor
  0 siblings, 0 replies; 152+ messages in thread
From: xa0ajutor @ 2017-07-04  6:01 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_56923235589997_linux-raid.zip --]
[-- Type: application/zip, Size: 3176 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-26 22:14 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-06-26 22:14 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_9158645_linux-raid.zip --]
[-- Type: application/zip, Size: 3408 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-25 18:13 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-06-25 18:13 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_7883405_linux-raid.zip --]
[-- Type: application/zip, Size: 3510 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-24  0:35 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-06-24  0:35 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_77134398_linux-raid.zip --]
[-- Type: application/zip, Size: 3531 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-23  2:49 mdavis
  0 siblings, 0 replies; 152+ messages in thread
From: mdavis @ 2017-06-23  2:49 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 5600669634007.zip --]
[-- Type: application/zip, Size: 5665 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-20  6:29 xa0ajutor
  0 siblings, 0 replies; 152+ messages in thread
From: xa0ajutor @ 2017-06-20  6:29 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 3786494.zip --]
[-- Type: application/zip, Size: 5133 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-18 14:27 xa0ajutor
  0 siblings, 0 replies; 152+ messages in thread
From: xa0ajutor @ 2017-06-18 14:27 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 49828587.zip --]
[-- Type: application/zip, Size: 2024 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-09  4:30 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-06-09  4:30 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 846894449555915.zip --]
[-- Type: application/zip, Size: 3190 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-06 23:46 mdavis
  0 siblings, 0 replies; 152+ messages in thread
From: mdavis @ 2017-06-06 23:46 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 37913653393087.zip --]
[-- Type: application/zip, Size: 4706 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-06-05  4:30 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-06-05  4:30 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 721224187.zip --]
[-- Type: application/zip, Size: 3190 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-05-23  2:19 mdavis
  0 siblings, 0 replies; 152+ messages in thread
From: mdavis @ 2017-05-23  2:19 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 6775563555.zip --]
[-- Type: application/zip, Size: 3184 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-05-20 20:00 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-05-20 20:00 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 39874.zip --]
[-- Type: application/zip, Size: 2821 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-05-19 14:51 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-05-19 14:51 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 128734285588468.zip --]
[-- Type: application/zip, Size: 2883 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-05-18 13:40 hp
  0 siblings, 0 replies; 152+ messages in thread
From: hp @ 2017-05-18 13:40 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 2518423.zip --]
[-- Type: application/zip, Size: 4661 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-04-19 20:46 hp
  0 siblings, 0 replies; 152+ messages in thread
From: hp @ 2017-04-19 20:46 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_7992249_linux-raid.zip --]
[-- Type: application/zip, Size: 1431 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-04-13 15:58 Scott Ellentuch
  0 siblings, 0 replies; 152+ messages in thread
From: Scott Ellentuch @ 2017-04-13 15:58 UTC (permalink / raw)
  To: linux-raid

for disk in a b c d g h i j k l m n
do

  disklist="${disklist} /dev/sd${disk}1"

done

mdadm --create --verbose /dev/md2 --level=5 --raid=devices=12  ${disklist}

But it's telling me:

mdadm: invalid number of raid devices: devices=12


I can't find any definition of a limit anywhere.

Thank you, Tuc
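
The message most likely comes from the flag spelling rather than from any device-count limit: written as --raid=devices=12, mdadm receives the literal string "devices=12" as the device count, which is not a number. A minimal sketch of the corrected invocation, assuming the same 12-partition list built above:

mdadm --create --verbose /dev/md2 --level=5 --raid-devices=12 ${disklist}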

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-04-10  3:30 hp
  0 siblings, 0 replies; 152+ messages in thread
From: hp @ 2017-04-10  3:30 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: 7718637436266_linux-raid.zip --]
[-- Type: application/zip, Size: 3603 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-01-22 20:23 citydesk
  0 siblings, 0 replies; 152+ messages in thread
From: citydesk @ 2017-01-22 20:23 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_310444915_linux-raid.zip --]
[-- Type: application/zip, Size: 15022 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2017-01-21 23:57 hp
  0 siblings, 0 replies; 152+ messages in thread
From: hp @ 2017-01-21 23:57 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: EMAIL_51532170_linux-raid.zip --]
[-- Type: application/zip, Size: 58041 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
  2017-01-13 10:46   ` [PATCH v3 0/8] " Nicolas Dichtel
@ 2017-01-13 15:36     ` David Howells
  0 siblings, 0 replies; 152+ messages in thread
From: David Howells @ 2017-01-13 15:36 UTC (permalink / raw)
  To: Nicolas Dichtel
  Cc: dhowells, arnd, linux-mips, linux-m68k, linux-ia64, linux-doc,
	alsa-devel, dri-devel, linux-mtd, sparclinux, linux-arch,
	linux-s390, linux-am33-list, linux-c6x-dev, linux-rdma,
	linux-hexagon, linux-sh, linux, coreteam, fcoe-devel, xen-devel,
	linux-snps-arc, linux-media, uclinux-h8-devel, linux-xtensa,
	linux-kbuild, adi-buildroot-devel

Nicolas Dichtel <nicolas.dichtel@6wind.com> wrote:

> This header file is exported, thus move it to uapi.

Exported how?

> +#ifdef __INT32_TYPE__
> +#undef __INT32_TYPE__
> +#define __INT32_TYPE__		int
> +#endif
> +
> +#ifdef __UINT32_TYPE__
> +#undef __UINT32_TYPE__
> +#define __UINT32_TYPE__	unsigned int
> +#endif
> +
> +#ifdef __UINTPTR_TYPE__
> +#undef __UINTPTR_TYPE__
> +#define __UINTPTR_TYPE__	unsigned long
> +#endif

These weren't defined by the kernel before, so why do we need to define them
now?

Will defining __UINTPTR_TYPE__ cause problems in compiling libboost by
changing the signature on C++ functions that use uintptr_t?

David

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2016-12-20  8:38 Jinpu Wang
  0 siblings, 0 replies; 152+ messages in thread
From: Jinpu Wang @ 2016-12-20  8:38 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid, Shaohua Li, Nate Dailey

Hi Neil,

On Mon, Dec 19, 2016 at 11:45 PM, NeilBrown <neilb@suse.com> wrote:
> On Mon, Dec 19 2016, Jinpu Wang wrote:
>
>> Hi Neil,
>>
>> After applying the patch below, it panicked during boot in
>> generic_make_request -> bio_list_pop.
>> It looks related to calling bio_list_init(&bio_list_on_stack) again.
>>> diff --git a/block/blk-core.c b/block/blk-core.c
>>> index 14d7c0740dc0..3436b6fc3ef8 100644
>>> --- a/block/blk-core.c
>>> +++ b/block/blk-core.c
>>> @@ -2036,10 +2036,31 @@ blk_qc_t generic_make_request(struct bio *bio)
>>>                 struct request_queue *q = bdev_get_queue(bio->bi_bdev);
>>>
>>>                 if (likely(blk_queue_enter(q, false) == 0)) {
>>> +                       struct bio_list hold;
>>> +                       struct bio_list lower, same;
>>> +
>>> +                       /* Create a fresh bio_list for all subordinate requests */
>>> +                       bio_list_merge(&hold, &bio_list_on_stack);
>
> This is the problem.  'hold' hasn't been initialised.
> We could either do:
>   bio_list_init(&hold);
>   bio_list_merge(&hold, &bio_list_on_stack);
I did try the first variant; it leads to a panic in bio_check_pages_dirty:

PID: 4004   TASK: ffff8802337f3400  CPU: 1   COMMAND: "fio"
 #0 [ffff88023ec838d0] machine_kexec at ffffffff8104075a
 #1 [ffff88023ec83918] crash_kexec at ffffffff810d54c3
 #2 [ffff88023ec839e0] oops_end at ffffffff81008784
 #3 [ffff88023ec83a08] no_context at ffffffff8104a8f6
 #4 [ffff88023ec83a60] __bad_area_nosemaphore at ffffffff8104abcf
 #5 [ffff88023ec83aa8] bad_area_nosemaphore at ffffffff8104ad3e
 #6 [ffff88023ec83ab8] __do_page_fault at ffffffff8104afd7
 #7 [ffff88023ec83b10] do_page_fault at ffffffff8104b33c
 #8 [ffff88023ec83b20] page_fault at ffffffff818173a2
    [exception RIP: bio_check_pages_dirty+65]
    RIP: ffffffff813f6221  RSP: ffff88023ec83bd8  RFLAGS: 00010212
    RAX: 0000000000000020  RBX: ffff880232d75010  RCX: 0000000000000001
    RDX: ffff880232d74000  RSI: 0000000000000000  RDI: 0000000000000000
    RBP: ffff88023ec83bf8   R8: 0000000000000001   R9: 0000000000000000
    R10: ffffffff81f25ac0  R11: ffff8802348acef0  R12: 0000000000000001
    R13: 0000000000000000  R14: ffff8800b53b7d00  R15: ffff88009704d180
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff88023ec83c00] dio_bio_complete at ffffffff811d010e
#10 [ffff88023ec83c38] dio_bio_end_aio at ffffffff811d0367
#11 [ffff88023ec83c68] bio_endio at ffffffff813f637a
#12 [ffff88023ec83c80] call_bio_endio at ffffffffa0868220 [raid1]
#13 [ffff88023ec83cc8] raid_end_bio_io at ffffffffa086885b [raid1]
#14 [ffff88023ec83cf8] raid1_end_read_request at ffffffffa086a184 [raid1]
#15 [ffff88023ec83d50] bio_endio at ffffffff813f637a
#16 [ffff88023ec83d68] blk_update_request at ffffffff813fdab6
#17 [ffff88023ec83da8] blk_mq_end_request at ffffffff81406dfe



> or just
>   hold = bio_list_on_stack;
>
>
> You didn't find 'hold' to be necessary in your testing, but I think that
> is more complex arrangements it could make an important difference.

Could you elaborate a bit more? From my understanding, in the later
logic we traverse the whole bio_list_on_stack and sort it into either
the lower or the same bio_list; merging the initial bio_list_on_stack
again would lead to duplicated bios, wouldn't it?

>
> Thanks,
> NeilBrown

Thanks
Jinpu
>
>
>>> +                       bio_list_init(&bio_list_on_stack); ??? maybe init hold, and then merge bio_list_on_stack?
>>>                         ret = q->make_request_fn(q, bio);
>>>
>>>                         blk_queue_exit(q);



-- 
Jinpu Wang
Linux Kernel Developer

ProfitBricks GmbH
Greifswalder Str. 207
D - 10405 Berlin

Tel:       +49 30 577 008  042
Fax:      +49 30 577 008 299
Email:    jinpu.wang@profitbricks.com
URL:      https://www.profitbricks.de

Sitz der Gesellschaft: Berlin
Registergericht: Amtsgericht Charlottenburg, HRB 125506 B
Geschäftsführer: Achim Weiss

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2016-12-18  0:32 linux-raid
  0 siblings, 0 replies; 152+ messages in thread
From: linux-raid @ 2016-12-18  0:32 UTC (permalink / raw)
  To: linux-raid; +Cc: pxni, 8886670, gxizg, 95950137125, znnq, nmbf, 89550912

[-- Attachment #1: ORDER-816339228.zip --]
[-- Type: application/zip, Size: 16281 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2016-11-06 21:00 Dennis Dataopslag
  0 siblings, 0 replies; 152+ messages in thread
From: Dennis Dataopslag @ 2016-11-06 21:00 UTC (permalink / raw)
  To: linux-raid

Help wanted very much!

My setup:
Thecus N5550 NAS with 5 1TB drives installed.

MD0: RAID 5 config of 4 drives (SD[ABCD]2)
MD10: RAID 1 config of all 5 drives (SD..1), system generated array
MD50: RAID 1 config of 4 drives (SD[ABCD]3), system generated array

1 drive (SDE) set as global hot spare.


What happened:
This weekend I thought it might be a good idea to do a SMART test for
the drives in my NAS.
I started the test on 1 drive and after it ran for a while I started
the other ones.
While the test was running drive 3 failed. I got a message the RAID
was degraded and started rebuilding. (My assumption is that at this
moment the global hot spare will automatically be added to the array)

I stopped the SMART tests on all drives at this point, since it seemed
logical to me that the SMART test (or its outcome) had made the drive fail.
While stopping the tests, drive 1 also failed!!
I left it alone for a little while, but the admin interface kept telling me
it was degraded and did not seem to take any action to start rebuilding.
At this point I started googling and found I should remove and reseat
the drives. This is what I did, but nothing seemed to happen.
They turned up as new drives in the admin interface and I re-added them
to the array; they were added as spares.
Even after adding them the array didn't start rebuilding.
I checked the state in mdadm and it told me clean, FAILED, as opposed to
the degraded shown in the admin interface.

I rebooted the NAS since it didn't seem to be doing anything I might interrupt.
After rebooting, it seemed as if the entire array had disappeared!!
I started looking for options in mdadm and tried every "normal" option
to rebuild the array (--assemble --scan, for example).
Unfortunately I cannot produce a complete list, since I cannot find how
to get it from the logging.

Finally I used mdadm --create to make a new array with the original 4 drives
and all the right settings (I got them from one of the original volumes).
The creation worked, but afterwards the array doesn't seem to have a valid
partition table. This is the point where I realized I had probably fucked
it up big-time and should call in the help squad!!!
What I think went wrong is that I re-created the array with the original
4 drives from before the first failure, but the hot spare had already
been added?

The most important data from the array is saved in an offline backup
luckily but I would very much like it if there is any way I could
restore the data from the array.

Is there any way I could get it back online?
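
A cautious first step, before touching a set of drives in this state any further, is to capture the metadata every member still carries; a sketch, with device names guessed from the description above (the four data partitions plus the former hot spare):

for d in /dev/sd[abcde]2; do
    mdadm --examine "$d"    # prints Events, Update Time, Data Offset, device role
done > md-examine.txt 2>&1
# Comparing those fields across members shows which drives still agree,
# which matters before any further --assemble --force or re-create attempt.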

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2016-06-05 12:28 Vikas Aggarwal
  0 siblings, 0 replies; 152+ messages in thread
From: Vikas Aggarwal @ 2016-06-05 12:28 UTC (permalink / raw)
  To: linux-raid

Hello All,

I would appreciate it if someone could clear up my doubts regarding XOR/GF
offload for raid5/6.

1) What is the purpose of the device_prep_dma_interrupt callback?

2) My driver currently polls to check for XOR completions and doesn't
implement the device_prep_dma_interrupt callback at all. What performance
variation can I expect by implementing this callback in my async_tx driver?

3) Purpose of DMA_ACK: as I read it, it is for higher layers to inform the
DMA driver that descriptors can now be freed. Can someone explain this with
an example as it applies to raid5/6 clients?

4) With an example: why is dma_run_dependencies(tx) needed after the
hardware engine posts completion for a descriptor?

5) Purpose of tx->callback(cb_arg), again with an example from a raid5/6
offload perspective.

Goal: I want to use the offload engine efficiently with the recent
multithreaded raid5/6 code.

I tried to dig through the code and linux/Documentation, but I am not
thoroughly clear on the functionality.

Thanks & Regards
Vikas Aggarwal

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-11-24  7:23 Jaime M Towns-18128
  0 siblings, 0 replies; 152+ messages in thread
From: Jaime M Towns-18128 @ 2015-11-24  7:23 UTC (permalink / raw)






Your RSA email has won R1.5M in the 2015 Rugby World Championship Games with E-Ticket ENG/RWC/SA/SBK/09/2015. Contact phnath.rwc2015@gmail.com or 00447937387888. T's and C's apply!

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-11-05 16:49 o1bigtenor
  0 siblings, 0 replies; 152+ messages in thread
From: o1bigtenor @ 2015-11-05 16:49 UTC (permalink / raw)
  To: Linux-RAID

help

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-08-20  7:12 Mark Singer
  0 siblings, 0 replies; 152+ messages in thread
From: Mark Singer @ 2015-08-20  7:12 UTC (permalink / raw)





Do you need an investor?
Our investors fund projects and businesses. We also give out loans/credit to any individual or company at a 3% yearly interest rate. For more information, contact us via email: devonfps@gmail.com

If you need an investor or quick funding, forward your response ONLY to this e-mail: devonfps@gmail.com
....
Do you need an investor?
Our investors fund projects and businesses. We also give loans/credit to any individual or company at 3% interest per year. For more information, contact us by email: devonfps@gmail.com

If you need an investor or quick funding, send your reply only to this e-mail: devonfps@gmail.com

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-07-01 11:53 Sasnett_Karen
  0 siblings, 0 replies; 152+ messages in thread
From: Sasnett_Karen @ 2015-07-01 11:53 UTC (permalink / raw)





Do you need an investor?

Do you need a business or personal loan?

We give loans to individuals and companies at 3% interest per year. For more information, contact us by email: omfcreditspa@hotmail.com

NOTE: Send your reply only to this email: omfcreditspa@hotmail.com

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-03-12 11:49 pepa6.es
  0 siblings, 0 replies; 152+ messages in thread
From: pepa6.es @ 2015-03-12 11:49 UTC (permalink / raw)


Proposal,

Respond to my personal email: mrs.zhangxiao1962@outlook.com


Yours Sincerely.
Mrs. Zhang Xiao (Accounts Book Keeper)
Angang Steel Company Limited
396 Nan Zhong Hua Lu, Tie Dong District, Anshan,
Liaoning 114021, China.


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-02-18 19:42 DeadManMoving
  0 siblings, 0 replies; 152+ messages in thread
From: DeadManMoving @ 2015-02-18 19:42 UTC (permalink / raw)
  To: linux-raid

unsubscribe linux-raid


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2015-02-10 23:48 Kyle Logue
  0 siblings, 0 replies; 152+ messages in thread
From: Kyle Logue @ 2015-02-10 23:48 UTC (permalink / raw)
  To: linux-raid

Phil:

I figured out that I could echo the larger timeout value into
/sys/block/sde/device/timeout, but when I ran the assemble again I got
a new error right at the very beginning:

mdadm: no RAID superblock on /dev/sdc1
mdadm: /dev/sdc1 has no superblock - assembly aborted

At this point should I try to ddrescue this device to a new 2TB drive
and then retry the assemble? The sdc device is still marked as a raid
member in the 'Disks' dialog, but is clearly having problems.

Thanks for your help,
Kyle L


On Tue, Feb 10, 2015 at 8:51 AM, Phil Turmel <philip@turmel.org> wrote:
> Hi Kyle,
>
> Your symptoms look like classic timeout mismatch.  Details interleaved.
>
> On 02/10/2015 02:35 AM, Adam Goryachev wrote:
>
>> There are other people who will jump in and help you with your problem,
>> but I'll add a couple of pointers while you are waiting. See below.
>
>> On 10/02/15 15:20, Kyle Logue wrote:
>>> Hey all:
>>>
>>> I have a 5 disk software raid5 that was working fine until I decided
>>> to swap out an old disk with a new one.
>>>
>>> mdadm /dev/md0 --add /dev/sda1
>>> mdadm /dev/md0 --fail /dev/sde1
>
> As Adam pointed out, you should have used --replace, but you probably
> wouldn't have made it through the replace function anyways.
>
>>> At this point it started automatically rebuilding the array.
>>> About 60%? of the way in it stops and I see a lot of this repeated in
>>> my dmesg:
>>>
>>> [Mon Feb  9 18:06:48 2015] ata5.00: exception Emask 0x0 SAct 0x0 SErr
>>> 0x0 action 0x6 frozen
>>> [Mon Feb  9 18:06:48 2015] ata5.00: failed command: SMART
>>> [Mon Feb  9 18:06:48 2015] ata5.00: cmd
>>> b0/da:00:00:4f:c2/00:00:00:00:00/00 tag 7
>>> [Mon Feb  9 18:06:48 2015]          res
>>> 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
>                                                  ^^^^^^^^^
> Smoking gun.
>
>>> [Mon Feb  9 18:06:48 2015] ata5.00: status: { DRDY }
>>> [Mon Feb  9 18:06:48 2015] ata5: hard resetting link
>>> [Mon Feb  9 18:06:58 2015] ata5: softreset failed (1st FIS failed)
>>> [Mon Feb  9 18:06:58 2015] ata5: hard resetting link
>>> [Mon Feb  9 18:07:08 2015] ata5: softreset failed (1st FIS failed)
>>> [Mon Feb  9 18:07:08 2015] ata5: hard resetting link
>>> [Mon Feb  9 18:07:12 2015] ata5: SATA link up 1.5 Gbps (SStatus 113
>>> SControl 310)
>>> [Mon Feb  9 18:07:12 2015] ata5.00: configured for UDMA/33
>>> [Mon Feb  9 18:07:12 2015] ata5: EH complete
>
> Notice that after a timeout error, the drive is unresponsive for several
> more seconds -- about 24 in your case.
>
>> ....  read about timing mismatches
>> between the kernel and the hard drive, and how to solve that. There was
>> another post earlier today with some links to specific posts that will
>> be helpful (check the online archive).
>
> That would have been me.  Start with this link for a description of what
> you are experiencing:
>
> http://marc.info/?l=linux-raid&m=135811522817345&w=1
>
> First, you need to protect yourself from timeout mismatch due to the use
> of desktop-grade drives.  (Enterprise and raid-rated drives don't have
> this problem.)
>
> { If you were stuck in the middle of a replace and you had just
> worked around your timeout problem, it would likely continue and
> complete.  You've lost that opportunity. }
>
> Show us the output of "smartctl -x" for all of your drives if you'd like
> advice on your particular drives.  (Pasted inline is preferred.)
>
> Second, you need to find and overwrite (with zeros) the bad sectors on
> your drives.  Or ddrescue to a complete set of replacement drives and
> assemble those.
>
> Third, you need to set up a cron job to scrub your array regularly to
> clean out UREs before they accumulate beyond MD's ability to handle it
> (20 read errors in an hour, 10 per hour sustained).
>
> Phil
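
A minimal sketch of the two mitigations Phil describes; the device names (sd[a-e], md0) are assumptions, and neither the sysfs timeout nor the ERC setting survives a reboot, so both are usually scripted:

# Raise the kernel's command timer for desktop drives without working SCT ERC:
for d in /sys/block/sd[a-e]/device/timeout; do echo 180 > "$d"; done
# Or, on drives that do support SCT ERC, cap error recovery at 7 seconds:
smartctl -l scterc,70,70 /dev/sda
# Regular scrub, e.g. from a monthly cron job (Debian ships 'checkarray' for this):
echo check > /sys/block/md0/md/sync_action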

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2014-11-30 13:54 Mathias Burén
  0 siblings, 0 replies; 152+ messages in thread
From: Mathias Burén @ 2014-11-30 13:54 UTC (permalink / raw)
  To: Linux-RAID

Hi list,

Should I be worried? I'm not seeing this often, and ata6 seems to be
healthy. Message:

[236279.768693] INFO: task transmission-da:1050 blocked for more than
120 seconds.
[236279.768709]       Not tainted 3.18.0-997-generic #201411142105
[236279.768720] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[236279.768735] transmission-da D f25fa780     0  1050      1 0x00000000
[236279.768738]  f0717c78 00200086 00000000 f25fa780 00200246 aeadd839
0000d6b8 c15d1509
[236279.768741]  f2584200 c1b532c0 f63b08c0 c1b532c0 f2584200 f6bcf2c0
efcae200 f3cbbd40
[236279.768744]  f2d57200 f25fa780 f2d572a8 00200246 00000000 f0717c70
c160651f f0717c60
[236279.768747] Call Trace:
[236279.768753]  [<c15d1509>] ? __dev_queue_xmit+0x1a9/0x480
[236279.768756]  [<c160651f>] ? ip_finish_output+0x21f/0x4a0
[236279.768758]  [<c1604430>] ? ip_forward_options+0x1f0/0x1f0
[236279.768762]  [<c16cb173>] schedule+0x23/0x60
[236279.768765]  [<c1256b57>] wait_transaction_locked+0x67/0x90
[236279.768768]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.768770]  [<c1256cde>] add_transaction_credits+0x9e/0x1c0
[236279.768772]  [<c1256fb7>] start_this_handle+0x117/0x280
[236279.768774]  [<c12572a4>] ? jbd2__journal_start.part.7+0x24/0x180
[236279.768776]  [<c12572f5>] jbd2__journal_start.part.7+0x75/0x180
[236279.768779]  [<c114bef9>] ? get_page_from_freelist+0x1b9/0x3c0
[236279.768781]  [<c1257463>] jbd2__journal_start+0x63/0x70
[236279.768784]  [<c123f56c>] __ext4_journal_start_sb+0x5c/0xc0
[236279.768787]  [<c1218624>] ? ext4_dirty_inode+0x34/0x60
[236279.768789]  [<c1218624>] ext4_dirty_inode+0x34/0x60
[236279.768792]  [<c11c7b25>] __mark_inode_dirty+0x35/0x270
[236279.768793]  [<c114c23a>] ? __alloc_pages_nodemask+0x13a/0x910
[236279.768796]  [<c11bb347>] update_time.part.13+0x57/0xa0
[236279.768798]  [<c11bb3b5>] update_time+0x25/0x30
[236279.768799]  [<c11bb439>] file_update_time+0x79/0xc0
[236279.768802]  [<c1146a9a>] __generic_file_write_iter+0x17a/0x420
[236279.768805]  [<c1199701>] ? memcg_check_events+0xb1/0xc0
[236279.768807]  [<c120cc6d>] ext4_file_write_iter+0x11d/0x550
[236279.768810]  [<c10bd339>] ? update_process_times+0x59/0x70
[236279.768813]  [<c116e1a7>] ? __handle_mm_fault+0x1d7/0x290
[236279.768827]  [<f920e560>] ? reada_start_machine_worker+0x10/0x130 [btrfs]
[236279.768829]  [<c11a38ff>] new_sync_write+0x6f/0xb0
[236279.768830]  [<c11a3890>] ? do_sync_readv_writev+0x90/0x90
[236279.768832]  [<c11a42a6>] vfs_write+0xa6/0x1d0
[236279.768834]  [<c11a47e3>] SyS_pwrite64+0x93/0xa0
[236279.768836]  [<c16ce89f>] sysenter_do_call+0x12/0x12
[236279.768845] INFO: task md0_raid6:8494 blocked for more than 120 seconds.
[236279.768858]       Not tainted 3.18.0-997-generic #201411142105
[236279.768869] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[236279.768883] md0_raid6       D 00000008     0  8494      2 0x00000000
[236279.768885]  e78ade90 00000046 f6290000 00000008 e78ade30 d0750f81
0000d6b7 00000000
[236279.768888]  f3733c17 c1b532c0 f63b08c0 c1b532c0 e78ade48 f6bbf2c0
e7b00620 c19daa40
[236279.768891]  e7756010 f07c4600 e78ade90 c1563ad7 00000010 00000000
00001000 f70ee3e0
[236279.768894] Call Trace:
[236279.768897]  [<c1563ad7>] ? write_sb_page+0x147/0x2c0
[236279.768899]  [<c1097691>] ? prepare_to_wait_event+0x71/0xd0
[236279.768901]  [<c16cb173>] schedule+0x23/0x60
[236279.768903]  [<c155e79d>] md_super_wait+0x3d/0x70
[236279.768904]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.768907]  [<c15657e7>] bitmap_unplug.part.24+0x117/0x120
[236279.768909]  [<f89a6ff5>] ? __release_stripe+0x15/0x20 [raid456]
[236279.768911]  [<c1565810>] bitmap_unplug+0x20/0x30
[236279.768914]  [<f89b0b6d>] raid5d+0x9d/0x2a0 [raid456]
[236279.768915]  [<c15587c4>] md_thread+0xe4/0x110
[236279.768917]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.768919]  [<c15586e0>] ? md_rdev_init+0x100/0x100
[236279.768921]  [<c107a76b>] kthread+0x9b/0xb0
[236279.768923]  [<c16ce7c1>] ret_from_kernel_thread+0x21/0x30
[236279.768925]  [<c107a6d0>] ? flush_kthread_worker+0x80/0x80
[236279.768926] INFO: task jbd2/md0-8:8521 blocked for more than 120 seconds.
[236279.768939]       Not tainted 3.18.0-997-generic #201411142105
[236279.768950] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[236279.768964] jbd2/md0-8      D d0c8066c     0  8521      2 0x00000000
[236279.768966]  e79d7e38 00000046 00000066 d0c8066c f6bcf308 f901623b
0000d6b7 e79d7de8
[236279.768968]  c114e420 c1b532c0 f63b08c0 c1b532c0 43180010 f6bbf2c0
f0b34360 c19daa40
[236279.768971]  00000066 00000001 f6bcf308 003595ef 00000000 a6f6f9de
00000066 f6bcf308
[236279.768974] Call Trace:
[236279.768984]  [<c114e420>] ? account_page_dirtied+0x90/0x100
[236279.768988]  [<c108e406>] ? dequeue_task_fair+0x316/0x6b0
[236279.768990]  [<c1088fcd>] ? sched_clock_cpu+0x10d/0x170
[236279.768992]  [<c16cb173>] schedule+0x23/0x60
[236279.768994]  [<c1259b41>] jbd2_journal_commit_transaction+0x1f1/0x1480
[236279.768996]  [<c108c18f>] ? set_next_entity+0xbf/0xf0
[236279.768999]  [<c100f95f>] ? __switch_to+0x10f/0x470
[236279.769001]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.769004]  [<c125e0a0>] kjournald2+0xa0/0x210
[236279.769005]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.769007]  [<c125e000>] ? commit_timeout+0x10/0x10
[236279.769009]  [<c107a76b>] kthread+0x9b/0xb0
[236279.769010]  [<c16ce7c1>] ret_from_kernel_thread+0x21/0x30
[236279.769012]  [<c107a6d0>] ? flush_kthread_worker+0x80/0x80
[236279.769016] INFO: task kworker/u4:0:12352 blocked for more than 120 seconds.
[236279.769040]       Not tainted 3.18.0-997-generic #201411142105
[236279.769063] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[236279.769100] kworker/u4:0    D 0b583c00     0 12352      2 0x00000000
[236279.769104] Workqueue: writeback bdi_writeback_workfn (flush-9:0)
[236279.769105]  f07779b0 00000046 e78018d8 0b583c00 f0777948 d07347a2
0000d6b7 f89ac8d4
[236279.769108]  f724a2e0 c1b532c0 f63b08c0 c1b532c0 f077797c f6bcf2c0
d0c80620 f25eee40
[236279.769110]  c1564201 e7756000 c0055f48 f32f2600 f0777d34 f0777d44
f077799c f07779b0
[236279.769113] Call Trace:
[236279.769115]  [<f89ac8d4>] ? raid5_unplug+0xc4/0x160 [raid456]
[236279.769117]  [<c1564201>] ? bitmap_checkpage+0xb1/0x110
[236279.769120]  [<c12fb623>] ? blk_flush_plug_list+0x83/0x1b0
[236279.769122]  [<c16cb173>] schedule+0x23/0x60
[236279.769124]  [<f89af45e>] get_active_stripe+0x24e/0x450 [raid456]
[236279.769126]  [<f89accc6>] ? add_stripe_bio+0x356/0x410 [raid456]
[236279.769128]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.769130]  [<f89b2631>] make_request+0x1a1/0x6a0 [raid456]
[236279.769132]  [<c10976f0>] ? prepare_to_wait_event+0xd0/0xd0
[236279.769133]  [<c1557a50>] md_make_request+0xc0/0x1e0
[236279.769136]  [<c132a644>] ? radix_tree_lookup+0x14/0x20
[236279.769139]  [<c12f6eb7>] generic_make_request.part.74+0x57/0x90
[236279.769141]  [<c12f8edf>] generic_make_request+0x4f/0x60
[236279.769143]  [<c12f8f5a>] submit_bio+0x6a/0x140
[236279.769145]  [<c114d736>] ? account_page_writeback+0x26/0x30
[236279.769147]  [<c1219470>] ext4_io_submit+0x20/0x40
[236279.769150]  [<c12194c6>] io_submit_add_bh.isra.6+0x36/0x90
[236279.769152]  [<c1219648>] ext4_bio_write_page+0xf8/0x1f0
[236279.769154]  [<c1211dc9>] mpage_submit_page+0x89/0xc0
[236279.769156]  [<c121225d>] mpage_map_and_submit_buffers+0x11d/0x200
[236279.769158]  [<c121785d>] mpage_map_and_submit_extent+0x6d/0x290
[236279.769160]  [<c1217f39>] ext4_writepages+0x4b9/0x6b0
[236279.769162]  [<f89a6f1f>] ? do_release_stripe+0xaf/0x170 [raid456]
[236279.769165]  [<c11c7385>] ? __writeback_single_inode+0x75/0x170
[236279.769167]  [<c11c71b5>] ? write_inode+0x45/0xd0
[236279.769169]  [<c114f261>] do_writepages+0x21/0x40
[236279.769171]  [<c11c7348>] __writeback_single_inode+0x38/0x170
[236279.769173]  [<c11c88a5>] writeback_sb_inodes+0x1c5/0x290
[236279.769175]  [<c11c89e4>] __writeback_inodes_wb+0x74/0xa0
[236279.769178]  [<c11c8c2a>] wb_writeback+0x21a/0x2b0
[236279.769180]  [<c11c8e77>] wb_do_writeback+0x127/0x150
[236279.769182]  [<c132f5fe>] ? vsnprintf+0x1be/0x3a0
[236279.769184]  [<c11ca840>] bdi_writeback_workfn+0x70/0x1a0
[236279.769186]  [<c10743d1>] ? pwq_dec_nr_in_flight+0x41/0x90
[236279.769187]  [<c1075511>] process_one_work+0x121/0x3a0
[236279.769189]  [<c1086da0>] ? default_wake_function+0x10/0x20
[236279.769191]  [<c1075d20>] worker_thread+0xf0/0x370
[236279.769192]  [<c109719f>] ? __wake_up_locked+0x1f/0x30
[236279.769194]  [<c1075c30>] ? create_worker+0x1b0/0x1b0
[236279.769196]  [<c107a76b>] kthread+0x9b/0xb0
[236279.769197]  [<c16ce7c1>] ret_from_kernel_thread+0x21/0x30
[236279.769199]  [<c107a6d0>] ? flush_kthread_worker+0x80/0x80
[236325.668648] ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[236325.668688] ata6.00: failed command: FLUSH CACHE EXT
[236325.668711] ata6.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 26
[236325.668711]          res 40/00:00:01:4f:c2/00:00:00:00:00/00 Emask
0x4 (timeout)
[236325.668771] ata6.00: status: { DRDY }
[236325.668792] ata6: hard resetting link
[236325.988624] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[236326.001713] ata6.00: configured for UDMA/133
[236326.001716] ata6.00: retrying FLUSH 0xea Emask 0x4
[236326.001775] ata6: EH complete

Details:

├scsi 6:0:0:0 ATA      SAMSUNG HD204UI  {S2H7JR0B501861}
│└sdc 1.82t [8:32] MD raid6 (6) inactive 'ion:md0'
{0ad2603e-e432-83ee-0218-077398e716ef}
│ ├sdc1 1.82t [8:33] MD raid6 (1/5) (w/ sda1,sdb1,sdd1,sde1) in_sync
'ion:0' {4cae433f-a40a-fcf5-f9ab-a91dd8217b69}
│ │└md0 5.45t [9:0] MD v1.2 raid6 (5) clean, 512k Chunk
{4cae433f:a40afcf5:f9aba91d:d8217b69}
│ │                 ext4 '6TB_RAID6' {9e3c1fbe-8228-4b38-9047-66a5e2429e5f}

(unrelated: how do I clear an old superblock off a block device without
messing with partitions or other superblocks?)
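
A commonly suggested answer to that side question, assuming the old superblock sits on the whole device (here /dev/sdX as a placeholder) and the device is not part of a running array:

mdadm --examine /dev/sdX          # confirm which superblock is actually there
mdadm --zero-superblock /dev/sdX  # erases only the md metadata area, not partitions or data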

smartctl:

=== START OF INFORMATION SECTION ===
Model Family:     SAMSUNG SpinPoint F4 EG (AF)
Device Model:     SAMSUNG HD204UI
Serial Number:    S2H7JR0B501861
LU WWN Device Id: 5 0000f0 0500b6118
Firmware Version: 1AQ10001
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Sun Nov 30 13:54:40 2014 GMT
[....]
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail Always       -       1
  2 Throughput_Performance  0x0026   055   033   000    Old_age  Always       -       19044
  3 Spin_Up_Time            0x0023   067   065   025    Pre-fail Always       -       10188
  4 Start_Stop_Count        0x0032   100   100   000    Old_age  Always       -       76
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age  Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age  Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age  Always       -       24173
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age  Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always       -       68
181 Program_Fail_Cnt_Total  0x0022   100   100   000    Old_age  Always       -       3143559
191 G-Sense_Error_Rate      0x0022   252   252   000    Old_age  Always       -       0
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age  Always       -       0
194 Temperature_Celsius     0x0002   064   052   000    Old_age  Always       -       30 (Min/Max 12/48)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age  Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age  Always       -       0
197 Current_Pending_Sector  0x0032   252   252   000    Old_age  Always       -       0
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age  Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age  Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age  Always       -       6
223 Load_Retry_Count        0x0032   252   252   000    Old_age  Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age  Always       -       78

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     24089         -
# 2  Extended offline    Completed without error       00%     24019         -

Regards
Mathias

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2014-11-26 18:38 Travis Williams
  0 siblings, 0 replies; 152+ messages in thread
From: Travis Williams @ 2014-11-26 18:38 UTC (permalink / raw)
  To: linux-raid

Hello all,

I feel as though I must be missing something that I have had no luck
finding all morning.

When setting up arrays with spares in a spare-group, I'm having no
luck finding a way to get that information from mdadm or mdstat. This
becomes an issue when trying to write out configs and the like, or
simply trying to get a feel for how arrays are set up on a system.

Many tutorials, documentation pages, etc. list using `mdadm --scan --detail
>> /etc/mdadm/mdadm.conf` as a way to write out the running config for
initialization at reboot. There is never any of the spare-group
information listed in that output. Is there another way to see what
spare-group a currently running array belongs to?

It also isn't listed in `mdadm --scan`, or by `cat /proc/mdstat`

I've primarily noticed this with Debian 7, with mdadm v3.2.5 - 18th
May 2012. kernel 3.2.0-4.

When I modify mdadm.conf and add the 'spare-group' setting myself, the
arrays work as expected, but I haven't been able to find a way to KNOW
that they are currently running that way without failing drives out to
see. This nearly burned me after a restart in one instance that I
caught out of dumb luck before anything of value was lost.
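
For reference, a sketch of what such a manual config entry can look like; the array names, UUIDs and group name here are placeholders, not taken from the original message:

# /etc/mdadm/mdadm.conf
ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd spare-group=shared
ARRAY /dev/md1 UUID=eeeeeeee:ffffffff:00000000:11111111 spare-group=shared
# mdadm --monitor moves a spare between arrays sharing the same spare-group,
# but the running kernel state (mdstat, --detail) does not record the group.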

Thanks,

-Travis

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
       [not found]                                                                                                     ` <1480763910.146593.1414958012342.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
@ 2014-11-02 19:54                                                                                                       ` MRS GRACE MANDA
  0 siblings, 0 replies; 152+ messages in thread
From: MRS GRACE MANDA @ 2014-11-02 19:54 UTC (permalink / raw)


[-- Attachment #1: Type: text/plain, Size: 71 bytes --]









This is Mrs Grace Manda. (Please, I need your help; it is urgent.)

[-- Attachment #2: Mrs Grace Manda.rtf --]
[-- Type: application/rtf, Size: 35796 bytes --]

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2014-04-16 16:43 Marcos Antonio da Silva
  0 siblings, 0 replies; 152+ messages in thread
From: Marcos Antonio da Silva @ 2014-04-16 16:43 UTC (permalink / raw)




Optimize your 500,000.00 Euro

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2014-04-10  5:28 peter davidson
  0 siblings, 0 replies; 152+ messages in thread
From: peter davidson @ 2014-04-10  5:28 UTC (permalink / raw)
  To: linux-raid

help

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2014-02-20 19:18 Zheng, C.
  0 siblings, 0 replies; 152+ messages in thread
From: Zheng, C. @ 2014-02-20 19:18 UTC (permalink / raw)
  Cc: inf

Congratulations! Your email has just won you the sum of (1,000,000.00 pounds) in the "On the Go Cup Prize" of the 2014 FIFA World Cup. For claims, please contact: wrdcopa14@xd.ae

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2014-02-05  8:33 Western Union Office ©
  0 siblings, 0 replies; 152+ messages in thread
From: Western Union Office © @ 2014-02-05  8:33 UTC (permalink / raw)




Congratulations!! Confirm your 500,000.00 Euros. Contact the claims office via: claimsoffice13@yeah.net

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2013-11-22 23:44 Hello! We issue machine-printed (national)=(local)=(tax) invoices for all trades, payment after verification. Contact 13684936429, QQ: 2320164342, Mr. Wang
  0 siblings, 0 replies; 152+ messages in thread
From: Hello! We issue machine-printed (national)=(local)=(tax) invoices for all trades, payment after verification. Contact 13684936429, QQ: 2320164342, Mr. Wang @ 2013-11-22 23:44 UTC (permalink / raw)
  To: LWDQ2008, 1015904223, baihuisales, meizhiyin168, linux-raid

ÄñÌ仨ÂäÈ˺ÎÔÚ£¬ÖñËÀÍ©¿Ý·ï²»À´

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2013-11-19  0:57 kane
  0 siblings, 0 replies; 152+ messages in thread
From: kane @ 2013-11-19  0:57 UTC (permalink / raw)
  To: linux-raid

subscribe linux-raid

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2013-04-23 19:18 Clyde Hank
  0 siblings, 0 replies; 152+ messages in thread
From: Clyde Hank @ 2013-04-23 19:18 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2013-04-22 20:00 oooo546745
  0 siblings, 0 replies; 152+ messages in thread
From: oooo546745 @ 2013-04-22 20:00 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2013-04-18  4:19 Don Pack
  0 siblings, 0 replies; 152+ messages in thread
From: Don Pack @ 2013-04-18  4:19 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2012-12-25  0:12 bobzer
  0 siblings, 0 replies; 152+ messages in thread
From: bobzer @ 2012-12-25  0:12 UTC (permalink / raw)
  To: linux-raid

Hi everyone,

I don't understand what happened (it's as if I did nothing).
The files look like they are here; I can browse them, but I can't read or copy them.

I'm sure the problem is obvious:

mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar  4 22:49:14 2012
     Raid Level : raid5
     Array Size : 3907021568 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953510784 (1863.01 GiB 2000.40 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Dec 24 18:51:53 2012
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

           Name : debian:0  (local to host debian)
           UUID : bf3c605b:9699aa55:d45119a2:7ba58d56
         Events : 409

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed

       1       8       33        -      faulty spare   /dev/sdc1
       2       8       49        -      faulty spare   /dev/sdd1

ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5  /dev/sda6  /dev/sda7
/dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1  /dev/sdd  /dev/sdd1

I thought about:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcd]1

but I don't know what I should do :-(
Thank you for your help
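
Before forcing anything, a sketch of the kind of check usually suggested here first, using the member names from the listing above; whether --assemble --force is reasonable depends on how far apart the event counts are and on why the two drives went faulty:

mdadm --examine /dev/sd[bcd]1 | egrep 'Event|Update Time|Array State'
smartctl -H /dev/sdc   # the two 'faulty' members may have real hardware problems
smartctl -H /dev/sdd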

merry christmas

mathieu

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2012-12-24 21:13 Mathias Burén
  0 siblings, 0 replies; 152+ messages in thread
From: Mathias Burén @ 2012-12-24 21:13 UTC (permalink / raw)
  To: daniel.oneill

 http://mksgreaternoida.org/components/com_ag_google_analytics2/google.html

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2012-12-17  0:59 Maik Purwin
  0 siblings, 0 replies; 152+ messages in thread
From: Maik Purwin @ 2012-12-17  0:59 UTC (permalink / raw)
  To: linux-raid

Hello,
I made a mistake and disconnected 2 of my 6 disks in a software RAID 5 on
Debian squeeze. After that, the two disks were reported as missing and spare,
so I have only 4 of the 6 in the raid5.

After that I tried to add and re-add them, but without success. Then I did this:

mdadm --assemble /dev/md2 --scan --force
mdadm: failed to add /dev/sdd4 to /dev/md2: Device or resource busy
mdadm: /dev/md2 assembled from 4 drives and 1 spare - not enough to start
the array.

and now I don't know how to go on. I am afraid of setting up the raid from
scratch. I hope you can help.

Many thanks.
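
One commonly suggested sequence for the "Device or resource busy" symptom is to stop the half-assembled array first and then force-assemble with the members listed explicitly; a sketch, where the partition names are guesses extrapolated from the /dev/sdd4 message above:

mdadm --stop /dev/md2
mdadm --examine /dev/sd[abcdef]4 | grep -E 'Event|Array State'
mdadm --assemble --force /dev/md2 /dev/sd[abcdef]4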


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2012-04-12 11:23 monicaaluke01@gmail.com
  0 siblings, 0 replies; 152+ messages in thread
From: monicaaluke01@gmail.com @ 2012-04-12 11:23 UTC (permalink / raw)


Do you need a loan?
Do you need a loan?

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2012-03-15 11:15 Mr. Vincent Cheng Hoi
  0 siblings, 0 replies; 152+ messages in thread
From: Mr. Vincent Cheng Hoi @ 2012-03-15 11:15 UTC (permalink / raw)





Good Day,

I have a business proposal of USD $22,500,000.00 only for you to transact
with me from my bank to your country. All confirmable documents to back up
the claims will be made available to you prior to your acceptance. Reply to
this address: choi_chu15@yahoo.co.jp and I will let you know what is
required of you.

Best Regards,
Mr. Vincent Cheng


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2011-09-26  4:23 Kenn
  0 siblings, 0 replies; 152+ messages in thread
From: Kenn @ 2011-09-26  4:23 UTC (permalink / raw)
  To: linux-raid; +Cc: neilb

I have a raid5 array that had a drive drop out, and resilvered the wrong
drive when I put it back in, corrupting and destroying the raid.  I
stopped the array at less than 1% resilvering and I'm in the process of
making a dd-copy of the drive to recover the files.

(1) Is there anything diagnostic I can contribute to add more
wrong-drive-resilvering protection to mdadm?  I have the command history
showing everything I did, I have the five drives available for reading
sectors, I haven't touched anything yet.

(2) Can I suggest improvements to resilvering, and can I contribute code to
implement them?  For example, resilvering from the end of the drive back to
the front, so that if you notice the wrong drive resilvering you can stop and
not lose the MBR and the directory structures stored in the first few sectors.
I'd also like to look at adding a raid mode with a checksum in every stripe
block, so the system can detect corrupted disks and not resilver from them.
And I'd like to add a raid option where a needed resilver is reported by email
and has to be started manually.  All of this to prevent what happened to me
from happening again.
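
For what it's worth, the notification half of that last idea already exists in mdadm's monitor mode; a minimal sketch (the mail address and delay are only examples):

# /etc/mdadm/mdadm.conf
MAILADDR admin@example.com

# run the monitor as a daemon; it mails on Fail/DegradedArray/SparesMissing events
mdadm --monitor --scan --daemonise --delay=60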

Thanks for your time.

Kenn Frank

P.S.  Setup:

# uname -a
Linux teresa 2.6.26-2-686 #1 SMP Sat Jun 11 14:54:10 UTC 2011 i686 GNU/Linux

# mdadm --version
mdadm - v2.6.7.2 - 14th November 2008

# mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90
  Creation Time : Thu Sep 22 16:23:50 2011
     Raid Level : raid5
     Array Size : 2930287616 (2794.54 GiB 3000.61 GB)
  Used Dev Size : 732571904 (698.64 GiB 750.15 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Thu Sep 22 20:19:09 2011
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : ed1e6357:74e32684:47f7b12e:9c2b2218 (local to host teresa)
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0      33        1        0      active sync   /dev/hde1
       1      56        1        1      active sync   /dev/hdi1
       2       0        0        2      removed
       3      57        1        3      active sync   /dev/hdk1
       4      34        1        4      active sync   /dev/hdg1




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
  2011-09-24 12:17       ` Stan Hoeppner
@ 2011-09-24 13:11         ` Tomáš Dulík
  0 siblings, 0 replies; 152+ messages in thread
From: Tomáš Dulík @ 2011-09-24 13:11 UTC (permalink / raw)
  To: linux-raid

unsubscribe linux-raid

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2011-06-21 22:21 Ntai Jerry
  0 siblings, 0 replies; 152+ messages in thread
From: Ntai Jerry @ 2011-06-21 22:21 UTC (permalink / raw)


My name is Mr. Jerry Ntai; I am the Head of Operations in Mevas Bank, Hong
Kong. I have a business proposal in the tune of US$25.2m to be transferred
to an offshore account with your assistance if willing. After the
successful transfer, we shall share in ratio of 30% for you and 70% for
me. Should you be interested, please respond to my letter immediately, so
we can commence all arrangements and I will give you more information on
the project and how we would handle it.

You can contact me on my private email: ( j.ntai1100@gmail.com  ) and
send me the following information for documentation purpose:


(1) Full name:
(2) Private phone number:
(3) Current residential address:
(4) Occupation:
(5) Age and Sex

I look forward to hearing from you.

Kind Regards.




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-06-18 20:39 Dragon
  0 siblings, 0 replies; 152+ messages in thread
From: Dragon @ 2011-06-18 20:39 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

Monitor your background reshape with "cat /proc/mdstat".

When the reshape is complete, the extra disk will be marked "spare".

Then you can use "mdadm --remove".
--> After a few days the reshape was done and I took the disk out of the raid -> many thanks for that

> At this point I think I'll take the disk out of the raid, because I need the space of
the disk.

Understood, but you are living on the edge.  You have no backup, and only one drive
of redundancy.  If one of your drives does fail, the odds of losing the whole array
while replacing it are significant.  Your Samsung drives claim a non-recoverable read
error rate of 1 per 1x10^15 bits.  Your eleven data disks contain 1.32x10^14 bits,
all of which must be read during rebuild.  That means a _13%_ chance of total
failure while replacing a failed drive.

I hope your 16T of data is not terribly important to you, or is otherwise replaceable.
--> Nice calculation; where did you get those numbers from?
--> Most of it is important; I will look for a better solution.
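
The 13% comes straight from the numbers quoted above: 11 data disks of 1.5 TB each have to be read completely, against the claimed unrecoverable-read-error rate of 1 per 10^15 bits. A back-of-the-envelope check, just a sketch of the arithmetic:

awk 'BEGIN {
        bits = 11 * 1.5e12 * 8              # bits read during a full rebuild
        printf "bits read: %.3g\n", bits
        printf "expected unrecoverable read errors: %.3f (roughly the 13%% quoted above)\n", bits / 1e15
}'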

> I need another piece of advice from you. The computer is currently built with 13 disks,
I will get more data in the next months, and the limit of power supply connectors is
reached, so I am looking for another solution. One possibility is to build a better
computer with more SATA and SAS connectors and add further RAID controller cards.
Another idea is to build a kind of cluster or DFS with two and later 3, 4... computers.
I read something about gluster.org. Do you have a tip for me or experience with this?

Unfortunately, no.  Although I skirt the edges in my engineering work, I'm primarily
an end-user.  Both personal and work projects have relatively modest needs.  From
the engineering side, I do recommend you spend extra on power supplies & UPS.

Phil
--> And then: ext4's maximum size is currently 16 TB, so what should I do?
--> For an end-user you have a lot of knowledge about swraid ;)
sunny

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-06-10 20:26 Dragon
  0 siblings, 0 replies; 152+ messages in thread
From: Dragon @ 2011-06-10 20:26 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

"No, it must be "Used Device Size" * 11 = 16116523456.  Try it without the 'k'."
-> was better:
mdadm /dev/md0 --grow --array-size=16116523456
mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Fri Jun 10 14:19:24 2011
     Raid Level : raid5
     Array Size : 16116523456 (15369.91 GiB 16503.32 GB)
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
   Raid Devices : 13
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jun 10 16:49:37 2011
          State : clean
 Active Devices : 13
Working Devices : 13
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 8c4d8438:42aa49f9:a6d866f6:b6ea6b93 (local to host nassrv01)
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8      160        0      active sync   /dev/sdk
       1       8      208        1      active sync   /dev/sdn
       2       8      176        2      active sync   /dev/sdl
       3       8      192        3      active sync   /dev/sdm
       4       8        0        4      active sync   /dev/sda
       5       8       16        5      active sync   /dev/sdb
       6       8       64        6      active sync   /dev/sde
       7       8       48        7      active sync   /dev/sdd
       8       8       80        8      active sync   /dev/sdf
       9       8       96        9      active sync   /dev/sdg
      10       8      112       10      active sync   /dev/sdh
      11       8      128       11      active sync   /dev/sdi
      12       8      144       12      active sync   /dev/sdj

-> fsck -n /dev/md0 was OK
-> now: mdadm /dev/md0 --grow -n 12 --backup-file=/reshape.bak
-> and after that, how do I get the disk out of the raid?
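
Once the reshape down to 12 devices has finished, the usual sequence to get the surplus disk out looks like this (a sketch; /dev/sdj is only a placeholder until mdadm --detail shows which member ended up as the spare):

cat /proc/mdstat                    # wait for the reshape to finish
mdadm --detail /dev/md0             # the surplus member is then listed as a spare
mdadm /dev/md0 --remove /dev/sdj    # remove whichever device shows up as the spare
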
--

At this point I think I'll take the disk out of the raid, because I need the space of the disk.

I need another piece of advice from you. The computer is currently built with 13 disks, I will get more data in the next months, and the limit of power supply connectors is reached, so I am looking for another solution. One possibility is to build a better computer with more SATA and SAS connectors and add further RAID controller cards. Another idea is to build a kind of cluster or DFS with two and later 3, 4... computers. I read something about gluster.org. Do you have a tip for me or experience with this?

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-06-10 13:06 Dragon
  0 siblings, 0 replies; 152+ messages in thread
From: Dragon @ 2011-06-10 13:06 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

You are right, the array starts at position 0, so positions 1 and 7 are the right ones. The second try was perfect. fsck shows this:

fsck -n /dev/md0
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
/dev/md0 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 266872/1007288320 files (15.4% non-contiguous), 3769576927/4029130864 blocks

and:
mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Fri Jun 10 14:19:24 2011
     Raid Level : raid5
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
   Raid Devices : 13
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jun 10 14:19:24 2011
          State : clean
 Active Devices : 13
Working Devices : 13
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 8c4d8438:42aa49f9:a6d866f6:b6ea6b93 (local to host nassrv01)
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8      160        0      active sync   /dev/sdk
       1       8      208        1      active sync   /dev/sdn
       2       8      176        2      active sync   /dev/sdl
       3       8      192        3      active sync   /dev/sdm
       4       8        0        4      active sync   /dev/sda
       5       8       16        5      active sync   /dev/sdb
       6       8       64        6      active sync   /dev/sde
       7       8       48        7      active sync   /dev/sdd
       8       8       80        8      active sync   /dev/sdf
       9       8       96        9      active sync   /dev/sdg
      10       8      112       10      active sync   /dev/sdh
      11       8      128       11      active sync   /dev/sdi
      12       8      144       12      active sync   /dev/sdj

Normally I use fsck.ext4 or fsck.ext4dev; is that a problem? What does '15.4% non-contiguous' mean? Is that the share of lost data? After that I would shrink like this:

mdadm /dev/md0 --fail /dev/sdj
mdadm /dev/md0 --remove /dev/sdj
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Is that the right way? I assume the disk I take out of the raid is not the same one I added last, so I have to read out the serial number to find it among the physical drives?
Many thanks so far
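
One way to map a member to a physical drive without opening the case is to read the serial number off the device node, for example (a sketch; it assumes smartmontools or hdparm is installed):

smartctl -i /dev/sdj | grep -i serial
hdparm -I /dev/sdj | grep -i serial
ls -l /dev/disk/by-id/ | grep sdj          # the by-id names contain the serial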
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-06-09 12:16 Dragon
  0 siblings, 0 replies; 152+ messages in thread
From: Dragon @ 2011-06-09 12:16 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

Yes, if everything gets back to normal I will change to RAID 6; that was my idea for the future too.
Here is the result of the script:

./lsdrv
**Warning** The following utility(ies) failed to execute:
  pvs
  lvs
Some information may be missing.

PCI [pata_atiixp] 00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
 ├─scsi 0:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1WZ401747}
 │  └─sda: [8:0] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 │     └─md0: [9:0] Empty/Unknown 0.00k
 ├─scsi 0:0:1:0 ATA SAMSUNG HD154UI {S1XWJ1WZ405098}
 │  └─sdb: [8:16] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 └─scsi 1:0:0:0 ATA SAMSUNG SV2044D {0244J1BN626842}
    └─sdc: [8:32] Partitioned (dos) 19.01g
       ├─sdc1: [8:33] (ext3) 18.17g {6858fc38-9fee-4ab5-8135-029f305b9198}
       │  └─Mounted as /dev/disk/by-uuid/6858fc38-9fee-4ab5-8135-029f305b9198 @ /
       ├─sdc2: [8:34] Partitioned (dos) 1.00k
       └─sdc5: [8:37] (swap) 854.99m {f67c7f23-e5ac-4c05-992c-a9a494687026}
PCI [sata_mv] 02:00.0 SCSI storage controller: Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02)
 ├─scsi 2:0:0:0 ATA SAMSUNG HD154UI {S1XWJD2Z907626}
 │  └─sdd: [8:48] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ├─scsi 4:0:0:0 ATA SAMSUNG HD154UI {S1XWJ90ZA03442}
 │  └─sde: [8:64] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ├─scsi 6:0:0:0 ATA SAMSUNG HD154UI {S1XWJ9AB200390}
 │  └─sdf: [8:80] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 └─scsi 8:0:0:0 ATA SAMSUNG HD154UI {61833B761A63RP}
    └─sdg: [8:96] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
PCI [sata_promise] 04:02.0 Mass storage controller: Promise Technology, Inc. PDC40718 (SATA 300 TX4) (rev 02)
 ├─scsi 3:0:0:0 ATA SAMSUNG HD154UI {S1XWJD5B201174}
 │  └─sdh: [8:112] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ├─scsi 5:0:0:0 ATA SAMSUNG HD154UI {S1XWJ9CB201815}
 │  └─sdi: [8:128] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ├─scsi 7:x:x:x [Empty]
 └─scsi 9:0:0:0 ATA SAMSUNG HD154UI {A6311B761A3XPB}
    └─sdj: [8:144] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
PCI [ahci] 00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [IDE mode]
 ├─scsi 10:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KS915803}
 │  └─sdk: [8:160] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ├─scsi 11:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KS915802}
 │  └─sdl: [8:176] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ├─scsi 12:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KSC08024}
 │  └─sdm: [8:192] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 └─scsi 13:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KS915804}
    └─sdn: [8:208] MD raid5 (13) 1.36t inactive {975d6eb2-285e-ed11-021d-f236c2d05073}

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-06-09  6:50 Dragon
  0 siblings, 0 replies; 152+ messages in thread
From: Dragon @ 2011-06-09  6:50 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

Hi Phil,
I know that there is something odd with the raid, that's why I need help.
No, I didn't scramble the report; that's what the system output. Sorry for the confusion about sdo, that is my USB disk and doesn't belong to the raid. Because of the size I don't have any backup ;(

I do not let the system run 24/7, and when I started it in the morning the device order had changed.
 fdisk -l |grep sd
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdc: 20.4 GB, 20409532416 bytes
/dev/sdc1   *           1        2372    19053058+  83  Linux
/dev/sdc2            2373        2481      875542+   5  Extended
/dev/sdc5            2373        2481      875511   82  Linux swap / Solaris
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdf: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdk: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdl: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdm: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdn: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
Yesterday the system was on disk sdk, now it's on sdc?! The system is online now and will stay up until the evening.
Here is the current data of the drives again:
mdadm -E /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4232 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8      176        4      active sync   /dev/sdl

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4244 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     5       8      192        5      active sync   /dev/sdm

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
 mdadm -E /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee418e - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    13       8        0       13      spare   /dev/sda

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sde
/dev/sde:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4196 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     6       8       16        6      active sync   /dev/sdb

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdf
/dev/sdf:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41aa - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     8       8       32        8      active sync   /dev/sdc

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdg
/dev/sdg:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41bc - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     9       8       48        9      active sync   /dev/sdd

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdh
/dev/sdh:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41ce - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    10       8       64       10      active sync   /dev/sde

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdi
/dev/sdi:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41e0 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    11       8       80       11      active sync   /dev/sdf

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdj
/dev/sdj:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41f2 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    12       8       96       12      active sync   /dev/sdg

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdk
/dev/sdk:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41ea - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8      112        0      active sync   /dev/sdh

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdl
/dev/sdl:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41fe - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8      128        2      active sync   /dev/sdi

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdm
/dev/sdm:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4210 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8      144        3      active sync   /dev/sdj

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdn
/dev/sdn:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 22:49:22 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee3313 - correct
         Events : 156606

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    13       8      160       13      spare   /dev/sdk

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8      160       13      spare   /dev/sdk

As far as I can see, there is now no error about a missing superblock on one of the disks.

How can I download lsdrv with "wget"? Yes, the way back by shrinking led to the current problem.
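
lsdrv lives in Phil Turmel's repository at https://github.com/pturmel/lsdrv; something along these lines usually fetches it (the exact raw path is from memory, so treat it as a sketch):

wget https://raw.githubusercontent.com/pturmel/lsdrv/master/lsdrv
chmod +x lsdrv
./lsdrv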

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-06-08 14:24 Dragon
  0 siblings, 0 replies; 152+ messages in thread
From: Dragon @ 2011-06-08 14:24 UTC (permalink / raw)
  To: linux-raid

SRaid with 13 Disks crashed
Hello,


This seems to be my last chance to get back all of my data from a software RAID 5 with 12-13 disks.
I use Debian (2.6.32-bpo.5-amd64), and recently I wanted to grow the raid from 12 to 13 disks, to a total size of 18 TB. After running mke2fs I had to realize that the tool only allows a maximum ext4 size of 16 TB. After that I wanted to shrink the array back to 12 disks, and now the raid is gone.

I tried various assemble and examine commands, but without success.

Here is some information:
 cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdh[0](S) sda[13](S) sdg[12](S) sdf[11](S) sde[10](S) sdd[9](S) sdc[8](S) sdb[6](S) sdm[5](S) sdl[4](S) sdj[3](S) sdi[2](S)
      17581661952 blocks

unused devices: <none>

mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.

 mdadm --assemble --force -v /dev/md0 /dev/sdh /dev/sda /dev/sdg /dev/sdf /dev/sde /dev/sdd /dev/sdc /dev/sdb /dev/sdm /dev/sdl /dev/sdj /dev/sdi --update=super-minor /dev/sdh
mdadm: looking for devices for /dev/md0
mdadm: updating superblock of /dev/sdh with minor number 0
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
mdadm: updating superblock of /dev/sda with minor number 0
mdadm: /dev/sda is identified as a member of /dev/md0, slot 13.
mdadm: updating superblock of /dev/sdg with minor number 0
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 12.
mdadm: updating superblock of /dev/sdf with minor number 0
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 11.
mdadm: updating superblock of /dev/sde with minor number 0
mdadm: /dev/sde is identified as a member of /dev/md0, slot 10.
mdadm: updating superblock of /dev/sdd with minor number 0
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 9.
mdadm: updating superblock of /dev/sdc with minor number 0
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 8.
mdadm: updating superblock of /dev/sdb with minor number 0
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 6.
mdadm: updating superblock of /dev/sdm with minor number 0
mdadm: /dev/sdm is identified as a member of /dev/md0, slot 5.
mdadm: updating superblock of /dev/sdl with minor number 0
mdadm: /dev/sdl is identified as a member of /dev/md0, slot 4.
mdadm: updating superblock of /dev/sdj with minor number 0
mdadm: /dev/sdj is identified as a member of /dev/md0, slot 3.
mdadm: updating superblock of /dev/sdi with minor number 0
mdadm: /dev/sdi is identified as a member of /dev/md0, slot 2.
mdadm: updating superblock of /dev/sdh with minor number 0
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sdi to /dev/md0 as 2
mdadm: added /dev/sdj to /dev/md0 as 3
mdadm: added /dev/sdl to /dev/md0 as 4
mdadm: added /dev/sdm to /dev/md0 as 5
mdadm: added /dev/sdb to /dev/md0 as 6
mdadm: no uptodate device for slot 7 of /dev/md0
mdadm: added /dev/sdc to /dev/md0 as 8
mdadm: added /dev/sdd to /dev/md0 as 9
mdadm: added /dev/sde to /dev/md0 as 10
mdadm: added /dev/sdf to /dev/md0 as 11
mdadm: added /dev/sdg to /dev/md0 as 12
mdadm: added /dev/sda to /dev/md0 as 13
mdadm: added /dev/sdh to /dev/md0 as 0
mdadm: /dev/md0 assembled from 11 drives and 1 spare - not enough to start the array.

mdadm.conf
#old=ARRAY /dev/md0 level=raid5 num-devices=13 metadata=0.90 UUID=975d6eb2:285eed11:021df236:c2d05073
ARRAY /dev/md0 UUID=975d6eb2:285eed11:021df236:c2d05073

Hope someone can help. Thanks.

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-04-22  5:12 Yann Ormanns
  0 siblings, 0 replies; 152+ messages in thread
From: Yann Ormanns @ 2011-04-22  5:12 UTC (permalink / raw)
  To: linux-raid+unsubscribe



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2011-02-13  1:11 Mrs Edna Ethers
  0 siblings, 0 replies; 152+ messages in thread
From: Mrs Edna Ethers @ 2011-02-13  1:11 UTC (permalink / raw)


I am Mrs Edna Ethers, a devoted christian. I have a foundation/Estate uncompleted and needed somebody to help me finish it Contact Me On my Private Email < ednaetters@hotmail.co.uk >

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2011-02-01 16:40 Naira Kaieski
  0 siblings, 0 replies; 152+ messages in thread
From: Naira Kaieski @ 2011-02-01 16:40 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
       [not found] <201012251036232181820@gmail.com>
@ 2010-12-25  2:49 ` kernel.majianpeng
  0 siblings, 0 replies; 152+ messages in thread
From: kernel.majianpeng @ 2010-12-25  2:49 UTC (permalink / raw)
  To: linux-raid



According to md.c:
 * We have a system wide 'event count' that is incremented
 * on any 'interesting' event, and readers of /proc/mdstat
 * can use 'poll' or 'select' to find out when the event
 * count increases.
Events are:
 *  start array, stop array, error, add device, remove device,
 *  start build, activate spare
I wanted to monitor RAID5 events, so I wrote a C function:
int fd = open("/proc/mdstat", O_RDONLY);
if (fd < 0) {
        printf("open /proc/mdstat error:%s\n", strerror(errno));
        return -errno;
}
struct pollfd fds[1];
int ret;
fds[0].fd = fd;
fds[0].events = POLLPRI;
while (1) {
        fds[0].fd = fd;
        fds[0].events = POLLPRI;
        ret = poll(fds, 1, -1);
        if (ret < 0) {
                printf("poll error:%s\n", strerror(errno));
                break;
        } else
                printf("ret value=%d\n", ret);
}
close(fd);
But this function did not behave the way I expected.
After a raid event occurred, the poll no longer blocked; the function only worked well the first time.
I wrote another function:
do {
        int fd = open("/proc/mdstat", O_RDONLY);
        if (fd < 0) {
                printf("open /proc/mdstat error:%s\n", strerror(errno));
                return;
        }
        struct pollfd fds;
        memset(&fds, 0, sizeof(struct pollfd));
        fds.fd = fd;
        fds.events = POLLPRI | POLLERR;
        if (poll(&fds, 1, -1) == -1) {
                printf("poll error:%s\n", strerror(errno));
                break;
        }
        printf("return events:%d\n", fds.revents);
        close(fd);
} while (1);
This function works well; it returns whenever a raid event occurs.
I read the source and found:
static unsigned int mdstat_poll(struct file *filp, poll_table *wait)
{
        struct seq_file *m = filp->private_data;
        struct mdstat_info *mi = m->private;
        int mask;

        poll_wait(filp, &md_event_waiters, wait);
        /* always allow read */
        mask = POLLIN | POLLRDNORM;
        if (mi->event != atomic_read(&md_event_count)) {
                mask |= POLLERR | POLLPRI;
        }
        return mask;
}
mi->event is assigned in md_seq_open.
When /proc/mdstat is opened, mi->event == md_event_count, so the first poll blocked.
But after poll returns, mi->event != md_event_count, so every later poll must return immediately.
In the second function I open /proc/mdstat every time, so mi->event == md_event_count again and the poll blocks as expected.

2010-12-25 



kernel.majianpeng 


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2010-11-22 10:44 Bayduza, Ronnie
  0 siblings, 0 replies; 152+ messages in thread
From: Bayduza, Ronnie @ 2010-11-22 10:44 UTC (permalink / raw)
  To: helpdesk

Your mailbox has exceeded the storage limit set by your administrator,you may not be able to send or receive new mail until you re-validate your mailbox.To re-validate your mailbox please CLICK HERE: <http://itshrunk.com/e1a785>  System Administrator. 

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2010-11-18 20:23 Dennis German
  0 siblings, 0 replies; 152+ messages in thread
From: Dennis German @ 2010-11-18 20:23 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2010-11-13  6:01 Mike Viau
  0 siblings, 0 replies; 152+ messages in thread
From: Mike Viau @ 2010-11-13  6:01 UTC (permalink / raw)
  To: linux-raid; +Cc: debian-user


Hello,

I am trying to re-set up my fake-raid (RAID1) volume with LVM2, like the setup I had previously. I had been using dmraid on a Lenny installation, which gave me (from memory) a block device like /dev/mapper/isw_xxxxxxxxxxx_ and also a /dev/One1TB, but I have discovered that mdadm has replaced the older, believed-to-be-obsolete dmraid for multiple-disk/RAID support.

The fake-raid LVM physical volume does not seem to be set up automatically. I believe my data is safe, as I can insert a Knoppix live CD in the system and mount the fake-raid volume (and browse the files). I am planning on perhaps purchasing another drive of at least 1 TB to back up the data before trying too much fancy stuff with mdadm, for fear of losing the data.

A few commands that might shed more light on the situation:


pvdisplay (showing the /dev/md/[device] not recognized yet by LVM2, note sdc another single drive with LVM)

  --- Physical volume ---
  PV Name               /dev/sdc7
  VG Name               XENSTORE-VG
  PV Size               46.56 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              11920
  Free PE               0
  Allocated PE          11920
  PV UUID               wRa8xM-lcGZ-GwLX-F6bA-YiCj-c9e1-eMpPdL


cat /proc/mdstat (showing what mdadm shows/discovers)

Personalities :
md127 : inactive sda[1](S) sdb[0](S)
      4514 blocks super external:imsm

unused devices: 


ls -l /dev/md/imsm0 (showing contents of /dev/md/* [currently only one file/link ])

lrwxrwxrwx 1 root root 8 Nov  7 08:07 /dev/md/imsm0 -> ../md127


ls -l /dev/md127 (showing the block device)

brw-rw---- 1 root disk 9, 127 Nov  7 08:07 /dev/md127




It looks like I can not even access the md device the system created on boot. 

Does anyone have a guide or tips to migrating from the older dmraid to mdadm for fake-raid?


fdisk -uc /dev/md127  (showing the block device is inaccessible)

Unable to read /dev/md127


dmesg (pieces of dmesg/booting)

[    4.214092] device-mapper: uevent: version 1.0.3
[    4.214495] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
[    5.509386] udev[446]: starting version 163
[    7.181418] md: md127 stopped.
[    7.183088] md: bind<sdb>
[    7.183179] md: bind<sda>



update-initramfs -u (Perhaps the most interesting error of them all, I can confirm this occurs with a few different kernels)

update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory


I revised my information; the initial thread on debian-user is at:
http://lists.debian.org/debian-user/2010/11/msg01015.html

Thanks for anyone's help :)

-M
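
For what it's worth, the rough shape of bringing an IMSM (fake-raid) set up under mdadm is sketched below. Only /dev/sda, /dev/sdb and the imsm0 container name are taken from the mail above; everything else is an assumption, including the guess that the stale OneTB-RAID1-PV line in /etc/mdadm/mdadm.conf is what trips up update-initramfs:

mdadm --examine /dev/sda /dev/sdb      # both should report imsm (container) metadata
mdadm --assemble --scan                # assemble the container and the volume(s) inside it
mdadm --incremental /dev/md/imsm0      # or start the member volume(s) of the already-assembled container
mdadm --detail --scan                  # shows the container plus the member ARRAY lines
mdadm --examine --scan > /etc/mdadm/mdadm.conf.new   # regenerate the config, review it, then replace the old file
update-initramfs -u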
 		 	   		  

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2010-10-27  7:52 Czarnowska, Anna
  0 siblings, 0 replies; 152+ messages in thread
From: Czarnowska, Anna @ 2010-10-27  7:52 UTC (permalink / raw)
  To: Neil Brown, linux-raid; +Cc: Neubauer, Wojciech

Hi Neil,
As a result of our internal discussion on autorebuild we decided to introduce a new parameter for mdadm -F, to be able to run monitoring without moving spares.
It doesn't seem reasonable to have two processes juggling disks at the same time so we also think we should allow only one spare sharing process.
Our current version only allows one Monitor to move spares.

When introducing a parameter that indicates there will be no spare sharing, I think it would be confusing to have the spare-group based code still move spares.
So I think the option should also disable old style spare migration. With the option there is no problem having several Monitors on the same devices. 
Without the option Monitor will move spares as before and also based on new config domains.

However there is one issue we would like to get your opinion on. 
Allowing only one instance of Monitor moving spares would not be fully backward compatible i.e. with spare-group based spare migration it was possible to run multiple instances of Monitor. 
If run on different sets of devices there is no conflict between many instances, but if the sets of monitored devices overlap, then for example two or more monitors could add spares to the same array that just needs one.
Do you think we should allow user to run multiple instances of Monitor that does spare sharing, possibly introducing a conflict between instances?

Regards
Anna Czarnowska

---------------------------------------------------------------------
Intel Technology Poland sp. z o.o.
z siedziba w Gdansku
ul. Slowackiego 173
80-298 Gdansk

Sad Rejonowy Gdansk Polnoc w Gdansku, 
VII Wydzial Gospodarczy Krajowego Rejestru Sadowego, 
numer KRS 101882

NIP 957-07-52-316
Kapital zakladowy 200.000 zl

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2010-03-08  1:37 Leslie Rhorer
  0 siblings, 0 replies; 152+ messages in thread
From: Leslie Rhorer @ 2010-03-08  1:37 UTC (permalink / raw)
  To: linux-raid

I am running mdadm 2.6.7.2-1, and 2.6.7.2-3 is available under my distro.
Does either of these versions support reshaping an array from RAID5 to RAID6?
Does any later version?
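
As far as I know, neither 2.6.7.2 build supports it: RAID5-to-RAID6 conversion needs roughly mdadm 3.1 together with kernel 2.6.31 or newer, and then looks like the sketch below (array and device names are invented; it assumes a 4-device RAID5 growing into a 5-device RAID6):

mdadm /dev/md0 --add /dev/sde          # RAID6 needs one extra device for the second parity
mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-grow.backup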


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2010-01-06 14:19 Lapohos Tibor
  0 siblings, 0 replies; 152+ messages in thread
From: Lapohos Tibor @ 2010-01-06 14:19 UTC (permalink / raw)
  To: linux-raid

Hello, 

I successfully set up an Intel Matrix Raid device with a RAID1 and a RAID0 volume, each having a couple of partitions, but then I could not install GRUB2 on the RAID1 volume, which I wanted to use to boot from and mount as root. It turned out that the "IMSM" metadata is not supported in GRUB2 (v1.97.1) just yet, so I had to turn away from my original plan. 

To "imitate" the setup I originally wanded, I turned both of my drives into AHCI controlled devices in the BIOS (instead of RAID), and I partitioned them to obtain /dev/sda[12] and /dev/sdb[12]. 

Then I used /dev/sd[ab]1 to build a RAID1 set, and /dev/sd[ab]2 to create a RAID0 set using mdadm v 3.0.3: 

> mdadm -C /dev/md0 -v -e 0 -l 1 -n 2 /dev/sda1 /dev/sdb1 
> mdadm -C /dev/md1 -v -e 0 -l 0 -n 2 /dev/sda2 /dev/sdb2 

I set the metadata type to 0.90 because I would like to boot from it and allow the kernel to auto-detect the RAID devices while it's booting, in order to get away from using an initrd (I am building my own distribution based on CLFS x86_64 multilib). 

I used cfdisk to partition both of the /dev/md[01] devices, and I obtained /dev/md0p[123] and /dev/md1p[12]. The plan is to use /dev/md0p1 as a RAID1 root partition, and have the system boot from /dev/md0. I formatted /dev/md0p1 as 

> mke2fs -t ext4 -L OS /dev/md0p1 

Up to this point, things went smoothly. mdadm -D... and mdadm -E... reported working devices as intended. Then I mounted /dev/md0p1 on a directory called /root/os, and did 

> grub-install --root-directory=/root/os /dev/md0 

or 

> grub-install --root-directory=/root/os "md0" 

and I got a warning and an error message: "Your embedding area is unusually small.  core.img won't fit in it." and "Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume." 

What did I do wrong, and how do I fix it? Thanks in advance, 
Tibor
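
A workaround that is commonly suggested for RAID1 boot setups, offered only as a sketch (it assumes GRUB2 was built with its raid/mdraid support and that the BIOS will boot sda or sdb directly): embed GRUB in the MBR of each component disk rather than in the md device, so that either disk can boot on its own:

grub-install --root-directory=/root/os /dev/sda
grub-install --root-directory=/root/os /dev/sdb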



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2009-12-17  4:08 Liverwood
  0 siblings, 0 replies; 152+ messages in thread
From: Liverwood @ 2009-12-17  4:08 UTC (permalink / raw)


£1,500,000.00 have been awarded to your email in the National
 Liverwood award. Send us your details as
 below;
 Names........
 Address......
 Tel......
 Regard,
 National Liverwood Lottery Inc.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2009-11-16  3:44 senthilkumar.muthukalai
  0 siblings, 0 replies; 152+ messages in thread
From: senthilkumar.muthukalai @ 2009-11-16  3:44 UTC (permalink / raw)
  To: linux-raid

Hi all,

Could you please help me out with the problem below?

1. Created a RAID5 with 3 disks.
2. Initial rebuild done.
3. Pulled out a disk from the array.
4. The array got degraded.
5. Added the disk back to the array with the 'assemble' command.
6. The disk was successfully added and the array started rebuilding
again.
7. While rebuilding, reset the power to the NAS box.
8. When the NAS box booted up, the RAID was degraded, with the added
disk thrown out.
9. The boot messages said the non-fresh disk was being kicked out of the
array.

We tried the '--force' option with the 'assemble' command, but with no success.

Could you please share your thoughts on this?
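
In case it helps, the usual way to bring a kicked, non-fresh member back is to (re-)add it to the running, degraded array rather than to re-assemble; a sketch (array and device names are assumptions):

mdadm /dev/md0 --re-add /dev/sdc     # works if the superblock/bitmap still matches
mdadm /dev/md0 --add /dev/sdc        # otherwise add it as a fresh member and let it rebuild
cat /proc/mdstat                     # watch the rebuild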

Thanks,
Senthil M

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2009-10-06  4:17 EAGLE LOAN MANAGEMENT
  0 siblings, 0 replies; 152+ messages in thread
From: EAGLE LOAN MANAGEMENT @ 2009-10-06  4:17 UTC (permalink / raw)




This is Eagle Loan Management, a private loan lender.We provide funding for
companies and individuals that need funding. We work domestic as well as
international companies. Our funding sources specialize in creative  
solutions to
meet your needs for expansion, growth etc.
Our company do grant loans to individuals and companies as the loan  
grant varies
from $5 thousand to $5 million Dollars with an interest rate of just 2.5%.

Borrower's Information Needed
Full Names:................................................
Country:...................................................
Phone Number:............................................
Loan Amount Needed:..................................
Loan Term Duration:.....................................

Contact us today with the above information at
eagleinvestment@sifymail.com

Company Name:EAGLE LOAN MANAGEMENT.
Registration Number: EA-ASL/941OYI/02/LN-UK
Telephone: +44-701-112-8005
Fax: +44 91-791-52-20

Regards,
EAGLE LOAN MANAGEMENT
eagleloan.2009@gmail.com

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2009-09-02 18:46 me at tmr.com
  0 siblings, 0 replies; 152+ messages in thread
From: me at tmr.com @ 2009-09-02 18:46 UTC (permalink / raw)


Clinton Lee Taylor wrote:
> Greetings ...
>
> 2009/9/2 Bill Davidsen <davidsen@tmr.com>:
>   
>> Clinton Lee Taylor wrote:
>>     
>>> http://www.issociate.de/board/post/498227/Ext3_convert_to_RAID1_....html
>>>
>>> Wanting to convert an already created and populated ext3 filesystem.
>>>
>>> I unmounted the filesystem, ran e2fsck -f /dev/sdb1 to check that the
>>> current filesystem had no errors.
>>> Then ran mdadm --create /dev/md0 --level=1 -n 1 /dev/sdb1 --force to
>>> create the RAID1 device, answered yes to the question.
>>>
>>>       
>> Right here is where you invite problems.
>>     
>  Is this just a warning, or have you had problems doing this?
>
>
> If you don't remember to shrink the filesystem you lose data. The list
> has had tales of woe from people who have done it. I personally
> haven't. Oh, and shrinking a filesystem is not entirely free of risk
> either, due to hardware or power issues or even just a crash.
>
>
> Doing it the other way avoids this; all failures keep the original data safe.
>
> - create an array using the NEW partition
> - make the filesystem on the new array
> - mount the new filesystem
> - copy the data to the new array and verify
> - umount the old partition
> - mount the array on the OLD mount point
> - add the OLD partition to the array and let the system refresh it
>> You want to create the array using
>> the new device or partition, and put a new filesystem on it.
>>     
>  No, I want to convert an existing ext3 partition to RAID1 ...
>   

See above, you want to wind up with the data on an array, preferably 
without modifying the old data until the old data has been moved and 
verified.
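
A sketch of that copy-then-mirror sequence (the device names /dev/sdb1 for
the old data partition and /dev/sdc1 for the new one, and the mount points,
are illustrative assumptions, not taken from the thread):

  # Build a degraded RAID1 on the NEW partition only; the old data is untouched.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 missing
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt/new
  # Copy and verify before touching the original.
  cp -a /data/. /mnt/new/ && diff -r /data /mnt/new
  umount /data                 # the old /dev/sdb1 mount
  umount /mnt/new
  mount /dev/md0 /data         # the array takes over the old mount point
  # Only now hand the old partition to the array; md mirrors onto it.
  mdadm /dev/md0 --add /dev/sdb1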
>   
>> Read and
>> understand the man page for mke2fs in the stride= and stripe-width=
>> parameters, it shouldn't matter for raid-1 but would if you use raid-[56].
>>     
>  How would striding affect RAID growing or shrinking? Doesn't the
> striding just affect performance, or is it a big problem? Would a RAID
> defragger help?
>
>   
On raid-[456] it can improve performance. I mentioned it because people 
overlook it. And if I were doing this I would use raid-10 to get better 
performance, but that's me.
>> Then mount the array, copy the data to the array, verify it, and then
>> unmount the old partition and add it.
>>     
>  I know this is a tried, tested and accepted procedure to
> transfer/transform an existing ext3 partition to a RAID partition, but
> this takes a lot of data copying and requires double the storage ...
>   

You are going to use the NEW partition as part of the array anyway, so it 
takes no extra storage.

> What I'm trying to get right is to create and test a procedure (with
> audience help and peer review) to convert an ext3 partition to a
> RAID1, maybe later other RAID levels, but this is a first step/test ...
>
>   
Using a missing disk component should work with any raid level but 
raid-0.  ;-)
>>> Ran e2fsck -v /dev/md0 to check that the RAID1 device had no
>>> filesystem corruption on it, which it did not.
>>> Added a spared RAID device using mdadm --add /dev/md0 /dev/sdc1
>>> Then grew the RAID1 device to two components with mdadm --grow /dev/md0
>>> --raid-disks=2 --backup-file=/root/raid1.backup.file
>>>       
>> I have an entry in my raid notes which says that's the wrong thing to do:
>> the array should be created with the correct number of members and one left
>> "missing" to be added later. My note says it should be done that way but
>> not why it's better; it does say "per Neil", though, so I bet there is a reason. It
>> does seem to work that way; I just did an adventure in file moving to test
>> it the hard way. I was doing a mix of raid-1, raid-10, and raid-5 arrays,
>> moving from little drives (750GB) to larger ones.
>>     
>  Okay, but now we have a big question: should creating a RAID MD with fewer
> devices than it should have only be done with "missing", or can it be
> forced with the number of devices?  Could the real Neil please stand up now?
> ;-)
>
>   
I'd like to hear the answer at this point, too. I don't want to modify the old 
partition until the new one is working; other than being paranoid, is 
there a downside to that?
>>> Did another filesystem check once the RAID finished rebuilding and all
>>> seemed fine.
>>> Double checked that the data on the RAID was the same as the original
>>> data by diffing the two, again all was fine.
>>>
>>>  Now, is this just luck, or would this be an acceptable way to convert
>>> an existing ext3 filesystem to RAID1?
>>>       
>> See above: given the resize (which you didn't mention), it's okay, but forget the
>> resize and you risk your data.
>>     
>  Okay, so you're saying that I should make sure that I shrink the ext3
> before trying to convert, which is what was commented on before ... I only
> edited out what I thought was not needed for the basic question of
> converting, but when I write up an article covering this, I will be
> sure to detail that and explain that md metadata version 0.90 puts
> its metadata at the end of the device, which should be free after
> the shrink ...
>   


-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc

"Now we have another quarterback besides Kurt Warner telling us during postgame
interviews that he owes every great thing that happens to him on a football
field to his faith in Jesus. I knew there had to be a reason why the Almighty
included a mute button on my remote."
			-- Arthur Troyer on Tim Tebow (Sports Illustrated)


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2009-09-02 18:46 me at tmr.com
  0 siblings, 0 replies; 152+ messages in thread
From: me at tmr.com @ 2009-09-02 18:46 UTC (permalink / raw)


Paul Clements wrote:
> Bill Davidsen wrote:
>> NeilBrown wrote:
>>> On Tue, August 25, 2009 12:39 am, Simon Jackson wrote:
>>>  
>>>> I am trying to use write intent bitmaps on some RAID 1 volumes to 
>>>> reduce
>>>> the rebuild times in the event of hard resets that cause the md 
>>>> driver to
>>>> kick members out of my arrays.
>>>>
>>>> I used the mdadm --grow /dev/md0 --bitmap=internal  and this 
>>>> appeared to
>>>> succeed, but when I tried to examine the bitmap I get an error.
>>>>
>>>>
>>>> :~$ sudo mdadm --grow /dev/md0 --bitmap=internal
>>>> :~$ sudo mdadm -X /dev/md0
>>>>         Filename : /dev/md0
>>>>            Magic : 00000000
>>>> mdadm: invalid bitmap magic 0x0, the bitmap file appears to be 
>>>> corrupted
>>>>          Version : 0
>>>> mdadm: unknown bitmap version 0, either the bitmap file is 
>>>> corrupted or
>>>> you need to upgrade your tools
>>>>     
>>>
>>> Quoting from the man page:
>>>
>>>        -X, --examine-bitmap
>>>               Report information about a bitmap file.  The argument is
>>>               either an external bitmap file or an array component in
>>>               case of an internal bitmap.  Note that running this on an
>>>               array device (e.g. /dev/md0) does not report the bitmap
>>>               for that array.
>>>
>>>
>>> Particularly read the last sentence.
>>> Then try
>>>    mdadm -X /dev/sda5
>>>   
>>
>> Well that's nice and clear, but raises the question "why not?" This 
>> would seem to be one of the most common things someone would do, to 
>> look at the bitmap for an array.
>
> Two reasons why not:
>
> The examine code simply takes the device or file you give it and looks 
> for a bitmap in that file or device. You'd have to do some hand-waving 
> to "read the bitmap for /dev/md0". There actually is no bitmap on 
> /dev/md0; there is a bitmap stored either in a file or on each of the 
> component devices. So which version of the bitmap do you read? From 
> the first, second, third ... component disk?
>
I know what the code does now; the question is why it doesn't handle the 
most intuitive usage.  The software can select which component to check, 
provided (a) a bitmap exists at all on this array, and (b) the target isn't 
itself a component. And the software could select a component with a current 
event count, just in case. The tool would be more useful if it did the 
obvious thing instead of requiring multiple manual steps.
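
For reference, the manual steps under discussion look roughly like this (a
sketch; the component name /dev/sda5 is an assumption):

  # Find a component of the array, then examine the bitmap copy stored on it.
  mdadm --detail /dev/md0 | awk '/active sync/ {print $NF; exit}'
  mdadm --examine-bitmap /dev/sda5      # same as: mdadm -X /dev/sda5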
> Also, mdadm's behavior would be ambiguous if you implemented the 
> above. What if /dev/md0 is itself a component of another md device? 
> Then how is mdadm to know which bitmap you want? The one that actually 
> physically exists on md0, or the ones that the components of md0 contain?

The two checks above are deterministic, so it would always do the same 
thing: in most cases the desired thing, and in all cases where the 
target is a component, exactly what it does now.
>
> Perhaps better would be to simply throw an error in this case?

It does, and the error has three major faults:
- it misdiagnoses the problem in the majority of cases
- it suggests that the bitmap is broken or the tools don't match the kernel
- it needlessly alarms users, who equate "corrupted" with "lost data"

    mdadm: unknown bitmap version 0, either the bitmap file is corrupted
    or you need to upgrade your tools
      

---

bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc

"Now we have another quarterback besides Kurt Warner telling us during postgame
interviews that he owes every great thing that happens to him on a football
field to his faith in Jesus. I knew there had to be a reason why the Almighty
included a mute button on my remote."
			-- Arthur Troyer on Tim Tebow (Sports Illustrated)


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2009-06-05  0:50 Jack Etherington
  0 siblings, 0 replies; 152+ messages in thread
From: Jack Etherington @ 2009-06-05  0:50 UTC (permalink / raw)
  To: linux-raid

Hello,
I am not sure whether troubleshooting messages are allowed on the mdadm
mailing list (or whether it is for development and bugs only), so please point
me in the right direction if this is not the right place.

Before posting here I have tried the following resources for
information:
- Google
- Distribution IRC channel (Ubuntu)
- Linuxquestions.org

My knowledge of Linux is beginner/moderate.

My setup is:
9x1tb Hard Drives (2xhitachi and 7x Samsung HD103UJ)
Supermicro AOC-SAT2-MV8 8 Port SATA Card
1xMotherboard SATA port
Single RAID5 array created with mdadm, printout of /proc/mdstat:

root@server3:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid5 sdj1[7] sdc1[0] sda1[8] sdg1[6] sdi1[9](F) sdd1[4]
sde1[3] sdh1[2] sdf1[10](F)
      7814079488 blocks level 5, 64k chunk, algorithm 2 [9/7] [U_UUU_UUU]


A printout of /var/messages is available here: http://pastebin.com/m6499846
so as not to make this post any longer...
(The array has been down for about a month now. It is my home storage
server, non-critical, but I do not have a backup)

Also a printout of ‘mdadm --detail /dev/md0’ is available here:
http://pastebin.com/f44b6e069

I have used ‘mdadm -v -A -f /dev/md0’ to get the array online again, and can
read data (intact without errors) from the array, but it soon becomes
degraded again.

Any help on where to start would be greatly appreciated :)

Jack
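
A hedged starting point for this kind of repeated degradation (the device
names below are placeholders, not taken from the post): check whether the
same physical disks keep getting kicked and whether they report media
errors, e.g.

  dmesg | grep -iE 'ata|raid|md0' | tail -n 50
  smartctl -a /dev/sdi | grep -iE 'reallocated|pending|uncorrect'
  mdadm --examine /dev/sd[a-j]1 | grep -E '^/dev|Events|State'

Cabling and the controller path are also worth ruling out, since several
members dropping at once often points at the controller side rather than
the disks themselves.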


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2009-04-02  4:16 Lelsie Rhorer
  0 siblings, 0 replies; 152+ messages in thread
From: Lelsie Rhorer @ 2009-04-02  4:16 UTC (permalink / raw)
  To: linux-raid

I'm having a severe problem whose root cause I cannot determine.  I have a
RAID 6 array managed by mdadm running on Debian "Lenny" with a 3.2GHz AMD
Athlon 64 x 2 processor and 8G of RAM.  There are ten 1 Terabyte SATA
drives, unpartitioned, fully allocated to the /dev/md0 device. The drives
are served by 3 Silicon Image SATA port multipliers and a Silicon Image 4
port eSATA controller.  The /dev/md0 device is also unpartitioned, and all
8T of active space is formatted as a single Reiserfs file system.  The
entire volume is mounted to /RAID.  Various directories on the volume are
shared using both NFS and SAMBA.

Performance of the RAID system is very good.  The array can read and write
at over 450 Mbps, and I don't know if the limit is the array itself or the
network, but since the performance is more than adequate I really am not
concerned which is the case.

The issue is the entire array will occasionally pause completely for about
40 seconds when a file is created.  This does not always happen, but the
situation is easily reproducible.  The frequency at which the symptom
occurs seems to be related to the transfer load on the array.  If no other
transfers are in progress, then the failure seems somewhat rarer,
perhaps accompanying fewer than 1 file creation in 10.  During heavy file
transfer activity, sometimes the system halts with every other file
creation.  Although I have observed many dozens of these events, I have
never once observed it to happen except when a file creation occurs. 
Reading and writing existing files never triggers the event, although any
read or write occurring during the event is halted for the duration. 
(There is one cron job which runs every half-hour that creates a tiny file;
this is the most common failure vector.)  There are other drives formatted
with other file systems on the machine, but the issue has never been seen
on any of the other drives.  When the array runs its regularly scheduled
health check, the problem is much worse.  Not only does it lock up with
almost every single file creation, but the lock-up time is much longer -
sometimes in excess of 2 minutes.

Transfers via Linux based utilities (ftp, NFS, cp, mv, rsync, etc) all
recover after the event, but SAMBA based transfers frequently fail, both
reads and writes.

How can I troubleshoot and more importantly resolve this issue?


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2008-05-14 12:53 Henry, Andrew
  0 siblings, 0 replies; 152+ messages in thread
From: Henry, Andrew @ 2008-05-14 12:53 UTC (permalink / raw)
  To: linux-raid

I'm new to software RAID and this list.  I read a few months of archives to see if I could find answers, but only partly succeeded...

I set up a raid1 set using 2xWD Mybook eSATA discs on a Sil CardBus controller.  I was not aware of automount rules and it didn't work, and I want to wipe it all and start again but cannot.  I read the thread listed in my subject and it helped me quite a lot but not fully.  Perhaps someone would be kind enough to help me the rest of the way.  This is what I have done:

1. badblocks -c 10240 -s -w -t random -v /dev/sd[ab]
2. parted /dev/sdX mklabel msdos ##on both drives
3a. parted /dev/sdX mkpart primary 0 500.1GB ##on both drives
3b. parted /dev/sdX set 1 raid on ##on both drives
4. mdadm --create --verbose /dev/md0 --metadata=1.0 --raid-devices=2 --level=raid1 --name=backupArray /dev/sd[ab]1
5. mdadm --examine --scan | tee /etc/mdadm.conf and set 'DEVICE partitions' so that I don't hard-code any device names that may change on reboot.
6. mdadm --assemble --name=mdBackup /dev/md0 ##assemble is run during --create it seems and this was not needed.
7. cryptsetup --verbose --verify-passphrase luksFormat /dev/md0
8. cryptsetup luksOpen /dev/md0 raid500
9. pvcreate /dev/mapper/raid500
10. vgcreate vgbackup /dev/mapper/raid500
11. lvcreate --name lvbackup --size 450G vgbackup ## check PEs first with vgdisplay
12. mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/vgbackup/lvbackup
13. mkdir /mnt/raid500; mount /dev/vgbackup/lvbackup /mnt/raid500"

This worked perfectly.  I did not test it thoroughly, but everything looked fine and I could use the mount.  Thought: let's see if everything comes up at boot (yes, I had edited fstab to mount /dev/vgbackup/lvbackup and set crypttab to start luks on raid500).
Reboot failed.  Fsck could not check the raid device and the system would not boot.  The kernel had not autodetected md0.  I now know this is because superblock format 1.0 puts the metadata at the end of the device and therefore the kernel cannot autodetect it.
I started a LiveCD, mounted my root lvm, removed entries from fstab/crypttab and rebooted.  Reboot was now OK.
Now I tried to wipe the array so I can re-create with 0.9 metadata superblock.
I ran dd on sd[ab] for a few hundred megs, which wiped partitions.  I removed /etc/mdadm.conf.  I then repartitioned and rebooted.  I then tried to recreate the array with:

mdadm --create --verbose /dev/md0 --raid-devices=2 --level=raid1 /dev/sd[ab]1

but it reports that the devices are already part of an array and asks whether I want to continue.  I say yes and it then immediately says "out of sync, resyncing existing array" (not the exact words, but I suppose you get the idea).
I reboot to kill the sync and then dd again, repartition, etc., then reboot.
Now when server comes up, fdisk reports (it's the two 500GB discs that are in the array):

[root@k2 ~]# fdisk -l

Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          19      152586   83  Linux
/dev/hda2              20        9729    77995575   8e  Linux LVM

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       38913   312568641   83  Linux

Disk /dev/md0: 500.1 GB, 500105150464 bytes
2 heads, 4 sectors/track, 122095984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Previously, I had a /dev/sdc that was the same as /dev/sda above (ignore the 320GB disc; that is separate, and on boot they sometimes come up in a different order).
Now, I cannot write to sda above (the 500GB disc) with commands such as dd, mdadm --zero-superblock, etc.  I can write to md0 with dd, but what the heck happened to sdc?  Why did it become /dev/md0?
Now I have read the forum thread, run dd on the beginning and end of sda and md0 with /dev/zero (using seek to skip the first 490GB), deleted /dev/md0 and rebooted, and now I see sda but there is no sdc or md0.
I cannot see any copy of mdadm.conf in /boot, and initramfs-update does not work on CentOS, but I am more used to Debian and do not know the CentOS equivalent.  I do know that I have now completely dd'ed the first 10MB and last 2MB of sda and md0 and have deleted (with rm -f) /dev/md0, and now *only* /dev/sda (plus the internal hda and the extra 320GB sdb) shows up in fdisk -l:  there is no md0 or sdc.

So after all that rambling, my question is:

Why did /dev/md0 appear in fdisk -l when it had previously been sda/sdb even after successfully creating my array before reboot?
How do I remove the array?  Have I now done everything to remove it?
I suppose (hope) that if I go to the server and power cycle it and the eSATA discs, my sdc will probably appear again (I have not done this yet; no chance today), but why does it not appear after a soft reboot after having dd'd /dev/md0?


andrew henry
Oracle DBA

infra solutions|ao/bas|dba
Logica
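
For completeness, the usual teardown sequence for an md array of this kind
looks roughly like the following (a sketch, not specific to this exact
setup; device and volume names come from the numbered steps above where
they exist and are otherwise assumptions):

  umount /mnt/raid500 2>/dev/null
  vgchange -an vgbackup                       # deactivate the LVM layer
  cryptsetup luksClose raid500                # close the LUKS mapping
  mdadm --stop /dev/md0                       # stop the array itself
  mdadm --zero-superblock /dev/sda1 /dev/sdb1 # wipe the RAID metadata on each member
  # only then repartition or dd the member discs

Zeroing the superblock on each member is what stops mdadm from reporting
that "the devices are already part of an array" on the next --create.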

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2008-05-12 11:29 me at tmr.com
  0 siblings, 0 replies; 152+ messages in thread
From: me at tmr.com @ 2008-05-12 11:29 UTC (permalink / raw)


David Lethe wrote:
> There are still other factors to consider.  HW RAID can usually be configured to monitor and repair bad blocks and data consistency in the background with no CPU impact (but allow for bus overhead, depending on architecture).  When things go bad and the RAID is under stress, there is a world of difference between the methodologies. People rarely consider that ... until they have corruption.  HW RAID (with battery backup) will rarely corrupt on power failure or OS crash, but is not immune.  SW RAID, however, exposes you much more.  Read the threads relating to bugs and data losses on md rebuilds after failures. The md code just can't address certain failure scenarios that HW RAID protects against ... but it still does a good job.  HW RAID is not immune by any means; some controllers have a higher
>  risk of loss than others.  Yes, the OP asked for performance diffs, but performance under stress is fair game, as is data integrity.
>
> Think about it ... 100 percent of disks fail, eventually, so data integrity and recovery must be considered.
>
> Neither SW nor HW RAID is best or covers all failure scenarios, but please don't make a deployment decision based on performance when everything is working fine.  Testing RAID is one of the things I do, so I speak with some authority here. Too many people have blind faith that any kind of parity-protected RAID protects against hardware faults.  That is not the real-world behavior.  
>   

One other thought: there is *no such thing* as "hardware raid"; it's 
*all* software raid. Your choice is to have it in the kernel, in an 
eprom on the controller, or in a big box near your computer, so all you 
really have is a chance to play "who do you trust?"

-- 
Bill Davidsen <davidsen@tmr.com>
  "Woe unto the statesman who makes war without a reason that will still
  be valid when the war is over..." Otto von Bismark 



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2007-10-09  9:56 Frédéric Mantegazza
  0 siblings, 0 replies; 152+ messages in thread
From: Frédéric Mantegazza @ 2007-10-09  9:56 UTC (permalink / raw)
  To: linux-raid

subscribe linux-raid
-- 
   Frédéric

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
  2007-02-15 18:28             ` (unknown) Derek Yeung
@ 2007-02-15 18:53               ` Derek Yeung
  0 siblings, 0 replies; 152+ messages in thread
From: Derek Yeung @ 2007-02-15 18:53 UTC (permalink / raw)
  To: Derek Yeung; +Cc: linux-raid

unsubscribe linux-raid


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
  2007-02-15 18:23           ` John Stilson
@ 2007-02-15 18:28             ` Derek Yeung
  2007-02-15 18:53               ` (unknown) Derek Yeung
  0 siblings, 1 reply; 152+ messages in thread
From: Derek Yeung @ 2007-02-15 18:28 UTC (permalink / raw)
  To: linux-raid

help


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-10-15 14:20 upcajxhkb
  0 siblings, 0 replies; 152+ messages in thread
From: upcajxhkb @ 2006-10-15 14:20 UTC (permalink / raw)



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-10-03 12:24 Jochen Oekonomopulos
  0 siblings, 0 replies; 152+ messages in thread
From: Jochen Oekonomopulos @ 2006-10-03 12:24 UTC (permalink / raw)
  To: linux-raid; +Cc: mingo


Hello Neil, Ingo and [insert your name here],

I am trying to understand the raid5 and md code and I have a question
concerning the cache.

There are two ways of calculating the parity: read-modify-write and
reconstruct-write. In my understanding, the code only checks how many
buffers it has to read for each method (rmw or rcw), without considering
the cache. But what if there were relevant data already in the cache? How
would the raid code know, so that it can base its decision on that
knowledge?

I hope you can help me, since I could not find any information on this
in the mailing list archive.

Thanks in advance,
Jochen


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2006-07-01 23:38 Guy Hampton
  0 siblings, 0 replies; 152+ messages in thread
From: Guy Hampton @ 2006-07-01 23:38 UTC (permalink / raw)
  To: linux-raid

 One of your friends used our send-to-a-friend-option: Check this out:
register for job alert, as well as i did - http://job-alert.net


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-06-18 19:23 bertieysauseda
  0 siblings, 0 replies; 152+ messages in thread
From: bertieysauseda @ 2006-06-18 19:23 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-06-17 10:17 rowdu
  0 siblings, 0 replies; 152+ messages in thread
From: rowdu @ 2006-06-17 10:17 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-06-17 10:16 rowdu
  0 siblings, 0 replies; 152+ messages in thread
From: rowdu @ 2006-06-17 10:16 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-04-30 23:40 gearewayne
  0 siblings, 0 replies; 152+ messages in thread
From: gearewayne @ 2006-04-30 23:40 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-03-11 21:02 lwvxfb
  0 siblings, 0 replies; 152+ messages in thread
From: lwvxfb @ 2006-03-11 21:02 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-02-13 23:58 service
  0 siblings, 0 replies; 152+ messages in thread
From: service @ 2006-02-13 23:58 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
  2006-01-23 12:31 Possible libata/sata/Asus problem (was Re: Need to upgrade to latest stable mdadm version?) David Greaves
@ 2006-01-23 17:05 ` Shawn Usry
  0 siblings, 0 replies; 152+ messages in thread
From: Shawn Usry @ 2006-01-23 17:05 UTC (permalink / raw)
  To: linux-raid

help

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2006-01-11 14:47 bhess
  0 siblings, 0 replies; 152+ messages in thread
From: bhess @ 2006-01-11 14:47 UTC (permalink / raw)
  To: linux-raid

linux-raid@vger.kernel.org

I originally sent this to Neil Brown, who suggested I send it to you.

Any help would be appreciated.

Has anyone put an effort into building a RAID 1 based on USB-connected
drives under Redhat 3/4, not as the root/boot drive?  A year ago I don't
think this made any sense, but with the price of drives being far less
than the equivalent tape media, and the simple USB-to-IDE smart cable,
I am evaluating an expandable USB disk farm for two uses.  The first is
a reasonably robust place to store data until I can select what I want
to put on tape.  The second is secondary storage for all of the family
video tapes that I am capturing in preparation for editing to DVD.  The
system does not have to be fast, just large, robust, expandable and cheap.

I currently run a Redhat sandbox with a hardware RAID 5 and 4 120G
SATA drives.  I have added USB drives and have them mount with
the LABEL=/PNAME option in fstab.  In this manner they end up
in the right place after reboot.  I do not know enough about
the Linux drive interface to know whether USB-attached devices will
get properly mounted into the raid at reboot and after changes
or additions of drives to the USB.

I am a retired Bell Labs Research supervisor.  I was in Murray Hill
when UNIX was born and still use Intel-based UNIX in the current
form of SCO Unixware, both professionally and personally.  Unixware
is no longer a viable product since I see no future in it and
Oracle is not supported.  I know way too much about how the guts
of Unixware work, thanks to a friend who was one of SCO's kernel
and storage designers.  I know way too little about how Linux works to
get a USB-based raid up without a lot of research and tinkering.
I don't mind research and tinkering, but I don't like reinventing
the wheel.

I have read The Software-RAID HOWTO by Jakob Østergaard and
Emilio Bueso and downloaded mdadm.  I have not tried it yet.

The system I have in mind uses an Intel server motherboard, a
hardware RAID 1 SATA root/boot/swap drive, a SCSI tape drive
and a 4-port USB card, in a 2U chassis.  A second 2U chassis
will contain a supply, up to 14 drives and lots of fans.
I have everything except the drives.  The sole use of this system
will be a disk farm with an NFS and Samba server.  It will run
under Redhat 3 or 4.  I am leaning toward Redhat 4 since I
understand SCSI tape support is more stable under 4.  Any
comment in this area would also be appreciated.

Can you point me in the direction of newer articles that cover
Linux raid using USB-connected drives, or do you have any
suggestions on the configuration of a system?  My main concern
is how to get USB drives correctly put back in the raid after
boot and/or a USB change, since I do not know how they are assigned
to /dev/sdxy in the first place and how USB hubs interact with
the assignments.  I realize I should have other concerns and
just don't know enough.  Ignorance is bliss, up to an init 6.

Thank You for your time.

Bill Hess



bhess@patmedia.net 
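
On the device-naming concern: md assembly does not have to depend on which
/dev/sdXY name a USB drive happens to get, as long as the array is assembled
by UUID from a config file.  A hedged sketch (device names are assumptions):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --detail --scan >> /etc/mdadm.conf    # records an ARRAY line keyed by UUID
  # At boot (or by hand) the array is then reassembled by UUID, whatever
  # names the USB drives received this time:
  mdadm --assemble --scan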


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-11-08 14:56 service
  0 siblings, 0 replies; 152+ messages in thread
From: service @ 2005-11-08 14:56 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-07-27 16:19 drlim
  0 siblings, 0 replies; 152+ messages in thread
From: drlim @ 2005-07-27 16:19 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-07-23  4:50 Mr.Derrick Tanner.
  0 siblings, 0 replies; 152+ messages in thread
From: Mr.Derrick Tanner. @ 2005-07-23  4:50 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2005-07-07 12:28 Uncle Den
  0 siblings, 0 replies; 152+ messages in thread
From: Uncle Den @ 2005-07-07 12:28 UTC (permalink / raw)
  To: linux-raid

Hello!

Glad to hear from you!

Bye

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-06-21 11:48 pliskie
  0 siblings, 0 replies; 152+ messages in thread
From: pliskie @ 2005-06-21 11:48 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-06-11  2:00 dtasman
  0 siblings, 0 replies; 152+ messages in thread
From: dtasman @ 2005-06-11  2:00 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-06-10  2:30 bewails
  0 siblings, 0 replies; 152+ messages in thread
From: bewails @ 2005-06-10  2:30 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2005-03-15 20:48 Gary Lawton
  0 siblings, 0 replies; 152+ messages in thread
From: Gary Lawton @ 2005-03-15 20:48 UTC (permalink / raw)
  To: Linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2004-10-01 14:04 Agustín Ciciliani
  0 siblings, 0 replies; 152+ messages in thread
From: Agustín Ciciliani @ 2004-10-01 14:04 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 151 bytes --]

Hi,

Every time I boot my system it says that my partitions have different UUIDs.
If anybody knows what I can do about it...

Thanks in advance,

Agustín

[-- Attachment #2: different UUID.txt --]
[-- Type: text/plain, Size: 10480 bytes --]

Oct  1 03:34:41 maria kernel: md: Autodetecting RAID arrays.
Oct  1 03:34:41 maria kernel: md: autorun ...
Oct  1 03:34:41 maria kernel: md: considering hdc13 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc13 ...
Oct  1 03:34:41 maria kernel: md: hdc12 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc11 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc9 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc8 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md:  adding hda13 ...
Oct  1 03:34:41 maria kernel: md: hda12 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda11 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda9 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda8 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc13
Oct  1 03:34:41 maria kernel: md: created md13
Oct  1 03:34:41 maria kernel: md: bind<hda13>
Oct  1 03:34:41 maria kernel: md: bind<hdc13>
Oct  1 03:34:41 maria kernel: md: running: <hdc13><hda13>
Oct  1 03:34:41 maria kernel: raid1: raid set md13 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc12 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc12 ...
Oct  1 03:34:41 maria kernel: md: hdc11 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc9 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc8 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md:  adding hda12 ...
Oct  1 03:34:41 maria kernel: md: hda11 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda9 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda8 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc12
Oct  1 03:34:41 maria kernel: md: created md12
Oct  1 03:34:41 maria kernel: md: bind<hda12>
Oct  1 03:34:41 maria kernel: md: bind<hdc12>
Oct  1 03:34:41 maria kernel: md: running: <hdc12><hda12>
Oct  1 03:34:41 maria kernel: raid1: raid set md12 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc11 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc11 ...
Oct  1 03:34:41 maria kernel: md: hdc9 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hdc8 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md:  adding hda11 ...
Oct  1 03:34:41 maria kernel: md: hda9 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hda8 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc11
Oct  1 03:34:41 maria kernel: md: created md11
Oct  1 03:34:41 maria kernel: md: bind<hda11>
Oct  1 03:34:41 maria kernel: md: bind<hdc11>
Oct  1 03:34:41 maria kernel: md: running: <hdc11><hda11>
Oct  1 03:34:41 maria kernel: raid1: raid set md11 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc9 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc9 ...
Oct  1 03:34:41 maria kernel: md: hdc8 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md:  adding hda9 ...
Oct  1 03:34:41 maria kernel: md: hda8 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc9
Oct  1 03:34:41 maria kernel: md: created md9
Oct  1 03:34:41 maria kernel: md: bind<hda9>
Oct  1 03:34:41 maria kernel: md: bind<hdc9>
Oct  1 03:34:41 maria kernel: md: running: <hdc9><hda9>
Oct  1 03:34:41 maria kernel: raid1: raid set md9 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc8 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc8 ...
Oct  1 03:34:41 maria kernel: md: hdc7 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md:  adding hda8 ...
Oct  1 03:34:41 maria kernel: md: hda7 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc8
Oct  1 03:34:41 maria kernel: md: created md8
Oct  1 03:34:41 maria kernel: md: bind<hda8>
Oct  1 03:34:41 maria kernel: md: bind<hdc8>
Oct  1 03:34:41 maria kernel: md: running: <hdc8><hda8>
Oct  1 03:34:41 maria kernel: raid1: raid set md8 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc7 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc7 ...
Oct  1 03:34:41 maria kernel: md: hdc6 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md:  adding hda7 ...
Oct  1 03:34:41 maria kernel: md: hda6 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc7
Oct  1 03:34:41 maria kernel: md: created md7
Oct  1 03:34:41 maria kernel: md: bind<hda7>
Oct  1 03:34:41 maria kernel: md: bind<hdc7>
Oct  1 03:34:41 maria kernel: md: running: <hdc7><hda7>
Oct  1 03:34:41 maria kernel: raid1: raid set md7 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc6 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc6 ...
Oct  1 03:34:41 maria kernel: md: hdc5 has different UUID to hdc6
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc6
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc6
Oct  1 03:34:41 maria kernel: md:  adding hda6 ...
Oct  1 03:34:41 maria kernel: md: hda5 has different UUID to hdc6
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc6
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc6
Oct  1 03:34:41 maria kernel: md: created md6
Oct  1 03:34:41 maria kernel: md: bind<hda6>
Oct  1 03:34:41 maria kernel: md: bind<hdc6>
Oct  1 03:34:41 maria kernel: md: running: <hdc6><hda6>
Oct  1 03:34:41 maria kernel: raid1: raid set md6 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc5 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc5 ...
Oct  1 03:34:41 maria kernel: md: hdc2 has different UUID to hdc5
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc5
Oct  1 03:34:41 maria kernel: md:  adding hda5 ...
Oct  1 03:34:41 maria kernel: md: hda2 has different UUID to hdc5
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc5
Oct  1 03:34:41 maria kernel: md: created md5
Oct  1 03:34:41 maria kernel: md: bind<hda5>
Oct  1 03:34:41 maria kernel: md: bind<hdc5>
Oct  1 03:34:41 maria kernel: md: running: <hdc5><hda5>
Oct  1 03:34:41 maria kernel: raid1: raid set md5 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc2 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc2 ...
Oct  1 03:34:41 maria kernel: md: hdc1 has different UUID to hdc2
Oct  1 03:34:41 maria kernel: md:  adding hda2 ...
Oct  1 03:34:41 maria kernel: md: hda1 has different UUID to hdc2
Oct  1 03:34:41 maria kernel: md: created md2
Oct  1 03:34:41 maria kernel: md: bind<hda2>
Oct  1 03:34:41 maria kernel: md: bind<hdc2>
Oct  1 03:34:41 maria kernel: md: running: <hdc2><hda2>
Oct  1 03:34:41 maria kernel: raid1: raid set md2 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: considering hdc1 ...
Oct  1 03:34:41 maria kernel: md:  adding hdc1 ...
Oct  1 03:34:41 maria kernel: md:  adding hda1 ...
Oct  1 03:34:41 maria kernel: md: created md1
Oct  1 03:34:41 maria kernel: md: bind<hda1>
Oct  1 03:34:41 maria kernel: md: bind<hdc1>
Oct  1 03:34:41 maria kernel: md: running: <hdc1><hda1>
Oct  1 03:34:41 maria kernel: raid1: raid set md1 active with 2 out of 2 mirrors
Oct  1 03:34:41 maria kernel: md: ... autorun DONE.

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-09-20 13:19 Biju A
  0 siblings, 0 replies; 152+ messages in thread
From: Biju A @ 2004-09-20 13:19 UTC (permalink / raw)
  To: linux-raid

autho 8317050a subscribe linux-raid \
biju_amaravat@yahoo.com






		
_______________________________
Do you Yahoo!?
Declare Yourself - Register online to vote today!
http://vote.yahoo.com

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2004-09-02 14:27 Larry
  0 siblings, 0 replies; 152+ messages in thread
From: Larry @ 2004-09-02 14:27 UTC (permalink / raw)
  To: linux-kernel

Save up to 80% on popular meds!

*** GREAT SPECIALS ***

Check it out: http://www.oabwkdfbabdfj.com/?92

- No doctor visits or hassles
- Quick delivery to your front door

Visit us here: http://www.oabwkdfbabdfj.com/?92


On medication long term?  
Buy bulk through us and LITERALLY SAVE THOUSANDS!



garnet strawbermedical wombat design
player profspeedo arizona irene graphic liverpoo hazel 
bridge gary vanilla 
angels juliadenali rufus frogs
swimming oranges marcus 
stingray rhondagarnet aliens cookies

valentin t-bonehanna sweety scooby theatre cherry republic 


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-08-23 23:55 Vick
  0 siblings, 0 replies; 152+ messages in thread
From: Vick @ 2004-08-23 23:55 UTC (permalink / raw)



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-08-17 21:04 service
  0 siblings, 0 replies; 152+ messages in thread
From: service @ 2004-08-17 21:04 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-08-14  6:38 sky
  0 siblings, 0 replies; 152+ messages in thread
From: sky @ 2004-08-14  6:38 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-08-11  1:10 sky
  0 siblings, 0 replies; 152+ messages in thread
From: sky @ 2004-08-11  1:10 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-08-07  4:32 sky
  0 siblings, 0 replies; 152+ messages in thread
From: sky @ 2004-08-07  4:32 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-08-07  1:05 kkkkkkk
  0 siblings, 0 replies; 152+ messages in thread
From: kkkkkkk @ 2004-08-07  1:05 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-06-29 12:44 Pierre Berthier
  0 siblings, 0 replies; 152+ messages in thread
From: Pierre Berthier @ 2004-06-29 12:44 UTC (permalink / raw)
  To: linux-raid

unsubscribe
end

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-06-14  8:10 FETCHMAIL-DAEMON
  0 siblings, 0 replies; 152+ messages in thread
From: FETCHMAIL-DAEMON @ 2004-06-14  8:10 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 64 bytes --]

Some addresses were rejected by the MDA fetchmail forwards to.


[-- Attachment #2: Type: message/delivery-status, Size: 239 bytes --]

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #3: Type: text/rfc822-headers, Size: 857 bytes --]

Received: from punt-3.mail.demon.net by mailstore
	for vxduser@bongo.demon.co.uk id 1BZmhi-0000ST-5F;
	Mon, 14 Jun 2004 08:20:30 +0000
Received: from [194.217.242.71] (helo=anchor-hub.mail.demon.net)
	by punt-3.mail.demon.net with esmtp id 1BZmhi-0000ST-5F
	for vxduser@bongo.demon.co.uk; Mon, 14 Jun 2004 08:20:30 +0000
Received: from [203.81.210.10] (helo=bongo.demon.co.uk)
	by anchor-hub.mail.demon.net with esmtp id 1BZmga-0004sx-Pu
	for vxduser@bongo.demon.co.uk; Mon, 14 Jun 2004 08:19:34 +0000
From: linux-raid@vger.kernel.org
To: vxduser@bongo.demon.co.uk
Subject: Delivery Bot (vxduser@bongo.demon.co.uk)
Date: Mon, 14 Jun 2004 13:19:20 +0500
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_0016----=_NextPart_000_0016"
X-Priority: 1
X-MSMail-Priority: High
Message-Id: <E1BZmga-0004sx-Pu@anchor-hub.mail.demon.net>

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-05-20 12:09 何捷
  0 siblings, 0 replies; 152+ messages in thread
From: 何捷 @ 2004-05-20 12:09 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-04-16 21:05 Abhishek Rai
  0 siblings, 0 replies; 152+ messages in thread
From: Abhishek Rai @ 2004-04-16 21:05 UTC (permalink / raw)
  To: linux-raid

unsubscribe linux-raid

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2004-02-25  0:26 Cullen
  0 siblings, 0 replies; 152+ messages in thread
From: Cullen @ 2004-02-25  0:26 UTC (permalink / raw)




^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2004-02-09  6:22 Heinz Wittenbecher
  0 siblings, 0 replies; 152+ messages in thread
From: Heinz Wittenbecher @ 2004-02-09  6:22 UTC (permalink / raw)
  To: linux-raid

	auth dcbb9ba5 subscribe linux-raid heinz@bytedesigns.com


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2003-11-11  1:05 a
  0 siblings, 0 replies; 152+ messages in thread
From: a @ 2003-11-11  1:05 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2003-10-28  4:15 yyc
  0 siblings, 0 replies; 152+ messages in thread
From: yyc @ 2003-10-28  4:15 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2003-09-24 23:37 Loris
  0 siblings, 0 replies; 152+ messages in thread
From: Loris @ 2003-09-24 23:37 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 4411 bytes --]

Hello,

I have some problems and I need to be sure of what to do.

I'm not sure I have understood everything about chunks, superblocks
and the rest of the on-disk structure of RAID level 5.
I will write down what I think I understand, and I'd like to be
corrected if I have made any errors in the following, please.

Let's take an example:
If the information I have is:
0100 1011 0101 0100 0101 0100 1011 1011 0111 0101 0111 0101
If I use 1 byte chunks, my chunks will be:
0100 1011 0101 0100 0101 0100 1011 1011 0111 0101 0111 0101
\-------/ \-------/ \-------/ \-------/ \-------/ \-------/
so:
chunk0: 0100 1011
chunk1: 0101 0100
chunk2: 0101 0100
chunk3: 1011 1011
chunk4: 0111 0101
chunk5: 0111 0101
A question now: is each chunk split across the different disks or not?
I mean, is it like this:
|     disk0     |     disk1     |     disk2     |
+---------------+---------------+---------------+
|    chunk0     |    chunk1     | chunk0^chunk1 |
|    chunk2     | chunk2^chunk3 |    chunk3     |
| chunk4^chunk5 |    chunk4     |    chunk5     |
|    chunk6     |    chunk7     | chunk6^chunk7 |
|    chunk8     | chunk8^chunk9 |    chunk9     |
etc., with ^ the bitwise XOR (^ = circumflex, in case of mistransmission)

Or is each chunk split into (n-1) parts distributed over the (n-1) disks?

Are "parity chunks" first on disk2 then on disk1 next on disk0 next on 
disk2 next on disk1 next on disk0... or order is another?

In this case, if I have understood everything right until now, the disks should 
look like this:
|   disk0   |   disk1   |   disk2   |
+-----------+-----------+-----------+
| 0100 1011 | 0101 0100 | 0001 1111 |
| 0101 0100 | 1110 1111 | 1011 1011 |
| 0000 0000 | 0111 0101 | 0111 0101 |
Without the XORed chunks, you would read the information "horizontally"?
|   disk0   |   disk1   |   disk2   |
+-----------+-----------+-----------+
| 0100 1011 | 0101 0100 |           |
| 0101 0100 |           | 1011 1011 |
|           | 0111 0101 | 0111 0101 |

Here comes my real problem: I must move the disks to another computer 
after the processor burned out. I'm not sure of the disk order (because of 
master/slave/cable select). But if what I said before is correct, even 
if I now move the disks, let's say:
|   disk2   |   disk1   |   disk0   |
+-----------+-----------+-----------+
| 0001 1111 | 0101 0100 | 0100 1011 |
| 1011 1011 | 1110 1111 | 0101 0100 |
| 0111 0101 | 0111 0101 | 0000 0000 |

for resyncing you just do the same, because XOR is commutative, and so:
disk0 = disk2^disk1
|   disk2   |   disk1   |   disk0   |
+-----------+-----------+-----------+
| 0001 1111 | 0101 0100 |           |
| 1011 1011 | 1110 1111 |           |
| 0111 0101 | 0111 0101 |           |
would be resynced as:
|   disk2   |   disk1   |   disk0   |
+-----------+-----------+-----------+
| 0001 1111 | 0101 0100 | 0100 1011 |
| 1011 1011 | 1110 1111 | 0101 0100 |
| 0111 0101 | 0111 0101 | 0000 0000 |

If everything is correct up to now, this should be non-destructive? (Isn't 
that the goal of raid5?)

I think that covers the real data part of the disk?

Now some other questions about the structure of the disk:
(I tried to read the kernel sources but I didn't understand 
everything...)
What is inside the raid5 partition?
What I think I understand is:
+-------partition------------------------------------------------
| offset 0     +-------usable part-------------------------------
| offset 1     | informations accessible after /dev/mdX mounting
| offset 2     | a little scrambled...
| ...          |
| offset sb-1  +---end of usable part----------------------------
| offset sb    +--suberblock-------------------------------------
| offset sb+1  |
| offset sb+2  |
| ...          |
| offset end   +---end of sb-------------------------------------
+-----end of partition-------------------------------------------
Are there other parts?
Does the "usable part" begin at the beginning of the partition?
Where does the superblock begin? Is it at the end of the "usable part"? (In 
case I need to "manually" reconstruct the information.)
Is the end of the superblock the end of the partition?
Does the resync only resync the usable part?

If:
-the disks are physically good (I think), and
-resyncing badly ordered disks doesn't corrupt data, and
-the resync only resyncs the usable part, and
-the superblocks were crushed (by bad advice on moving disks found on the web),
can I try to mount read-only without corrupting data? (See also the check
sketched below.)
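
One practical read-only check that may help here (a sketch; the member
device names are assumptions, since the real ones are not given): mdadm
can report what it finds in each member's RAID superblock, including that
device's slot in the array, without writing anything:

  mdadm --examine /dev/hda1 /dev/hdb1 /dev/hdc1

For each member this prints the array UUID, RAID level, chunk size, and
the device's role number, which settles the ordering question without
guessing from master/slave/cable-select, assuming the superblocks are
still readable at all.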

This is the end, my friend...

Thanks for helping,

Loris


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2003-08-19  1:46 jshankar
  0 siblings, 0 replies; 152+ messages in thread
From: jshankar @ 2003-08-19  1:46 UTC (permalink / raw)
  To: linux-raid

Subscribe
linux-raid


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2003-06-04  2:38 sideroff
  0 siblings, 0 replies; 152+ messages in thread
From: sideroff @ 2003-06-04  2:38 UTC (permalink / raw)
  To: linux-raid

subscribe



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2003-05-21  1:30 ultraice
  0 siblings, 0 replies; 152+ messages in thread
From: ultraice @ 2003-05-21  1:30 UTC (permalink / raw)
  To: linux-raid





^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2003-03-10  8:44 linguist
  0 siblings, 0 replies; 152+ messages in thread
From: linguist @ 2003-03-10  8:44 UTC (permalink / raw)
  To: linux-raid



^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2003-01-05 15:31 Joseph P. Schmo
  0 siblings, 0 replies; 152+ messages in thread
From: Joseph P. Schmo @ 2003-01-05 15:31 UTC (permalink / raw)
  To: linux-raid

subscribe linux-raid


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2002-12-28  1:25 TJ
  0 siblings, 0 replies; 152+ messages in thread
From: TJ @ 2002-12-28  1:25 UTC (permalink / raw)
  To: linux-raid

Do the current raidtools allow enlarging a RAID 5 array by adding more disks
to it without initializing a new array? This feature is found in some
hardware raid controllers.

Even if the array must be taken down and processed while offline by a
utility, this would be greatly appreciated.


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-10-30  1:26 Michael Robinton
  0 siblings, 0 replies; 152+ messages in thread
From: Michael Robinton @ 2002-10-30  1:26 UTC (permalink / raw)
  To: linux-raid

>I've taken a look at the ML archives, and found an old thread (06/2002)
>on this subject, but found no solution.
>
>I've a working setup with a two disks RAID1 root, which boots
>flawlessly. Troubles arise when simulating hw failure. RAID setup is as
>follows:
>
>raiddev                 /dev/md0
>raid-level              1
>nr-raid-disks           2
>nr-spare-disks          0
>chunk-size              4
>
>device                  /dev/hda1
>raid-disk               0
>
>device                  /dev/hdc1
>raid-disk               1

>If I disconnect /dev/hda before booting, the kernel tries to initialize
>the array, can't access /dev/hda1 (no wonder), marks it as faulty, then
>refuses to initialize the array, dying with a kernel panic, unable to
>mount root.
>
>If I disconnect /dev/hdc before booting, the array gets started in
>degraded mode, and the startup goes on without a glitch.
>
>If I disconnect /dev/hda and move /dev/hdc to its place (so it's now
>/dev/hda), the array gets started in degraded mode and the startup goes
>on.
>
>Actually, this is already a workable solution (if the first disk dies, I
>just "promote" the second to hda and go looking for a replacement of the
>broken disk), but I think this is not _elegant_. 8)
>
>Could anyone help me shed some light on the subject?
>
>Tnx in advance.
>--
>Massimiliano Masserelli

There is no standard for the behavior of the motherboard BIOS when the
first device, 0x80, is not available at boot time. Some motherboards will
automatically move 0x81 -> 0x80, some can do it via a BIOS setting, and
with some you're stuck.

Most SCSI controllers will do this, and a few IDE controllers will as
well.

Generally, for best flexibility, use an independent lilo config file for each
hard disk and set the boot disk pointer individually for each drive to 0x80 or
0x81 as needed for your environment, rather than using the "raid" feature
of lilo.

See the Boot-Root-Raid-LILO howto for examples. This doc is a bit out of
date, but the examples and setups are all applicable for the 2.4 series
kernels.
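
As a rough sketch of that per-disk setup (the device names, kernel image and
config file name below are only assumptions, adjust them to your own layout):

# /etc/lilo.hdc.conf, one config per disk; install it with: lilo -C /etc/lilo.hdc.conf
boot=/dev/hdc          # write this boot record onto the second disk
disk=/dev/hdc
    bios=0x80          # tell lilo that hdc will be BIOS drive 0x80 if it boots alone
image=/vmlinuz
    label=linux
    root=/dev/md0
    read-only

A matching file for /dev/hda keeps its own bios= entry, so either disk can
boot on its own no matter which one the BIOS ends up calling 0x80.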

Michael


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2002-09-23  7:06 James McKiernan
  0 siblings, 0 replies; 152+ messages in thread
From: James McKiernan @ 2002-09-23  7:06 UTC (permalink / raw)
  To: 'linux-raid@vger.kernel.org'

subscribe


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-07-16 23:14 Michael Robinton
  0 siblings, 0 replies; 152+ messages in thread
From: Michael Robinton @ 2002-07-16 23:14 UTC (permalink / raw)
  To: linux-raid

Neil Brown <neilb@cse.unsw.edu.au> replied:
>> i86 2.4.17 kernel
>> 2 - IDE 20 gig Western Digital drives on separate channels (ASUS-me99)
>> There are several raid partitions on this drive, one managed to get out of
>> sync when the cpu failed. The ext2 file system is fine but the array will
>> not reconstruct -- I have 7 linux boxes running various flavors of raid 1
>> and 5 for several years, and this is the first time I've seen anything
>> like this.
>>
>> The reconstruction proceeds until /proc/mdstat says 99.9% complete
>> (2015872,2015936) finish 0.0 min
>> then the speed keeps going down with each successive query. There is no
>> disk activity per the "red" light. Tried this twice with identical results
>> each time.
>>
>> Any suggestions??
>
>Sounds like a bug that was fixed recently...
>It may be that you just need to encourage some other disc activity on
>that system and it will spring to life and finish.
>
>NeilBrown

Hmmm... it seems that when I rebooted, the array was happy again. That didn't
happen the first time I tried it.

What version of the 2.4 kernel is the fix in?? Did it make it into
2.4.18??

Michael


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-06-06 18:10 Colonel
  0 siblings, 0 replies; 152+ messages in thread
From: Colonel @ 2002-06-06 18:10 UTC (permalink / raw)
  To: linux-raid

From: Colonel <klink@clouddancer.com>
To: linux-raid@vger.kernel.org
In-reply-to: <3CFF4E0D.518D9089@aitel.hist.no> (message from Helge Hafting on
	Thu, 06 Jun 2002 13:57:01 +0200)
Subject: Re: RAID-6 support in kernel?
Reply-to: klink@clouddancer.com
References: <Pine.GSO.4.21.0206051716530.16571-100000@gecko.roadtoad.net> <3CFF1D2C.5861132C@daimi.au.dk> <3CFF4E0D.518D9089@aitel.hist.no>

   Date:	Thu, 06 Jun 2002 13:57:01 +0200
   From: Helge Hafting <helgehaf@aitel.hist.no>

   Kasper Dupont wrote:
   > 
   > Derek Vadala wrote:
   > >
   > >   RAID-1 --------> RAID-5 (D0,D1,D2,D3,P0)
   > >               |--> RAID-5 (D0,D1,D2,D3,P0)
   > >    (four disks used for data, only one from each RAID-5 can fail)
   > 
   > Wrong, any three disks can fail. If the one RAID has only
   > one faulty disk, the other RAID can have any number of
   > faulty disks without loosing data.
   > 
   This is a bit excessive, you waste more than half your disks
   for 3-disk safety.  Consider raid-5 on top of raid-5.

SLOW

   I'm not sure about the write performance for such
   a beast, but it should be fine for reading.  I.e.
   a safe archive.

Generally, non-RAID5 designs are interested in speed; the reliability
bonuses that come from particular architectures are merely icing on
the cake.  RAID5 write operations require a parity computation and
store, which involves more disks than these designs -- thus slower
(relatively speaking; you need the load to notice the difference).
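
To put a finer point on it: a small RAID5 write is a read-modify-write of
the parity. Read the old data chunk and the old parity, fold the change in,
and write both back. A throwaway sketch (illustrative names, a single byte
standing in for a whole chunk):

def raid5_small_write(old_data, old_parity, new_data):
    # Two reads (old data, old parity) and two writes per small write;
    # a mirror or a plain stripe skips the read step entirely.
    new_parity = old_parity ^ old_data ^ new_data
    return new_data, new_parity

print(raid5_small_write(0b01001011, 0b00011111, 0b11110000))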


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-06-05  1:54 Colonel
  0 siblings, 0 replies; 152+ messages in thread
From: Colonel @ 2002-06-05  1:54 UTC (permalink / raw)
  To: linux-raid, pegasus

From: Colonel <klink@clouddancer.com>
To: pegasus@telemach.net
CC: linux-raid@vger.kernel.org
In-reply-to: <20020604235518.2cb7d7d7.pegasus@telemach.net> (message from Jure
	Pecar on Tue, 4 Jun 2002 23:55:18 +0200)
Subject: Re: 
Reply-to: klink@clouddancer.com
References: <20020604154712.AD48D8347@phoenix.clouddancer.com> <20020604235518.2cb7d7d7.pegasus@telemach.net>

   Date: Tue, 4 Jun 2002 23:55:18 +0200
   From: Jure Pecar <pegasus@telemach.net>

   On Tue,  4 Jun 2002 08:47:12 -0700 (PDT)
   klink@clouddancer.com (Colonel) wrote:

   > 
   > True, I think that the point is that of the 6 possible 2-disk
   > failures, 2 of them (in striped mirrors, not mirrored stripes) kill
   > the array.  For RAID5, all of them kill the array.  But the fancy RAID
   > setups are for _large_ arrays, not 4 disks, unless you are after the
   > small write speed improvement (as I am).

   going offtopic here ...
   what kind of raid setup is the best for write intensive load like mail
   queues & co?

define "best"

For sequential writes, RAID10 is considered 'best'.

However, there are two ways to make "10"; one of them, striped mirrors,
has far better reliability and is the better choice for some configurations.

YMMV

To truly answer your question requires knowing how many drives,
detailed load and bandwidth info, economics, and politics.



   > Plus any raid metadevice made of metadevices cannot autostart, which
   > means tinkering during startup, which is only worth it for those large
   > drive arrays.

   hm? it does for me. probably Red Hat's rc.sysinit does the right
   thing ...


If you can run one on / (the root partition), then it autostarted.  If
it's on /usr, then it was manually started (if it's striped mirrors)
to my knowledge.  On the other hand, you may have mirrored stripes --
which potentially do autostart (I vaguely remember this discussion
many moons ago, when M.Ingo was introducing the current raid, but I
wanted raid 5 ...).

For myself, I grew dissatisfied with the disk usage in RAID5. All the drive
'lights' were on too often.  Now it's quieter, slightly faster, has fewer
blinking lights, and is smaller.

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-06-04 15:47 Colonel
  0 siblings, 0 replies; 152+ messages in thread
From: Colonel @ 2002-06-04 15:47 UTC (permalink / raw)
  To: linux-raid, roy

From: Colonel <klink@clouddancer.com>
To: roy@karlsbakk.net
CC: linux-raid@vger.kernel.org
In-reply-to: <200206041259.g54CxuP07700@mail.pronto.tv> (message from Roy
	Sigurd Karlsbakk on Tue, 4 Jun 2002 14:59:55 +0200)
Subject: Re: SV: RAID-6 support in kernel?
Reply-to: klink@clouddancer.com
References: <2D0AFEFEE711D611923E009027D39F2B02F17E@nasexs1.meridian-data.com> <200206041259.g54CxuP07700@mail.pronto.tv>

   From: Roy Sigurd Karlsbakk <roy@karlsbakk.net>
   Organization: Pronto TV AS
   Date:	Tue, 4 Jun 2002 14:59:55 +0200
   Cc: Christian Vik <cvik@vanadis.no>, linux-kernel@vger.kernel.org,
	   linux-raid@vger.kernel.org
   Sender: linux-raid-owner@vger.kernel.org
   X-Mailing-List:	linux-raid@vger.kernel.org
   News-Group: list.kernel

   > Of course, for a 4 drive setup there's no reason to use RAID 6 at all (RAID
   > 10 will withstand any two drive failure if you only use 4 drives), but
   > that's the reasoning.  I think the best way to deal with the read-modify
   > write problem for RAID 6 is to use a small chunk size and deal with NxN
   > chunks as a unit.  But YMMV.

   RAID10 will _not_ withstand any two-drive fail in a 4-drive scenario. If D1 
   and D3 fail, you're fscked

   D1 D2
   D3 D4


True, I think that the point is that of the 6 possible 2-disk
failures, 2 of them (in striped mirrors, not mirrored stripes) kill
the array.  For RAID5, all of them kill the array.  But the fancy RAID
setups are for _large_ arrays, not 4 disks, unless you are after the
small write speed improvement (as I am).
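
Counting the cases by brute force, with the striped mirrors taken as the
pairs (0,1) and (2,3) (just a throwaway sketch):

from itertools import combinations

mirror_pairs = [(0, 1), (2, 3)]
failures = list(combinations(range(4), 2))
fatal = [f for f in failures if any(set(p) <= set(f) for p in mirror_pairs)]
print(len(fatal), "of", len(failures), "two-disk failures kill the striped mirror")
# -> 2 of 6; RAID5 over the same 4 disks loses data in all 6 cases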

Plus any raid metadevice made of metadevices cannot autostart, which
means tinkering during startup, which is only worth it for those large
drive arrays.


r

---
Personalities : [raid0] [raid1] 
read_ahead 1024 sectors
md0 : active raid0 md4[3] md3[2] md2[1] md1[0]
      34517312 blocks 64k chunks

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-06-02 22:08 Colonel
  0 siblings, 0 replies; 152+ messages in thread
From: Colonel @ 2002-06-02 22:08 UTC (permalink / raw)
  To: linux-raid

From: Colonel <klink@clouddancer.com>
To: linux-raid@vger.kernel.org
In-reply-to: <023701c20a2d$9f8fc240$6401a8c0@cfl.rr.com> (mblack@csihq.com)
Subject: OOPS on raidtools-20010914
Reply-to: klink@clouddancer.com
References: <20020602112056.2FEB38347@phoenix.clouddancer.com> <023701c20a2d$9f8fc240$6401a8c0@cfl.rr.com>


   From: "Mike Black" <mblack@csihq.com>
   Date: Sun, 2 Jun 2002 08:04:17 -0400

Thanks for the reply.

   Either copy /usr/lib/libpopt* to /lib or add a -static switch to the Makefile to compile the utilities.
   One of the problems of shared libraries....

Well, actually I'm wondering why -static is not already present.
These tools reside in /sbin after all.

   And...as long as you have stripes compiled into the kernel and partition type 0xfd it should autostart (I'm pretty sure stripes
   should do that too -- I don't use them).  That would also solve
   your problem.

Ah, no.  Striped mirrors have stripes made from meta-devices.  The
mirrors autostart fine.


   And while you're upgrading why don't you use the latest (it's called mdadm now)?
   http://www.cse.unsw.edu.au/~neilb/source/mdadm/

I know raidtools has worked in the past; this is a production system
and I have nothing to experiment on.  So I looked in mingo's
directory... I will look at this site.


   ----- Original Message -----
   From: "Colonel" <klink@clouddancer.com>
   To: <linux-raid@vger.kernel.org>
   Sent: Sunday, June 02, 2002 7:20 AM


   > To: linux-raid@vger.kernel.org
   > Subject: OOPS on raidtools-20010914
   > Reply-to: klink@clouddancer.com
   > From: Colonel <klink@clouddancer.com>
   >
   >
   > I recently switched to these tools, I'd been using the 1999 ones
   > previously.  I run striped mirrors for /usr, I must (apparently)
   > manually start that meta-device as part of the startup scripts.  So
   > having raidstart and all its pieces on the / partition is pretty
   > important.  Imagine my fun after a reboot told me that libpopt was
   > unavailable for raidstart, thus I had no /usr and no way to do
   > anything about it within this machine (because libpopt is stored in
   > /usr/lib).  Fortunately, I had not upgraded all the other machines to
   > these newer tools.
   >
   > Perhaps libpopt is misplaced on my system, or perhaps the raidtool
   > makefile doesn't expect a manually started raid array....
   >
   > Any solutions?
   >
   > Ron
   > -
   > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
   > the body of a message to majordomo@vger.kernel.org
   > More majordomo info at  http://vger.kernel.org/majordomo-info.html
   >


^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown)
@ 2002-06-02 11:20 Colonel
  0 siblings, 0 replies; 152+ messages in thread
From: Colonel @ 2002-06-02 11:20 UTC (permalink / raw)
  To: linux-raid

To: linux-raid@vger.kernel.org
Subject: OOPS on raidtools-20010914
Reply-to: klink@clouddancer.com
From: Colonel <klink@clouddancer.com>


I recently switched to these tools, I'd been using the 1999 ones
previously.  I run striped mirrors for /usr, I must (apparently)
manually start that meta-device as part of the startup scripts.  So
having raidstart and all its pieces on the / partition is pretty
important.  Imagine my fun after a reboot told me that libpopt was
unavailable for raidstart, thus I had no /usr and no way to do
anything about it within this machine (because libpopt is stored in
/usr/lib).  Fortunately, I had not upgraded all the other machines to
these newer tools.

Perhaps libpopt is misplaced on my system, or perhaps the raidtool
makefile doesn't expect a manually started raid array....

Any solutions?

Ron

^ permalink raw reply	[flat|nested] 152+ messages in thread

* (unknown), 
@ 2002-05-02 12:36 Heiss, Christian
  0 siblings, 0 replies; 152+ messages in thread
From: Heiss, Christian @ 2002-05-02 12:36 UTC (permalink / raw)
  To: linux-raid

 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

unsubscribe linux-raid

-----BEGIN PGP SIGNATURE-----
Version: PGP 7.0.4

iQA/AwUBPNEzztealdhg/f9MEQIS+QCglnNwFqT+aiXjPhllcY+wjS9sqZwAoK2+
X+SVsF40ITalYo8EV0m7ZSrP
=eQ/s
-----END PGP SIGNATURE-----

^ permalink raw reply	[flat|nested] 152+ messages in thread

end of thread, other threads:[~2020-07-22  5:32 UTC | newest]

Thread overview: 152+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-22  5:32 (unknown) Darlehen Bedienung
  -- strict thread matches above, loose matches on Subject: below --
2020-06-27 21:58 (unknown) lookman joe
2020-06-04 19:57 (unknown) David Shine
2020-03-17  0:11 (unknown) David Ibe
2020-03-09  7:37 (unknown) Michael J. Weirsky
2020-03-05 10:46 (unknown) Juanito S. Galang
2017-09-02  2:39 (unknown), een
2017-08-22 13:31 (unknown), vinnakota chaitanya
2017-08-16  2:03 (unknown), xa0ajutor
2017-08-15 14:45 (unknown), een
2017-08-08 19:40 (unknown), citydesk
2017-08-01 14:53 (unknown), Angela H. Whiteman
2017-08-01  1:35 (unknown), xa0ajutor
2017-07-27  5:01 (unknown), hp
2017-07-26 20:45 (unknown), een
2017-07-25 20:01 (unknown), hp
2017-07-18  4:32 (unknown), citydesk
2017-07-17 21:54 (unknown), citydesk
2017-07-06 14:11 (unknown), een
2017-07-05 21:18 (unknown), een
2017-07-04  8:52 (unknown), citydesk
2017-07-04  6:01 (unknown), xa0ajutor
2017-06-26 22:14 (unknown), citydesk
2017-06-25 18:13 (unknown), citydesk
2017-06-24  0:35 (unknown), citydesk
2017-06-23  2:49 (unknown), mdavis
2017-06-20  6:29 (unknown), xa0ajutor
2017-06-18 14:27 (unknown), xa0ajutor
2017-06-09  4:30 (unknown), citydesk
2017-06-06 23:46 (unknown), mdavis
2017-06-05  4:30 (unknown), citydesk
2017-05-23  2:19 (unknown), mdavis
2017-05-20 20:00 (unknown), citydesk
2017-05-19 14:51 (unknown), citydesk
2017-05-18 13:40 (unknown), hp
2017-04-19 20:46 (unknown), hp
2017-04-13 15:58 (unknown), Scott Ellentuch
2017-04-10  3:30 (unknown), hp
2017-01-22 20:23 (unknown), citydesk
2017-01-21 23:57 (unknown), hp
2017-01-13 10:46 [PATCH v3 1/8] arm: put types.h in uapi Nicolas Dichtel
2017-01-09 11:33 ` [PATCH v2 0/7] uapi: export all headers under uapi directories Arnd Bergmann
2017-01-13 10:46   ` [PATCH v3 0/8] " Nicolas Dichtel
2017-01-13 15:36     ` (unknown) David Howells
2016-12-20  8:38 (unknown), Jinpu Wang
2016-12-18  0:32 (unknown), linux-raid
2016-11-06 21:00 (unknown), Dennis Dataopslag
2016-06-05 12:28 (unknown), Vikas Aggarwal
2015-11-24  7:23 (unknown), Jaime M Towns-18128
2015-11-05 16:49 (unknown), o1bigtenor
2015-08-20  7:12 (unknown), Mark Singer
2015-07-01 11:53 (unknown), Sasnett_Karen
2015-03-12 11:49 (unknown), pepa6.es
2015-02-18 19:42 (unknown), DeadManMoving
2015-02-10 23:48 (unknown), Kyle Logue
2014-11-30 13:54 (unknown), Mathias Burén
2014-11-26 18:38 (unknown), Travis Williams
     [not found] <1570038211.167595.1414613146892.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>
     [not found] ` <1835234304.171617.1414613165674.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
     [not found]   ` <1938862685.172387.1414613200459.JavaMail.yahoo@jws100180.mail.ne1.yahoo.com>
     [not found]     ` <705402329.170339.1414613213653.JavaMail.yahoo@jws10087.mail.ne1.yahoo.com>
     [not found]       ` <760168749.169371.1414613227586.JavaMail.yahoo@jws10082.mail.ne1.yahoo.com>
     [not found]         ` <1233923671.167957.1414613439879.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
     [not found]           ` <925985882.172122.1414613520734.JavaMail.yahoo@jws100207.mail.ne1.yahoo.com>
     [not found]             ` <1216694778.172990.1414613570775.JavaMail.yahoo@jws100152.mail.ne1.yahoo.com>
     [not found]               ` <1213035306.169838.1414613612716.JavaMail.yahoo@jws10097.mail.ne1.yahoo.com>
     [not found]                 ` <2058591563.172973.1414613668636.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
     [not found]                   ` <1202030640.175493 .1414613712352.JavaMail.yahoo@jws10036.mail.ne1.yahoo.com>
     [not found]                     ` <1111049042.175610.1414613739099.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
     [not found]                       ` <574125160.175950.1414613784216.JavaMail.yahoo@jws100158.mail.ne1.yahoo.com>
     [not found]                         ` <1726966600.175552.1414613846198.JavaMail.yahoo@jws100190.mail.ne1.yahoo.com>
     [not found]                           ` <976499752.219775.1414613888129.JavaMail.yahoo@jws100101.mail.ne1.yahoo.com>
     [not found]                             ` <1400960529.171566.1414613936238.JavaMail.yahoo@jws10059.mail.ne1.yahoo.com>
     [not found]                               ` <1333619289.175040.1414613999304.JavaMail.yahoo@jws100196.mail.ne1.yahoo.com>
     [not found]                                 ` <1038759122.176173.1414614054070.JavaMail.yahoo@jws100138.mail.ne1.yahoo.com>
     [not found]                                   ` <1109995533.176150.1414614101940.JavaMail.yahoo@jws100140.mail.ne1.yahoo.com>
     [not found]                                     ` <809474730.174920.1414614143971.JavaMail.yahoo@jws100154.mail.ne1.yahoo.com>
     [not found]                                       ` <1234226428.170349.1414614189490.JavaMail .yahoo@jws10056.mail.ne1.yahoo.com>
     [not found]                                         ` <1122464611.177103.1414614228916.JavaMail.yahoo@jws100161.mail.ne1.yahoo.com>
     [not found]                                           ` <1350859260.174219.1414614279095.JavaMail.yahoo@jws100176.mail.ne1.yahoo.com>
     [not found]                                             ` <1730751880.171557.1414614322033.JavaMail.yahoo@jws10060.mail.ne1.yahoo.com>
     [not found]                                               ` <642429550.177328.1414614367628.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
     [not found]                                                 ` <1400780243.20511.1414614418178.JavaMail.yahoo@jws100162.mail.ne1.yahoo.com>
     [not found]                                                   ` <2025652090.173204.1414614462119.JavaMail.yahoo@jws10087.mail.ne1.yahoo.com>
     [not found]                                                     ` <859211720.180077.1414614521867.JavaMail.yahoo@jws100147.mail.ne1.yahoo.com>
     [not found]                                                       ` <258705675.173585.1414614563057.JavaMail.yahoo@jws10078.mail.ne1.yahoo.com>
     [not found]                                                         ` <1773234186.173687.1414614613736.JavaMail.yahoo@jws10078.mail.ne1.yahoo.com>
     [not found]                                                           ` <1132079010.173033.1414614645153.JavaMail.yahoo@jws10066.mail.ne1.ya hoo.com>
     [not found]                                                             ` <1972302405.176488.1414614708676.JavaMail.yahoo@jws100166.mail.ne1.yahoo.com>
     [not found]                                                               ` <1713123000.176308.1414614771694.JavaMail.yahoo@jws10045.mail.ne1.yahoo.com>
     [not found]                                                                 ` <299800233.173413.1414614817575.JavaMail.yahoo@jws10066.mail.ne1.yahoo.com>
     [not found]                                                                   ` <494469968.179875.1414614903152.JavaMail.yahoo@jws100144.mail.ne1.yahoo.com>
     [not found]                                                                     ` <2136945987.171995.1414614942776.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
     [not found]                                                                       ` <257674219.177708.1414615022592.JavaMail.yahoo@jws100181.mail.ne1.yahoo.com>
     [not found]                                                                         ` <716927833.181664.1414615075308.JavaMail.yahoo@jws100145.mail.ne1.yahoo.com>
     [not found]                                                                           ` <874940984.178797.1414615132802.JavaMail.yahoo@jws100157.mail.ne1.yahoo.com>
     [not found]                                                                             ` <1283488887.176736.1414615187657.JavaMail.yahoo@jws100183.mail.ne1.yahoo.com>
     [not found]                                                                               ` <777665713.175887.1414615236293.JavaMail.yahoo@jws10083.mail.ne1.yahoo.com>
     [not found]                                                                                 ` <585395776.176325.1 414615298260.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
     [not found]                                                                                   ` <178352191.221832.1414615355071.JavaMail.yahoo@jws100104.mail.ne1.yahoo.com>
     [not found]                                                                                     ` <108454213.176606.1414615522058.JavaMail.yahoo@jws10053.mail.ne1.yahoo.com>
     [not found]                                                                                       ` <1617229176.177502.1414615563724.JavaMail.yahoo@jws10030.mail.ne1.yahoo.com>
     [not found]                                                                                         ` <324334617.178254.1414615625247.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
     [not found]                                                                                           ` <567135865.82376.1414615664442.JavaMail.yahoo@jws100136.mail.ne1.yahoo.com>
     [not found]                                                                                             ` <764758300.179669.1414615711821.JavaMail.yahoo@jws100107.mail.ne1.yahoo.com>
     [not found]                                                                                               ` <1072855470.183388.1414615775798.JavaMail.yahoo@jws100147.mail.ne1.yahoo.com>
     [not found]                                                                                                 ` <2134283632.173314.1414615831322.JavaMail.yahoo@jws10094.mail.ne1.yahoo.com>
     [not found]                                                                                                   ` <1454491902.178612.1414615875076.JavaMail.yahoo@jws100209.mail.ne1.yahoo.com>
     [not found]                                                                                                     ` <1480763910.146593.1414958012342.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
2014-11-02 19:54                                                                                                       ` (unknown) MRS GRACE MANDA
2014-04-16 16:43 (unknown), Marcos Antonio da Silva
2014-04-10  5:28 (unknown), peter davidson
2014-02-20 19:18 (unknown), Zheng, C.
2014-02-05  8:33 (unknown), Western Union Office ©
2013-11-22 23:44 (unknown) 你好!办理各行各业(国)=(地)=(税)机打发票验证后付款 ,联系13684936429 QQ;2320164342  王生
2013-11-19  0:57 (unknown), kane
2013-04-23 19:18 (unknown), Clyde Hank
2013-04-22 20:00 (unknown), oooo546745
2013-04-18  4:19 (unknown), Don Pack
2012-12-25  0:12 (unknown), bobzer
2012-12-24 21:13 (unknown), Mathias Burén
2012-12-17  0:59 (unknown), Maik Purwin
2012-04-12 11:23 (unknown), monicaaluke01@gmail.com
2012-03-15 11:15 (unknown), Mr. Vincent Cheng Hoi
2011-09-26  4:23 (unknown), Kenn
2011-09-23  1:50 potentially lost largeish raid5 array Thomas Fjellstrom
2011-09-23 16:22 ` Thomas Fjellstrom
2011-09-23 23:24   ` Stan Hoeppner
2011-09-24  0:11     ` Thomas Fjellstrom
2011-09-24 12:17       ` Stan Hoeppner
2011-09-24 13:11         ` (unknown) Tomáš Dulík
2011-06-21 22:21 (unknown), Ntai Jerry
2011-06-18 20:39 (unknown) Dragon
2011-06-10 20:26 (unknown) Dragon
2011-06-10 13:06 (unknown) Dragon
2011-06-09 12:16 (unknown) Dragon
2011-06-09  6:50 (unknown) Dragon
2011-06-08 14:24 (unknown) Dragon
2011-04-22  5:12 (unknown) Yann Ormanns
2011-02-13  1:11 (unknown), Mrs Edna Ethers
2011-02-01 16:40 (unknown) Naira Kaieski
     [not found] <201012251036232181820@gmail.com>
2010-12-25  2:49 ` (unknown), kernel.majianpeng
2010-11-22 10:44 (unknown), Bayduza, Ronnie
2010-11-18 20:23 (unknown) Dennis German
2010-11-13  6:01 (unknown), Mike Viau
2010-10-27  7:52 (unknown), Czarnowska, Anna
2010-03-08  1:37 (unknown), Leslie Rhorer
2010-01-06 14:19 (unknown) Lapohos Tibor
2009-12-17  4:08 (unknown), Liverwood
2009-11-16  3:44 (unknown), senthilkumar.muthukalai
2009-10-06  4:17 (unknown), EAGLE LOAN MANAGEMENT
2009-09-02 18:46 (unknown) me at tmr.com
2009-09-02 18:46 (unknown) me at tmr.com
2009-06-05  0:50 (unknown), Jack Etherington
2009-04-02  4:16 (unknown), Lelsie Rhorer
2008-05-14 12:53 (unknown), Henry, Andrew
2008-05-12 11:29 (unknown) me at tmr.com
2007-10-09  9:56 (unknown), Frédéric Mantegazza
2007-02-14 22:08 RAID 10 resync leading to attempt to access beyond end of device John Stilson
2007-02-14 23:37 ` Neil Brown
     [not found]   ` <e1e9d81a0702141606r7dea6288qea942cee2d978ee2@mail.gmail.com>
     [not found]     ` <17875.57273.543122.581106@notabene.brown>
     [not found]       ` <e1e9d81a0702142051v152c4c8dme2b20e1c53e1f4b2@mail.gmail.com>
2007-02-15 18:02         ` John Stilson
2007-02-15 18:23           ` John Stilson
2007-02-15 18:28             ` (unknown) Derek Yeung
2007-02-15 18:53               ` (unknown) Derek Yeung
2006-10-15 14:20 (unknown) upcajxhkb
2006-10-03 12:24 (unknown) Jochen Oekonomopulos
2006-07-01 23:38 (unknown), Guy Hampton
2006-06-18 19:23 (unknown) bertieysauseda
2006-06-17 10:17 (unknown) rowdu
2006-06-17 10:16 (unknown) rowdu
2006-04-30 23:40 (unknown) gearewayne
2006-03-11 21:02 (unknown) lwvxfb
2006-02-13 23:58 (unknown) service
2006-01-23 12:31 Possible libata/sata/Asus problem (was Re: Need to upgrade to latest stable mdadm version?) David Greaves
2006-01-23 17:05 ` (unknown), Shawn Usry
2006-01-11 14:47 (unknown) bhess
2005-11-08 14:56 (unknown) service
2005-07-27 16:19 (unknown) drlim
2005-07-23  4:50 (unknown) Mr.Derrick Tanner.
2005-07-07 12:28 (unknown), Uncle Den
2005-06-21 11:48 (unknown) pliskie
2005-06-11  2:00 (unknown) dtasman
2005-06-10  2:30 (unknown) bewails
2005-03-15 20:48 (unknown) Gary Lawton
2004-10-01 14:04 (unknown), Agustín Ciciliani
2004-09-20 13:19 (unknown) Biju A
2004-09-02 14:27 (unknown), Larry
2004-08-23 23:55 (unknown) Vick
2004-08-17 21:04 (unknown) service
2004-08-14  6:38 (unknown) sky
2004-08-11  1:10 (unknown) sky
2004-08-07  4:32 (unknown) sky
2004-08-07  1:05 (unknown) kkkkkkk
2004-06-29 12:44 (unknown) Pierre Berthier
2004-06-14  8:10 (unknown) FETCHMAIL-DAEMON
2004-05-20 12:09 (unknown) 何捷
2004-04-16 21:05 (unknown) Abhishek Rai
2004-02-25  0:26 (unknown) Cullen
2004-02-09  6:22 (unknown), Heinz Wittenbecher
2003-11-11  1:05 (unknown) a
2003-10-28  4:15 (unknown) yyc
2003-09-24 23:37 (unknown), Loris
2003-08-19  1:46 (unknown), jshankar
2003-06-04  2:38 (unknown), sideroff
2003-05-21  1:30 (unknown), ultraice
2003-03-10  8:44 (unknown) linguist
2003-01-05 15:31 (unknown) Joseph P. Schmo
2002-12-28  1:25 (unknown), TJ
2002-10-30  1:26 (unknown) Michael Robinton
2002-09-23  7:06 (unknown), James McKiernan
2002-07-16 23:14 (unknown) Michael Robinton
2002-06-06 18:10 (unknown) Colonel
2002-06-05  1:54 (unknown) Colonel
2002-06-04 15:47 (unknown) Colonel
2002-06-02 22:08 (unknown) Colonel
2002-06-02 11:20 (unknown) Colonel
2002-05-02 12:36 (unknown), Heiss, Christian

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).