linux-kernel.vger.kernel.org archive mirror
* mdadm raid1 regression
@ 2013-04-18 10:38 Vasiliy Tolstov
  2013-04-19 20:52 ` Greg KH
  2013-04-21 22:35 ` NeilBrown
  0 siblings, 2 replies; 7+ messages in thread
From: Vasiliy Tolstov @ 2013-04-18 10:38 UTC (permalink / raw)
  To: stable, linux-kernel

Hello. I'm using Linux 3.8.6 and mdadm 3.2.6 (from git).
I have many raid1 arrays with a data offset of 2048 (metadata 1.2,
created with various mdadm versions, but mostly 3.2.1 on Linux 2.6.32).
If I create a raid1 with the newer mdadm on 3.8.6, I get a data offset
of 8192. Why?

My problem:
Sometimes I run mdadm --zero-superblock on both members of an array and
then re-create it. On older systems this caused no errors. On the new
one (Linux 3.8.6 and mdadm 3.2.6) I get a corrupted ext3 filesystem and
partition table. Why does this happen?
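Roughly, the sequence I run is something like this (device names here
are only an example, not my real ones):
  # stop the array, wipe old metadata from both members, re-create it
  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sda1 /dev/sdb1
  mdadm --create /dev/md0 --metadata=1.2 --level=raid1 --raid-devices=2 \
        /dev/sda1 /dev/sdb1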
P.S. If I use mdadm 3.2.2 I get a data offset of 4096, which does not
break the data but is inconsistent with older versions.
P.P.S. I tried to specify --data-offset when creating the array, but as
far as I can see it is ignored and the data offset is still 8192.


Thanks!
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru


* Re: mdadm raid1 regression
  2013-04-18 10:38 mdadm raid1 regression Vasiliy Tolstov
@ 2013-04-19 20:52 ` Greg KH
  2013-04-21 22:35 ` NeilBrown
  1 sibling, 0 replies; 7+ messages in thread
From: Greg KH @ 2013-04-19 20:52 UTC (permalink / raw)
  To: Vasiliy Tolstov; +Cc: stable, linux-kernel

On Thu, Apr 18, 2013 at 02:38:53PM +0400, Vasiliy Tolstov wrote:
> Hello. I'm using Linux 3.8.6 and mdadm 3.2.6 (from git).
> I have many raid1 arrays with a data offset of 2048 (metadata 1.2,
> created with various mdadm versions, but mostly 3.2.1 on Linux 2.6.32).
> If I create a raid1 with the newer mdadm on 3.8.6, I get a data offset
> of 8192. Why?

Please try asking this on the mailing list for the raid developers
(linux-raid@vger.kernel.org).



* Re: mdadm raid1 regression
  2013-04-18 10:38 mdadm raid1 regression Vasiliy Tolstov
  2013-04-19 20:52 ` Greg KH
@ 2013-04-21 22:35 ` NeilBrown
  2013-04-22  6:28   ` Vasiliy Tolstov
  2013-12-27  6:48   ` Vasiliy Tolstov
  1 sibling, 2 replies; 7+ messages in thread
From: NeilBrown @ 2013-04-21 22:35 UTC (permalink / raw)
  To: Vasiliy Tolstov; +Cc: stable, linux-kernel


On Thu, 18 Apr 2013 14:38:53 +0400 Vasiliy Tolstov <v.tolstov@selfip.ru>
wrote:

> Hello. I'm using Linux 3.8.6 and mdadm 3.2.6 (from git).
> I have many raid1 arrays with a data offset of 2048 (metadata 1.2,
> created with various mdadm versions, but mostly 3.2.1 on Linux 2.6.32).
> If I create a raid1 with the newer mdadm on 3.8.6, I get a data offset
> of 8192. Why?

More room for various useful things.
In particular, if you one day want to convert this raid1 to a raid5, then
having a bit of extra space at the front will mean you can avoid a 'backup
file' and all the problems they cause (code for this isn't quite ready, but
is getting there).
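
For example, the sort of conversion I mean looks roughly like this
(device and array names purely illustrative):

  mdadm --grow /dev/md0 --level=5
  mdadm /dev/md0 --add /dev/sdc1
  mdadm --grow /dev/md0 --raid-devices=3

The last step is the reshape that currently wants a backup file; the
extra space in front of the data is what should let mdadm avoid that
once the code is ready.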

> 
> My problem:
> Sometimes I run mdadm --zero-superblock on both members of an array and
> then re-create it. On older systems this caused no errors. On the new
> one (Linux 3.8.6 and mdadm 3.2.6) I get a corrupted ext3 filesystem and
> partition table. Why does this happen?

Why are you doing that?

> P.S. If I use mdadm 3.2.2 I get a data offset of 4096, which does not
> break the data but is inconsistent with older versions.

I suggest you use mdadm 3.2.2 then.

> P.P.S. I tried to specify --data-offset when creating the array, but as
> far as I can see it is ignored and the data offset is still 8192.

I'll try to make sure that works correctly for the next release.
Thanks for the report.

NeilBrown

> 
> 
> Thanks!
> --
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru




* Re: mdadm raid1 regression
  2013-04-21 22:35 ` NeilBrown
@ 2013-04-22  6:28   ` Vasiliy Tolstov
  2013-12-27  6:48   ` Vasiliy Tolstov
  1 sibling, 0 replies; 7+ messages in thread
From: Vasiliy Tolstov @ 2013-04-22  6:28 UTC (permalink / raw)
  To: NeilBrown; +Cc: stable, linux-kernel, linux-raid

2013/4/22 NeilBrown <neilb@suse.de>:
>> Hello. I'm using Linux 3.8.6 and mdadm 3.2.6 (from git).
>> I have many raid1 arrays with a data offset of 2048 (metadata 1.2,
>> created with various mdadm versions, but mostly 3.2.1 on Linux 2.6.32).
>> If I create a raid1 with the newer mdadm on 3.8.6, I get a data offset
>> of 8192. Why?
>
> More room for various useful things.
> In particular, if you one day want to convert this raid1 to a raid5, then
> having a bit of extra space at the front will mean you can avoid a 'backup
> file' and all the problems they cause (code for this isn't quite ready, but
> is getting there).
>

Very good news =)

>>
>> My problem:
>> Sometimes I run mdadm --zero-superblock on both members of an array and
>> then re-create it. On older systems this caused no errors. On the new
>> one (Linux 3.8.6 and mdadm 3.2.6) I get a corrupted ext3 filesystem and
>> partition table. Why does this happen?
>
> Why are you doing that?

Our storage has two nodes, each exporting disks via SRP to the client
nodes. On each node we create a separate LVM setup (we are not using
clvm) and assemble the md array on top. Sometimes we resize an LVM
volume and need to zero the superblock, because the volume can still
contain old mdadm metadata (from a previous user, for example). Then,
when we add the disk back to the raid, we sometimes get broken data (an
invalid sync).
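Very roughly, the per-node sequence is something like this (the volume
group, LV and array names below are made up, just to illustrate):
  # grow the backing LV, wipe any stale md metadata left on it,
  # then add it back to the mirror
  lvextend -L +10G /dev/vg0/lvol0
  mdadm --zero-superblock /dev/mapper/vg0-lvol0
  mdadm /dev/md0 --add /dev/mapper/vg0-lvol0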

>
>> P.S. If I use mdadm 3.2.2 I get a data offset of 4096, which does not
>> break the data but is inconsistent with older versions.
>
> I suggest you use mdadm 3.2.2 then.

Yes, I'm already doing that, but I think the latest mdadm version with
the data-offset patches could solve my problems. Would it be possible to
fix this behaviour and document which data offset is used by which
version of mdadm?

>
>> P.P.S. I tried to specify --data-offset when creating the array, but as
>> far as I can see it is ignored and the data offset is still 8192.
>
> I'll try to make sure that works correctly for the next release.
> Thanks for the report.
>
> NeilBrown

Thanks!

P.S. Many thanks for everything.


--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru


* Re: mdadm raid1 regression
  2013-04-21 22:35 ` NeilBrown
  2013-04-22  6:28   ` Vasiliy Tolstov
@ 2013-12-27  6:48   ` Vasiliy Tolstov
  2014-01-05 22:11     ` NeilBrown
  1 sibling, 1 reply; 7+ messages in thread
From: Vasiliy Tolstov @ 2013-12-27  6:48 UTC (permalink / raw)
  To: NeilBrown; +Cc: stable, linux-kernel

2013/4/22 NeilBrown <neilb@suse.de>:
> I'll try to make sure that works correctly for the next release.
> Thanks for the report.


Sorry, Neil, for bumping this old thread. I'm again having problems with
the --data-offset parameter for mdadm.
I'm using the version from git master (GitHub). If I try to create a
raid1 like this:
/sbin/mdadm --create --data-offset=2048 --metadata=1.2 --verbose
--force --run --bitmap=internal --assume-clean --name=md21_901
md21_901 --level=raid1 --raid-devices=2 /dev/mapper/sas00-21_901
/dev/mapper/sas01-21_901
I get:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 64e0028e:301aa3ce:cdf1a54f:a9e28f27
           Name : xen25:md21_901  (local to host xen25)
  Creation Time : Fri Dec 27 10:43:06 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 10489856 (5.00 GiB 5.37 GB)
     Array Size : 5244928 (5.00 GiB 5.37 GB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4008 sectors, after=0 sectors
          State : clean
    Device UUID : 38771de6:cb5f0dbc:9f32f85f:164e1e89

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Dec 27 10:43:22 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 7f07eb77 - correct
         Events : 2


But when I try to create a raid1 like this:
/sbin/mdadm --create --data-offset=1024 --metadata=1.2 --verbose
--force --run --bitmap=internal --assume-clean --name=md21_901
md21_901 --level=raid1 --raid-devices=2 /dev/mapper/sas00-21_901
/dev/mapper/sas01-21_901
I get:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : ef22dca1:1424ea9e:1b4dce89:27c61a91
           Name : xen25:md21_901  (local to host xen25)
  Creation Time : Fri Dec 27 10:44:21 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 10491904 (5.00 GiB 5.37 GB)
     Array Size : 5245952 (5.00 GiB 5.37 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=0 sectors
          State : clean
    Device UUID : afae5e27:6c706246:4c3e3cb0:e5c726ac

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Dec 27 10:44:26 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 45be4cd1 - correct
         Events : 2


Why does the data offset I specify on the command line come out twice as
large in the resulting md array component?

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru


* Re: mdadm raid1 regression
  2013-12-27  6:48   ` Vasiliy Tolstov
@ 2014-01-05 22:11     ` NeilBrown
  2014-01-22 11:56       ` Vasiliy Tolstov
  0 siblings, 1 reply; 7+ messages in thread
From: NeilBrown @ 2014-01-05 22:11 UTC (permalink / raw)
  To: Vasiliy Tolstov; +Cc: linux-kernel


On Fri, 27 Dec 2013 10:48:03 +0400 Vasiliy Tolstov <v.tolstov@selfip.ru>
wrote:

> 2013/4/22 NeilBrown <neilb@suse.de>:
> > I'll try to make sure that works correctly for the next release.
> > Thanks for the report.
> 
> 
> Sorry, Neil, for bumping this old thread. I'm again having problems with
> the --data-offset parameter for mdadm.
> I'm using the version from git master (GitHub). If I try to create a
> raid1 like this:
> /sbin/mdadm --create --data-offset=2048 --metadata=1.2 --verbose
> --force --run --bitmap=internal --assume-clean --name=md21_901
> md21_901 --level=raid1 --raid-devices=2 /dev/mapper/sas00-21_901
> /dev/mapper/sas01-21_901
> I get:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 64e0028e:301aa3ce:cdf1a54f:a9e28f27
>            Name : xen25:md21_901  (local to host xen25)
>   Creation Time : Fri Dec 27 10:43:06 2013
>      Raid Level : raid1
>    Raid Devices : 2
> 
>  Avail Dev Size : 10489856 (5.00 GiB 5.37 GB)
>      Array Size : 5244928 (5.00 GiB 5.37 GB)
>     Data Offset : 4096 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=4008 sectors, after=0 sectors
>           State : clean
>     Device UUID : 38771de6:cb5f0dbc:9f32f85f:164e1e89
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Dec 27 10:43:22 2013
>   Bad Block Log : 512 entries available at offset 72 sectors
>        Checksum : 7f07eb77 - correct
>          Events : 2
> 
> 
> But when I try to create a raid1 like this:
> /sbin/mdadm --create --data-offset=1024 --metadata=1.2 --verbose
> --force --run --bitmap=internal --assume-clean --name=md21_901
> md21_901 --level=raid1 --raid-devices=2 /dev/mapper/sas00-21_901
> /dev/mapper/sas01-21_901
> I get:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : ef22dca1:1424ea9e:1b4dce89:27c61a91
>            Name : xen25:md21_901  (local to host xen25)
>   Creation Time : Fri Dec 27 10:44:21 2013
>      Raid Level : raid1
>    Raid Devices : 2
> 
>  Avail Dev Size : 10491904 (5.00 GiB 5.37 GB)
>      Array Size : 5245952 (5.00 GiB 5.37 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=1960 sectors, after=0 sectors
>           State : clean
>     Device UUID : afae5e27:6c706246:4c3e3cb0:e5c726ac
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Dec 27 10:44:26 2013
>   Bad Block Log : 512 entries available at offset 72 sectors
>        Checksum : 45be4cd1 - correct
>          Events : 2
> 
> 
> Why does the data offset I specify on the command line come out twice as
> large in the resulting md array component?
> 

The value given to --data-offset is assumed to be kilobytes unless it has a
suffix: 'M' for megabytes, 's' for sectors.

The value reported by 'mdadm -D' is (as it says) in sectors.
1024 kilobytes  is 2048 sectors.
If you want to specify sectors, add an 's' suffix.
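
e.g. these two ask for the same offset (other arguments elided):

  mdadm --create ... --data-offset=1024     # 1024 kilobytes
  mdadm --create ... --data-offset=2048s    # 2048 sectors == 1024 kilobytes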

NeilBrown



* Re: mdadm raid1 regression
  2014-01-05 22:11     ` NeilBrown
@ 2014-01-22 11:56       ` Vasiliy Tolstov
  0 siblings, 0 replies; 7+ messages in thread
From: Vasiliy Tolstov @ 2014-01-22 11:56 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-kernel

2014/1/6 NeilBrown <neilb@suse.de>:
> The value given to --data-offset is assumed to be kilobytes unless it has a
> suffix: 'M' for megabytes, 's' for sectors.
>
> The value reported by 'mdadm -D' is (as it says) in sectors.
> 1024 kilobytes  is 2048 sectors.
> If you want to specify sectors, add an 's' suffix.


Thanks, Neil. Sorry for the delay. I tested it and everything works fine.

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

