* mdadm failed to create internal bitmap
@ 2020-05-26  3:40 Jonas Fisher
  2020-05-26 10:14 ` antlists
  2020-06-11  4:16 ` Jonas Fisher
  0 siblings, 2 replies; 4+ messages in thread
From: Jonas Fisher @ 2020-05-26  3:40 UTC (permalink / raw)
  To: linux-raid

Hi all,

I have a raid1 composed of 2 disks:
/dev/sda -- 2T
/dev/sdb -- 4T

mdadm version is 3.3 and md metadata version is 1.0

At first, I was only using 1T of each disk,

then I recently grew the array with the command

mdadm --grow /dev/md1 --size=1951944704K

I also tried to add an internal bitmap after the expansion finished

mdadm --grow /dev/md1 --bitmap=internal

But I got the following message

mdadm: failed to create internal bitmap - chunksize problem.

I found that the Avail Dev Size in the superblock examine output of the two disks

is the same, and matches the value I set when I expanded the array (1951944704K).
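
For reference, converting the size I passed to --grow into 512-byte sectors:

    1951944704 KiB * 2 sectors/KiB = 3903889408 sectors

which is exactly the Avail Dev Size reported below.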

Then I found that in mdadm's bitmap chunksize calculation,

in function add_internal_bitmap1 (super1.c),

the variables "room" and "max_bits" seem to overflow in this situation

/dev/sdb3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
           Name : 1
  Creation Time : Sun Apr  8 09:54:47 2018
     Raid Level : raid1
   Raid Devices : 2

Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
     Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
   Super Offset : 7810899368 sectors
   Unused Space : before=0 sectors, after=3907009952 sectors
          State : clean
    Device UUID : 3546fab2:3bfd9a17:39d78059:3d1eb830

    Update Time : Sun May 17 10:24:33 2020
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : cf552c50 - correct
         Events : 93088


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
           Name : 1
  Creation Time : Sun Apr  8 09:54:47 2018
     Raid Level : raid1
   Raid Devices : 2

Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
     Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
   Super Offset : 3903891368 sectors
   Unused Space : before=0 sectors, after=1952 sectors
          State : clean
    Device UUID : 980038ac:99f4e8c6:39d91851:bdf6ed6d

    Update Time : Sun May 17 10:24:33 2020
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : c3ce8290 - correct
         Events : 93088


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
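
To put a number on the suspected overflow (a rough check, assuming the space
calculation is done in a 32-bit signed integer, which I have not confirmed for
every code path):

    space after the data on /dev/sdb3:  3907009952 sectors
    INT_MAX (largest 32-bit signed):    2147483647

so the value cannot be represented in a 32-bit signed int and would wrap negative.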

I was wondering: is this because mdadm sets the size of the rdevs in the array

before doing the expansion (in function Grow_reshape),

which causes sb->data_size to differ from the actual raw device size

and consequently leads to the bitmap chunksize calculation error,

or is it simply a data type issue?

Thanks,


* Re: mdadm failed to create internal bitmap
  2020-05-26  3:40 mdadm failed to create internal bitmap Jonas Fisher
@ 2020-05-26 10:14 ` antlists
  2020-05-28  7:44   ` Jonas Fisher
  2020-06-11  4:16 ` Jonas Fisher
  1 sibling, 1 reply; 4+ messages in thread
From: antlists @ 2020-05-26 10:14 UTC (permalink / raw)
  To: Jonas Fisher, linux-raid

On 26/05/2020 04:40, Jonas Fisher wrote:
> Hi all,
> 
> I have a raid1 composed of 2 disks:
> /dev/sda -- 2T
> /dev/sdb -- 4T
> 
> mdadm version is 3.3 and md metadata version is 1.0

That's a well ancient mdadm, you need to upgrade ...
> 
> At first, I was only using 1T of each disk,
>
> then I recently grew the array with the command
>
> mdadm --grow /dev/md1 --size=1951944704K
>
> I also tried to add an internal bitmap after the expansion finished
> 
> mdadm --grow /dev/md1 --bitmap=internal
> 
> But I got the following message
> 
> mdadm: failed to create internal bitmap - chunksize problem.
> 
> I found that the Avail Dev Size in the superblock examine output of the two disks
>
> is the same, and matches the value I set when I expanded the array (1951944704K).

Makes sense, it's a mirror ...
> 
> Then I found that in mdadm's bitmap chunksize calculation,
>
> in function add_internal_bitmap1 (super1.c),
>
> the variables "room" and "max_bits" seem to overflow in this situation

Could well be the fault of the old mdadm ...
> 
> 
> I was wondering: is this because mdadm sets the size of the rdevs in the array
>
> before doing the expansion (in function Grow_reshape),
>
> which causes sb->data_size to differ from the actual raw device size
>
> and consequently leads to the bitmap chunksize calculation error,
>
> or is it simply a data type issue?
> 
> Thanks,
> 
Download and run a new mdadm. If the problem still persists, then I 
guess the mdadm guys will take a look.

https://raid.wiki.kernel.org/index.php/Linux_Raid

https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Getting_mdadm

It seems odd to be mirroring a 2TB and 4TB, but never mind. It's not 
(that much) a problem if you're using desktop drives for a mirror, but 
if you do get a new 4TB drive, read the advice on the website and make 
sure you get a proper raid drive.

Cheers,
Wol


* Re: mdadm failed to create internal bitmap
  2020-05-26 10:14 ` antlists
@ 2020-05-28  7:44   ` Jonas Fisher
  0 siblings, 0 replies; 4+ messages in thread
From: Jonas Fisher @ 2020-05-28  7:44 UTC (permalink / raw)
  To: antlists; +Cc: linux-raid

Hi,

I've tried the latest version of mdadm

and the problem still exists.

I believe it is because, in add_internal_bitmap1 (super1.c), the variable "room"

overflows and becomes a negative value;

then, since room is less than 3 * 2 and __le32_to_cpu(sb->max_dev) <= 384,

room is set to 6, and max_bits becomes extremely small.

Unfortunately, this is a 2T array, so the bitmap creation failed.
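
To make the overflow concrete, here is a small standalone illustration (this is
not the mdadm source; I am only assuming that room is held in a 32-bit signed
int and derived roughly as super_offset - data_offset - data_size, which I have
not re-checked in every version):

    /* Standalone illustration only -- not mdadm code.
     * Values taken from the /dev/sdb3 --examine output in my first mail. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long super_offset = 7810899368ULL; /* Super Offset, sectors */
        unsigned long long data_offset  = 0ULL;          /* 1.0 metadata, data starts at 0 */
        unsigned long long data_size    = 3903889408ULL; /* Avail Dev Size, sectors */

        /* 64-bit arithmetic gives the real amount of room after the data */
        unsigned long long room64 = super_offset - data_offset - data_size;

        /* The same value squeezed into a 32-bit signed int wraps negative,
         * which then satisfies a "room < 3*2" test and forces room = 6 */
        int room32 = (int)(super_offset - data_offset - data_size);

        printf("room as 64-bit: %llu sectors\n", room64);  /* 3907009960 */
        printf("room as 32-bit int: %d\n", room32);        /* negative */
        return 0;
    }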

I am still wondering why mdadm stores the specified value (if one is given)

in rdev->sectors rather than just setting it to the largest possible size.

In Grow_reshape (Grow.c)

		/* Update the size of each member device in case
		 * they have been resized.  This will never reduce
		 * below the current used-size.  The "size" attribute
		 * understands '0' to mean 'max'.
		 */
		min_csize = 0;
		for (mdi = sra->devs; mdi; mdi = mdi->next) {
			sysfs_set_num(sra, mdi, "size",
				      s->size == MAX_SIZE ? 0 : s->size);
                                                                ^^^^^^^^
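
If the intent were to always use the whole device, I suppose this loop could
simply pass 0 unconditionally; a hypothetical, untested variant just to
illustrate the question:

    for (mdi = sra->devs; mdi; mdi = mdi->next)
        sysfs_set_num(sra, mdi, "size", 0); /* 0 == 'max', per the comment above */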

Thanks,


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, May 26, 2020 at 18:14, antlists <antlists@youngman.org.uk> wrote:

> On 26/05/2020 04:40, Jonas Fisher wrote:
>
> > Hi all,
> > I have a raid1 composed of 2 disks:
> > /dev/sda -- 2T
> > /dev/sdb -- 4T
> > mdadm version is 3.3 and md metadata version is 1.0
>
> That's a well ancient mdadm, you need to upgrade ...
>
> > At first, I was only using 1T of each disk,
> > then I recently grew the array with the command
> > mdadm --grow /dev/md1 --size=1951944704K
> > I also tried to add an internal bitmap after the expansion finished
> > mdadm --grow /dev/md1 --bitmap=internal
> > But I got the following message
> > mdadm: failed to create internal bitmap - chunksize problem.
> > I found that the Avail Dev Size in the superblock examine output of the two disks
> > is the same, and matches the value I set when I expanded the array (1951944704K).
>
> Makes sense, it's a mirror ...
>
> > Then I found that in mdadm's bitmap chunksize calculation,
> > in function add_internal_bitmap1 (super1.c),
> > the variables "room" and "max_bits" seem to overflow in this situation
>
> Could well be the fault of the old mdadm ...
>
> > I was wondering: is this because mdadm sets the size of the rdevs in the array
> > before doing the expansion (in function Grow_reshape),
> > which causes sb->data_size to differ from the actual raw device size
> > and consequently leads to the bitmap chunksize calculation error,
> > or is it simply a data type issue?
> > Thanks,
>
> Download and run a new mdadm. If the problem still persists, then I
> guess the mdadm guys will take a look.
>
> https://raid.wiki.kernel.org/index.php/Linux_Raid
>
> https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Getting_mdadm
>
> It seems odd to be mirroring a 2TB and 4TB, but never mind. It's not
> (that much) a problem if you're using desktop drives for a mirror, but
> if you do get a new 4TB drive, read the advice on the website and make
> sure you get a proper raid drive.
>
> Cheers,
> Wol


* Re: mdadm failed to create internal bitmap
  2020-05-26  3:40 mdadm failed to create internal bitmap Jonas Fisher
  2020-05-26 10:14 ` antlists
@ 2020-06-11  4:16 ` Jonas Fisher
  1 sibling, 0 replies; 4+ messages in thread
From: Jonas Fisher @ 2020-06-11  4:16 UTC (permalink / raw)
  To: linux-raid

Hi,

About this issue: it is the setting of the rdev size that

leads to an integer overflow while adding the internal bitmap.

Besides fixing the integer overflow, I was wondering: are there

any concerns if we always set the rdev size to the max while doing an expansion?

Thanks,


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

> Hi all,
>
> I have a raid1 composed of 2 disks:
> /dev/sda -- 2T
> /dev/sdb -- 4T
>
> mdadm version is 3.3 and md metadata version is 1.0
>
> At first, I was only using 1T of each disk,
>
> then I recently grew the array with the command
>
> mdadm --grow /dev/md1 --size=1951944704K
>
> I also tried to add an internal bitmap after the expansion finished
>
> mdadm --grow /dev/md1 --bitmap=internal
>
> But I got the following message
>
> mdadm: failed to create internal bitmap - chunksize problem.
>
> I found that the Avail Dev Size in the superblock examine output of the two disks
>
> is the same, and matches the value I set when I expanded the array (1951944704K).
>
> Then I found that in mdadm's bitmap chunksize calculation,
>
> in function add_internal_bitmap1 (super1.c),
>
> the variables "room" and "max_bits" seem to overflow in this situation
>
> /dev/sdb3:
> Magic : a92b4efc
> Version : 1.0
> Feature Map : 0x0
> Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
> Name : 1
> Creation Time : Sun Apr 8 09:54:47 2018
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
> Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
> Super Offset : 7810899368 sectors
> Unused Space : before=0 sectors, after=3907009952 sectors
> State : clean
> Device UUID : 3546fab2:3bfd9a17:39d78059:3d1eb830
>
> Update Time : Sun May 17 10:24:33 2020
> Bad Block Log : 512 entries available at offset -8 sectors
> Checksum : cf552c50 - correct
> Events : 93088
>
> Device Role : Active device 0
> Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>
> /dev/sda3:
> Magic : a92b4efc
> Version : 1.0
> Feature Map : 0x0
> Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
> Name : 1
> Creation Time : Sun Apr 8 09:54:47 2018
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
> Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
> Super Offset : 3903891368 sectors
> Unused Space : before=0 sectors, after=1952 sectors
> State : clean
> Device UUID : 980038ac:99f4e8c6:39d91851:bdf6ed6d
>
> Update Time : Sun May 17 10:24:33 2020
> Bad Block Log : 512 entries available at offset -8 sectors
> Checksum : c3ce8290 - correct
> Events : 93088
>
> Device Role : Active device 1
> Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>
> I was wondering: is this because mdadm sets the size of the rdevs in the array
>
> before doing the expansion (in function Grow_reshape),
>
> which causes sb->data_size to differ from the actual raw device size
>
> and consequently leads to the bitmap chunksize calculation error,
>
> or is it simply a data type issue?
>
> Thanks,

