* Why can't I re-add my drive after partition shrink?
@ 2017-07-13 23:17 Ram Ramesh
  2017-07-13 23:37 ` Anthony Youngman
  2017-07-14  1:35 ` NeilBrown
  0 siblings, 2 replies; 10+ messages in thread
From: Ram Ramesh @ 2017-07-13 23:17 UTC (permalink / raw)
  To: Linux Raid

I am trying to shrink my mdadm underlying disks/partitions as a way of 
reclaiming space after an md shrink operation. Here is my md status:
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md0 : active raid6 sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9] sdc1[10]
>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2 
> [6/6] [UUUUUU]
>       bitmap: 2/23 pages [8KB], 65536KB chunk
>
> unused devices: <none>

Here is my mdadm detail about /dev/sdb1
> Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
I fail, remove, and repartition /dev/sdb1 so that the new partition table 
looks like this (no change to the data, obviously):
>    New partition table:
>    Number  Start (sector)    End (sector)  Size       Code  Name
>        1            2048      6442452991   3.0 TiB     FD00  Linux RAID
>        2      6442452992     11721045134   2.5 TiB     FD00  Linux RAID

Since the new /dev/sdb1 is significantly bigger than what mdadm says it 
is using, I thought I could simply re-add the drive. However, I get:
> sudo mdadm /dev/md0 --re-add /dev/sdb1
> ***mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible***
Why? What did I miss here? Is it possible to fix this so that I can 
repartition other drives and --re-add them without having to go through a 
full rebuild after each change?

I researched as much as I could on the net and came up with nothing 
except someone saying that mdadm keeps something at the end of the disk 
regardless of what it says about "Used Dev Size." Is it possible to move 
this info so that I could re-add?

Ramesh


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Why can't I re-add my drive after partition shrink?
  2017-07-13 23:17 Why can't I re-add my drive after partition shrink? Ram Ramesh
@ 2017-07-13 23:37 ` Anthony Youngman
  2017-07-14  1:35 ` NeilBrown
  1 sibling, 0 replies; 10+ messages in thread
From: Anthony Youngman @ 2017-07-13 23:37 UTC (permalink / raw)
  To: Ram Ramesh, Linux Raid

On 14/07/17 00:17, Ram Ramesh wrote:
> I am trying to shrink my mdadm underlying disks/partitions as a way of 
> reclaiming space after an md shrink operation. Here is my md status:
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
>> [raid4] [raid10]
>> md0 : active raid6 sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9] sdc1[10]
>>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2 
>> [6/6] [UUUUUU]
>>       bitmap: 2/23 pages [8KB], 65536KB chunk
>>
>> unused devices: <none>
> 
> Here is my mdadm detail about /dev/sdb1
>> Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
> I fail, remove, and repartition /dev/sdb1 so that the new partition table 
> looks like this (no change to the data, obviously):
>>    New partition table:
>>    Number  Start (sector)    End (sector)  Size       Code  Name
>>        1            2048      6442452991   3.0 TiB     FD00  Linux RAID
>>        2      6442452992     11721045134   2.5 TiB     FD00  Linux RAID
> 
> Since the new /dev/sdb1 is significantly bigger than what mdadm says it 
> is using, I thought I could simply re-add the drive. However, I get:
>> sudo mdadm /dev/md0 --re-add /dev/sdb1
>> ***mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible***
> Why? What did I miss here? Is it possible to fix this so that I can 
> repartition other drives and --re-add them without having to go through a 
> full rebuild after each change?
> 
> I researched as much as I could on the net and came up with nothing 
> except someone saying that mdadm keeps something at the end of the disk 
> regardless of what it says about "Used Dev Size." Is it possible to move 
> this info so that I could re-add?
> 
I can't say why it won't re-add, but what's at the end of the disk is 
the superblock - except in your case it isn't at the end. The v0.90 and 
v1.0 superblocks are at the end, but you've got a v1.2 superblock, which 
is stored at the start - to be precise, 4K in from the front of the partition.

Cheers,
Wol
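
A compact way to remember those locations (a simplified sketch based on the
md(4) layout rules; the exact offsets are assumptions here - v0.90 is really
64K-aligned near the end, and v1.0 sits at least 8K from the end):

```python
K = 1024

def superblock_offset(version: str, dev_size: int) -> int:
    """Approximate byte offset of the md superblock on a member device
    of dev_size bytes, by metadata version (simplified sketch)."""
    if version == "0.90":
        # in the last 64K-aligned 64 KiB block of the device
        return (dev_size // (64 * K)) * (64 * K) - 64 * K
    if version == "1.0":
        return dev_size - 8 * K   # near the end of the device
    if version == "1.1":
        return 0                  # very start of the device
    if version == "1.2":
        return 4 * K              # 4 KiB in from the start
    raise ValueError(f"unknown metadata version: {version}")

# A v1.2 superblock never moves when the partition is shrunk from the end.
assert superblock_offset("1.2", 6 * 10**12) == 4096
```

The practical upshot for this thread: with v1.2 metadata, shrinking a
partition from the tail end never touches the superblock itself.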


* Re: Why can't I re-add my drive after partition shrink?
  2017-07-13 23:17 Why can't I re-add my drive after partition shrink? Ram Ramesh
  2017-07-13 23:37 ` Anthony Youngman
@ 2017-07-14  1:35 ` NeilBrown
  2017-07-15  0:17   ` Ram Ramesh
  1 sibling, 1 reply; 10+ messages in thread
From: NeilBrown @ 2017-07-14  1:35 UTC (permalink / raw)
  To: Ram Ramesh, Linux Raid

On Thu, Jul 13 2017, Ram Ramesh wrote:

> I am trying to shrink my mdadm underlying disks/partitions as a way of 
> reclaiming space after an md shrink operation. Here is my md status:
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
>> [raid4] [raid10]
>> md0 : active raid6 sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9] sdc1[10]
>>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2 
>> [6/6] [UUUUUU]
>>       bitmap: 2/23 pages [8KB], 65536KB chunk
>>
>> unused devices: <none>
>
> Here is my mdadm detail about /dev/sdb1
>> Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
> I fail, remove, and repartition /dev/sdb1 so that the new partition table 
> looks like this (no change to the data, obviously):
>>    New partition table:
>>    Number  Start (sector)    End (sector)  Size       Code  Name
>>        1            2048      6442452991   3.0 TiB     FD00  Linux RAID
>>        2      6442452992     11721045134   2.5 TiB     FD00  Linux RAID
>
> Since the new /dev/sdb1 is significantly bigger than what mdadm says it 
> is using, I thought I could simply re-add the drive. However, I get:
>> sudo mdadm /dev/md0 --re-add /dev/sdb1
>> ***mdadm: --re-add for /dev/sdb1 to /dev/md0 is not possible***
> Why? What did I miss here? Is it possible to fix this so that I can 
> repartition other drives and --re-add them without having to go through a 
> full rebuild after each change?

Please report output of "mdadm --examine" on both a device that is active
in the array, and the device that you are trying to add.
Also "mdadm --examine-bitmap" of a device that is active in the array.

NeilBrown

>
> I researched as much as I could on the net and came up with nothing 
> except someone saying that mdadm keeps something at the end of the disk 
> regardless of what it says about "Used Dev Size." Is it possible to move 
> this info so that I could re-add?
>
> Ramesh
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



* Re: Why can't I re-add my drive after partition shrink?
  2017-07-14  1:35 ` NeilBrown
@ 2017-07-15  0:17   ` Ram Ramesh
  2017-07-16 22:37     ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Ram Ramesh @ 2017-07-15  0:17 UTC (permalink / raw)
  To: NeilBrown, Linux Raid

On 07/13/2017 08:35 PM, NeilBrown wrote:
> <snip>
> Please report output of "mdadm --examine" on both a device that is active
> in the array, and the device that you are trying to add.
> Also "mdadm --examine-bitmap" of a device that is active in the array.
>
> NeilBrown
>
>> I researched as much as I could on the net and came up with nothing
>> except someone saying that mdadm keeps something at the end of the disk
>> regardless of what it says about "Used Dev Size." Is it possible to move
>> this info so that I could re-add?
>>
>> Ramesh
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
Neil,

   After the problem, I did not want to leave my md in a degraded 
state, so I added my drive back and paid the penalty of rebuilding. I 
have other disks that need to be resized and *can get you what you 
want*. Please let me know if that is what you meant. If you wanted the 
current info after successfully rebuilding the array after a regular 
add, it is below.

First the device that is currently in after a regular add.
> zym [rramesh] 497 > sudo mdadm --examine /dev/sdb1
> /dev/sdb1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>            Name : zym:0  (local to host zym)
>   Creation Time : Mon Apr 22 00:08:12 2013
>      Raid Level : raid6
>    Raid Devices : 6
>
>  Avail Dev Size : 6442188800 (3071.88 GiB 3298.40 GB)
>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 702ca77d:564d69ff:e45d9679:64c314fa
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Jul 14 18:36:37 2017
>        Checksum : c5502356 - correct
>          Events : 297182
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>    Device Role : Active device 4
>    Array State : AAAAAA ('A' == active, '.' == missing)

Member disk that has been in the array before /dev/sdb1 resize
> zym [rramesh] 498 > sudo mdadm --examine /dev/sdc1
> /dev/sdc1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>            Name : zym:0  (local to host zym)
>   Creation Time : Mon Apr 22 00:08:12 2013
>      Raid Level : raid6
>    Raid Devices : 6
>
>  Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 7e035b56:d1e1882b:e78a08ad:3ba50667
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Jul 14 18:36:37 2017
>        Checksum : a5288a4c - correct
>          Events : 297182
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>    Device Role : Active device 2
>    Array State : AAAAAA ('A' == active, '.' == missing)

Now examine bit map on /dev/sdb1 (device that got in after a regular add)
> zym [rramesh] 499 > sudo mdadm --examine-bitmap /dev/sdb1
>         Filename : /dev/sdb1
>            Magic : 6d746962
>          Version : 4
>             UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>           Events : 297182
>   Events Cleared : 297182
>            State : OK
>        Chunksize : 64 MB
>           Daemon : 5s flush period
>       Write Mode : Normal
>        Sync Size : 3087007744 (2944.00 GiB 3161.10 GB)
>           Bitmap : 47104 bits (chunks), 0 dirty (0.0%)

Ramesh


* Re: Why can't I re-add my drive after partition shrink?
  2017-07-15  0:17   ` Ram Ramesh
@ 2017-07-16 22:37     ` NeilBrown
  2017-07-17  5:15       ` Ram Ramesh
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2017-07-16 22:37 UTC (permalink / raw)
  To: Ram Ramesh, Linux Raid

On Fri, Jul 14 2017, Ram Ramesh wrote:

> On 07/13/2017 08:35 PM, NeilBrown wrote:
>> <snip>
>> Please report output of "mdadm --examine" on both a device that is active
>> in the array, and the device that you are trying to add.
>> Also "mdadm --examine-bitmap" of a device that is active in the array.
>>
>> NeilBrown
>>
>>> I researched as much as I could on the net and came up with nothing
>>> except someone saying that mdadm keeps something at the end of the disk
>>> regardless of what it says about "Used Dev Size." Is it possible to move
>>> this info so that I could re-add?
>>>
>>> Ramesh
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Neil,
>
>    After the problem, I did not want to leave my md in a degraded 
> state, so I added my drive back and paid the penalty of rebuilding. I 
> have other disks that need to be resized and *can get you what you 
> want*. Please let me know if that is what you meant. If you wanted the 
> current info after successfully rebuilding the array after a regular 
> add, it is below.

I only requested the information because it might help fix, or explain,
your difficulty.  If you don't currently have a difficulty, then I don't
need to look at any details.

Thanks,
NeilBrown



* Re: Why can't I re-add my drive after partition shrink?
  2017-07-16 22:37     ` NeilBrown
@ 2017-07-17  5:15       ` Ram Ramesh
  2017-07-19 20:55         ` Ram Ramesh
  0 siblings, 1 reply; 10+ messages in thread
From: Ram Ramesh @ 2017-07-17  5:15 UTC (permalink / raw)
  To: NeilBrown, Linux Raid

On 07/16/2017 05:37 PM, NeilBrown wrote:
> On Fri, Jul 14 2017, Ram Ramesh wrote:
>
>> On 07/13/2017 08:35 PM, NeilBrown wrote:
>>> <snip>
>>> Please report output of "mdadm --examine" on both a device that is active
>>> in the array, and the device that you are trying to add.
>>> Also "mdadm --examine-bitmap" of a device that is active in the array.
>>>
>>> NeilBrown
>>>
>>>> I researched as much as I could on the net and came up with nothing
>>>> except someone saying that mdadm keeps something at the end of the disk
>>>> regardless of what it says about "Used Dev Size." Is it possible to move
>>>> this info so that I could re-add?
>>>>
>>>> Ramesh
>>>>
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Neil,
>>
>>     After the problem, I did not want to leave my md in a degraded
>> state, so I added my drive back and paid the penalty of rebuilding. I
>> have other disks that need to be resized and *can get you what you
>> want*. Please let me know if that is what you meant. If you wanted the
>> current info after successfully rebuilding the array after a regular
>> add, it is below.
> I only requested the information because it might help fix, or explain,
> your difficulty.  If you don't currently have a difficulty, then I don't
> need to look at any details.
>
> Thanks,
> NeilBrown

Thanks for your time. Yes, I still have the problem, as I need to 
shrink the other 5 disks in the array, and I would like to re-add 
rather than add and rebuild each time.

The host with the array is currently busy, and I will get this info 
tomorrow when I attempt the process on my next hard drive.

Ramesh



* Re: Why can't I re-add my drive after partition shrink?
  2017-07-17  5:15       ` Ram Ramesh
@ 2017-07-19 20:55         ` Ram Ramesh
  2017-07-19 23:14           ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Ram Ramesh @ 2017-07-19 20:55 UTC (permalink / raw)
  To: NeilBrown, Linux Raid

<snip>
>>>     After the problem, I did not want to leave my md in a degraded 
>>> state, so I added my drive back and paid the penalty of rebuilding. I 
>>> have other disks that need to be resized and *can get you what you 
>>> want*. Please let me know if that is what you meant. If you wanted the 
>>> current info after successfully rebuilding the array after a regular 
>>> add, it is below.
>> I only requested the information because it might help fix, or explain,
>> your difficulty.  If you don't currently have a difficulty, then I don't
>> need to look at any details.
>>
>> Thanks,
>> NeilBrown
>
> Thanks for your time. Yes, I still have the problem, as I need to 
> shrink the other 5 disks in the array, and I would like to re-add 
> rather than add and rebuild each time.
>
> The host with the array is currently busy, and I will get this info 
> tomorrow when I attempt the process on my next hard drive.
>
> Ramesh
>
Here is my attempt to repeat the remove, repartition, re-add steps from 
my last attempt. Last time I did it on /dev/sdb; now I am going to do it 
on /dev/sdc. Note that I have not been successful, as you can see at the 
end. I am going to keep the array degraded so that I can still get the 
old info from /dev/sdc1 if you need anything else. I will keep it this 
way till tomorrow and then add the device for md to rebuild. Please ask 
for anything else before that, or send me a note to keep the array 
degraded so that you can examine /dev/sdc1 further.

<start>
<current-status>

> zym [rramesh] 251 > cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md0 : active raid6 sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9] sdc1[10]
>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2 
> [6/6] [UUUUUU]
>       bitmap: 0/23 pages [0KB], 65536KB chunk
>
> unused devices: <none>

<sdc partitions before any changes>
> zym [rramesh] 252 > sudo gdisk -l /dev/sdc
> GPT fdisk (gdisk) version 0.8.8
>
> Partition table scan:
>   MBR: protective
>   BSD: not present
>   APM: not present
>   GPT: present
>
> Found valid GPT with protective MBR; using GPT.
> Disk /dev/sdc: 11721045168 sectors, 5.5 TiB
> Logical sector size: 512 bytes
> Disk identifier (GUID): EF5E7965-FC30-4137-9DDC-1B2C7966B936
> Partition table holds up to 128 entries
> First usable sector is 34, last usable sector is 11721045134
> Partitions will be aligned on 2048-sector boundaries
> Total free space is 2014 sectors (1007.0 KiB)
>
> Number  Start (sector)    End (sector)  Size       Code  Name
>    1            2048     11721045134   5.5 TiB     FD00  Linux RAID

<sdc mdadm info before any changes>
> zym [rramesh] 253 > sudo mdadm --examine /dev/sdc1
> /dev/sdc1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>            Name : zym:0  (local to host zym)
>   Creation Time : Mon Apr 22 00:08:12 2013
>      Raid Level : raid6
>    Raid Devices : 6
>
>  Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 7e035b56:d1e1882b:e78a08ad:3ba50667
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Wed Jul 19 15:12:46 2017
>        Checksum : a52ef205 - correct
>          Events : 297182
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>    Device Role : Active device 2
>    Array State : AAAAAA ('A' == active, '.' == missing)
> zym [rramesh] 256 > sudo mdadm --examine-bitmap /dev/sdc1
>         Filename : /dev/sdc1
>            Magic : 6d746962
>          Version : 4
>             UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>           Events : 297182
>   Events Cleared : 297182
>            State : OK
>        Chunksize : 64 MB
>           Daemon : 5s flush period
>       Write Mode : Normal
>        Sync Size : 3087007744 (2944.00 GiB 3161.10 GB)
>           Bitmap : 47104 bits (chunks), 0 dirty (0.0%)

<removal and repartition begins>
> zym [rramesh] 254 > sudo  mdadm /dev/md0 --fail /dev/sdc1
> mdadm: set /dev/sdc1 faulty in /dev/md0
>
> zym [rramesh] 255 > sudo mdadm /dev/md0 --remove /dev/sdc1
> mdadm: hot removed /dev/sdc1 from /dev/md0
>
> zym [rramesh] 261 > gdisk /dev/sdc
>  <snip>
> Command (? for help): p
> <snip>
>
> Number  Start (sector)    End (sector)  Size       Code  Name
>    1            2048     11721045134   5.5 TiB     FD00  Linux RAID
>
> Command (? for help): d
> Using 1
>
> Command (? for help): n
> Partition number (1-128, default 1):
> First sector (34-11721045134, default = 2048) or {+-}size{KMGTP}:
> Last sector (2048-11721045134, default = 11721045134) or 
> {+-}size{KMGTP}: 6442452991
> Current type is 'Linux filesystem'
> Hex code or GUID (L to show codes, Enter = 8300): FD00
> Changed type of partition to 'Linux RAID'
>
> Command (? for help): n
> Partition number (2-128, default 2):
> First sector (34-11721045134, default = 6442452992) or {+-}size{KMGTP}:
> Last sector (6442452992-11721045134, default = 11721045134) or 
> {+-}size{KMGTP}:
> Current type is 'Linux filesystem'
> Hex code or GUID (L to show codes, Enter = 8300): FD00
> Changed type of partition to 'Linux RAID'
>
> Command (? for help): p
> Disk /dev/sdc: 11721045168 sectors, 5.5 TiB
> Logical sector size: 512 bytes
> Disk identifier (GUID): EF5E7965-FC30-4137-9DDC-1B2C7966B936
> Partition table holds up to 128 entries
> First usable sector is 34, last usable sector is 11721045134
> Partitions will be aligned on 2048-sector boundaries
> Total free space is 2014 sectors (1007.0 KiB)
>
> Number  Start (sector)    End (sector)  Size       Code  Name
>    1            2048      6442452991   3.0 TiB     FD00  Linux RAID
>    2      6442452992     11721045134   2.5 TiB     FD00  Linux RAID
>
> Command (? for help): w
>
> Final checks complete. About to write GPT data. THIS WILL OVERWRITE 
> EXISTING
> PARTITIONS!!
>
> Do you want to proceed? (Y/N): Y
> OK; writing new GUID partition table (GPT) to /dev/sdc.
> The operation has completed successfully.
>

<good device that is still in md0>
> zym [rramesh] 264 > cat /proc/partitions |fgrep sdb
>    8       16 5860522584 sdb
>    8       17 3221225472 sdb1
>    8       18 2639296071 sdb2

<device just removed and repartitioned>
> zym [rramesh] 271 > cat /proc/partitions |fgrep sdc
>    8       32 5860522584 sdc
>    8       33 3221225472 sdc1
>    8       34 2639296071 sdc2
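
As a cross-check on the two listings above (illustrative arithmetic only):
gdisk end sectors are inclusive, and /proc/partitions reports 1 KiB blocks,
so the two views agree exactly:

```python
SECTOR = 512  # bytes; logical sector size reported by gdisk above

def part_sectors(first: int, last: int) -> int:
    """Sector count of a GPT partition; 'last' is inclusive in gdisk output."""
    return last - first + 1

p1 = part_sectors(2048, 6442452991)           # new 3.0 TiB partition
p2 = part_sectors(6442452992, 11721045134)    # new 2.5 TiB partition

assert p1 * SECTOR == 3 * 2**40               # exactly 3 TiB
assert p1 * SECTOR // 1024 == 3221225472      # sdc1 in /proc/partitions (1K blocks)
assert p2 * SECTOR // 1024 == 2639296071      # sdc2 (truncated to whole KiB)
```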

<good device still in md0>
> zym [rramesh] 265 > sudo  mdadm --examine /dev/sdb1
> /dev/sdb1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>            Name : zym:0  (local to host zym)
>   Creation Time : Mon Apr 22 00:08:12 2013
>      Raid Level : raid6
>    Raid Devices : 6
>
>  Avail Dev Size : 6442188800 (3071.88 GiB 3298.40 GB)
>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 702ca77d:564d69ff:e45d9679:64c314fa
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Wed Jul 19 15:15:00 2017
>        Checksum : c5578b94 - correct
>          Events : 297185
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>    Device Role : Active device 4
>    Array State : AA.AAA ('A' == active, '.' == missing)

<good device still in md0>
> zym [rramesh] 266 > sudo mdadm --examine-bitmap  /dev/sdb1
>         Filename : /dev/sdb1
>            Magic : 6d746962
>          Version : 4
>             UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>           Events : 297185
>   Events Cleared : 297182
>            State : OK
>        Chunksize : 64 MB
>           Daemon : 5s flush period
>       Write Mode : Normal
>        Sync Size : 3087007744 (2944.00 GiB 3161.10 GB)
>           Bitmap : 47104 bits (chunks), 0 dirty (0.0%)

<device just removed and repartitioned>
> zym [rramesh] 267 > sudo mdadm --examine /dev/sdc1
> /dev/sdc1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>            Name : zym:0  (local to host zym)
>   Creation Time : Mon Apr 22 00:08:12 2013
>      Raid Level : raid6
>    Raid Devices : 6
>
>  Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>      Array Size : 12348030976 (11776.00 GiB 12644.38 GB)
>   Used Dev Size : 6174015488 (2944.00 GiB 3161.10 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 7e035b56:d1e1882b:e78a08ad:3ba50667
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Wed Jul 19 15:12:46 2017
>        Checksum : a52ef205 - correct
>          Events : 297182
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>    Device Role : Active device 2
>    Array State : AAAAAA ('A' == active, '.' == missing)

<device just removed and repartitioned>
> zym [rramesh] 268 > sudo mdadm --examine-bitmap /dev/sdc1
>         Filename : /dev/sdc1
>            Magic : 6d746962
>          Version : 4
>             UUID : 0e9f76b5:4a89171a:a930bccd:78749144
>           Events : 297182
>   Events Cleared : 297182
>            State : OK
>        Chunksize : 64 MB
>           Daemon : 5s flush period
>       Write Mode : Normal
>        Sync Size : 3087007744 (2944.00 GiB 3161.10 GB)
>           Bitmap : 47104 bits (chunks), 0 dirty (0.0%)
> zym [rramesh] 269 > cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md0 : active raid6 sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9]
>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2 
> [6/5] [UU_UUU]
>       bitmap: 0/23 pages [0KB], 65536KB chunk
>
> unused devices: <none>


<Cannot re-add!!!!>
> zym [rramesh] 270 > sudo mdadm /dev/md0 --re-add /dev/sdc1
> mdadm: --re-add for /dev/sdc1 to /dev/md0 is not possible

I have not added this device yet and I am keeping the array degraded, 
just in case you need anything else. I will do so till tomorrow. After 
that I will simply add the device so that it will rebuild unless you ask 
for delay or additional info.

Ramesh


* Re: Why can't I re-add my drive after partition shrink?
  2017-07-19 20:55         ` Ram Ramesh
@ 2017-07-19 23:14           ` NeilBrown
  2017-07-20  0:39             ` Ram Ramesh
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2017-07-19 23:14 UTC (permalink / raw)
  To: Ram Ramesh, Linux Raid

On Wed, Jul 19 2017, Ram Ramesh wrote:

> Here is my attempt to repeat the remove, repartition, re-add steps from 
> my last attempt. Last time I did it on /dev/sdb; now I am going to do it 
> on /dev/sdc. Note that I have not been successful, as you can see at the 
> end. I am going to keep the array degraded so that I can still get the 
> old info from /dev/sdc1 if you need anything else. I will keep it this 
> way till tomorrow and then add the device for md to rebuild. Please ask 
> for anything else before that, or send me a note to keep the array 
> degraded so that you can examine /dev/sdc1 further.

Thanks.  I *love* getting all the details.  You cannot send too many
details!

This:
> <good device still in md0>
>> zym [rramesh] 265 > sudo  mdadm --examine /dev/sdb1
>> /dev/sdb1:
..
>>  Avail Dev Size : 6442188800 (3071.88 GiB 3298.40 GB)

and this:

> <device just removed and repartitioned>
>> zym [rramesh] 267 > sudo mdadm --examine /dev/sdc1
>> /dev/sdc1:
...
>>  Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)

Shows the key difference.  "Avail Dev Size", aka sb->data_size, is
wrong.  We can fix it.

>
> <Cannot re-add!!!!>
>> zym [rramesh] 270 > sudo mdadm /dev/md0 --re-add /dev/sdc1
>> mdadm: --re-add for /dev/sdc1 to /dev/md0 is not possible

Please try
   sudo mdadm /dev/md0 --re-add /dev/sdc1 --update=devicesize
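
For anyone following along, the arithmetic behind that diagnosis can be
sketched as follows, assuming Avail Dev Size is simply the partition size
minus the Data Offset (the figures quoted in this thread bear that out):

```python
DATA_OFFSET = 262144  # sectors, from the --examine output above

def avail_dev_size(first: int, last: int, data_offset: int = DATA_OFFSET) -> int:
    """Sectors available for data: inclusive partition size minus data offset."""
    return (last - first + 1) - data_offset

stale = 11720780943                           # sb->data_size recorded on old sdc1
new_avail = avail_dev_size(2048, 6442452991)  # what the shrunk partition holds
used = 6174015488                             # Used Dev Size: what the array needs

assert avail_dev_size(2048, 11721045134) == stale  # old whole-disk partition
assert new_avail == 6442188800    # matches sdb1 after its full rebuild
assert used <= new_avail          # the shrunk partition is still big enough,
assert stale > new_avail          # but the stale superblock claims more space
                                  # than now exists, so a plain --re-add refuses
```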

Thanks,
NeilBrown



* Re: Why can't I re-add my drive after partition shrink?
  2017-07-19 23:14           ` NeilBrown
@ 2017-07-20  0:39             ` Ram Ramesh
  2017-07-20 10:23               ` Wols Lists
  0 siblings, 1 reply; 10+ messages in thread
From: Ram Ramesh @ 2017-07-20  0:39 UTC (permalink / raw)
  To: NeilBrown, Linux Raid

On 07/19/2017 06:14 PM, NeilBrown wrote:
> On Wed, Jul 19 2017, Ram Ramesh wrote:
>
>> Here is my attempt to repeat the remove, repartition, re-add steps from
>> my last attempt. Last time I did it on /dev/sdb; now I am going to do it
>> on /dev/sdc. Note that I have not been successful, as you can see at the
>> end. I am going to keep the array degraded so that I can still get the
>> old info from /dev/sdc1 if you need anything else. I will keep it this
>> way till tomorrow and then add the device for md to rebuild. Please ask
>> for anything else before that, or send me a note to keep the array
>> degraded so that you can examine /dev/sdc1 further.
> Thanks.  I *love* getting all the details.  You cannot send too many
> details!
>
> This:
>> <good device still in md0>
>>> zym [rramesh] 265 > sudo  mdadm --examine /dev/sdb1
>>> /dev/sdb1:
> ..
>>>   Avail Dev Size : 6442188800 (3071.88 GiB 3298.40 GB)
> and this:
>
>> <device just removed and repartitioned>
>>> zym [rramesh] 267 > sudo mdadm --examine /dev/sdc1
>>> /dev/sdc1:
> ...
>>>   Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
> Shows the key difference.  "Avail Dev Size", aka sb->data_size, is
> wrong.  We can fix it.
>
>> <Cannot re-add!!!!>
>>> zym [rramesh] 270 > sudo mdadm /dev/md0 --re-add /dev/sdc1
>>> mdadm: --re-add for /dev/sdc1 to /dev/md0 is not possible
> Please try
>     sudo mdadm /dev/md0 --re-add /dev/sdc1 --update=devicesize
>
> Thanks,
> NeilBrown

Neil,

Thanks a ton. That does it. It got re-added without any issue. It is 
rebuilding because the array was used to record two TV programs while it 
was in a degraded state. But the re-add was accepted.

> zym [rramesh] 274 > sudo mdadm /dev/md0 --re-add /dev/sdc1 
> --update=devicesize
> Size was 11720780943
> Size is 6442188800
> mdadm: re-added /dev/sdc1
> zym [rramesh] 275 > cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
> [raid4] [raid10]
> md0 : active raid6 sdc1[10] sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9]
>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2 
> [6/5] [UU_UUU]
>       [========>............]  recovery = 42.6% 
> (1316769920/3087007744) finish=292.2min speed=100952K/sec
>       bitmap: 2/23 pages [8KB], 65536KB chunk
>
> unused devices: <none>
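
As an aside, the progress and ETA in that mdstat output follow directly
from the reported numbers (counts are in 1 KiB blocks; mdstat truncates
to one decimal place):

```python
done, total = 1316769920, 3087007744   # recovery position / sync size, 1K blocks
speed = 100952                         # K/sec, as reported by mdstat

pct = 100 * done / total               # fraction recovered so far
eta_min = (total - done) / speed / 60  # minutes left at the current speed

assert 42.6 <= pct < 42.7        # shown as "recovery = 42.6%"
assert 292.2 <= eta_min < 292.3  # shown as "finish=292.2min"
```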

Wol,

    If you read this, this may be worth a mention on the wiki page.

Ramesh



* Re: Why can't I re-add my drive after partition shrink?
  2017-07-20  0:39             ` Ram Ramesh
@ 2017-07-20 10:23               ` Wols Lists
  0 siblings, 0 replies; 10+ messages in thread
From: Wols Lists @ 2017-07-20 10:23 UTC (permalink / raw)
  To: Ram Ramesh, NeilBrown, Linux Raid

On 20/07/17 01:39, Ram Ramesh wrote:
> On 07/19/2017 06:14 PM, NeilBrown wrote:
>> On Wed, Jul 19 2017, Ram Ramesh wrote:
>>
>>> Here is my attempt to repeat the remove, repartition, re-add steps from
>>> my last attempt. Last time I did it on /dev/sdb; now I am going to do it
>>> on /dev/sdc. Note that I have not been successful, as you can see at the
>>> end. I am going to keep the array degraded so that I can still get the
>>> old info from /dev/sdc1 if you need anything else. I will keep it this
>>> way till tomorrow and then add the device for md to rebuild. Please ask
>>> for anything else before that, or send me a note to keep the array
>>> degraded so that you can examine /dev/sdc1 further.
>> Thanks.  I *love* getting all the details.  You cannot send too many
>> details!
>>
>> This:
>>> <good device still in md0>
>>>> zym [rramesh] 265 > sudo  mdadm --examine /dev/sdb1
>>>> /dev/sdb1:
>> ..
>>>>   Avail Dev Size : 6442188800 (3071.88 GiB 3298.40 GB)
>> and this:
>>
>>> <device just removed and repartitioned>
>>>> zym [rramesh] 267 > sudo mdadm --examine /dev/sdc1
>>>> /dev/sdc1:
>> ...
>>>>   Avail Dev Size : 11720780943 (5588.90 GiB 6001.04 GB)
>> Shows the key difference.  "Avail Dev Size", aka sb->data_size, is
>> wrong.  We can fix it.
>>
>>> <Cannot re-add!!!!>
>>>> zym [rramesh] 270 > sudo mdadm /dev/md0 --re-add /dev/sdc1
>>>> mdadm: --re-add for /dev/sdc1 to /dev/md0 is not possible
>> Please try
>>     sudo mdadm /dev/md0 --re-add /dev/sdc1 --update=devicesize
>>
>> Thanks,
>> NeilBrown
> 
> Neil,
> 
> Thanks a ton. That does it. It got re-added without any issue. It is
> rebuilding because the array was used to record two TV programs while it
> was in a degraded state. But the re-add was accepted.
> 
>> zym [rramesh] 274 > sudo mdadm /dev/md0 --re-add /dev/sdc1
>> --update=devicesize
>> Size was 11720780943
>> Size is 6442188800
>> mdadm: re-added /dev/sdc1
>> zym [rramesh] 275 > cat /proc/mdstat
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> md0 : active raid6 sdc1[10] sdb1[6] sdg1[11] sdd1[12] sdf1[8] sde1[9]
>>       12348030976 blocks super 1.2 level 6, 64k chunk, algorithm 2
>> [6/5] [UU_UUU]
>>       [========>............]  recovery = 42.6%
>> (1316769920/3087007744) finish=292.2min speed=100952K/sec
>>       bitmap: 2/23 pages [8KB], 65536KB chunk
>>
>> unused devices: <none>
> 
> Wol,
> 
>    If you read this, this may be worth a mention on the wiki page.
> 
> Ramesh
> 
Got that :-)

I'll have to think about how to do that - probably a section on the use of
--update to fix problems. Anyway, I've marked this email so that when I work
my way through stuff I'll find it :-)

Cheers,
Wol


