* problem after growing
@ 2013-02-13 17:04 Rémi Cailletaud
  2013-02-13 17:20 ` Eric Sandeen
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-13 17:04 UTC (permalink / raw)
  To: xfs

Hi,

I'm facing a strange and scary issue. I just grew an XFS filesystem (44 TB),
and now there is no way to mount it anymore:
XFS: device supports only 4096 byte sectors (not 512)

# xfs_check /dev/vg0/tomo-201111
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_check.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

# xfs_repair -L /dev/vg0/tomo-201111
xfs_repair: warning - cannot set blocksize 512 on block device 
/dev/vg0/tomo-201111: Argument invalide
Phase 1 - find and verify superblock...
superblock read failed, offset 1099511623680, size 2048, ag 1, rval -1

fatal error -- Invalid argument

The configuration is as follows:

LVM: 3 PVs, 1 VG

The LV containing the XFS filesystem is spread across several extents:

   tomo-201111 vg0  -wi-ao    1 linear  15,34t /dev/sda:5276160-9298322
   tomo-201111 vg0  -wi-ao    1 linear  18,66t /dev/sdb:0-4890732
   tomo-201111 vg0  -wi-ao    1 linear   8,81t /dev/sdb:6987885-9298322
   tomo-201111 vg0  -wi-ao    1 linear   1,19t /dev/sdc:2883584-3194585

Before growing the fs, I ran lvextend, and new extents on /dev/sdc were
used. I can't see how that caused this issue... I saw there can be problems
with the underlying device (an ARECA 1880). With xfs_db, I found this strange:
  "logsectsize = 0"

# xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
magicnum = 0x58465342
blocksize = 4096
dblocks = 10468982745
rblocks = 0
rextents = 0
uuid = 09793bea-952b-44fa-be71-02f59e69b41b
logstart = 1342177284
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 268435455
agcount = 39
rbmblocks = 0
logblocks = 521728
versionnum = 0xb4b4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 0
imax_pct = 5
icount = 6233280
ifree = 26
fdblocks = 1218766953
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0xa
bad_features2 = 0xa


Any ideas?

Cheers,
rémi

-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 17:04 problem after growing Rémi Cailletaud
@ 2013-02-13 17:20 ` Eric Sandeen
  2013-02-13 17:27   ` Rémi Cailletaud
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Sandeen @ 2013-02-13 17:20 UTC (permalink / raw)
  To: Rémi Cailletaud; +Cc: xfs

On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
> Hi,
> 
> I face a strange and scary issue. I just grow a xfs filesystem (44To), and no way to mount it anymore :
> XFS: device supports only 4096 byte sectors (not 512)

Did you expand an LV made of 512-sector physical devices by adding 4k-sector physical devices?

that's probably not something we anticipate or check for....

What sector size(s) are the actual lowest level disks under all the lvm pieces?

-Eric

> # xfs_check /dev/vg0/tomo-201111
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check.  If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
> 
> # xfs_repair -L /dev/vg0/tomo-201111
> xfs_repair: warning - cannot set blocksize 512 on block device /dev/vg0/tomo-201111: Argument invalide
> Phase 1 - find and verify superblock...
> superblock read failed, offset 1099511623680, size 2048, ag 1, rval -1
> 
> fatal error -- Invalid argument
> 
> Conf is as follow :
> 
> LVM : 3pv - 1vg
> 
> the lv containing the xfs system is on several extents :
> 
>   tomo-201111 vg0  -wi-ao    1 linear  15,34t /dev/sda:5276160-9298322
>   tomo-201111 vg0  -wi-ao    1 linear  18,66t /dev/sdb:0-4890732
>   tomo-201111 vg0  -wi-ao    1 linear   8,81t /dev/sdb:6987885-9298322
>   tomo-201111 vg0  -wi-ao    1 linear   1,19t /dev/sdc:2883584-3194585
> 
> before growing fs, I lvextend the vg, and a new extents on /dev/sdc was used. I cant think it caused this issue... I saw there can be problem with underlying device (an ARECA 1880). With xfs_db, I found this strange :
>  "logsectsize = 0"
> 
> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
> magicnum = 0x58465342
> blocksize = 4096
> dblocks = 10468982745
> rblocks = 0
> rextents = 0
> uuid = 09793bea-952b-44fa-be71-02f59e69b41b
> logstart = 1342177284
> rootino = 128
> rbmino = 129
> rsumino = 130
> rextsize = 1
> agblocks = 268435455
> agcount = 39
> rbmblocks = 0
> logblocks = 521728
> versionnum = 0xb4b4
> sectsize = 512
> inodesize = 256
> inopblock = 16
> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
> blocklog = 12
> sectlog = 9
> inodelog = 8
> inopblog = 4
> agblklog = 28
> rextslog = 0
> inprogress = 0
> imax_pct = 5
> icount = 6233280
> ifree = 26
> fdblocks = 1218766953
> frextents = 0
> uquotino = 0
> gquotino = 0
> qflags = 0
> flags = 0
> shared_vn = 0
> inoalignmt = 2
> unit = 0
> width = 0
> dirblklog = 0
> logsectlog = 0
> logsectsize = 0
> logsunit = 1
> features2 = 0xa
> bad_features2 = 0xa
> 
> 
> Any idea ?
> 
> Cheers,
> rémi
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 17:20 ` Eric Sandeen
@ 2013-02-13 17:27   ` Rémi Cailletaud
  2013-02-13 17:39     ` Eric Sandeen
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-13 17:27 UTC (permalink / raw)
  To: Eric Sandeen

On 13/02/2013 18:20, Eric Sandeen wrote:
> On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
>> Hi,
>>
>> I face a strange and scary issue. I just grow a xfs filesystem (44To), and no way to mount it anymore :
>> XFS: device supports only 4096 byte sectors (not 512)
> Did you expand an LV made of 512-sector physical devices by adding 4k-sector physical devices?

The three devices are on an ARECA 1880 card, but the last one was added later,
and I never checked the physical sector configuration on the card.
But yes, running fdisk, it seems that sda and sdb are 512 and sdc is
4k... :(

> that's probably not something we anticipate or check for....
>
> What sector size(s) are the actual lowest level disks under all the lvm pieces?

What command should I run to get this info?

rémi


>
> -Eric
>
>> # xfs_check /dev/vg0/tomo-201111
>> ERROR: The filesystem has valuable metadata changes in a log which needs to
>> be replayed.  Mount the filesystem to replay the log, and unmount it before
>> re-running xfs_check.  If you are unable to mount the filesystem, then use
>> the xfs_repair -L option to destroy the log and attempt a repair.
>> Note that destroying the log may cause corruption -- please attempt a mount
>> of the filesystem before doing this.
>>
>> # xfs_repair -L /dev/vg0/tomo-201111
>> xfs_repair: warning - cannot set blocksize 512 on block device /dev/vg0/tomo-201111: Argument invalide
>> Phase 1 - find and verify superblock...
>> superblock read failed, offset 1099511623680, size 2048, ag 1, rval -1
>>
>> fatal error -- Invalid argument
>>
>> Conf is as follow :
>>
>> LVM : 3pv - 1vg
>>
>> the lv containing the xfs system is on several extents :
>>
>>    tomo-201111 vg0  -wi-ao    1 linear  15,34t /dev/sda:5276160-9298322
>>    tomo-201111 vg0  -wi-ao    1 linear  18,66t /dev/sdb:0-4890732
>>    tomo-201111 vg0  -wi-ao    1 linear   8,81t /dev/sdb:6987885-9298322
>>    tomo-201111 vg0  -wi-ao    1 linear   1,19t /dev/sdc:2883584-3194585
>>
>> before growing fs, I lvextend the vg, and a new extents on /dev/sdc was used. I cant think it caused this issue... I saw there can be problem with underlying device (an ARECA 1880). With xfs_db, I found this strange :
>>   "logsectsize = 0"
>>
>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>> magicnum = 0x58465342
>> blocksize = 4096
>> dblocks = 10468982745
>> rblocks = 0
>> rextents = 0
>> uuid = 09793bea-952b-44fa-be71-02f59e69b41b
>> logstart = 1342177284
>> rootino = 128
>> rbmino = 129
>> rsumino = 130
>> rextsize = 1
>> agblocks = 268435455
>> agcount = 39
>> rbmblocks = 0
>> logblocks = 521728
>> versionnum = 0xb4b4
>> sectsize = 512
>> inodesize = 256
>> inopblock = 16
>> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
>> blocklog = 12
>> sectlog = 9
>> inodelog = 8
>> inopblog = 4
>> agblklog = 28
>> rextslog = 0
>> inprogress = 0
>> imax_pct = 5
>> icount = 6233280
>> ifree = 26
>> fdblocks = 1218766953
>> frextents = 0
>> uquotino = 0
>> gquotino = 0
>> qflags = 0
>> flags = 0
>> shared_vn = 0
>> inoalignmt = 2
>> unit = 0
>> width = 0
>> dirblklog = 0
>> logsectlog = 0
>> logsectsize = 0
>> logsunit = 1
>> features2 = 0xa
>> bad_features2 = 0xa
>>
>>
>> Any idea ?
>>
>> Cheers,
>> rémi
>>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>


-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 17:27   ` Rémi Cailletaud
@ 2013-02-13 17:39     ` Eric Sandeen
  2013-02-13 17:44       ` Rémi Cailletaud
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Sandeen @ 2013-02-13 17:39 UTC (permalink / raw)
  To: Rémi Cailletaud, xfs-oss

On 2/13/13 11:27 AM, Rémi Cailletaud wrote:
> Le 13/02/2013 18:20, Eric Sandeen a écrit :
>> On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
>>> Hi,
>>>
>>> I face a strange and scary issue. I just grow a xfs filesystem (44To), and no way to mount it anymore :
>>> XFS: device supports only 4096 byte sectors (not 512)
>> Did you expand an LV made of 512-sector physical devices by adding 4k-sector physical devices?
> 
> The three devices are ARECA 1880 card, but the last one was added later, and I never check for sector physical configuration on card configuration.
> But yes, running fdisk, it seems that sda and sdb are 512, and sdc is 4k... :(
> 
>> that's probably not something we anticipate or check for....
>>
>> What sector size(s) are the actual lowest level disks under all the lvm pieces?

(re-cc'ing xfs list)

> What command to run to get this info ?

IIRC,

# blockdev --getpbsz --getss  /dev/sda

to print the physical & logical sector size

You can also look at, e.g.:
/sys/block/sda/queue/hw_sector_size
/sys/block/sda/queue/physical_block_size
/sys/block/sda/queue/logical_block_size
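
(A quick untested sketch to check all three devices at once -- adjust the
device names if yours differ:

# for d in sda sdb sdc; do echo "$d: $(cat /sys/block/$d/queue/logical_block_size) logical / $(cat /sys/block/$d/queue/physical_block_size) physical"; done
)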


I wonder what the recovery steps would be here.  I wouldn't do anything yet; I wish you hadn't already cleared the log, but oh well.

So you grew it, that all worked ok, you were able to copy new data into the new space, you unmounted it, but now it won't mount, correct?

-Eric



> rémi
> 
> 
>>
>> -Eric
>>
>>> # xfs_check /dev/vg0/tomo-201111
>>> ERROR: The filesystem has valuable metadata changes in a log which needs to
>>> be replayed.  Mount the filesystem to replay the log, and unmount it before
>>> re-running xfs_check.  If you are unable to mount the filesystem, then use
>>> the xfs_repair -L option to destroy the log and attempt a repair.
>>> Note that destroying the log may cause corruption -- please attempt a mount
>>> of the filesystem before doing this.
>>>
>>> # xfs_repair -L /dev/vg0/tomo-201111
>>> xfs_repair: warning - cannot set blocksize 512 on block device /dev/vg0/tomo-201111: Argument invalide
>>> Phase 1 - find and verify superblock...
>>> superblock read failed, offset 1099511623680, size 2048, ag 1, rval -1
>>>
>>> fatal error -- Invalid argument
>>>
>>> Conf is as follow :
>>>
>>> LVM : 3pv - 1vg
>>>
>>> the lv containing the xfs system is on several extents :
>>>
>>>    tomo-201111 vg0  -wi-ao    1 linear  15,34t /dev/sda:5276160-9298322
>>>    tomo-201111 vg0  -wi-ao    1 linear  18,66t /dev/sdb:0-4890732
>>>    tomo-201111 vg0  -wi-ao    1 linear   8,81t /dev/sdb:6987885-9298322
>>>    tomo-201111 vg0  -wi-ao    1 linear   1,19t /dev/sdc:2883584-3194585
>>>
>>> before growing fs, I lvextend the vg, and a new extents on /dev/sdc was used. I cant think it caused this issue... I saw there can be problem with underlying device (an ARECA 1880). With xfs_db, I found this strange :
>>>   "logsectsize = 0"
>>>
>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>> magicnum = 0x58465342
>>> blocksize = 4096
>>> dblocks = 10468982745
>>> rblocks = 0
>>> rextents = 0
>>> uuid = 09793bea-952b-44fa-be71-02f59e69b41b
>>> logstart = 1342177284
>>> rootino = 128
>>> rbmino = 129
>>> rsumino = 130
>>> rextsize = 1
>>> agblocks = 268435455
>>> agcount = 39
>>> rbmblocks = 0
>>> logblocks = 521728
>>> versionnum = 0xb4b4
>>> sectsize = 512
>>> inodesize = 256
>>> inopblock = 16
>>> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
>>> blocklog = 12
>>> sectlog = 9
>>> inodelog = 8
>>> inopblog = 4
>>> agblklog = 28
>>> rextslog = 0
>>> inprogress = 0
>>> imax_pct = 5
>>> icount = 6233280
>>> ifree = 26
>>> fdblocks = 1218766953
>>> frextents = 0
>>> uquotino = 0
>>> gquotino = 0
>>> qflags = 0
>>> flags = 0
>>> shared_vn = 0
>>> inoalignmt = 2
>>> unit = 0
>>> width = 0
>>> dirblklog = 0
>>> logsectlog = 0
>>> logsectsize = 0
>>> logsunit = 1
>>> features2 = 0xa
>>> bad_features2 = 0xa
>>>
>>>
>>> Any idea ?
>>>
>>> Cheers,
>>> rémi
>>>
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
>>
> 
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 17:39     ` Eric Sandeen
@ 2013-02-13 17:44       ` Rémi Cailletaud
  2013-02-13 17:52         ` Eric Sandeen
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-13 17:44 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss

On 13/02/2013 18:39, Eric Sandeen wrote:
> On 2/13/13 11:27 AM, Rémi Cailletaud wrote:
>> Le 13/02/2013 18:20, Eric Sandeen a écrit :
>>> On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
>>>> Hi,
>>>>
>>>> I face a strange and scary issue. I just grow a xfs filesystem (44To), and no way to mount it anymore :
>>>> XFS: device supports only 4096 byte sectors (not 512)
>>> Did you expand an LV made of 512-sector physical devices by adding 4k-sector physical devices?
>> The three devices are ARECA 1880 card, but the last one was added later, and I never check for sector physical configuration on card configuration.
>> But yes, running fdisk, it seems that sda and sdb are 512, and sdc is 4k... :(
>>
>>> that's probably not something we anticipate or check for....
>>>
>>> What sector size(s) are the actual lowest level disks under all the lvm pieces?
> (re-cc'ing xfs list)
>
>> What command to run to get this info ?
> IIRC,
>
> # blockdev --getpbsz --getss  /dev/sda
>
> to print the physical&  logical sector size
>
> You can also look at i.e.:
> /sys/block/sda/queue/hw_sector_size
> /sys/block/sda/queue/physical_block_size
> /sys/block/sda/queue/logical_block_size
ouch... nice guess :
#  blockdev --getpbsz --getss  /dev/sda
512
512
#  blockdev --getpbsz --getss  /dev/sdb
512
512
#  blockdev --getpbsz --getss  /dev/sdc
4096
4096


> I wonder what the recovery steps would be here.  I wouldn't do anything yet; I wish you hadn't already cleared the log, but oh well.

I tried an xfs_repair -L (as suggested by xfs_check), but it failed early,
as shown in my first post...
> So you grew it, that all worked ok, you were able to copy new data into the new space, you unmounted it, but now it won't mount, correct?
I was never able to copy data to the new space. I got an input/output error
just after growing.
Could pvmove-ing the extents from the 4k device onto a 512-byte-sector device be a solution?

rémi

> -Eric
>
>
>
>> rémi
>>
>>
>>> -Eric
>>>
>>>> # xfs_check /dev/vg0/tomo-201111
>>>> ERROR: The filesystem has valuable metadata changes in a log which needs to
>>>> be replayed.  Mount the filesystem to replay the log, and unmount it before
>>>> re-running xfs_check.  If you are unable to mount the filesystem, then use
>>>> the xfs_repair -L option to destroy the log and attempt a repair.
>>>> Note that destroying the log may cause corruption -- please attempt a mount
>>>> of the filesystem before doing this.
>>>>
>>>> # xfs_repair -L /dev/vg0/tomo-201111
>>>> xfs_repair: warning - cannot set blocksize 512 on block device /dev/vg0/tomo-201111: Argument invalide
>>>> Phase 1 - find and verify superblock...
>>>> superblock read failed, offset 1099511623680, size 2048, ag 1, rval -1
>>>>
>>>> fatal error -- Invalid argument
>>>>
>>>> Conf is as follow :
>>>>
>>>> LVM : 3pv - 1vg
>>>>
>>>> the lv containing the xfs system is on several extents :
>>>>
>>>>     tomo-201111 vg0  -wi-ao    1 linear  15,34t /dev/sda:5276160-9298322
>>>>     tomo-201111 vg0  -wi-ao    1 linear  18,66t /dev/sdb:0-4890732
>>>>     tomo-201111 vg0  -wi-ao    1 linear   8,81t /dev/sdb:6987885-9298322
>>>>     tomo-201111 vg0  -wi-ao    1 linear   1,19t /dev/sdc:2883584-3194585
>>>>
>>>> before growing fs, I lvextend the vg, and a new extents on /dev/sdc was used. I cant think it caused this issue... I saw there can be problem with underlying device (an ARECA 1880). With xfs_db, I found this strange :
>>>>    "logsectsize = 0"
>>>>
>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>> magicnum = 0x58465342
>>>> blocksize = 4096
>>>> dblocks = 10468982745
>>>> rblocks = 0
>>>> rextents = 0
>>>> uuid = 09793bea-952b-44fa-be71-02f59e69b41b
>>>> logstart = 1342177284
>>>> rootino = 128
>>>> rbmino = 129
>>>> rsumino = 130
>>>> rextsize = 1
>>>> agblocks = 268435455
>>>> agcount = 39
>>>> rbmblocks = 0
>>>> logblocks = 521728
>>>> versionnum = 0xb4b4
>>>> sectsize = 512
>>>> inodesize = 256
>>>> inopblock = 16
>>>> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
>>>> blocklog = 12
>>>> sectlog = 9
>>>> inodelog = 8
>>>> inopblog = 4
>>>> agblklog = 28
>>>> rextslog = 0
>>>> inprogress = 0
>>>> imax_pct = 5
>>>> icount = 6233280
>>>> ifree = 26
>>>> fdblocks = 1218766953
>>>> frextents = 0
>>>> uquotino = 0
>>>> gquotino = 0
>>>> qflags = 0
>>>> flags = 0
>>>> shared_vn = 0
>>>> inoalignmt = 2
>>>> unit = 0
>>>> width = 0
>>>> dirblklog = 0
>>>> logsectlog = 0
>>>> logsectsize = 0
>>>> logsunit = 1
>>>> features2 = 0xa
>>>> bad_features2 = 0xa
>>>>
>>>>
>>>> Any idea ?
>>>>
>>>> Cheers,
>>>> rémi
>>>>
>>> _______________________________________________
>>> xfs mailing list
>>> xfs@oss.sgi.com
>>> http://oss.sgi.com/mailman/listinfo/xfs
>>>
>>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>


-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 17:44       ` Rémi Cailletaud
@ 2013-02-13 17:52         ` Eric Sandeen
  2013-02-13 18:09           ` Rémi Cailletaud
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Sandeen @ 2013-02-13 17:52 UTC (permalink / raw)
  To: Rémi Cailletaud; +Cc: xfs-oss

On 2/13/13 11:44 AM, Rémi Cailletaud wrote:
> Le 13/02/2013 18:39, Eric Sandeen a écrit :
>> On 2/13/13 11:27 AM, Rémi Cailletaud wrote:
>>> Le 13/02/2013 18:20, Eric Sandeen a écrit :
>>>> On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
>>>>> Hi,
>>>>>
>>>>> I face a strange and scary issue. I just grow a xfs filesystem (44To), and no way to mount it anymore :
>>>>> XFS: device supports only 4096 byte sectors (not 512)
>>>> Did you expand an LV made of 512-sector physical devices by adding 4k-sector physical devices?
>>> The three devices are ARECA 1880 card, but the last one was added later, and I never check for sector physical configuration on card configuration.
>>> But yes, running fdisk, it seems that sda and sdb are 512, and sdc is 4k... :(
>>>
>>>> that's probably not something we anticipate or check for....
>>>>
>>>> What sector size(s) are the actual lowest level disks under all the lvm pieces?
>> (re-cc'ing xfs list)
>>
>>> What command to run to get this info ?
>> IIRC,
>>
>> # blockdev --getpbsz --getss  /dev/sda
>>
>> to print the physical&  logical sector size
>>
>> You can also look at i.e.:
>> /sys/block/sda/queue/hw_sector_size
>> /sys/block/sda/queue/physical_block_size
>> /sys/block/sda/queue/logical_block_size
> ouch... nice guess :
> #  blockdev --getpbsz --getss  /dev/sda
> 512
> 512
> #  blockdev --getpbsz --getss  /dev/sdb
> 512
> 512
> #  blockdev --getpbsz --getss  /dev/sdc
> 4096
> 4096
> 
> 
>> I wonder what the recovery steps would be here.  I wouldn't do anything yet; I wish you hadn't already cleared the log, but oh well.
> 
> I tried a xfs_repair -L (as mentionned by xfs_check), but it early failed as show on my first post...

Ah, right.

>> So you grew it, that all worked ok, you were able to copy new data into the new space, you unmounted it, but now it won't mount, correct?
> I never was able to copy data to new space. I had an input/output error just after growing.
> may pmove-ing extents on 4k device on a 512k device be a solution ?

Did the filesystem grow actually work?

# xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
magicnum = 0x58465342
blocksize = 4096
dblocks = 10468982745 

That looks like it's (still?) a 38TiB/42TB filesystem, with:

sectsize = 512 

512 sectors.

How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.

At mount time it tries to set the sector size of the device; it's a hard-4k device, so setting it to 512 fails.

This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...

*if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.

Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.

First let's find out if the filesystem actually thinks it's living on the new space.

-Eric

> rémi


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 17:52         ` Eric Sandeen
@ 2013-02-13 18:09           ` Rémi Cailletaud
  2013-02-13 19:50             ` Eric Sandeen
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-13 18:09 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss

On 13/02/2013 18:52, Eric Sandeen wrote:
> On 2/13/13 11:44 AM, Rémi Cailletaud wrote:
>> Le 13/02/2013 18:39, Eric Sandeen a écrit :
>>> On 2/13/13 11:27 AM, Rémi Cailletaud wrote:
>>>> Le 13/02/2013 18:20, Eric Sandeen a écrit :
>>>>> On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I face a strange and scary issue. I just grow a xfs filesystem (44To), and no way to mount it anymore :
>>>>>> XFS: device supports only 4096 byte sectors (not 512)
>>>>> Did you expand an LV made of 512-sector physical devices by adding 4k-sector physical devices?
>>>> The three devices are ARECA 1880 card, but the last one was added later, and I never check for sector physical configuration on card configuration.
>>>> But yes, running fdisk, it seems that sda and sdb are 512, and sdc is 4k... :(
>>>>
>>>>> that's probably not something we anticipate or check for....
>>>>>
>>>>> What sector size(s) are the actual lowest level disks under all the lvm pieces?
>>> (re-cc'ing xfs list)
>>>
>>>> What command to run to get this info ?
>>> IIRC,
>>>
>>> # blockdev --getpbsz --getss  /dev/sda
>>>
>>> to print the physical&   logical sector size
>>>
>>> You can also look at i.e.:
>>> /sys/block/sda/queue/hw_sector_size
>>> /sys/block/sda/queue/physical_block_size
>>> /sys/block/sda/queue/logical_block_size
>> ouch... nice guess :
>> #  blockdev --getpbsz --getss  /dev/sda
>> 512
>> 512
>> #  blockdev --getpbsz --getss  /dev/sdb
>> 512
>> 512
>> #  blockdev --getpbsz --getss  /dev/sdc
>> 4096
>> 4096
>>
>>
>>> I wonder what the recovery steps would be here.  I wouldn't do anything yet; I wish you hadn't already cleared the log, but oh well.
>> I tried a xfs_repair -L (as mentionned by xfs_check), but it early failed as show on my first post...
> Ah, right.
>
>>> So you grew it, that all worked ok, you were able to copy new data into the new space, you unmounted it, but now it won't mount, correct?
>> I never was able to copy data to new space. I had an input/output error just after growing.
>> may pmove-ing extents on 4k device on a 512k device be a solution ?
> Did the filesystem grow actually work?
>
> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
> magicnum = 0x58465342
> blocksize = 4096
> dblocks = 10468982745
>
> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>
> sectsize = 512
>
> 512 sectors.
>
> How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.

It was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some
chance that it was never really grown.
> At mount time it tries to set the sector size of the device; its' a hard-4k device, so setting it to 512 fails.
>
> This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...
>
> *if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.
I don't know how to see or set the LV sector size either. It's 100% sure
that nothing was copied onto the 4k-sector space, and pretty sure that the fs
did not really grow.

> Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.
Sigh... no backup (44 TB is too large for us...)! I'm running a testdisk
recovery, but I'm not very confident of success...
Thanks for digging deeper into this...
> First let's find out if the filesystem actually thinks it's living on the new space.
How do I get it to tell me that?

Thanks again for your help !

rémi

> -Eric
>
>> rémi
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>


-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 18:09           ` Rémi Cailletaud
@ 2013-02-13 19:50             ` Eric Sandeen
  2013-02-13 20:12               ` Eric Sandeen
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Sandeen @ 2013-02-13 19:50 UTC (permalink / raw)
  To: Rémi Cailletaud; +Cc: xfs-oss

On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
> Le 13/02/2013 18:52, Eric Sandeen a écrit :

<snip>

>> Did the filesystem grow actually work?
>>
>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>> magicnum = 0x58465342
>> blocksize = 4096
>> dblocks = 10468982745
>>
>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>
>> sectsize = 512
>>
>> 512 sectors.
>>
>> How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.
> 
> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance that it was never really growed.
>> At mount time it tries to set the sector size of the device; its' a hard-4k device, so setting it to 512 fails.
>>
>> This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...
>>
>> *if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.
> I dont either know how to see nor set LV sector size.  It's 100% sure that anything was copied on 4k sector size, and pretty sure that the fs did not really grow.

I think the same blockdev command will tell you.
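e.g. (an untested guess, same idea run on the LV device itself):

# blockdev --getpbsz --getss /dev/vg0/tomo-201111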

 
>> Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.
> Sigh... No backup (44To is too large for us...) ! I'm running a testdisk recover, but I'm not very confident about success...
> Thanks to deeper investigate this...
>> First let's find out if the filesystem actually thinks it's living on the new space.
> What is the way to make it talk about that ?

well, you have 10468982745 4k blocks in your filesystem, so 42880953323520 bytes of xfs filesystem.

Look at your lvm layout: does that extend into the new disk space, or is it confined to the original disk space?
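
Something like this should show the segment layout in bytes (from memory,
so double-check the option names):

# lvs --units b -o lv_name,seg_start,seg_size,devices vg0
or
# lvdisplay --maps /dev/vg0/tomo-201111

Then you can see whether anything below the 42880953323520-byte mark lands
on /dev/sdc.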

-Eric

> Thanks again for your help !
> 
> rémi
> 
>> -Eric
>>
>>> rémi
>>
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
>>
> 
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 19:50             ` Eric Sandeen
@ 2013-02-13 20:12               ` Eric Sandeen
  2013-02-13 21:18                 ` Rémi Cailletaud
  2013-02-13 21:38                 ` Eric Sandeen
  0 siblings, 2 replies; 15+ messages in thread
From: Eric Sandeen @ 2013-02-13 20:12 UTC (permalink / raw)
  To: Rémi Cailletaud; +Cc: xfs-oss

On 2/13/13 1:50 PM, Eric Sandeen wrote:
> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>> Le 13/02/2013 18:52, Eric Sandeen a écrit :
> 
> <snip>
> 
>>> Did the filesystem grow actually work?
>>>
>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>> magicnum = 0x58465342
>>> blocksize = 4096
>>> dblocks = 10468982745
>>>
>>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>>
>>> sectsize = 512
>>>
>>> 512 sectors.
>>>
>>> How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.
>>
>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance that it was never really growed.
>>> At mount time it tries to set the sector size of the device; its' a hard-4k device, so setting it to 512 fails.
>>>
>>> This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...
>>>
>>> *if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.
>> I dont either know how to see nor set LV sector size.  It's 100% sure that anything was copied on 4k sector size, and pretty sure that the fs did not really grow.
> 
> I think the same blockdev command will tell you.
> 
>  
>>> Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.
>> Sigh... No backup (44To is too large for us...) ! I'm running a testdisk recover, but I'm not very confident about success...
>> Thanks to deeper investigate this...
>>> First let's find out if the filesystem actually thinks it's living on the new space.
>> What is the way to make it talk about that ?
> 
> well, you have 10468982745 4k blocks in your filesystem, so 42880953323520 bytes of xfs filesystem.
> 
> Look at your lvm layout, does that extend into the new disk space or is it confined to the original disk space?

The lvm folks I talked to say that if you remove the 4k device from the lvm volume it should switch back to 512 sectors.

so if you can convince yourself that 42880953323520 bytes does not cross into the newly added disk space, just remove it again, and everything should be happy.
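
(Very roughly -- and please check with the lvm folks before running anything --
that removal would look something like:

# lvreduce -l -<number_of_extents_on_sdc> /dev/vg0/tomo-201111
# vgreduce vg0 /dev/sdc

where <number_of_extents_on_sdc> is a placeholder for whatever the lvm layout
says is allocated on the new disk; get that count wrong and you shrink the LV
into your data, which is exactly what must not happen.)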

Unless your rash decision to start running "testdisk" made things worse ;)

-Eric

> -Eric
> 
>> Thanks again for your help !
>>
>> rémi
>>
>>> -Eric
>>>
>>>> rémi

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 20:12               ` Eric Sandeen
@ 2013-02-13 21:18                 ` Rémi Cailletaud
  2013-02-13 21:38                 ` Eric Sandeen
  1 sibling, 0 replies; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-13 21:18 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss


Eric Sandeen <sandeen@sandeen.net> wrote:

>On 2/13/13 1:50 PM, Eric Sandeen wrote:
>> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>>> Le 13/02/2013 18:52, Eric Sandeen a écrit :
>> 
>> <snip>
>> 
>>>> Did the filesystem grow actually work?
>>>>
>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>> magicnum = 0x58465342
>>>> blocksize = 4096
>>>> dblocks = 10468982745
>>>>
>>>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>>>
>>>> sectsize = 512
>>>>
>>>> 512 sectors.
>>>>
>>>> How big was it before you tried to grow it, and how much did you
>try to grow it by?  Maybe the size never changed.
>>>
>>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some
>chance that it was never really growed.
>>>> At mount time it tries to set the sector size of the device; its' a
>hard-4k device, so setting it to 512 fails.
>>>>
>>>> This may be as much of an LVM issue as anything; how do you get the
>LVM device back to something with 512-byte logical sectors?  I have no
>idea...
>>>>
>>>> *if* the fs didn't actually grow, and if the new 4k-sector space is
>not used by the filesystem, and if you can somehow remove that new
>space from the device and set the LV back to 512 sectors, you might be
>in good shape.
>>> I dont either know how to see nor set LV sector size.  It's 100%
>sure that anything was copied on 4k sector size, and pretty sure that
>the fs did not really grow.
>> 
>> I think the same blockdev command will tell you.
>> 
>>  
>>>> Proceed with extreme caution here, I wouldn't start just trying
>random things unless you have some other way to get your data back
>(backups?).  I'd check with LVM folks as well, and maybe see if
>dchinner or the sgi folks have other suggestions.
>>> Sigh... No backup (44To is too large for us...) ! I'm running a
>testdisk recover, but I'm not very confident about success...
>>> Thanks to deeper investigate this...
>>>> First let's find out if the filesystem actually thinks it's living
>on the new space.
>>> What is the way to make it talk about that ?
>> 
>> well, you have 10468982745 4k blocks in your filesystem, so
>42880953323520 bytes of xfs filesystem.
>> 
>> Look at your lvm layout, does that extend into the new disk space or
>is it confined to the original disk space?
>
>lvm folks I talk to say that if you remove the 4k device from the lvm
>volume it should switch back to 512 sectors.
>
>so if you can can convince yourself that 42880953323520 bytes does not
>cross into the newly added disk space, just remove it again, and
>everything should be happy.
>

OK, I'll check that tomorrow and try to remove the added space if my fs does not live on it...

>Unless your rash decision to start running "testdisk" made things worse
>;)

It only analyses, it does not modify anything... it should not have any effect...

Thx again,

rémi

>-Eric
>
>> -Eric
>> 
>>> Thanks again for your help !
>>>
>>> rémi
>>>
>>>> -Eric
>>>>
>>>>> rémi
>
>_______________________________________________
>xfs mailing list
>xfs@oss.sgi.com
>http://oss.sgi.com/mailman/listinfo/xfs


-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 20:12               ` Eric Sandeen
  2013-02-13 21:18                 ` Rémi Cailletaud
@ 2013-02-13 21:38                 ` Eric Sandeen
  2013-02-14  8:21                   ` Rémi Cailletaud
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Sandeen @ 2013-02-13 21:38 UTC (permalink / raw)
  To: Rémi Cailletaud; +Cc: xfs-oss

On 2/13/13 2:12 PM, Eric Sandeen wrote:
> On 2/13/13 1:50 PM, Eric Sandeen wrote:
>> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>>> Le 13/02/2013 18:52, Eric Sandeen a écrit :
>>
>> <snip>
>>
>>>> Did the filesystem grow actually work?
>>>>
>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>> magicnum = 0x58465342
>>>> blocksize = 4096
>>>> dblocks = 10468982745
>>>>
>>>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>>>
>>>> sectsize = 512
>>>>
>>>> 512 sectors.
>>>>
>>>> How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.
>>>
>>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance that it was never really growed.
>>>> At mount time it tries to set the sector size of the device; its' a hard-4k device, so setting it to 512 fails.
>>>>
>>>> This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...
>>>>
>>>> *if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.
>>> I dont either know how to see nor set LV sector size.  It's 100% sure that anything was copied on 4k sector size, and pretty sure that the fs did not really grow.
>>
>> I think the same blockdev command will tell you.
>>
>>  
>>>> Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.
>>> Sigh... No backup (44To is too large for us...) ! I'm running a testdisk recover, but I'm not very confident about success...
>>> Thanks to deeper investigate this...
>>>> First let's find out if the filesystem actually thinks it's living on the new space.
>>> What is the way to make it talk about that ?
>>
>> well, you have 10468982745 4k blocks in your filesystem, so 42880953323520 bytes of xfs filesystem.
>>
>> Look at your lvm layout, does that extend into the new disk space or is it confined to the original disk space?
> 
> lvm folks I talk to say that if you remove the 4k device from the lvm volume it should switch back to 512 sectors.
> 
> so if you can can convince yourself that 42880953323520 bytes does not cross into the newly added disk space, just remove it again, and everything should be happy.
> 
> Unless your rash decision to start running "testdisk" made things worse ;)

I tested this.  I had a PV on a normal 512 device, then used scsi_debug to create a 4k device.

I created an LV on the 512 device & mounted it, then added the 4k device as you did.  growfs failed immediately, and the device won't remount due to the sector size change.

I verified that removing the 4k device from the LV changes the LV back to a 512 sector size.

However, I'm not 100% sure how to remove just the 4K PV; when I did it, I did something wrong and it reduced the size of my LV to the point where it corrupted the filesystem.  :)  Perhaps you are a better lvm admin than I am.

But in any case - if you know how to safely remove ONLY the 4k device from the LV, you should be in good shape again.

-Eric


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-13 21:38                 ` Eric Sandeen
@ 2013-02-14  8:21                   ` Rémi Cailletaud
  2013-02-14  9:39                     ` Rémi Cailletaud
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-14  8:21 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss

On 13/02/2013 22:38, Eric Sandeen wrote:
> On 2/13/13 2:12 PM, Eric Sandeen wrote:
>> On 2/13/13 1:50 PM, Eric Sandeen wrote:
>>> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>>>> Le 13/02/2013 18:52, Eric Sandeen a écrit :
>>> <snip>
>>>
>>>>> Did the filesystem grow actually work?
>>>>>
>>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>>> magicnum = 0x58465342
>>>>> blocksize = 4096
>>>>> dblocks = 10468982745
>>>>>
>>>>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>>>>
>>>>> sectsize = 512
>>>>>
>>>>> 512 sectors.
>>>>>
>>>>> How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.
>>>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance that it was never really growed.
>>>>> At mount time it tries to set the sector size of the device; its' a hard-4k device, so setting it to 512 fails.
>>>>>
>>>>> This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...
>>>>>
>>>>> *if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.
>>>> I dont either know how to see nor set LV sector size.  It's 100% sure that anything was copied on 4k sector size, and pretty sure that the fs did not really grow.
>>> I think the same blockdev command will tell you.
>>>
>>>
>>>>> Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.
>>>> Sigh... No backup (44To is too large for us...) ! I'm running a testdisk recover, but I'm not very confident about success...
>>>> Thanks to deeper investigate this...
>>>>> First let's find out if the filesystem actually thinks it's living on the new space.
>>>> What is the way to make it talk about that ?
>>> well, you have 10468982745 4k blocks in your filesystem, so 42880953323520 bytes of xfs filesystem.
>>>
>>> Look at your lvm layout, does that extend into the new disk space or is it confined to the original disk space?
It seems it does not: the lvm map shows 48378494844928 bytes in total, with 1304432738304 on
the 4K device.
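(If my arithmetic is right: 48378494844928 - 1304432738304 = 47074062106624
bytes sit before the 4K segment, which is more than the 42880953323520 bytes
of filesystem, so the fs should end well before the new space -- assuming the
sdc segment really is the last one in the map.)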

>> lvm folks I talk to say that if you remove the 4k device from the lvm volume it should switch back to 512 sectors.
>>
>> so if you can can convince yourself that 42880953323520 bytes does not cross into the newly added disk space, just remove it again, and everything should be happy.
>>
>> Unless your rash decision to start running "testdisk" made things worse ;)
> I tested this.  I had a PV on a normal 512 device, then used scsi_debug to create a 4k device.
>
> I created an LV on the 512 device&  mounted it, then added the 4k device as you did.  growfs failed immediately, and the device won't remount due to the sector size change.
>
> I verified that removing the 4k device from the LV changes the LV back to a 512 sector size.
>
> However, I'm not 100% sure how to remove just the 4K PV; when I did it, I did something wrong and it reduced the size of my LV to the point where it corrupted the filesystem.  :)  Perhaps you are a better lvm admin than I am.
How did you remove the PV? I would tend to use vgreduce, but I'm a bit
(a lot, in fact) scared of fs corruption. That's why I was wondering
about pvmove'ing the extents onto a 512-byte-sector device.
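(Something like "# pvmove /dev/sdc /dev/sdb", I suppose, assuming there are
enough free extents left on the 512-byte devices -- but I have no idea whether
pvmove is safe while the sector sizes are mismatched, so I'd rather ask the
lvm list first.)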

rémi

> But in any case - if you know how to safely remove ONLY the 4k device from the LV, you should be in good shape again.
>
> -Eric
>
>
>


-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-14  8:21                   ` Rémi Cailletaud
@ 2013-02-14  9:39                     ` Rémi Cailletaud
       [not found]                       ` <511CFC06.2030103@3sr-grenoble.fr>
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-14  9:39 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss

On 14/02/2013 09:21, Rémi Cailletaud wrote:
> Le 13/02/2013 22:38, Eric Sandeen a écrit :
>> On 2/13/13 2:12 PM, Eric Sandeen wrote:
>>> On 2/13/13 1:50 PM, Eric Sandeen wrote:
>>>> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>>>>> Le 13/02/2013 18:52, Eric Sandeen a écrit :
>>>> <snip>
>>>>
>>>>>> Did the filesystem grow actually work?
>>>>>>
>>>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>>>> magicnum = 0x58465342
>>>>>> blocksize = 4096
>>>>>> dblocks = 10468982745
>>>>>>
>>>>>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>>>>>
>>>>>> sectsize = 512
>>>>>>
>>>>>> 512 sectors.
>>>>>>
>>>>>> How big was it before you tried to grow it, and how much did you 
>>>>>> try to grow it by?  Maybe the size never changed.
>>>>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is 
>>>>> some chance that it was never really growed.
>>>>>> At mount time it tries to set the sector size of the device; its' 
>>>>>> a hard-4k device, so setting it to 512 fails.
>>>>>>
>>>>>> This may be as much of an LVM issue as anything; how do you get 
>>>>>> the LVM device back to something with 512-byte logical sectors?  
>>>>>> I have no idea...
>>>>>>
>>>>>> *if* the fs didn't actually grow, and if the new 4k-sector space 
>>>>>> is not used by the filesystem, and if you can somehow remove that 
>>>>>> new space from the device and set the LV back to 512 sectors, you 
>>>>>> might be in good shape.
>>>>> I dont either know how to see nor set LV sector size.  It's 100% 
>>>>> sure that anything was copied on 4k sector size, and pretty sure 
>>>>> that the fs did not really grow.
>>>> I think the same blockdev command will tell you.
>>>>
>>>>
>>>>>> Proceed with extreme caution here, I wouldn't start just trying 
>>>>>> random things unless you have some other way to get your data 
>>>>>> back (backups?).  I'd check with LVM folks as well, and maybe see 
>>>>>> if dchinner or the sgi folks have other suggestions.
>>>>> Sigh... No backup (44To is too large for us...) ! I'm running a 
>>>>> testdisk recover, but I'm not very confident about success...
>>>>> Thanks to deeper investigate this...
>>>>>> First let's find out if the filesystem actually thinks it's 
>>>>>> living on the new space.
>>>>> What is the way to make it talk about that ?
>>>> well, you have 10468982745 4k blocks in your filesystem, so 
>>>> 42880953323520 bytes of xfs filesystem.
>>>>
>>>> Look at your lvm layout, does that extend into the new disk space 
>>>> or is it confined to the original disk space?
> Seems it does not : lvm map shows 48378494844928 bytes, 1304432738304 
> on the 4K device.
>
>>> lvm folks I talk to say that if you remove the 4k device from the 
>>> lvm volume it should switch back to 512 sectors.
>>>
>>> so if you can can convince yourself that 42880953323520 bytes does 
>>> not cross into the newly added disk space, just remove it again, and 
>>> everything should be happy.
>>>
>>> Unless your rash decision to start running "testdisk" made things 
>>> worse ;)
>> I tested this.  I had a PV on a normal 512 device, then used 
>> scsi_debug to create a 4k device.
>>
>> I created an LV on the 512 device&  mounted it, then added the 4k 
>> device as you did.  growfs failed immediately, and the device won't 
>> remount due to the sector size change.
>>
>> I verified that removing the 4k device from the LV changes the LV 
>> back to a 512 sector size.
>>
>> However, I'm not 100% sure how to remove just the 4K PV; when I did 
>> it, I did something wrong and it reduced the size of my LV to the 
>> point where it corrupted the filesystem.  :)  Perhaps you are a 
>> better lvm admin than I am.
> How did you remove the pv ? I would tend to use vgreduce, but I'm a 
> bit (a lot, in fact) scary about fs corruption. That's why I was 
> wondering about pvmove'ing extents on a 512K device
Or would a vgcfgrestore be safer? Should I ask the lvm folks?

rémi

>
> rémi
>
>> But in any case - if you know how to safely remove ONLY the 4k device 
>> from the LV, you should be in good shape again.
>>
>> -Eric
>>
>>
>>
>
>


-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
       [not found]                         ` <511CFCBD.504@sandeen.net>
@ 2013-02-14 16:37                           ` Rémi Cailletaud
  2013-02-14 18:34                             ` Eric Sandeen
  0 siblings, 1 reply; 15+ messages in thread
From: Rémi Cailletaud @ 2013-02-14 16:37 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss

On 14/02/2013 16:03, Eric Sandeen wrote:
> On 2/14/13 9:00 AM, Rémi Cailletaud wrote:
>> Le 14/02/2013 10:39, Rémi Cailletaud a écrit :
>>> Le 14/02/2013 09:21, Rémi Cailletaud a écrit :
>>>> Le 13/02/2013 22:38, Eric Sandeen a écrit :
>>>>> On 2/13/13 2:12 PM, Eric Sandeen wrote:
>>>>>> On 2/13/13 1:50 PM, Eric Sandeen wrote:
>>>>>>> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>>>>>>>> Le 13/02/2013 18:52, Eric Sandeen a écrit :
>>>>>>> <snip>
>>>>>>>
>>>>>>>>> Did the filesystem grow actually work?
>>>>>>>>>
>>>>>>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>>>>>>> magicnum = 0x58465342
>>>>>>>>> blocksize = 4096
>>>>>>>>> dblocks = 10468982745
>>>>>>>>>
>>>>>>>>> That looks like it's (still?) a 38TiB/42TB filesystem, with:
>>>>>>>>>
>>>>>>>>> sectsize = 512
>>>>>>>>>
>>>>>>>>> 512 sectors.
>>>>>>>>>
>>>>>>>>> How big was it before you tried to grow it, and how much did you try to grow it by?  Maybe the size never changed.
>>>>>>>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance that it was never really growed.
>>>>>>>>> At mount time it tries to set the sector size of the device; its' a hard-4k device, so setting it to 512 fails.
>>>>>>>>>
>>>>>>>>> This may be as much of an LVM issue as anything; how do you get the LVM device back to something with 512-byte logical sectors?  I have no idea...
>>>>>>>>>
>>>>>>>>> *if* the fs didn't actually grow, and if the new 4k-sector space is not used by the filesystem, and if you can somehow remove that new space from the device and set the LV back to 512 sectors, you might be in good shape.
>>>>>>>> I dont either know how to see nor set LV sector size.  It's 100% sure that anything was copied on 4k sector size, and pretty sure that the fs did not really grow.
>>>>>>> I think the same blockdev command will tell you.
>>>>>>>
>>>>>>>
>>>>>>>>> Proceed with extreme caution here, I wouldn't start just trying random things unless you have some other way to get your data back (backups?).  I'd check with LVM folks as well, and maybe see if dchinner or the sgi folks have other suggestions.
>>>>>>>> Sigh... No backup (44To is too large for us...) ! I'm running a testdisk recover, but I'm not very confident about success...
>>>>>>>> Thanks to deeper investigate this...
>>>>>>>>> First let's find out if the filesystem actually thinks it's living on the new space.
>>>>>>>> What is the way to make it talk about that ?
>>>>>>> well, you have 10468982745 4k blocks in your filesystem, so 42880953323520 bytes of xfs filesystem.
>>>>>>>
>>>>>>> Look at your lvm layout, does that extend into the new disk space or is it confined to the original disk space?
>>>> Seems it does not : lvm map shows 48378494844928 bytes, 1304432738304 on the 4K device.
>>>>
>>>>>> lvm folks I talk to say that if you remove the 4k device from the lvm volume it should switch back to 512 sectors.
>>>>>>
>>>>>> so if you can can convince yourself that 42880953323520 bytes does not cross into the newly added disk space, just remove it again, and everything should be happy.
>>>>>>
>>>>>> Unless your rash decision to start running "testdisk" made things worse ;)
>>>>> I tested this.  I had a PV on a normal 512 device, then used scsi_debug to create a 4k device.
>>>>>
>>>>> I created an LV on the 512 device&   mounted it, then added the 4k device as you did.  growfs failed immediately, and the device won't remount due to the sector size change.
>>>>>
>>>>> I verified that removing the 4k device from the LV changes the LV back to a 512 sector size.
>>>>>
>>>>> However, I'm not 100% sure how to remove just the 4K PV; when I did it, I did something wrong and it reduced the size of my LV to the point where it corrupted the filesystem.  :)  Perhaps you are a better lvm admin than I am.
>>>> How did you remove the pv ? I would tend to use vgreduce, but I'm a bit (a lot, in fact) scary about fs corruption. That's why I was wondering about pvmove'ing extents on a 512K device
>>> Or may a vgcfgrestore be safer ? Should I ask lvm folks ?
>> I tried a test as you suggested, using scsi_debug.
>>
>> 2 PVs, one with 512-byte and one with 4096-byte sectors.
>>
>> After adding the 4K device, growfs fails and the partition won't remount. I tried a vgcfgrestore and vgreduce, but it does not mount: same error...
>> XFS (dm-5): device supports 4096 byte sectors (not 512)
> In that case I think you must not have actually (completely?) removed the 4k device.

That's it! After vgchange-ing it unavailable and then available again, the LV mounts!

The following steps reproduce the "bug", assuming we already have one PV
on /dev/sdc (a 512-byte-sector device):

- create a virtual 4k scsi device and create a PV on it:
# modprobe scsi_debug sector_size=4096 dev_size_mb=256
# pvcreate /dev/sdd

- create a VG with both PVs, and create an LV on sdc only (I specified the exact
extent count of sdc):
# vgcreate vgtest  /dev/sdc /dev/sdd
# lvcreate -n lvtest -l 3759 vgtest

- mkfs, mount:
# mkfs.xfs /dev/vgtest/lvtest
# mount /dev/vgtest/lvtest /mnt/tmp

- the bad thing: lvextend and growfs (shouldn't lvm or xfs check this
sector size stuff?):
# lvextend -l+40 /dev/vgtest/lvtest
# xfs_growfs /mnt/tmp
(fail with xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed)

- the scary part:
# umount /mnt/tmp
# mount /dev/vgtest/lvtest /mnt/tmp
mount: function not implemented
# tail -1 /var/log/messages
Feb 14 16:41:52 hamaika kernel: [  481.055422] XFS (dm-5): device 
supports 4096 byte sectors (not 512)

- the huge relief (restoring the config from *before* the lvextend):
# vgcfgrestore -f /etc/lvm/archive/vgtest_00029-84595486.vg vgtest
# vgchange -a n vgtest
# vgchange -a y vgtest
# mount /dev/vgtest/lvtest /mnt/tmp

Yippee, my data is back!!


Should I submit a bug report? Against LVM, XFS, or both?

Anyway, many thanks for your help... I learned some stuff about lvm
and xfs today ;)
Cheers,

rémi

>
> I think you'll need to seek help from LVM people in order to proceed...  I'm sure it's possible to safely and completely remove the newly added space, but I don't know how.
>
> -Eric
>
>


-- 
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@3sr-grenoble.fr
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: problem after growing
  2013-02-14 16:37                           ` Rémi Cailletaud
@ 2013-02-14 18:34                             ` Eric Sandeen
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Sandeen @ 2013-02-14 18:34 UTC (permalink / raw)
  To: Rémi Cailletaud; +Cc: xfs-oss

On 2/14/13 10:37 AM, Rémi Cailletaud wrote:
> Le 14/02/2013 16:03, Eric Sandeen a écrit :


<big snip>

>>> after adding the 4K device, growfs fail, and partition wont remount. I tried a vgcfgrestore and vgreduce, but it does not mount : same error...
>>> XFS (dm-5): device supports 4096 byte sectors (not 512)
>> In that case I think you must not have actually (completely?) removed the 4k device.
> 
> That's it! After vgchange-ing un/available, lv mounts !
> 
> Following steps reproduce the "bug", considering we already have one pv on /dev/sdc (512 sectors device) :
> 
> - create a virtual 4k scsi device and create a pv on it :
> # modprobe scsi_debug sector_size=4096 dev_size_mb=256
> # pvcreate /dev/sdd
> 
> - create a vg with both pv, and create an lv on sdc (I specified exact extents count of sdc) :
> # vgcreate vgtest  /dev/sdc /dev/sdd
> # lvcreate -n lvtest -l 3759 vgtest
> 
> - mkfs, mount :
> # mkfs.xfs /dev/vgtest/lvtest
> # mount /dev/vgtest/lvtest /mnt/tmp
> 
> - the bad thing : lvextend and growfs (should not lvm or xfs check this sector size stuff ?):

xfs does check, to some degree:

> # lvextend -l+40 /dev/vgtest/lvtest
> # xfs_growfs /mnt/tmp
> (fail with xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed)

^^^ see?  ;)  It could maybe be more explicit, but xfs is already in trouble by this point (before growfs, it still won't be remountable).  There is no opportunity for xfs to catch this damage before it's done.

Yes, I think lvm should check before allowing the change.

> 
> - the scary part :
> # umount /mnt/tmp
> # mount /dev/vgtest/lvtest /mnt/tmp
> mount: function not implemented
> # tail -1 /var/log/messages
> Feb 14 16:41:52 hamaika kernel: [  481.055422] XFS (dm-5): device supports 4096 byte sectors (not 512)
> 
> - the huge relief (restoring *before* lvextend) :
> # vgcfgrestore -f /etc/lvm/archive/vgtest_00029-84595486.vg vgtest
> # vgchange -a n vgtest
> # vgchange -a y vgtest
> # mount /dev/vgtest/lvtest /mnt/tmp
> 
> yippee, my datas are back !!

cool.

> 
> Should I submit a bug report ? On LVM, XFS, both ?

I don't know what xfs could have done here.  Even if you didn't growfs, by the time you did lvextend xfs wouldn't have been able to remount.  I think it's up to lvm to protect the user from this, personally, so a bug report there seems warranted.

> However, a great thanks for your help... I learned some stuff about lvm and xfs today ;)

You're welcome, very glad you got it back.
Thank my employer Red Hat for paying me to work on this stuff, too ;)

-Eric

> Cheers,
> 
> rémi
> 
>>
>> I think you'll need to seek help from LVM people in order to proceed...  I'm sure it's possible to safely and completely remove the newly added space, but I don't know how.
>>
>> -Eric
>>
>>
> 
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2013-02-14 18:35 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-02-13 17:04 problem after growing Rémi Cailletaud
2013-02-13 17:20 ` Eric Sandeen
2013-02-13 17:27   ` Rémi Cailletaud
2013-02-13 17:39     ` Eric Sandeen
2013-02-13 17:44       ` Rémi Cailletaud
2013-02-13 17:52         ` Eric Sandeen
2013-02-13 18:09           ` Rémi Cailletaud
2013-02-13 19:50             ` Eric Sandeen
2013-02-13 20:12               ` Eric Sandeen
2013-02-13 21:18                 ` Rémi Cailletaud
2013-02-13 21:38                 ` Eric Sandeen
2013-02-14  8:21                   ` Rémi Cailletaud
2013-02-14  9:39                     ` Rémi Cailletaud
     [not found]                       ` <511CFC06.2030103@3sr-grenoble.fr>
     [not found]                         ` <511CFCBD.504@sandeen.net>
2013-02-14 16:37                           ` Rémi Cailletaud
2013-02-14 18:34                             ` Eric Sandeen
