* Re: Unable to mount a XFS filesystem
@ 2016-05-09 10:32 issa-gorissen
  2016-05-09 12:17 ` Carlos E. R.
  0 siblings, 1 reply; 6+ messages in thread
From: issa-gorissen @ 2016-05-09 10:32 UTC (permalink / raw)
  To: XFS mail list

Received: 09:18 PM CEST, 05/08/2016
From: "Carlos E. R." <robin.listas@telefonica.net>

> 
> How did you do that upgrade? Zypper dup, or boot dvd, choose upgrade?
> 
In fact it was not a "real" upgrade. I added a new boot disk on which I
installed Tumbleweed as a new OS. The MD RAID disks were kept in the computer.
It seems the install did something to it I don't know about.


> Me, I would try to find out if the array is readable:
> 
> dd if=/dev/md0 of=/dev/null
> 

The array is fine, but somehow the md partitions have been shrunk and the
superblock moved. I don't know how. The superblock version is 1.0, which is
stored at the end of the partition, so the XFS filesystem starts at the start
of the partition. If after the new OS install I cannot mount XFS anymore, and,
as Dave said, XFS mount will try to read the end of the filesystem, then the
end of the partition must have changed.
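Since a v1.0 md superblock lives at the end of each member partition, the filesystem data should still begin at sector 0 of the raw partition. A quick check (a sketch; /dev/sdb1 is this thread's member partition, substitute your own) is to look for the XFS magic number at offset 0:

```shell
# The XFS superblock magic 0x58465342 is the ASCII string "XFSB"
# (\x escapes work in bash and GNU printf):
printf '\x58\x46\x53\x42\n'
# With a v1.0 md superblock (stored at the END of the member partition),
# those four bytes should appear at offset 0 of the raw partition:
#   dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -n 1
# expected start of output: 00000000  58 46 53 42 ...  |XFSB...|
```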

Thx

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Unable to mount a XFS filesystem
  2016-05-09 10:32 Unable to mount a XFS filesystem issa-gorissen
@ 2016-05-09 12:17 ` Carlos E. R.
  0 siblings, 0 replies; 6+ messages in thread
From: Carlos E. R. @ 2016-05-09 12:17 UTC (permalink / raw)
  To: XFS mail list



On 2016-05-09 12:32, issa-gorissen@usa.net wrote:
> Received: 09:18 PM CEST, 05/08/2016
> From: "Carlos E. R." <>
> 
>>
>> How did you do that upgrade? Zypper dup, or boot dvd, choose upgrade?
>>
> In fact it was not a "real" upgrade. I added a new boot disk on which I
> installed Tumbleweed as a new OS. The MD RAID disks were kept in the computer.
> It seems the install did something to it I don't know about.

Ah, then it is a fresh install, inheriting the data disks.

Then you probably still have the old boot disk. Perhaps you could boot from
it and find out if it can still mount the raid filesystem. If so, perhaps
the data can be rescued. If you don't have a spare disk for a backup, you
might remove one side of the mirror and create the backup there, then
recreate the raid in TW.

If you don't have the old boot disk, you can create boot media on a
USB stick. You can use the 13.2 Rescue CD image:

http://download.opensuse.org/distribution/13.2/iso/openSUSE-13.2-Rescue-CD-x86_64.iso

just cp it to the raw USB device and boot from it.
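For the cp-to-the-raw-device step, dd works as well and lets you flush explicitly. A sketch (the scratch file below stands in for the real stick so the commands are safe to try; point of= at the actual device, e.g. /dev/sdX, only after double-checking it with lsblk, since writing to the wrong device is destructive):

```shell
# Stand-in for the downloaded ISO (1 MiB of random data):
head -c 1M /dev/urandom > rescue.iso
# Write it out raw, as you would to the USB stick; conv=fsync flushes
# the data to the target before dd exits:
dd if=rescue.iso of=usb-stand-in.img bs=4M conv=fsync 2>/dev/null
# Verify the image went over verbatim:
cmp rescue.iso usb-stand-in.img && echo "image written verbatim"
```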


But leave that procedure as a last resort; wait for more comments or
ideas ;-)


>> Me, I would try to find out if the array is readable:
>>
>> dd if=/dev/md0 of=/dev/null
>>
> 
> The array is fine, but somehow the md partitions have been shrunk and the
> superblock moved. I don't know how. The superblock version is 1.0, which is
> stored at the end of the partition, so the XFS filesystem starts at the start
> of the partition. If after the new OS install I cannot mount XFS anymore,
> and, as Dave said, XFS mount will try to read the end of the filesystem,
> then the end of the partition must have changed.

Yes, I read his post. My idea was wrong, then.


-- 
Cheers / Saludos,

		Carlos E. R.
		(from 13.1 x86_64 "Bottle" at Telcontar)



* Re: Unable to mount a XFS filesystem
@ 2016-05-09 10:26 issa-gorissen
  0 siblings, 0 replies; 6+ messages in thread
From: issa-gorissen @ 2016-05-09 10:26 UTC (permalink / raw)
  To: xfs

------ Original Message ------
Received: 10:51 PM CEST, 05/08/2016
From: Dave Chinner <david@fromorbit.com>

> Yup, the kernel also emits the same error on the same check - most
> likely during your upgrade the MD RAID device has changed size and
> is now 112 sectors smaller than before, hence the filesystem will
> refuse to mount.
> 
> Unlikely to be an XFS problem, more likely an MD device/upgrade issue.


Thanks, Dave, for the pointer.

As I don't have much experience debugging XFS or MD RAID, I just took a
shortcut.

Your input helped a little.

I tried to resize the MD partition (to recover the missing sectors XFS was
complaining about), but as the MD superblock is at the end of my partition,
mdadm could not find the superblock after the resize, and I did not want to
spend time trying to move the superblock along with the partition resize
(I don't know if this is feasible).

On one of the two disks, I could mount the resized MD partition as an XFS
filesystem after running xfs_repair on it. The folder structure is lost, as
everything ended up in random folders in lost+found, but it seems the files
are there.

So I will create a new MD RAID from the disk I could mount.

It seems the openSuse Tumbleweed installer messed up my md raid
partitions! :-(

Thx,
--
Issa


* Re: Unable to mount a XFS filesystem
  2016-05-08 14:12 Issa Gorissen
  2016-05-08 19:18 ` Carlos E. R.
@ 2016-05-08 20:51 ` Dave Chinner
  1 sibling, 0 replies; 6+ messages in thread
From: Dave Chinner @ 2016-05-08 20:51 UTC (permalink / raw)
  To: Issa Gorissen; +Cc: xfs

On Sun, May 08, 2016 at 04:12:59PM +0200, Issa Gorissen wrote:
> Hello,
> 
> After upgrading my htpc from openSuse 12.3 running kernel
> 3.7.10 to openSuse Tumbleweed running kernel 4.5.2, I am unable to
> mount my XFS filesystem anymore.
....
>  Avail Dev Size : 5860529896 (2794.52 GiB 3000.59 GB)
>      Array Size : 2930264896 (2794.52 GiB 3000.59 GB)
>   Used Dev Size : 5860529792 (2794.52 GiB 3000.59 GB)
>    Super Offset : 5860530160 sectors
>    Unused Space : before=0 sectors, after=344 sectors

Both devices are reporting 344 unused sectors, which may be
important, because....

.....
> tv:/ # xfs_db /dev/md0
> xfs_db: error - read only 0 of 512 bytes

... something is wrong with the RAID device for this error to be
emitted. XFS checks that it can access the last sector of the
filesystem before it starts using it, and this error indicates that
the read landed beyond the end of the device.

> tv:/ # mount -t xfs /dev/md0 /data
> mount: /dev/md0: can't read superblock
> 
> dmesg outputs
> 
> 
> [ 5525.861750] SGI XFS with ACLs, security attributes, realtime, no
> debug enabled
> [ 5525.862231] attempt to access beyond end of device
> [ 5525.862232] md0: rw=32, want=5860529904, limit=5860529792
> [ 5525.862234] XFS (md0): last sector read failed

Yup, the kernel also emits the same error on the same check - most
likely during your upgrade the MD RAID device has changed size and
is now 112 sectors smaller than before, hence the filesystem will
refuse to mount.
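The mismatch can be cross-checked against the superblock values in the original report (dblocks = 732566238, blocksize = 4096): converting filesystem blocks to 512-byte sectors reproduces the "want=" figure from dmesg exactly:

```shell
# Sectors the filesystem expects (dblocks * blocksize / sector size):
echo $(( 732566238 * 4096 / 512 ))   # -> 5860529904, dmesg's "want="
# Shortfall against the shrunken md device, per dmesg's "limit=":
echo $(( 5860529904 - 5860529792 ))  # -> 112 sectors short
```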

> Any pointers to fix this? It seems the disks still contain an XFS
> filesystem but for some reason I cannot access it.

Unlikely to be an XFS problem, more likely an MD device/upgrade issue.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Unable to mount a XFS filesystem
  2016-05-08 14:12 Issa Gorissen
@ 2016-05-08 19:18 ` Carlos E. R.
  2016-05-08 20:51 ` Dave Chinner
  1 sibling, 0 replies; 6+ messages in thread
From: Carlos E. R. @ 2016-05-08 19:18 UTC (permalink / raw)
  To: XFS mail list



On 2016-05-08 16:12, Issa Gorissen wrote:
> Hello,
> 
> After upgrading my htpc from openSuse 12.3 running kernel 3.7.10
> to openSuse Tumbleweed running kernel 4.5.2, I am unable to mount my XFS
> filesystem anymore.

How did you do that upgrade? Zypper dup, or boot dvd, choose upgrade?



> dmesg outputs
> 
> 
> [ 5525.861750] SGI XFS with ACLs, security attributes, realtime, no
> debug enabled
> [ 5525.862231] attempt to access beyond end of device
> [ 5525.862232] md0: rw=32, want=5860529904, limit=5860529792
> [ 5525.862234] XFS (md0): last sector read failed

Me, I would try to find out if the array is readable:

dd if=/dev/md0 of=/dev/null

and see where it stops.

-- 
Cheers / Saludos,

		Carlos E. R.
		(from 13.1 x86_64 "Bottle" at Telcontar)



* Unable to mount a XFS filesystem
@ 2016-05-08 14:12 Issa Gorissen
  2016-05-08 19:18 ` Carlos E. R.
  2016-05-08 20:51 ` Dave Chinner
  0 siblings, 2 replies; 6+ messages in thread
From: Issa Gorissen @ 2016-05-08 14:12 UTC (permalink / raw)
  To: xfs

Hello,

After upgrading my htpc from openSuse 12.3 running kernel 3.7.10
to openSuse Tumbleweed running kernel 4.5.2, I am unable to mount my XFS
filesystem anymore.

I probably made some mistakes as I tried some things, but I did not note
them down because I thought the problem would not be hard to solve.

As I still cannot mount it, I am asking for help.

Here are the details.

The XFS filesystem is on the /dev/md0 device, which is a RAID1 on top of
two partitions hosted on two hard disks.

cat /proc/mdstat gives
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
       2930264896 blocks super 1.0 [2/2] [UU]
       bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>


tv:/ # hdparm -N /dev/sdb

/dev/sdb:
  max sectors   = 5860533168/5860533168, HPA is disabled
tv:/ # hdparm -N /dev/sdc

/dev/sdc:
  max sectors   = 5860533168/5860533168, HPA is disabled


tv:/ # mdadm -Evvvvvvvvs /dev/sdb1 /dev/sdc1
/dev/sdb1:
           Magic : a92b4efc
         Version : 1.0
     Feature Map : 0x1
      Array UUID : 227f22af:6aa194ed:254ca070:f896b2ce
            Name : any:0
   Creation Time : Sun May  1 23:07:01 2016
      Raid Level : raid1
    Raid Devices : 2

  Avail Dev Size : 5860529896 (2794.52 GiB 3000.59 GB)
      Array Size : 2930264896 (2794.52 GiB 3000.59 GB)
   Used Dev Size : 5860529792 (2794.52 GiB 3000.59 GB)
    Super Offset : 5860530160 sectors
    Unused Space : before=0 sectors, after=344 sectors
           State : clean
     Device UUID : 8f585fbd:6cf98a5b:60f7b824:a87e424f

Internal Bitmap : -24 sectors from superblock
     Update Time : Sun May  8 15:28:25 2016
   Bad Block Log : 512 entries available at offset -8 sectors
        Checksum : 12a61779 - correct
          Events : 6600


    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
           Magic : a92b4efc
         Version : 1.0
     Feature Map : 0x1
      Array UUID : 227f22af:6aa194ed:254ca070:f896b2ce
            Name : any:0
   Creation Time : Sun May  1 23:07:01 2016
      Raid Level : raid1
    Raid Devices : 2

  Avail Dev Size : 5860529896 (2794.52 GiB 3000.59 GB)
      Array Size : 2930264896 (2794.52 GiB 3000.59 GB)
   Used Dev Size : 5860529792 (2794.52 GiB 3000.59 GB)
    Super Offset : 5860530160 sectors
    Unused Space : before=0 sectors, after=344 sectors
           State : clean
     Device UUID : da81fee9:5b4fd038:a1967771:b9a5c6b9

Internal Bitmap : -24 sectors from superblock
     Update Time : Sun May  8 15:28:25 2016
   Bad Block Log : 512 entries available at offset -8 sectors
        Checksum : d3cd5d06 - correct
          Events : 6600


    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)



tv:/ # xfs_db /dev/md0
xfs_db: error - read only 0 of 512 bytes
xfs_db> sb 0
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 732566238
rblocks = 0
rextents = 0
uuid = 2259f9c7-4f2a-4c02-a9f4-a12e91771126
logstart = 536870916
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 22892695
agcount = 32
rbmblocks = 0
logblocks = 357698
versionnum = 0xb4a4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 25
rextslog = 0
inprogress = 0
imax_pct = 5
icount = 54720
ifree = 1112
fdblocks = 555446971
frextents = 0
uquotino = null
gquotino = null
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0xa
bad_features2 = 0xa
features_compat = 0
features_ro_compat = 0
features_incompat = 0
features_log_incompat = 0
crc = 0 (unchecked)
spino_align = 0
pquotino = 0
lsn = 0
meta_uuid = 00000000-0000-0000-0000-000000000000
xfs_db>



tv:/ # mount -t xfs /dev/md0 /data
mount: /dev/md0: can't read superblock

dmesg outputs


[ 5525.861750] SGI XFS with ACLs, security attributes, realtime, no 
debug enabled
[ 5525.862231] attempt to access beyond end of device
[ 5525.862232] md0: rw=32, want=5860529904, limit=5860529792
[ 5525.862234] XFS (md0): last sector read failed




Any pointers to fix this? It seems the disks still contain an XFS
filesystem but for some reason I cannot access it.

Thanks
--
Issa

