All of lore.kernel.org
 help / color / mirror / Atom feed
* Mismatch UUID
@ 2014-11-14  8:57 Robert Tench
  2014-11-14 13:42 ` Brian Foster
  2014-11-14 22:31 ` Chris Murphy
  0 siblings, 2 replies; 9+ messages in thread
From: Robert Tench @ 2014-11-14  8:57 UTC (permalink / raw)
  To: xfs



Robert has a file to share with you on OneDrive. To view it, click the link below.

xfs.log

So I have finally managed to find a way to save the complete log of running xfs_repair -n /dev/md4

And below is the output of xfs_check /dev/md4

root@ubuntu:~# xfs_check /dev/md4
* ERROR: mismatched uuid in log
*            SB : 813833a7-1bd3-4447-b575-09d1471bb652
*            log: ea3833af-25ce-9f91-b575-018fb49df3b1
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_check.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

And the output from mdadm -D /dev/md4 is as follows

root@ubuntu:~# mdadm -D /dev/md4
/dev/md4:
        Version : 1.0
  Creation Time : Fri Jan  1 01:31:17 2010
     Raid Level : raid5
     Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
  Used Dev Size : 2928240640 (2792.59 GiB 2998.52 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri Nov 14 15:58:16 2014
          State : clean 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : (none):4
           UUID : e0829810:9782b51f:25529f65:8823419c
         Events : 1243386

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       6       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       5       8       50        3      active sync   /dev/sdd2
       4       8       66        4      active sync   /dev/sde2


And then the response from mdadm -E /dev/md4

root@ubuntu:~# mdadm -E /dev/md4
mdadm: No md superblock detected on /dev/md4.

Not sure what to do; any help would be appreciated.

Regards,

Rob





_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Mismatch UUID
  2014-11-14  8:57 Mismatch UUID Robert Tench
@ 2014-11-14 13:42 ` Brian Foster
       [not found]   ` <BLU172-W1423188AD065CADCC115E6C48C0@phx.gbl>
  2014-11-14 22:31 ` Chris Murphy
  1 sibling, 1 reply; 9+ messages in thread
From: Brian Foster @ 2014-11-14 13:42 UTC (permalink / raw)
  To: Robert Tench; +Cc: xfs

On Fri, Nov 14, 2014 at 07:57:42PM +1100, Robert Tench wrote:
> Robert has a file to share with you on OneDrive. To view it, click the link below.
> xfs.log
> So I have finally managed to find a way to save the complete log of running xfs_repair -n /dev/md4
> 
> And below is the output of xfs_check /dev/md4
> 
> root@ubuntu:~# xfs_check /dev/md4
> * ERROR: mismatched uuid in log
> *            SB : 813833a7-1bd3-4447-b575-09d1471bb652
> *            log: ea3833af-25ce-9f91-b575-018fb49df3b1
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check.  If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
> 

You want to use xfs_repair (-n) rather than xfs_check. I think you
mentioned in your other email that you've tried xfs_repair...? The above
message indicates a dirty log; have you attempted to mount the device to
replay the log?
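
[Editor's note: the order of operations suggested here (mount to replay the dirty log, unmount, then a read-only check) could be sketched roughly as below. The mount point and helper name are assumptions for illustration, not something from the thread.]

```shell
# Rough sketch, not a definitive procedure: a successful mount lets the
# kernel replay the dirty XFS log; xfs_repair -n then checks the clean
# filesystem without modifying anything.
replay_then_check() {
    dev=$1 mnt=$2
    mkdir -p "$mnt" || return 1
    # Kernel log recovery happens during the mount; if it fails,
    # inspect dmesg before reaching for the destructive xfs_repair -L.
    mount "$dev" "$mnt" || return 1
    umount "$mnt"
    xfs_repair -n "$dev"    # -n: report problems only, change nothing
}
# e.g.: replay_then_check /dev/md4 /mnt/xfs
```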

> And the output from mdadm -D /dev/md4 is as follows
> 

How did you put the array back together? Did it assemble fine or did you
have to recreate it? If the latter, how are you sure the geometry is
correct (it looks like it's syncing)?

Brian

> root@ubuntu:~# mdadm -D /dev/md4
> /dev/md4:
>         Version : 1.0
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>   Used Dev Size : 2928240640 (2792.59 GiB 2998.52 GB)
>    Raid Devices : 5
>   Total Devices : 5
>     Persistence : Superblock is persistent
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>           State : clean 
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>            Name : (none):4
>            UUID : e0829810:9782b51f:25529f65:8823419c
>          Events : 1243386
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        2        0      active sync   /dev/sda2
>        6       8       18        1      active sync   /dev/sdb2
>        2       8       34        2      active sync   /dev/sdc2
>        5       8       50        3      active sync   /dev/sdd2
>        4       8       66        4      active sync   /dev/sde2
> 
> 
> And then the response from mdadm -E /dev/md4
> 
> root@ubuntu:~# mdadm -E /dev/md4
> mdadm: No md superblock detected on /dev/md4.
> 
> Not sure what to do, any help would be appreciated
> 
> Regards,
> 
> Rob
> 
> 
> 




* Re: Mismatch UUID
  2014-11-14  8:57 Mismatch UUID Robert Tench
  2014-11-14 13:42 ` Brian Foster
@ 2014-11-14 22:31 ` Chris Murphy
  2014-11-14 23:40   ` Eric Sandeen
  1 sibling, 1 reply; 9+ messages in thread
From: Chris Murphy @ 2014-11-14 22:31 UTC (permalink / raw)
  To: Robert Tench; +Cc: xfs


On Nov 14, 2014, at 1:57 AM, Robert Tench <robtench@hotmail.com> wrote:

> Robert has a file to share with you on OneDrive. To view it, click the link below.
>  xfs.log
> So I have finally managed to find a way to save the complete log of running xfs_repair -n /dev/md4
> 
> And below is the output of xfs_check /dev/md4
> 
> root@ubuntu:~# xfs_check /dev/md4
> * ERROR: mismatched uuid in log
> *            SB : 813833a7-1bd3-4447-b575-09d1471bb652
> *            log: ea3833af-25ce-9f91-b575-018fb49df3b1
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check.  If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
> 
> And the output from mdadm -D /dev/md4 is as follows
> 
> root@ubuntu:~# mdadm -D /dev/md4
> /dev/md4:
>         Version : 1.0
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>   Used Dev Size : 2928240640 (2792.59 GiB 2998.52 GB)
>    Raid Devices : 5
>   Total Devices : 5
>     Persistence : Superblock is persistent
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>           State : clean 
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>            Name : (none):4
>            UUID : e0829810:9782b51f:25529f65:8823419c
>          Events : 1243386
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        2        0      active sync   /dev/sda2
>        6       8       18        1      active sync   /dev/sdb2
>        2       8       34        2      active sync   /dev/sdc2
>        5       8       50        3      active sync   /dev/sdd2
>        4       8       66        4      active sync   /dev/sde2
> 
> 
> And then the response from mdadm -E /dev/md4
> 
> root@ubuntu:~# mdadm -E /dev/md4
> mdadm: No md superblock detected on /dev/md4.

-D is for examining the logical md device; -E is for examining the individual members, so you’d use:

mdadm -E /dev/sd[abcde]2

Hopefully you haven’t used mdadm -C/--create? The web is full of such suggestions, and it’s almost always the wrong thing to do; it’s a near last resort in any case.

> 
> Not sure what to do, any help would be appreciated

It’s very good to ask instead of haphazardly trying things. Trying to normally mount the file system should be safe; and then use dmesg to check for kernel messages. The xfs kernel code is responsible for log replay and making most kinds of repairs, anything it can’t deal with will be reported as a kernel message. If mount fails, report kernel xfs related messages, and also the results from xfs_check -n.

Chris Murphy


* Re: Mismatch UUID
  2014-11-14 22:31 ` Chris Murphy
@ 2014-11-14 23:40   ` Eric Sandeen
  2014-11-15  0:03     ` Chris Murphy
                       ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Eric Sandeen @ 2014-11-14 23:40 UTC (permalink / raw)
  To: Chris Murphy, Robert Tench; +Cc: xfs

On 11/14/14 4:31 PM, Chris Murphy wrote:

> It’s very good to ask instead of haphazardly trying things. Trying to
> normally mount the file system should be safe; and then use dmesg to
> check for kernel messages. The xfs kernel code is responsible for log
> replay and making most kinds of repairs, anything it can’t deal with
> will be reported as a kernel message. If mount fails, report kernel
> xfs related messages, and also the results from xfs_check -n.

xfs_repair -n

xfs_check is deprecated and has no -n option ;)

-Eric



* Re: Mismatch UUID
  2014-11-14 23:40   ` Eric Sandeen
@ 2014-11-15  0:03     ` Chris Murphy
  2014-11-15  2:17     ` Robert Tench
  2014-11-15  2:58     ` Robert Tench
  2 siblings, 0 replies; 9+ messages in thread
From: Chris Murphy @ 2014-11-15  0:03 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: Robert Tench, xfs


On Nov 14, 2014, at 4:40 PM, Eric Sandeen <sandeen@sandeen.net> wrote:

> On 11/14/14 4:31 PM, Chris Murphy wrote:
> 
>> It’s very good to ask instead of haphazardly trying things. Trying to
>> normally mount the file system should be safe; and then use dmesg to
>> check for kernel messages. The xfs kernel code is responsible for log
>> replay and making most kinds of repairs, anything it can’t deal with
>> will be reported as a kernel message. If mount fails, report kernel
>> xfs related messages, and also the results from xfs_check -n.
> 
> xfs_repair -n
> 
> xfs_check is deprecated and has no -n option ;)

Yep, brain flatulence.

Chris


* RE: Mismatch UUID
  2014-11-14 23:40   ` Eric Sandeen
  2014-11-15  0:03     ` Chris Murphy
@ 2014-11-15  2:17     ` Robert Tench
  2014-11-15 14:39       ` Brian Foster
  2014-11-15  2:58     ` Robert Tench
  2 siblings, 1 reply; 9+ messages in thread
From: Robert Tench @ 2014-11-15  2:17 UTC (permalink / raw)
  To: Eric Sandeen, Chris Murphy, Brian Foster, xfs



Hi all,

Many thanks for the replies so far.

So here is the response from the command mdadm -E /dev/sd[abcde]2

ubuntu@ubuntu:~$ sudo mdadm -E /dev/sd[abcde]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : e0829810:9782b51f:25529f65:8823419c
           Name : (none):4
  Creation Time : Fri Jan  1 01:31:17 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
     Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
   Super Offset : 5856481536 sectors
          State : clean
    Device UUID : d405d0c5:2a07d7ed:27abcb5a:0eeadc7d

    Update Time : Fri Nov 14 15:58:16 2014
       Checksum : c082e9bb - correct
         Events : 1243386

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : e0829810:9782b51f:25529f65:8823419c
           Name : (none):4
  Creation Time : Fri Jan  1 01:31:17 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
     Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
   Super Offset : 5856481536 sectors
          State : clean
    Device UUID : 33acf23a:52a0ad00:cda69d9f:468a8c13

    Update Time : Fri Nov 14 15:58:16 2014
       Checksum : 22fdc525 - correct
         Events : 1243386

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : e0829810:9782b51f:25529f65:8823419c
           Name : (none):4
  Creation Time : Fri Jan  1 01:31:17 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
     Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
   Super Offset : 5856481536 sectors
          State : clean
    Device UUID : 1a4f20e2:34926568:af5351a1:9c9f60e9

    Update Time : Fri Nov 14 15:58:16 2014
       Checksum : 96b21ac - correct
         Events : 1243386

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : e0829810:9782b51f:25529f65:8823419c
           Name : (none):4
  Creation Time : Fri Jan  1 01:31:17 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
     Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
   Super Offset : 5856481536 sectors
          State : clean
    Device UUID : 0ee6b711:242fbd18:44c2839e:60e8ad0d

    Update Time : Fri Nov 14 15:58:16 2014
       Checksum : ada0762 - correct
         Events : 1243386

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sde2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : e0829810:9782b51f:25529f65:8823419c
           Name : (none):4
  Creation Time : Fri Jan  1 01:31:17 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
     Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
   Super Offset : 5856481536 sectors
          State : clean
    Device UUID : f89fdd4c:627a6650:3de0dcab:6790076f

    Update Time : Fri Nov 14 15:58:16 2014
       Checksum : ec5bd289 - correct
         Events : 1243386

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing)
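
[Editor's note: a quick sanity check on output like the above is that every member reports the same Events count (and Update Time); members that dropped out of sync show differing counts. A rough sketch of that check, using the values from this thread as canned sample text rather than live mdadm output:]

```shell
# Canned sample standing in for: mdadm -E /dev/sd[abcde]2 | grep 'Events'
sample='Events : 1243386
Events : 1243386
Events : 1243386
Events : 1243386
Events : 1243386'

# An in-sync array collapses to one unique line; out-of-sync members
# would leave two or more distinct Events values.
distinct=$(printf '%s\n' "$sample" | sort -u | wc -l)
echo "$distinct"   # number of distinct Events values; 1 means all agree
```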


What I am curious about is how I should go about attempting to mount the device, and the command needed. Am I mounting the raid array /dev/md4, or am I trying to mount the XFS data contained within, which would be made up of /dev/sd[abcde]2? I am a little confused by all the various options and what exactly I am supposed to use, being a complete Linux novice.

Any help is appreciated.

Rob


> Date: Fri, 14 Nov 2014 17:40:10 -0600
> From: sandeen@sandeen.net
> To: lists@colorremedies.com; robtench@hotmail.com
> CC: xfs@oss.sgi.com
> Subject: Re: Mismatch UUID
> 
> On 11/14/14 4:31 PM, Chris Murphy wrote:
> 
> > It’s very good to ask instead of haphazardly trying things. Trying to
> > normally mount the file system should be safe; and then use dmesg to
> > check for kernel messages. The xfs kernel code is responsible for log
> > replay and making most kinds of repairs, anything it can’t deal with
> > will be reported as a kernel message. If mount fails, report kernel
> > xfs related messages, and also the results from xfs_check -n.
> 
> xfs_repair -n
> 
> xfs_check is deprecated and has no -n option ;)
> 
> -Eric




* RE: Mismatch UUID
  2014-11-14 23:40   ` Eric Sandeen
  2014-11-15  0:03     ` Chris Murphy
  2014-11-15  2:17     ` Robert Tench
@ 2014-11-15  2:58     ` Robert Tench
  2 siblings, 0 replies; 9+ messages in thread
From: Robert Tench @ 2014-11-15  2:58 UTC (permalink / raw)
  To: Eric Sandeen, Chris Murphy, Brian Foster, xfs



Hi all,

So I used the mount command mount -o ro /dev/md4 /xfs (after having made the /xfs directory).

It returned with 'Structure needs cleaning'.

Here are the parts of dmesg relating to the raid array and the attempted mounting:
[   12.359728] EXT4-fs (sda7): mounting ext3 file system using the ext4 subsystem
[   12.361314] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: (null)
[   14.382337]  sde: sde1 sde2 sde3 sde4 sde5 sde6 sde7 sde8 sde9 sde10
[   14.388298] sd 4:0:0:0: [sde] No Caching mode page found
[   14.388303] sd 4:0:0:0: [sde] Assuming drive cache: write through
[   14.388308] sd 4:0:0:0: [sde] Attached SCSI disk
[   14.676543] md: bind<sde8>
[   14.681538] md: kicking non-fresh sdd8 from array!
[   14.681544] md: unbind<sdd8>
[   14.700028] md: export_rdev(sdd8)
[   14.700055] md: kicking non-fresh sdb8 from array!
[   14.700063] md: unbind<sdb8>
[   14.708021] md: export_rdev(sdb8)
[   14.709145] md/raid1:md1: active with 3 out of 5 mirrors
[   14.709165] md1: detected capacity change from 0 to 863567872
[   14.713570] md: bind<sde5>
[   14.713663]  md1: unknown partition table
[   14.715628] md: kicking non-fresh sdd5 from array!
[   14.715635] md: unbind<sdd5>
[   14.724074] md: export_rdev(sdd5)
[   14.724095] md: kicking non-fresh sdb5 from array!
[   14.724102] md: unbind<sdb5>
[   14.736021] md: export_rdev(sdb5)
[   14.737203] md/raid1:md3: active with 3 out of 5 mirrors
[   14.737222] md3: detected capacity change from 0 to 262078464
[   14.739508]  md3: unknown partition table
[   14.742037] md: bind<sde2>
[   14.748793] md: bind<sde9>
[   14.749250] md/raid:md4: device sde2 operational as raid disk 4
[   14.749252] md/raid:md4: device sdd2 operational as raid disk 3
[   14.749254] md/raid:md4: device sda2 operational as raid disk 0
[   14.749255] md/raid:md4: device sdb2 operational as raid disk 1
[   14.749257] md/raid:md4: device sdc2 operational as raid disk 2
[   14.749707] md/raid:md4: allocated 0kB
[   14.749814] md/raid:md4: raid level 5 active with 5 out of 5 devices, algorithm 2
[   14.749815] RAID conf printout:
[   14.749816]  --- level:5 rd:5 wd:5
[   14.749818]  disk 0, o:1, dev:sda2
[   14.749820]  disk 1, o:1, dev:sdb2
[   14.749821]  disk 2, o:1, dev:sdc2
[   14.749823]  disk 3, o:1, dev:sdd2
[   14.749824]  disk 4, o:1, dev:sde2
[   14.749846] md4: detected capacity change from 0 to 11994073661440
[   14.769544] md: kicking non-fresh sdd9 from array!
[   14.769553] md: unbind<sdd9>
[   14.770503]  md4: unknown partition table
[   14.784024] md: export_rdev(sdd9)
[   14.784040] md: kicking non-fresh sdb9 from array!
[   14.784045] md: unbind<sdb9>
[   14.796026] md: export_rdev(sdb9)
[   14.797259] md/raid1:md2: active with 3 out of 5 mirrors
[   14.797278] md2: detected capacity change from 0 to 896466944
[   14.804201]  md2: unknown partition table
[   15.536766] EXT4-fs (sdb8): mounting ext3 file system using the ext4 subsystem
[   15.556508] EXT4-fs (sdb8): mounted filesystem with ordered data mode. Opts: (null)
[   15.575379] EXT4-fs (sdb9): mounting ext3 file system using the ext4 subsystem
[   15.581829] EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null)
[   16.044429] sd 5:0:0:0: [sdf] 60499968 512-byte logical blocks: (30.9 GB/28.8 GiB)
[   16.045044] sd 5:0:0:0: [sdf] Write Protect is off
[   16.045049] sd 5:0:0:0: [sdf] Mode Sense: 23 00 00 00
[   16.045669] sd 5:0:0:0: [sdf] No Caching mode page found
[   16.045673] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[   16.049167] sd 5:0:0:0: [sdf] No Caching mode page found
[   16.049172] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[   16.072442]  sdf: sdf1
[   16.075294] sd 5:0:0:0: [sdf] No Caching mode page found
[   16.075299] sd 5:0:0:0: [sdf] Assuming drive cache: write through
[   16.075303] sd 5:0:0:0: [sdf] Attached SCSI removable disk
[   38.533458] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[   39.643476] FAT-fs (sdf1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
[   40.485718] EXT4-fs (loop1): mounting ext2 file system using the ext4 subsystem
[   40.489678] EXT4-fs (loop1): warning: mounting unchecked fs, running e2fsck is recommended
[   40.495984] EXT4-fs (loop1): mounted filesystem without journal. Opts: (null)
[   88.354507] Adding 255932k swap on /dev/sdb5.  Priority:-1 extents:1 across:255932k FS
[   88.359216] Adding 255932k swap on /dev/sdd5.  Priority:-2 extents:1 across:255932k FS


[ 1636.143467] systemd-hostnamed[20496]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname!
[ 1918.676658] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
[ 1925.660198] XFS (md4): Mounting Filesystem
[ 1925.824734] XFS (md4): log has mismatched uuid - can't recover
[ 1925.824748] XFS (md4): failed to find log head
[ 1925.824751] XFS (md4): log mount/recovery failed: error 117
[ 1925.824793] XFS (md4): log mount failed
[ 2049.523180] systemd-hostnamed[21362]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname!
[20679.813021] XFS (md4): Mounting Filesystem
[20679.884528] XFS (md4): log has mismatched uuid - can't recover
[20679.884542] XFS (md4): failed to find log head
[20679.884545] XFS (md4): log mount/recovery failed: error 117
[20679.884590] XFS (md4): log mount failed
[21024.505390] XFS (md4): Mounting Filesystem
[21024.601886] XFS (md4): log has mismatched uuid - can't recover
[21024.601900] XFS (md4): failed to find log head
[21024.601903] XFS (md4): log mount/recovery failed: error 117
[21024.601946] XFS (md4): log mount failed
[21672.562447] XFS (md4): Mounting Filesystem
[21672.617195] XFS (md4): log has mismatched uuid - can't recover
[21672.617209] XFS (md4): failed to find log head
[21672.617212] XFS (md4): log mount/recovery failed: error 117
[21672.617246] XFS (md4): log mount failed
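
[Editor's note: the "error 117" in those mount failures is simply the errno behind "Structure needs cleaning" (EUCLEAN on Linux); it can be decoded from the shell, here via a Python one-liner:]

```shell
# Decode the errno reported by "log mount/recovery failed: error 117"
python3 -c 'import errno, os; print(errno.errorcode[117], "-", os.strerror(117))'
# EUCLEAN - Structure needs cleaning
```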

Rob

> Date: Fri, 14 Nov 2014 17:40:10 -0600
> From: sandeen@sandeen.net
> To: lists@colorremedies.com; robtench@hotmail.com
> CC: xfs@oss.sgi.com
> Subject: Re: Mismatch UUID
> 
> On 11/14/14 4:31 PM, Chris Murphy wrote:
> 
> > It’s very good to ask instead of haphazardly trying things. Trying to
> > normally mount the file system should be safe; and then use dmesg to
> > check for kernel messages. The xfs kernel code is responsible for log
> > replay and making most kinds of repairs, anything it can’t deal with
> > will be reported as a kernel message. If mount fails, report kernel
> > xfs related messages, and also the results from xfs_check -n.
> 
> xfs_repair -n
> 
> xfs_check is deprecated and has no -n option ;)
> 
> -Eric





* Re: Mismatch UUID
       [not found]   ` <BLU172-W1423188AD065CADCC115E6C48C0@phx.gbl>
@ 2014-11-15 14:27     ` Brian Foster
  0 siblings, 0 replies; 9+ messages in thread
From: Brian Foster @ 2014-11-15 14:27 UTC (permalink / raw)
  To: Robert Tench; +Cc: xfs

(Re-CC xfs list)

On Sat, Nov 15, 2014 at 08:10:18AM +1100, Robert Tench wrote:
> Hi Brian,
> 
> Thanks for your reply.
> 
> I had run xfs_repair -n previously, which spewed out a ton of stuff. In the last email you would have received, there should be a link to Hotmail OneDrive where you could view the output of that command; I called it xfs.log
> 

I didn't see a link in the plaintext of the message. I see something now
buried in an html attachment that my mailer doesn't interpret very well,
and the link doesn't appear to work.

> If I try to mount the array, it comes back with 'Structure needs cleaning' and won't mount.
> 
> When I tried to reassemble the array, it would only begin with 2 of the 5 drives, as I know that 3 of the drives had different update times and were out of sync.
> 
> I ended up doing a force assemble, which created the array and went into a resync process (really not sure if I did the right thing here).
> 

It's been a while since I've played around with md raid. Do you have a
command that you ran to put things back together? As was mentioned
up-thread, using a create (-C) command could just force an array
together in a particular geometry and write new metadata. This would
make the array look fine afterwards, but then there's no way to know
whether the array is actually in the original order and the data could
very well be scrambled.

Brian

> As to the geometry of the raid, I hope I had it in the right order. I previously had a data recovery tech remote-connect to my desktop and have a look at the array. He had also not been able to successfully mount the raid; it was he who told me the order of the drives, which happened to be the exact order of their placement in the drive bays of the LaCie NAS.
> 
> Is there a way to check whether I have the correct geometry, or would it now be impossible to tell after doing the force assemble?
> 
> The data recovery tech was also seeing the same mismatched UUID issue.
> 
> Any help is appreciated,
> 
> Rob
> 
> > Date: Fri, 14 Nov 2014 08:42:08 -0500
> > From: bfoster@redhat.com
> > To: robtench@hotmail.com
> > CC: xfs@oss.sgi.com
> > Subject: Re: Mismatch UUID
> > 
> > On Fri, Nov 14, 2014 at 07:57:42PM +1100, Robert Tench wrote:
> > > Robert has a file to share with you on OneDrive. To view it, click the link below.
> > > xfs.log
> > > So I have finally managed to find a way to save the complete log of running xfs_repair -n /dev/md4
> > > 
> > > And below is the output of xfs_check /dev/md4
> > > 
> > > root@ubuntu:~# xfs_check /dev/md4
> > > * ERROR: mismatched uuid in log
> > > *            SB : 813833a7-1bd3-4447-b575-09d1471bb652
> > > *            log: ea3833af-25ce-9f91-b575-018fb49df3b1
> > > ERROR: The filesystem has valuable metadata changes in a log which needs to
> > > be replayed.  Mount the filesystem to replay the log, and unmount it before
> > > re-running xfs_check.  If you are unable to mount the filesystem, then use
> > > the xfs_repair -L option to destroy the log and attempt a repair.
> > > Note that destroying the log may cause corruption -- please attempt a mount
> > > of the filesystem before doing this.
> > > 
> > 
> > You want to use xfs_repair (-n) rather than xfs_check. I think you
> > mentioned in your other email that you've tried xfs_repair...? The above
> > message indicates a dirty log; have you attempted to mount the device to
> > replay the log?
> > 
> > > And the output from mdadm -D /dev/md4 is as follows
> > > 
> > 
> > How did you put the array back together? Did it assemble fine or did you
> > have to recreate it? If the latter, how are you sure the geometry is
> > correct (it looks like it's syncing)?
> > 
> > Brian
> > 
> > > root@ubuntu:~# mdadm -D /dev/md4
> > > /dev/md4:
> > >         Version : 1.0
> > >   Creation Time : Fri Jan  1 01:31:17 2010
> > >      Raid Level : raid5
> > >      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
> > >   Used Dev Size : 2928240640 (2792.59 GiB 2998.52 GB)
> > >    Raid Devices : 5
> > >   Total Devices : 5
> > >     Persistence : Superblock is persistent
> > > 
> > >     Update Time : Fri Nov 14 15:58:16 2014
> > >           State : clean 
> > >  Active Devices : 5
> > > Working Devices : 5
> > >  Failed Devices : 0
> > >   Spare Devices : 0
> > > 
> > >          Layout : left-symmetric
> > >      Chunk Size : 512K
> > > 
> > >            Name : (none):4
> > >            UUID : e0829810:9782b51f:25529f65:8823419c
> > >          Events : 1243386
> > > 
> > >     Number   Major   Minor   RaidDevice State
> > >        0       8        2        0      active sync   /dev/sda2
> > >        6       8       18        1      active sync   /dev/sdb2
> > >        2       8       34        2      active sync   /dev/sdc2
> > >        5       8       50        3      active sync   /dev/sdd2
> > >        4       8       66        4      active sync   /dev/sde2
> > > 
> > > 
> > > And then the response from mdadm -E /dev/md4
> > > 
> > > root@ubuntu:~# mdadm -E /dev/md4
> > > mdadm: No md superblock detected on /dev/md4.
> > > 
> > > Not sure what to do, any help would be appreciated
> > > 
> > > Regards,
> > > 
> > > Rob
> > > 
> > > 
> > > 
> > 
> > 



* Re: Mismatch UUID
  2014-11-15  2:17     ` Robert Tench
@ 2014-11-15 14:39       ` Brian Foster
  0 siblings, 0 replies; 9+ messages in thread
From: Brian Foster @ 2014-11-15 14:39 UTC (permalink / raw)
  To: Robert Tench; +Cc: Chris Murphy, Eric Sandeen, xfs

On Sat, Nov 15, 2014 at 01:17:39PM +1100, Robert Tench wrote:
> Hi all,
> 
> Many thanks for the replies so far.
> 
> So here is the response from the command mdadm -E /dev/sd[abcde]2
> 
> ubuntu@ubuntu:~$ sudo mdadm -E /dev/sd[abcde]2
> /dev/sda2:
>           Magic : a92b4efc
>         Version : 1.0
>     Feature Map : 0x0
>      Array UUID : e0829810:9782b51f:25529f65:8823419c
>            Name : (none):4
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>    Super Offset : 5856481536 sectors
>           State : clean
>     Device UUID : d405d0c5:2a07d7ed:27abcb5a:0eeadc7d
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>        Checksum : c082e9bb - correct
>          Events : 1243386
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 0
>    Array State : AAAAA ('A' == active, '.' == missing)
> /dev/sdb2:
>           Magic : a92b4efc
>         Version : 1.0
>     Feature Map : 0x0
>      Array UUID : e0829810:9782b51f:25529f65:8823419c
>            Name : (none):4
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>    Super Offset : 5856481536 sectors
>           State : clean
>     Device UUID : 33acf23a:52a0ad00:cda69d9f:468a8c13
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>        Checksum : 22fdc525 - correct
>          Events : 1243386
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 1
>    Array State : AAAAA ('A' == active, '.' == missing)
> /dev/sdc2:
>           Magic : a92b4efc
>         Version : 1.0
>     Feature Map : 0x0
>      Array UUID : e0829810:9782b51f:25529f65:8823419c
>            Name : (none):4
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>    Super Offset : 5856481536 sectors
>           State : clean
>     Device UUID : 1a4f20e2:34926568:af5351a1:9c9f60e9
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>        Checksum : 96b21ac - correct
>          Events : 1243386
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 2
>    Array State : AAAAA ('A' == active, '.' == missing)
> /dev/sdd2:
>           Magic : a92b4efc
>         Version : 1.0
>     Feature Map : 0x0
>      Array UUID : e0829810:9782b51f:25529f65:8823419c
>            Name : (none):4
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>    Super Offset : 5856481536 sectors
>           State : clean
>     Device UUID : 0ee6b711:242fbd18:44c2839e:60e8ad0d
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>        Checksum : ada0762 - correct
>          Events : 1243386
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 3
>    Array State : AAAAA ('A' == active, '.' == missing)
> /dev/sde2:
>           Magic : a92b4efc
>         Version : 1.0
>     Feature Map : 0x0
>      Array UUID : e0829810:9782b51f:25529f65:8823419c
>            Name : (none):4
>   Creation Time : Fri Jan  1 01:31:17 2010
>      Raid Level : raid5
>    Raid Devices : 5
> 
>  Avail Dev Size : 5856481280 (2792.59 GiB 2998.52 GB)
>      Array Size : 11712962560 (11170.35 GiB 11994.07 GB)
>    Super Offset : 5856481536 sectors
>           State : clean
>     Device UUID : f89fdd4c:627a6650:3de0dcab:6790076f
> 
>     Update Time : Fri Nov 14 15:58:16 2014
>        Checksum : ec5bd289 - correct
>          Events : 1243386
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 4
>    Array State : AAAAA ('A' == active, '.' == missing)
> 
> 
> What I am curious about is how I should attempt to mount the device, and which command line to use. Am I mounting the RAID array /dev/md4, or am I trying to mount the XFS data partitions contained within it, which would be made up of /dev/sd[abcde]2? I am a little confused by all the various options and what exactly I am supposed to use, being a complete Linux novice.
> 

Normally the array (/dev/md*) is used directly, but I suppose it could
be partitioned. It depends on how the storage is configured. It does
look like you have several arrays configured from the log snippet in the
subsequent mail.

What does 'cat /proc/mdstat' show on the box? How about 'file -s
/dev/md##' for each md device listed in that output?
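A minimal, read-only sketch of those two checks (device names come from this thread; '/dev/md##' above is a placeholder, so the loop below just scans whatever md devices exist):

```shell
# Read-only inspection sketch; nothing here modifies the arrays.

# Show all active md arrays and which member devices back them:
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
fi

# Ask file(1) what each md device actually contains; an XFS
# filesystem is reported as "SGI XFS filesystem data":
for dev in /dev/md*; do
    if [ -b "$dev" ]; then
        file -s "$dev"
    fi
done
```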

Brian

> Any help is appreciated.
> 
> Rob
> 
> 
> > Date: Fri, 14 Nov 2014 17:40:10 -0600
> > From: sandeen@sandeen.net
> > To: lists@colorremedies.com; robtench@hotmail.com
> > CC: xfs@oss.sgi.com
> > Subject: Re: Mismatch UUID
> > 
> > On 11/14/14 4:31 PM, Chris Murphy wrote:
> > 
> > > It’s very good to ask instead of haphazardly trying things. Trying to
> > > normally mount the file system should be safe; and then use dmesg to
> > > check for kernel messages. The xfs kernel code is responsible for log
> > > replay and making most kinds of repairs, anything it can’t deal with
> > > will be reported as a kernel message. If mount fails, report kernel
> > > xfs related messages, and also the results from xfs_check -n.
> > 
> > xfs_repair -n
> > 
> > xfs_check is deprecated and has no -n option ;)
> > 
> > -Eric
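The sequence described above (try a normal mount so the kernel replays the log, check dmesg if it fails, then fall back to a no-modify check) can be sketched as follows; /dev/md4 is the array from this thread and /mnt/recovery is a hypothetical mount point, and the whole thing is guarded so it does nothing if the device is absent:

```shell
# Recovery sketch: mount first, inspect on failure, never modify.
if [ -b /dev/md4 ]; then
    mkdir -p /mnt/recovery
    if mount /dev/md4 /mnt/recovery; then
        # Mount succeeded: the kernel replayed the log. Unmount
        # cleanly before running any offline checks.
        umount /mnt/recovery
    else
        # Mount failed: capture recent kernel messages, then run a
        # read-only check (-n reports problems without writing).
        dmesg | tail -n 50
        xfs_repair -n /dev/md4
    fi
fi
```

Only if the mount and xfs_repair -n both point at an unreplayable log would xfs_repair -L (which destroys the log) come into play, as the earlier xfs_check output warned.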



^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2014-11-15 14:40 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-11-14  8:57 Mismatch UUID Robert Tench
2014-11-14 13:42 ` Brian Foster
     [not found]   ` <BLU172-W1423188AD065CADCC115E6C48C0@phx.gbl>
2014-11-15 14:27     ` Brian Foster
2014-11-14 22:31 ` Chris Murphy
2014-11-14 23:40   ` Eric Sandeen
2014-11-15  0:03     ` Chris Murphy
2014-11-15  2:17     ` Robert Tench
2014-11-15 14:39       ` Brian Foster
2014-11-15  2:58     ` Robert Tench
