* WARNING xfsdump [still] Cannot allocate memory for list of [root|non-root] attributes for nondir ino xxyz
@ 2010-06-27  1:10 Linda A. Walsh
  2010-06-28  2:27 ` Dave Chinner
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-06-27  1:10 UTC (permalink / raw)
  To: xfs-oss


FWIW -- that bug I mentioned a few weeks back, which I thought was gone in
the latest CVS of xfsdump -- isn't.

It seems less frequent (maybe external attributes have been slowly getting
weeded out as they fail -- if it is random).

What's 'unique' about this dump: I just copied from one partition to
another in order to resize the first, and this is on the copy back.

I.e. stuff that xfsrestore just wrote has problems being read.

The first problem was about 30 minutes into the 'copy'.

Then about 2 more in the next half hour -- so not very often.

This is with kernel 2.6.34.

xfstools were downloaded a few weeks ago, so fairly recent.

xfsdump: WARNING: could not get list of non-root attributes for nondir ino 2428439: Cannot allocate memory (12)
in @  108 MB/s, out @  0.0 kB/s,  116 GB total, buffer   6% full
xfsdump: status at 02:25:06: 918808/2250704 files dumped, 54.3% data dumped, 1800 seconds elapsed
in @ 14.0 MB/s, out @ 14.0 MB/s,  123 GB total, buffer   0% full
xfsdump: status at 02:30:06: 975579/2250704 files dumped, 57.6% data dumped, 2100 seconds elapsed
in @  0.0 kB/s, out @  0.0 kB/s,  135 GB total, buffer  14% full
xfsdump: status at 02:35:07: 1062278/2250704 files dumped, 63.2% data dumped, 2401 seconds elapsed
in @  226 MB/s, out @  157 MB/s,  152 GB total, buffer  91% full
xfsdump: status at 02:40:06: 1191037/2250704 files dumped, 71.5% data dumped, 2700 seconds elapsed
in @  0.0 kB/s, out @  0.0 kB/s,  169 GB total, buffer   0% full
xfsdump: status at 02:45:06: 1321397/2250704 files dumped, 79.2% data dumped, 3000 seconds elapsed
in @  0.0 kB/s, out @  120 MB/s,  176 GB total, buffer  30% full

xfsdump: WARNING: could not get list of secure attributes for nondir ino 3929093: Cannot allocate memory (12)
in @  0.0 kB/s, out @  118 MB/s,  177 GB total, buffer   7% full

xfsdump: WARNING: could not get list of non-root attributes for nondir ino 3929095: Cannot allocate memory (12)


* Re: WARNING xfsdump [still] Cannot allocate memory for list of [root|non-root] attributes for nondir ino xxyz
  2010-06-27  1:10 WARNING xfsdump [still] Cannot allocate memory for list of [root|non-root] attributes for nondir ino xxyz Linda A. Walsh
@ 2010-06-28  2:27 ` Dave Chinner
  2010-06-29 22:33   ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Linda Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-06-28  2:27 UTC (permalink / raw)
  To: Linda A. Walsh; +Cc: xfs-oss

On Sat, Jun 26, 2010 at 06:10:55PM -0700, Linda A. Walsh wrote:
> 
> FWIW -- that bug I mentioned a few weeks back that I thought was
> gone in the latest CVS
> of xfsdump -- isn't.
> 
> It seems less frequent (maybe external attributes have been getting
> slow weeded out as
> they fail -- if it is random).
> 
> What's 'unique' about this dump -- I just copied from one partition
> to another, to resize the first,
> and this is on the copy back.
> 
> I.e. Stuff that xfs-restore just wrote has problems being read.
> 
> First problem was about 30 minutes into the 'copy'
> 
> Then about 2 more in the next half hour -- so not real often.
> 
> This is with kernel 2.6.34.
> 
> xfstools were downloaded a few weeks ago, so fairly recent.
> 
> xfsdump: WARNING: could not get list of non-root attributes for
> nondir ino 2428439: Cannot allocate memory (12)
> in @  108 MB/s, out @  0.0 kB/s,  116 GB total, buffer   6%
> fullxfsdump: status at 02:25:06: 918808/2250704 files dumped, 54.3%
> data dumped, 1800 seconds elapsed
> in @ 14.0 MB/s, out @ 14.0 MB/s,  123 GB total, buffer   0%
> fullxfsdump: status at 02:30:06: 975579/2250704 files dumped, 57.6%
> data dumped, 2100 seconds elapsed
> in @  0.0 kB/s, out @  0.0 kB/s,  135 GB total, buffer  14%
> fullxfsdump: status at 02:35:07: 1062278/2250704 files dumped, 63.2%
> data dumped, 2401 seconds elapsed
> in @  226 MB/s, out @  157 MB/s,  152 GB total, buffer  91%
> fullxfsdump: status at 02:40:06: 1191037/2250704 files dumped, 71.5%
> data dumped, 2700 seconds elapsed
> in @  0.0 kB/s, out @  0.0 kB/s,  169 GB total, buffer   0%
> fullxfsdump: status at 02:45:06: 1321397/2250704 files dumped, 79.2%
> data dumped, 3000 seconds elapsed
> in @  0.0 kB/s, out @  120 MB/s,  176 GB total, buffer  30% full
> 
> xfsdump: WARNING: could not get list of secure attributes for nondir
> ino 3929093: Cannot allocate memory (12)
> in @  0.0 kB/s, out @  118 MB/s,  177 GB total, buffer   7% full
> 
> xfsdump: WARNING: could not get list of non-root attributes for
> nondir ino 3929095: Cannot allocate memory (12)

Can you get a list of all the attributes and their sizes on the
inodes xfsdump is complaining about?
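
Something like the following should do it, assuming the filesystem is still
mounted (the mount point, file path and device below are only placeholders):

  # map the inode number back to a path
  find /mount/point -xdev -inum 2428439

  # dump every xattr in every namespace (run as root); the value lengths
  # give the attribute sizes
  getfattr --absolute-names -d -m - -e hex /path/to/that/file

  # optionally, look at the attribute fork of the raw inode
  xfs_db -r -c "inode 2428439" -c "print" /dev/your-device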

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-28  2:27 ` Dave Chinner
@ 2010-06-29 22:33   ` Linda Walsh
  2010-06-29 23:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
  2010-06-30 18:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Michael Monnerie
  0 siblings, 2 replies; 29+ messages in thread
From: Linda Walsh @ 2010-06-29 22:33 UTC (permalink / raw)
  To: Dave Chinner, xfs-oss

Dave Chinner wrote:
> Can you get a list of all the attributes and their sizes on the
> inodes xfsdump is complaining about?
> 
> Cheers,
> 
> Dave.
Wish I could... I had a software problem that forced me to reuse the
drive I had just copied those examples from.

But I have another XFS problem that is much more reliably persistent.
I don't know if they are at all related, but since this problem is
a bit "stuck", it's easier to "reproduce".

Filesystem is one of my larger ones:

Ishtar:/Torrents> 'df' -T --sync  .
Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/dm-7      xfs   6442319744 2305631080 4136688664  36% /Torrents
Ishtar:/Torrents> 'df' -iT --sync  .
Filesystem    Type    Inodes   IUsed   IFree IUse% Mounted on
/dev/dm-7      xfs   1288490112   34313 1288455799    1% /Torrents

I have 3 files that developed 'bugs' in them in 3 separate directories.
Oddly, they were 3 copies of the same 3 files.  Very odd.

Symptom is from ls:
---------------------------------------------------------------------------------------------------
Ishtar:/Torrents> 'ls' -ni bad*     
ls: cannot access bad/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
ls: cannot access bad/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
ls: cannot access bad/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
bad:
total 0
2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory

bad2:
total 0
2220560 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
2220561 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
2218302 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
ls: cannot access bad3/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
ls: cannot access bad3/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
ls: cannot access bad3/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory

bad3:
total 0
2218295 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
2218296 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
2218297 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
---------------------------------------------------------------------------------------------------

The file system, labeled 'Torrents', is layered on an LVM base (I'm not convinced that
LVM is as reliable as physical partitions at this point, which is why I mention it).

It's a 'live' file system.  What info do you want me to dump from it?

I'm dumping the files from it now, so I can try to remake the file system.

The problem is 'spreading' to "new" files.  Basically, any file that is being written to
now seems to be in danger of becoming inaccessible.

Also, FWIW -- I did unmount the file system and run xfs_repair.  It finds no problems -- 
so why are these files not accessible?

Ideas?

Next steps?

Thanks!
Linda





* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ (was xfs_dump problem...)
  2010-06-29 22:33   ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Linda Walsh
@ 2010-06-29 23:25     ` Dave Chinner
  2010-06-29 23:55       ` Michael Weissenbacher
  2010-06-30  0:01       ` Linda A. Walsh
  2010-06-30 18:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Michael Monnerie
  1 sibling, 2 replies; 29+ messages in thread
From: Dave Chinner @ 2010-06-29 23:25 UTC (permalink / raw)
  To: Linda Walsh; +Cc: xfs-oss

On Tue, Jun 29, 2010 at 03:33:02PM -0700, Linda Walsh wrote:
> Dave Chinner wrote:
> >Can you get a list of all the attributes and their sizes on the
> >inodes xfsdump is complaining about?
> >
> >Cheers,
> >
> >Dave.
> Wish I could...had a software problem that had me have to reuse the
> drive I just copied those examples from.
> 
> But have another XFS problem that is much more reliably persistent.
> I don't know if they are at all related, but since I have this problem
> that's a bit "stuck", it's easier to "reproduce".
> 
> Filesystem is one of my larger ones:
> 
> Ishtar:/Torrents> 'df' -T --sync  .
> Filesystem    Type   1K-blocks      Used Available Use% Mounted on
> /dev/dm-7      xfs   6442319744 2305631080 4136688664  36% /Torrents
> Ishtar:/Torrents> 'df' -iT --sync  .
> Filesystem    Type    Inodes   IUsed   IFree IUse% Mounted on
> /dev/dm-7      xfs   1288490112   34313 1288455799    1% /Torrents
> 
> I have 3 files that developed 'bugs' in them in 3 separate
> directories.  Oddly, they were they were 3 copies of the same 3
> files.  Very Odd.
> 
> Symptom is from ls:
> ---------------------------------------------------------------------------------------------------
> Ishtar:/Torrents> 'ls' -ni bad*     ls: cannot access bad/30-Omoide
> to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
> ls: cannot access bad/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
> ls: cannot access bad/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
> bad:
> total 0
> 2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
> 2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
> 2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
> ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
> ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
> ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory

Those file names have a weird character in them - are you sure that
the terminal supports that character set and is not mangling it and
hence not matching what is actually stored on disk?

> The file system, labeled 'Torrents' is layered on a lvm base (I'm not convinced that
> lvm is as reliable as physical partitions at this point, which is why I mention it).
> 
> It's a 'live' file system.  What info do you want me to dump from it?
> 
> I'm dumping the files from it now, so I can try to remake the file system.
> 
> The problem is 'spreading' to "new" files.  Basically, any file that is being written to
> now, seems to be in danger of becoming inaccessible.

Do those "new" files have the same strange characters in them?

> Also, FWIW -- I did unmount the file system and run xfs_repair.  It
> finds no problems -- so why are these files not accessible?

If there are no problems reported by repair, then I suspect that
it's a terminal level problem...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ (was xfs_dump problem...)
  2010-06-29 23:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
@ 2010-06-29 23:55       ` Michael Weissenbacher
  2010-06-30  0:42         ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ " Linda A. Walsh
  2010-06-30  0:01       ` Linda A. Walsh
  1 sibling, 1 reply; 29+ messages in thread
From: Michael Weissenbacher @ 2010-06-29 23:55 UTC (permalink / raw)
  To: xfs

Hi!
>> 2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
>> 2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
>> 2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
>> ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
>> ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
>> ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
> 
> ...
>
> If there are no problems reported by repair, then I suspect that
> it's a terminal level problem...
> 
Looking at this, I remember having similar problems when my filesystem
had been mounted with inode64 before and I later left out that parameter.
So Linda, could you re-try mounting the fs with "inode64"?
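
Something along these lines, using the device and mount point from your df
output -- note that on these kernels inode64 generally doesn't take effect
on a plain remount, so a full unmount/mount is the safer way to test it:

  umount /Torrents
  mount -o inode64 /dev/dm-7 /Torrents

Existing inodes keep their numbers; only newly allocated inodes can end up
above 32 bits.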

cheers,
Michael


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-29 23:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
  2010-06-29 23:55       ` Michael Weissenbacher
@ 2010-06-30  0:01       ` Linda A. Walsh
  2010-06-30  1:06         ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
  1 sibling, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-06-30  0:01 UTC (permalink / raw)
  To: Dave Chinner, xfs-oss



Dave Chinner wrote:
---------------------------------------------------------------------------------------------------
>> Ishtar:/Torrents> 'ls' -ni bad*     ls: cannot access bad/30-Omoide
>> to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
>> ls: cannot access bad/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
>> ls: cannot access bad/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
>> bad:
>> total 0
>> 2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
>> 2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
>> 2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
>> ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
>> ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
>> ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
> 
> Those file names have a weird character in them - are you sure that
> the terminal supports that character set and is not mangling it and
> hence not matching what is actually stored on disk?
-----
Those files were 'fine' before today.

I know it is not a terminal problem --
I told ls to list all the files in the directory, and then it says "no such file".

Are you saying that "*" shouldn't match everything?

Those question marks are where the size should be!

There are no weird characters in those file names.

Here are the same files in another directory:
mp3> ll 3*
-rwx------ 1 3255702 2010-06-14 10:54 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3*
-rwx------ 1 3272004 2010-06-14 10:54 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3*
-rwx------ 1 3234876 2010-06-14 10:54 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3*


The fields it can't display are the file size, time, and date!  The names display OK!

How can the file size, time, and date be unprintable characters that "ls" can't display?


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-29 23:55       ` Michael Weissenbacher
@ 2010-06-30  0:42         ` Linda A. Walsh
  2010-06-30  1:16           ` Dave Chinner
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-06-30  0:42 UTC (permalink / raw)
  To: Michael Weissenbacher, xfs-oss



Michael Weissenbacher wrote:
> Hi!
>>> 2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
>>> 2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
>>> 2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
>>> ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
>>> ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
>>> ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
>> ...
>>
>> If there are no problems reported by repair, then I suspect that
>> it's a terminal level problem...
>>
> Looking at this i remember having similar problems when my filesystems
> was mounted with inode64 before and after i left out that parameter. So
> Linda, could you re-try mounting the fs with "inode64".
----
	I ran into that before as well -- I already tried it.

Notice the listing you see is the output of "ls -in".
Those numbers are the inodes.  ...HEY, wait.
When do you need 64-bit inodes?  The dump size said it was:
xfsdump: estimated dump size: 2360915740992 bytes

It's getting right near to overflowing a 32-bit integer.

Do I need >32 bit inodes if the filesystem size is > 2T?

I'm guessing the file system just recently passed the 2T mark.
Note -- I DID try the inode64 mount option -- it made no difference.


But if you need 64-bit inodes for file systems > 2T,
then shouldn't I have gotten some error rather than it eating files
for lunch?

The inode numbers themselves are not even close to being over 32 bits.

Sigh...



* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ (was xfs_dump problem...)
  2010-06-30  0:01       ` Linda A. Walsh
@ 2010-06-30  1:06         ` Dave Chinner
  2010-06-30  1:52           ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ " Linda A. Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-06-30  1:06 UTC (permalink / raw)
  To: Linda A. Walsh; +Cc: xfs-oss

On Tue, Jun 29, 2010 at 05:01:12PM -0700, Linda A. Walsh wrote:
> 
> 
> Dave Chinner wrote:
> ---------------------------------------------------------------------------------------------------
> >>Ishtar:/Torrents> 'ls' -ni bad*     ls: cannot access bad/30-Omoide
> >>to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
> >>ls: cannot access bad/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
> >>ls: cannot access bad/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
> >>bad:
> >>total 0
> >>2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
> >>2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
> >>2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
> >>ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
> >>ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
> >>ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
> >
> >Those file names have a weird character in them - are you sure that
> >the terminal supports that character set and is not mangling it and
> >hence not matching what is actually stored on disk?
> -----
> Those files were 'fine' before today.
> 
> I know it is not a terminal problem --
> I told ls to list all files in the directory -- then it says "no such file".
> 
> Can you say that "*" shouldn't match everything?
> 
> Those question marks are in the place for the size!
> 
> There are no weird characters in those file names.

I beg to differ ;)

> Here are the same files in another directory:
> mp3> ll 3*
> -rwx------ 1 3255702 2010-06-14 10:54 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3*
> -rwx------ 1 3272004 2010-06-14 10:54 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3*
> -rwx------ 1 3234876 2010-06-14 10:54 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3*
                                                                       ^^^
That character is a non-ascii character, which is why I was
wondering about terminals and character sets.  It does not display
correctly in mutt (a bold vertical bar) or Vim (a dotted, double
character width square) using LANG=en_AU.UTF-8 here....

> The fields it can't display are the file size, time and dates!

Yes, I know.

To stat a file to get that information, you first have to open it, and
that requires getting the file name exactly right. If you try to
open a file encoded with one language setting/character set and then
decode and re-encode it with another, the raw filename data will be
different to what is on disk. Hence when I see filenames with
unprintable characters in the mail, it's always worth checking
first...
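
A quick way to check, assuming the problem directory is the 'bad' one from
your listing, is to look at the raw name bytes rather than what the
terminal renders:

  # show the directory entries with non-printable bytes escaped
  ls -b bad

  # or dump the exact bytes of one entry as the kernel returns them
  find bad -maxdepth 1 -name '30-*' -print0 | od -c

If those bytes differ from the name you are asking ls to stat, that would
explain the failure.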

> How can file size, time and date be in unprintable characters that "ls" can't display?

They aren't. They are printed as ??? because the stat failed and
hence they are unknown.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-30  0:42         ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ " Linda A. Walsh
@ 2010-06-30  1:16           ` Dave Chinner
  2010-06-30  2:45             ` Linda A. Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-06-30  1:16 UTC (permalink / raw)
  To: Linda A. Walsh; +Cc: Michael Weissenbacher, xfs-oss

On Tue, Jun 29, 2010 at 05:42:02PM -0700, Linda A. Walsh wrote:
> Do I need >32 bit inodes if the filesystem size is > 2T?

No. inode64 is recommended for multi-terabyte filesystems, but is
not mandatory.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-30  1:06         ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
@ 2010-06-30  1:52           ` Linda A. Walsh
  2010-06-30 21:01             ` Stan Hoeppner
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-06-30  1:52 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs-oss



Dave Chinner wrote:
> On Tue, Jun 29, 2010 at 05:01:12PM -0700, Linda A. Walsh wrote:
>>
>> Dave Chinner wrote:
>> ---------------------------------------------------------------------------------------------------
>>>> Ishtar:/Torrents> 'ls' -ni bad*     ls: cannot access bad/30-Omoide
>>>> to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
>>>> ls: cannot access bad/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
>>>> ls: cannot access bad/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
>>>> bad:
>>>> total 0
>>>> 2359101 ?????????? ? ? ? ?                ? 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3
>>>> 2354946 ?????????? ? ? ? ?                ? 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3
>>>> 2354949 ?????????? ? ? ? ?                ? 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3
>>>> ls: cannot access bad2/30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3: No such file or directory
>>>> ls: cannot access bad2/31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3: No such file or directory
>>>> ls: cannot access bad2/32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3: No such file or directory
>>> Those file names have a weird character in them - are you sure that
>>> the terminal supports that character set and is not mangling it and
>>> hence not matching what is actually stored on disk?
>> -----
>> Those files were 'fine' before today.
>>
>> I know it is not a terminal problem --
>> I told ls to list all files in the directory -- then it says "no such file".
>>
>> Can you say that "*" shouldn't match everything?
>>
>> Those question marks are in the place for the size!
>>
>> There are no weird characters in those file names.
> 
> I beg to differ ;)
----
	They are standard UTF-8 characters!  What's weird about
them?!?   Next you'll be complaining about my hair style... ;).


> 
>> Here are the same files in another directory:
>> mp3> ll 3*
>> -rwx------ 1 3255702 2010-06-14 10:54 30-Omoide to Yakusoku (TV saizu|Reinaʼs Ver.).mp3*
>> -rwx------ 1 3272004 2010-06-14 10:54 31-Omoide to Yakusoku (TV saizu|Tomoeʼs Ver.).mp3*
>> -rwx------ 1 3234876 2010-06-14 10:54 32-Omoide to Yakusoku (TV saizu|Nanualʼs Ver.).mp3*
>                                                                        ^^^
> That character is a non-ascii character, which is why I was
> wondering about terminals and character sets.  It does not display
> correctly in mutt (a bold vertical bar) or Vim (a dotted, double
> character width square) using LANG=en_AU.UTF-8 here....
----
You don't have the right font for your Vim. :-)

  I use vim here and it displays the character
in the TTY version, the X version and even the Windows version!

Anyway -- as you can see above, the files display fine in another directory with the
same character.  It's not the vertical bar char.  I use UTF-8 characters all over
the place -- I created that file manually.  Go get 'BabelMap'.  It's free (well, donation
requested) from http://www.babelstone.co.uk/Software/BabelMap.html.  My systems handle
them 'fine'.  That's not the problem here.
I have 3 directories that all have copies of files 30-32 in them that are corrupt.

I have another directory "Shakugan no Shana II OST", that I can't cd into or ls.
Just says 'not found'.

This all happened today.  They were accessible before today.  I'm not sure what changed,
other than new files were added.



> 
>> How can file size, time and date be in unprintable characters that "ls" can't display?
> 
> They aren't. They are printed as ??? because the stat failed and
> hence they are unknown.
----
	Ok, why would the stat fail?  

	There's nothing magical about a 2TB limit?  (sigh...)

	Ideas?

Linda


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-30  1:16           ` Dave Chinner
@ 2010-06-30  2:45             ` Linda A. Walsh
  2010-07-01 23:58               ` Dave Chinner
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-06-30  2:45 UTC (permalink / raw)
  To: xfs-oss

To make matters more interesting -- xfsdump can't access a couple of
files and a directory or two.

It thinks they are 'stale NFS handles' (I'm not running any NFS 
file systems).

in @  0.0 kB/s, out @  0.0 kB/s,  0.0 kB total, buffer   0% full
xfsdump: WARNING: unable to open directory: ino 2082342: Stale NFS file handle
xfsdump: WARNING: unable to open directory: ino 2082343: Stale NFS file handle
in @ 4079 kB/s, out @ 4079 kB/s, 2040 kB total, buffer   0% full
xfsdump: dumping non-directory files
in @ 68.0 MB/s, out @ 68.0 MB/s, 1209 GB total, buffer   0% full
in @  107 MB/s, out @  105 MB/s, 2200 GB total, buffer   0% full
xfsdump: ending media file
xfsdump: media file size 2362017678616 bytes
xfsdump: dump size (non-dir files) : 2361953613176 bytes
xfsdump: dump complete: 10926 seconds elapsed
xfsdump: Dump Status: SUCCESS

Running xfs_db on the file system (now that the dump has finished),
a blockget check returns:
dir 1133368 block 0 extra leaf entry 5438b33d 79
dir 1133368 block 0 extra leaf entry 6624beba 71
dir 1133368 block 0 extra leaf entry 6d832f88 69
dir 1133368 block 0 extra leaf entry e6279e2d 80
dir ino 1133368 missing leaf entry for e627de2d/80
dir ino 1133368 missing leaf entry for 7624beba/71
dir ino 1133368 missing leaf entry for 5418b33d/79
dir ino 1133368 missing leaf entry for 6d832f80/69
dir 1134347 block 0 extra leaf entry 3907754f 4e
dir 1134347 block 0 extra leaf entry 3cae5257 47
dir 1134347 block 0 extra leaf entry 6e3a4cfb 40
dir ino 1134347 missing leaf entry for 6e38ccfb/40
dir ino 1134347 missing leaf entry for 3905f54f/4e
dir ino 1134347 missing leaf entry for 3cacd257/47
dir 1149453 block 0 extra leaf entry 190f789f 7d
dir ino 1149453 missing leaf entry for 190b78df/7d
dir 1193988 block 0 extra leaf entry 8f2b7b0c 80
dir ino 1193988 missing leaf entry for 8d2b7b0c/80
dir 2098474 block 0 extra leaf entry 891d210e b5
dir ino 2098474 missing leaf entry for 891d200f/b5
dir 2103050 block 0 extra leaf entry 3c8b96be 79
dir ino 2103050 missing leaf entry for 3c8b96ae/79
dir 2106377 block 0 extra leaf entry 6554830e 185
dir 2106377 block 0 extra leaf entry 8265fcda 173
dir 2106377 block 0 extra leaf entry b251fcda 17f
dir 2106377 block 0 extra leaf entry b687730b 179
dir ino 2106377 missing leaf entry for 8265fcd2/173
dir ino 2106377 missing leaf entry for b251fcd2/17f
dir ino 2106377 missing leaf entry for b687734b/179
dir ino 2106377 missing leaf entry for 6554c31e/185
dir 2107395 block 0 extra leaf entry 207fea65 ff
dir 2107395 block 0 extra leaf entry dd1d75a2 f8
dir ino 2107395 missing leaf entry for df1d75a2/f8
dir ino 2107395 missing leaf entry for 206fea65/ff
dir 2215937 block 0 extra leaf entry 1d5bcd6d 3a
dir 2215937 block 0 extra leaf entry 9c82da3c 9c
dir 2215937 block 0 extra leaf entry ff88d238 a4
dir ino 2215937 missing leaf entry for 1f5bcd6d/3a
dir ino 2215937 missing leaf entry for fd88d238/a4
dir ino 2215937 missing leaf entry for 9e82da3c/9c
dir 2218278 block 0 extra leaf entry 1d5bcd6d 3a
dir 2218278 block 0 extra leaf entry 9c82da3c 9c
dir 2218278 block 0 extra leaf entry ff88d238 a4
dir ino 2218278 missing leaf entry for 1f5bcd6d/3a
dir ino 2218278 missing leaf entry for fd88d238/a4
dir ino 2218278 missing leaf entry for 9e82da3c/9c
dir 2241309 block 0 extra leaf entry 51ae209 39
dir 2241309 block 0 extra leaf entry b5c3baca 32
dir ino 2241309 missing leaf entry for 5186209/39
dir ino 2241309 missing leaf entry for b5c13aca/32
dir 2291983 block 0 extra leaf entry 16650e73 117
dir 2291983 block 0 extra leaf entry 355a317c 112
dir 2291983 block 0 extra leaf entry 47464246 ee
dir 2291983 block 0 extra leaf entry 620ec6fb 109
dir 2291983 block 0 extra leaf entry 62170366 e7
dir 2291983 block 0 extra leaf entry 73c17bba 10d
dir 2291983 block 0 extra leaf entry 747a8790 e2
dir 2291983 block 0 extra leaf entry 894504e3 ff
dir 2291983 block 0 extra leaf entry b17d4203 11c
dir 2291983 block 0 extra leaf entry d12c165b 105
dir 2291983 block 0 extra leaf entry d33f483d fb
dir 2291983 block 0 extra leaf entry e28ebb3b f2
dir 2291983 block 0 extra leaf entry f9f9b6d1 f6
dir ino 2291983 missing leaf entry for 8b4504e3/ff
dir ino 2291983 missing leaf entry for d33f48ad/fb
dir ino 2291983 missing leaf entry for 375a317c/112
dir ino 2291983 missing leaf entry for e9f9bed1/f6
dir ino 2291983 missing leaf entry for 45464246/ee
dir ino 2291983 missing leaf entry for 71c17bba/10d
dir ino 2291983 missing leaf entry for b13d4203/11c
dir ino 2291983 missing leaf entry for 745a8790/e2
dir ino 2291983 missing leaf entry for 16650e63/117
dir ino 2291983 missing leaf entry for e21f0366/e7
dir ino 2291983 missing leaf entry for e08ebb2b/f2
dir ino 2291983 missing leaf entry for d12896db/105
dir ino 2291983 missing leaf entry for 620ec6eb/109

xfs_repair -n now shows:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 2
        - agno = 1
        - agno = 0
        - agno = 5
        - agno = 4
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "10 Otome ha DO MY BEST desho¿ (off vocal).flac" (ino 2359102) in dir 1133368 is a duplicate name, would junk entry
entry "06 Otome ha DO MY BEST desho¿ Otome ver..flac" (ino 2359100) in dir 1133368 is a duplicate name, would junk entry
entry "05 Otome ha DO MY BEST desho¿ Hime ver..flac" (ino 2359099) in dir 1133368 is a duplicate name, would junk entry
entry "04 Otome ha DO MY BEST desho¿ 2007ver..flac" (ino 2359086) in dir 1133368 is a duplicate name, would junk entry
entry "04 Finality blue «instrumental».flac" (ino 2220801) in dir 1134347 is a duplicate name, would junk entry
entry "06 Koko ni Atta ne «instrumental».flac" (ino 2220800) in dir 1134347 is a duplicate name, would junk entry
entry "05 Heart All Green «instrumental».flac" (ino 2359103) in dir 1134347 is a duplicate name, would junk entry
entry "01_Tenkai Hen Josô e no Zensô Kyoku.mp3" (ino 2359093) in dir 1149453 is a duplicate name, would junk entry
entry "01 Träumerei.flac" (ino 2359098) in dir 1193988 is a duplicate name, would junk entry
entry "16 - Pretty Guardian Sailor Moon - Kiss!² Bang!².ogg" (ino 2359097) in dir 2098474 is a duplicate name, would junk entry
entry "03 Broken MÖBIUS.ogg" (ino 2359094) in dir 2103050 is a duplicate name, would junk entry
entry "14 - Broken MÖBIUS Henkei A.ogg" (ino 2359095) in dir 2106377 is a duplicate name, would junk entry
entry "27 - Shin·Henshin Sailormoon.ogg" (ino 2359092) in dir 2106377 is a duplicate name, would junk entry
entry "19 - Broken MÖBIUS Henkei B.ogg" (ino 2359088) in dir 2106377 is a duplicate name, would junk entry
entry "06 - GURU·GURU·GURA.ogg" (ino 2359089) in dir 2106377 is a duplicate name, would junk entry
entry "35 Ankoku no Uranai Noshi · Lilith.ogg" (ino 2359091) in dir 2107395 is a duplicate name, would junk entry
entry "17 Kaguya Shima·Mukashi Mono Gatarii.ogg" (ino 2359087) in dir 2107395 is a duplicate name, would junk entry
bad hash table for directory inode 2215937 (hash value mismatch): would rebuild
bad hash table for directory inode 2218278 (hash value mismatch): would rebuild
entry "03 Free and Dream «Instrumental».flac" (ino 2359090) in dir 2241309 is a duplicate name, would junk entry
entry "04 Asu he -Sorairo no Tegami- «Instrumental».flac" (ino 2359096) in dir 2241309 is a duplicate name, would junk entry
entry "18 Rafraîchissement.flac" (ino 2359085) in dir 2291983 is a duplicate name, would junk entry
entry "17 Différent malgré la ressemblance.flac" (ino 2218273) in dir 2291983 is a duplicate name, would junk entry
entry "09 Sincèrement.flac" (ino 2218271) in dir 2291983 is a duplicate name, would junk entry
entry "15 Déjà Vu.flac" (ino 2218276) in dir 2291983 is a duplicate name, would junk entry
entry "20 Fête d'école.flac" (ino 2218275) in dir 2291983 is a duplicate name, would junk entry
entry "12 Péripétie.flac" (ino 2218270) in dir 2291983 is a duplicate name, would junk entry
entry "27 Lieu de la bataille décisive.flac" (ino 2218272) in dir 2291983 is a duplicate name, would junk entry
entry "06 Sérénité.flac" (ino 2218274) in dir 2291983 is a duplicate name, would junk entry
entry "04 Tragédie.flac" (ino 2218266) in dir 2291983 is a duplicate name, would junk entry
entry "25 Serment éternel.flac" (ino 2218268) in dir 2291983 is a duplicate name, would junk entry
entry "16 Force étrange.flac" (ino 2218277) in dir 2291983 is a duplicate name, would junk entry
entry "28 Manoeuvres secrètes.flac" (ino 2218269) in dir 2291983 is a duplicate name, would junk entry
entry "26 Jours passés.flac" (ino 2218267) in dir 2291983 is a duplicate name, would junk entry
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 2218266, would move to lost+found
disconnected inode 2218267, would move to lost+found
disconnected inode 2218268, would move to lost+found
disconnected inode 2218269, would move to lost+found
disconnected inode 2218270, would move to lost+found
disconnected inode 2218271, would move to lost+found
disconnected inode 2218272, would move to lost+found
disconnected inode 2218273, would move to lost+found
disconnected inode 2218274, would move to lost+found
disconnected inode 2218275, would move to lost+found
disconnected inode 2218276, would move to lost+found
disconnected inode 2218277, would move to lost+found
disconnected inode 2220800, would move to lost+found
disconnected inode 2220801, would move to lost+found
disconnected inode 2359085, would move to lost+found
disconnected inode 2359086, would move to lost+found
disconnected inode 2359087, would move to lost+found
disconnected inode 2359088, would move to lost+found
disconnected inode 2359089, would move to lost+found
disconnected inode 2359090, would move to lost+found
disconnected inode 2359091, would move to lost+found
disconnected inode 2359092, would move to lost+found
disconnected inode 2359093, would move to lost+found
disconnected inode 2359094, would move to lost+found
disconnected inode 2359095, would move to lost+found
disconnected inode 2359096, would move to lost+found
disconnected inode 2359097, would move to lost+found
disconnected inode 2359098, would move to lost+found
disconnected inode 2359099, would move to lost+found
disconnected inode 2359100, would move to lost+found
disconnected inode 2359102, would move to lost+found
disconnected inode 2359103, would move to lost+found
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

-------------
This isn't going to help repair it, though -- I ran repair this
morning and it showed up as fine!

Then all of the above happened today!

It would appear that 2.6.34 might have some problems in it?

-linda


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-29 22:33   ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Linda Walsh
  2010-06-29 23:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
@ 2010-06-30 18:25     ` Michael Monnerie
  2010-06-30 23:30       ` rsync and corrupt inodes (was xfs_dump problem) Dave Chinner
  1 sibling, 1 reply; 29+ messages in thread
From: Michael Monnerie @ 2010-06-30 18:25 UTC (permalink / raw)
  To: xfs



On Wednesday, 30 June 2010, Linda Walsh wrote:
> But have another XFS problem that is much more reliably persistent.
> I don't know if they are at all related, but since I have this
>  problem that's a bit "stuck", it's easier to "reproduce".
 
I think my problem is similar. I have a Linux box ("orion") running Samba.
A Win7 client uses it to store its "Windows Backup". That's OK.

From another Linux box ("saturn"), I do an rsync via an rsync module,
and I already have 4 versions where the ".vhd" file of that Windows Backup
is destroyed on "saturn". So the corruption happens when starting
rsync @saturn, copying orion->saturn, both running XFS.

As I cannot delete the broken files, I moved the whole dir away
and did the rsync again. The same file was destroyed again on saturn.
Some days later, another 2 versions were destroyed.

The difference from Linda's case is that I get:
drwx------+ 2 zmi  users     4096 Jun 12 03:15 ./
drwxr-xr-x  7 root root       154 Jun 30 04:00 ../
-rwx------+ 1 zmi  users 56640000 Jun 12 03:05 852c268f-cf1a-11de-b09b-806e6f6e6963.vhd*
??????????? ? ?    ?            ?            ? 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd 

and on dmesg:
[125903.343714] Filesystem "dm-0": corrupt inode 649642 ((a)extents = 5).  Unmount and run xfs_repair.                                                                                                       
[125903.343735] ffff88011e34ca00: 49 4e 81 c0 02 02 00 00 00 00 03 e8 00 00 00 64  IN.............d                                                                                                          
[125903.343756] Filesystem "dm-0": XFS internal error xfs_iformat_extents(1) at line 558 of file /usr/src/packages/BUILD/kernel-desktop-2.6.31.12/linux-2.6.31/fs/xfs/xfs_inode.c.  Caller 0xffffffffa032c0ad
[125903.343763]                                                                                                                                                                                              
[125903.343791] Pid: 17696, comm: ls Not tainted 2.6.31.12-0.2-desktop #1                                                                                                                                    
[125903.343803] Call Trace:                                                                                                                                                                                  
[125903.343821]  [<ffffffff81011a19>] try_stack_unwind+0x189/0x1b0                                                                                                                                           
[125903.343840]  [<ffffffff8101025d>] dump_trace+0xad/0x3a0                                                                                                                                                  
[125903.343858]  [<ffffffff81011524>] show_trace_log_lvl+0x64/0x90                                                                                                                                           
[125903.343876]  [<ffffffff81011573>] show_trace+0x23/0x40                                                                                                                                                   
[125903.343894]  [<ffffffff81552b46>] dump_stack+0x81/0x9e                                                                                                                                                   
[125903.343947]  [<ffffffffa0321b4a>] xfs_error_report+0x5a/0x70 [xfs]                                                                                                                                       
[125903.344085]  [<ffffffffa0321bcc>] xfs_corruption_error+0x6c/0x90 [xfs]                                                                                                                                   
[125903.344248]  [<ffffffffa032bb84>] xfs_iformat_extents+0x234/0x280 [xfs]                                                                                                                                  
[125903.344409]  [<ffffffffa032c0ad>] xfs_iformat+0x28d/0x5a0 [xfs]                                                                                                                                          
[125903.344569]  [<ffffffffa032c542>] xfs_iread+0x182/0x1c0 [xfs]                                                                                                                                            
[125903.344729]  [<ffffffffa0327938>] xfs_iget_cache_miss+0x78/0x250 [xfs]                                                                                                                                   
[125903.344882]  [<ffffffffa0327c3c>] xfs_iget+0x12c/0x1b0 [xfs]                                                                                                                                             
[125903.345036]  [<ffffffffa0347b8e>] xfs_lookup+0xce/0x100 [xfs]                                                                                                                                            
[125903.345256]  [<ffffffffa0354e6c>] xfs_vn_lookup+0x6c/0xc0 [xfs]                                                                                                                                          
[125903.345453]  [<ffffffff81157782>] real_lookup+0x102/0x180                                                                                                                                                
[125903.345473]  [<ffffffff811598c0>] do_lookup+0xd0/0x100                                                                                                                                                   
[125903.345491]  [<ffffffff81159e12>] __link_path_walk+0x522/0x880                                                                                                                                           
[125903.345510]  [<ffffffff8115a6f6>] path_walk+0x66/0xd0                                                                                                                                                    
[125903.345528]  [<ffffffff8115a7cb>] do_path_lookup+0x6b/0xb0                                                                                                                                               
[125903.345546]  [<ffffffff8115a9d1>] user_path_at+0x61/0xc0                                                                                                                                                 
[125903.345565]  [<ffffffff811514d1>] vfs_fstatat+0x41/0x90                                                                                                                                                  
[125903.345584]  [<ffffffff811515ac>] vfs_lstat+0x2c/0x50                                                                                                                                                    
[125903.345602]  [<ffffffff811515fe>] sys_newlstat+0x2e/0x70                                                                                                                                                 
[125903.345621]  [<ffffffff8100c682>] system_call_fastpath+0x16/0x1b                                                                                                                                         
[125903.345645]  [<00007f72dc451e65>] 0x7f72dc451e65

Trying to "xfs_repair -n" seems to find errors, see attachment "repair1.log"
Trying to "xfs_repair" crashes, see attachment "repair2.log"

Saturns kernel is 2.6.31.12-0.2-desktop from openSUSE 11.2, 
xfs_repair is 3.1.2 (I tried down several versions down to 3.0.1, all without success).

Even after xfs_metadump and xfs_mdrestore the error exists, and cannot be 
repaired with xfs_repair, because that crashes.
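
For reference, the round trip I mean is roughly this (the output file names
are just examples):

  xfs_metadump /dev/dm-0 saturn.metadump    # metadata-only dump of the fs
  xfs_mdrestore saturn.metadump saturn.img  # restore into a sparse image file
  xfs_repair -n -f saturn.img               # read-only check of the image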

I've put a new metadump containing only the broken stuff for public review:
http://zmi.at/saturn_bigdata.metadump.only_broken.bz2 (197 MB)

What should I do, apart from ripping out the whole filesystem and copying everything fresh?
The problem is, it would probably just be destroyed again, like it has been 4 times now.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/

[-- Attachment #1.1.2: repair1.log --]
[-- Type: text/x-log, Size: 1919 bytes --]

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
would have corrected attribute entry count in inode 649642 from 40 to 0
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, would clear attr fork
would have cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, would clear attr fork
would have cleared inode 2195133988
would have corrected attribute entry count in inode 2902971474 from 163 to 0
would have corrected attribute totsize in inode 2902971474 from 6 to 4
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, would clear attr fork
would have cleared inode 2195133988
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, would clear attr fork
would have cleared inode 649790
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

[-- Attachment #1.1.3: repair2.log --]
[-- Type: text/x-log, Size: 1597 bytes --]

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.


* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-30  1:52           ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ " Linda A. Walsh
@ 2010-06-30 21:01             ` Stan Hoeppner
  2010-07-07 21:40               ` utf-8' chars from Winxp machine may be problem related (was Re: xfs file system in process of becoming corrupt; though xfs_repair...) Linda A. Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Stan Hoeppner @ 2010-06-30 21:01 UTC (permalink / raw)
  To: xfs

Linda A. Walsh put forth on 6/29/2010 8:52 PM:

> I have another directory "Shakugan no Shana II OST", that I can't cd
> into or ls.
> Just says 'not found'.
> 
> This all happened today.  They were accessible before today.  I'm not
> sure what changed,
> other than new files were added.

It is interesting that all of these "corrupt" files originate from Japan.  The
filenames have likely passed through many different character encodings on
their journey from their source to your XFS filesystems.  And they probably
originated on a MS Windows machine.

How can you be so certain that there isn't a wonky bit somewhere that's
wreaking havoc with BabelMap?  Maybe that program has a bug that, after
Y access cycles, flips a bit, changes a character, or something along these
lines?  Did you update this program recently, or any other programs that might
affect character encoding/displaying, or anything remotely related to such?
Have you done any software updates recently, period?

Given the entirety of what we're looking at, and that you're apparently not
seeing this with files created in a native English language encoding, I'd say
Dave is probably on the right track here.

-- 
Stan


* rsync and corrupt inodes (was xfs_dump problem)
  2010-06-30 18:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Michael Monnerie
@ 2010-06-30 23:30       ` Dave Chinner
  2010-07-01  8:25         ` Michael Monnerie
  2010-07-07 21:56         ` Linda Walsh
  0 siblings, 2 replies; 29+ messages in thread
From: Dave Chinner @ 2010-06-30 23:30 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Wed, Jun 30, 2010 at 08:25:20PM +0200, Michael Monnerie wrote:
> On Mittwoch, 30. Juni 2010 Linda Walsh wrote:
> > But have another XFS problem that is much more reliably persistent.
> > I don't know if they are at all related, but since I have this
> >  problem that's a bit "stuck", it's easier to "reproduce".
>  
> I think my problem is similar. I have a Linux ("orion") running Samba. 
> A Win7 client uses it to store its "Windows Backup". That's OK.
> 
> From another Linux ("saturn"), I do an rsync via an rsync-module, 
> and have already 4 Versions where the ".vhd" file of that Windows Backup 
> is destroyed on "saturn". So the corruption happens when starting 
> rsync @saturn, copying orion->saturn, both having XFS.

Are you running rsync locally on saturn (i.e. pulling data)? If so,
can you get an strace of the rsync of that file so we can see what
the order of operations being done on the file is. If you are
pushing data to saturn, does the problem go away if you pull it (and
vice versa)?
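
Something along these lines would do it (the output file and the
syscall filter are only suggestions; adjust to match how rsync is
actually being invoked):

# strace -f -tt -e trace=file,desc -o /tmp/rsync.strace rsync <usual options> <src> <dst>

-f follows the child processes rsync forks, -tt adds timestamps, and
the -e filter keeps the log down to file and descriptor syscalls.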

> As I cannot delete the broken files, I moved the whole dir away, 
> and did an rsync again. The same file was destroyed again on saturn.
> Some days later, another 2 versions were destroyed.
> 
> The difference to Linda is, I get:
> drwx------+ 2 zmi  users     4096 Jun 12 03:15 ./
> drwxr-xr-x  7 root root       154 Jun 30 04:00 ../
> -rwx------+ 1 zmi  users 56640000 Jun 12 03:05 852c268f-cf1a-11de-b09b-806e6f6e6963.vhd*
> ??????????? ? ?    ?            ?            ? 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd 

On the source machine, can you get a list of the xattrs on the
inode?

> and on dmesg:
> [125903.343714] Filesystem "dm-0": corrupt inode 649642 ((a)extents = 5).  Unmount and run xfs_repair.
> [125903.343735] ffff88011e34ca00: 49 4e 81 c0 02 02 00 00 00 00 03 e8 00 00 00 64  IN.............d
> [125903.343756] Filesystem "dm-0": XFS internal error xfs_iformat_extents(1) at line 558 of file /usr/src/packages/BUILD/kernel-desktop-2.6.31.12/linux-2.6.31/fs/xfs/xfs_inode.c.  Caller 0xffffffffa032c0ad

That seems like a different problem to what linda is seeing
because this is on-disk corruption. can you dump the bad inode via:

# xfs_db -x -r -c "inode 649642" -c p <dev>

> [125903.343791] Pid: 17696, comm: ls Not tainted 2.6.31.12-0.2-desktop #1

That's getting a bit old now.

This kernel does not have any of the swap extent guards we added to
avoid fsr corrupting inodes with attribute forks, and the above
corruption report and the repair output look exactly like I saw when
intentionally corrupting inodes with xfs_fsr.

> Trying to "xfs_repair -n" seems to find errors, see attachment "repair1.log"

Hmmmm - do you run xfs_fsr? The errors reported and the corruption
above are exactly what I'd expect from the swap extent bugs we fixed
a while back....
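
If you're not sure whether it's being run automatically, something
like this (the paths are a guess for a typical openSUSE install)
should show whether a cron job or sysconfig entry invokes it:

# grep -rl xfs_fsr /etc/cron* /etc/sysconfig 2>/dev/null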

> Trying to "xfs_repair" crashes, see attachment "repair2.log"
> 
> Saturns kernel is 2.6.31.12-0.2-desktop from openSUSE 11.2, 
> xfs_repair is 3.1.2 (I tried several versions, down to 3.0.1, all without success).
> 
> Even after xfs_metadump and xfs_mdrestore the error exists, and cannot be 
> repaired with xfs_repair, because that crashes.
> 
> I've put a new metadump containing only the broken stuff for public review:
> http://zmi.at/saturn_bigdata.metadump.only_broken.bz2 (197 MB)

I'll take a look.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: rsync and corrupt inodes (was xfs_dump problem)
  2010-06-30 23:30       ` rsync and corrupt inodes (was xfs_dump problem) Dave Chinner
@ 2010-07-01  8:25         ` Michael Monnerie
  2010-07-02  2:42           ` Dave Chinner
  2010-07-07 21:56         ` Linda Walsh
  1 sibling, 1 reply; 29+ messages in thread
From: Michael Monnerie @ 2010-07-01  8:25 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: Text/Plain, Size: 48931 bytes --]

On Donnerstag, 1. Juli 2010 Dave Chinner wrote:
> > From another Linux ("saturn"), I do an rsync via an rsync-module,
> > and have already 4 Versions where the ".vhd" file of that Windows
> > Backup is destroyed on "saturn". So the corruption happens when
> > starting rsync @saturn, copying orion->saturn, both having XFS.
> 
> Are you running rsync locally on saturn (i.e. pulling data)? If so,
> can you get an strace of the rsync of that file so we can see what
> > the order of operations being done on the file is. If you are
> pushing data to saturn, does the problem go away if you pull it (and
> vice versa)?

Oh dear, I made a mistake. It's a push @orion, doing
rsync -aPvHAXy / saturn::orionbackup/

The problem is: I cannot 100% replicate it. I found the problem once, 
moved the dir with the broken file away and synced again. Again broken. 
Then I reported here. Meanwhile, Windows has done a new backup, that 
file doesn't seem to get broken. But with another fresh Windows backup, 
it came again. I don't know if it depends on the file, it happened 4 
times until now.

I rsynced today 3 times, twice with the openSUSE kernel and once with 
2.6.34, no problem. Sorry (or maybe "lucky me"?).

> > 852c268f-cf1a-11de-b09b-806e6f6e6963.vhd* ??????????? ? ?    ?     
> >       ?            ? 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd
> 
> On the source machine, can you get a list of the xattrs on the
> inode?

How would I do that? "getfattr" on that file returns nothing; does that
mean there is nothing to report? I never do those things, so there
shouldn't be any attributes set.
 
> > and on dmesg:
> > [125903.343714] Filesystem "dm-0": corrupt inode 649642 ((a)extents
> > = 5).  Unmount and run xfs_repair. [125903.343735]
> > ffff88011e34ca00: 49 4e 81 c0 02 02 00 00 00 00 03 e8 00 00 00 64 
> > IN.............d [125903.343756] Filesystem "dm-0": XFS internal
> > error xfs_iformat_extents(1) at line 558 of file
> > /usr/src/packages/BUILD/kernel-desktop-2.6.31.12/linux-2.6.31/fs/xf
> >s/xfs_inode.c.  Caller 0xffffffffa032c0ad
> 
> That seems like a different problem to what linda is seeing
> because this is on-disk corruption. can you dump the bad inode via:
> 
> # xfs_db -x -r -c "inode 649642" -c p <dev>

Uh, that's a long output.

# xfs_db -x -r -c "inode 649642" -c p /dev/swraid0/backup 
core.magic = 0x494e                                                                                                         
core.mode = 0100700                                                                                                         
core.version = 2                                                                                                            
core.format = 2 (extents)                                                                                                   
core.nlinkv2 = 1                                                                                                            
core.onlink = 0                                                                                                             
core.projid = 0                                                                                                             
core.uid = 1000                                                                                                             
core.gid = 100                                                                                                              
core.flushiter = 4                                                                                                          
core.atime.sec = Mon Jun 14 10:53:41 2010                                                                                   
core.atime.nsec = 000000000                                                                                                 
core.mtime.sec = Sat Jun 12 03:15:57 2010                                                                                   
core.mtime.nsec = 000000000                                                                                                 
core.ctime.sec = Mon Jun 14 10:53:41 2010                                                                                   
core.ctime.nsec = 180152802                                                                                                 
core.size = 36569189376                                                                                                     
core.nblocks = 8928025                                                                                                      
core.extsize = 0                                                                                                            
core.nextents = 5                                                                                                           
core.naextents = 0                                                                                                          
core.forkoff = 9                                                                                                            
core.aformat = 1 (local)                                                                                                    
core.dmevmask = 0                                                                                                           
core.dmstate = 0                                                                                                            
core.newrtbm = 0                                                                                                            
core.prealloc = 0                                                                                                           
core.realtime = 0                                                                                                           
core.immutable = 0                                                                                                          
core.append = 0                                                                                                             
core.sync = 0                                                                                                               
core.noatime = 0                                                                                                            
core.nodump = 0                                                                                                             
core.rtinherit = 0                                                                                                          
core.projinherit = 0                                                                                                        
core.nosymlinks = 0                                                                                                         
core.extsz = 0                                                                                                              
core.extszinherit = 0                                                                                                       
core.nodefrag = 0                                                                                                           
core.filestream = 0                                                                                                         
core.gen = 112968465                                                                                                        
next_unlinked = null                                                                                                        
u.bmx[0-4] = [startoff,startblock,blockcount,extentflag] 0:
[0,549849376,2097151,0] 1:[2097151,551946527,2097151,0] 2:
[4194302,554043678,2097151,0] 3:[6291453,556140829,2097151,0] 4:
[8388604,558237980,539421,0]
a.sfattr.hdr.totsize = 4                                                                                                                                                                                         
a.sfattr.hdr.count = 40                                                                                                                                                                                          
a.sfattr.list[0].namelen = 35                                                                                                                                                                                    
a.sfattr.list[0].valuelen = 136                                                                                                                                                                                  
a.sfattr.list[0].root = 1                                                                                                                                                                                        
a.sfattr.list[0].secure = 0                                                                                                                                                                                      
a.sfattr.list[0].name = 
"\035GI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004"                                                                              
a.sfattr.list[0].value = 
"\346\000\a\000\000\000\000\000\004\377\377\377\377\000\006\000\000\000\000\000\020\377\377\377\377\000\000\000\000\000\000\000 
\377\377\377\377\000\000\000\000\000IN\201\377\002\002\000\000\000\000\003\350\000\000\000d\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\002L\025\356\025\000\000\000\000L\022\337\316\000\000\000\000L\025\356\025\024\'\314\214\000\000\000\000\000\000\004\242\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\001\000\000\c\001\000\000\000\000\000\000\000\000\006\273"                                                                                                                                                                    
a.sfattr.list[1].namelen = 195                                                                                                                                                                                                              
a.sfattr.list[1].valuelen = 12                                                                                                                                                                                                              
a.sfattr.list[1].root = 1                                                                                                                                                                                                                   
a.sfattr.list[1].secure = 1                                                                                                                                                                                                                 
a.sfattr.list[1].name = 
"\377\377\377\000\000\000\000\000\000\000\000\000\006\000\000\373\340\000\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000S\001\000\f@\002SGI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004\346\000\a\000\000\000\000\000\004\377\377\377\377\000\a\000\000\000\000\000\020\377\377\377\377\000\a\000\000\000\000\000 
\377\377\377\377\000\a\000\000\000IN\201\377\002\002\000\000\000\000\003\350\000\000\000d\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\002L\025\356\025"                                                                                                                                                                                                               
a.sfattr.list[1].value = "\000\000\000\000L\022\337\316\000\000\000\000"                                                                                                                                                                    
a.sfattr.list[2].namelen = 76                                                                                                                                                                                                               
a.sfattr.list[2].valuelen = 21                                                                                                                                                                                                              
a.sfattr.list[2].root = 1                                                                                                                                                                                                                   
a.sfattr.list[2].secure = 1                                                                                                                                                                                                                 
a.sfattr.list[2].name = 
"\025\024\'\314\214\000\000\000\000\000\000\0046\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\001\000\000\c\001\000\000\000\000\000\000\000\000\006\273\303\f\377\377\377\377\000\000\000\000\000\000\000\000\000\a\000\000u@\000\001\000\000\000\000\000\000\000\000\000\000\000"                                                                                                                                                               
a.sfattr.list[2].value = 
"\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"                                                                                                                             
a.sfattr.list[3].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[3].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[3].root = 0                                                                                                                                                                                                                   
a.sfattr.list[3].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[4].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[4].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[4].root = 0                                                                                                                                                                                                                   
a.sfattr.list[4].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[5].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[5].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[5].root = 0                                                                                                                                                                                                                   
a.sfattr.list[5].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[6].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[6].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[6].root = 0                                                                                                                                                                                                                   
a.sfattr.list[6].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[7].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[7].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[7].root = 0                                                                                                                                                                                                                   
a.sfattr.list[7].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[8].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[8].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[8].root = 0                                                                                                                                                                                                                   
a.sfattr.list[8].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[9].namelen = 0                                                                                                                                                                                                                
a.sfattr.list[9].valuelen = 0                                                                                                                                                                                                               
a.sfattr.list[9].root = 0                                                                                                                                                                                                                   
a.sfattr.list[9].secure = 0                                                                                                                                                                                                                 
a.sfattr.list[10].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[10].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[10].root = 0                                                                                                                                                                                                                  
a.sfattr.list[10].secure = 0                                                                                                                                                                                                                
a.sfattr.list[11].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[11].valuelen = 83                                                                                                                                                                                                             
a.sfattr.list[11].root = 0                                                                                                                                                                                                                  
a.sfattr.list[11].secure = 0                                                                                                                                                                                                                
a.sfattr.list[11].value = 
"\000\f@\002SGI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004\346\000\a\000\000\000\000\000\004\377\377\377\377\000\a\000\000\000\000\000\020\377\377\377\377\000\a\000\000\000\000\000 
\377\377\377\377\000\a\000\000\000IN"                                                                                                                                                                            
a.sfattr.list[12].namelen = 129                                                                                                                                                                                                             
a.sfattr.list[12].valuelen = 255                                                                                                                                                                                                            
a.sfattr.list[12].root = 1                                                                                                                                                                                                                  
a.sfattr.list[12].secure = 0                                                                                                                                                                                                                
a.sfattr.list[12].name = 
"\002\000\000\000\000\003\350\000\000\000d\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\002L\025\356\025\000\000\000\000L\022\337\316\000\000\000\000L\025\356\025\0247\017{\000\000\000\000\000\000$2\000\000\000\000\000\000\000\003\000\000\000\000\000\000\000\001\000\000\c\001\000\000\000\000\000\000\000\000\006\273\303\f\377\377\377\377\000\000\000\000\000\000\000\000\000\001\003\350y\240\000\003\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"                                                                                                                                                                                                    
a.sfattr.list[12].value = 
"\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000S\001\000\f@\002SGI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004\346\000\a\000\000\000\000\000\004\377\377\377\377\000\a\000\000\000\000\000\020\377\377\377\377\000\a\000\000\000\000\000 
\377\377\377\377\000\a\000\000\000IN\201\377\002\002\000\000\000\000\003\350\000\000\000d\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\002L\025\356\025\000\000\000\000L\022\337\316\000\000\000\000L\025\356\025\024FR\242\000\000\000\000\000\000\031\216\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\001\000\000\c\001\000\000\000\000\000\000\000\000\006\273\303\f\377\377\377\377\000\000\000\000\000\000\000\000\000\002\003;
\365\000\000\002\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"                                                                                                                                                                                                                            
a.sfattr.list[13].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[13].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[13].root = 0                                                                                                                                                                                                                  
a.sfattr.list[13].secure = 0                                                                                                                                                                                                                
a.sfattr.list[14].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[14].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[14].root = 0                                                                                                                                                                                                                  
a.sfattr.list[14].secure = 0                                                                                                                                                                                                                
a.sfattr.list[15].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[15].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[15].root = 0                                                                                                                                                                                                                  
a.sfattr.list[15].secure = 0                                                                                                                                                                                                                
a.sfattr.list[16].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[16].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[16].root = 0                                                                                                                                                                                                                  
a.sfattr.list[16].secure = 0                                                                                                                                                                                                                
a.sfattr.list[17].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[17].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[17].root = 0                                                                                                                                                                                                                  
a.sfattr.list[17].secure = 0                                                                                                                                                                                                                
a.sfattr.list[18].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[18].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[18].root = 0                                                                                                                                                                                                                  
a.sfattr.list[18].secure = 0                                                                                                                                                                                                                
a.sfattr.list[19].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[19].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[19].root = 0                                                                                                                                                                                                                  
a.sfattr.list[19].secure = 0                                                                                                                                                                                                                
a.sfattr.list[20].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[20].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[20].root = 0                                                                                                                                                                                                                  
a.sfattr.list[20].secure = 0                                                                                                                                                                                                                
a.sfattr.list[21].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[21].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[21].root = 0                                                                                                                                                                                                                  
a.sfattr.list[21].secure = 0                                                                                                                                                                                                                
a.sfattr.list[22].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[22].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[22].root = 0                                                                                                                                                                                                                  
a.sfattr.list[22].secure = 0                                                                                                                                                                                                                
a.sfattr.list[23].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[23].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[23].root = 0                                                                                                                                                                                                                  
a.sfattr.list[23].secure = 0                                                                                                                                                                                                                
a.sfattr.list[24].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[24].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[24].root = 0                                                                                                                                                                                                                  
a.sfattr.list[24].secure = 0                                                                                                                                                                                                                
a.sfattr.list[25].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[25].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[25].root = 0                                                                                                                                                                                                                  
a.sfattr.list[25].secure = 0                                                                                                                                                                                                                
a.sfattr.list[26].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[26].valuelen = 83                                                                                                                                                                                                             
a.sfattr.list[26].root = 0                                                                                                                                                                                                                  
a.sfattr.list[26].secure = 0                                                                                                                                                                                                                
a.sfattr.list[26].value = 
"\000\f@\002SGI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004\346\000\a\000\000\000\000\000\004\377\377\377\377\000\a\000\000\000\000\000\020\377\377\377\377\000\a\000\000\000\000\000 
\377\377\377\377\000\a\000\000\000IN"                                                                                                                                                                            
a.sfattr.list[27].namelen = 129                                                                                                                                                                                                             
a.sfattr.list[27].valuelen = 255                                                                                                                                                                                                            
a.sfattr.list[27].root = 1                                                                                                                                                                                                                  
a.sfattr.list[27].secure = 0                                                                                                                                                                                                                
a.sfattr.list[27].name = 
"\002\000\000\000\000\003\350\000\000\000d\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\002L\025\356\025\000\000\000\000L\022\337\315\000\000\000\000L\025\356\025\024FR\242\000\000\000\000\000\000\021\234\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\001\000\000\c\001\000\000\000\000\000\000\000\000\006\273\303\f\377\377\377\377\000\000\000\000\000\000\000\000\000\003\003P5\200\000\002\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"                                                                                                                                                                                                 
a.sfattr.list[27].value = 
"\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000S\001\000\f@\002SGI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004\346\000\a\000\000\000\000\000\004\377\377\377\377\000\a\000\000\000\000\000\020\377\377\377\377\000\a\000\000\000\000\000 
\377\377\377\377\000\a\000\000\000\000\000\000\000\000\000\000\000\361\017\000\000\000\000\000\000\020\360s\001\000\000\000\000\310t\251I\300\177\000\000\020\360s\001\000\000\000\000\2000t\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"                                                                                                                                                                                                               
a.sfattr.list[28].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[28].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[28].root = 0                                                                                                                                                                                                                  
a.sfattr.list[28].secure = 0                                                                                                                                                                                                                
a.sfattr.list[29].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[29].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[29].root = 0                                                                                                                                                                                                                  
a.sfattr.list[29].secure = 0                                                                                                                                                                                                                
a.sfattr.list[30].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[30].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[30].root = 0                                                                                                                                                                                                                  
a.sfattr.list[30].secure = 0                                                                                                                                                                                                                
a.sfattr.list[31].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[31].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[31].root = 0                                                                                                                                                                                                                  
a.sfattr.list[31].secure = 0                                                                                                                                                                                                                
a.sfattr.list[32].namelen = 0                                                                                                                                                                                                               
a.sfattr.list[32].valuelen = 0                                                                                                                                                                                                              
a.sfattr.list[32].root = 0                                                                                                                                                                                                                  
a.sfattr.list[32].secure = 0
a.sfattr.list[33].namelen = 0
a.sfattr.list[33].valuelen = 0
a.sfattr.list[33].root = 0
a.sfattr.list[33].secure = 0
a.sfattr.list[34].namelen = 0
a.sfattr.list[34].valuelen = 0
a.sfattr.list[34].root = 0
a.sfattr.list[34].secure = 0
a.sfattr.list[35].namelen = 0
a.sfattr.list[35].valuelen = 0
a.sfattr.list[35].root = 0
a.sfattr.list[35].secure = 0
a.sfattr.list[36].namelen = 0
a.sfattr.list[36].valuelen = 0
a.sfattr.list[36].root = 0
a.sfattr.list[36].secure = 0
a.sfattr.list[37].namelen = 0
a.sfattr.list[37].valuelen = 0
a.sfattr.list[37].root = 0
a.sfattr.list[37].secure = 0
a.sfattr.list[38].namelen = 0
a.sfattr.list[38].valuelen = 0
a.sfattr.list[38].root = 0
a.sfattr.list[38].secure = 0
a.sfattr.list[39].namelen = 0
a.sfattr.list[39].valuelen = 0
a.sfattr.list[39].root = 0
a.sfattr.list[39].secure = 0
 
> > [125903.343791] Pid: 17696, comm: ls Not tainted
> > 2.6.31.12-0.2-desktop #1
> 
> That's getting a bit old now.

It's the most current kernel for openSUSE 11.2, which is the current
release. Well, 11.3 should land on July 15...

> This kernel does not have any of the swap extent guards we added to
> avoid fsr corrupting inodes with attribute forks, and the above
> corruption report and the repair output look exactly like I saw when
> intentionally corrupting inodes with xfs_fsr.
>
> Hmmmm - do you run xfs_fsr? The errors reported and the corruption
> above are exactly what I'd expect from the swap extent bugs we fixed
> a while back....

Yes, xfs_fsr was running. I've disabled it now, and compiled and switched
to kernel 2.6.34. Hope that's OK ;-)
 
> I'll take a look.

Thank you!

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/

[-- Attachment #1.2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-06-30  2:45             ` Linda A. Walsh
@ 2010-07-01 23:58               ` Dave Chinner
  2010-07-07  3:18                 ` Linda A. Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-07-01 23:58 UTC (permalink / raw)
  To: Linda A. Walsh; +Cc: xfs-oss

On Tue, Jun 29, 2010 at 07:45:21PM -0700, Linda A. Walsh wrote:
> To make matters more interesting -- xfsdump can't access a couple of
> files and a directory or two.
> 
> It thinks they are 'stale NFS handles' (I'm not running any NFS file
> systems).
> 
> in @  0.0 kB/s, out @  0.0 kB/s,  0.0 kB total, buffer   0% fullxfsdump: WARNING: unable to open directory: ino 2082342: Stale NFS file handle
> xfsdump: WARNING: unable to open directory: ino 2082343: Stale NFS file handle

xfsdump uses the handle interfaces to open files direct from
bulkstat information, and this is a typical error when bulkstat
returns an inode and it is unlinked before dump opens the handle
created from the bulkstat information.
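
A quick sanity check is whether anything in the filesystem still
refers to those inode numbers; if the files really were unlinked
mid-dump, nothing will be found (the mount point below is just a
placeholder):

# find /mount/point -xdev \( -inum 2082342 -o -inum 2082343 \)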

> in @ 4079 kB/s, out @ 4079 kB/s, 2040 kB total, buffer   0% fullxfsdump: dumping non-directory files
> in @ 68.0 MB/s, out @ 68.0 MB/s, 1209 GB total, buffer   0% fullll
> in @  107 MB/s, out @  105 MB/s, 2200 GB total, buffer   0% fullxfsdump: ending media file
> xfsdump: media file size 2362017678616 bytes
> xfsdump: dump size (non-dir files) : 2361953613176 bytes
> xfsdump: dump complete: 10926 seconds elapsed
> xfsdump: Dump Status: SUCCESS
> 
> Running xfs_db on the file system (finished dumping)
> a block get returns:

Just a reminder - you can't trust xfs_db output on a live mounted
filesystem....
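
If you want output that can be trusted, unmount first and check the
device directly, e.g.:

# umount /the/mountpoint
# xfs_check /dev/<device>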

> dir 1133368 block 0 extra leaf entry 5438b33d 79
> dir 1133368 block 0 extra leaf entry 6624beba 71
> dir 1133368 block 0 extra leaf entry 6d832f88 69
> dir 1133368 block 0 extra leaf entry e6279e2d 80
> dir ino 1133368 missing leaf entry for e627de2d/80
> dir ino 1133368 missing leaf entry for 7624beba/71
> dir ino 1133368 missing leaf entry for 5418b33d/79
> dir ino 1133368 missing leaf entry for 6d832f80/69

I'm not sure why the blockget thinks there are extra
entries in block 0 of the directory, but then says the
entries for the same hash index are missing.

I'd need a metadump of the filesystem to be able to look at it
directly...
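
Roughly (run with the filesystem unmounted or frozen; -o leaves file
names unobfuscated, which probably matters here since the problem
involves the names themselves, but be aware it exposes them):

# xfs_metadump -o /dev/<device> /tmp/fs.metadump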

> xfs_repair -n now shows:
.....
> Phase 6 - check inode connectivity...
>        - traversing filesystem ...
> entry "10 Otome ha DO MY BEST desho¿ (off vocal).flac" (ino 2359102) in dir 1133368 is a duplicate name, would junk entry
> entry "06 Otome ha DO MY BEST desho¿ Otome ver..flac" (ino 2359100) in dir 1133368 is a duplicate name, would junk entry
> entry "05 Otome ha DO MY BEST desho¿ Hime ver..flac" (ino 2359099) in dir 1133368 is a duplicate name, would junk entry
> entry "04 Otome ha DO MY BEST desho¿ 2007ver..flac" (ino 2359086) in dir 1133368 is a duplicate name, would junk entry
....

Every single filename has some special character in it. Of course,
my question is why there are two directory entries with the same name.
Was the file created twice? How did these files get created? If you
just copy them, does the destination directory end up corrupted?
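
One way to answer that last question (the paths here are made up)
would be to copy the suspect directory onto a scratch XFS filesystem
and then check it offline:

# cp -a "<suspect directory>" /scratch/
# umount /scratch
# xfs_repair -n /dev/<scratch device>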

> It would appear that 2.6.34 might have some problems in it?

I don't think we changed anything at all directory-related in
XFS in 2.6.34, so I'm a little perplexed as to why this is suddenly
all happening. Did these problems only show up when you updated to
2.6.34, or can you reproduce them on an older kernel?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: rsync and corrupt inodes (was xfs_dump problem)
  2010-07-01  8:25         ` Michael Monnerie
@ 2010-07-02  2:42           ` Dave Chinner
  2010-07-02  6:21             ` Michael Monnerie
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-07-02  2:42 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Thu, Jul 01, 2010 at 10:25:03AM +0200, Michael Monnerie wrote:
> On Thursday, 1 July 2010 Dave Chinner wrote:
> > > From another Linux ("saturn"), I do an rsync via an rsync-module,
> > > and have already 4 Versions where the ".vhd" file of that Windows
> > > Backup is destroyed on "saturn". So the corruption happens when
> > > starting rsync @saturn, copying orion->saturn, both having XFS.
> > 
> > Are you running rsync locally on saturn (i.e. pulling data)? If so,
> > can you get an strace of the rsync of that file so we can see what
> > the order or operations being done on the file is. If you are
> > pushing data to saturn, does the problem go away if you pull it (and
> > vice versa)?
> 
> Oh dear, I made a mistake. It's a push @orion, doing
> rsync -aPvHAXy / saturn::orionbackup/
> 
> The problem is: I cannot replicate it 100%. I found the problem once,
> moved the dir with the broken file away and synced again. Again broken.
> Then I reported here. Meanwhile, Windows has done a new backup, and that
> file doesn't seem to get broken. But with another fresh Windows backup,
> it came back. I don't know if it depends on the file; it has happened 4
> times so far.

So it's the rsync daemon on saturn that is doing all the IO?

> I rsynced today 3 times, twice with the openSUSE kernel and once with 
> 2.6.34, no problem. Sorry (or maybe "lucky me"?).
> 
> > > 852c268f-cf1a-11de-b09b-806e6f6e6963.vhd* ??????????? ? ?    ?     
> > >       ?            ? 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd
> > 
> > On the source machine, can you get a list of the xattrs on the
> > inode?
>
> How would I do that? "getfattr" on that file gives no return, does that 
> mean it doesn't have anything to say? I never use such things, so there 
> shouldn't be any attributes set.

"getfattr -d"

> > > and on dmesg:
> > > [125903.343714] Filesystem "dm-0": corrupt inode 649642 ((a)extents
> > > = 5).  Unmount and run xfs_repair. [125903.343735]
> > > ffff88011e34ca00: 49 4e 81 c0 02 02 00 00 00 00 03 e8 00 00 00 64 
> > > IN.............d [125903.343756] Filesystem "dm-0": XFS internal
> > > error xfs_iformat_extents(1) at line 558 of file
> > > /usr/src/packages/BUILD/kernel-desktop-2.6.31.12/linux-2.6.31/fs/xfs/xfs_inode.c.  Caller 0xffffffffa032c0ad
> > 
> > That seems like a different problem to what linda is seeing
> > because this is on-disk corruption. can you dump the bad inode via:
> > 
> > # xfs_db -x -r -c "inode 649642" -c p <dev>
> 
> Uh, that's a long output.
> 
> # xfs_db -x -r -c "inode 649642" -c p /dev/swraid0/backup 
.....
> u.bmx[0-4] = [startoff,startblock,blockcount,extentflag] 0:
> [0,549849376,2097151,0] 1:[2097151,551946527,2097151,0] 2:
> [4194302,554043678,2097151,0] 3:[6291453,556140829,2097151,0] 4:
> [8388604,558237980,539421,0]
> a.sfattr.hdr.totsize = 4
> a.sfattr.hdr.count = 40
> a.sfattr.list[0].namelen = 35
> a.sfattr.list[0].valuelen = 136
> a.sfattr.list[0].root = 1
> a.sfattr.list[0].secure = 0
> a.sfattr.list[0].name =
> "\035GI_ACL_FILE\000\000\000\005\000\000\000\001\377\377\377\377\000\a\000\000\000\000\000\002\000\000\004"
> a.sfattr.list[0].value = 
> "\346\000\a\000\000\000\000\000\004\377\377\377\377\000\006\000\000\000\000\000\020\377\377\377\377\000\000\000\000\000\000\000
> \377\377\377\377\000\000\000\000\000IN\201\377\002\002\000\000\000\000\003\350\000\000\000d\000\000\000\001\000\000\000\000\000\000\000\000\000\000\000\002L\025\356\025\000\000\000\000L\022\337\316\000\000\000\000L\025\356\025\024\'\314\214\000\000\000\000\000\000\004\242\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\001\000\000\c\001\000\000\000\000\000\000\000\000\006\273"

From the metadump, I can see that other valid .vhd files are in
local format with:

core.forkoff = 9
a.sfattr.hdr.totsize = 83
a.sfattr.hdr.count = 1
a.sfattr.list[0].namelen = 12
a.sfattr.list[0].valuelen = 64
a.sfattr.list[0].root = 1
a.sfattr.list[0].secure = 0
a.sfattr.list[0].name = "SGI_ACL_FILE"
a.sfattr.list[0].value = <snipped>


All the broken inodes are in the same format as the valid .vhd files,
but the shortform attribute header is completely toast. Once I correct the
header and the lengths, the only thing that looks wrong is:

xfs_db> p a.sfattr.list[0].name
a.sfattr.list[0].name = "\035GI_ACL_FILE"

The first character of the name is bad, everything after that -
including the attribute value - is identical to that on other
inodes.  What this implies is that we've overwritten the start of
the attribute fork with something, and that looks exactly like the
swap extents problems that we've fixed recently....
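
For reference, the shortform attribute layout those a.sfattr fields are
decoded against looks roughly like this (paraphrased from memory of
fs/xfs/xfs_attr_sf.h of that era, not a verbatim copy; plain stdint types
instead of the kernel's __be16/__u8 so the snippet stands alone):

#include <stdint.h>

/* One header, then 'count' packed entries, each carrying its name
 * bytes immediately followed by its value bytes. */
struct xfs_attr_sf_hdr {
	uint16_t	totsize;	/* big-endian on disk: bytes used in the fork */
	uint8_t		count;		/* number of entries that follow */
};					/* pads out to 4 bytes */

struct xfs_attr_sf_entry {
	uint8_t		namelen;	/* length of the attribute name */
	uint8_t		valuelen;	/* length of the attribute value */
	uint8_t		flags;		/* ROOT/SECURE namespace bits */
	uint8_t		nameval[];	/* namelen name bytes, then valuelen value bytes */
};

That lines up with the good .vhd inode above: 4 (header) + 3 (entry fixed
fields) + 12 ("SGI_ACL_FILE") + 64 (value) = 83 = totsize, with count = 1.
A header claiming totsize = 4 and count = 40 cannot describe even one
valid entry, which is why everything decoded after it comes out as garbage
until the header and lengths are corrected.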

> > Hmmmm - do you run xfs_fsr? The errors reported and the corruption
> > above are exactly what I'd expect from the swap extent bugs we fixed
> > a while back....
> 
> Yes, xfs_fsr was running. Disabled it now, and compiled and changed to 
> kernel 2.6.34 now. Hope that's OK ;-)

Ok, so we have identified a potential cause. Either disabling fsr or
upgrading to 2.6.34 should be sufficient to avoid the problem. If no
problems show up now that you are on 2.6.34, then I'd switch fsr back on
and see if they show up again...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: rsync and corrupt inodes (was xfs_dump problem)
  2010-07-02  2:42           ` Dave Chinner
@ 2010-07-02  6:21             ` Michael Monnerie
  2010-07-04 22:53               ` Dave Chinner
  0 siblings, 1 reply; 29+ messages in thread
From: Michael Monnerie @ 2010-07-02  6:21 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: Text/Plain, Size: 2091 bytes --]

On Friday, 2 July 2010 Dave Chinner wrote:
> So it's the rsync daemon on saturn that is doing all the IO?

Yes.
 
> > I rsynced today 3 times, twice with the openSUSE kernel and once
> > with 2.6.34, no problem. Sorry (or maybe "lucky me"?).
> >
> > > > 852c268f-cf1a-11de-b09b-806e6f6e6963.vhd* ??????????? ? ?    ?
> > > >       ?            ? 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd
> > >
> > > On the source machine, can you get a list of the xattrs on the
> > > inode?
> >
> > How would I do that? "getfattr" on that file gives no return, does
> > that mean it doesn't have anything to say? I never use such things,
> > so there shouldn't be any attributes set.
> 
> "getfattr -d"

Sorry, doesn't work:

# getfattr -d 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd
getfattr: 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd: Structure needs 
cleaning

> The first character of the name is bad, everything after that -
> including the attribute value - is identical to that on other
> inodes.  What this implies is that we've overwritten the start of
> the attribute fork with something, and that looks exactly like the
> swap extents problems that we've fixed recently....
> >
> > > Yes, xfs_fsr was running. Disabled it now, and compiled and
> > changed to kernel 2.6.34 now. Hope that's OK ;-)
> 
> Ok, so we have identified a potential cause. Either disabling fsr or
> upgrading to 2.6.34 should be sufficient to avoid the problem. If no
> problems show up now that you are on 2.6.34, then I'd switch fsr back on
> and see if they show up again...

So far, so good. I'm on 2.6.34 now. Is there any chance for a fixed 
version of xfs_repair, so that I can either get rid of the 4 broken 
files (i.e. delete them), or repair the filesystem? ATM, xfs_repair 
asserts on this filesystem.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/

[-- Attachment #1.2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: rsync and corrupt inodes (was xfs_dump problem)
  2010-07-02  6:21             ` Michael Monnerie
@ 2010-07-04 22:53               ` Dave Chinner
  2010-07-12 11:28                 ` Michael Monnerie
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-07-04 22:53 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Fri, Jul 02, 2010 at 08:21:51AM +0200, Michael Monnerie wrote:
> On Friday, 2 July 2010 Dave Chinner wrote:
> > So it's the rsync daemon on saturn that is doing all the IO?
> 
> Yes.
>  
> > > I rsynced today 3 times, twice with the openSUSE kernel and once
> > > with 2.6.34, no problem. Sorry (or maybe "lucky me"?).
> > >
> > > > > 852c268f-cf1a-11de-b09b-806e6f6e6963.vhd* ??????????? ? ?    ?
> > > > >       ?            ? 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd
> > > >
> > > > On the source machine, can you get a list of the xattrs on the
> > > > inode?
> > >
> > > How would I do that? "getfattr" on that file gives no return, does
> > > that mean it doesn't have anything to say? I never use such things,
> > > so there shouldn't be any attributes set.
> > 
> > "getfattr -d"
> 
> Sorry, doesn't work:
> 
> # getfattr -d 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd
> getfattr: 852c2690-cf1a-11de-b09b-806e6f6e6963.vhd: Structure needs 
> cleaning

I meant run it on an uncorrupted version of the file, but I don't
think that information is needed now...

> > The first character of the name is bad, everything after that -
> > including the attribute value - is identical to that on other
> > inodes.  What this implies is that we've overwritten the start of
> > the attribute fork with something, and that looks exactly like the
> > swap extents problems that we've fixed recently....
> > >
> > > Yes, xfs_fsr was running. Disabled it now, and compiled and
> > > changed to kernel 2.6.34 now. Hope that's OK ;-)
> > 
> > Ok, so we have identified a potential cause. Either disabling fsr or
> > upgrading to 2.6.34 should be sufficient to avoid the problem. If no
> > problems show up now that you are on 2.6.34, then I'd switch fsr back on
> > and see if they show up again...
> 
> So far, so good. I'm on 2.6.34 now. Is there any chance for a fixed 
> version of xfs_repair, so that I can either get rid of the 4 broken 
> files (i.e. delete them), or repair the filesystem? ATM, xfs_repair 
> asserts on this filesystem.

What version of xfs_repair? v3.1.2 does not assert fail here on the
metadump image you posted, but it does take 3 runs to fix up all the
problems with the busted inodes....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-07-01 23:58               ` Dave Chinner
@ 2010-07-07  3:18                 ` Linda A. Walsh
  2010-07-07  5:56                   ` Linda Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-07-07  3:18 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs-oss



Dave Chinner wrote:
> I don't think we changed anything at all directory related in
> XFS in 2.6.34 so I'm a little perplexed as to why this is suddenly
> all happening. Did these problems only show up when you updated to
> 2.6.34, or can you reproduce them on an older kernel?
----
	I've not had a chance to check this.
	But have been running 2.6.34 for about 3+ weeks.

	Have had a lot of changes on my system, including adding
disks -- resulting in A LOT of disk-to-disk copies that I've mostly
done with an "xfsdump|mbuffer|xfsrestore" pipe, though I've started
using 'star' (handles acls & ext_attrs) with fewer errors than the
pipe.

	However -- the biggie that was giving me the most problems is
a disk with lots of 'foreign' names on it -- many of which have been
getting stored via 'samba' from WinXP clients that don't speak
very good UTF-8 (if at all).  With WinXP, I had files that were
undeletable on Windows (explorer or shell) that I *could* delete
from linux, because linux spoke UTF-8.

	Well, now I have some similar files on linux -- they might
have been stored in a weird character set -- neither ASCII nor UTF-8
works to read them.  They don't produce read errors, but nothing can
read them.

	While it is "bothersome" that I can't read them, I would have
expected that xfsdump/restore would have used character-set 'irrelevant'
calls to dump/restore files just to circumvent these problems.  

	At this point, I haven't yet stumbled onto how to address those
files in linux in order to delete or rename them to a UTF-8 normalized
form.

	At this point, I'm just guessing that this is the problem, but
it's my best guess so far...

Any ideas would be appreciated...will keep you posted.
-linda

p.s. (on top of everything, I had file system problems that caused last week's email
to evaporate!  So anything that was sent to me was lost, though I could read
through the archives to find things that were sent to the list -- not the best use
of time, though, I'm afraid...).




_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-07-07  3:18                 ` Linda A. Walsh
@ 2010-07-07  5:56                   ` Linda Walsh
  2010-07-07  6:36                     ` Dave Chinner
  0 siblings, 1 reply; 29+ messages in thread
From: Linda Walsh @ 2010-07-07  5:56 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs-oss

For what it's worth, my 'undeletable' files give every appearance of
being 'plain' UTF-8 encoded file names.  I wrote a perl script that looped
through and used the 'bytes' pragma to look at the raw bytes.  They look like
valid UTF-8.  I also had it try to remove the files.  That didn't work.
(A rough C equivalent of what the script does is sketched after the output below.)



Read 9 files
 1: 2e                                                >|.               |<
 2: 2e 2e                                             >|..              |<
 3: 42 6f 72 64 75 72 65 20  70 6f 69 6e 74 69 6c 6c  >|Bordure pointill|<
    c3 a9 65 20 2d 20 42 6c  61 6e 63 2d 6e 6f 69 72  >|..e - Blanc-noir|<

Unlink of "Bordure pointillée - Blanc-noir" failed: No such file or directory

 4: 42 6f 72 64 75 72 65 20  70 6f 69 6e 74 69 6c 6c  >|Bordure pointill|<
    c3 a9 65 20 2d 20 4e 6f  69 72 2d 62 6c 61 6e 63  >|..e - Noir-blanc|<

Unlink of "Bordure pointillée - Noir-blanc" failed: No such file or directory

 5: 43 61 64 72 65 20 63 65  6e 74 72 c3 a9 20 31 20  >|Cadre centr.. 1 |<
    2d 20 42 61 73 65                                 >|- Base          |<

Unlink of "Cadre centré 1 - Base" failed: No such file or directory
....rest of output truncated.
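
A rough C equivalent of what the script does -- just a sketch for anyone
who wants to reproduce the check, not the perl that produced the output
above -- would be:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <dirent.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	DIR *d;
	struct dirent *de;
	int n = 0;

	/* takes a single directory as its argument and runs inside it */
	if (argc != 2) {
		fprintf(stderr, "usage: %s <directory>\n", argv[0]);
		return 1;
	}
	if (chdir(argv[1]) < 0 || (d = opendir(".")) == NULL) {
		perror(argv[1]);
		return 1;
	}

	while ((de = readdir(d)) != NULL) {
		unsigned char *p = (unsigned char *)de->d_name;
		size_t i, len = strlen(de->d_name);

		/* dump the raw bytes of the name exactly as readdir returned them */
		printf("%2d:", ++n);
		for (i = 0; i < len; i++)
			printf(" %02x", p[i]);
		printf("  >|%s|<\n", de->d_name);

		if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
			continue;
		/* hand unlink() exactly those bytes -- no shell or locale in between */
		if (unlink(de->d_name) < 0)
			printf("Unlink of \"%s\" failed: %s\n",
			       de->d_name, strerror(errno));
	}
	closedir(d);
	return 0;
}

The point of either version is that unlink() gets exactly the bytes
readdir() handed back, so a "No such file or directory" failure can't be
blamed on charset mangling in the shell.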

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-07-07  5:56                   ` Linda Walsh
@ 2010-07-07  6:36                     ` Dave Chinner
  2010-07-07  9:30                       ` Linda A. Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-07-07  6:36 UTC (permalink / raw)
  To: Linda Walsh; +Cc: xfs-oss

On Tue, Jul 06, 2010 at 10:56:34PM -0700, Linda Walsh wrote:
> For what it's worth, my 'undeletable' files give every appearance of
> being 'plain' UTF-8 encoded file names.  I wrote a perl script that looped
> through and used the 'bytes' pragma to look at the raw bytes.  They look like
> valid UTF-8.  I also had it try to remove the files.  That didn't work.

Can you send me a link to a metadump image of the filesystem so I can
look at the broken directories in detail?

Cheers.

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-07-07  6:36                     ` Dave Chinner
@ 2010-07-07  9:30                       ` Linda A. Walsh
  2010-07-07 21:01                         ` Linda Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-07-07  9:30 UTC (permalink / raw)
  To: Dave Chinner, xfs-oss


Dave Chinner wrote:
> Can you send me a link to a metadump image of the filesystem so I can
> look at the broken directories in detail?


----
	I'll have to work on that -- right now I'm taking a copy and trying to
delete everything off of it, so the meta copy hopefully will have just the
stuff that won't delete... (at least that's the theory)...

BTW -- reviewing log files...there's some stuff there that corresponds to not
being able to read files and metadata, though I don't know the exact runtime messages
I was getting at the time:


Jul  6 08:15:54 Ishtar kernel: [33919.157874] xfsdump: page allocation failure. order:4, mode:0x40d0
Jul  6 08:17:12 Ishtar kernel: [33997.243321] xfsdump: page allocation failure. order:4, mode:0x40d0
Jul  6 08:17:51 Ishtar kernel: [34036.405012] xfsdump: page allocation failure. order:4, mode:0x40d0
....
....
Jul  6 08:17:12 Ishtar kernel: [33997.243326] Pid: 5047, comm: xfsdump Not tainted 2.6.34T610-Vanilla #1
Jul  6 08:17:12 Ishtar kernel: [33997.243329] Call Trace:
Jul  6 08:17:12 Ishtar kernel: [33997.243338]  [<ffffffff810c24d6>] __alloc_pages_nodemask+0x5b6/0x6a0
Jul  6 08:17:12 Ishtar kernel: [33997.243343]  [<ffffffff810c25d2>] __get_free_pages+0x12/0x50
Jul  6 08:17:12 Ishtar kernel: [33997.243349]  [<ffffffff810f26d9>] __kmalloc+0x189/0x1e0
Jul  6 08:17:12 Ishtar kernel: [33997.243353]  [<ffffffff8125b82e>] xfs_attrlist_by_handle+0xbe/0x130
Jul  6 08:17:12 Ishtar kernel: [33997.243358]  [<ffffffff8125c6c9>] xfs_file_ioctl+0x5f9/0x980
Jul  6 08:17:12 Ishtar kernel: [33997.243364]  [<ffffffff8158f4e1>] ? down_read+0x11/0x30
Jul  6 08:17:12 Ishtar kernel: [33997.243370]  [<ffffffff8123276d>] ? xfs_iunlock+0xdd/0x130
Jul  6 08:17:12 Ishtar kernel: [33997.243374]  [<ffffffff81250d05>] ? xfs_free_eofblocks+0xd5/0x260
Jul  6 08:17:12 Ishtar kernel: [33997.243379]  [<ffffffff81107ca8>] vfs_ioctl+0x38/0xd0
Jul  6 08:17:12 Ishtar kernel: [33997.243382]  [<ffffffff81107e7a>] do_vfs_ioctl+0x8a/0x5a0
Jul  6 08:17:12 Ishtar kernel: [33997.243387]  [<ffffffff810fa40c>] ? __fput+0x19c/0x210
Jul  6 08:17:12 Ishtar kernel: [33997.243391]  [<ffffffff81108411>] sys_ioctl+0x81/0xa0
Jul  6 08:17:12 Ishtar kernel: [33997.243396]  [<ffffffff81002deb>] system_call_fastpath+0x16/0x1b
Jul  6 08:17:12 Ishtar kernel: [33997.243398] Mem-Info:
Jul  6 08:17:12 Ishtar kernel: [33997.243400] DMA per-cpu:
Jul  6 08:17:12 Ishtar kernel: [33997.243402] CPU    0: hi:    0, btch:   1 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243405] CPU    1: hi:    0, btch:   1 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243407] CPU    2: hi:    0, btch:   1 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243409] CPU    3: hi:    0, btch:   1 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243411] DMA32 per-cpu:
Jul  6 08:17:12 Ishtar kernel: [33997.243413] CPU    0: hi:  186, btch:  31 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243416] CPU    1: hi:  186, btch:  31 usd:  10
Jul  6 08:17:12 Ishtar kernel: [33997.243418] CPU    2: hi:  186, btch:  31 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243425] CPU    3: hi:  186, btch:  31 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243427] Normal per-cpu:
Jul  6 08:17:12 Ishtar kernel: [33997.243431] CPU    0: hi:  186, btch:  31 usd:  12
Jul  6 08:17:12 Ishtar kernel: [33997.243435] CPU    1: hi:  186, btch:  31 usd:  36
Jul  6 08:17:12 Ishtar kernel: [33997.243439] CPU    2: hi:  186, btch:  31 usd:  15
Jul  6 08:17:12 Ishtar kernel: [33997.243443] CPU    3: hi:  186, btch:  31 usd:   0
Jul  6 08:17:12 Ishtar kernel: [33997.243451] active_anon:39844 inactive_anon:31818 isolated_anon:0
Jul  6 08:17:12 Ishtar kernel: [33997.243454]  active_file:854853 inactive_file:4597578 isolated_file:44
Jul  6 08:17:12 Ishtar kernel: [33997.243457]  unevictable:262348 dirty:348876 writeback:2 unstable:0
Jul  6 08:17:12 Ishtar kernel: [33997.243460]  free:59942 slab_reclaimable:294751 slab_unreclaimable:26479
Jul  6 08:17:12 Ishtar kernel: [33997.243463]  mapped:18561 shmem:6 pagetables:2966 bounce:0
Jul  6 08:17:12 Ishtar kernel: [33997.243473] DMA free:15868kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15772kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul  6 08:17:12 Ishtar kernel: [33997.243491] lowmem_reserve[]: 0 2990 24200 24200
Jul  6 08:17:12 Ishtar kernel: [33997.243504] DMA32 free:199276kB min:2456kB low:3068kB high:3684kB active_anon:48kB inactive_anon:2672kB active_file:195724kB inactive_file:1726428kB unevictable:55768kB isolated(anon):0kB isolated(file):0kB present:3062596kB mlocked:55768kB dirty:107612kB writeback:0kB mapped:6352kB shmem:0kB slab_reclaimable:509064kB slab_unreclaimable:22184kB kernel_stack:64kB pagetables:444kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul  6 08:17:12 Ishtar kernel: [33997.243524] lowmem_reserve[]: 0 0 21210 21210
Jul  6 08:17:12 Ishtar kernel: [33997.243537] Normal free:24624kB min:17440kB low:21800kB high:26160kB active_anon:159328kB inactive_anon:124600kB active_file:3223688kB inactive_file:16663884kB unevictable:993624kB isolated(anon):0kB isolated(file):176kB present:21719040kB mlocked:993624kB dirty:1287892kB writeback:8kB mapped:67892kB shmem:24kB slab_reclaimable:669940kB slab_unreclaimable:83732kB kernel_stack:3320kB pagetables:11420kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul  6 08:17:12 Ishtar kernel: [33997.243557] lowmem_reserve[]: 0 0 0 0
Jul  6 08:17:12 Ishtar kernel: [33997.243565] DMA: 1*4kB 1*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15868kB
Jul  6 08:17:12 Ishtar kernel: [33997.243583] DMA32: 26095*4kB 10064*8kB 798*16kB 52*32kB 3*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 199516kB
Jul  6 08:17:12 Ishtar kernel: [33997.243602] Normal: 5486*4kB 0*8kB 0*16kB 4*32kB 2*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 24376kB
Jul  6 08:17:12 Ishtar kernel: [33997.243621] 5462983 total pagecache pages
Jul  6 08:17:12 Ishtar kernel: [33997.243625] 10294 pages in swap cache
Jul  6 08:17:12 Ishtar kernel: [33997.243629] Swap cache stats: add 455795, delete 445501, find 338025/357649
Jul  6 08:17:12 Ishtar kernel: [33997.243633] Free swap  = 8151268kB
Jul  6 08:17:12 Ishtar kernel: [33997.243637] Total swap = 8393924kB
Jul  6 08:17:12 Ishtar kernel: [33997.345120] 6291455 pages RAM
Jul  6 08:17:12 Ishtar kernel: [33997.345122] 104293 pages reserved
Jul  6 08:17:12 Ishtar kernel: [33997.345124] 3464575 pages shared
Jul  6 08:17:12 Ishtar kernel: [33997.345125] 2682993 pages non-shared
Jul  6 08:17:51 Ishtar kernel: [34036.405017] Pid: 5047, comm: xfsdump Not tainted 2.6.34T610-Vanilla #1
Jul  6 08:17:51 Ishtar kernel: [34036.405020] Call Trace:
Jul  6 08:17:51 Ishtar kernel: [34036.405029]  [<ffffffff810c24d6>] __alloc_pages_nodemask+0x5b6/0x6a0
Jul  6 08:17:51 Ishtar kernel: [34036.405034]  [<ffffffff810c25d2>] __get_free_pages+0x12/0x50
Jul  6 08:17:51 Ishtar kernel: [34036.405040]  [<ffffffff810f26d9>] __kmalloc+0x189/0x1e0
Jul  6 08:17:51 Ishtar kernel: [34036.405044]  [<ffffffff8125b82e>] xfs_attrlist_by_handle+0xbe/0x130
Jul  6 08:17:51 Ishtar kernel: [34036.405048]  [<ffffffff8125c6c9>] xfs_file_ioctl+0x5f9/0x980
Jul  6 08:17:51 Ishtar kernel: [34036.405055]  [<ffffffff8158f4e1>] ? down_read+0x11/0x30
Jul  6 08:17:51 Ishtar kernel: [34036.405060]  [<ffffffff8123276d>] ? xfs_iunlock+0xdd/0x130
Jul  6 08:17:51 Ishtar kernel: [34036.405065]  [<ffffffff81250d05>] ? xfs_free_eofblocks+0xd5/0x260
Jul  6 08:17:51 Ishtar kernel: [34036.405070]  [<ffffffff81107ca8>] vfs_ioctl+0x38/0xd0
Jul  6 08:17:51 Ishtar kernel: [34036.405073]  [<ffffffff81107e7a>] do_vfs_ioctl+0x8a/0x5a0
Jul  6 08:17:51 Ishtar kernel: [34036.405084]  [<ffffffff810fa40c>] ? __fput+0x19c/0x210
Jul  6 08:17:51 Ishtar kernel: [34036.405087]  [<ffffffff81108411>] sys_ioctl+0x81/0xa0
Jul  6 08:17:51 Ishtar kernel: [34036.405093]  [<ffffffff81002deb>] system_call_fastpath+0x16/0x1b
Jul  6 08:17:51 Ishtar kernel: [34036.405095] Mem-Info:
Jul  6 08:17:51 Ishtar kernel: [34036.405097] DMA per-cpu:
Jul  6 08:17:51 Ishtar kernel: [34036.405099] CPU    0: hi:    0, btch:   1 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405101] CPU    1: hi:    0, btch:   1 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405104] CPU    2: hi:    0, btch:   1 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405106] CPU    3: hi:    0, btch:   1 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405108] DMA32 per-cpu:
Jul  6 08:17:51 Ishtar kernel: [34036.405110] CPU    0: hi:  186, btch:  31 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405112] CPU    1: hi:  186, btch:  31 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405114] CPU    2: hi:  186, btch:  31 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405117] CPU    3: hi:  186, btch:  31 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405118] Normal per-cpu:
Jul  6 08:17:51 Ishtar kernel: [34036.405120] CPU    0: hi:  186, btch:  31 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405123] CPU    1: hi:  186, btch:  31 usd:   0
Jul  6 08:17:51 Ishtar kernel: [34036.405125] CPU    2: hi:  186, btch:  31 usd:  30
Jul  6 08:17:51 Ishtar kernel: [34036.405127] CPU    3: hi:  186, btch:  31 usd:  15
Jul  6 08:17:51 Ishtar kernel: [34036.405133] active_anon:39844 inactive_anon:31812 isolated_anon:0
Jul  6 08:17:51 Ishtar kernel: [34036.405134]  active_file:865726 inactive_file:4534161 isolated_file:35
Jul  6 08:17:51 Ishtar kernel: [34036.405135]  unevictable:262348 dirty:195473 writeback:0 unstable:0
Jul  6 08:17:51 Ishtar kernel: [34036.405136]  free:57638 slab_reclaimable:344326 slab_unreclaimable:32158
Jul  6 08:17:51 Ishtar kernel: [34036.405137]  mapped:26657 shmem:6 pagetables:2971 bounce:0
Jul  6 08:17:51 Ishtar kernel: [34036.405144] DMA free:15868kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15772kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul  6 08:17:51 Ishtar kernel: [34036.405150] lowmem_reserve[]: 0 2990 24200 24200
Jul  6 08:17:51 Ishtar kernel: [34036.405159] DMA32 free:179088kB min:2456kB low:3068kB high:3684kB active_anon:48kB inactive_anon:2648kB active_file:196548kB inactive_file:1548864kB unevictable:55768kB isolated(anon):0kB isolated(file):0kB present:3062596kB mlocked:55768kB dirty:52076kB writeback:0kB mapped:10560kB shmem:0kB slab_reclaimable:701508kB slab_unreclaimable:27664kB kernel_stack:64kB pagetables:444kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul  6 08:17:51 Ishtar kernel: [34036.405166] lowmem_reserve[]: 0 0 21210 21210
Jul  6 08:17:51 Ishtar kernel: [34036.405175] Normal free:35596kB min:17440kB low:21800kB high:26160kB active_anon:159328kB inactive_anon:124600kB active_file:3266356kB inactive_file:16587780kB unevictable:993624kB isolated(anon):0kB isolated(file):140kB present:21719040kB mlocked:993624kB dirty:729816kB writeback:0kB mapped:96068kB shmem:24kB slab_reclaimable:675796kB slab_unreclaimable:100968kB kernel_stack:3320kB pagetables:11440kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul  6 08:17:51 Ishtar kernel: [34036.405181] lowmem_reserve[]: 0 0 0 0
Jul  6 08:17:51 Ishtar kernel: [34036.405185] DMA: 1*4kB 1*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15868kB
Jul  6 08:17:51 Ishtar kernel: [34036.405194] DMA32: 26498*4kB 8649*8kB 150*16kB 41*32kB 3*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 179088kB
Jul  6 08:17:51 Ishtar kernel: [34036.405204] Normal: 7071*4kB 557*8kB 20*16kB 2*32kB 2*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 35428kB
Jul  6 08:17:51 Ishtar kernel: [34036.405214] 5410107 total pagecache pages
Jul  6 08:17:51 Ishtar kernel: [34036.405216] 10290 pages in swap cache
Jul  6 08:17:51 Ishtar kernel: [34036.405218] Swap cache stats: add 455867, delete 445577, find 338111/357759
Jul  6 08:17:51 Ishtar kernel: [34036.405221] Free swap  = 8151272kB
Jul  6 08:17:51 Ishtar kernel: [34036.405222] Total swap = 8393924kB
Jul  6 08:17:52 Ishtar kernel: [34036.530592] 6291455 pages RAM
Jul  6 08:17:52 Ishtar kernel: [34036.530595] 104293 pages reserved
Jul  6 08:17:52 Ishtar kernel: [34036.530597] 3301623 pages shared
Jul  6 08:17:52 Ishtar kernel: [34036.530598] 2859224 pages non-shared

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...)
  2010-07-07  9:30                       ` Linda A. Walsh
@ 2010-07-07 21:01                         ` Linda Walsh
  0 siblings, 0 replies; 29+ messages in thread
From: Linda Walsh @ 2010-07-07 21:01 UTC (permalink / raw)
  To: Dave Chinner, xfs-oss

> Dave Chinner wrote:
>> Can you send me a link to a metadump image of the filesystem so I can
>> look at the broken directories in detail?
-----
I don't have a link -- not sure where I'd put it up for download.
Would there be a place you'd want it uploaded?  Or it's small
enough to email, but maybe not to the list.

With the deletions, and compressed with the unix util 7z, it's 2.7M.

The xfs_metadump options were 'xfs_metadump -o -w -g /dev/Home/Home metadump.out'









_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* utf-8' chars from Winxp machine may be problem related (was Re: xfs file system in process of becoming corrupt; though xfs_repair...)
  2010-06-30 21:01             ` Stan Hoeppner
@ 2010-07-07 21:40               ` Linda A. Walsh
  2010-07-07 23:40                 ` Stan Hoeppner
  0 siblings, 1 reply; 29+ messages in thread
From: Linda A. Walsh @ 2010-07-07 21:40 UTC (permalink / raw)
  To: Stan Hoeppner, xfs-oss



Stan Hoeppner wrote:
> 
> It is interesting that all of these "corrupt" files originate from Japan.  The
> filenames have likely passed through many different character encodings on
> their journey from their source to your XFS filesystems.  And they probably
> originated on a MS Windows machine.
----
	Actually, while the Japanese comment is true for most, it's not true
for all -- some have the 'copyright' or 'trademark' symbol in them.

> How can you be so certain that there isn't a wonky bit somewhere that's
> wreaking havoc with BabelMap? 
----
	Sorry, I didn't mean to come across with certainty that everything was
'unwonky'.  Just that normally these files work correctly -- I have good 
compatibility across all my tools:

1) logged into linux with a tty (secureCRT that supports unicode), and use
   "ls" to view them in bash (or use "echo *" in bash)
2) use the tty version of Vim in that tty window
3) the 'X' version of Vim (displayed through cygwin's X server, 
   which also handles unicode), 
4) over the net using Samba on the linux server, in windows7
5) editing the files on Win7 using 'Gvim'

The "broken" files don't work anywhere.  And it is not name or character
specific.  I had 3-4 occurrences of 2-3 names broken in 4 copies of 1 directory,
but I also had 2 other copies of that directory that were 'fine' -- same
names, same characters -- some corrupt, some not.


> Y access cycles flips a bit, changes a character, or something along these
> lines?  Did you update this program recently, or any other programs that might
> affect character encoding/displaying, or anything remotely related to such?
> Have you done any software updates recently, period?
----
	Have been forced to do file system copies (which I did with
an "xfsdump|mbuffer|xfsrestore" pipe running in the background).  It was
there I really began to notice a pattern of problems, though some
nightly backups were giving errors as far back as a few weeks ago -- with my
first NOTICING it (I'm often not attentive to automatic processes that
have been working fine for months or years) a few weeks ago, or shortly
after upgrading to 2.6.34.  Due to an upgrade to SuSE 11.2 about ... 6-8
months back, my normal logs were lost as it changed, **AGAIN**, the system
logger (first from syslog to syslog-ng, which was a good thing, but now
from syslog-ng to rsyslog -- a step backwards in flexibility), with the
result that all my logfile patterns were no longer used and much logging
was simply thrown away.  After I caught it I switched back to syslog-ng,
and that's when I began noticing multiple oddities in my log files.


> 
> Given the entirety of what we're looking at, and that you're apparently not
> seeing this with files created in a native English language encoding, I'd say
> Dave is probably on the right track here.
---
	Yeah...something to do with character encoding...I'd agree there.
But not just foreign names -- just "utf-8": some English names but with special
symbols:
Favorites/Cannabis, EO's & Plant info sources/Plant, Tree sources/The Online Nursery » buckeye tree.URL
Favorites/Cannabis, EO's & Plant info sources/Plant, Tree sources/The Online Nursery » Black Walnut.URL
Favorites/Hw/Intel® Xeon® Processor Numbers.URL
Favorites/Hw/Intel® 5000X Chipset Overview.URL
Favorites/Hw/Computer(s), peripherals, parts/Intel CPU and chips.../Intel® Xeon® Processor 5000 Sequence - Technical Documents.URL
Favorites/Hw/Computer(s), peripherals, parts/Intel CPU and chips.../Intel® Xeon® Processor Numbers.URL
Favorites/Hw/Computer(s), peripherals, parts/Intel CPU and chips.../Intel® Core™ Microarchitecture.URL
Favorites/Hw/Computer(s), peripherals, parts/Intel CPU and chips.../How to Choose between Hardware and Software Prefetch on 32-Bit Intel® Architecture - Intel® Software Network.URL
Favorites/Hw/Computer(s), peripherals, parts/Intel CPU and chips.../Preparing Applications for Intel® Core™ Microarchitecture.URL
Favorites/Microsoft/JSI, INC. - Your Windows Server 2003 - Windows NT - Windows 2000 - Windows XP ® Resource.URL
Favorites/Web Technologies/Ajaxian » Behold the, um, Beholder!.URL
Favorites/Web Technologies/mezzoblue § css Zen Garden Resources.URL
hw/misc+interest/Freedom to Tinker » Blog Archive » Making and Breaking HDCP Handshakes_files
hw/misc+interest/Freedom to Tinker » Blog Archive » Making and Breaking HDCP Handshakes.htm
Receipts_n_inf_etc/WinZip® Order Confirmation-v14-2009.pdf

-----
So the (R), the right angle quote "»", the section mark "§"...

All of the above are filenames that can't be accessed, among several French, Spanish, Greek
and Japanese filenames.
The French/Spanish ones are from Adobe documentation.

Even the proper Knuth spelling of "LaTeX" with the lowered 'E'...etc.

So yup...foreign char delight.

I can easily imagine most or all of these having been imported from my winXP machine at
one point -- as I only recently started using Win7 -- and many of the troublesome japanese
filenames were 'downloaded japanese anime-related stuff' that I did on my old
XP machine -- which I used as a download client while I did work on my Win7 machine...
That gave a huge influx of foreign names from a WinXP machine.  That could be what
made the problem jump out so noticeably -- before, it was only maybe 20-30 files out of
about a million or more.  But in the new batch it was hundreds out of several thousand, so
they stand out a lot more.



--------------------

That said -- and I note

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: rsync and corrupt inodes (was xfs_dump problem)
  2010-06-30 23:30       ` rsync and corrupt inodes (was xfs_dump problem) Dave Chinner
  2010-07-01  8:25         ` Michael Monnerie
@ 2010-07-07 21:56         ` Linda Walsh
  1 sibling, 0 replies; 29+ messages in thread
From: Linda Walsh @ 2010-07-07 21:56 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Michael Monnerie, xfs

Dave Chinner wrote:
> On Wed, Jun 30, 2010 at 08:25:20PM +0200, Michael Monnerie wrote:
>> From another Linux ("saturn"), I do an rsync via an rsync-module, 
...
> Are you running rsync locally on saturn
----

Data point:

I have two file systems with corruption -- one recent one had a WinXP
client (as well as a Win7 client and a linux host) dumping foreign names into
the file system (the Win clients through samba).

However, my "home partition" used to regularly be used to back up the
WinXP machine's home dir via **rsync**.


Some of those files were _un_readable on WinXP (like ones containing copyright
symbols / registered trademark symbols, etc.) -- but at the time, they
WERE readable on linux.  Now some of those same files are no longer
readable on linux.  Just thought I'd mention that on my 'home' partition,
the corruption is in files that were transferred with rsync.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: utf-8' chars from Winxp machine may be problem related (was Re: xfs file system in process of becoming corrupt; though xfs_repair...)
  2010-07-07 21:40               ` utf-8' chars from Winxp machine may be problem related (was Re: xfs file system in process of becoming corrupt; though xfs_repair...) Linda A. Walsh
@ 2010-07-07 23:40                 ` Stan Hoeppner
  2010-07-08  0:38                   ` Linda A. Walsh
  0 siblings, 1 reply; 29+ messages in thread
From: Stan Hoeppner @ 2010-07-07 23:40 UTC (permalink / raw)
  To: xfs

Linda A. Walsh put forth on 7/7/2010 4:40 PM:

> So yup...foreign char delight.
> 
> I can easily image most or all of these having been imported from my
> winXP machine at
> one point -- as I only recently started using Win7.  -- and many of the
> troublsome japanese
> filenames were 'downloaded japanese anime-related stuff' that I did on
> my old
> XP machine -- which I used as a download client while I did work on my
> Win7 machine...
> That gave a huge influx of foreign names from a WinXP machine.  That
> could be what
> made the problem jump out so noticeable -- before it was only maybe
> 20-30 files out of about a million or more.  But in the new batch it was
> hundreds out of several thousand, so
> they stand out alot more.

I run into a similar problem frequently when saving downloads to my XFS samba
shares via a Windows client.  I don't recall ever seeing XFS corruption, but I
do have problems manipulating these file names through bash.  Quite often I
end up having to rename the files through Windows Explorer to something bash
can handle.  That usually fixes the problem--not always, but usually.

Just for comparison, I'm running:

Debian stable (Lenny 5.0.4)
rolled 2.6.32.9 from kernel.org, XFS in kernel not as module
xfsprogs:  2.9.8-1lenny1
bash:      3.2-4
samba:     3.2.5-4lenny9
W2K and XP Pro clients

As I've never seriously dealt with character encoding issues (i.e. changed
anything related in Debian), I don't even know where/how to find my server's
default character encoding.  Google isn't being very friendly here.  I'm using
whatever character encoding is the default for US English Debian Lenny.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: utf-8' chars from Winxp machine may be problem related (was Re: xfs file system in process of becoming corrupt; though xfs_repair...)
  2010-07-07 23:40                 ` Stan Hoeppner
@ 2010-07-08  0:38                   ` Linda A. Walsh
  0 siblings, 0 replies; 29+ messages in thread
From: Linda A. Walsh @ 2010-07-08  0:38 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs



Stan Hoeppner wrote:
> I run into a similar problem frequently when saving downloads to my XFS samba
> shares via a Windows client.  I don't recall ever seeing XFS corruption,
----
	Just a note:

	Corrupt isn't exactly the right word -- as it doesn't spread to the rest
of the file system as real corruption likely would ...   It seems to be limited
to these filenames -- where you can't remove them, or 'address' them, or rename
them -- and their information in the owner and date fields shows up as question
marks.  It could be that once I retire any of these disks that contain these
files, the problem will stay on them.


> but I
> do have problems manipulating these file names through bash.  Quite often I
> end up having to rename the files through Windows Explorer to something bash
> can handle.  That usually fixes the problem--not always, but usually.
----
	These files are not visible to explorer -- I tried/looked.

I'm running SuSE -- and it defaults to UTF-8, which is compatible with Win7
speaking UTF-16.  It's just my older WinXP clients that have problems emitting
characters that become incomprehensible.


> As I've never seriously dealt with character encoding issues (i.e. changed
> anything related in Debian), I don't even know where/how to find my servers
> default character encoding.  Google isn't being very friendly here.  I'm using
> whatever character encoding is the default for US English Debian Lenny.

I started dealing with it some number of years ago due to having music from foreign lands -- and
it's only gotten worse from there :-)...now half my music and video collection has
foreign chars in it.

But even in english, there are unicode display values for colon, slash and
backslash, so you can use those usually forbidden characters in filenames -- since they
are different characters.  They do look slightly different, so it's not confusing, and
it's better than putting in some substitute.

Lots of places a colon is called for in music and movie titles where you can use the
full width colon -- don't even need spaces around it:see? (this is where I use Babelmap
a lot!...when I want to insert my own...).  You can find the reverse of a character in
Firefox with the character identifier extension.  Something I didn't know -- if you have
the Japanese characters, you can use the extension -- and what they call 'Romaji' --
the anglicized version of Japanese -- is just the names of the characters -- which
you can read out in the character identifier.  Interesting but slow for translating.  At
least you can sound out the words that way...:-).
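
For instance, a throwaway sketch (the filename is made up) showing that
the full-width colon is just an ordinary filename character on linux:

/* The full-width colon is U+FF1A, UTF-8 bytes ef bc 9a.  A linux
 * filename can contain it freely, so it can stand in for an ASCII ':'
 * in places where the real colon would be rejected (e.g. by Windows
 * clients on a samba share). */
#include <stdio.h>

int main(void)
{
	const char *name = "Movie\xef\xbc\x9aDirector's Cut.mkv"; /* "Movie：Director's Cut.mkv" */
	FILE *f = fopen(name, "w");

	if (!f) {
		perror(name);
		return 1;
	}
	fclose(f);
	return 0;
}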

Read the book "Fonts and encodings", and you'll get hooked, I guarantee it!
(google it, it's the first result).




> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: rsync and corrupt inodes (was xfs_dump problem)
  2010-07-04 22:53               ` Dave Chinner
@ 2010-07-12 11:28                 ` Michael Monnerie
  0 siblings, 0 replies; 29+ messages in thread
From: Michael Monnerie @ 2010-07-12 11:28 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1.1: Type: Text/Plain, Size: 1452 bytes --]

Been on holidays, so I didn't answer till now...

On Monday, 5 July 2010 Dave Chinner wrote:
> > So far, so good. I'm on 2.6.34 now. Is there any chance for a fixed
> > version of xfs_repair, so that I can either get rid of the 4 broken
> > files (i.e. delete them), or repair the filesystem? ATM, xfs_repair
> > asserts on this filesystem.
> 
> What version of xfs_repair? v3.1.2 does not assert fail here on the
> metadump image you posted, but it does take 3 runs to fix up all the
> problems with the busted inodes....

Do you mean this one?
http://zmi.at/saturn_bigdata.metadump.only_broken.bz2 (197 MB)

I have xfs_repair 3.1.2, and made a shell script which runs
xfs_repair on that image 10 times; I attached the output here. It doesn't
seem to repair anything, it just crashes.

Maybe I did something wrong? I configured xfsprogs 3.1.2 with
CFLAGS=-march=athlon64-sse3 ./configure --prefix=/usr
and then 
make;make install

I recompiled the whole thing now with
# gcc --version
gcc (SUSE Linux) 4.4.1 [gcc-4_4-branch revision 150839]

and it's the same output as ever. Either you meant another metadump, or 
there is a problem somewhere I don't see.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/

[-- Attachment #1.1.2: repair.log --]
[-- Type: text/x-log, Size: 15828 bytes --]

xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
data fork in inode 649642 claims metadata block 537266460
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 649642 claims metadata block 537266460
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
data fork in inode 649642 claims metadata block 537266460
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
data fork in inode 2195133988 claims metadata block 537122652
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.
xfs_repair version 3.1.2
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
corrected attribute entry count in inode 649642, was 40, now 0
problem with attribute contents in inode 649642
local inode 649790 attr too small (size = 1, min size = 4)
bad attribute fork in inode 649790, clearing attr fork
clearing inode 649790 attributes
cleared inode 649790
        - agno = 1
local inode 2195133988 attr too small (size = 3, min size = 4)
bad attribute fork in inode 2195133988, clearing attr fork
clearing inode 2195133988 attributes
cleared inode 2195133988
corrected attribute entry count in inode 2902971474, was 163, now 0
corrected attribute entry totsize in inode 2902971474, was 6, now 4
problem with attribute contents in inode 2902971474
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 4
        - agno = 5
data fork in inode 2195133988 claims metadata block 537122652
        - agno = 6
        - agno = 7
xfs_repair: dinode.c:2101: process_inode_data_fork: Assertion `err == 0' failed.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2010-07-12 11:25 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-06-27  1:10 WARNING xfsdump [still] Cannot allocate memory for list of [root|non-root] attributes for nondir ino xxyz Linda A. Walsh
2010-06-28  2:27 ` Dave Chinner
2010-06-29 22:33   ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Linda Walsh
2010-06-29 23:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
2010-06-29 23:55       ` Michael Weissenbacher
2010-06-30  0:42         ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ " Linda A. Walsh
2010-06-30  1:16           ` Dave Chinner
2010-06-30  2:45             ` Linda A. Walsh
2010-07-01 23:58               ` Dave Chinner
2010-07-07  3:18                 ` Linda A. Walsh
2010-07-07  5:56                   ` Linda Walsh
2010-07-07  6:36                     ` Dave Chinner
2010-07-07  9:30                       ` Linda A. Walsh
2010-07-07 21:01                         ` Linda Walsh
2010-06-30  0:01       ` Linda A. Walsh
2010-06-30  1:06         ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ;-/ " Dave Chinner
2010-06-30  1:52           ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ " Linda A. Walsh
2010-06-30 21:01             ` Stan Hoeppner
2010-07-07 21:40               ` utf-8' chars from Winxp machine may be problem related (was Re: xfs file system in process of becoming corrupt; though xfs_repair...) Linda A. Walsh
2010-07-07 23:40                 ` Stan Hoeppner
2010-07-08  0:38                   ` Linda A. Walsh
2010-06-30 18:25     ` xfs file system in process of becoming corrupt; though xfs_repair thinks it's fine! ; -/ (was xfs_dump problem...) Michael Monnerie
2010-06-30 23:30       ` rsync and corrupt inodes (was xfs_dump problem) Dave Chinner
2010-07-01  8:25         ` Michael Monnerie
2010-07-02  2:42           ` Dave Chinner
2010-07-02  6:21             ` Michael Monnerie
2010-07-04 22:53               ` Dave Chinner
2010-07-12 11:28                 ` Michael Monnerie
2010-07-07 21:56         ` Linda Walsh
